Complement. The complement of L is Σ* \ L, the set of all words over Σ which are not contained in L.
Concatenation. Because we can concatenate words we can use the concatenation operation to define new languages from existing ones by extending this operation to apply to languages as follows. Let L1 and L2 be languages over some alphabet Σ. Then

    L1 · L2 = {s · t | s ∈ L1 and t ∈ L2}.
n-ary Concatenation. If we apply concatenation to the same language by forming L · L there is no reason to stop after just one concatenation. For an arbitrary language L we define

    Lⁿ = {s1 s2 · · · sn | si ∈ L for all 1 ≤ i ≤ n}.
We look at the special case

    L⁰ = {s1 · · · sn | si ∈ L for all 1 ≤ i ≤ n, n = 0} = {ε},

since ε is what we get when concatenating 0 times.
Kleene star. Sometimes we want to allow any (finite) number of concatenations of words from some language. The operation that does this is known as the Kleene star and written as (−)*. The definition is:

    L* = {s1 s2 · · · sn | n ∈ N, s1, s2, . . . , sn ∈ L} = ⋃n∈N Lⁿ.
Note that ∅* = ⋃n∈N ∅ⁿ = {ε} = L⁰ for all L, which may be a bit unexpected.
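These operations are easy to experiment with on finite languages. The following sketch (a Python illustration of mine, not part of the notes; the Kleene star is necessarily truncated to a bounded number of factors) mirrors the three definitions:

    def concat(l1, l2):
        # L1 . L2 = { s t | s in L1 and t in L2 }
        return {s + t for s in l1 for t in l2}

    def power(l, n):
        # L^n, with L^0 = {""}, the language containing only the empty word
        result = {""}
        for _ in range(n):
            result = concat(result, l)
        return result

    def star(l, max_parts):
        # Truncated Kleene star: the union of L^0, L^1, ..., L^max_parts
        result = set()
        for n in range(max_parts + 1):
            result |= power(l, n)
        return result

    print(concat({"a", "aaa"}, {"bb", "bbb"}))  # e.g. {a, aaa} . {bb, bbb}
    print(star(set(), 3))                       # {''}: matches the fact that ∅* = {ε}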
Exercise 1. To gain a better understanding of the operation of concatenation on languages, carry out the following tasks.

(a) Calculate {a, aaa, aaaaaa} · {bb, bbb}.
(b) Calculate {ε, a, aa} · {aa, aaa}.
(c) Calculate {a, a³, a⁶} · {b⁰, b², b³}.
(d) Describe the language {0ᵐ1ⁿ | m, n ∈ N} as the concatenation of two languages.
(e) Calculate {0, 01, 001}².
Exercise 2. To gain a better understanding of the Kleene star operation carry out the following tasks. (You may use any way you like to describe the infinite languages involved: Just using plain English is the easiest option. I would prefer not to see . . . in the description.)

(a) Calculate {a, b}*, {a}*, {aa}* ∪ {aaaa}* and the complement of {a}*.

(c) Describe the set of all words built from a given alphabet Σ using the Kleene star operation.
2.3 Describing languages through patterns

By definition a language is a set. While sets are something mathematicians can deal with very well, and something that computer scientists have to be able to cope with, computers aren't at all well suited to making sense of expressions such as

    {(01)ⁿ | n ∈ N}.

There are better ways of showing a computer what all the words are that we mean in a particular case. We have already seen that we can express the language L in a different way, namely as

    {01}*.
If there is only one word in a language, here {01}, we might as well leave out the curly brackets and write (01)*. Consider instead the language of all words that consist entirely of 0s or entirely of 1s. Neither 0*1* nor (0*1)* will do, because both of these include words that contain 0 as well as 1, whereas any word in our target language consists entirely of 0s or entirely of 1s. We need to have a way of saying that either of two possibilities might hold. For this, we use the symbol |. Then we can use 0*|1* for this language. Patterns built from these operations may be combined freely, as in (a|b)* or (a|b)*(ab|b|a).
Exercise 4. Describe all the words matching the following patterns. For finite languages just list the elements, for infinite ones, try to describe the words in question using English. If you want to practise using set-theoretic notation, add a description in that format.

(a) (0|1|2)*
(b) (0|1)(0|2)2
(c) (01|10)*
(d) 0*1,
(e) (01)*1,
(f) 0*,
(g) (010)*,
(h) (01)*,
(i) (01)*(01)*.
(j) (0|1)*
(k) 0*|1*|2*
(l) 0|1*|2*
What should the computer do if the string is empty?
3
What if there isnt a next symbol?
2.4 Regular expressions

So far we have been using the idea of a pattern intuitively; we have not said how exactly we can form patterns, nor have we properly defined when a word matches a pattern. It is time to become rigorous about these issues.

For reasons of completeness we will need two patterns which seem a bit weird, namely ε (the pattern which is matched precisely by the empty word ε) and ∅ (the pattern which is matched by no word at all).
Definition 1. Let Σ be an alphabet. A pattern or regular expression over Σ is any word over

    Σpat = Σ ∪ {ε, ∅, |, *, (, )}

generated by the following inductive definition.

Empty pattern The character ∅ is a pattern;
Empty word the character ε is a pattern;
Letters every letter from Σ is a pattern;
Concatenation if p1 and p2 are patterns then so is (p1p2);
Alternative if p1 and p2 are patterns then so is (p1|p2);
Kleene star if p is a pattern then so is (p*).
In other words we have defined a language⁴, namely the language of all regular expressions, or patterns. Note that while we are interested in words over the alphabet Σ we need additional symbols to create our patterns. That is why we have to extend the alphabet to Σpat.

In practice we will often leave out some of the brackets that appear in the formal definition, but only those brackets that can be uniquely reconstructed. Otherwise we would have to write ((0|1)*) rather than (0|1)*.

This is an example of a recursive definition, and you will see more of these in COMP10020.⁵
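The inductive definition above can be mirrored directly in code. The following sketch (an illustration of mine, with invented class names, assuming Python 3.10 or later) introduces one constructor per clause; it is reused in the sketches below.

    from dataclasses import dataclass

    # One class per clause of the inductive definition of patterns.
    @dataclass
    class Empty:        # the pattern ∅
        pass

    @dataclass
    class Epsilon:      # the pattern ε
        pass

    @dataclass
    class Letter:       # a single letter from the alphabet
        char: str

    @dataclass
    class Concat:       # (p1 p2)
        left: "Pattern"
        right: "Pattern"

    @dataclass
    class Alt:          # (p1 | p2)
        left: "Pattern"
        right: "Pattern"

    @dataclass
    class Star:         # (p*)
        body: "Pattern"

    Pattern = Empty | Epsilon | Letter | Concat | Alt | Star

    # The pattern (0|1)* is built as:
    zero_or_one_star = Star(Alt(Letter("0"), Letter("1")))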
Note that in the definition of a pattern no meaning is attached to any of the operators, or even the symbols that appear in the base cases. We only find out what the intended meaning of these is when we define when a word matches a pattern. We do this by giving a recursive definition as outlined above.
Definition 2. Let p be a pattern over an alphabet Σ and let s be a word over Σ. We say that s matches p if one of the following cases holds:

Empty word The empty word ε matches the pattern ε;
Base case the pattern p = x for a character x from Σ and s = x;
Concatenation the pattern p is a concatenation p = (p1p2) and there are words s1 and s2 such that s1 matches p1, s2 matches p2 and s is the concatenation of s1 and s2;
Alternative the pattern p is an alternative p = (p1|p2) and s matches p1 or p2 (it is allowed to match both);
Kleene star the pattern p is of the form p = (q*) and s can be written as a finite concatenation s = s1 s2 · · · sn such that s1, s2, . . . , sn all match q; this includes the case where s is empty (and thus an empty concatenation, with n = 0).
Note that there is no word matching the pattern ∅.
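Definition 2 translates almost clause by clause into a recursive matching procedure. The following sketch (again illustrative, building on the classes above) decides whether a word matches a pattern by trying all ways of splitting the word, exactly as the Concatenation and Kleene star clauses describe; it aims at clarity, not efficiency.

    def matches(s: str, p) -> bool:
        # Mirrors the case split of Definition 2.
        if isinstance(p, Empty):          # no word matches ∅
            return False
        if isinstance(p, Epsilon):        # only the empty word matches ε
            return s == ""
        if isinstance(p, Letter):         # base case: s = x
            return s == p.char
        if isinstance(p, Concat):         # s = s1 s2, s1 matches p1, s2 matches p2
            return any(matches(s[:i], p.left) and matches(s[i:], p.right)
                       for i in range(len(s) + 1))
        if isinstance(p, Alt):            # s matches p1 or p2
            return matches(s, p.left) or matches(s, p.right)
        if isinstance(p, Star):           # s = s1 ... sn, n >= 0, each si matches q
            if s == "":
                return True               # the empty concatenation, n = 0
            # split off a non-empty s1 matching q, then recurse on the rest
            return any(matches(s[:i], p.body) and matches(s[i:], p)
                       for i in range(1, len(s) + 1))
        raise TypeError("not a pattern")

    print(matches("0110", zero_or_one_star))  # True: 0110 matches (0|1)*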
Exercise 5. Calculate all the words matching the patterns ∅ and a (for a ∈ Σ) respectively.
Exercise 6. For the given pattern, and the given word, employ the recursive Definition 2 to demonstrate that the word does indeed match the pattern:

(a) the pattern (ab)*|1*.

We write L(p) for the language defined by a pattern p, that is,

    L(p) = {s ∈ Σ* | s matches p}.
Note that different patterns may define the same language, for example

    L(0*) = L(ε|00*).
Note that we can also define the language given by a pattern in a different way, namely recursively.

    L(∅) = ∅;
    L(ε) = {ε};
    L(x) = {x} for x ∈ Σ;

and for the operations

    L(p1p2) = L(p1) · L(p2);
    L(p1|p2) = L(p1) ∪ L(p2);
    L(p*) = (L(p))*.
We can use this description to calculate the language defined by a pattern as in the following example.

    L((0|1)*) = (L(0|1))*
              = (L(0) ∪ L(1))*
              = ({0} ∪ {1})*
              = {0, 1}*
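The recursive description of L(p) also suggests a brute-force way of computing a finite window onto L(p): enumerate all words up to some length and keep those that match. A sketch, reusing the matcher above:

    from itertools import product

    def language(p, alphabet: str, max_len: int):
        # The finite part of L(p): all matching words of length <= max_len.
        result = set()
        for n in range(max_len + 1):
            for letters in product(alphabet, repeat=n):
                word = "".join(letters)
                if matches(word, p):
                    result.add(word)
        return result

    print(sorted(language(zero_or_one_star, "01", 2)))
    # ['', '0', '00', '01', '1', '10', '11'] — the start of {0, 1}*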
Exercise 8. Calculate the language defined by each of the following patterns:

(a) (0*|1*)
(b) (01)*0*
(c) (00)*
(d) ((0|1)(0|1))*.
Can you describe these languages in English?
In order to tell a computer that we want it to look for words belonging to a particular language we have to find a pattern that describes it. The following exercise lets you practise this skill.

Exercise 9. Find⁷ a regular expression p over the alphabet {0, 1} such that the language defined by p is the one given. Hint: For some of the exercises it may help not to think of a pattern that is somehow formed like the strings you want to capture, but to realize that as long as there's one way of matching the pattern for such a string, that's good enough.
(a) All words that begin with 0 and end with 1.
(b) All words that contain at least two 0s.
(c) All words that contain at least one 0 and at least one 1.
(d) All words that have length at least 2 and whose last but one symbol
is 0.
(e) All words which contain the string 11, that is, two consecutive 1s.
(f) All words whose length is at least 3.
(g) All words whose length is at most 4.
(h) All words that start with 0 and have odd length.
(i) All words for which every letter at an even position is 0.
(j) All words that aren't equal to the empty word.
(k) All words that contain at least two 0s and at most one 1.
(l) All words that aren't equal to 11 or 111.
(m) All words that contain an even number of 0s.
(n) All words whose number of 0s is divisible by 3.
(o) All words that do not contain the string 11.
(p) All words that do not contain the string 10.
(q) All words that do not contain the string 101.
(r)* All words that contain an even number of 0s and whose number of 1s
is divisible by 3.
⁷ If you find the last few of these really hard then skip them for now. The tools of the next chapter should help you finish them.
Exercise 10. Find a regular expression p over the alphabet {a, b, c} such that the language defined by p is the one given.

(a) All the words that don't contain the letter c.
(b) All the words where every a is immediately followed by b.
(c) All the words that do not contain the string ab.
(d) All the words that do not contain the string aba.
2.7 Regular languages

We have therefore defined a certain category of languages, namely those that can be defined using a regular expression. It turns out that these languages are quite important so we give them a name.

Definition 4. A language L is regular if it is the set of all words matching some regular expression, that is, if there is a pattern p such that L = L(p).

Regular languages are not rare, and there are ways of building them. Nonetheless in Chapter 3 we see that not all languages of interest are regular.
Proposition 2.1. Assume that L, L1, and L2 are regular languages over an alphabet Σ. Then the following also are regular languages.

(a) L1 ∪ L2
(b) L1 · L2
(c) Lⁿ
(d) L*
Production rules are of the form R → X, where R is a non-terminal symbol and X is an arbitrary string of terminal and non-terminal symbols, which can be read as "R can be replaced by X".¹
In the example above, the object alphabet is

    {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, +, -, /, *, (, )}.

The non-terminal symbols are B and S.
Why do we call grammars whose production rules satisfy these conditions context-free? It is because the rules tell us that we may replace any occurrence of a non-terminal as directed. A context-sensitive grammar would be able to have rules such as "You may replace the non-terminal A by the string (A) provided that there is an S immediately to the left of A", so a production rule might look like this: SA → S(A).
In general, what context-sensitivity means is that according to their production rules strings consisting of terminal and non-terminal symbols can be replaced by other such strings. The advantage of context-freeness is that we know that once a terminal character is in place, it will remain in the word even if further production rules are applied. Further, we don't have to worry about having to choose between different production rules taking us in different directions to the same degree.
Assume we had rules

    AB → A and AB → B.

Given AB occurring in some string we would then have to make a choice whether to keep the A or the B. For context-free grammars, if we have two distinct non-terminal symbols (such as A and B here) we can apply any one rule for A, and any one rule for B, and it does not matter which order we do that in.
If we allowed context-sensitive rules then our grammars would be much more complicated, and while it would be possible to express more complicated features it would be much harder to see, for example, which language a grammar generates. Natural languages certainly require context-sensitive grammars, but when it comes to programming we can go a very long way without having to use this idea.

The following definition says how we may generate strings from a grammar, and how a grammar defines a language.
Definition 6. A string X ∈ (Σ ∪ Ξ)* is generated by a grammar G if there is a sequence of strings

    S = X0 → X1 → · · · → Xn = X

such that each step Xi → Xi+1 is obtained by the application of one of G's production rules to a non-terminal occurring in Xi as follows.

Let R occur in Xi and assume that there is a production rule R → Y. Then Xi+1 is obtained from Xi by replacing one occurrence of R in Xi by the string Y.

The language generated by a grammar G is the set of all strings over Σ which are generated by G.²

¹ This assumes that we have a special symbol, namely →, which should not be contained in either Σ or Ξ to avoid confusion.
² Note that such a string may not contain any non-terminal symbols.
Let us look at another example. Let us assume that we want to design a grammar for the language of all strings over the alphabet {0, 1} that are non-empty and start with 0. How do we go about it? Well, we know what our underlying alphabet of terminal symbols is going to be. How about non-terminal ones? This is harder to decide a priori. We know that we need at least one of these, namely the start symbol S. Whether or not further non-terminal symbols are required depends on what it is that we have to remember when creating a word (just as the number of states required for a finite automaton depends on what we need to remember about a word³).

How do we make sure that our word starts with 0? Well, if we have a production rule for S that says S → 0 · · · then we certainly create 0 as the left-most symbol, and this will stay the left-most symbol. But what should the dots be?

If we try S → 0S then we do get a problem: Eventually we have to allow 1s into our string, but if we have a rule that says S → 1S then we could generate words whose first letter is 1. An obvious solution to this problem is to use a new non-terminal symbol, say A. Then we can have a rule

    S → 0A,

and all we now have to do is to allow A to create whatever letters it likes, one at a time, or to vanish into the empty string. Hence we want rules

    A → 0A | 1A | ε.
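One way to convince yourself that these rules do the job is to let a program perform random derivations. The sketch below (illustrative; the dictionary representation is my own) repeatedly replaces the left-most non-terminal according to a randomly chosen rule:

    import random

    # The grammar just constructed: S -> 0A,  A -> 0A | 1A | ε
    rules = {
        "S": ["0A"],
        "A": ["0A", "1A", ""],   # "" stands for the empty word ε
    }

    def derive(start="S", max_steps=50):
        string = start
        for _ in range(max_steps):
            # find the left-most non-terminal, if any
            pos = next((i for i, c in enumerate(string) if c in rules), None)
            if pos is None:
                return string              # only terminal symbols remain
            # replace that occurrence according to a randomly chosen rule
            string = string[:pos] + random.choice(rules[string[pos]]) + string[pos + 1:]
        return derive(start, max_steps)    # retry if the derivation ran too long

    print([derive() for _ in range(5)])    # every word is non-empty and starts with 0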
Exercise 13. Can you create a CFG for the same language using just one non-terminal symbol? Hint: If you find this tough now then come back to it after you have had more practice.

Exercise 14. This exercise asks you to create CFGs for various languages over the alphabet {0, 1}.

(a) All words that contain two consecutive 1s. Now give a derivation for the word 0111. Can you create a grammar for this language with just one non-terminal symbol?
(b) All words that contain at least two 1s. Now give a derivation for 01010.
(c) All words whose length is precisely 3.
(d) All words whose length is at most 3.
(e) All words whose length is at least 3.
(f) All words whose length is at least 2 and whose last but one symbol is 0. Now give a derivation for 0101.
(g) All words for which every letter at an even position is 0. Give a derivation of 101, and of ε.
(h) All words consisting of an arbitrary number of 0s followed by the same number of 1s. Try to explain how to generate a word of the form 0ⁿ1ⁿ where n ∈ N.
(i) All words that contain an even number of 0s. Make sure that your language can generate strings such as 001010 and 0001001000.
³ However, in a finite state machine we only get to scan the word from left to right; when creating a derivation we may be working on several places at the same time.
(j) All words that do not contain the string 01.

Which ones of those are easy to do, which ones harder? Look at other languages from Exercises 9 and 10. Which ones of those, do you think, would be particularly hard?

Part (h) of this exercise already indicates that we can express languages using CFGs that we cannot express using finite automata or regular expressions.
We give another example. Assume we want to generate the set of all words of the language

    {s1ⁿ | s ∈ {a, b, c}*}.

For every transition q′ = δ(q, x) we introduce a production rule

    q → xq′.

For every accepting state q ∈ F we add a production rule q → ε.
We recall the following automaton from Section 4 where we have changed the names of the states.

    [Diagram: states S (start, accepting) and A, with an edge labelled 0 from each to the other and a loop labelled 1 on each.]

We can now use non-terminal symbols S and A as well as terminal symbols 0 and 1 for a CFG with the following production rules:

    S → ε | 1S | 0A
    A → 1A | 0S

This grammar will generate the language of those words consisting of 0s and 1s which contain an even number of 0s.
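This construction is entirely mechanical, so it can be automated in a few lines. A sketch (with an invented representation: the transition function as a dictionary, states doubling as non-terminals):

    def dfa_to_grammar(delta, accepting):
        # delta maps (state, letter) -> state; each state becomes a non-terminal.
        rules = {}
        for (q, x), q2 in delta.items():
            rules.setdefault(q, []).append(x + q2)   # the rule q -> x q'
        for q in accepting:
            rules.setdefault(q, []).append("")       # the rule q -> ε
        return rules

    # The even-number-of-0s automaton from above:
    delta = {("S", "0"): "A", ("S", "1"): "S",
             ("A", "0"): "S", ("A", "1"): "A"}
    print(dfa_to_grammar(delta, accepting={"S"}))
    # {'S': ['0A', '1S', ''], 'A': ['0S', '1A']}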
Exercise 18. Use this construction of a context-free grammar to produce grammars for the regular languages defined as follows:

(a) By the pattern (ab|a)*|001*. Hint: as above.

(c) The words that do not contain the string 010. Hint: This language appears in Exercises 10 and 28 so you already have a suitable DFA. Could you have found such a grammar without using this method?
(d) By the pattern ab*|a*.
All production rules in grammars obtained in this way are of the form R → xR′, where x ∈ Σ and R′ ∈ Ξ, or R → x, where⁴ x ∈ Σ, or R → ε. We call grammars where all production rules are of this shape right-linear. Such grammars are particularly simple, and in fact every language generated by a right-linear grammar is regular.
We can do even more for regular languages using CFGs. This is our first truly applied example of a context-free grammar.

There is a context-free grammar for the language of regular expressions over some alphabet Σ. The underlying alphabet of terminal symbols⁵ we require here is

    Σ ∪ {ε, ∅, |, *, (, )}.

The alphabet of non-terminal symbols can be {S}. We use the following production rules:

    S → ∅.
    S → ε.
    S → x, for all x ∈ Σ.
    S → SS for concatenation.
    S → S|S for alternative.
    S → S* for the Kleene star.
Consider now the following grammar over the alphabet {a, b}:

    S → aT | Ta
    T → aT | bT | ε
Hence we know that this grammar is also ambiguous. How can we turn it into an unambiguous one? First of all we have to find out which language is described by the grammar. In this case this is not so difficult: It is the set of all words over the alphabet {a, b} which start with a or end with a (or both). The ambiguity arises from the first rule: The a that is being demanded may either be the first, or the last, letter of the word, and if the word starts and ends with a then there will be two ways of generating it.

⁷ The usual convention that tells us that multiplication should be carried out before addition is there so that we do not have to write so many brackets for an expression while it can still be evaluated in a way that leads to a unique result.
What we have to do to give an unambiguous grammar for the same language is to pick the first or last symbol to have priority, and the other one only being relevant if the chosen one has not been matched. A suitable grammar is as follows:

    S → aT | bU
    T → aT | bT | ε
    U → aU | bU | a

Now we generate any word from left to right, ensuring the grammar is unambiguous. We also remember whether the first symbol generated is a, in which case the remainder of the word can be formed without restrictions, or b, in which case the last symbol has to be a. It turns out that we do not need three non-terminal symbols to do this: Can you find a grammar that uses just two and is still unambiguous?
Exercise 19. For both grammars over the alphabet Σ = {a, b} given below do the following: Show that the grammar is ambiguous by finding a word that has two parse trees, and give both the parse trees. Now try to determine the language generated by the grammar. Finally give an unambiguous grammar for the same language.

(a) Let Ξ = {S, T} with production rules S → TaT and T → aT | bT | ε.
(b) The grammar with Ξ = {S} and S → aS | aSbS | ε.
3.5 A programming language

Defining a programming language is a huge task: First we have to describe how programs can be formed (we have to define the syntax of the language), and then we have to describe how a computer should act when given a particular program (we have to give a semantics).

The syntax of a language tells us how to write programs that are considered well-formed (so if it is a compiled language, for example, the compiler will not give any syntax errors). However, this says nothing about what any program actually means. When you see a fragment of a program you probably have some kind of idea what at least some of the instructions within it are supposed to do, but that is because program designers stick to certain conventions (so that a + b will typically mean adding the content of the variable a to that of the variable b); a computer doesn't know anything about these preconceptions. Therefore in order to fully define a programming language one also has to say what a program actually means. There is a third year course unit that concerns itself with different ways of doing so.

Most real programming languages, such as Java, C, ML, or PHP, typically have very large grammars and we do not have the time here to study such a big example.

We look at a very small programming language called While. At first sight it looks as if there can't possibly be anything interesting to be computed with it: It only knows about boolean data types (such as true, T, here tt, and false, F, here ff), and natural numbers, say from 0 to some number we call maxint. We also need variables which can take on the value of any of our natural numbers. Let's assume for the moment we only have three variables, x, y, and z. The language has three kinds of entities:
arithmetic expressions A,
boolean expressions B, and
statements S.

For each of these we have a production rule in our grammar.

    A → 0 | 1 | · · · | maxint | x | y | z | A + A | A - A | A * A
    B → tt | ff | A = A | A ≤ A | B ∧ B | ¬B
    S → x := A | y := A | z := A | skip | S; S | if B then S else S | while B do S
This may surprise you, but if we add more variables to our language⁸ then it can calculate precisely what, say, Java can calculate for natural numbers and booleans.

We can now look at programs in this language. Assume that the value held in the variable y is 10.

    x := 1; while ¬(y = 0) do (x := 2 * x; y := y - 1)

This program will calculate 2¹⁰ and store it in x. In fact, it will calculate 2ʸ (assuming y holds a natural number).
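To see the computation unfold one can transliterate the program into any mainstream language; a direct Python rendering (purely illustrative) is:

    # x := 1; while ¬(y = 0) do (x := 2 * x; y := y - 1)
    y = 10
    x = 1
    while not (y == 0):
        x = 2 * x
        y = y - 1
    print(x)  # 1024, i.e. 2**10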
Exercise 20. We look at this example in more detail.

(a) Give a derivation for the program above.
(b) Is this grammar unambiguous? If you think it isn't, then give an example and show how this grammar might be made unambiguous; otherwise explain why you think it is.
(c) Give a parse tree for the above program. Can you see how that tells you something about what computation is supposed to be carried out?
3.6 The Backus-Naur form

In the literature (in particular that devoted to programming languages) context-free grammars are often described in a slightly different form to the one used here so far. This is known as the Backus-Naur form (sometimes also described as Backus Normal Form, but that isn't a very accurate name). There is a simple translation process.

Non-terminal symbols don't have to be capitals and are distinguished from other symbols by being in italics, or they are replaced by a descriptive term included in angle brackets ⟨ and ⟩.

The rewriting arrow → is replaced by the symbol ::=.

⁸ When people define grammars for programming languages they typically use abbreviations that allow them to stipulate any variable without listing them all explicitly, but we don't want to introduce more notation at this stage.
We repeat the grammar for the While language from above to illustrate what this looks like.

Here we assume that a ranges over arithmetic expressions, AExp, b ranges over boolean expressions, BExp, and S ranges over statements, Stat.

    a ::= 0 | 1 | · · · | maxint | x | y | z | a + a | a - a | a * a
    b ::= tt | ff | a = a | a ≤ a | b ∧ b | ¬b
    S ::= x := a | y := a | z := a | skip | S; S | if b then S else S | while b do S

In the next example we assume that instead we are using ⟨aexp⟩ to range over AExp, ⟨bexp⟩ over BExp, and ⟨stat⟩ over Stat respectively.

    ⟨aexp⟩ ::= 0 | 1 | · · · | maxint | x | y | z |
               ⟨aexp⟩ + ⟨aexp⟩ | ⟨aexp⟩ - ⟨aexp⟩ | ⟨aexp⟩ * ⟨aexp⟩
    ⟨bexp⟩ ::= tt | ff | ⟨aexp⟩ = ⟨aexp⟩ | ⟨aexp⟩ ≤ ⟨aexp⟩ |
               ⟨bexp⟩ ∧ ⟨bexp⟩ | ¬⟨bexp⟩
    ⟨stat⟩ ::= x := ⟨aexp⟩ | y := ⟨aexp⟩ | z := ⟨aexp⟩ | skip | ⟨stat⟩; ⟨stat⟩ |
               if ⟨bexp⟩ then ⟨stat⟩ else ⟨stat⟩ | while ⟨bexp⟩ do ⟨stat⟩
3.7 Properties of context-free languages

There is a description for building new regular languages from old ones in Section 4. We can use almost any way we like to do so; set-theoretic ones such as unions, intersections and complements, as well as ones using concatenation or the Kleene star; even reversing all the words works. Context-free languages are rather more fiddly to deal with: Not all of these operations work for them.
Concatenation. This does work. We do the following. Assume that we have two grammars with terminal symbols taken from the alphabet Σ, and non-terminal symbols Ξ1 respectively Ξ2. We now take every symbol in Ξ1 and put a subscript 1 onto it, and similarly for every symbol in Ξ2, and the subscript 2. We have now forced those two alphabets to be disjoint, so when we form their union the number of symbols in the union is the sum of the symbols in Ξ1 and Ξ2. We add a new start symbol S to the set of non-terminal symbols. We now take all the production rules from the first grammar, and put subscripts of 1 onto each non-terminal symbol occurring in it, and do the same for the second grammar with the subscript 2. We add one new production rule S → S1S2.
Exercise 21. Use your solution to Exercise 14 (c) to produce a grammar
for the language of all words whose length is at least 6, using this procedure.
Kleene star. This looks as if it should be more complicated than concatenation, but it is not. We merely add two production rules to the grammar (if they aren't already there) for our given language, namely S → SS, and S → ε. If we then wish to generate a word that is the n-fold concatenation of words in our language, where n ∈ N⁺, we merely start by applying the rule S → SS (n - 1) times, giving us n copies of S. For each one of these we can then generate the required word. If we wish to generate the empty word we can do this by applying the second new rule, S → ε.
Exercise 22. Use your solution to Exercise 14 (c) to produce a grammar for
the language of all words whose length is divisible by 3, using this procedure.
Reversal. This is quite easy. Leave the two alphabets of terminal and non-terminal symbols as they are. Now take each production rule, and replace the string on the right by its reverse. So if there is a production rule of the form R → 00R1, replace it by the rule R → 1R00.
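All three constructions are simple syntactic transformations of the rule set. The following sketch (illustrative; a grammar is represented as a dictionary from non-terminals to lists of right-hand sides, each right-hand side a list of symbols) implements them:

    def concat_grammars(g1, g2, start1="S1", start2="S2"):
        # Assumes g1 and g2 already use disjoint non-terminal names
        # (in the notes this is forced by subscripting them 1 and 2).
        g = {**g1, **g2}
        g["S"] = [[start1, start2]]        # the one new rule S -> S1 S2
        return g

    def star_grammar(g, start="S"):
        g = {n: list(alts) for n, alts in g.items()}
        g[start] = g.get(start, []) + [[start, start], []]  # S -> SS and S -> ε
        return g

    def reverse_grammar(g):
        # Replace every right-hand side by its reverse.
        return {n: [list(reversed(rhs)) for rhs in alts] for n, alts in g.items()}

    print(reverse_grammar({"R": [["0", "0", "R", "1"]]}))
    # {'R': [['1', 'R', '0', '0']]}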
Exercise 23. Use this procedure to turn your solution for Exercise 12 into a grammar that generates all words over {0, 1} that start with 1 and end with 0.
Unions. This does work.
Exercise 24. Show that the union of two context-free languages over some
alphabet is context-free.
Intersections. Somewhat surprisingly, this is not an operation that works. The intersection of a context-free language with a regular one is context-free, but examining this issue in more detail goes beyond the scope of this course.

Complements. This also does not work. Just because we have a way of generating all the words in a particular set does not mean we can do the same for all the words not belonging to this set.
3.8 Limitations of context-free languages

Given the idea of context-sensitive grammars mentioned at the start of this section it should not come as a surprise that there are languages which are not context-free. We do not develop the technical material here that allows us to prove this statement. In Section 4 there is an explanation why finite state automata can only do a very limited amount of counting, and context-free grammars can certainly do more than that. However, there are limitations to this as well. The language

    {aⁿbⁿcⁿ | n ∈ N}

over {a, b, c} is not context-free.

There is a formal result, called the Pumping Lemma for context-free languages, that describes some of the possible properties that indicate that a language fails to be context-free.

There are automata which can be used to describe the context-free languages; these are known as pushdown automata. They have a memory device that takes the form of a stack. That makes them fairly complicated to cope with, and that is why they are beyond the scope of this course. Moreover, for these automata, deterministic and non-deterministic versions are not equivalent, making them even more complex to handle.
3.9 Summary

A context-free grammar or CFG is given by two alphabets, one of terminal and one of non-terminal symbols, and by production rules that tell us how to replace non-terminals by strings consisting of both terminal and non-terminal symbols.

We generate words for a CFG by starting with the non-terminal start symbol S. In the word generated so far we then pick one non-terminal symbol and replace it by a string according to one of the production rules for that symbol.

A CFG generates a language made up of all the words consisting entirely of terminal symbols that can be generated using it. The languages that can be generated in this way are known as the context-free languages.

Every regular language is context-free, but there are some context-free languages that are not regular.

Rather than a derivation for a word we can give a parse tree, which is sufficient to tell us how to carry out calculations, for example.

Programming languages are defined using grammars, and a compiler parses a program to check whether it is correctly formed.

We can apply the union, concatenation, Kleene star and reversal operations to context-free languages and that gives us another context-free language.

There are languages that are not context-free.
Chapter 4

How do we come up with patterns?

As you should have noticed by now (if you have done the last two exercises from the previous section) coming up with a pattern that describes a particular language can be quite difficult. One has to develop an intuition about how to think of the words in the desired language, and turn that into a characteristic for which a pattern can be created. While regular expressions give us a format that computers understand well, human beings do much better with other ways of addressing these issues.
4.1 Using pictures

Imagine somebody asked you to check a word of an unknown length to see whether it contains an even number of 0s. The word is produced one letter at a time. How would you go about that?

In all likelihood you would not bother to count the number of 0s, but instead you would want to remember whether you have so far seen an odd, or an even, number of 0s. Like flicking a switch, when you see the first 0, you'd remember 'odd', when you see the second, you'd switch back (since that's the state you'd start with) to 'even', and so forth.

If one wanted to produce a description of this idea it might look like this:

    [Diagram: two states, Even and Odd, with an arrow labelled 0 from each to the other.]
Every time we see 0 we switch from the 'even' state to the 'odd' state and vice versa.

If we wanted to give this as a description to somebody else then maybe we should also say what we are doing when we see a letter other than 0, namely stay in whatever state we're in. Let's assume we are talking about words consisting of 0s and 1s. Also, we'd like to use circles for our states because they look nicer, so we'll abbreviate their names.

    [Diagram: states E and O, with an edge labelled 0 from each to the other and a loop labelled 1 on each.]
So now somebody else using our picture would know what to do if the next letter is 0, and what to do if it is 1. But how would somebody else know where to begin?

    [Diagram: as before, now with a small arrow marking E as the state to start in.]
We give them a little arrow that points at the state one should start in. However, they would still only know whether they finished in the state called E or the one called O, which wouldn't tell them whether this was the desired outcome or not. Hence we mark the state we want to be in when the word comes to an end, and now we do have a complete description of our task.

    [Diagram: as before, with E additionally drawn as a double circle.]
We use a double circle for a state to indicate that if we end up there then the word we looked at satisfied our criteria.

Let's try this idea on a different problem. Let's assume we're interested in whether our word has 0 in every position that is a multiple of three, that is, the third, sixth, ninth, etc. letters, if they exist, are 0.

So we start in some state, and we don't care whether the first letter is 0 or 1, but we must remember that we've seen one letter so that we can tell when we have reached the third letter, so we'd draw something like this:

    [Diagram: states 0 and 1, with an edge labelled 0, 1 from state 0 to state 1.]

Similarly for the second letter.

    [Diagram: states 0, 1 and 2 in a row, with edges labelled 0, 1 from state 0 to state 1 and from state 1 to state 2.]
Now the third. This time something happens: If we see 0, we're still okay, but if we see 1 then we need to reject the word.

    [Diagram: states 0, 1, 2, 3 in a row with edges labelled 0, 1 then 0, 1 then 0, plus a state 4 reached from state 2 by an edge labelled 1.]
So what now? Well, if we reach the state labelled 4 then no matter what we see next, we will never accept the word in question as satisfying our condition. So we simply think of this state as one that means we won't accept a word that ends there. All the other states are fine; if we stop in any of them when reading a word it does satisfy our condition. So we mark all the other states as good ones to end in, and add the transitions that keep us in state 4.

    [Diagram: as before, with states 0, 1, 2 and 3 drawn as double circles and a loop labelled 0, 1 on state 4.]
But what if the word is okay until state 3? Then we have to start all over again, not caring about the next letter or the one after, but requiring the third one to be 0. In other words, we're in the same position as at the start, so the easiest thing to do is not to create state 3, but instead to have that edge go back to state 0.

    [Diagram: states 0, 1, 2 in a cycle (edges labelled 0, 1 from 0 to 1 and from 1 to 2, and an edge labelled 0 from 2 back to 0), all double circles, plus state 4 reached from state 2 by an edge labelled 1, with a loop labelled 0, 1.]
Exercise 25. For the languages described in parts (b) and (c) of Exercise 9,
draw a picture as in the examples just given.
4.2 Following a word

Consider the following picture.

    [Diagram: an automaton over {a, b} with a start state, one double-ringed accepting state, and transitions labelled a, b and a, b; the word abaa is followed through it below.]
What happens when we try to follow a word, say abaa?

To begin with we are in the start state.

We see a, so we follow the edge labelled a. Having followed the first letter, we can drop it, leaving baa.

Now we see the letter b, so we go once round the loop in our current state. That leaves aa.

Now we have aa, so we follow the edge labelled a.

Lastly we have the word a, so we follow the edge to the right, labelled a, b.

[In the notes each step is accompanied by a copy of the picture with the current state highlighted.]

We end up in a non-accepting state, so this wasn't one of the words we were looking for.
Exercise 26. Follow these words through the above automaton. If you
can, try to describe the words which end up in the one desirable state (the
double-ringed one).
(a) abba
(b) baba
(c) baab.
4.3 Finite state automata

So far we have been very informal with our pictures. It is impossible to define an automaton without becoming fairly mathematical. For any of these pictures we must know which alphabet Σ we are considering.

Every automaton consists of

a number of states,
one state that is initial,
a number (possibly 0) of states that are accepting,
for every state and every letter x from the alphabet precisely one transition from that state to another labelled with x.
Formally we may consider the set of all the states in the automaton, say Q. One of these states is the one we want to start in, called the start state or the initial state. Some of the states are the ones that tell us that if we end up there we have found the kind of word we were looking for. We call these accepting states. They form a subset, say F, of Q.

The edges in the graph are a nice way of visualizing the transitions. Formally what we need is a function that takes as its input a state and a letter from Σ, and returns a state.

We call this the transition function, δ. It takes as inputs a state and a letter, so the input is a pair (q, x), where q ∈ Q and x ∈ Σ. That means that the input comes from the set

    Q × Σ = {(q, x) | q ∈ Q, x ∈ Σ}.

Its output is a state, that is an element of Q. So we have that

    δ : Q × Σ → Q.
The formal definition then is as follows.

Definition 8. Let Σ be a finite alphabet of symbols. A (deterministic) finite¹ automaton, short DFA, over Σ consists of the following:

A finite non-empty set Q of states;
a particular element of Q called the start state (which we often denote with q•);
a subset F of Q consisting of the accepting states;
a transition function δ which for every state q ∈ Q and every symbol x ∈ Σ returns the next state δ(q, x) ∈ Q, so δ is a function from Q × Σ to Q. When δ(q, x) = q′ we often write

    q -x-> q′.

We sometimes put these four items together in a quadruple and speak of the DFA (Q, q•, F, δ).
Sometimes people also refer to a finite state machine.

Note that for every particular word there is precisely one path through the automaton: We start in the start state, and then read off the letters one by one. The transition function makes sure that we will have precisely one edge to follow for each letter. When we have followed the last letter of the word we can read off whether we want to accept it (if we are in an accepting state) or not (otherwise). That's why these automata are called deterministic; we see non-deterministic automata below.
For every word x1 x2 · · · xn we have a uniquely determined sequence of states q•, q1, . . . , qn such that

    (q• = q0) -x1-> q1 -x2-> q2 -> · · · -xn-> qn.

We accept the word if and only if the last state reached, qn, is an accepting state. Formally it is easiest to define this condition as follows.

¹ All our automata have a finite number of states and we often drop the word "finite" when referring to them in these notes.
Definition 9. A word s = x1 · · · xn over Σ is accepted by the deterministic finite automaton (Q, q•, F, δ) if

    δ(q•, x1) = q1,
    δ(q1, x2) = q2,
    . . . ,
    δ(qn-1, xn) = qn

and qn ∈ F, that is, qn is an accepting state.

In particular, the empty word ε is accepted if and only if the start state is an accepting state.
Just as was the case for regular expressions we can view a DFA as defining a language.

Definition 10. We say that a language is recognized by a finite automaton if it is the set of all words accepted by the automaton.
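Definition 9 is in effect a loop over the letters of the word, which is what makes DFAs so pleasant computationally. A sketch (representation invented for illustration):

    def accepts(delta, start, accepting, word):
        # Follow the unique path through the automaton determined by `word`.
        q = start
        for x in word:
            q = delta[(q, x)]      # precisely one transition per state and letter
        return q in accepting      # accept iff we end in an accepting state

    # The even-number-of-0s DFA once more:
    delta = {("E", "0"): "O", ("E", "1"): "E",
             ("O", "0"): "E", ("O", "1"): "O"}
    print(accepts(delta, "E", {"E"}, "00100"))  # False: three 0s
    print(accepts(delta, "E", {"E"}, "0100"))   # True: two 0s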
Exercise 27. Design DFAs that accept precisely those words given by the languages described in the various parts of Exercise 9 (you should already have taken care of (b) and (c)). Note that (r) is no longer marked with * here.

Exercise 28. Do the same for the languages described in Exercise 10.
4.4 Non-deterministic automata

Sometimes it can be easier to draw an automaton that is non-deterministic. What this means is that from some state, there may be several edges labelled with the same letter. As a result, there is then no longer a unique path when we follow a particular word through the automaton. Hence the procedure of following a word through an automaton becomes more complicated: we have to consider a number of possible paths we might take.

When trying to draw an automaton recognizing the language from Exercise 9 (a) you may have been tempted to draw such an automaton. The language in question is that of all words over {0, 1} that begin with 0 and end with 1. We know that any word we accept has to start with 0, and if the first letter we see is 1 then we never accept the word, so we begin by drawing something like this.

    [Diagram: states 0, 1 and 2; an edge labelled 0 from state 0 to state 1, an edge labelled 1 from state 0 to state 2, and a loop labelled 0, 1 on state 2.]

Now we don't care what happens until we reach the last symbol, and when that is 1 we want to accept the word. (If it wasn't the last letter then we shouldn't accept the word.) The following would do that job:
    [Diagram: states 0, 1, 2 and an accepting state 3; edges: 0 to 1 labelled 0, 0 to 2 labelled 1, a loop labelled 0, 1 on state 1, an edge from 1 to 3 labelled 1, an edge from 3 to 2 labelled 0, 1, and a loop labelled 0, 1 on state 2.]
But now when we are in state 1 and see 1 there are two edges we might follow: The loop that leads again to state 1, or the edge that leads to state 3. So now when we follow the word 011 through the automaton there are two possible paths:

From state 0, read 0, go to state 1. Read 1 and go to state 3. Read 1 and go to state 2. Finish, not accepting the word.

From state 0, read 0, go to state 1. Read 1 and go to state 1. Read 1 and go to state 3. Finish, accepting the word.

We say that the automaton accepts the word if there is at least one such path that ends in an accepting state.
So how does the definition of a non-deterministic automaton differ from that of a deterministic one? We still have a set of states Q, a particular start state q•, and a set F of accepting states; what changes is how we describe the transitions.
Exercise 29. Go back and check your solutions to Exercises 27 and 28.
Were they all deterministic as required? If not, redo them.
We can turn this idea into a formal definition.

Definition 11. A non-deterministic finite automaton, short NFA, is given by

a finite non-empty set Q of states,
a start state q• in Q,
a subset F of Q of accepting states, as well as
a transition relation² δ which relates a pair consisting of a state and a letter to a state. We often write

    q -x-> q′

if (q, x) is δ-related to q′.

We can now also say when an NFA accepts a word.

² You will meet relations also in COMP10020.
Definition 12. A word s = x1 · · · xn over Σ is accepted by the non-deterministic finite automaton (Q, q•, F, δ) if there are states q• = q0, q1, . . . , qn such that for all 0 ≤ i < n, δ relates (qi, xi+1) to qi+1, and such that qn ∈ F, that is, qn is an accepting state. The language recognized by an NFA is the set of all words it accepts.
An NFA therefore accepts a word x1 x2 · · · xn if there are states

    q• = q0, q1, . . . , qn

such that

    (q• = q0) -x1-> q1 -x2-> q2 -> · · · -xn-> qn,

and qn is an accepting state. However, the sequence of states is no longer uniquely determined, and there could potentially be many.
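One does not have to enumerate the possible paths one by one: it suffices to track the set of all states reachable after each letter, which is also the idea behind Algorithm 1 below. A sketch, in the same illustrative style as before, with the relation δ stored as a dictionary mapping (state, letter) to a set of states:

    def nfa_accepts(delta, start, accepting, word):
        # `current` is the set of all states reachable from the start
        # state by following the letters read so far.
        current = {start}
        for x in word:
            current = {q2 for q in current for q2 in delta.get((q, x), set())}
        # Accepted iff at least one path ends in an accepting state.
        return bool(current & accepting)

    # The NFA for words that start with 0 and end with 1 (dump state omitted):
    delta = {("0", "0"): {"1"},
             ("1", "0"): {"1"}, ("1", "1"): {"1", "3"}}
    print(nfa_accepts(delta, "0", {"3"}, "011"))  # True
    print(nfa_accepts(delta, "0", {"3"}, "010"))  # False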
Exercise 30. Consider the following NFA.

    [Diagram: an NFA over {a, b} with transitions labelled b, a, b, a and a, as drawn in the notes, and one double-ringed accepting state.]

Which of the following words are accepted by the automaton? ε, aba, bbb, bba, baa, baba, babb, baaa, baaaa, baabb, baaba. Can you describe the language consisting of all the words accepted by this automaton?
Note that in the definition of an NFA there is no rule that says that for a given state there must be an edge for every label. That means that when following a word through a non-deterministic automaton we might find ourselves stuck in a state because there is no edge out of it for the letter we currently see. If that happens then we know that along our current route, the word will not be accepted by the automaton. While this may be confusing at first sight, it is actually quite convenient. It means that pictures of non-deterministic automata can be quite small.

Take the above example of finding an NFA that accepts all the words that start with 0 and end with 1. We can give a much smaller automaton that does the same job.

    [Diagram: three states; an edge labelled 0 from the start state to a middle state, a loop labelled 0, 1 on the middle state, and an edge labelled 1 from the middle state to the accepting state.]

You should spend a moment convincing yourself that this automaton does indeed accept precisely the words claimed (that is, the same ones as the previous automaton).³
This is such a useful convention that it is usually also adopted when drawing deterministic automata. Consider the problem of designing a DFA that recognizes those words over the alphabet {0, 1} of length at least 2 for which the first letter is 0 and the second letter is 1. By concentrating on what it takes to get a word to an accepting state one might well draw something like this:

³ Note that unless we have to describe the automaton in another way, or otherwise have reasons to be able to refer to a particular state, there is no reason for giving the states names in the picture.
    [Diagram: states 0, 1 and 2; an edge labelled 0 from state 0 to state 1, an edge labelled 1 from state 1 to state 2, and a loop labelled 0, 1 on the accepting state 2.]
This is a perfectly good picture of a deterministic finite automaton. However, not all the states, and not all the transitions, are drawn for this automaton: Above we said that for every state, and every letter from the alphabet, there must be a transition from that state labelled with that letter. Here, however, there is no transition labelled 1 from the state 0, and no transition labelled 0 from the state 1.

What does the automaton do if it sees 1 in state 0, or 0 in state 1? Well, it discards the word as non-acceptable, in a manner of speaking.

We can complete the above picture to show all required states by assuming there's a hidden state that we may think of as a dump. As soon as we have determined that a particular word can't be accepted we send it off into that dump state (which is certainly not an accepting state), and there's no way out of that state. So all the transitions not shown in the picture above go to that hidden state. With the hidden state drawn our automaton looks like this:
    [Diagram: as before, plus a dump state 3: an edge labelled 1 from state 0 to state 3, an edge labelled 0 from state 1 to state 3, and a loop labelled 0, 1 on state 3.]
This picture is quite a bit more complicated than the previous one, but
both describe the same DFA, and so contain precisely the same information.
I am perfectly happy for you to draw automata either way when it comes
to exam questions or assessed coursework.
Exercise 31. Consider the following DFA. Which of its states are dump states, and which are unreachable? Draw the simplest automaton recognizing the same language.

    [Diagram: a DFA over {a, b, c} with several states and transitions labelled a, b, c, b, c and a, b, as drawn in the notes.]

Describe the language recognized by the automaton.
Exercise 32. Go through the automata you have drawn for Exercise 27
and 28. Identify any dump states in them.
4.5 Deterministic versus non-deterministic

So far we have found the following differences between deterministic and non-deterministic automata: For the same problem it is usually easier to design a non-deterministic automaton, and the resulting automata are often smaller. On the other hand, following a word through a deterministic automaton is straightforward, and so deciding whether the word is accepted is easy. For non-deterministic automata we have to find all the possible paths a word might move along, and decide whether any of them leads to an accepting state. Hence finding the language recognized by an NFA is usually harder than doing the same thing for a DFA of a similar size.
So both have advantages and disadvantages. But how different are they really? Clearly every deterministic automaton can be viewed as a non-deterministic one since it satisfies all the required criteria.

It therefore makes sense to wonder whether there are things we can do with non-deterministic automata that can't be done with deterministic ones. It turns out that this is not the case.

Theorem 4.1. For every non-deterministic automaton there is a deterministic one that recognizes precisely the same words.
Algorithm 1, example

Before looking at the general case we consider an example. Consider the following NFA.

    [Diagram: states 0, 1, 2, with 0 initial; 0 and 2 accepting; edges: 0 to 1 labelled a, b; 0 to 2 labelled a; 1 to 2 labelled b; 2 to 0 labelled a; a loop labelled a on 2; and 2 to 1 labelled b.]
We want to construct a deterministic automaton from this step by step. We start with state 0, which we just copy, so it is both initial and an accepting state in our new automaton.

With a, we can go from state 0 to states 1 and 2, so we invent a new state we call 12 (think of it as being a set containing both state 1 and state 2). Because 2 is an accepting state we make 12 an accepting state too.

With the letter b we can go from state 0 to state 1, so we need a state 1 (think of it as {1}) in the new automaton too.

Now we have to consider the states we have just created. In the original automaton from state 1 we can't go anywhere with a, but with b we can go to state 2, so we introduce an accepting state 2 (thought of as {2}) into our new automaton.

To see where we can go from state 12 we do the following. In the original automaton, with a we can go from 1 nowhere and from 2 to 0 and 2, so we lump these together and say that from 12 we can go to a new state 02 (really {0, 2}) with a. Because 2 is an accepting state we make 02 one too.

In the original automaton from state 1 with b we can go to state 2, and from state 2 we can go to state 1 with the same letter, so from state 12 we can go back to 12 with b.

In the original automaton with a we can go from state 2 to states 0 and 2, so we need a transition labelled a from state 2 to state 02 in our new DFA.

With b from state 2 we can only go back to 1, so we add this transition to the new automaton.

Now for the new state 02. From 0 we can go with a to states 1 and 2, and from state 2 we can get to states 0 and 2, so taking it all together from state 02 we can go with a to a new accepting state we call 012.

With b from state 0 we can go to state 1 in the old automaton, and from state 2 we can also only go to state 1 with a b, so we need a transition from the new state 02 to state 1 labelled b.

Following the same idea, from 012 with a we can go back to 012, and with b we can go to 12.

[In the notes each step is accompanied by a picture of the DFA built so far; the finished automaton has states 0, 12, 1, 2, 02 and 012.]
So what have we done here? We have created a new automaton from the old one as follows.

States of the new automaton are sets of states from the old one.

The start state for the new automaton is the set containing just the initial state from the original automaton.

A state of the new automaton is an accepting state if one of its states in the old automaton is an accepting state.

There is a transition labelled x from some state in the new automaton to another if there are states of the old automaton in the two sets that have a corresponding transition.
Exercise 33. For the automaton given below carry out the same process. Hint: If you do find this very hard then read on a bit to see if that helps.

    [Diagram: an NFA with states 0, 1, 2 and transitions labelled a and b, as drawn in the notes.]
Algorithm 1, general case

We now give the general case.

Assume that we have an NFA (Q, q•, F, δ). The states of the new automaton are the subsets of Q, and its start state is {q•}.

A state S is an accepting state if and only if there is q ∈ S such that q ∈ F, that is, q is an accepting state in the original automaton. Hence the new set of accepting states is

    F′ = {S ⊆ Q | there is q ∈ S with q ∈ F}.

For a new state S ⊆ Q and a letter x ∈ Σ the transition labelled with x leads to the state given by the set of all states q′ for which there is a q ∈ S such that there is a transition labelled x from q to q′, that is

    δ′(S, x) = {q′ ∈ Q | there is q ∈ S with q -x-> q′}.

In other words, to see where the edge labelled x from S should lead to, go through all the states q in S and collect all those states q′ ∈ Q which have an edge labelled x from q.

Note that if there is no state in S which has an edge labelled with x then δ′(S, x) is still defined: it is the empty set in that case. Also note that the state given by the empty set is always a dump state. Because there are no states in ∅, we have δ′(∅, x) = ∅ for all x ∈ Σ, so it is a state one can never escape from. Also, it cannot be an accepting state because it contains no accepting state as an element.
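The whole of Algorithm 1 fits into a few lines of code. This sketch (same illustrative NFA representation as before) computes only the part of the new automaton that is reachable from the start state, which is also what we drew above:

    def nfa_to_dfa(delta, start, accepting, alphabet):
        # States of the new DFA are frozensets of NFA states.
        start_set = frozenset({start})
        new_delta, todo, seen = {}, [start_set], {start_set}
        while todo:
            s = todo.pop()
            for x in alphabet:
                # Collect every q' with q -x-> q' for some q in s.
                target = frozenset(q2 for q in s
                                   for q2 in delta.get((q, x), set()))
                new_delta[(s, x)] = target
                if target not in seen:
                    seen.add(target)
                    todo.append(target)
        new_accepting = {s for s in seen if s & accepting}
        return new_delta, start_set, new_accepting

    # The example NFA of this section:
    delta = {("0", "a"): {"1", "2"}, ("0", "b"): {"1"},
             ("1", "b"): {"2"},
             ("2", "a"): {"0", "2"}, ("2", "b"): {"1"}}
    d, s0, acc = nfa_to_dfa(delta, "0", {"0", "2"}, "ab")
    print(sorted("".join(sorted(s)) for s in acc))  # ['0', '012', '02', '12', '2']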
We call the resulting automaton the DFA generated by the NFA (Q, q•, F, δ).

For the above example, the new automaton has states ∅, {0}, {1}, {2}, {0, 1}, {0, 2}, {1, 2} and {0, 1, 2}. In the image above we have kept our labels shorter by only listing the elements of the corresponding set without any separators or braces. For larger automata this would make it impossible to distinguish between the state 12 and the state {1, 2}, but in such a small example this is harmless.
However, not all the states we have just listed appear in the picture we drew above. This is so because we used a method that only draws those states which are reachable from the start state. We next draw the full automaton as formally defined above.

    [Diagram: the full DFA with states 0, 1, 2, 12, 02, 012, 01 and ∅, including the unreachable state 01 and the empty-set dump state, with all transitions labelled a and b.]
We already know that we may leave out dump states like ∅ when drawing an automaton.

We have another extra state in this picture, namely {0, 1}. This is a state we can never get to when starting at the start state {0}, so no word will ever reach it either. It is therefore irrelevant when it comes to deciding whether or not a word is accepted by this automaton. We call such states unreachable and usually don't bother to draw them.

Exercise 34. For the NFA from the previous exercise draw the picture of the full automaton with all states (given by the set of all subsets of {0, 1, 2}, including the unreachable ones) and all transitions.
Exercise 35. For each of the following NFAs, give a DFA recognizing the same language.

(a) [Diagram: an NFA with states 0 and 1 and transitions labelled a; a, b; b and b, as drawn in the notes.]

(b) [Diagram: an NFA with states 0, 1 and 2 and transitions labelled a, a, b, b, b, a, b and a, as drawn in the notes.]

(c) [Diagram: as in (b), with two additional transitions labelled b and a.]
It is possible to avoid drawing unreachable states by doing the following:

Start with the start state of the new automaton, {q•}.

Pick a state S you have already drawn, and a letter x for which you have not yet drawn a transition out of the state S. Now there must be precisely one state S′ to which the transition labelled x leads; draw it if it is not already in the picture, and add the new edge.
For a simple automaton we can often read off a suitable pattern directly.
However, if the automaton is more complicated then reading off a pattern can become very difficult, in particular if there are several accepting states. If the automaton is moreover non-deterministic the complexity of the task worsens further; usually it is therefore a good idea to first convert an NFA to a DFA using Algorithm 1 from page 46.

In order to show that it is possible to construct a regular expression defining the same language for every automaton we have to give an algorithm that works for all automata. This algorithm may look overly complex at first sight, but it really does work for every automaton. If you had to apply it a lot you could do the following:

Define a data structure of finite automata in whatever language you're using. In Java you would be creating a suitable class.

Implement the algorithm so that it takes as input an object of that class, and produces a pattern accordingly.
Algorithm 2, first example

Because the algorithm required is complicated we explain it first using a simple example. With a bit of experience you will probably be able to read off a pattern from the automaton, but you won't be able to do this with more complicated automata (nor will you be able to program a computer to do the same).

Consider the following DFA.

    [Diagram: states 0, 1, 2, with 0 initial; 0 and 2 accepting; edges: 0 to 1 labelled a, 1 to 2 labelled b, 2 to 1 labelled a, and 2 to 0 labelled c.]
There are two accepting states. Hence a word that is accepted will either start in state 0 and end in state 0, or start in state 0 and end in state 2. That means the language L accepted by this automaton is the union of two languages which we write as follows:

    L = L₀₀ ∪ L₀₂

The indices tell us in which state we start and in which state we finish. This is already a useful observation since we can now concentrate on calculating one language at a time.
To calculate L₀₀ we note that we can move from state 0 to state 0 in a number of ways. It is so complicated because there are loops in the automaton. What we do is to break up these ways by controlling the use of the state with the highest number, state 2, as follows:

To get from state 0 to state 0 we can

either not use state 2 at all (that is, go only via states 0 and 1) or

go to state 2, return to it as many times as we like, and then go from state 2 to state 0 at the end.

At first sight this does not seem to have simplified matters at all. But we have now gained control over how we use the state 2 because we can use this observation to obtain the following equality.

    L₀₀ = L²₀₀ = L¹₀₀ ∪ L¹₀₂ · (L¹₂₂)* · L¹₂₀
We have introduced new notation here, using L¹ᵢⱼ to mean that we go from state i to state j while using only states 0 and 1 in between, that is, states with a number ≤ 1. While our expression has grown, the individual languages are now easier to calculate.

L¹₀₀: We cannot go from state 0 to state 0 while only using states 0 and 1 in between. Hence this language is equal to {ε}. (You may have expected the empty set here, but we are trying to concatenate words encountered on this way with other words, and that means the empty word is the right thing to put here.)
L¹₀₂: To go from state 0 to state 2 without using state 2 in between we must see a followed by b. Hence this language is equal to {ab}.

L¹₂₂: To go from state 2 to state 2 without using state 2 in between there are two possibilities:

we can go from state 2 to state 1 and back, seeing ab along the way, or

we can go from state 2 to state 0 to state 1 to state 2 and see cab along the way. (It would be a good idea now to convince yourself that there is no other possibility.)

Hence this language is equal to {ab} ∪ {cab} = {ab, cab}.

L¹₂₀: The only way of getting from state 2 to state 0 using only states 0 and 1 in between is the direct route, which means we must see c. Hence this language is equal to {c}.
Putting all these together we get the following.
    L₀₀ = L²₀₀
        = L¹₀₀ ∪ L¹₀₂ · (L¹₂₂)* · L¹₂₀
        = {ε} ∪ {ab} · {ab, cab}* · {c}
We can now read a pattern⁴ off from this expression, namely

    ε|ab(ab|cab)*c,

and we have

    L₀₀ = L(ε|ab(ab|cab)*c).

Note that this is not the simplest pattern we could possibly have used, which is (a(ba)*bc)*.
Moreover, we have already worked out the languages required here, so we can continue directly to write the following.

⁴ If you find this difficult go back and do Exercises 1, 2, and 8 and have another look at page 17.

⁵ It would be a good idea to work out why this is different from the case of language L₀₀.
    L₀₂ = L¹₀₂ · (L¹₂₂)*
        = {ab} · {ab, cab}*,⁵

and we get

    L₀₂ = L(ab(ab|cab)*).
Altogether that means

    L = L₀₀ ∪ L₀₂
      = L(ε|ab(ab|cab)*c) ∪ L(ab(ab|cab)*)
      = L(ε|ab(ab|cab)*c|ab(ab|cab)*)

If we like we can simplify that as follows.

    L(ε|ab(ab|cab)*c|ab(ab|cab)*) = L(ε|ab(ab|cab)*(c|ε))
Alternatively, with a bit of experience one might read off this regular expression directly from the automaton:

    (a(ba)*bc)*(ε|a(ba)*b)
Exercise 38. Carry out the algorithm just described for the following automaton. Use the notation with the various languages Lᵏᵢⱼ for the first two or three lines at least, even if you can read off a pattern directly.

    [Diagram: a DFA with states 0, 1, 2, 3 and transitions labelled a, c, b, c, a and b, as drawn in the notes.]
Algorithm 2, second example

Let's look at a more complicated example.

    [Diagram: states 0, 1, 2, with 0 initial; 0 and 2 accepting; edges: 0 to 1 labelled a, c; 0 to 2 labelled b; 1 to 2 labelled b; 2 to 0 labelled a; 2 to 1 labelled c; and a loop labelled b on 2.]

What is confusing about this automaton is not its size in terms of the number of states, but the way the transitions criss-cross it.
identify all the words that
when starting in state 0 end up in state 0. We call the resulting
language L
00
. We also need all the words that
51
when starting in state 0 end up in state 2. We call the resulting
language L
02
.
In other words the language L recognized by the automaton can be
calculated as
L = L
00
L
02
.
This now allows us to calculate these two languages separately, which already makes the task a little bit easier. Note that what we have done is to split up the recognized language into all the languages of the form Lq•q, where q• is the start state and q ranges over all the accepting states (that is, all the elements of F).

In general we would therefore write for the language L recognized by some automaton (Q, q•, F, δ)

    L = ⋃q∈F Lq•q.
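For completeness, here is a compact sketch (my own illustration, not part of the notes) of the whole algorithm: it builds regular expressions for the languages Lᵏᵢⱼ by recursion on k, with k = -1 meaning that no intermediate states at all are allowed, and takes the union over all accepting states. It makes no attempt to simplify its output, so the patterns it produces are correct but much longer than the hand-derived ones.

    def dfa_to_regex(n_states, edges, accepting):
        # edges maps (i, j) to the letter labelling the edge from i to j.
        def alt(p, q):                      # p | q, treating None as ∅
            if p is None: return q
            if q is None: return p
            return f"({p}|{q})"
        def cat(p, q):                      # concatenation, None stays ∅
            if p is None or q is None: return None
            if p == "ε": return q
            if q == "ε": return p
            return p + q

        def L(i, j, k):
            # Words from i to j using only intermediate states <= k.
            if k < 0:
                base = edges.get((i, j))
                return alt("ε", base) if i == j else base
            through = cat(cat(L(i, k, k - 1), f"({L(k, k, k - 1)})*"),
                          L(k, j, k - 1))
            return alt(L(i, j, k - 1), through)

        results = [L(0, q, n_states - 1) for q in accepting]
        return "|".join(f"({r})" for r in results if r is not None)

    # The first example of this section:
    edges = {(0, 1): "a", (1, 2): "b", (2, 1): "a", (2, 0): "c"}
    print(dfa_to_regex(3, edges, accepting=[0, 2]))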
But finding all the different paths that lead from 0 to 0, or from 0 to 2, is still pretty tough. The way we simplify that is by taking the state with the highest index, namely 2, out of consideration as follows.

Every path from the state 0 to the state 0 does one of the following:

It either doesn't use the state 2 at all, or

it goes from the state 0 to the state 2, then returns to the state 2 as often as it likes, and ultimately goes to the state 0.

At first sight this doesn't look like a very useful observation. But what we have done now is to break up any path that starts at the state 0 and finishes at the state 0 into a succession of paths that only use the state 2 at controlled points.
We use the same notation as before: All words that follow a path that
goes from state 0 to state 0 while only using states 0 and 1 (but not state 2)
in between make up the language L
1
00
. This works similarly for other
start or end states. Reformulating our last observation means then that
every word that follows a path from state 0 to state 0 satises one of the
following:
It either is an element of L
1
00
or
it is an element of
L
1
02
(L
1
22
)
L
1
20
.
That means that we have the equality

L_{00} = L^2_{00} = L^1_{00} ∪ L^1_{02} (L^1_{22})* L^1_{20}.

While the equality may appear to make things more confusing at first
sight, we now have languages which we can more easily determine on the
right hand side of the equality.
We now have the choice between trying to determine the languages on
the right directly, or applying the same idea again.

L^1_{00}. How do we get from state 0 to state 0 only using states 0
and 1? The simple answer is that we can't move there, but we are
already there, so L^1_{00} = {ε}.
L^1_{02}. Going from state 0 to state 2 using only states 0 and 1 can be
done in two ways, either directly using the letter b or via state 1 using
ab or cb. Hence L^1_{02} = {b, ab, cb}.

L^1_{22}. This is more complicated. Instead of trying to work this out
directly we apply our rule again: When going from state 2 to state 2
using only states 0 and 1 we can either go directly from state 2 to
state 2 or we can go from state 2 to state 1, return to state 1 as often
as we like using only state 0, and then go from state 1 to state 2. In
other words we have

L^1_{22} = L^0_{22} ∪ L^0_{21} (L^0_{11})* L^0_{12}.

We now read off L^0_{22} = {b, ab}, L^0_{21} = {aa, ac, c}, L^0_{11} = ∅ and
L^0_{12} = {b}. That gives us

L^1_{22} = {b, ab} ∪ {aa, ac, c} ∅* {b}
         = {b, ab} ∪ {aa, ac, c} {b}
         = {b, ab, aab, acb, cb}.

One step in this calculation merits further explanation, namely ∅*,
which is calculated to be {ε} on page 12.

L^1_{20}. This one we can read off directly; it is {a}.
Altogether we get

L_{00} = {ε} ∪ {b, ab, cb} {b, ab, aab, acb, cb}* {a}
      = L(ε) ∪ L(b|ab|cb) (L(b|ab|aab|acb|cb))* L(a)
      = L(ε|(b|ab|cb)(b|ab|aab|acb|cb)*a).

See the top of page 17 for an explanation of the last step.
This leaves the calculation of L_{02}. We once again apply the earlier
trick and observe this time that a path from state 0 to state 2 will have to
reach state 2 (for the first time) using only states 0 and 1 along the way,
and then it can return to state 2 as often as it likes, giving

L_{02} = L^2_{02} = L^1_{02} (L^1_{22})*.

Now L^1_{02} we already calculated above; it is equal to {b, ab, cb}. We also
know already that L^1_{22} = {b, ab, aab, acb, cb}. Hence

L_{02} = {b, ab, cb} {b, ab, aab, acb, cb}*
       = L((b|ab|cb)(b|ab|aab|acb|cb)*).
Hence the language recognized by the automaton is

L_{00} ∪ L_{02}
  = L(ε|(b|ab|cb)(b|ab|aab|acb|cb)*a) ∪ L((b|ab|cb)(b|ab|aab|acb|cb)*)
  = L((ε|(b|ab|cb)(b|ab|aab|acb|cb)*a)|((b|ab|cb)(b|ab|aab|acb|cb)*))
  = L(ε|(b|ab|cb)(b|ab|aab|acb|cb)*(ε|a)),

and a regular expression giving the same language is

ε|(b|ab|cb)(b|ab|aab|acb|cb)*(ε|a).
Algorithm 2, general case

It is time to generalize what we have done. It is useful to assume that the
states in the automaton are given by 0, 1, . . . , n, and that 0 is the start
state. If we get an automaton that does not conform with this we can always
relabel the states accordingly. Then the language L of all the words accepted
by the automaton is equal to

⋃_{i ∈ F} L_{0i} = ⋃_{i ∈ F} L^n_{0i},

where L_{0i} is the language of all words that, when starting in state 0, end
in state i. Since a word is accepted if and only if it ends in an accepting
state the above equality is precisely what we need.
We can now think of L_{0i} as equal to L^n_{0i}: the language of all
words that, when starting in state 0, end up in state i is clearly the language
of all words that do so when using any of the states in {0, 1, . . . , n}. In
general, L^k_{ji} is the language of all those words that, when starting in
state j, end in state i and use only states with a number less than or equal
to k in between.
It is the languages of the form L^k_{ji} for which we can find expressions
that reduce k by 1: Any path that goes from state j to state i using only
states with numbers at most k will

• either go from state j to state i only using states with number at most
  k − 1 in between (that is, not use state k at all)

• or go from state j to state k (using only states with number at most
  k − 1 in between), return to state k an arbitrary number of times,
  and then go from state k to state i using only states with number at
  most k − 1 in between.

Hence we have

L^k_{ji} = L^{k−1}_{ji} ∪ L^{k−1}_{jk} (L^{k−1}_{kk})* L^{k−1}_{ki}.
We note that if j = k or i = k then in the above expression only two
different languages appear, namely

• if j = k then L^{k−1}_{ji} = L^{k−1}_{ki} and L^{k−1}_{jk} = L^{k−1}_{kk}, and

• if i = k then L^{k−1}_{ji} = L^{k−1}_{jk} and L^{k−1}_{ki} = L^{k−1}_{kk}.

Thus we get slightly simpler expressions (compare L_{02} in the above exam-
ple):

L^j_{ji} = (L^{j−1}_{jj})* L^{j−1}_{ji}
L^i_{ji} = L^{i−1}_{ji} (L^{i−1}_{ii})*
When simplifying the resulting expressions the following identities for
languages are useful:

∅* = {ε}* = {ε};
∅ L = ∅ = L ∅;
{ε} L = L = L {ε};
(∅ ∪ L)* = L* = (L ∪ ∅)*.
Once we have an expression consisting entirely of the simple languages
of the form L^{−1}_{ji} we can insert regular expressions generating these,
and use the rules from page 17 to get one regular expression for the overall
language.
The advantage of having a general algorithm that solves the problem is
that one can now write a program implementing it that will work for all
automata, even large ones.
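To make this concrete, here is a minimal sketch in Python of what such a
program might look like. None of it is from the notes: the representation (a
dictionary delta from (state, letter) pairs to states, a collection accepting of
accepting states) and all names are assumptions made for illustration. The
helper functions implement the identities for ∅ and {ε} listed above, with
None standing for the empty set and the string "ε" for the empty-word
pattern.

    def dfa_to_regex(n, delta, accepting):
        # Compute a pattern for the language of a DFA with states 0, ..., n,
        # start state 0, transition table delta and the given accepting
        # states. Returns None if the language is empty.

        def union(r, s):                       # r|s, where None stands for ∅
            if r is None: return s
            if s is None: return r
            return r + "|" + s

        def bracket(r):                        # bracket alternatives before concatenating
            return "(" + r + ")" if "|" in r else r

        def concat(r, s):                      # ∅p = ∅ and εp = p
            if r is None or s is None: return None
            if r == "ε": return s
            if s == "ε": return r
            return bracket(r) + bracket(s)

        def star(r):                           # ∅* = {ε}* = {ε}
            if r is None or r == "ε": return "ε"
            return "(" + r + ")*"

        def L(j, i, k):
            # A pattern for L^k_{ji}; the base case k = -1 allows no
            # intermediate states at all, so only single letters (plus ε
            # when j = i) remain.
            if k == -1:
                letters = [x for (q, x), q2 in delta.items() if q == j and q2 == i]
                base = "|".join(letters) if letters else None
                return union("ε", base) if j == i else base
            return union(L(j, i, k - 1),
                         concat(L(j, k, k - 1),
                                concat(star(L(k, k, k - 1)), L(k, i, k - 1))))

        result = None
        for i in accepting:                    # union over accepting states of L^n_{0i}
            result = union(result, L(0, i, n))
        return result

For the first example above (states 0, 1 and 2 with transitions 0 -a-> 1,
1 -b-> 2, 2 -a-> 1 and 2 -c-> 0, and with 0 and 2 accepting), calling
dfa_to_regex(2, {(0, "a"): 1, (1, "b"): 2, (2, "a"): 1, (2, "c"): 0}, [0, 2])
yields a pattern equivalent to the one derived by hand, only without the
simplifications we performed along the way. The recursion recomputes the
same languages many times, so a serious implementation would remember
values of L(j, i, k) it has already seen.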
Exercise 39. Using the construction just described find a pattern for the
following automata. Can you read off a pattern from each automaton? Does
it match the one constructed?

(a) [automaton diagram: states 0, 1 and 2 with transitions labelled a, b and c]

(b) [automaton diagram: states 0, 1 and 2 with transitions labelled a and b]

Exercise 40. Give regular expressions defining the languages recognized by
the following automata using Algorithm 2. Hint: Recall that the way you
number the states has an impact on how many steps of the algorithm you
will have to apply!
(a) [automaton diagram with transitions labelled a and b]

(b) [automaton diagram with transitions labelled a and b]

(c) [automaton diagram with transitions labelled a and b]
By giving the algorithm we have established the following result.
Theorem 4.2. For every DFA there is a regular expression such that the
language recognized by the DFA is the language defined by the regular ex-
pression.
We get the following consequence thanks to Theorem 4.1.
Theorem 4.3. For every DFA or NFA it is the case that the language it
recognizes is regular.
4.7 From patterns to automata

We have a way of going from an automaton to a pattern that we can com-
municate to a computer, so a natural question is whether one can also go
in the opposite direction. This may sound like a theoretical concern at first
sight, but it is actually quite useful to be able to derive an automaton from
a pattern. That way, if one does come across a pattern that one doesn't
entirely understand one can turn it into an automaton. Also, changing ex-
isting patterns so that they apply to slightly different tasks can often be
done more easily by first translating them to an automaton.
For some patterns we can do this quite easily.⁶

⁶ You can always write down some reasonably complex patterns and give it a go.
Exercise 41. Design DFAs over the alphabet {a, b, c} that recognize the
languages defined by the following patterns.

(a) (a|b)cc.

(b) cc(a|b).

(c) aa|bb|cc.

(d) c(a|b)*c.

Now assume that instead we want to recognize all words that contain a sub-
string matching those patterns. How do you have to change your automata
to achieve that?
For slightly more complicated patterns we can still do this without too
many problems, provided we are allowed to use NFAs.

Exercise 42. Design NFAs over the alphabet {0, 1} that recognize the lan-
guages defined by the following patterns.

(a) (00)*|(01)*

(b) (010)*|0(11)*

Consider, however, a pattern such as a*b*: to construct an automaton for
it we would like to combine automata for a* and b*.
The latter are quite easily constructed:

[two automaton diagrams: one for the pattern a*, one for the pattern b*]

Now we would somehow like to stick these automata together, so that
the first part accepts the a* part of a word and the second part the rest.
For that we'd like to have a way of going from the left state above to
the right state above in a way that doesn't consume any of the letters of our
word, something like this:

[diagram: the two automata joined by an arrow labelled ε]

A word x_1 x_2 ... x_n is accepted by an NFA with ε-transitions if there are
states q_0, q_1, . . . , q_l, where q_0 is the start state, and numbers
m_0, m_1, . . . , m_n
and, for all 1 ≤ i ≤ n, transitions

q_{m_{i-1}} -ε-> q_{m_{i-1}+1} -ε-> . . . -ε-> q_{m_i - 1} -x_i-> q_{m_i}

as well as transitions

q_{m_n} -ε-> q_{m_n + 1} -ε-> . . . -ε-> q_l,

such that m_0 = 0 and such that q_l is an accepting state. Here in each case
the number of ε-transitions may be 0. The language recognized by an
NFA with ε-transitions is the set of all words it accepts.
While we do need NFAs with ε-transitions along the way to construct-
ing an automaton from a pattern, we do not want to keep the ε-transitions
around, since they make the automata in question much more confusing. We
introduce an algorithm here that removes the ε-transitions from such an
automaton. For that let Σ be an arbitrary alphabet not containing ε.
Before we turn to describing the general case of Algorithm 3 we
investigate the algorithm that removes ε-transitions.

Algorithm 4, general case

This is the only time when it makes sense to give the general case first and
then look at an example.
Let (Q, q•, F, δ) be an NFA with ε-transitions. We construct an NFA with
the same states and the same start state as follows.
The new set of accepting states is

F′ = F ∪ {q ∈ Q | there are q = q_0, . . . , q_n ∈ Q with
          q_i -ε-> q_{i+1} for all 0 ≤ i ≤ n − 1 and q_n ∈ F}.

In other words we add to F those states from which we can get to an
accepting state by following only ε-moves.
For q, q′ ∈ Q and x ∈ Σ the new transition relation relates (q, x) to q′ if
and only if there are states q = q_1, q_2, . . . , q_n = q′ such that

q = q_1 -ε-> q_2 -ε-> . . . -ε-> q_{n-2} -ε-> q_{n-1} -x-> q_n = q′.

In other words, for x ∈ Σ and q, q′ ∈ Q, we can go from q to q′ in the new
automaton by following an arbitrary number of ε-transitions followed by
one transition labelled x.
The resulting automaton may then contain unreachable states, which can
be safely removed. It recognizes precisely the same language as the automa-
ton we started with.
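The algorithm is mechanical enough to express as a short program. The
following Python sketch is not from the notes; the representation (a set of
states, a set eps of pairs describing the ε-transitions, a set trans of
(state, letter, state) triples) and all names are assumptions made for
illustration.

    def remove_epsilons(states, eps, trans, accepting):
        # Compute, for each state, the set of states reachable using
        # ε-moves alone (including the state itself).
        closure = {q: {q} for q in states}
        changed = True
        while changed:
            changed = False
            for (p, q) in eps:
                for r in states:
                    if p in closure[r] and q not in closure[r]:
                        closure[r].add(q)
                        changed = True

        # A state becomes accepting if an accepting state can be reached
        # from it by ε-moves alone.
        new_accepting = {q for q in states if closure[q] & set(accepting)}

        # There is a transition q -x-> r in the new automaton if some
        # state p that is ε-reachable from q has a transition p -x-> r.
        new_trans = {(q, x, r)
                     for q in states
                     for (p, x, r) in trans
                     if p in closure[q]}
        return new_trans, new_accepting

Removing the unreachable states mentioned above would be a separate pass
over the result.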
Algorithm 4, example

Here is an example. Consider the following automaton with ε-transitions.

[automaton diagram: a chain of states with transitions labelled a, ε and b]

We copy the states as they are, and create some new accepting states as
follows:
Pick a non-accepting state. If from there we can reach an accepting state
(in the original automaton) using only ε-transitions, we make this state an
accepting state.
For the example, this means the initial state and the third state from
the left are now accepting states.
We then copy all the transitions labelled with a letter other than ε from
the original automaton.

[diagram: the copied automaton with its a- and b-transitions]

We now look at each of these transitions in turn. Take the transition
labelled a. We add a transition labelled a from state i to state j if it
is possible to get from state i to state j using a number of ε-transitions
followed by the transition labelled a.

[diagram: the automaton with the additional a-transition added]
Alternative

An automaton accepting the language defined by the pattern p_1|p_2 is given
below. We add a new start state and connect it with ε-transitions to the
start states of A_1 and A_2 (so these are no longer start states).

[diagram: the automata A_1 and A_2 combine to a single automaton with a
new start state and ε-transitions into A_1 and A_2]
Kleene Star

We assume that we have a pattern p and an automaton A that recognizes
the language defined by p.
An automaton accepting the language defined by the pattern p* is given
below. Given an automaton A we introduce a new start state. This state
is accepting, and it has an ε-transition to the old start state. Further we
introduce an ε-transition from each accepting state to the old start state.¹⁰

[diagram: A becomes the automaton for the pattern p*]

We illustrate the process with an example. We return to the pattern
a*b* for that.
For the pattern a we have the automaton

[diagram: two-state automaton for the pattern a]

For this automaton we apply the Kleene star construction.

[diagram: automaton for the pattern a*]

We get the corresponding automaton for the pattern b* in the same way, so
putting them together gives the following.

[diagram: automaton with ε-transitions for the pattern a*b*]

The automaton we obtain (for the pattern a*b*) is the following.

[diagram: the automaton after tidying up]

¹⁰ Again we could leave out the ε-transition and the right hand state, which is a dump
state.
In practice, strictly following Algorithm 3 from Section 4.7 introduces rather
more ε-transitions than strictly necessary. In particular, when concatenating
very simple automata, the ε-transition connecting them can often be safely
left out. I am happy for you to do so in assessed work.
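Since the constructions of Algorithm 3 were given here as pictures, the
following Python sketch records them in code: one function each for a
one-letter pattern, union, concatenation and the Kleene star. The
representation of an ε-NFA as a triple (start state, set of
(state, label, state) transitions, set of accepting states), with the label
"ε" marking ε-transitions, and all names are assumptions made for
illustration.

    import itertools

    fresh = itertools.count()          # supply of unused state names

    def atom(x):
        # Automaton for the one-letter pattern x.
        s, t = next(fresh), next(fresh)
        return (s, {(s, x, t)}, {t})

    def union(a1, a2):
        # New start state with ε-transitions to both old start states.
        (s1, t1, f1), (s2, t2, f2) = a1, a2
        s = next(fresh)
        return (s, t1 | t2 | {(s, "ε", s1), (s, "ε", s2)}, f1 | f2)

    def concat(a1, a2):
        # ε-transitions from the accepting states of the first automaton
        # to the start state of the second.
        (s1, t1, f1), (s2, t2, f2) = a1, a2
        return (s1, t1 | t2 | {(q, "ε", s2) for q in f1}, f2)

    def star(a):
        # New accepting start state with an ε-transition to the old start
        # state; every accepting state gets an ε-transition back there too.
        (s1, t1, f1) = a
        s = next(fresh)
        return (s, t1 | {(s, "ε", s1)} | {(q, "ε", s1) for q in f1}, f1 | {s})

    nfa = concat(star(atom("a")), star(atom("b")))   # ε-NFA for the pattern a*b*

Feeding the result to the ε-removal sketch above, and then applying
Algorithm 1, completes the journey from a pattern to a DFA.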
Exercise 45. Construct DFAs that recognize the languages defined by the
following patterns.
(a) (ab|a)*

(b) (01)*|001*

(c) ((00)*11|01)*

(d) ((ab)*b|ab*)*a(a|b).
(b) Take your result from (a) and remove the ε-transitions by applying
Algorithm 4 on page 58.

(c) Take your result from (b) and turn it into a DFA by using Algorithm 1
on page 46. How many states would your DFA have if you didn't remove
unreachable states first? How many states would your DFA have if you
didn't only draw the reachable ones?
We have given an idea of how to prove the following.

Theorem 4.5. For every regular language we can find the following recog-
nizing it:

• a deterministic finite automaton;

• a non-deterministic finite automaton;

• a non-deterministic finite automaton with ε-transitions.

Altogether, we have shown in various parts of this section the following
result.

Theorem 4.6. The following have precisely the same power when it comes
to describing languages:

• regular expressions,

• deterministic finite automata,

• non-deterministic finite automata.

Hence for every regular language we can find a description using any one of
the above, and given one description we can turn it into any of the others.
4.8 Properties of regular languages

In Section 2 there are examples of how we can build new regular languages
from existing ones. At first sight these may seem like theoretical results
without much practical use. However, what they allow us to do is to build
up quite complicated languages from simpler ones. This also means that we
can build the corresponding regular expressions or automata from simple
ones, following established algorithms. That makes finding suitable patterns
or automata less error-prone, which can be very important.
If our language is finite to start with then finding a pattern for it is very
easy.

Exercise 47. Show that every finite language is regular.
Assume we have two regular languages, L_1 and L_2. We look at languages
we can build from these, and show that these are regular.

Concatenation

In Section 2 it is shown that the concatenation of two regular languages
is regular. If L_1 and L_2 are two regular languages, and p_1 is a regular
expression defining the first, and p_2 works in the same way for the second,
then

L_1 L_2 = L(p_1) L(p_2) = L(p_1 p_2),

so this is also a regular language. There is a description of how to construct
an automaton for L_1 L_2 from those for L_1 and L_2 on page 60.
Kleene star

Again this is something we have already considered. If L is a regular lan-
guage then so is L*: if p is a pattern defining L then p* is a pattern for L*,
and again our algorithm for turning patterns into automata shows us how
to turn an automaton for L into one for L*.
Reversal

If s is a word over some alphabet Σ then we can construct another word over
the same alphabet by reading s backwards, or, in other words, reversing it.
For a language L over Σ we define

L^R = {x_n x_{n-1} ... x_2 x_1 | x_1 x_2 ... x_{n-1} x_n ∈ L}.

Exercise 48. Here are some exercises concerned with reversing strings
that offer a good opportunity for practising what you have learned so far.
Parts (a) and (e) require material that is discussed in Appendix A, so it only
makes sense to do these if you are working through this part of the notes.

(a) Define the reversal of a string as a recursive function.

(b) Look at an automaton for the language of all non-empty words over
the alphabet {a, b} which start with a. How can it be turned into one for
the language of all words which end with a? Hint: How could we have a
given word follow, in reverse, the path through the automaton that it would
take ordinarily?
(c) Look at the language given in Exercise 9 (i). What do the words in its
reversal look like? Now look at an automaton for the given language and
turn it into one for the reversed language. Hint: See above.

(d) In general, describe informally how, given an automaton for a language
L, one can draw one for the language L^R.

(e) Take the formal description of a DFA recognizing a language L as in Def-
inition 8 and turn that into a formal definition for an NFA with ε-transitions
which recognizes the language L^R.
Unions

If we have regular expressions for L_1 and L_2, say p_1 and p_2 respectively, it is
easy to build a regular expression for L_1 ∪ L_2, namely p_1|p_2. But how do we
build an automaton for L_1 ∪ L_2 from those for L_1 and L_2? We have already
seen how to do that as well: form an NFA with a new start state which has
ε-transitions to the (old) start states of the two automata, as illustrated on
page 60. If we like we can then turn this NFA with ε-transitions into a DFA.
Intersections

This isn't so easy. It's not at all clear how one would go about it
using patterns, whereas with automata one can see how it might work. The
problem is that we can't say "first get through the automaton for L_1, then
through that for L_2": When we have followed the word through the first
automaton it has been consumed, because we forget about the letters once
we have followed a transition for them. So somehow we have to find a way
to let the word follow a path through both automata at the same time.
Let's try this with an example: Assume we want to describe the language
L of all words that have an even number of as and an odd number of bs.
Clearly

L = {s ∈ {a, b}* | s has an even number of as}
    ∩ {s ∈ {a, b}* | s has an odd number of bs}.

In general, given two DFAs (Q_1, q_1, F_1, δ_1) and (Q_2, q_2, F_2, δ_2) we can form
an automaton that recognizes the intersection of the languages recognized
by the two DFAs as follows.
• States: Q_1 × Q_2.

• Start state: (q_1, q_2).

• Accepting states: F_1 × F_2.

• Transition function: δ maps (q_1, q_2) and x to (δ_1(q_1, x), δ_2(q_2, x)). In
  other words, there is a transition

  (q_1, q_2) -x-> (q_1′, q_2′)

  if and only if there are transitions

  q_1 -x-> q_1′ and q_2 -x-> q_2′.

We call the result the product of the two automata. If we do need
a pattern for the intersection we can now apply Algorithm 2 from page 54.
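The product construction is easy to program. Below is a minimal sketch,
assuming each DFA is given as a tuple (states, start, accepting, delta) with
delta a dictionary from (state, letter) pairs to states; this representation
and all names are illustrative rather than taken from the notes.

    def product(dfa1, dfa2, alphabet):
        # Build the product automaton recognizing the intersection of the
        # two languages: states are pairs, and both components move at once.
        (Q1, s1, F1, d1), (Q2, s2, F2, d2) = dfa1, dfa2
        states = {(q1, q2) for q1 in Q1 for q2 in Q2}
        accepting = {(q1, q2) for q1 in F1 for q2 in F2}
        delta = {((q1, q2), x): (d1[q1, x], d2[q2, x])
                 for (q1, q2) in states for x in alphabet}
        return states, (s1, s2), accepting, delta

    # The example above: even number of as, intersected with odd number of bs.
    even_as = ({"E", "O"}, "E", {"E"},
               {("E", "a"): "O", ("O", "a"): "E",
                ("E", "b"): "E", ("O", "b"): "O"})
    odd_bs = ({"E", "O"}, "E", {"O"},
              {("E", "b"): "O", ("O", "b"): "E",
               ("E", "a"): "E", ("O", "a"): "O"})
    both = product(even_as, odd_bs, {"a", "b"})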
Exercise 49. Use this construction to draw DFAs recognizing the following
languages over the alphabet {a, b}. First identify the two languages whose
intersection you want to form, then draw the automata for those languages,
then draw one for their intersection.

(a) All non-empty words that begin with a and end with b.

(b) All words that contain at least two as and at most one b.

(c) All words that have even length and contain an even number of as.

(d) All words that have length at least 3 and whose third symbol (if it
exists) is a.
Complements

If L is a language over the alphabet Σ then we may want to consider its
complement, that is Σ* ∖ L, the set of all words over Σ that do not belong
to L.

Exercise 50. This is a question concerned with the complement of a lan-
guage.

(a) Consider the language discussed on page 35 of all words which have a
0 in every position that is a multiple of 3. Begin by defining the com-
plement of this language in {0, 1}*.

Assume now that we are given two automata which recognize the lan-
guages L and L′. Note that

L = L′ if and only if L ⊆ L′ and L′ ⊆ L
     if and only if L ∩ (Σ* ∖ L′) = ∅ and L′ ∩ (Σ* ∖ L) = ∅.

But we know how to construct automata for the complement of a lan-
guage, and we also know how to construct automata for the intersection
of two languages, so we can reduce our question to the question of whether
the two automata corresponding to the two languages appearing in the above
accept no words at all.
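This reduction can be expressed in a few lines of code. The sketch below
reuses the product function from the intersection sketch above; complement
assumes a complete DFA (one with a transition for every state and letter),
and all names are again my own.

    def complement(dfa):
        # Swap accepting and non-accepting states of a complete DFA.
        Q, s, F, d = dfa
        return Q, s, Q - F, d

    def is_empty(dfa, alphabet):
        # Search the reachable states for an accepting one.
        Q, s, F, d = dfa
        seen, todo = {s}, [s]
        while todo:
            q = todo.pop()
            if q in F:
                return False
            for x in alphabet:
                if d[q, x] not in seen:
                    seen.add(d[q, x])
                    todo.append(d[q, x])
        return True

    def equivalent(dfa1, dfa2, alphabet):
        # L = L' iff L ∩ complement(L') and L' ∩ complement(L) are empty.
        return (is_empty(product(dfa1, complement(dfa2), alphabet), alphabet)
                and is_empty(product(dfa2, complement(dfa1), alphabet), alphabet))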
Via bisimulation

Note that both the methods for deciding equivalence of automata we have
mentioned so far work for DFAs, but not for their non-deterministic rela-
tives. Clearly the question of whether two NFAs recognize the same language
is even harder than that for two DFAs. Nonetheless there is an idea that
helps with this.
The idea is this: Assume we have two NFAs, say A = (Q, q•, F, δ) and
B = (P, p•, E, γ). A bisimulation between A and B is a relation ~ between
the states of A and those of B such that the following hold:

• q• ~ p•;

• if q ~ p for some q ∈ Q and p ∈ P then

  – if q -x-> q′ then there is some p′ ∈ P such that p -x-> p′ and
    ~ relates q′ and p′, and

  – if p -x-> p′ then there is some q′ ∈ Q such that q -x-> q′ and
    ~ relates q′ and p′;

• if q ~ p for some q ∈ Q and p ∈ P then (q ∈ F if and only if p ∈ E).
Why are bisimulations useful? It is because of the following result.

Theorem 4.7. If there is a bisimulation between two NFAs then they are
equivalent, that is, they accept the same language.
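Checking that a proposed relation is indeed a bisimulation is entirely
mechanical. Here is a small Python sketch of such a check; the
representation of an NFA as (start state, set of (state, letter, state)
transitions, set of accepting states) and the encoding of the relation as a
set of pairs are assumptions made for illustration.

    def is_bisimulation(rel, nfa1, nfa2):
        (s1, t1, f1), (s2, t2, f2) = nfa1, nfa2
        if (s1, s2) not in rel:                  # the start states must be related
            return False
        for (q, p) in rel:
            if (q in f1) != (p in f2):           # related states agree on acceptance
                return False
            # every transition of one automaton must be matched by the other
            for (q0, x, q1) in t1:
                if q0 == q and not any(p0 == p and (q1, p1) in rel
                                       for (p0, y, p1) in t2 if y == x):
                    return False
            for (p0, x, p1) in t2:
                if p0 == p and not any(q0 == q and (q1, p1) in rel
                                       for (q0, y, q1) in t1 if y == x):
                    return False
        return True

By Theorem 4.7 a successful check certifies that the two automata accept
the same language; a failed check only tells us that this particular relation
is not a bisimulation.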
Here is an example. Consider the following two automata.

[two automaton diagrams with transitions labelled a and b]

We claim that they define the same language, and we demonstrate this by
showing there is a bisimulation between them. We draw the two automata
above each other and show which states the bisimulation relates using grey
(instead of black) lines.

[diagram: the two automata with grey lines indicating the related states]
Exercise 51. For the following automata, either find a bisimulation, or
argue that none exists.

(a) [two automaton diagrams with transitions labelled a and b]

(b) [two automaton diagrams with transitions labelled 0 and 1]
4.10 Limitations of regular languages

So far we have assumed implicitly that all languages of interest are regu-
lar, that is, that they can be described using regular expressions or finite
automata. This is not really true, but the reason regular languages are so
important is that they have nice descriptions, and that they suffice for many
purposes.
Something a finite automaton cannot do is to count, or at least, it can-
not count beyond a bound defined a priori. Such an automaton has a finite
number of states, and its only memory is given by those states. Hence it
cannot remember information that cannot be encoded into that many states.
If we want an automaton that decides whether or not a word consists
of at least three letters then this automaton needs at least four states: One
to start in, and three to remember how many letters it has already seen.
Similarly, an automaton that is to decide whether a word contains at least
55 as must have at least 56 states.
However, if we try to construct an automaton that decides whether a
word contains precisely as many 0s as 1s, then we cannot do this: Clearly
the automaton must have a different state for every number of 0s it has
already seen, but that would require it to have infinitely many states, which
is not allowed.
Similarly, how would one construct a pattern for the language

L = {0^n 1^n | n ∈ N}?

We can certainly cope with the language {(01)^n | n ∈ N} by using the
pattern (01)*.
The length of a word can be defined recursively:

|s| = 0            if s = ε,
|s| = |s′| + 1     if s = s′x.

Exercise A.2. Try to come up with a non-recursive definition of the length
function. Hint: Look at the original definition of word, or at the definition
of concatenation to get an idea. Use the recursive definition of a word to
argue that your definition agrees with the original.
We have a binary operation on words, namely concatenation.

Definition 19. Given an alphabet Σ, concatenation is an operation from
pairs of words to words, all over Σ, which, for words s and t over Σ, we write
as s · t. It is defined as follows:

x_1 . . . x_m · y_1 . . . y_n = x_1 . . . x_m y_1 . . . y_n.
Exercise A.3. Recall the definition of an associative or commutative oper-
ation from COMP11120.

(a) Argue that the concatenation operation is associative.

(b) Show that the concatenation operation is not commutative.

Exercise A.4. Use recursion to give an alternative definition of the concate-
nation operation. (You may find this difficult; try to give it a go anyway.)

Exercise A.5. Show that |s · t| = |s| + |t|. Hint: You may well find it easier
to use the non-recursive definition for everything.

Exercise A.6. Practise your understanding of recursion by doing the fol-
lowing.

(a) Using the recursive definition of a word give a recursive definition of
the following operation. It takes a word, and returns the word where every
letter is repeated twice. So ab turns into aabb, and aba into aabbaa, and aa
to aaaa.

(b) Now do the same thing for the operation that takes a word and returns
the reverse of the word, so abc becomes cba.

In order to describe words of arbitrary length concisely we have adopted
notation such as a^3 for aaa. This works in precisely the same way as it does
for powers of numbers: By a^3 we mean the word that results from applying
the concatenation operation to three copies of a to obtain aaa, just as in
arithmetic we use 2^3 to indicate that multiplication should be applied to
three copies of 2 to obtain 2 · 2 · 2. So all we do by writing a^n is to find
a shortcut that tells us how many copies of a we require (namely n many)
without having to write them all out.
Because we know both these operations to be associative we do not have
to use brackets here: (2 · 2) · 2 is the same as 2 · (2 · 2) and therefore the
notation 2 · 2 · 2 is unambiguous, just as is the case for aaa.
What should a^1 mean? Well, this is simple, it is merely the word con-
sisting of one copy of a, that is a. The question of what a^0 might mean is
somewhat trickier: What is a word consisting of 0 copies of the letter a? A
useful convention in mathematics is to use this to mean the unit for the un-
derlying operation. Hence in arithmetic 2^0 = 1. The unit for concatenation
is the empty word and so we think of a^0 as a way of referring to the empty
word ε.
This way we obtain useful rules such as a^m a^n = a^{n+m}. Similarly
we have (a^m)^n = a^{nm}, just as we have for exponentiation in arithmetic.
However, note that (ab)^n consists of n copies of ab concatenated with each
other, rather than a^n b^n, as it would in arithmetic.¹

Exercise A.7. Write out in full the words 0^5, 0^3 1^3, (010)^2, (01)^3 0, 1^0.
Languages are merely collections of words.

Definition 20. Let Σ be an alphabet. A language over Σ is a set of words
over Σ.

As mentioned in Chapter 2, using this definition we automatically obtain
set-theoretic operations: We can form the unions, intersections, comple-
ments and differences of languages in precisely the same way as we do this
for other sets. Hence expressions such as L_1 ∪ L_2, L_1 ∩ L_2 and L_1 ∖ L_2 are
immediately meaningful.
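If a (finite) language is represented as a set of strings in a program then
these operations really do come for free; a tiny sketch, with made-up
example languages:

    L1 = {"0", "01", "011"}
    L2 = {"01", "1"}

    print(L1 | L2)   # union
    print(L1 & L2)   # intersection
    print(L1 - L2)   # difference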
Exercise A.8. Let Σ be the alphabet {0, 1, 2} and let

L_1 = {s ∈ Σ* | s is a word consisting of 0s and 1s only},
L_2 = {s ∈ Σ* | s is a word beginning with 0 and ending with 2}.

Calculate the following languages: L_1 ∪ L_2, L_1 ∩ L_2 and the complement of
L_1 in the language of all words over Σ.

Note that we have notation to find a more compact description of the
language

{ε, 1, 11, 111, 1111, . . .}

of all words over the alphabet {1} as

{1^n | n ∈ N}.
Exercise A.9. Write down the following languages using set-theoretic no-
tation:

(a) All the words consisting of the letters a and b which contain precisely
two as and three bs.

(b) All the words consisting of the letter 1 that have even length.

(c) All the words consisting of an arbitrary number of as followed by at
least one, but possibly more, bs.

(d) All the non-empty words consisting of a and b occurring alternatingly,
beginning with an a and ending with a b.

(e) All the non-empty words consisting of a and b occurring alternatingly,
beginning with an a and ending with an a.

(f) All the words consisting of an arbitrary number of 0s followed by the
same number of 1s.

(g) All the words over some alphabet Σ.

¹ The reason this rule doesn't hold is that the concatenation operation isn't commuta-
tive (see Exercise A.3), and so we can't swap over the as and bs to change the order in
which they appear.
Note that if we have two alphabets Σ′ ⊆ Σ then every word over Σ′ can
be viewed as a word over the alphabet Σ. Alternatively, we may restrict a
language L over Σ to all those words that only use letters from Σ′ by forming

{s ∈ L | s only uses letters from Σ′}.

The definitions in Chapter 2 for forming new languages from existing
ones are rigorous as given. You may want to improve your understanding
of them by carrying out the following exercises.

Exercise A.10. (a) Calculate {a, b}^2 and {ab, ba}^2.

(b) Find the shortest set-theoretic expression you can find for the set of all
words over the alphabet {0, 1, 2} which consist of precisely three letters.

(c) Calculate {00, 11}*.

(d) If Σ_1 ⊆ Σ_2 are two alphabets, argue that Σ_1* ⊆ Σ_2*.
A.2 Regular expressions

The formal definition of a pattern, and that of a word matching a pattern, is
given in Chapter 2. There are exercises in that chapter that encourage you
to explore the recursive nature of the two definitions. The following should
not be considered a formal exercise. Just think about it.

Exercise A.11. What would it take to demonstrate that a given word does
not match a given pattern?

We have already seen that different patterns may describe the same
language. In fact, we can come up with a number of rules for rewriting
regular expressions in such a way that the language they define stays the
same. Here are some examples, where p, p_1 and p_2 are arbitrary patterns
over some alphabet Σ.

• L(εp) = L(p).

• L(p_1|p_2) = L(p_2|p_1).

• L(p*) = L(ε|pp*).

• L((p_1|p_2)p) = L(p_1p|p_2p).
Exercise A.12. Come up with three further such rules.

Exercise A.13. Argue that the equalities used in the recursive definition
of a language matching a pattern on page 17 have to hold according to
Definition 3.

Exercise A.14. For Exercise 8 find set-theoretic notation for the sets you
described in English.

Exercise A.15. Prove Proposition 2.1. Hint: Take a closer look at the
recursive definition of the language defined by a pattern above.
A.3 Finite state automata

Matching up the formal definition of an automaton with its image is fairly
straightforward.
As an example consider the picture on page 35

[automaton diagram: states E and O with transitions labelled 0 and 1]

of an automaton that accepts precisely those words over {0, 1} that con-
tain an even number of 0s. We have two states, which in the picture are
labelled E and O, so the set of all states is {E, O}. The initial state is E,
and there is only one accepting state, which is also E.
The transition function δ for this example is given by the following table:

input    | output
(E, 1)   | E
(E, 0)   | O
(O, 1)   | O
(O, 0)   | E

If we follow a word such as 001 through the automaton we can now do
this easily: We start in state E and see a 0, so we next need to go to δ(E, 0),
which is O. The next letter is 0 again, and from the new state O we need to
go to δ(O, 0) = E. The last letter is 1, so from the current state E we need
to go to δ(E, 1) = E. Since E is an accepting state we accept the word 001.
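The table translates directly into a data structure. A minimal Python
sketch (the representation and names are mine, not from the notes):

    # The transition function as a dictionary from (state, letter) pairs
    # to states.
    delta = {("E", "1"): "E", ("E", "0"): "O",
             ("O", "1"): "O", ("O", "0"): "E"}

    def accepts(word, start="E", accepting=("E",)):
        state = start
        for letter in word:          # follow one transition per letter
            state = delta[state, letter]
        return state in accepting

    print(accepts("001"))   # True, matching the trace in the text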
Exercise A.16. Describe the final picture from Section 4.1 in this way.

When moving to non-deterministic automata we no longer have a tran-
sition function, but a transition relation. Therefore we cannot use a table
as above to write it out.
As an example, take the non-deterministic automaton from page 39.

[automaton diagram: states 0, 1, 2 and 3 with transitions labelled 0 and 1]

It has states {0, 1, 2, 3}, start state 0 and accepting states {3}. The
transition relation δ is defined as follows.

[table: one row for each pair of a state and a letter, one column for each of
the states 0 to 3, with a tick wherever there is a corresponding edge]

Think of the ticks as confirming that from the given state in the left
hand column there is an edge labelled with the letter in the second column
to the state in the top row. We can see that δ here is a relation rather than
a function because for the input state 1, letter 1, we find two ticks in the
corresponding row.
Exercise A.17. Draw a non-deterministic automaton for the language de-
scribed in Exercise 9 (d). Then describe it in the same way as the above
example.

Exercise A.18. Draw the automata with states {0, 1, 2, 3}, start state 0,
accepting states {0, 2} and the following transitions.

(a) [transition table with ticks, as above]

(b) [transition table with ticks, as above]

Can you say anything about the resulting automata?
Exercise A.19. Using Algorithm 1 on page 46 we know how to turn a non-
deterministic automaton into a deterministic one. Write out a definition of
the transition function for the new automaton.

Exercise A.20. Prove Theorem 4.1. All that remains to be established is
that a DFA generated by an NFA using Algorithm 1 from page 46 accepts
precisely the same words as the original automaton.

Pictures are all well and good when it comes to describing automata, but
when such a description has to be given to a computer it is best to start
from the mathematical description and turn it into a data structure.

Exercise A.21. What would it take to define an automaton class in Java?
How would a class for deterministic automata differ from one for non-
deterministic ones? Hint: You may not know enough Java yet to decide
this. If this is the case then return to the question later.

Exercise A.22. Algorithm 2 is quite complicated. To try to understand
it better I recommend the following for the general description which starts
on page 54.
(a) Justify to yourself the expressions for L^j_{ji} and L^i_{ji} from the general
ones for L^k_{ji}.

(b) Justify the equalities given at the end of the description of the general
case.
Exercise A.23. * Explain why it is safe not to draw the dump states for the
two automata when constructing an automaton for the intersection of the
languages recognized by the automata as described on page 66 by forming
the product of the two automata.
Exercise A.24. Exercise 48 has two parts that are mathematical, namely (a)
and (e). If you have delayed answering these do so now.

Let us now turn to the question of when two automata recognize the same
language. We begin by giving a proper definition of what it means for two
automata to be isomorphic.
Definition 21. An automaton (Q, q•, F, δ) is isomorphic to an automaton
(P, p•, E, γ) if there is a bijection f from Q to P such that

• f(q•) = p•;

• q is in F if and only if f(q) is in E, for all q ∈ Q;

• in case the automata are

  deterministic: f(δ(q, x)) = γ(f(q), x) for all q ∈ Q and all x ∈ Σ;

  non-deterministic: q -x-> q′ if and only if f(q) -x-> f(q′).

if and only if S ∩ (T ∖ S′) = ∅ and S′ ∩ (T ∖ S) = ∅.
The notion of bisimulation deserves additional study and, indeed, this is
a concept that is very important in concurrency. Here we restrict ourselves
to one more (if abstract) example.
Exercise A.27. Show that there is a bisimulation between an NFA and the
result of turning it into a DFA using Algorithm 1.
Exercise 52 is somewhat mathematical in the way it demands you to
think about languages. Try to do as many of its parts as you can.
A.4 Grammars

Exercises in this section that ask you to reason mathematically are Exer-
cises 16 and 17; they ask you to demonstrate something.
In Section 3.3 there is a method that takes an automaton and turns it
into a context-free grammar. How can we see that the words generated by
this grammar are precisely those accepted by the automaton?
If we have a word x_1 x_2 ... x_n accepted by the automaton then we get
the following for the grammar.
In the automaton:

• We start in state q• and go to q_1 = δ(q•, x_1).

• Once we have reached state q_i = δ(q_{i-1}, x_i), we go to
  q_{i+1} = δ(q_i, x_{i+1}) and keep going until we reach
  q_n = δ(q_{n-1}, x_n).

• Now q_n is an accepting state.

For the grammar:

• We have S = q•, and applying a production rule gives x_1 q_1.

• Once we have reached the word x_1 x_2 ... x_i q_i we apply a production
  rule to obtain x_1 x_2 ... x_i x_{i+1} q_{i+1}, where
  q_{i+1} = δ(q_i, x_{i+1}), and keep going until we reach
  x_1 x_2 ... x_n q_n.

• Since we know that q_n is an accepting state we may now use the
  production rule q_n → ε to obtain x_1 x_2 ... x_n.
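The construction being traced here (one nonterminal per state, a production
q -> x q′ for every transition and q -> ε for every accepting state) is easy
to generate mechanically. A small Python sketch, with the rules represented
as plain strings for illustration:

    def dfa_to_grammar(start, accepting, delta):
        # One production rule per transition, plus one ε-rule per
        # accepting state; the start symbol is the start state.
        rules = [f"{q} -> {x} {q2}" for (q, x), q2 in delta.items()]
        rules += [f"{q} -> ε" for q in accepting]
        return start, rules

    # The even-number-of-0s automaton from Section A.3:
    delta = {("E", "1"): "E", ("E", "0"): "O",
             ("O", "1"): "O", ("O", "0"): "E"}
    print(dfa_to_grammar("E", ["E"], delta))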
Exercise A.28. Use a similar argument to reason that if s is a word over Σ
generated by the grammar then it is also accepted by the automaton.

Exercise A.29. * We have already demonstrated that every regular lan-
guage can be generated by a right-linear grammar. Try to show the op-
posite, that is, that every language generated by a right-linear grammar is
regular.
COMP11212 Fundamentals of
Computation
Exercise Sheet 1
For examples classes in Week 2
Exercises for this week

Exercises marked this week:

Exercise 4 (a), (c), (j), (k)
Exercise 9 (a)–(c)
Exercise 9 (d), (e)
Exercise 10 (a), (b)
Exercise 27 (a)–(c)

Exercises preparing the marked exercises

If you find the exercises we will mark difficult you should first try the easier
exercises preparing them.

Exercise 3 might help with Exercise 4.
Exercise 25 and Exercise 26 should help with Exercise 27.

Finding regular expressions and automata for a given language requires
practice. There's no algorithm for this!
Foundational exercises
The foundational exercises prepare important concepts appearing in Chap-
ters 4 and 3.
Exercise 1
Exercise 2
Exercise 8 (a)–(c)
Optional exercises from Appendix A
The exercises in Sections A.1 and A.2.
Optional further suitable exercises
Exercise 5
Exercise 6
Exercise 7 (This exercise makes a connection between our regular expressions
and those standard in Unix.)
Further parts of Exercises 4, 8, 9, 10, 27
COMP11212 Fundamentals of
Computation
Exercise Sheet 2
For examples classes in Week 3
Exercises for this week
Exercises marked this week
Exercise 27 (d), (e)
Exercise 28 (a), (b)
Exercise 30
Exercise 33
Exercise 35 (a)
Exercises preparing the marked exercises
If you find the exercises we will mark difficult you should first try the easier
exercises preparing them.

Exercise 25 and Exercise 26 should help with Exercises 27 and 28.
Exercise 29 will help with understanding the difference between NFAs
and DFAs, and so help with Exercise 30.
The last two exercises are about applying Algorithm 1, which has an
example spelled out in the notes.
Foundational exercises
Exercise 31
Exercise 32
Exercise 34
Optional exercises from Appendix A
Exercises A.16–A.20
Optional further suitable exercises
Further parts of Exercises 28 and 35
COMP11212 Fundamentals of
Computation
Exercise Sheet 3
For examples classes in Week 4
Exercises for this week
Exercises marked this week
Exercise 36
Exercise 37
Exercise 39 (a)
Exercise 40 (b)
Exercise 46 (a) and (b)
Exercises preparing the marked exercises
If you find the exercises we will mark difficult you should first try the easier
exercises preparing them.

There's nothing required for Exercise 36 but a bit of common sense.
Be warned that the resulting automaton is quite large!
Exercise 37 mostly consists of tasks you should begin to find easy.
Exercise 38 and Exercise 40 (a) ask you to apply Algorithm 2 to simpler
automata and should help with Exercises 39 and 40.
Exercise 46 asks you to apply Algorithms 3 and 4, and Exercise 43
and Exercise 45 (a) allow you to practise these.
Optional exercises from Appendix A
Exercises A.21–A.23
Optional further suitable exercises
Exercise 41
Exercise 42
Exercise 44
Further parts of Exercises 39, 40, 45 and 46
COMP11212 Fundamentals of
Computation
Exercise Sheet 4
For examples classes in Week 5
Exercises for this week
Exercises marked this week:
Exercise 49 (a)
Exercise 50 (a) and (b)
Exercise 50 (c) and (d) Note that part (d) requires having read the Appendix.
Exercise 51
Exercise 52 (a)–(c)

Exercises preparing the marked exercises

You might find the following helpful.

There is a worked example of the method used in Exercise 49 in the
notes.
Exercise 48 (b)–(d) will help with Exercise 50.
Exercise 51 is preceded by an example.
In order to answer Exercise 52 you'll have to practise thinking in the
right way. There's no fixed method for this.
Foundational exercises
Exercise 47
Optional exercises from Appendix A
Exercises A.24–A.27
Optional further suitable exercises
Further parts of Exercises 49 and 52
COMP11212 Fundamentals of
Computation
Exercise Sheet 5
For examples classes in Week 6
Exercises for this week
Exercises marked this week:
Exercise 14 (c)
Exercise 15 (a)–(b)
Exercise 18 (a)
Exercise 19 (a)
Exercise 22
Exercises preparing the marked exercises
You might want to note the following.
Exercise 11 helps you understand how to derive a word from a gram-
mar, and so to design grammars as in Exercises 14 and 15.
Exercise 14 (a)–(b) might be easier to start with than part (c).
There is a worked example demonstrating what has to be done in
Exercise 18 in the notes.
Exercise 19 is preceded by an example in the notes. There is no
method that will solve this problem generally; you will have to think
about each one separately.
Exercise 22 is preceded by a general description of the method to be
applied here.
Optional exercises from Appendix A
Exercises in Section A.4
Optional further suitable exercises
Exercise 12
Exercise 13
Exercise 16
Exercise 17
Exercise 20
Exercise 23
Exercise 21
Exercise 24
Further parts of Exercises 14, 15, and 18