Atc Module 5 2021
MODULE 5
Decidability: Definition of an algorithm, decidability, decidable languages, Undecidable languages,
halting problem of TM, Post correspondence problem.
Complexity: Growth rate of functions, the classes of P and NP, Quantum Computation: quantum
computers, Church Turing thesis.
Applications : G.1 Defining syntax of programming language, Appendix J: Security
Textbook 2: 10.1 to 10.7, 12.1, 12.2, 12.8, 12.8.1, 12.8.2
Textbook 1: Appendix: G.1(only), J.1 & J.2 RBT: L1, L2, L3
Chapter 1: Decidability
5.1 The Definition of an Algorithm
“An algorithm is defined as a procedure (finite sequence of instructions which can be
mechanically carried out) that terminates after a finite number of steps for any input”.
The earliest algorithm one can think of is the Euclidean algorithm for computing the
greatest common divisor of two natural numbers. In 1900, the mathematician David Hilbert, in
his famous address at the International Congress of Mathematicians in Paris, averred that every
definite mathematical problem must be susceptible of an exact settlement, either in the form of
an exact answer or by a proof of the impossibility of its solution. He identified 23
mathematical problems as a challenge for future mathematicians; only ten of the problems have
been solved so far.
Hilbert’s tenth problem was to devise “a process according to which it can be determined
by a finite number of operations” whether a polynomial over Z has an integral root. (He did
not use the word 'algorithm', but he meant the same.) This was not answered until 1970. The
formal definition of an algorithm emerged from the work of Alan Turing and Alonzo Church in
1936.
The Church-Turing thesis states that any algorithmic procedure that can be carried
out by a human or a computer, can also be carried out by a Turing machine. Thus the
Turing machine arose as an ideal theoretical model for an algorithm. The Turing
machine provided mathematicians with the machinery for attacking Hilbert’s tenth problem.
The problem can be restated as follows: does there exist a TM that accepts a polynomial
over n variables if it has an integral root and rejects the polynomial if it does not have one?
In 1970, Yuri Matiyasevich, after studying the work of Martin Davis, Hilary Putnam and
Julia Robinson, showed that no such algorithm (Turing machine) exists for testing whether a
polynomial over n variables has integral roots. It is now universally accepted by computer
scientists that the Turing machine is a mathematical model of an algorithm.
5.2 Decidability
When a Turing machine reaches a final state, it halts. We can also say that a Turing machine M
halts when M reaches a state q scanning a symbol a such that δ(q, a) is undefined. There
are TMs that never halt on some inputs in either of these ways, so we make a distinction between
the languages accepted by a TM that halts on all input strings and a TM that fails to halt on some
input strings.
A TM M decides a language L when, for every input string w, either (i) M halts in an accepting
state, which happens exactly when w ∈ L, or (ii) M halts without accepting, which happens exactly
when w ∉ L. Both conditions assure us that the TM always halts: accepting w under Condition (i) and not
accepting under Condition (ii). So a TM defining a recursive language always halts eventually, just as
an algorithm eventually terminates.
Definition: A problem with only two answers (Yes/No) can be regarded as a language L: the strings
in L are the instances whose answer is Yes. Such a problem is decidable if the corresponding
language is recursive. In this case, the language L is also called decidable.
Theorem: Let ADFA = {(B, w) | B is a DFA that accepts the input string w}. Then ADFA is decidable.
(That is, the acceptance problem for regular languages is decidable.)
Proof: To prove the theorem, we have to construct a TM that always halts and accepts exactly ADFA.
Note that a DFA B always ends in some state of B after n transitions for an input string of length n.
We define a TM M as follows:
1. Let B be a DFA and w an input string. (B, w) is an input for the Turing machine M.
2. Simulate B and input w in the TM M.
3. If the simulation ends in an accepting state of B, then M accepts w. If it ends in a
nonaccepting state of B, then M rejects w.
We can discuss a few implementation details regarding steps 1, 2 and 3 above. The input (B,
w) for M is represented by encoding the five components Q, Σ, δ, q0, F of B as strings, followed
by the input string w ∈ Σ*.
M checks whether (B, w) is a valid input. If not, it rejects (B, w) and halts. If (B, w) is a valid
input, M writes the initial state q0 and the leftmost input symbol of w. It updates the state using δ and
then reads the next symbol in w. This explains step 2.
If the simulation ends in an accepting state of B, then M accepts (B, w). Otherwise, M rejects (B,
w). This is the description of step 3. It is evident that M accepts (B, w) if and only if w is accepted by
the DFA B.
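The simulation in steps 1-3 can be sketched in Python. The dictionary-based transition table is an illustrative encoding choice, not part of the theorem; the point is that the loop runs exactly |w| times, so this "TM" always halts.

```python
def dfa_accepts(delta, q0, finals, w):
    """Decider sketch for A_DFA: run DFA B = (Q, Sigma, delta, q0, F)
    on input w.  The DFA makes exactly |w| transitions and is then in
    some state; accept iff that state lies in F, so the simulation
    always halts."""
    q = q0
    for a in w:
        q = delta[(q, a)]        # step 2: update the state using delta
    return q in finals           # step 3: accept iff a final state is reached

# Example: a DFA over {0, 1} accepting strings that end in 1.
delta = {("A", "0"): "A", ("A", "1"): "B",
         ("B", "0"): "A", ("B", "1"): "B"}
```

A real decider would first check that (B, w) is a valid encoding and reject otherwise, as described above.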
Definition: ACFG = {(G, w) | G is a context-free grammar that generates the string w}
Theorem: ACFG is decidable.
Proof: We first convert the CFG into Chomsky normal form. Any derivation of a string w of
length k then requires exactly 2k − 1 steps if the grammar is in CNF. So, for checking whether the
input string w of length k is in L(G), it is enough to check derivations of 2k − 1 steps, and there
are only finitely many derivations of 2k − 1 steps. Now we design a TM M that halts as follows.
1. Let G be a CFG in Chomsky normal form and w an input string; (G, w) is an input for M.
2. If k = 0, list all the single-step derivations. If k ≠ 0, list all the derivations with 2k − 1 steps.
3. If any of the derivations in step 2 generates the given string w, M accepts (G, w).
Otherwise M rejects (G, w).
The implementation of steps 1-3 is similar to the steps in the theorem that ADFA is decidable. (G, w) is
represented by encoding the four components VN, Σ, P, S of G together with the input string w. Each
next step of a derivation is obtained by applying a production. M accepts (G, w) if and only if w is
generated by the CFG G.
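The finite search in step 2 can be sketched in Python. The encoding (nonterminals as the keys of a dictionary, production bodies as tuples) is an illustrative choice; the 2|w| − 1 bound and the fact that CNF derivations never shrink a sentential form are what guarantee termination.

```python
def cnf_generates(productions, start, w):
    """Decider sketch for A_CFG with G in Chomsky normal form: explore all
    leftmost derivations of at most 2|w| - 1 steps and check whether any
    of them yields w.  Nonterminals are exactly the keys of `productions`;
    bodies are tuples of symbols."""
    k = len(w)
    target = tuple(w)
    forms = {(start,)}
    for _ in range(max(1, 2 * k - 1)):
        next_forms = set()
        for form in forms:
            if form == target:
                return True
            for i, sym in enumerate(form):
                if sym in productions:              # leftmost nonterminal
                    for body in productions[sym]:
                        new = form[:i] + body + form[i + 1:]
                        if len(new) <= max(k, 1):   # CNF never shrinks a form
                            next_forms.add(new)
                    break
        forms = next_forms
    return target in forms

# Example: S -> AB, A -> a, B -> b generates exactly {ab}.
G = {"S": [("A", "B")], "A": [("a",)], "B": [("b",)]}
```

Restricting to leftmost derivations loses nothing for CFGs, and pruning forms longer than |w| is sound because CNF productions never decrease length.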
A context-sensitive language is recursive. The main idea of the proof is to construct a
sequence W0, W1, ..., Wk of subsets of (VN ∪ Σ)* that stabilizes after a
finite number of iterations. The given string w ∈ Σ* is in L(G) if and only if w ∈ Wk. With this idea in
mind we can prove the decidability of context-sensitive languages.
Definition: ACSG = {(G, w) | G is a context-sensitive grammar that generates the string w}.
Theorem: ACSG is decidable.
Proof: The proof is a modification of the proof that ACFG is decidable. There, we considered
derivations with 2k − 1 steps for testing whether an input string of length k was in L(G). In the
case of a context-sensitive grammar we instead construct the sets Wi, where Wi consists of the
sentential forms of length at most n that are derivable in at most i steps. Since there are only
finitely many strings over VN ∪ Σ of length at most n, there exists a natural number k such that
Wk = Wk+1 = Wk+2 = ....
So w ∈ L(G) if and only if w ∈ Wk. The construction of Wk is the key idea used in the construction
of a TM accepting ACSG. Now we can design a Turing machine M as follows:
1. Let G be a context-sensitive grammar and w an input string of length n. Then (G, w) is an
input for the TM.
2. Construct W0 = {S} and Wi+1 = Wi ∪ {β ∈ (VN ∪ Σ)* | there exists α ∈ Wi such
that α ⇒ β and |β| <= n}. Continue until Wk = Wk+1 for some k.
3. If w ∈ Wk, then w ∈ L(G) and M accepts (G, w); otherwise M rejects (G, w).
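The Wi iteration can be sketched in Python for a monotone grammar. The pair-of-strings encoding of productions and the sample grammar are illustrative; termination follows because W only grows and is bounded by the finitely many strings of length at most |w|.

```python
def csg_generates(productions, start, w):
    """Sketch of the W_i construction for a context-sensitive (monotone)
    grammar: W_0 = {S}, and each round adds every sentential form of
    length <= |w| derivable in one step from a form already present.
    The sequence reaches a fixed point W_k, and w is in L(G) iff w is
    in W_k.  `productions` is a list of (alpha, beta) string pairs with
    |alpha| <= |beta|."""
    n = max(len(w), 1)
    W = {start}
    while True:
        new = set(W)
        for form in W:
            for alpha, beta in productions:
                i = form.find(alpha)
                while i != -1:                      # every occurrence of alpha
                    cand = form[:i] + beta + form[i + len(alpha):]
                    if len(cand) <= n:              # prune forms longer than |w|
                        new.add(cand)
                    i = form.find(alpha, i + 1)
        if new == W:                                # fixed point: W_k = W_{k+1}
            return w in W
        W = new

# Example: a monotone grammar for {a^n b^n c^n : n >= 1}.
G = [("S", "aSBc"), ("S", "abc"), ("cB", "Bc"), ("bB", "bb")]
```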
A problem A is said to be reducible to a problem B if a solution to B can be used to solve A.
For example, if A is the problem of finding some root of x⁴ − 3x² + 2 = 0 and B is the problem
of finding some root of x² − 2 = 0, then A is reducible to B, since x² − 2 is a factor of
x⁴ − 3x² + 2 and so any root of x² − 2 = 0 is also a root of x⁴ − 3x² + 2 = 0.
Note: If A is reducible to B and B is decidable, then A is decidable. If A is reducible to B and A is
undecidable, then B is undecidable.
Theorem: HALTTM = {(M, w) | The Turing machine M halts on input w} is undecidable.
Proof: We assume that HALTTM is decidable, and get a contradiction. Let M1 be a TM such that
T(M1) = HALTTM and let M1 halt eventually on all inputs (M, w). We construct a TM M2 as follows:
1. For M2, (M, w) is an input.
2. The TM M1 acts on (M, w).
3. If M1 rejects (M, w) then M2 rejects (M, w).
4. If M1 accepts (M, w), simulate the TM M on the input string w until M halts.
5. If M has accepted w, M2 accepts (M, w); otherwise M2 rejects (M, w).
When M1 accepts (M, w) (in step 4), the Turing machine M halts on w. In this case M halts either in
an accepting state q or in a state q' such that δ(q', a) is undefined for the symbol a currently
scanned. In the first case (the first alternative of step 5) M2 accepts (M, w). In the second case
(the second alternative of step 5) M2 rejects (M, w).
It follows from the definition of M2 that M2 halts eventually. Also,
T(M2) = {(M, w) | The Turing machine M accepts w}
= ATM
This is a contradiction, since ATM is undecidable.
Example 1: Does the PCP with two lists x = (b, bab³, ba) and y = (b³, ba, a) have a solution?
Solution
We have to determine whether or not there exists a sequence of substrings of x such that the string
formed by this sequence and the string formed by the sequence of corresponding substrings of y are
identical.
The required sequence is given by i1 = 2, i2 = 1, i3 = 1, i4 = 3, i.e. (2, 1, 1, 3), and m = 4.
Indeed, x2x1x1x3 = bab³ · b · b · ba = bab⁶a = ba · b³ · b³ · a = y2y1y1y3.
Example 2: Prove that the PCP with two lists x = (01, 1, 1), y = (01², 10, 11) has no solution.
Solution
For each substring xi ∈ x and yi ∈ y, we have | xi | < | yi | for all i. Hence the string generated by a
sequence of substrings of x is shorter than the string generated by the sequence of corresponding
substrings of y. Therefore, the PCP has no solution.
Note: If the first substring used in PCP is always x1 and y1 then the PCP is known as the Modified
Post Correspondence Problem.
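Candidate solutions like the one in Example 1 can be searched for by brute force. The sketch below only semi-decides PCP: finding no solution within the bound settles nothing, which is consistent with PCP being undecidable. The function name and the bound parameter are illustrative.

```python
from itertools import product

def pcp_solution(x, y, max_length=6):
    """Brute-force search for a PCP solution: try every index sequence
    i1, ..., im (reported 1-based) of length up to max_length and check
    whether the concatenations x_i1...x_im and y_i1...y_im coincide.
    Returns the first solution found, or None if none exists within
    the bound."""
    n = len(x)
    for m in range(1, max_length + 1):
        for seq in product(range(n), repeat=m):
            if "".join(x[i] for i in seq) == "".join(y[i] for i in seq):
                return [i + 1 for i in seq]
    return None

# Example 1 above: x = (b, bab^3, ba), y = (b^3, ba, a).
x1, y1 = ("b", "babbb", "ba"), ("bbb", "ba", "a")
```

On the lists of Example 1 this finds the sequence (2, 1, 1, 3); on the lists of Example 2 it returns None for any bound, since each xi is strictly shorter than the corresponding yi.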
Chapter 2: Complexity
When a problem/language is decidable, it simply means that the problem is computationally
solvable in principle. It may not be solvable in practice, in the sense that it may require an
enormous amount of computation time and memory. This is why we discuss the computational
complexity of a problem. The proofs of decidability/undecidability are quite rigorous, since they
depend solely on the definition of a Turing machine and rigorous mathematical techniques. But much
of the discussion in complexity theory rests on the assumption that P ≠ NP. Computer scientists and
mathematicians strongly believe that P ≠ NP, but the question is still open. It is one of the
challenging problems of the 21st century and carries a prize of $1M.
P stands for the class of problems that can be solved by a deterministic algorithm (i.e., by a
Turing machine that halts) in polynomial time; NP stands for the class of problems that can be solved
by a nondeterministic algorithm (that is, by a nondeterministic TM) in polynomial time. P stands for
polynomial and NP for nondeterministic polynomial. Another important class is the class of NP-
complete problems, which is a subclass of NP.
EXAMPLE
Example: Construct the time complexity T(n) for the Turing machine M that accepts
{0ⁿ1ⁿ | n >= 1}.
Solution: In this TM, step (i) consists of going through the input string 0ⁿ1ⁿ forward and
backward, replacing the leftmost 0 by X and the leftmost 1 by Y. So we require at most
2n moves to match a 0 with a 1. Step (ii) is the repetition of step (i) n times. Hence the number of
moves for accepting 0ⁿ1ⁿ is at most (2n)(n). For strings not of the form 0ⁿ1ⁿ, the TM halts within
2n steps. Hence T(n) = O(n²).
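The move count can be mimicked in Python. This is a sketch of the TM's marking strategy, not a literal transition table; charging 2|w| head moves per pass is the upper bound used in the analysis above.

```python
def accepts_0n1n(s):
    """Sketch of the TM strategy for {0^n 1^n : n >= 1}: one sweep to check
    the shape 0...01...1, then repeatedly mark the leftmost 0 as X and the
    leftmost 1 as Y.  Each marking pass is a forward-and-back sweep of at
    most 2|s| head moves, and there are n passes, so the total move count
    is O(n^2).  Returns (accepted, moves)."""
    moves = len(s)                      # the initial shape-checking sweep
    if not s or "10" in s:              # empty input, or a 0 after a 1
        return False, moves
    tape = list(s)
    while "0" in tape and "1" in tape:
        tape[tape.index("0")] = "X"     # mark the leftmost 0
        tape[tape.index("1")] = "Y"     # mark the leftmost 1
        moves += 2 * len(tape)          # one pass: right to the 1, back left
    accepted = "0" not in tape and "1" not in tape and "X" in tape
    return accepted, moves
```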
Example: Find the running time for the Euclidean algorithm for evaluating gcd(a, b),
where a and b are positive integers expressed in binary representation.
Solution
The Euclidean algorithm has the following steps:
1. The input is (a, b).
2. Repeat steps 3 and 4 until b = 0.
3. Assign a ← a mod b.
4. Exchange a and b.
5. Output a.
Step 3 replaces a by a mod b. If a/2 >= b, then a mod b < b <= a/2. If a/2 < b, then a < 2b;
write a = b + r for some r < b. Then a mod b = r = a − b < a/2. Hence, in either case,
a mod b < a/2, so a is reduced by at least half in size on each application of step 3. Hence one
iteration of steps 3 and 4 reduces each of a and b by at least half in size. So the maximum number
of times steps 3 and 4 are executed is O(min{log₂ a, log₂ b}). If n denotes the maximum of the
number of digits of a and b, that is, max{⌈log₂ a⌉, ⌈log₂ b⌉}, then the number of iterations of
steps 3 and 4 is O(n). Each iteration costs O(n) bit operations.
Hence T(n) = O(n) · O(n) = O(n²).
Note: The Euclidean algorithm is a polynomial algorithm.
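The halving argument can be checked empirically with a short Python version of the algorithm; the iteration counter is added for illustration.

```python
def gcd_with_count(a, b):
    """Euclid's algorithm with an iteration counter.  Each pass replaces
    (a, b) by (b, a mod b); by the halving argument above, the number of
    passes is O(log min(a, b)) = O(n) for n-digit inputs, and each mod on
    n-bit numbers costs O(n) bit operations, giving T(n) = O(n^2)."""
    count = 0
    while b != 0:
        a, b = b, a % b
        count += 1
    return a, count
```

For example, gcd_with_count(48, 18) returns (6, 3): three iterations suffice.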
The qubit gate corresponding to XOR is the controlled-NOT or CNOT gate. It can be described by
using addition modulo 2: the target qubit is replaced by its sum modulo 2 with the control qubit.
The action on the computational basis is |00⟩ → |00⟩, |01⟩ → |01⟩, |10⟩ → |11⟩, |11⟩ → |10⟩.
It can be described by the following 4 x 4 unitary matrix:
1 0 0 0
0 1 0 0
0 0 0 1
0 0 1 0
Any algorithmic process can be simulated efficiently by a Turing machine. But a challenge to
the strong Church-Turing thesis arose from analog computation. Certain types of analog computers
solved some problems efficiently whereas these problems had no efficient solution on a Turing
machine. But when the presence of noise was taken into account, the power of the analog computers
disappeared.
In the mid-1970s, Robert Solovay and Volker Strassen gave a randomized algorithm for testing the
primality of a number. (A deterministic polynomial algorithm was given by Manindra Agrawal, Neeraj
Kayal and Nitin Saxena of IIT Kanpur in 2003.) This led to a modification of the strong Church-Turing
thesis: any algorithmic process can be simulated efficiently using a probabilistic Turing machine.
In 1985, David Deutsch tried to build computing devices using quantum mechanics.
Computers are physical objects, and computations are physical processes. What computers can or
cannot compute is determined by the laws of physics alone, and not by pure mathematics. - David
Deutsch
But it is not known whether Deutsch's notion of a universal quantum computer can efficiently
simulate any physical process. In 1994, Peter Shor proved that finding the prime factors of a
composite number and the discrete logarithm problem (i.e., finding the positive integer s such that
b = aˢ for given positive integers a and b) could be solved efficiently by a quantum computer. This
may be a pointer to proving that quantum computers are more efficient than Turing machines (and
classical computers).
Chapter 3: Applications
5.11.1 BNF
It became clear early on in the history of programming language development that designing a
language was not enough. It was also necessary to produce an unambiguous language specification.
Without such a specification, compiler writers were unsure what to write and users didn’t know what
code would compile. The inspiration for a solution to this problem came from the idea of a rewrite or
production system as described years earlier by Emil Post.
In 1959, John Backus confronted the specification problem as he tried to write a description of
the new language ALGOL 58. Backus later wrote [Backus 1980], “As soon as the need for precise
description was noted, it became obvious that Post’s productions were well-suited for that purpose. I
hastily adapted them for use in describing the syntax of IAL [Algol 58].”
The notation that he designed was modified slightly in collaboration with Peter Naur and used
in the definition, two years later, of ALGOL 60. The ALGOL 60 notation became known as BNF ,
for Backus Naur form or Backus Normal form. For the definitive specification of ALGOL 60,
using BNF, see [Naur 1963]. Just as the ALGOL 60 language influenced the design of generations of
procedural programming languages, BNF has served as the basis for the description of those new
languages, as well as others.
The BNF language that Backus and Naur used exploited these special symbols:
• ::= corresponds to →
• | means or
• < > surround the names of the nonterminal symbols.
While it seems obvious to us now that formal specifications of syntax are important and BNF seems a
natural way to provide such specifications, the invention of BNF was an important milestone in the
development of computing. John Backus received the 1977 Turing Award for “profound,
influential, and lasting contributions to the design of practical high-level programming systems,
notably through his work on FORTRAN, and for seminal publication of formal procedures for the
specification of programming languages.” Peter Naur received the 2005 Turing Award “For
fundamental contributions to programming language design and the definition of Algol 60, to compiler
design, and to the art and practice of computer programming. ”.
Since its introduction in 1960, BNF has become the standard tool for describing the context-free part
of the syntax of programming languages, as well as a variety of other formal languages: query
languages, markup languages, and so forth. In later years, it has been extended both to make better use
of the larger character codes that are now in widespread use and to make specifications more concise
and easier to read. For example, modern versions of BNF:
• Often use → instead of ::=.
• Provide a convenient notation for indicating optional constituents. One approach is to use the
subscript opt. Another is to declare square brackets to be metacharacters that surround optional
constituents. The following rules illustrate three ways to say the same thing:
S → T | ε
S → Topt
S → [T]
• May include many of the features of regular expressions, which are convenient for specifying
those parts of a language’s syntax that do not require the full power of the context-free formalism.
These various dialects are called Extended BNF or EBNF .
Example EBNF
In standard BNF, we could write the following rule describing the syntax of an identifier that must be
composed of an initial letter, followed by zero or more alphanumeric characters:
<identifier> ::= <letter> | <identifier> <letter> | <identifier> <digit>
But note: this is a simple example that illustrates the point. In any practical system, the parsing of
tokens, such as identifiers, is generally handled by a lexical analyzer and not by the context-free parser.
We assume that <stmt-list>, which is used in other places in the grammar, is defined elsewhere
Here’s the corresponding railroad diagram (again assuming that <stmt-list> is defined elsewhere):
[Railroad diagram for switch-stmt, showing the terminals SWITCH, (, ), {, }, BREAK and DEFAULT, the
alternatives int-expression and enum-type, the case alternatives, and the DEFAULT : stmt-list clause.]
Terminal strings are shown in upper case. Nonterminals are shown in lower case. To generate a
switch statement, we follow the lines and arrows, starting from switch-stmt. The word SWITCH
appears first, followed by (. Then one of the two alternative paths is chosen. They converge and
then the symbols ) and { appear. There must be at least one case alternative, but, when it is
complete, the path may return for more. The BREAK command is optional. So is the
DEFAULT clause, in both cases because there are detours around them.
The inputs to this FSM are user commands and timing events: arm (turn on the system),
disarm (turn off the system), query the status of the system, reset the system, open a door, activate the
glass-break detector, and 30 seconds elapse. The job of this machine is to detect an intrusion. So we
have labeled the states that require an alarm as accepting states. State 6 differs from state 1 since it
displays that an alarm has occurred since the last time the system was reset and it will not allow the
system to be armed until a reset occurs.
A realistic system has many more states. For example, suppose that alarm codes consist of four
digits. Then the single transition from state 1 to state 2 is actually a sequence of four transitions, one
for each digit that must be typed in order to arm the system. Suppose, for example, that the alarm code
is 9999. Then we can describe the code-entering fragment of the system as the DFSM shown in the figure.
To build a model of the protection status of a system, we’ll use three kinds of entities:
• Subjects: active agents, generally processes or users.
• Objects: resources that the agents need to exploit. These could include files, processes,
devices, etc. Notice that processes can be viewed both as subjects (entities capable of doing things) and
as objects (entities that other entities may want to invoke).
• Rights: capabilities that agents may have with respect to the objects. Rights could include read
access, write access, delete access or execute access for files, execute access for processes, edit,
compile, or execute access for source code, check or change access for a password file, and so forth.
The current protection status of a system is described with an access control matrix A that contains one
row for each agent and one column for each protected object. Each cell of this matrix contains the set
of rights that the agent possesses with respect to the object. Figure : shows a simple example of such a
matrix.
The protection status of a system must be able to evolve along with the system. We assume the
existence of the following primitive operations for changing the access matrix:
• Create subject (x) records the existence of a new subject x, such as a new user or process.
• Create object (x) records the existence of a new object x, such as a process or a file.
• Destroy subject (x).
• Destroy object (x).
• Enter r into A[s, o] gives subject s the right r with respect to object o.
• Delete r from A[s, o] removes subject s’s right r with respect to object o.
We will allow commands to be constructed from these primitives, but all such commands must be of
the following restricted form:
command-name(x1, x2, …, xn) =
    if r1 in A[…, …] and
       r2 in A[…, …] and
       …
       rj in A[…, …]
    then
       operation1
       operation2
       …
       operationm
In other words, the command may check that particular rights are present in selected cells of the
access matrix. If all conditions are met, then the operation sequence is executed. All the operations
must be primitive operations as defined above. So no additional tests, loops, or branches are allowed.
The parameters x1, x2, …, xn must each be bound to some subject or some object. The rights r1, r2, …,
rj are hard-coded into the definition of a particular command.
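The access control matrix and the restricted command form can be sketched in Python. The class, method and command names are illustrative (not from any standard API); the command body uses only the primitive operations defined above, with no extra tests or loops.

```python
class AccessMatrix:
    """Sketch of an access control matrix A with the primitive operations
    listed above.  Cells are stored sparsely: A[(s, o)] is the set of
    rights subject s holds on object o."""
    def __init__(self):
        self.subjects, self.objects, self.A = set(), set(), {}

    def create_subject(self, x):
        self.subjects.add(x)
        self.objects.add(x)              # a subject is also an object

    def create_object(self, x):
        self.objects.add(x)

    def destroy_subject(self, x):
        self.subjects.discard(x)
        self.objects.discard(x)

    def destroy_object(self, x):
        self.objects.discard(x)

    def enter(self, r, s, o):            # enter r into A[s, o]
        self.A.setdefault((s, o), set()).add(r)

    def delete(self, r, s, o):           # delete r from A[s, o]
        self.A.get((s, o), set()).discard(r)

    def rights(self, s, o):
        return self.A.get((s, o), set())

def grant_read(m, owner, friend, f):
    """A command of the restricted form (illustrative, not from the text):
    if 'own' in A[owner, f] then enter 'read' into A[friend, f]."""
    if "own" in m.rights(owner, f):
        m.enter("read", friend, f)
```

Executing grant_read leaks the right "read" into A[friend, f] exactly when the condition on A[owner, f] holds, which is the kind of evolution the safety question asks about.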
Define a protection framework to be a set of commands that have been defined as described
above and that are available for modifying an access control matrix. Define a protection system to be a
pair (init, framework). Init is an initial configuration that is described by an access control matrix that
contains various rights in its cells. Framework is a protection framework that describes the way in
which the rights contained in the matrix can evolve as a result of system events.
In designing a protection framework, our goal is typically to guarantee that certain subjects
maintain control over certain rights to certain objects. We will say that a right has leaked iff it is added
to some access control matrix cell that did not already contain it. We will say that a protection system
is safe with respect to some right r iff there is no sequence of commands that could, if executed from
the system’s initial configuration, cause r to be leaked.
We’ll say that a system is unsafe iff it is not safe. Note that this definition of safety is probably too
strong for most real applications. For example, if a process creates a file it will generally want to
assign itself various rights to that file. That assignment of rights should not constitute leakage. It may
also choose to allocate some rights to other processes. What it wants to be able to guarantee is that no
further transfer of unauthorized rights will occur. That more narrow definition of leakage can be
described in our framework in a couple of ways, including the ability to ask about leakage from an
arbitrary point in the computation (e.g., after the file has been created and assigned initial rights) and
the ability to exclude some subjects (i.e., those who are “trusted”) from the matrix when leakage is
evaluated. For simplicity, we will consider just the basic model here.
Given a protection system S = (init, framework) and a right r, is it decidable whether S is safe with
respect to r?
It turns out that if we impose an additional constraint on the form of the commands in the
system then the answer is yes. Define a protection framework to be mono-operational iff the body of
each command contains a single primitive operation. The safety question for mono-operational
protection systems is decidable. But such systems are very limited. For example, they do not allow the
definition of a command by which a subject creates a file and then gives itself some set of rights to
that file.
So we must consider the question of decidability of the more general safety question. Given an
arbitrary protection system S = (init, framework) and a right r, is it decidable whether S is safe with
respect to r? Now the answer is no, which we can prove by reduction from Hε = {<M> : Turing
machine M halts on ε}. The proof that we are about to present was originally given in work
concerned with protection and security in the specific context of operating systems. It is also
presented, in the larger context of overall system security, in [Bishop 2003].
We call these objects “rights” (in quotes) because, although we will treat them like rights in a
protection system, they are not rights in the standard sense. They do not represent actions that an agent
can take. They are simply symbols that will be manipulated by the reduction.
Each square of M’s tape that is either nonblank or has been visited by M will correspond to one cell in
the matrix A. The cell that corresponds to squarei of M’s tape will contain the “right” that corresponds
to the current symbol on squarei of the tape. In addition, the matrix will encode the position of M’s
read/write head and its state. It will do that by containing, in the cell that is currently under the
read/write head, the “right” that corresponds to M’s current state.
• It is possible to describe the transition function of a Turing machine as a protection framework
(a set of commands, as described above, for manipulating the access control matrix).
• So the question, “Does M ever enter one of its halting states when started with an empty tape?”
can be reduced to the question, “If A starts out representing M’s initial configuration, does a symbol
corresponding to any halting state ever get inserted into any cell of A?” In other words, “Has any
halting state symbol leaked?”
So, if we could decide whether an arbitrary protection system is safe with respect to an arbitrary right
r, we could decide Hε. But we know, from Theorem, that Hε is not in D.
The only question we are asking about M is whether or not it halts. If it halts, we don’t care which of
its halting states it lands in. So we will begin by modifying M so that it has a single halting state qf.
The modified M will enter qf iff the original M would enter any of its halting states. Now we can ask
the specific question, does qf leak?
To make it easier to represent M’s configuration as an access control matrix, we will assume that M has
a one-way (to the right) infinite tape, rather than our standard, two-way infinite tape. By Theorem 17.5,
any computation by a Turing machine with a two-way infinite tape can be simulated by a Turing
machine with a one-way infinite tape, so this assumption does not limit the generality of the result that
we are about to present.
To see how a configuration of M is encoded as an access control matrix, consider the simple
example shown in Figure (a). M is in state q5 and we assume that it started on the blank just to the left
of the beginning of the input, so there are four nonblank or examined squares on M’s tape. This
configuration will be represented as the square access control matrix A, shown in Figure (b). A
contains one row and one column for each tape square s that is nonblank or has been visited:
(a) M's tape (M in state q5, head on square 3):
    ❑ a b b ...
(b) The corresponding access control matrix A:
         s1        s2      s3        s4
    s1   ❑         own
    s2             a       own
    s3                     b, q5     own
    s4                               b, end
Notice that, primarily, only cells along A’s major diagonal contain any rights. The cell A[i, i] contains
the “right” that corresponds to the contents of tape square i. Since the read/write head is on square 3,
A[3, 3] also contains the “right” corresponding to the current state, q5. The only other “rights” encode
the sequential relationship among the squares on the tape. If si immediately precedes sj, then si “owns”
sj. Finally, the cell that corresponds to the rightmost nonblank or visited tape square contains the
“right” end.
It remains to show how the operation of M can be simulated by commands that modify A. Given a
particular M, we can construct a set of such commands that exactly mimic the moves that M can make.
For example, suppose that, in state q5 reading b, M writes an a, moves left, and goes to state q6. We
construct the following command:
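As a sketch in the restricted command form defined earlier (the command name is illustrative; sj is the square currently under the head and si its left neighbor):

move-left-q5-b(si, sj) =
    if own in A[si, sj] and
       q5 in A[sj, sj] and
       b in A[sj, sj]
    then
       delete q5 from A[sj, sj]
       delete b from A[sj, sj]
       enter a into A[sj, sj]
       enter q6 into A[si, si]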
We must construct one such command for every transition of M. We must also construct commands
that correspond to the special cases in which M tries to move off the tape to the left and in which it
moves to the right to a previously unvisited blank tape square. The latter condition occurs whenever M
tries to move right and the current tape square has the “right” end. In that case, the appropriate
command must first create a new object and a new subject corresponding to the next tape square.
The simulation of a Turing machine M begins by encoding M’s initial configuration as an access
control matrix. For example, suppose that M’s initial configuration is as shown in Figure (a). Then we
let A be the access control matrix shown in Figure (b).
(a) M's initial tape (M in state q0, head on square 1):
    ❑ a b ...
(b) The corresponding access control matrix A:
         s1        s2     s3
    s1   ❑, q0     own
    s2             a      own
    s3                    b, end
Figure: Encoding an initial configuration as an access control matrix
There are a few other details that we must consider. For example, since we are going to test
whether qf ever gets inserted into A during a computation, we must be sure that qf is not in A in the
initial configuration. So if M starts in qf , we will first modify it so that it starts in some new state and
then makes one transition to qf.
Notice that we have constructed the commands in such a fashion that, if M is deterministic, exactly one
command will have its conditions satisfied at any point. If M is nondeterministic then more than one
command may match against some configurations. We can now show that it is undecidable whether,
given an arbitrary protection system S = (init, framework) and right r, S is safe with respect to r. To do
so, we define the following language and show that it is not in D:
Safety = {<S, r> : S is safe with respect to r}.
Proof: We show that Hε = {<M> : Turing machine M halts on ε} reduces to Safety (via a reduction
with negation), and so Safety is not in D because Hε isn't. Let R be a reduction from Hε to Safety
defined as follows:
R(<M>) =
1. Make any necessary changes to M:
If M has more than one halting state, then add a new unique halting state qf and add transitions that
take it from each of its original halting states to qf.
If M starts in its halting state qf, then create a new start state that simply reads whatever symbol is
under the read/write head and then goes to qf.
2. Build S:
Construct an initial access control matrix A that corresponds to M’s initial configuration.
Construct a set of commands, as described above, that correspond to the transitions of M.
3. Return <S, qf>.
{R, ¬} is a reduction from Hε to Safety: if Oracle existed and decided Safety, then C =
¬Oracle(R(<M>)) would decide Hε. R and ¬ can be implemented as Turing machines, and C is correct:
By definition, S is unsafe with respect to qf iff qf is not present in the initial configuration of A and
there exists some sequence of commands in S that could transform the initial configuration into a new
configuration in which qf has leaked, i.e., it appears in some cell of A. Since the initial configuration
of S corresponds to M being in its initial configuration on a blank tape, M does not start in qf, and the
commands of S simulate the moves of M, this will happen iff M reaches state qf and so halts. Thus C is
correct: M halts on ε iff S is unsafe with respect to qf, i.e., iff R(<M>) ∉ Safety.
Does the undecidability of Safety mean that we should give up on proving that systems are safe? No.
There are restricted models that are decidable. And there are specific instances of even the more
general model that can be shown to have specific properties. This result just means that there is no
general solution to the problem.