Modal Logic in Computer Science
Leigh Lambert
Edith Hemaspaandra, Chair
Chris Homan, Reader
Michael Van Wie, Observer
1 Abstract
Modal logic is a widely applicable method of reasoning for many areas of computer science. These areas include artificial intelligence, database theory, distributed systems, program verification, and cryptography theory. Modal logic contains the propositional logic operators, such as conjunction and negation, together with operators that can have the following meanings: "it is necessary that," "after a program has terminated," "an agent knows or believes that," "it is always the case that," etc. Computer scientists have examined problems in modal logic, such as satisfiability: the problem of determining whether a formula in a given logic is satisfiable. The complexity of satisfiability in modal logic has a wide range. Depending on how a modal logic is restricted, the complexity can be anywhere from NP-complete to highly undecidable. This project gives an introduction to common variations of modal logic in computer science and their complexity results.
2 Introduction
Modal logic is an extension of traditional propositional logic with the addition of extra operators, such as "it is possible that," "it is necessary that," "it will always be the case that," "an agent believes that," etc. In the last fifty years, modal logic has become extremely useful in different areas of computer science, such as artificial intelligence, database theory, distributed systems, program verification, and cryptography theory. For example, propositional dynamic logic is well suited to verifying the correctness of complex and integrated systems. Epistemic logic reasons about knowledge and belief and is useful when discussing the knowledge of individuals in a group, which makes it valuable for distributed systems and artificial intelligence theory. Particular forms of epistemic logic are used for reasoning about zero-knowledge proofs, which are part of cryptography theory.
What makes modal logic so easily adaptable to many areas of computer science is the flexibility of the meanings of modal operators. The basic modal operator, referred to as the "necessary" operator, and its dual, the "possibility" operator, can take on different meanings depending on the application. For example, the following are different meanings for the "necessary" operator: "after all possible computations of a program have terminated," "agent i knows that," "it is always true that," "it ought to be true that," etc. Modal logic semantics is based on possible-worlds semantics, which considers the different worlds, or states, that could result after a change occurs at one world. For instance, there are many possibilities for what tomorrow might be, but only one of those possibilities is realized. Modal operators need to follow certain rules. For instance, if "always ϕ" is true, then "always always ϕ" is true. For processes in a distributed system, we want to make sure that if a process "knows that ϕ" is true, then ϕ is actually true. These rules lead to different variations of modal logic. We refer to each variation as itself a modal logic.
Computer scientists have explored problems, such as satisfiability and
validity, in different variations of modal logic. In this paper, we examine
complexity results for satisfiability. Satisfiability in a modal logic S, or
S-SAT, determines if a formula is satisfiable in S.
This paper is a survey of previous research on modal logic in computer science. It gives an introduction to the following: (1) modal logic, (2) its uses in computer science, and (3) the complexity of satisfiability in variations of modal logic. The paper provides the necessary background in modal logic, its applications in computer science, and complexity theory, and it puts the complexity of different modal logics into a general framework.
The rest of the paper is organized as follows. Section 3 discusses the basics of uni-modal logics and other general concepts that apply to each modal logic. Section 4 gives an introduction to the uses of modal logic in computer science; in this section, we present variations of modal logic, including multi-modal logics. Section 5 discusses the problem of satisfiability of formulas in propositional logic and uni-modal logics, and examines some of the basics of computational complexity as well. Our last section lists complexity results for the modal logics introduced in Section 4.
3 Uni-Modal Logics
A logic is described by either syntax and semantics or a deductive system. We begin by presenting the syntax and semantics for basic uni-modal logics, along with general information useful to the discussion of other modal logics. We then discuss deductive systems common to uni-modal logics. We will present more complex modal logics in Section 4.
3.1 Syntax
Modal logic consists of syntactical and semantical information that describes
modal formulas. We begin by presenting the syntax for uni-modal logics.
The simplest, most basic modal formulas are propositions. Propositions
describe facts about a particular world or state, such as “The sky is cloudy
today” or “Max’s chair is broken.” Let Φ = {p1 , p2 , p3 , . . .} be a nonempty
set of propositions. A modal formula is either an element of Φ or has the
form ϕ ∧ ψ, ¬ϕ, or 2ϕ where ϕ and ψ are modal formulas. Intuitively, the
formula 2ϕ means “ϕ is necessarily true.” Let L(Φ) be the smallest set of
formulas containing Φ that is closed under conjunction, negation, and the
modal operator 2. Remaining operators can be defined in terms of other
operators. We define the following:
• ϕ ∨ ψ is defined as ¬(¬ϕ ∧ ¬ψ)
• ϕ → ψ is defined as ¬(ϕ ∧ ¬ψ)
• true is defined as a propositional tautology (such as p1 ∨ ¬p1)
• false is defined as ¬true
• 3ψ is defined as ¬2¬ψ.
The size of a formula ϕ, written size(ϕ), is the number of symbols in ϕ. Each symbol is a member of the set Φ ∪ {∧, ¬, 2, (, )}. The encoding of a formula ϕ is ϕ with each proposition replaced with its binary encoding. For example, the elements p1, p2, p3, . . . would be replaced with p1, p10, p11, . . ., respectively. The length of ϕ, written as |ϕ|, is the length of the encoding of ϕ. Thus, |ϕ| ≥ size(ϕ). Also, |ϕ| can be approximately size(ϕ) × log2(size(ϕ)), depending on the propositions used in ϕ.
Let formula ϕ be of the form ¬ϕ′ (respectively, 2ϕ′, ϕ′ ∧ ϕ′′). We say that formula ψ is a subformula of ϕ if ψ is ϕ or ψ is a subformula of ϕ′ (respectively, ϕ′; ϕ′ or ϕ′′). Subformulas are defined inductively. Thus, each proposition contained in ϕ is a subformula of ϕ. Define sub(ϕ) to be the set of all subformulas of ϕ. We denote the size of a set A, or the number of elements in A, by ||A||. An important property of sub(ϕ) is ||sub(ϕ)|| ≤ size(ϕ). This property can be proven by induction on the size of ϕ. In addition, the modal depth of a formula ϕ is the maximum number of nested modal operators in ϕ. For example, the modal depth of the formula 2(2ϕ′ ∧ 2¬2ϕ′′) is at least 3 because ϕ′′ is nested within three modal operators.
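To make these notions concrete, the following is a minimal sketch (ours, in Python; the tuple representation and helper names are our own, not part of the original text) of modal formulas together with size(ϕ), sub(ϕ), and modal depth. Parentheses are not counted in this simplified version of size.

# A formula is a nested tuple: ("p", i), ("not", f), ("and", f, g), or ("box", f).

def size(f):
    # number of symbols in f, not counting parentheses
    if f[0] == "p": return 1
    if f[0] in ("not", "box"): return 1 + size(f[1])
    return 1 + size(f[1]) + size(f[2])

def sub(f):
    # the set of all subformulas of f
    if f[0] == "p": return {f}
    if f[0] in ("not", "box"): return {f} | sub(f[1])
    return {f} | sub(f[1]) | sub(f[2])

def modal_depth(f):
    # maximum number of nested 2 operators in f
    if f[0] == "p": return 0
    if f[0] == "not": return modal_depth(f[1])
    if f[0] == "box": return 1 + modal_depth(f[1])
    return max(modal_depth(f[1]), modal_depth(f[2]))

example = ("box", ("and", ("box", ("p", 1)), ("box", ("not", ("box", ("p", 2))))))
assert modal_depth(example) == 3 and len(sub(example)) <= size(example)

The example formula corresponds to 2(2p1 ∧ 2¬2p2), and the assertion checks the property ||sub(ϕ)|| ≤ size(ϕ) on it.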
3.2 Semantics
Now that we have described the syntax of uni-modal logics, we introduce the semantics, which determines whether a given formula is true or false. Modal semantics is formally defined using Kripke structures, which formalize possible-worlds semantics. A Kripke structure is a tuple M = (W, R, V) where W is a set, R is a binary relation on W, and V maps Φ × W to {true, false} [22]. The set W is a set of 'possible worlds,' or states, and R determines which states are accessible from any given state in W. We say that state b ∈ W is accessible from state a ∈ W if and only if (a, b) ∈ R. R is known as the accessibility relation. The function V determines which facts, or propositions, are true at each of the worlds.
Each modal structure can be depicted as a directed graph. Each node is named after its corresponding state in W. An edge from node w to node w′ means that (w, w′) ∈ R. At each node, there are labels indicating which propositions are true at the particular state. Since Φ is usually infinite, we only use labels for propositions that are needed (i.e., propositions in formulas we are working with).
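As a small illustration (ours, not from the original text; the state names and valuation below are hypothetical), a Kripke structure can be represented directly with Python sets and dictionaries:

W = {"u", "v", "w"}                          # possible worlds
R = {("w", "u"), ("w", "v"), ("v", "v")}      # accessibility relation
V = {("p1", "v"): True, ("p1", "w"): True,    # V maps (proposition, state) to a truth value;
     ("p2", "u"): True}                       # pairs not listed are taken to be False

def accessible(state):
    # the states accessible from the given state
    return {t for (s, t) in R if s == state}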
[Figure 1.1: an example modal structure drawn as a directed graph, with states u, v, and w labeled by the propositions true at each state.]
Formally, we say that ϕ is valid in a modal structure M, written M |= ϕ, if for every w ∈ W, (M, w) |= ϕ. Formula ϕ is satisfiable in M if and only if (M, w) |= ϕ at some state w ∈ W. We say that ϕ is valid in a set of structures M, written M |= ϕ, iff ϕ is valid in all structures in M, and ϕ is satisfiable in a set of structures M iff ϕ is satisfiable in some structure in M. Formula ϕ is valid in a structure M (respectively, a set of structures M) iff ¬ϕ is not satisfiable in M (respectively, M). The following theorem gives us basic properties about necessity.
Theorem 3.1 For every modal structure M:
1. if ϕ is an instance of a propositional tautology, then M |= ϕ,
2. if M |= ϕ and M |= ϕ → ψ, then M |= ψ,
3. M |= (2ϕ ∧ 2(ϕ → ψ)) → 2ψ,
4. if M |= ϕ, then M |= 2ϕ.
3.3 Axiomatizations
This section introduces deductive systems for uni-modal logics. We define a
logic as a collection of axioms and rules of inference. For the basic uni-modal
logic K, the axioms and rules of inference are based on Theorem 3.1. All
other uni-modal logics we will look at are extensions of K.
R1 If ⊢ ϕ and ⊢ ϕ → ψ, then ⊢ ψ (Modus ponens).
R2 If ⊢ ϕ, then ⊢ 2ϕ (Generalization).
A1 All instances of propositional tautologies.
A2 (2ϕ ∧ 2(ϕ → ψ)) → 2ψ (Distribution Axiom).
A3 2ϕ → ϕ ("if ϕ is necessarily true, then ϕ is true").
A4 2ϕ → 22ϕ (“if ϕ is necessarily true, then it is necessary that ϕ is
necessarily true”).
A5 ¬2ϕ → 2¬2ϕ (“if ϕ is not necessarily true, then it is necessary that ϕ
is not necessarily true”).
A6 ¬2(false) (“inconsistent facts are not necessarily true”).
Listed below are a few common modal logics along with axioms that are always true in each. Each modal logic below contains the axioms and rules of inference that make up K. In addition, T contains A3; S4 contains A3 and A4; S5 contains A3, A4, and A5; and KD45 contains A4, A5, and A6.
Theorem 3.2 [21] For X ∈ {K, T, S4, S5, KD45}, X is sound and com-
plete with respect to the set of all X-models. A modal formula ϕ is:
• satisfiable in an X-model if and only if ¬ϕ is not X-provable.
• valid in an X-model if and only if ϕ is X-provable.
To determine if ϕ is X-provable, ϕ is inferred from instances of the axioms
contained in X, using R1 and R2.
The following proposition addresses properties of S5 and KD45. It is
useful for later sections of this paper.
Proposition 3.3 [6]
4.1.1 Syntax
Let n be a finite number of agents. Recall that Φ = {p1 , p2 , p3 , . . .} is a
nonempty set of propositions. A modal formula is built inductively from Φ;
it is either an element of Φ or has the form ϕ∧ψ, ¬ϕ, or [i]ϕ, where 1 ≤ i ≤ n
and ϕ and ψ are modal formulas. The formula [i]ϕ intuitively means that
“agent i knows or believes ϕ.” Let Ln (Φ) be defined as the smallest set of
formulas containing Φ that is closed under conjunction, negation, and the
modal operator [i], 1 ≤ i ≤ n. We define <i>ϕ to be ¬[i]¬ϕ. The dual formula <i>ϕ means that "ϕ holds at some state that agent i considers possible," i.e., agent i considers ϕ possible.
In addition to reasoning about what each agent in a group knows, it
may be helpful, depending on the application, to reason about the knowl-
edge common to all agents. Common knowledge describes those facts that
everyone in the group knows that everyone knows that . . . everyone knows
to be true. This allows us to formalize facts about a group’s “culture.”
The language that agents use to communicate is an example of common
knowledge. We define the following: if ϕ is a formula, then so are Eϕ ("everyone in the group knows ϕ") and Cϕ ("ϕ is common knowledge among the group"). Let LCn(Φ) be the extension of Ln(Φ) that is closed under conjunction, negation, the n modal operators, E, and C.
Reasoning about distributed knowledge is also useful in some applications. Distributed knowledge describes the facts that can be gathered by combining the knowledge of all agents in a group. For example, if agent i knows ϕ and agent j knows ϕ → ψ, then ψ is considered distributed knowledge [15]. Let D be the distributed knowledge operator. If ϕ is a formula, then so is Dϕ. Let LDn(Φ) be the extension of Ln(Φ) that is closed under conjunction, negation, the n modal operators, and D. Distributed knowledge can be characterized as the knowledge that a 'wise agent' would have, if there were such an agent in the group.
4.1.2 Semantics
The semantics for epistemic logic is based on the semantics introduced in
Section 3. We redefine a modal structure as M = (W, R1 , R2 , . . . , Rn , V ).
W is a set of states, Ri , 1 ≤ i ≤ n, is a relation that determines which states
agent i believes to be accessible from any state in W , and V maps Φ × W
to {true, false}.
Our definition for |= comes from Section 3, with slight changes in the
modal case.
(M, w) |= ¬ϕ iff (M, w) ⊭ ϕ.
(M, w) |= [i]ϕ iff (M, w′) |= ϕ for all w′ such that (w, w′) ∈ Ri, 1 ≤ i ≤ n.
Next, we describe the semantics for common and distributed knowledge. The following notation is helpful for understanding the definition of C. Let E^1ϕ = Eϕ, and E^{k+1}ϕ = E(E^kϕ) for k ≥ 1. We add to the definition of |= accordingly: Cϕ holds at a state exactly when E^kϕ holds there for every k ≥ 1. For distributed knowledge, Dϕ is evaluated over the intersection of all the agents' accessibility relations, which is the relation that results from combining the knowledge of all agents. This new relation describes a new, 'wise agent' that knows everything that each of the agents knows.
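The semantic clause for D can be illustrated with a small sketch (ours, not from the original text; the function names are hypothetical): distributed knowledge is evaluated over the intersection of the agents' accessibility relations.

def distributed_relation(relations):
    # relations: a list of sets of (state, state) pairs, one per agent
    combined = set(relations[0])
    for R_i in relations[1:]:
        combined &= set(R_i)          # keep only the pairs every agent considers possible
    return combined

def holds_D(phi_holds_at, relations, w):
    # Dϕ holds at w iff ϕ holds at every state reachable from w
    # through the intersection of the agents' relations.
    R_D = distributed_relation(relations)
    return all(phi_holds_at(v) for (u, v) in R_D if u == w)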
4.2 Axiomatizations
The following uni-modal logics, K, T, S4, S5, KD45, correspond to the fol-
lowing modal logics Kn , Tn , S4n , S5n , KD45n , respectively. The axioms and
rules of inference are the same, except for changes due to modal operators.
R1 If ⊢ ϕ and ⊢ ϕ → ψ, then ⊢ ψ (Modus ponens).
R2 If ⊢ ϕ, then ⊢ [i]ϕ, where 1 ≤ i ≤ n (Generalization).
A3 [i]ϕ → ϕ, where 1 ≤ i ≤ n;
A4 [i]ϕ → [i][i]ϕ, where 1 ≤ i ≤ n;
A5 ¬[i]ϕ → [i]¬[i]ϕ, where 1 ≤ i ≤ n;
A6 ¬[i](false), where 1 ≤ i ≤ n.
Sato proved that Kn is sound and complete with respect to the set of all modal structures [35]. Furthermore, Tn (respectively, S4n, S5n, KD45n) is sound and complete with respect to the set of modal structures whose accessibility relations are reflexive (respectively; reflexive and transitive; reflexive, symmetric, and transitive; serial, transitive, and Euclidean) [35].
The following are axioms and a rule of inference that characterize axiomatizations for common knowledge.
A7 Eϕ ≡ [1]ϕ ∧ . . . ∧ [n]ϕ ("everyone knows ϕ if and only if each agent knows ϕ").
A8 Cϕ → E(ϕ ∧ Cϕ) ("if ϕ is common knowledge, then everyone knows that ϕ is true and is common knowledge").
R3 From ⊢ ϕ → E(ψ ∧ ϕ) infer ⊢ ϕ → Cψ (Induction Rule).
Let KnC (respectively, TnC, S4nC, S5nC) be the modal logic that results from adding A7, A8, and R3 to the axioms and rules of inference for Kn (respectively, Tn, S4n, S5n). Milgrom proved that the logic KnC is sound and complete with respect to the set of all modal structures [27]. In addition, TnC (respectively, S4nC, S5nC) is sound and complete with respect to the set of modal structures whose accessibility relations are reflexive (respectively; reflexive and transitive; reflexive, symmetric, and transitive) [27].
The axiom that characterizes distributed knowledge is the following:
A9 [i]ϕ → Dϕ, 1 ≤ i ≤ n (“if an agent knows ϕ is true, then ϕ is distributed
knowledge”).
Let KnD (respectively, TnD, S4nD, S5nD) be the modal logic that results from adding A9 to the axioms for Kn (respectively, Tn, S4n, S5n). Since the operator D is in essence a knowledge operator, we expect axioms such as A2, A3, A4, and A5 and the rule of inference R2 to hold for D as well as for the operators [i], 1 ≤ i ≤ n. For example, the statement (Dϕ ∧ D(ϕ → ψ)) → Dψ must be true for A2 to be true in the modal logics for distributed knowledge. For n ≥ 2 agents, KnD is sound and complete with respect to the set of all modal structures [10]. Furthermore, for n ≥ 2 agents, TnD (respectively, S4nD, S5nD) is sound and complete with respect to the set of modal structures whose accessibility relations are reflexive (respectively; reflexive and transitive; reflexive, symmetric, and transitive) [10].
4.2.1 Applications of Epistemic Logic
We now present some applications of epistemic logic in areas of computer science. One of the most widely used applications of epistemic logic is to capture knowledge in a multi-agent system. Some examples of multi-agent systems are processes over a computer network, a simulation of persons playing a game, like Scrabble, or object sensors on vehicles. We will give a description of this framework and how it relates to epistemic logic. It was first introduced in [13].
In a multi-agent system, there are n agents, and each agent i has a local environment. An agent's local environment consists of the information that makes up i's local state in the system. For example, in a Scrabble game, agent i's local state might include the tiles i currently holds and the words visible on the board.
A run r describes one possible execution of the system as a sequence of global states, one for each time m; a pair (r, m) is called a point, and ri(m) denotes agent i's local state at the point (r, m). A system S is a set of runs. Such a
system can be viewed in terms of a modal structure, with the exception of V. The set of states, W, is the set of points. The accessibility relation Ri, where 1 ≤ i ≤ n, corresponds to the relation for agent i; it is determined by ((r, m), (r′, m′)) ∈ Ri if ri(m) = ri(m′). This means that agent i considers (r′, m′) possible at point (r, m) if i has the same local environment at both points.
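The following sketch (ours, in Python, with hypothetical run and state names; not part of the original text) shows how agent i's accessibility relation can be computed from local states, following the rule that two points are related exactly when i's local state is the same at both.

# local_state[(run, time)][i] gives agent i's local state at the point (run, time)
local_state = {
    ("r1", 0): ("a", "x"),   # agent 0 sees "a", agent 1 sees "x"
    ("r1", 1): ("a", "y"),
    ("r2", 0): ("a", "x"),
}

def relation_for_agent(i, points):
    # agent i cannot distinguish two points with the same local state
    return {(p, q) for p in points for q in points
            if local_state[p][i] == local_state[q][i]}

points = set(local_state)
R0 = relation_for_agent(0, points)   # relates every pair of points (agent 0 always sees "a")
R1 = relation_for_agent(1, points)   # ("r1", 1) is related only to itself for agent 1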
Let Φ be a set of basic propositions. These propositions describe facts
about the system. For example, in a distributed system, some facts might be
“the system is deadlocked,” “the value of variable x is 3,” “process 2 receives
message m in round 6 of this run,” etc. An interpreted system is a tuple
(S, V ), where S is a system and V is a function that maps propositions in
Φ, depending on the global state, to truth values. In other words, V (p, s) ∈
{true, false}, where p ∈ Φ and s is a global state.
We associate I = (S, V) with a modal structure M = (W, R1, . . . , Rn, V), using the definitions we already presented for W, R1, . . . , Rn and V. Thus, agents' knowledge is determined by their local environments. What it means for a formula ϕ to be true at a point (r, m) in an interpreted system I, written (I, r, m) |= ϕ, is then obtained by applying our earlier definitions to the associated structure M. Results obtained in this framework have provided important insights into, and limitations on, the design of distributed protocols.
Although multi-agent systems have been a common application, epistemic logic has been used in other areas of computer science as well. For example, there are connections between epistemic logic and zero-knowledge proofs, which are important to cryptography theory [8]. Richer forms of epistemic logic, whose axioms incorporate some probability theory, have been useful in reasoning about zero-knowledge proofs. Knowledge-based programming is a style of programming in which explicit tests on what an agent knows determine the agent's actions. Halpern and Fagin provide a formal semantics for knowledge-based protocols, but a complete programming language has not yet been produced [13].
These are only a few areas of computer science where epistemic logic
is used. There are still areas of study where the use of epistemic logic is
unrealized. Next, we will look at dynamic logic and applications where it is
useful.
4.3.1 Syntax
We will extend L(Φ) introduced in Section 3. Let Φ = {p1 , p2 , p3 . . .} be
a nonempty set of propositions. An 'atomic' program is a basic program, meaning it does not consist of other programs. Let Π =
{a1 , a2 , a3 , . . .} be a nonempty set of atomic programs. Formulas are built
inductively from Φ and Π using the following operators:
• if p ∈ Φ, then p is a formula,
• if a ∈ Π, then a is a program,
• if ϕ and ψ are formulas, then ϕ ∧ ψ and ¬ϕ are formulas,
• if ϕ is a formula and α is a program, then [α]ϕ is a formula ([α]ϕ intuitively means "after every terminating execution of α, ϕ holds"),
• if α and β are programs and ϕ is a formula, then α; β, α ∪ β, α∗, and ϕ? are programs.
The program α; β means “do α and then β.” The sequential composition
operator addresses the need for order in programs. For example, ’;’ is a com-
mon operator in many programming languages used to separate statements.
The program α ∪ β means “do either α or β (nondeterministically).” The
program α∗ means to “repeat α some finite number of times.” The iteration
operator is related to loops. The program ϕ? reflects if-then conditions, as
it means to “test ϕ: continue if ϕ is true, otherwise ‘fail’.”
Let LP DL (Φ) be defined as the smallest set of formulas containing Φ that
is closed under conjunction, negation, and program necessity. Let LP DL (Π)
be defined as the smallest set of programs containing Π that is closed under
sequential composition, nondeterministic choice, iteration, and test. Note
that the definitions of programs and formulas rely on each other. Formulas
depend on programs because of [α]ϕ, and programs depend on formulas
because of ϕ?.
Parentheses can be dropped by assigning precedence to operators in the
following order: unary operators, the operator ‘;’, and the operator ‘∪.’
Thus, the expression
[ϕ?; α∗ ; β]ϕ ∧ ψ
should be read as
([(((ϕ?); (α∗ )); β)]ϕ) ∧ ψ.
Parentheses are used to force a particular parse of an expression or to improve readability.
We can write some classical programming statements, such as loop constructs, using PDL program operators; a concrete instance is worked out below.
• 'if ϕ then α else β' is defined as (ϕ?; α) ∪ (¬ϕ?; β)
• 'while ϕ do α' is defined as (ϕ?; α)∗; ¬ϕ?
• 'repeat α until ϕ' is defined as α; (¬ϕ?; α)∗; ϕ?
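As a worked example (ours, not from the original text), consider a program that repeatedly executes an atomic program a as long as a proposition p holds. Using the definition above,

'while p do a' = (p?; a)∗; ¬p?

A run of this program tests p; if p is true it executes a and tests again, and it can only terminate by passing the final test ¬p?, i.e., when p has become false. The formula [(p?; a)∗; ¬p?]ϕ then says that ϕ holds in every state in which the loop can terminate.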
4.3.2 Semantics
The semantics for PDL is based on the semantics presented in Section 3. We redefine a modal structure as M = (W, {Ra | a ∈ Π}, V). W is a set of program states, and each Ra is a binary relation that determines which states are accessible from any state in W. V maps Φ × W into {true, false}.
Intuitively, we read (w, w′) ∈ Ra as saying that w is an initial state of program a and w′ is a corresponding ending state of a. We obtain further accessibility relations when combining programs; these relations let us discuss the input and output states of compound programs easily. The following are the meanings of the program operators:
1. Rα;β is defined as {(w, w′′) | ∃w′ such that wRαw′ ∧ w′Rβw′′},
2. Rα∪β is defined as Rα ∪ Rβ,
3. Rα∗ is defined as {(u, v) | ∃ u0, . . . , un, where n ≥ 0, u = u0, and v = un, such that (ui, ui+1) ∈ Rα for 0 ≤ i < n}.
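A small sketch (ours, in Python) of these constructions on finite relations, assuming each relation is a set of (state, state) pairs:

def compose(Ra, Rb):
    # R_{α;β}: run α, then β
    return {(w, w2) for (w, w1) in Ra for (v1, w2) in Rb if w1 == v1}

def choice(Ra, Rb):
    # R_{α∪β}: nondeterministically run α or β
    return Ra | Rb

def star(Ra, states):
    # R_{α*}: zero or more iterations of α (reflexive-transitive closure)
    R = {(w, w) for w in states}          # n = 0: zero iterations
    changed = True
    while changed:
        bigger = R | compose(R, Ra)
        changed = bigger != R
        R = bigger
    return R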
We give the meaning of Rϕ? after presenting the definition of |=. Our definition for |= is similar to the |= presented in Section 3. Recall that V gives the information for the base case, in which ϕ is a proposition. The propositional cases come straight from Section 3; for program necessity we have:
(M, w) |= [α]ϕ iff (M, w′) |= ϕ for all w′ such that (w, w′) ∈ Rα.
The test relation is then Rϕ? = {(w, w) | (M, w) |= ϕ}: a test does not change the state and can only continue from states where ϕ holds.
The loop constructs described earlier inherit their semantics from the above. For example, for the while-do program, (w, w′) ∈ R'while ϕ do α' iff w′ is reachable from w by finitely many iterations of α, each started in a state satisfying ϕ, and ϕ is false at w′ [17].
4.3.3 Axiomatizations
The following is a list of rules of inference and axioms for the deductive
system of PDL. Let ϕ and ψ be formulas and α and β be programs.
R1 If ⊢ ϕ and ⊢ ϕ → ψ, then ⊢ ψ (Modus Ponens).
R2 If ⊢ ϕ, then ⊢ [α]ϕ (Generalization).
A6 [ψ?]ϕ ↔ (ψ → ϕ).
A7 [α∗]ϕ ↔ ϕ ∧ [α][α∗]ϕ.
A8 (ϕ ∧ [α∗](ϕ → [α]ϕ)) → [α∗]ϕ.
This is a sound and complete axiom system for PDL [37]. The axioms A1 and A2 and the rules of inference are taken from the axioms and rules of inference presented in Section 3.3. A8, called the Induction Axiom for PDL, intuitively means: "Suppose ϕ is true. If, whenever ϕ is true after some number of iterations of α, ϕ is still true after one more iteration of α, then ϕ will be true after any number of iterations of α." For a more in-depth discussion of PDL, look to Unit 2 of [17].
A central use of dynamic logic is to verify that a program meets its correctness specification. Correctness specifications are very important, especially for large programs. Programmers tend to restate problems as specifications so that they know exactly what they are supposed to build, and formulating specifications often reveals unforeseen cases, which is very useful for error handling.
PDL, and hence dynamic logic, is not well suited to reasoning about program behavior at intermediate states; other logics, such as process logic and temporal logic, are designed for that. PDL is better suited to reasoning about program behavior with respect to only input and output states. For example, the accessibility relation for a program α only contains information about an input and an output state, i.e., (w, w′) ∈ Rα means that w′ is an output state when program α is run with initial state w. A reasonable restriction for dynamic logic is to only consider programs that halt. Some programs, like operating systems, are not meant to halt, and dynamic logic should not be used to reason about them.
For programs meant to halt, correctness specifications are usually in the
form of input and output specifications. For instance, formal documentation
of programs usually consists of detailed descriptions of input and output
specifications.
Dynamic logic is then used to reason about a program or a collection of
programs. PDL is used to reason about regular programs. Regular programs
are defined as follows:
Next, we will describe another logic that is useful for reasoning about
intermediary states of programs, temporal logic.
• if p ∈ Φ, then p is a formula;
• if ϕ and ψ are formulas, then ϕ ∧ ψ, ¬ϕ, Gϕ ("it will always be the case that ϕ"), and Hϕ ("it has always been the case that ϕ") are formulas.
The dual operators of G and H are F ('at least once in the future') and P ('at least once in the past'), respectively. They are defined as Fϕ ≡ ¬G¬ϕ and Pϕ ≡ ¬H¬ϕ. Let Lt(Φ) be the least set of formulas containing Φ that is closed under conjunction, negation, and the modal operators G and H.
The semantics for PTL uses Kripke structures, as seen in previous logics. Let M = (W, R, V) be a modal structure as defined in Section 3. We define the relation |= as usual.
(M, w) |= ¬ϕ iff (M, w) ⊭ ϕ
(M, w) |= ϕ ∧ ψ iff (M, w) |= ϕ and (M, w) |= ψ
(M, w) |= Gϕ iff (M, w′) |= ϕ for all w′ ∈ W such that (w, w′) ∈ R
(M, w) |= Hϕ iff (M, w′) |= ϕ for all w′ ∈ W such that (w′, w) ∈ R
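A brief sketch (ours, in Python) of these clauses over a finite structure, showing that H simply uses the accessibility relation in the reverse direction:

def holds_G(phi_at, R, w):
    # Gϕ: ϕ holds at every state reachable forward from w
    return all(phi_at(v) for (u, v) in R if u == w)

def holds_H(phi_at, R, w):
    # Hϕ: ϕ holds at every state from which w is reachable (one step back)
    return all(phi_at(u) for (u, v) in R if v == w)

def holds_F(phi_at, R, w):
    # Fϕ ≡ ¬G¬ϕ: ϕ holds at some forward successor
    return not holds_G(lambda v: not phi_at(v), R, w)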
4.4.2 Axiomatizations
The following modal logics for PTL are similar to those introduced in Section 3. They are called Kt, Tt, K4t, and S4t. These logics use the following axioms and rules of inference.
A4: PGϕ → ϕ.
A5: FHϕ → ϕ.
R1: If ⊢ ϕ and ⊢ ϕ → ψ, then ⊢ ψ (Modus Ponens).
R2: If ⊢ ϕ, then ⊢ Gϕ (G-generalization).
R3: If ⊢ ϕ, then ⊢ Hϕ (H-generalization).
The axioms A1, A2, A3 and the rules R1, R2, R3 hold for each of the modal
logics Kt , Tt , K4t , S4t . In addition,
• A4 holds for Tt ,
example, a database for a bank subtracts an amount b from an owner's bank account if the owner has made a withdrawal of amount b. To reason about such a system, we need to know what facts are true at which points in time. This calls for temporal logic.
(M, w) |= ¬ϕ iff (M, w) ⊭ ϕ
(M, w) |= ϕ ∧ ψ iff (M, w) |= ϕ and (M, w) |= ψ
(M, w) |= Oϕ iff (M, w′) |= ϕ for all w′ ∈ W such that (w, w′) ∈ R
4.5.2 Axiomatizations
The following is sound and complete for the logic KD with respect to the
class of modal structures where R is serial [1]. Let ϕ and ψ be deontic
formulas.
A3 Oϕ → Pϕ.
R1 If ⊢ ϕ and ⊢ ϕ → ψ, then ⊢ ψ (Modus Ponens).
R2 If ⊢ ϕ, then ⊢ Oϕ (Generalization).
The logic that contains only A1, A2, A3, R1, and R2 is called the standard
system of deontic logic, or KD.
5 Introduction to Complexity Theory
Complexity theory investigates the difficulty of problems in computer science. It is a fairly new field and has greatly expanded in the last thirty years. Resources such as time and space are used to measure difficulty. Problems that are considered computationally practical are those that can be solved in time polynomial in the length of the input. To understand this section, readers should have a basic foundation in algorithms and graph theory. The information presented in this section provides a foundation to build on in Section 6, where we discuss the complexity of some problems in the logics introduced in the previous section. In addition to reading this section, other sources that may be of aid when reading Section 6 are [28, 38].
5.1 Decidability
A decision problem is answered by either a 'yes' or a 'no' for each input. A solution for a decision problem is an algorithm that gives the right answer for every input. An algorithm is a detailed series of steps, like a recipe, that solves a problem. Problems that are solved by algorithms are decidable. In complexity theory, we will consider only decidable problems.
As an example, consider the model-checking problem: given a modal structure M and a modal formula ϕ, is ϕ satisfiable in M?
Notice how the problem was stated. The only solutions that would answer the problem are 'yes' or 'no.' The problem could also have been stated as: given a structure M and a modal formula ϕ, at what state w ∈ W is ϕ satisfiable? This form of the question changes the problem type, since answers would be states.
One approach to solving the model-checking problem is the following
algorithm, which is based on the algorithm found in [15]. Recall that size(ϕ)
is defined to be the number of symbols in ϕ, where ϕ is a string over Φ ∪
{(, ), 2, ∧, ¬}.
name: model-check
input: a modal formula ϕ and a modal structure M = (W, R, V)
1. let S be the set sub(ϕ)
2. sort S such that each element ψ ∈ S is listed in ascending order
   with respect to size(ψ)
3. while (S ≠ ∅)
4.   remove a formula ψ from the front of S
5.   for each w ∈ W
6.     if ψ ∈ Φ
7.       if V(ψ, w) = true
8.         label w with ψ
9.       else
10.        label w with ¬ψ
11.    if ψ = ¬ψ′
12.      if w is labeled with ¬ψ′
13.        label w with ψ
14.      else
15.        label w with ¬ψ
16.    if ψ = ψ′ ∧ ψ′′
17.      if w is labeled with ψ′ and ψ′′
18.        label w with ψ
19.      else
20.        label w with ¬ψ
21.    if ψ = 2ψ′
22.      if w′ is labeled with ψ′ for each (w, w′) ∈ R
23.        label w with ψ
24.      else
25.        label w with ¬ψ
26. for each w ∈ W
27.   if w is labeled with ϕ
28.     return 'yes'
29. return 'no'
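For concreteness, here is a compact Python rendering of model-check (ours, not the thesis's own code). Formulas are nested tuples as in the earlier sketch, and the helpers sub and size from Section 3.1's sketch are reused; W is a set of states, R a set of (state, state) pairs, and V a dictionary.

def model_check(phi, W, R, V):
    # V: dict mapping (proposition, state) to True/False; missing pairs are False
    subs = sorted(sub(phi), key=size)          # process smaller subformulas first
    true_at = {w: set() for w in W}            # which subformulas are true at each state
    for psi in subs:
        for w in W:
            if psi[0] == "p":
                holds = V.get((psi, w), False)
            elif psi[0] == "not":
                holds = psi[1] not in true_at[w]
            elif psi[0] == "and":
                holds = psi[1] in true_at[w] and psi[2] in true_at[w]
            else:                              # psi[0] == "box"
                holds = all(psi[1] in true_at[v] for (u, v) in R if u == w)
            if holds:
                true_at[w].add(psi)
    return "yes" if any(phi in true_at[w] for w in W) else "no"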
We now describe how the input is represented when determining complexity results for the model-checking problem. Let M be a directed graph, where W is the set of "nodes" and R is the set of "edges." There are two main representations used for graphs: adjacency-list and adjacency-matrix representations. We will use a modified version of an adjacency-matrix representation. The modification is an additional bit-matrix that contains the information for V. A '1' in row i, column j of this matrix means that the i-th proposition is true at the j-th state, and a '0' means that it is false. The only propositions represented in the matrix are those that occur in ϕ. Figure 5.1 is an example of a typical modal structure M, where W = {u, v, w}, R = {(u, w), (v, u), (v, v)}, and V(p1, u) = V(p2, u) = V(p2, v) = V(p1, w) = V(p2, w) = true. Figure 5.2 gives the corresponding adjacency-matrix representation for M.
[Figure 5.1: the modal structure M drawn as a directed graph over the states u, v, and w, with each state labeled by the propositions true there.]
Figure 5.2:
Array A            Array B
   u v w              u v w
u  0 0 1          p1  1 0 1
v  1 1 0          p2  1 1 1
w  0 0 0
In Figure 5.2, Array A is a typical adjacency-matrix representation for a graph. For M, Array A contains the information for W and R. Array B contains the information for V. Let m be the number of rows in Array B. We can now define the size of M, abbreviated as ||M||, to be ||W||^2 + ||W|| × m. Recall that the length of ϕ, abbreviated as |ϕ|, is the length of the encoding of ϕ.
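A small sketch (ours, in Python) of this representation for the structure of Figure 5.1:

states = ["u", "v", "w"]
props  = ["p1", "p2"]

# Array A: adjacency matrix for R = {(u, w), (v, u), (v, v)}
A = [[0, 0, 1],    # edges leaving u
     [1, 1, 0],    # edges leaving v
     [0, 0, 0]]    # edges leaving w

# Array B: bit-matrix for V; B[i][j] = 1 iff props[i] is true at states[j]
B = [[1, 0, 1],    # p1
     [1, 1, 1]]    # p2

size_of_M = len(states) ** 2 + len(states) * len(props)   # ||W||^2 + ||W|| * m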
To analyze algorithms, we use an important and useful tool of complexity theory, the O notation. Let N be the set of positive integers.
Definition Let f and g be functions from N to N. We say that f(n) is O(g(n)) if there are positive constants c and n0 such that f(n) ≤ c × g(n) for every n ≥ n0.
Intuitively, this means that the function f(n) grows slower than or at the same rate as g(n). To continue our example, we will analyze the algorithm for the model-checking problem. Remember that S contains the subformulas of ϕ. As a reminder, the size of sub(ϕ) is no greater than size(ϕ). Thus, ||S|| ≤ size(ϕ) ≤ |ϕ|. For each subformula of ϕ, we visit each state in M once, so the body of the loop between lines 5 and 25 is executed at most O(|ϕ| × ||W||) times. However, not all the steps inside the loop take constant time, that is, O(1). For example, in line 22, to find each w′ ∈ W such that (w, w′) ∈ R, the algorithm looks at w's row in Array A. If there is a '1' in row w, column w′, then the algorithm checks whether w′ has been labeled with ψ′. If a state's labels are pointers to subformulas of ϕ, this check can take O(|ϕ|) steps, so one execution of line 22 takes O(||W|| × |ϕ|) steps. Since the loop in lines 3 through 25 dominates the running time, model-check has a running time of O(|ϕ|^2 × ||W||^2).
We formalize algorithms as Turing machines. A Turing machine has a finite set of states Q (with q0 ∈ Q the initial state), a tape alphabet, and a transition function; we refer the reader to [38] for the complete definition. The time complexity of a Turing machine M that halts on all inputs is the function f : N → N, where f(n) is the maximum number of steps that M uses on any input of length n.
Worst-case analysis is used to define f(n): f(n) is the largest possible running time of M over all inputs of length n, so the bound holds for every input. In comparing running times of different Turing machines, it is easier and more convenient to compare growth rates rather than exact running-time functions. Thus, we generally use O notation to describe running times.
Some of the most well-known time complexity classes are the following:
polynomial time (P), nondeterministic polynomial time (NP), and exponen-
tial time (EXPTIME). We will begin by presenting the complexity classes
P and NP.
5.2.1 P and NP
Definition P is the class of languages A that are decidable in polynomial time (i.e., P = ∪_k DTIME(n^k), where n is the length of an input to A and k is a constant).
Definition A verifier for a language L is an algorithm V such that L = {w | V accepts on inputs w and c for some string c}. A polynomial-time verifier is a verifier that runs in polynomial time in the length of w. L is polynomially verifiable if it has a polynomial-time verifier.
name: sat-v
Input is a propositional formula ψ and a truth assignment c,
1. Test whether c is a truth assignment of ψ and
replace each proposition in ψ with its corresponding truth value.
2. Simplify ψ to a single truth value.
3. If step 1 passes and ψ is true, return ‘yes’; otherwise, return ‘no’.
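A sketch (ours, in Python) of such a verifier for propositional satisfiability, where a certificate c is a dictionary assigning truth values to proposition names; the formula encoding is our own.

def sat_v(psi, c):
    # psi is a nested tuple over ("p", name), ("not", f), ("and", f, g)
    def eval_formula(f):
        if f[0] == "p":
            if f[1] not in c:               # c must assign every proposition in psi
                raise KeyError(f[1])
            return c[f[1]]
        if f[0] == "not":
            return not eval_formula(f[1])
        return eval_formula(f[1]) and eval_formula(f[2])
    try:
        return "yes" if eval_formula(psi) else "no"
    except KeyError:
        return "no"                          # c is not a truth assignment for psi

# Example: psi = p1 ∧ ¬p2 with the certificate {p1: True, p2: False}
print(sat_v(("and", ("p", "p1"), ("not", ("p", "p2"))), {"p1": True, "p2": False}))  # yes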
It has not yet been proven whether P = NP. This question is one of the
most important unsolved problems in complexity theory.
5.2.2 NP-completeness
Complexity theory uses polynomial-time reductions as a simple way to show
that a certain problem X is at least as hard as a problem Y , without explic-
itly giving an algorithm for X. This is done by finding a polynomial-time
function that reduces Y to X.
Definition A function f is a polynomial-time computable function if there exists a Turing machine that runs in polynomial time and outputs f(w) on any input w. A language A is polynomial-time reducible, or polynomial-time many-one reducible, to B, written A ≤pm B, if a polynomial-time computable function f exists such that w ∈ A iff f(w) ∈ B. The function f is called the polynomial-time reduction from A to B. The reduction A ≤pm B is also read as "A reduces to B."
To show that a language A reduces to B, there are three parts that must
be expressed in the proof:
• a function f must be given,
• it must be proven that w ∈ A iff f (w) ∈ B,
• and it must be shown that f is computable in polynomial time.
We can now present what it means for a language to be hard compared to
a complexity class.
Definition A language A is said to be hard with respect to a complexity
class C if every language in C can be reduced to A.
Some examples of hardness are NP-hard, PSPACE-hard, and EXPTIME-
hard. We will speak more of the latter two in a later section. An important
idea to grasp is that a language A that is NP-hard intuitively means that
A is at least as hard as all languages in NP. This does not tell us which
complexity class A resides in.
Definition A language A is complete with respect to a complexity class C
if A is in C and A is also C-hard.
Proposition 5.4 If a formula ϕ is S5-satisfiable, then ϕ is satisfiable in an S5-model with at most |ϕ| states.
Proof Suppose ϕ is satisfiable in S5. By Proposition 3.3, ϕ is satisfiable in an S5-model M = (W, R, V) where, for all v, v′ ∈ W, (v, v′) ∈ R, i.e., R is universal. Suppose (M, u) |= ϕ. Let F be the set of subformulas of ϕ of the form 2ψ for which (M, u) |= ¬2ψ. For each 2ψ ∈ F, there must be some state u¬ψ ∈ W such that (M, u¬ψ) |= ¬ψ. Let M′ = (W′, R′, V′), where W′ = {u} ∪ {u¬ψ | 2ψ ∈ F}, V′ is the restriction of V to W′ × Φ, and R′ = {(v, v′) | v, v′ ∈ W′}. ||F|| < ||sub(ϕ)|| since F is a subset of sub(ϕ) − Φ and sub(ϕ) ∩ Φ is not empty, because ϕ contains at least one proposition. We already know that ||sub(ϕ)|| ≤ size(ϕ) ≤ |ϕ|, so ||W′|| ≤ |ϕ|. We now show that for all states u′ ∈ W′ and for all subformulas ψ of ϕ, (M, u′) |= ψ ⟺ (M′, u′) |= ψ. We perform induction on the structure of ψ. For the following cases, let u′ ∈ W′.
Theorem 5.5 [25] The satisfiability problem for S5 is in NP.
name: S5-sat
input: a modal formula ϕ,
1. Nondeterministically choose an S5-model M = (W, R, V)
   such that ||W|| ≤ |ϕ|. Assume propositions in Φ
   but not used in ϕ are false at each state in W.
2. for each w ∈ W,
3.   check that (M, w) |= ϕ using model-check.
4. if model-check returns 'yes,' return 'yes.'
5. return 'no.'
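The nondeterministic choice in step 1 can be replaced by exhaustive search, which is exponential but makes the use of the small-model property explicit. The following sketch (ours, in Python; it assumes ϕ is given as a nested tuple and handles only the operators ¬, ∧, and 2) tries every S5-model with a universal accessibility relation over at most |ϕ| states.

from itertools import product

def props_in(f):
    # the set of propositions occurring in f
    if f[0] == "p": return {f}
    if f[0] in ("not", "box"): return props_in(f[1])
    return props_in(f[1]) | props_in(f[2])

def eval_s5(f, assignments, w):
    # assignments: tuple of dicts, one per state, mapping propositions to bools;
    # the accessibility relation is universal, so 2ψ holds iff ψ holds at every state
    if f[0] == "p": return assignments[w][f]
    if f[0] == "not": return not eval_s5(f[1], assignments, w)
    if f[0] == "and": return eval_s5(f[1], assignments, w) and eval_s5(f[2], assignments, w)
    return all(eval_s5(f[1], assignments, v) for v in range(len(assignments)))

def s5_sat(phi, bound):
    # bound plays the role of |ϕ|: the number of states we need to consider
    ps = sorted(props_in(phi))
    for n in range(1, bound + 1):
        for bits in product([False, True], repeat=n * len(ps)):
            assignments = tuple(
                {p: bits[i * len(ps) + j] for j, p in enumerate(ps)} for i in range(n))
            if any(eval_s5(phi, assignments, w) for w in range(n)):
                return "yes"
    return "no"

# Example: 3p1 ∧ 2¬p2, written with 3ψ = ¬2¬ψ
phi = ("and", ("not", ("box", ("not", ("p", "p1")))), ("box", ("not", ("p", "p2"))))
print(s5_sat(phi, 8))   # yes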
Proposition 5.6 If a formula ϕ is KD45-satisfiable, then ϕ is satisfiable in a KD45-model with at most |ϕ| states.
Proof The construction parallels that of Proposition 5.4. Suppose (M, u) |= ϕ in a model M = (W, R, V) of the form given by Proposition 3.3 for KD45. Let F be the set of subformulas of ϕ of the form 2ψ for which (M, u) |= ¬2ψ, and for each 2ψ ∈ F pick a state u¬ψ with (M, u¬ψ) |= ¬ψ. Let M′ = (W′, R′, V′), where W′ = {u} ∪ {u¬ψ | 2ψ ∈ F}, V′ is the restriction
of V with respect to W′ × Φ, and R′ = {(u′, v) | u′ ∈ W′, v ∈ W′ − {u}}. ||F|| < ||sub(ϕ)|| since F is a subset of sub(ϕ) − Φ. We already know that ||sub(ϕ)|| ≤ size(ϕ) ≤ |ϕ|, so ||W′|| ≤ |ϕ|. We now show that for all states u′ ∈ W′ and for all subformulas ψ of ϕ, (M, u′) |= ψ ⇔ (M′, u′) |= ψ. We perform induction on the structure of ψ. For those cases where ψ is not of the form 2ψ′, we refer the reader to cases 1 through 3 of Proposition 5.4, since R and R′ have no effect on these forms. Let ψ be of the form 2ψ′ and u′ ∈ W′.
The proof for this problem follows that of Theorem 5.5, except we replace
the reference to Proposition 5.4 with Proposition 5.6.
5.3 Space Complexity
At the beginning of the previous subsection, we mentioned that determining
the difficulty of a problem is based on resources such as time and space. In
this section, we present attributes of space complexity.
Definition Let M be a Turing machine that halts on all inputs. The space complexity of M is the function f : N → N, where f(n) is the maximum number of tape cells that M uses on any input of length n. If f(n) is the space complexity of M, we say M is an f(n) space Turing machine and that M runs in space f(n).
To evaluate a QBF, we can eliminate the quantifiers one at a time: for each ∀xi, we replace the formula ∀xi ψ(xi) with ψ(xi = true) ∧ ψ(xi = false), and for each ∃xi, we replace the formula ∃xi ψ(xi) with ψ(xi = true) ∨ ψ(xi = false). Let us return to the example σ = ∀x1∃x2(x1 ∨ x2).
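Carrying out this replacement on the example (our worked computation):

∀x1∃x2(x1 ∨ x2)
  = ∃x2(true ∨ x2) ∧ ∃x2(false ∨ x2)
  = ((true ∨ true) ∨ (true ∨ false)) ∧ ((false ∨ true) ∨ (false ∨ false))
  = true ∧ true
  = true,

so σ is a true QBF.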
name: qbf
input: a QBF formula σ,
1. If σ has no quantifiers, simplify σ to a single value. If σ is true,
return ‘yes;’ otherwise, return ‘no.’
2. If σ is in the form of ∀x(ψ), recursively call qbf with input ψ
twice, once with x replaced by false and once with x
replaced by true. If ‘yes’ is returned on both calls, return ‘yes;’
otherwise return ‘no.’
3. If σ is in the form of ∃x(ψ), recursively call qbf with input ψ
twice, once with x replaced by false and once with x
replaced by true. If ‘yes’ is returned on either call, return ‘yes;’
otherwise return ‘no.’
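A direct Python rendering of qbf (ours; a QBF is represented as a nested tuple, with quantifiers ("forall", x, body) and ("exists", x, body) over a propositional matrix built from variables, "not", "and", and "or"):

def qbf(sigma, env=None):
    env = env or {}
    tag = sigma[0]
    if tag == "var":
        return env[sigma[1]]                     # quantifier-free: look up the variable
    if tag == "not":
        return not qbf(sigma[1], env)
    if tag == "and":
        return qbf(sigma[1], env) and qbf(sigma[2], env)
    if tag == "or":
        return qbf(sigma[1], env) or qbf(sigma[2], env)
    x, body = sigma[1], sigma[2]
    results = [qbf(body, {**env, x: val}) for val in (False, True)]   # two recursive calls
    return all(results) if tag == "forall" else any(results)

# σ = ∀x1 ∃x2 (x1 ∨ x2)
sigma = ("forall", "x1", ("exists", "x2", ("or", ("var", "x1"), ("var", "x2"))))
print(qbf(sigma))   # True

Note that each recursive call reuses the space of the previous one, which is the reason qbf runs in polynomial space.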
We already showed QBF to be in PSPACE. This theorem is similar to Theorem 5.2 in that QBF was the first language shown to be PSPACE-complete. Because of Theorem 5.3, we can show a problem to be PSPACE-hard by reducing QBF to it rather than reducing every problem in PSPACE to it.
Let K-Satisfiability (K-SAT) be the set containing all formulas that
are satisfiable in K. Ladner showed K-SAT is PSPACE-complete. We
will present a proof to show that a version of K-SAT is also PSPACE-
complete. Let a K2 -model be a K-model M where each state in M has
exactly 2 successors. K2 -satisfiability (K2 -SAT) contains those formulas
that are satisfiable in the set containing all K2 -models. We will show that
K2 -SAT is PSPACE-complete.
Algorithm K2-sat
input: (T, F, T′, F′), where T, F, T′, F′ are sets of modal formulas
1. if T ∪ F ⊈ Φ
2.   choose ψ ∈ (T ∪ F) − Φ
3.   if ψ = ¬ψ′ and ψ ∈ T
4.     return K2-sat(T − {ψ}, F ∪ {ψ′}, T′, F′)
5.   if ψ = ¬ψ′ and ψ ∈ F
6.     return K2-sat(T ∪ {ψ′}, F − {ψ}, T′, F′)
7.   if ψ = ψ′ ∧ ψ′′ and ψ ∈ T
8.     return K2-sat((T ∪ {ψ′, ψ′′}) − {ψ}, F, T′, F′)
9.   if ψ = ψ′ ∧ ψ′′ and ψ ∈ F
10.    return (K2-sat(T, (F ∪ {ψ′}) − {ψ}, T′, F′)) ∨
         (K2-sat(T, (F ∪ {ψ′′}) − {ψ}, T′, F′))
11.  if ψ = 2ψ′ and ψ ∈ T
12.    return K2-sat(T − {ψ}, F, T′ ∪ {ψ′}, F′)
13.  if ψ = 2ψ′ and ψ ∈ F
14.    return K2-sat(T, F − {ψ}, T′, F′ ∪ {ψ′})
15. if T ∪ F ⊆ Φ
16.   if T ∩ F ≠ ∅
17.     return 'no'
18.   else
19.     for each possible subset B of F′,
20.       if (K2-sat(T′, B, ∅, ∅) ∧ K2-sat(T′, F′ − B, ∅, ∅))
21.         return 'yes'
22.     return 'no'
The above algorithm is based on Ladner's K-WORLD algorithm. The main difference is that lines 18-22 of K2-sat restrict the number of successor states to exactly 2. Line 20 ensures that if ϕ is K2-satisfiable, it is K2-satisfiable in a structure where each state has only 2 successor states. In lines 1-14, we break down ϕ into its underlying propositions and modal subformulas (i.e., those subformulas of the form 2ψ′). This process involves observing whether each subformula ψ of ϕ that is not a proposition is of the form ¬ψ′, ψ′ ∧ ψ′′, or 2ψ′, where ψ′, ψ′′ are subformulas of ϕ. Whether ψ′ (and ψ′′) is added to T, F, T′, or F′ depends on both which set ψ belongs to and the semantics of the operators ¬, ∧, 2. One way to understand this algorithm is to imagine that, while breaking down ϕ, there is some given state w of some given K2-model M = (W, R, V) that K2-sat is observing. When T ∪ F ⊆ Φ and T ∩ F = ∅, the recursive calls in line 20 observe the two states w1, w2 ∈ W such that (w, w1), (w, w2) ∈ R. So, the algorithm now observes w1 or w2. One can see how this is possible since T′ becomes the new T and some subset of F′ becomes the new F in the recursive calls of line 20. For each formula ψ′ ∈ F′, ¬2ψ′ is true at w, so ψ′ must be false at some successor state of w. We try the different ways of splitting F′ between the two successors until the condition in line 20 is true; only one such split is needed for ϕ to be satisfiable.
To analyze K2-sat, we look at how much storage is used per recursive call, and then observe how many recursive calls can be nested. The sets T, F, T′, F′ contain only subformulas of ϕ, so storing them takes no more than O(||sub(ϕ)||) space. Since ||sub(ϕ)|| ≤ |ϕ|, we have a space complexity of O(|ϕ|) for each recursive call. Also, the recursion depth is O(|ϕ|), since each recursive call removes or decomposes a symbol of ϕ. So, our algorithm has a space complexity of O(|ϕ|^2). Ladner gives a more in-depth analysis. Thus, K2-SAT is in PSPACE.
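For readers who want to experiment, here is a Python sketch of K2-sat (ours, not Ladner's code). Formulas are nested tuples as before, and frozensets are used so formulas can be stored in sets. One detail is added that the pseudocode leaves implicit: when both successor-constraint sets are empty, the current state needs no successors to be constrained, so the call returns 'yes' rather than recursing further.

def is_prop(f): return f[0] == "p"

def k2_sat(T, F, Tb, Fb):
    # T/F: formulas required true/false at the current state;
    # Tb/Fb: formulas required true at all / false at some successor state
    nonprops = [f for f in (T | F) if not is_prop(f)]
    if nonprops:
        psi = nonprops[0]
        if psi[0] == "not":
            return (k2_sat(T - {psi}, F | {psi[1]}, Tb, Fb) if psi in T
                    else k2_sat(T | {psi[1]}, F - {psi}, Tb, Fb))
        if psi[0] == "and":
            if psi in T:
                return k2_sat((T | {psi[1], psi[2]}) - {psi}, F, Tb, Fb)
            return (k2_sat(T, (F | {psi[1]}) - {psi}, Tb, Fb) or
                    k2_sat(T, (F | {psi[2]}) - {psi}, Tb, Fb))
        # psi[0] == "box"
        return (k2_sat(T - {psi}, F, Tb | {psi[1]}, Fb) if psi in T
                else k2_sat(T, F - {psi}, Tb, Fb | {psi[1]}))
    if T & F:
        return False                      # a proposition is required both true and false
    if not Tb and not Fb:
        return True                       # no constraints on successors (added base case)
    Fb_list = list(Fb)
    for mask in range(2 ** len(Fb_list)): # split Fb between the two successors
        B = frozenset(f for i, f in enumerate(Fb_list) if mask >> i & 1)
        if k2_sat(Tb, B, frozenset(), frozenset()) and \
           k2_sat(Tb, Fb - B, frozenset(), frozenset()):
            return True
    return False

def k2_satisfiable(phi):
    return k2_sat(frozenset([phi]), frozenset(), frozenset(), frozenset())

# Example: 3p1 ∧ 2¬p1, i.e., ¬2¬p1 ∧ 2¬p1, is not K2-satisfiable
phi = ("and", ("not", ("box", ("not", ("p", 1)))), ("box", ("not", ("p", 1))))
print(k2_satisfiable(phi))   # False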
Proof Ladner uses QBF to show that X-SAT, where X is "in between" K and S4, is PSPACE-hard. We will reduce QBF to K2-SAT; by Theorem 5.3, this will show that K2-SAT is PSPACE-hard. To clarify matters, K2 is not in between K and S4: every formula valid in the set M of all K-models is also valid in the set M2 of all K2-models, but not conversely, so K2 is not covered directly by Ladner's result. The following proof is similar to Ladner's proof.
Let σ = Q1p1 . . . Qmpm ϕ(p1, . . . , pm) be a QBF. We construct a formula ψ that is satisfiable in K2 iff σ is true. We construct ψ so that the existence of a binary tree of states is forced. Each leaf represents a unique truth assignment for the propositions p1, . . . , pm, which appear in both ψ and ϕ, so there are 2^m different leaves in the binary tree. We need a formula that captures the following diagram.
[Figure 5.3: a binary tree of states; the root's two successors fix p1 true and p1 false respectively, their successors additionally fix p2 true and p2 false, and so on for each pi.]
The following formula, abbreviated as tree, gives us this effect (here 2^(k) abbreviates k nested 2 operators):
tree = ∧_{i=1}^{m} ∧_{j=0}^{m−i} (2^(i−1) 3 2^(j) pi ∧ 2^(i−1) 3 2^(j) ¬pi)
Let ψ′ be an abbreviation for the following formula:
M1 M2 · · · Mm ϕ(p1, . . . , pm), where Mi = 3 if Qi = ∃ and Mi = 2 if Qi = ∀, for 1 ≤ i ≤ m.
The formula we construct is ψ = tree ∧ ψ′.
Inductive Hypothesis For all k < j ≤ m, σwj is true given some state
wj ∈ W .
Case 1: Qk+1 = ∀. In this case,
σwk = Qk+2pk+2 . . . Qmpm ϕ(pk+1 = true, pk+2, . . . , pm) ∧ Qk+2pk+2 . . . Qmpm ϕ(pk+1 = false, pk+2, . . . , pm)
by the definition of ∀. We also know that
(M, wk) |= 2 Mk+2 · · · Mm ϕ(pk+1, . . . , pm), where Mi = 3 if Qi = ∃ and Mi = 2 if Qi = ∀, for k + 2 ≤ i ≤ m,
for some wk. Since σwk+1 is true at both successors of wk, and pk+1 = true at one successor and false at the other, σwk is true.
Case 2: Qk+1 = ∃. In this case,
σwk = Qk+2pk+2 . . . Qmpm ϕ(pk+1 = true, pk+2, . . . , pm) ∨ Qk+2pk+2 . . . Qmpm ϕ(pk+1 = false, pk+2, . . . , pm)
by the definition of ∃. We also know that
(M, wk) |= 3 Mk+2 · · · Mm ϕ(pk+1, . . . , pm), where Mi = 3 if Qi = ∃ and Mi = 2 if Qi = ∀, for k + 2 ≤ i ≤ m,
for some wk ∈ W. Since σwk+1 is true at a successor of wk, σwk is true.
Thus, σwk is true.
We can now say that σw0 = σ is true.
The last part of our proof is to show that our construction can be carried out in polynomial time. Developing ψ from σ takes O(m^2) time because of the doubly indexed conjunction in tree. Thus, K2-SAT is PSPACE-hard.
We have shown K2-SAT to be PSPACE-complete. We will now introduce a complexity class that encompasses problems believed to be more difficult than those in PSPACE.
5.4 EXPTIME
EXPTIME is the class of languages that can be decided in time exponential in the length of the input. The algorithms for these languages are sometimes referred to as brute-force algorithms.
Definition EXPTIME is the class of languages A that are decidable in exponential time (i.e., EXPTIME = ∪_k DTIME(2^(n^k)), where n is the length of an input to A and k is a constant).
EXPTIME is a superset of the previously mentioned complexity classes: P ⊆ NP ⊆ PSPACE = NPSPACE ⊆ EXPTIME. It is generally believed that the inclusions other than PSPACE = NPSPACE are strict, although only P ≠ EXPTIME has been proven. We will present examples of EXPTIME problems in the next section.
6 Classification of Complexity Results for Modal
Logics
Using Sections 4 and 5 as a basis, we present complexity results, with regard to satisfiability, for the logics presented in Section 4. We begin with a logic that is EXPTIME-complete, continuing the explanation of EXPTIME that was started at the end of Section 5.
6.1 PDL
Recall from Section 4.3 that propositional dynamic logic (PDL) is a logic used for the verification of programs. As with uni-modal logics, PDL uses propositional logic as a foundation. The main modal operator is [α], where α is a program. Also, recall that Π = {a1, a2, a3, . . .} is a nonempty set of atomic programs, and that programs are built inductively: if α and β are programs and ϕ is a modal formula, then α; β, α ∪ β, ϕ?, and α∗ are programs. Refer back to Section 4.3 for a fuller review.
Based on the definitions of EXPTIME and completeness (not to be confused with modal completeness), EXPTIME-complete describes the hardest problems in EXPTIME. PDL-satisfiability, or PDL-SAT, is the problem of determining whether a formula is satisfiable at some state of some modal structure. Fischer, Ladner, and Pratt showed that PDL-SAT is EXPTIME-complete [11, 30].
We will sketch the ideas behind the proof of Theorem 6.1, explaining the concepts without going into in-depth details. The existence of a deterministic exponential-time Turing machine that decides PDL-SAT shows PDL-SAT to be in EXPTIME. As with previously introduced logics, there is a finite model property for PDL: if a formula ϕ is PDL-satisfiable, then it is satisfiable in a modal structure with at most 2^|ϕ| states (Proposition 6.2).
We now give a sketch of the proof of Proposition 6.2. Fischer and Ladner presented a technique, called the filtration process, that essentially proves Proposition 6.2. Given a modal structure M such that ϕ is satisfiable in M, those states that agree on the truth values of all subformulas of ϕ can be collapsed into a single state. We know that ||sub(ϕ)|| ≤ |ϕ|, so the size of the power set of sub(ϕ), which is at most 2^|ϕ|, gives the greatest possible number of states in the collapsed M.
With this property, a naïve algorithm would guess a structure M = (W, R, V) such that ||W|| is at most 2^|ϕ| and check that ϕ is satisfiable in it. The complexity of this algorithm is nondeterministic exponential time (NEXPTIME), since we are guessing a modal structure of the size given by the finite model property. Since it is not known whether NEXPTIME = EXPTIME, this does not immediately yield a deterministic exponential-time algorithm.
However, there is a solution to this problem. Let Mu be a structure such that every formula ϕ that is PDL-consistent is satisfied at some state of Mu. Berman, Kozen, and Pratt each showed that such a structure exists [4, 23, 31]. The deterministic algorithm presented by Pratt applies the filtration process to Mu with respect to a given formula ϕ. This produces a new structure Mϕ that contains at most 2^|ϕ| states; Mϕ is simply what remains of Mu after the filtration process has finished. Pratt shows that the process takes no more than O(2^|ϕ|) steps. Using the model-checking algorithm, we can then easily determine whether ϕ is satisfiable in Mϕ. Thus, the improved algorithm has a complexity of O(2^|ϕ|).
We now present alternating Turing machines as an aid to understanding the complexity of PDL. Recall how a nondeterministic Turing machine can be simulated by a deterministic Turing machine: at each step that requires a "guess," the deterministic Turing machine runs each possible computation path until an accepting state is reached. Alternating Turing machines encompass nondeterminism along with an additional feature. Within the computation tree of an alternating Turing machine, each node is labeled as universal or existential. Existential nodes are essentially points of nondeterminism: they accept if some computation path from them accepts. For a universal node to accept, every possible computation path from it must accept.
[Figure: an example computation tree for an alternating Turing machine, with an existential root node (marked ∨) whose children are universal nodes (marked ∧).]
There are also complexity classes based on alternating Turing machines, just as P, PSPACE, and EXPTIME are based on deterministic Turing machines and NP is based on nondeterministic Turing machines. ATIME(f(n)) is the class of languages decided by alternating Turing machines with time complexity f(n), and ASPACE(f(n)) is the class of languages decided by alternating Turing machines with space complexity f(n). How do these "newer" complexity classes compare with established complexity classes?
The key fact is that, for space bounds f(n) ≥ log n, ASPACE(f(n)) equals the class of languages decidable deterministically in time 2^O(f(n)). That is, each alternating Turing machine with linear space complexity can be converted into a deterministic Turing machine whose time complexity is within a single exponential of the input length, and vice versa. This fact is helpful in explaining the EXPTIME-hardness of PDL-SAT.
Fischer and Ladner showed PDL-SAT to be EXPTIME-hard by constructing, for a given linear-space-bounded one-tape alternating Turing machine M and a given input of length n over M's input alphabet, a PDL formula ϕ such that any structure satisfying ϕ encodes the computation tree of M on that input. Because of the fact above, this shows that PDL-SAT is at least as hard as any problem in EXPTIME.
Strict PDL (SPDL) is the name given to the PDL variation in which only deterministic while programs are allowed. Deterministic while programs are the programs described by the following:
• CPDL (PDL with converse operator, which allows a program to be
run backwards): EXPTIME-complete [43]
6.3 Temporal Logic and Deontic Logic
Just a few of the simplest temporal logics are listed below.
• Kt :[39] PSPACE-complete
• Tt :[39] PSPACE-complete
References
[1] L. Åqvist, Deontic Logic. In D.M. Gabbay and F. Guenthner (eds.), Handbook of Philosophical Logic, Vol. II: 605-714, Reidel, Dordrecht/Boston, 1984.
[8] S. Goldwasser, S. Micali, and C. Rackoff, The knowledge complexity
of interactive proof systems. In SIAM Journal on Computing, 18(1):
186-208, 1989.
[9] E.A. Emerson and C. Jutla, The complexity of tree automata and logics
of programs. In Proc. 29th Symp. Foundations of Comput. Sci., IEEE,
1988.
[10] R. Fagin, J.Y. Halpern, and M.Y. Vardi, What can machines know?
On the properties of knowledge in distributed systems. In Journal of the
ACM 39(2): 328-376, 1992.
[11] M.J. Fischer and R.E. Ladner, Propositional dynamic logic of regular
programs. In J. Comput. Syst. Sci. 18(2): 194-211, 1979.
[13] J.Y. Halpern and R. Fagin, Modelling knowledge and action in dis-
tributed systems. In Distributed Computing 3(4): 159-179, 1989. A pre-
liminary version appeared in Proc. 4th ACM Symposium on Principles of
Distributed Computing, 1985 with a title “A formal model of knowledge,
action, and communication in distributed systems: Preliminary report.”
[14] J.Y. Halpern and J.H. Reif, The propositional dynamic logic of deter-
ministic, well-structured programs. In Theor. Comput. Sci. 27: 127-165,
1983.
[16] D. Harel, A. Pnueli, and M. Vardi, Two dimensional temporal logic and
PDL with intersection. Unpublished, 1982.
[17] D. Harel, D. Kozen, and J. Tiuryn, Dynamic Logic, The MIT Press:
Cambridge, MA, 2000.
[19] R. Hilpinen, Deontic Logic: Introductory and Systematic Readings, Dor-
drecht: D. Reidel, 1971.
[20] J. Hintikka, Knowledge and Belief, Cornell University Press: Ithaca,
NY. 1962.
[21] G.E. Hughes and M.J. Cresswell, An Introduction to Modal Logic.
Methuen: London. 1968.
[22] S. Kripke, A semantical analysis of modal logic I: normal modal propo-
sitional calculi. In Zeitschrift für Mathematische Logik und Grundlagen
der Mathematik, 9: 67-96, 1963. Announced in Journal of Symbolic Logic
24: 323, 1959.
[23] D. Kozen, Logics of programs. Lecture notes, Aarhus University: Den-
mark, 1981.
[24] R.E. Ladner, Unpublished, 1977.
[25] R. E. Ladner, The computational complexity of provability in systems
of modal propositional logic. In SIAM Journal on Computing 14(1):
113-118, 1981.
[26] L. Levin, Universal search problems (in Russian). In Problemy
Peredachi Informatsii 9(3): 115-116, 1973.
[27] P. Milgrom, An axiomatic characterization of common knowledge. In
Econometrica 49(1): 219-222, 1981.
[28] C.H. Papadimitriou, Computational Complexity, Addison-Wesley Publishing Company: Reading, MA, 1994.
[29] D. Peleg, Concurrent Dynamic Logic. In Journal of the Association for
Computing Machinery 34(2): 450-479, 1987.
[30] V. Pratt, Semantical Considerations on Floyd-Hoare logic. In Proc. 17th
IEEE Symp. on Foundations of Computer Science: 109-121, 1976.
[31] V.R. Pratt, Dynamic algebras and the nature of induction. In Proc.
12th Symp. Theory of Comput.: 22-28, ACM, 1980.
[32] A. Prior, Past, Present, and Future, Clarendon Press: Oxford, 1967.
[33] R. Reiter, On integrity constraints. In M.Y. Vardi, editor, Proc. Second
Conference on Theoretical Aspects of Reasoning about Knowledge: 97-
112. Morgan Kaufmann: San Francisco, CA, 1988.
[34] S. Safra, On the complexity of ω-automata. In Proc. 29th Symp. Foun-
dations of Comput. Sci.: 319-327, IEEE, 1988.
[35] M. Sato, A study of Kripke-style methods for some modal logics
by Gentzen’s sequential method. In Publications Research Institute for
Mathematical Sciences, Kyoto University 13(2), 1977.
[36] W.J. Savitch, Relationships between nondeterministic and determinis-
tic tape complexities. In JCSS 4(2): 177-192, 1970.
[37] K. Segerberg, A completeness theorem in the modal logic of programs
(preliminary report). In Not. Amer. Math. Soc. 24(6): A-552, 1977.
[38] Michael Sipser, Introduction to the Theory of Computation, PWS Pub-
lishing Company: Boston, 1997.
[39] E. Spaan, The complexity of propositional tense logics. In M. de Rijke
(ed) Diamonds and Defaults, Studies in Logic, Language and Informa-
tion, Kluwer, Dordrecht, 1993.
[40] L.J. Stockmeyer and A.R. Meyer, Word problems requiring exponen-
tial time: preliminary report. In Proc. 5th ACM Symp. on Theory of
Computing: 1-9, 1973.
[41] A.M. Turing, On computable numbers, with an application to the
Entscheidungsproblem. In Proceedings, London Mathematical Society:
230-256, 1936.
[42] M.K. Valiev, Decision complexity of variants of propositional dynamic
logic. In Proc. 9th Symp. Math. Found. Comput. Sci.: Volume 88 of Lect.
Notes in Comput. Sci.: 656-664, Springer-Verlag, 1980.
[43] M.Y. Vardi, Reasoning about the past with two-way automata. In Proc.
25th Int. Colloq. Automata Lang. Prog.: Volume 1443 of Lect. Notes in
Comput. Sci.: 628-641, Springer-Verlag, 1998.
[44] R.J. Wieringa and J.-J. Ch. Meyer, Applications of Deontic Logic in
Computer Science: A Concise Overview. In Deontic Logic in Computer
Science: Normative System Specification: 17-40, John Wiley & Sons:
Chichester, 1993.
[45] Y. Yemini and D. Cohen, Some issues in distributed processes communi-
cation. In Proc. of the 1st International Conf. on Distributed Computing
Systems: 199-203, 1979.