EECE 338 - Course Summary
The formal definition of computation for a DFA is as follows: Let M = (Q, Σ, δ, q0 , F ) be a DFA and let w = w1 w2 ...wn be a
string where each wi ∈ Σ. Then, M accepts w if a sequence of states r0 , r1 , ..., rn ∈ Q exists such that:
(1) r0 = q0
(2) δ(ri , wi+1 ) = ri+1 for i = 0, ..., n − 1
(3) rn ∈ F
A DFA must have transitions in each state that cover every possible case.
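The three acceptance conditions translate directly into code. Below is a minimal sketch in Python; the even-number-of-1s DFA is an illustrative example, not taken from these notes.

```python
# A minimal DFA acceptance check mirroring the formal definition:
# r0 = q0, r_{i+1} = delta(r_i, w_{i+1}), accept iff r_n is in F.
# Example DFA (illustrative): accepts binary strings with an even number of 1s.

def dfa_accepts(Q, Sigma, delta, q0, F, w):
    r = q0                       # condition (1): r0 = q0
    for symbol in w:             # condition (2): follow delta on each w_i
        r = delta[(r, symbol)]   # delta is total: every (state, symbol) is defined
    return r in F                # condition (3): r_n in F

delta = {('even', '0'): 'even', ('even', '1'): 'odd',
         ('odd', '0'): 'odd', ('odd', '1'): 'even'}

print(dfa_accepts({'even', 'odd'}, {'0', '1'}, delta, 'even', {'even'}, '1011'))
# False: '1011' has three 1s
```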
A nondeterministic finite automaton is also a 5-tuple (Q, Σ, δ, q0 , F ) having the same formal definition as a DFA, with the excep-
tion that the transition function is δ : Q × Σ → P(Q) instead.
An NFA also has a similar formal definition of computation to a DFA, with the exception that ri+1 ∈ δ(ri , wi+1 ) for i = 0, ..., n − 1.
NFAs differ from DFAs in 3 ways: (1) δ(q, a) can be a set of several states, (2) δ(q, a) can be undefined (the empty set), and (3) ε-transitions are allowed.
Regular Operations
We say that a language A is regular if some finite automaton (DFA or NFA) M exists such that L(M ) = A.
Let A1 and A2 be regular languages. We can say that the class of regular languages is closed under the regular operations (union, concatenation, and star) by constructing an NFA N to recognize
each operation:
Figure 1. (a) Closed Under Union (b) Closed Under Concatenation (c) Closed Under Star
Note: Regular languages are also closed under intersection, complementation, and reversal.
Regular Expressions
Non-regular Languages
L(G) = {w ∈ Σ∗ | S =⇒∗ w} is a context-free language.
A grammar G is called ambiguous if ∃ a string w ∈ L(G) with 2 different parse trees (different leftmost derivations).
Closure Properties
Given CFGs G1 = (υ1 , Σ, R1 , S1 ) and G2 = (υ2 , Σ, R2 , S2 ), we can construct CFG G = (υ, Σ, R, S) such that G is:
• Closed Under Union: υ = υ1 ∪ υ2 ∪ {S} and R = R1 ∪ R2 ∪ {S → S1 |S2 }
• Closed Under Concatenation: υ = υ1 ∪ υ2 ∪ {S} and R = R1 ∪ R2 ∪ {S → S1 S2 }
• Closed Under Star: Add to R the rule S → SS | ε
• Closed Under Intersection with a Regular Language: If A is a CFL and B is a regular language, then A ∩ B is a
CFL.
• Closed Under Reverse
• NOT Closed Under Intersection
• NOT Closed Under Complement
Theorem: Any CFL is generated by a CFG in CNF. We can convert a given CFG to CNF through the following steps:
(1) Add a new start variable: S0 → S
(2) Remove ε-rules: Remove all rules A → ε where A is not the start variable. For each occurrence of A in the right-hand
side of a rule, we add a new rule with that occurrence deleted (e.g. for R → uAvAw, we add the rules R → uvAw, R →
uAvw, R → uvw)
(3) Remove unit rules: Remove rules of the form A → B, where A and B are both variables. Then, whenever a rule B → u
appears (where u is a string of terminals and/or variables), we add the rule A → u.
(4) Convert remaining rules into the proper form: Replace each rule A → u1 u2 ...uk , where k ≥ 3, with the rules
A → u1 A1 , A1 → u2 A2 , A2 → u3 A3 , ... , Ak−2 → uk−1 uk . Replace any terminal ui in the preceding rules with the new
variable Ui and add the rule Ui → ui .
Note: Deriving any string w of size n from a grammar G in CNF takes at most 2n − 1 derivations (steps).
Given a CFG in CNF and a string w ∈ Σ∗ , we can check if w ∈ L(G) in poly-time using the following algorithm:
• Say w = w1 ...wn , where n > 0 (If n = 0, simply check if S → ε ∈ R)
• Base Case: If i = j, check if ∃ A → wi ∈ R
• Recurrence: If j > i, A =⇒∗ wi ...wj if ∃ rule A → BC and i ≤ k < j such that B =⇒∗ wi ...wk and C =⇒∗ wk+1 ...wj .
• Time: O(|G| × n^3 )
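The base case and recurrence above are exactly the CYK table-filling algorithm. A Python sketch follows; the rule encoding and the example CNF grammar for {0^n 1^n | n ≥ 1} are illustrative assumptions, not taken from the notes.

```python
# A sketch of the table-filling (CYK) algorithm from the recurrence above.
# Grammar in CNF, hypothetical encoding:
#   terminal rules A -> a  as {('A', 'a'), ...}
#   binary rules A -> BC   as {('A', ('B', 'C')), ...}

def cyk(w, start, terminal_rules, binary_rules, empty_ok=False):
    n = len(w)
    if n == 0:
        return empty_ok  # corresponds to checking whether S -> eps is in R
    # T[i][j] = set of variables A with A =>* w_i ... w_j (0-indexed, inclusive)
    T = [[set() for _ in range(n)] for _ in range(n)]
    for i in range(n):  # base case: i == j
        T[i][i] = {A for (A, a) in terminal_rules if a == w[i]}
    for length in range(2, n + 1):          # span length
        for i in range(n - length + 1):
            j = i + length - 1
            for k in range(i, j):           # split point, i <= k < j
                for (A, (B, C)) in binary_rules:
                    if B in T[i][k] and C in T[k + 1][j]:
                        T[i][j].add(A)
    return start in T[0][n - 1]

# Example CNF grammar generating {0^n 1^n | n >= 1}:
# S -> ZB | ZO, B -> SO, Z -> 0, O -> 1
terminal_rules = {('Z', '0'), ('O', '1')}
binary_rules = {('S', ('Z', 'B')), ('S', ('Z', 'O')), ('B', ('S', 'O'))}
print(cyk('0011', 'S', terminal_rules, binary_rules))  # True
```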
PDA M accepts string w if w can be expressed as w = w1 ...wm , where each wi ∈ Σ ∪ {ε}, and ∃ a sequence of states r0 , r1 , ..., rm ∈ Q and
strings s0 , s1 , ..., sm ∈ T ∗ representing the history of stack contents such that:
(1) r0 = q0 and s0 = ε (empty stack)
(2) (ri , b) ∈ δ(ri−1 , wi , a) for i = 1, ..., m, where ∃ a, b ∈ T ∪ {ε} and t ∈ T ∗ such that si−1 = at and si = bt
(3) rm ∈ F
• ⇐ direction: Given a PDA P = (Q, Σ, T, δ, q0 , {qaccept }), we can construct an equivalent CFG G = ({Apq | p, q ∈ Q}, Σ, R, Aq0 qaccept )
by following these steps (Note that Apq is the variable in G that generates the strings that take P from state p on an empty stack to state q on an empty
stack):
(1) For each p, q, r, s ∈ Q, u ∈ T, and a, b ∈ Σ ∪ {ε} : If δ(p, a, ε) contains (r, u) and δ(s, b, u) contains (q, ε), add the rule
Apq → aArs b to R (If the first pushed symbol (u) and the last popped symbol (u) between states p and q are the same,
add the mentioned rule).
(2) For each p, q, r ∈ Q, add the rule Apq → Apr Arq to R (If the first pushed symbol and the last popped symbol are
NOT the same, add the mentioned rule, where r is the state between p and q when the stack becomes empty).
(3) Finally, for each p ∈ Q, add the rule App → ε to R
Non-Context-Free Languages
Computation of a TM: A TM computes by first receiving an input w = w1 w2 ...wn ∈ Σ∗ on the leftmost n squares of its tape,
the rest of the tape being filled with the blank symbol ⊔. The head starts at the leftmost square of the tape, and the TM can recognize where the input
has ended by detecting the first ⊔, since ⊔ ∉ Σ. Then, the computation proceeds according to the rules of the transition function
δ. If the TM ever tries to move its head to the left off the left-hand end of the tape, it stays in place. The computation of a TM
continues until it reaches an accept or reject state (halts). If neither state is reached, the TM computes forever.
Configurations of a TM: A setting of the current (1) state, (2) tape contents, and (3) head position is called a configuration
of a TM. For a state q and two strings u, v over the tape alphabet T , and the head positioned over the first symbol of v, the
configuration is written as uqv . We say that a configuration C1 yields C2 if C1 goes to C2 in one step. Common configurations
include:
• Starting Configuration: q0 w, where w is the input string
• Accepting Configuration: uqaccept v
• Rejecting Configuration: uqreject v
A TM accepts input if ∃ a sequence of configurations that begin with the starting configuration and end with the accepting con-
figuration, with each configuration in between legally yielded from the one that precedes it.
A language A is:
• Turing Recognizable (Recursively Enumerable) if ∃ a TM M such that L(M ) = A, and on input w, M either accepts,
rejects, or loops forever.
• Co-Turing Recognizable if its complement is TR.
• Decidable if ∃ a decider TM M (halts on all inputs) such that L(M ) = A
(1) Multitape Turing Machine: A multitape TM is similar to a single-tape TM, except that it contains k tapes. It has the
transition function δ : Q × T k → Q × (T × {L, R})k . Initially, the input is placed on tape 1, while all other tapes are
initialized to blanks (⊔).
Theorem: Every multitape TM has an equivalent single-tape TM. Given a k-tape TM M, we can construct a single-tape
TM M’ that simulates M by:
(a) Storing the non-blank contents of the k tapes on a single tape, separated by a new symbol #.
(b) Marking the symbol under the heads with dots, and adding the marked symbols to the tape alphabet.
The slowdown for this conversion is quadratic: t(n) → O(t(n)2 ), where n is the length of the input.
(2) Nondeterministic Turing Machine: Is similar to a TM, and has the transition function δ : Q × T → P(Q × T × {L, R}).
Its computation tree contains branches that are either (1) accepting, (2) rejecting, or (3) non-halting. If any branch in this
tree reaches an accepting configuration, the NTM accepts.
Theorem: Every NTM N has an equivalent deterministic (decider, if N always halts) TM D. We can construct D as
follows:
(a) On input w, D simulates N by breadth-first search (BFS) on the computation tree of N on w (instead of DFS,
which can get D stuck on non-halting branches).
(b) If BFS finds an accepting configuration, accept. If BFS returns without finding an accepting configuration, reject.
(3) Enumerator: An enumerator E is a TM with 2 tapes (a read/write work tape and a write-only output tape). It is also a 5-tuple
with the transition function δ : Q × T → Q × (T × {L, R}) × ((Σ ∪ {⊔}) × {S, R}). This means that the enumerator’s work tape
(associated with (T × {L, R})) can move left (L) and right (R), while the output tape (associated with ((Σ ∪ {⊔}) × {S, R}))
can move right (R) or stay in place (S). Initially, the tapes are empty, and according to the transition function, E runs forever.
Church-Turing Thesis
The Church-Turing Thesis is a hypothesis that states: a function is computable (in the intuitive, algorithmic sense) ⇔ it is computable by a TM.
Chapter 4 - Decidability
Decidable Problems - Regular Languages
Proof: Let R be a TM that decides ECFG (This algorithm checks if the start variable can generate a string of terminals)
R = ”On input hGi, where G is a CFG:
(1) Mark all terminal symbols in G.
(2) Repeat until no new variables get marked: Mark any variable A where G has a rule A → U1 U2 ...Uk and each symbol
U1 , ..., Uk has already been marked.
(3) If the start variable is not marked, accept. Otherwise reject.”
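R's marking procedure can be sketched as follows; the rule encoding (pairs of a variable and its right-hand side, with an ε-rule as an empty list) is an illustrative assumption.

```python
# A sketch of R's marking procedure for E_CFG. Grammar encoded (hypothetically)
# as a list of rules (A, [U1, ..., Uk]); variables are uppercase strings,
# terminals lowercase; an eps-rule has an empty right-hand side.

def cfg_language_is_empty(rules, start, terminals):
    marked = set(terminals)            # step (1): mark all terminals
    changed = True
    while changed:                     # step (2): repeat until no new marks
        changed = False
        for (A, rhs) in rules:
            if A not in marked and all(u in marked for u in rhs):
                marked.add(A)          # A can generate a string of terminals
                changed = True
    return start not in marked         # step (3): unmarked start => L(G) empty

# S -> AB, A -> a, B -> Bb (B never terminates, so L(G) is empty)
rules = [('S', ['A', 'B']), ('A', ['a']), ('B', ['B', 'b'])]
print(cfg_language_is_empty(rules, 'S', {'a', 'b'}))  # True
```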
Countability
Lemma 2: If a set is infinite and its elements can be listed in a sequence (enumerated), then this set is countable.
Consider the diagonal, and let x = 0.b1 b2 b3 ... where bi ≠ aii and bi ∉ {0, 9} (to avoid numbers with two decimal representations). Thus, ∀i, f (i) ≠ x, since they disagree on
the ith digit. Therefore, f is NOT surjective (onto), and so R is uncountable.
Theorem: ATM = {hM, wi | M is a TM that accepts input string w} is undecidable (TR) (Proof is by diagonalization, not
included in these notes).
Chapter 5 - Reducibility
Undecidable Problems from Language Theory
Proof: Assume that REGU LARTM is decided by a TM R. Construct a decider TM S for ATM :
S = ”On input hM, wi:
(1) TM M1 = ”On input x:
(a) If x is of the form 0^n 1^n , accept
(b) Else, run M on w. If M accepts, accept.”
(2) Run R on input hM1 i
(3) If R accepts, accept. If R rejects, reject.”
(4) Contradiction =⇒ REGU LARTM is undecidable (Neither TR nor CoTR).
A Linear Bounded Automaton (LBA) is a TM that is not allowed to move to a portion of the tape not containing the input. It
uses O(n) memory, where n is the input length.
Lemma: Let M be an LBA with q states and g symbols ∈ T , and let n be the length of the input string w. Then, there are at
most qng^n configurations of M on w
Computation History: Let M be a TM and w be an input string. An accepting computation history of M on w is a sequence
of configurations C1 , C2 , ..., Cl such that
(1) C1 is the starting configuration of M on w
(2) Ci+1 follows legally from Ci
(3) Cl is an accepting configuration
Note: The Ci must be distinct. If they are not, then M would loop infinitely on w
Now, we assume that ELBA is decided by TM R, and we construct a TM decider S for ATM :
S = ”On input hM, wi:
(1) Construct LBA B as previously mentioned, which accepts its input x iff x is an accepting computation history of M on w.
(2) Run R on hBi
(3) If R accepts, reject. Else, accept.”
(4) Contradiction =⇒ undecidable
Now, we construct D, the equivalent PDA of G using the construction technique mentioned in Chapter 2. D starts by nondeter-
ministically deciding which of the three conditions to check. If conditions 1 and 3 are satisfied (easy to check), D accepts. To
check the 2nd condition, we must have written the computation histories in G to be in alternating reverse order. Now, D can
push a configuration to the stack, and check whether the configuration after it follows legally from it by popping each symbol and
comparing. The rest of the proof is the same as the previous proof.
Mapping Reducibility
A function f : Σ∗ → Σ∗ is a computable function if a TM M , on every input w, halts with just f (w) on its tape.
Mapping Reducibility: A language A is mapping reducible to language B, written A ≤m B, if there is a computable function
f : Σ∗ → Σ∗ , where for every w
w ∈ A ⇔ f (w) ∈ B
Properties:
• A ≤m B, and B is Decidable/TR/CoTR =⇒ A shares the same properties
• A ≤m B, and A is Undecidable/Not TR/Not CoTR =⇒ B shares the same properties
• A is both TR and CoTR ⇔ A is decidable
• A is TR or CoTR and A ≤m Ā (where Ā denotes the complement of A), then A is decidable
• A ≤m B =⇒ Ā ≤m B̄
• A ≤m B and B ≤m C =⇒ A ≤m C
To prove language A:
• Not TR: ĀTM ≤m A (equivalently, ATM ≤m Ā)
• Not CoTR: ATM ≤m A
• ETM ≤m EQTM
• ATM ≤m ĒLBA (the complement of ELBA )
• Every t(n) ≥ n time multitape TM has an equivalent O(t(n)2 ) time single-tape TM (Proof: After constructing a single-
tape (S) TM from a k-tape (M ) TM as described on page 4, we analyze the simulation: To simulate each of M ’s steps, S
performs two scans and possibly up to k rightward shifts (O(t(n))). Since M uses O(t(n)) steps, the entire simulation takes
O(t(n)) (since S has to setup its tape at the beginning) + O(t(n)2 ) = O(t(n)2 ) steps).
• Every t(n) ≥ n time single-tape NTM has an equivalent 2^O(t(n)) time single-tape deterministic TM (Proof: After con-
structing a single-tape deterministic TM from an NTM N as described on page 5, we analyze the simulation: The maximum
length of any branch = t(n) and each node in the tree can have at most b children (where b = the maximum number of
nondeterministic choices in N ’s transition function), so the total number of leaves in the tree is at most b^t(n) =⇒ The total
number of nodes in the tree is bounded by O(b^t(n) ). By BFS, we need O(t(n)) time to traverse a branch, so the total running
time would be O(t(n) · b^t(n) ) = 2^O(t(n)) ).
• Any language that can be decided in o(n log n) (small-o) time on a single-tape TM is regular.
The Class P
P is the class of languages that are decidable in polynomial time on a deterministic single-tape TM. Formally defined,
P = ∪k TIME(n^k )
Theorem: P AT H ∈ P
Proof: M = ”On input hG, s, ti, where G is a directed graph with nodes s and t:
(1) Place a mark on node s.
(2) Repeat the following until no additional nodes are marked:
(a) Scan all edges of G. If an edge (a, b) is found going from a marked node a to an unmarked node b, mark b.
(3) If t is marked, accept. Otherwise, reject.”
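M's marking algorithm can be transcribed directly; the edge-list encoding of the graph is an illustrative assumption.

```python
# A direct transcription of M's marking algorithm for PATH.
# Graph encoding (assumed): a list of directed edges over hashable node names.

def path_exists(edges, s, t):
    marked = {s}                        # step (1): mark s
    changed = True
    while changed:                      # step (2): repeat until no new marks
        changed = False
        for (a, b) in edges:
            if a in marked and b not in marked:
                marked.add(b)           # edge from a marked node to an unmarked one
                changed = True
    return t in marked                  # step (3)

edges = [(1, 2), (2, 3), (3, 4), (5, 1)]
print(path_exists(edges, 1, 4))  # True: 1 -> 2 -> 3 -> 4
print(path_exists(edges, 4, 1))  # False: no edges leave 4
```

Each pass over the edges marks at least one new node or terminates, so the loop runs at most |V| times over |E| edges: polynomial, as the theorem requires.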
The Class NP
A verifier for a language A is an algorithm V , where A = {w | V accepts hw, ci for some string c}, where w is the input string and
c is the certificate (proof). A poly-time verifier runs in poly-time in the length of w. A language A is polynomially verifiable if
it has a poly-time verifier.
Theorem: SU BSET − SU M = {hS, ti | S = {x1 , ..., xk }, and for some {y1 , ..., yl } ⊆ {x1 , ..., xk } we have Σyi = t} ∈ N P
Verifier Proof: V = ”On input hhS, ti, ci:
(1) Test whether c is a collection of numbers that sum to t.
(2) Test whether S contains all the numbers in c.
(3) If both pass, accept. Otherwise, reject.”
NTM Proof: N = ”On input hS, ti:
(1) Nondeterministically select a subset c of the numbers in S.
(2) If c sums to t, accept. Otherwise, reject.”
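The verifier V can be sketched as follows; the list encoding of S and of the certificate c is an illustrative assumption. Note that the subset check must respect multiplicities, since S is a collection that may repeat numbers.

```python
# A sketch of the verifier V for SUBSET-SUM: both tests run in time
# polynomial in the input length.

from collections import Counter

def verify_subset_sum(S, t, c):
    sums_to_t = (sum(c) == t)                 # test (1): c sums to t
    # test (2): c must be a sub-multiset of S (Counter subtraction keeps
    # only positive counts, so an empty result means containment)
    is_subset = not (Counter(c) - Counter(S))
    return sums_to_t and is_subset

S = [4, 11, 16, 21, 27]
print(verify_subset_sum(S, 25, [4, 21]))   # True: a valid certificate
print(verify_subset_sum(S, 25, [25]))      # False: 25 is not in S
```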
NP-Completeness
If a language known to be NP-complete is proven to ∈ P, all problems in NP would be poly-time solvable (P becomes = NP).
A function f : Σ∗ → Σ∗ is a poly-time computable function if a poly-time TM M , on every input w, halts with just f (w) on its tape.
Poly-time Mapping Reducibility: A language A is poly-time mapping reducible to language B, written A ≤p B, if there is a
poly-time computable function f : Σ∗ → Σ∗ , where for every w
w ∈ A ⇔ f (w) ∈ B
Theorem: A language B is NP-complete if it satisfies two conditions:
(1) B ∈ N P
(2) Every A ∈ N P ≤p B
The Cook-Levin Theorem: SAT = {hφi | φ is a satisfiable Boolean formula} is NP-complete (proof not included in these notes)
Theorem: 3SAT ≤p V ERT EX − COV ER = {hG, ki | G is an undirected graph that has a k−node vertex cover} Proof Idea:
Let φ be a 3-cnf boolean formula with l clauses. The reduction generates a string hG, ki, where G is an undirected graph defined as
follows:
(1) Variable Gadget: For every variable x in φ, we create two nodes labeled x and x̄ (its negation) and connect them with an edge.
(2) Clause Gadget: For every clause in φ, we create a node triple whose nodes are connected and correspond to the variables
in that clause. We then connect each variable in this triple with its corresponding node in the variable gadget.
(3) We let k = m + 2l, where m = the number of variables and l = the number of clauses.
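The gadget construction can be sketched as follows; the clause encoding (a list of 3-literal clauses, with '!' marking negation) is an illustrative assumption.

```python
# A sketch of the reduction's graph construction. A 3-cnf formula is encoded
# (hypothetically) as a list of 3-literal clauses; literal x as 'x', its
# negation as '!x'. Returns the edge list and the target cover size k.

def three_sat_to_vertex_cover(clauses):
    variables = {lit.lstrip('!') for clause in clauses for lit in clause}
    edges = []
    # variable gadgets: one edge x -- !x per variable
    for x in sorted(variables):
        edges.append((('var', x), ('var', '!' + x)))
    # clause gadgets: a triangle per clause, each node wired back to the
    # matching node of the variable gadget
    for idx, clause in enumerate(clauses):
        nodes = [('clause', idx, lit) for lit in clause]
        edges.append((nodes[0], nodes[1]))
        edges.append((nodes[1], nodes[2]))
        edges.append((nodes[0], nodes[2]))
        for node in nodes:
            edges.append((node, ('var', node[2])))
    k = len(variables) + 2 * len(clauses)   # k = m + 2l
    return edges, k

edges, k = three_sat_to_vertex_cover([['x', 'y', '!z'], ['!x', 'y', 'z']])
print(k)  # 3 variables + 2*2 clauses = 7
```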
Notes:
• We say that an f (n) ≥ n space TM runs in 2^O(f (n)) time
• P ≠ EXPTIME
• Regular languages and CFLs ∈ P
Savitch’s Theorem states that any NTM that uses f (n) ≥ n space can be converted to a deterministic TM that uses O(f 2 (n)) space.
In other words, for f (n) ≥ n,
N SP ACE(f (n)) ⊆ SP ACE(f 2 (n))
where SP ACE(f (n)) is the set of languages decided by an O(f (n)) space deterministic TM, and N SP ACE(f (n)) is the set of
languages decided by an O(f (n)) space NTM.
• PSPACE is the class of languages that are decidable in poly-space on a deterministic TM.
• NPSPACE is the class of languages that are decidable in poly-space on a nondeterministic TM (NTM).
We know, by Savitch’s Theorem, that P SP ACE = N P SP ACE (since the square of any polynomial is still a polynomial).
Notes:
• N P ⊂ P SP ACE
• coN P ⊂ P SP ACE
If M is a TM with a separate read-only input tape, we define its configuration to only include the current state, the work-tape contents, and the positions of the
two heads. We no longer include the input w in the configuration =⇒ n · 2^O(f (n)) configurations.
Theorem: A = {0^k 1^k | k ≥ 0} ∈ L (Proof: Count 0’s and 1’s on the work tape (log space for binary encoding), and accept if the
counts are equal).
Theorem: A = {ww | w ∈ {0, 1}∗ } ∈ L (Proof: A first counter computes the length n of the input, and we reject if n is odd. Then, we
iterate i from 1 to n/2, and reject if any wi ≠ w(n/2)+i . Finally, we accept after the loop terminates with no rejections.)
Theorem: P AT H ∈ N L
Proof: N = ”On input hG, s, ti:
(1) Record the position of the current node at each step only on the work tape.
(2) Repeat m times, where m = the number of nodes: nondeterministically select the next node (from the nodes pointed at by the current
node). If node t is reached, accept.
(3) Reject.”
A log-space transducer is a TM that computes a function f : Σ∗ → Σ∗ , where f (w) is the string remaining on its output tape
after it halts. It consists of a:
(1) Read-only input tape
(2) Read-write work tape (may contain O(log n) symbols), and a
(3) Write-only output tape (the head on this tape can only move rightward)
A language A is log-space reducible to B, written A ≤L B, if A is mapping reducible to B by means of a log-space computable
function f .
Theorem: P AT H is NL-complete
Proof: Consider that A is decided by a nondeterministic log-space TM M , and let the function f reduce A to P AT H:
F = ”On input w,
(1) Construct directed graph G as follows:
(a) Listing the Nodes: Each node of G is a configuration of M on w, and so each node takes c log n space for some
constant c. The transducer goes through all possible strings of length c log n and tests whether each string is a legal
configuration of M on w, then outputs those that pass the test. The start node, s, is the start configuration, and the
target node, t, is the accepting configuration (we modify M to have a unique accepting configuration).
(b) Listing the Edges: Similarly, check if some configuration c1 legally yields c2 under the transition function of M
(which can be done in log-space since we only need to check the tape contents under the head locations given by c1 to
determine if the yield is legal). The transducer then outputs all pairs (c1 , c2 ) that qualify.
Notes:
• NL ⊂ P
• N L = coN L
Chapter 9 - Intractability
Circuit Complexity
A Boolean circuit is a collection of gates (AND, OR, and NOT) and inputs connected by wires. It is a Directed Acyclic Graph
(DAG).
A circuit family C is an infinite list of circuits (C0 , C1 , C2 , ...) where Cn has n input variables. We say that C decides a language
A over {0, 1} if, ∀w, w ∈ A ⇔ Cn (w) = 1, where n = len(w).
The circuit complexity of a language is the size (number of gates) complexity of a minimal circuit family for that language.
Theorem: For t(n) ≥ n, if A ∈ T IM E(t(n)), then A has circuit complexity O(t2 (n)) (Proof not included in these notes).
We say that a circuit is satisfiable if some setting of the inputs causes the circuit to output 1.
(4) Add the clause (wm ) to φ, where wm is the final output gate.
(5) Finally, for any clause containing fewer than 3 literals, we repeat one of the existing literals inside to ensure φ is in 3-cnf.
Hierarchy Theorems
Space Hierarchy
Theorem: For any space constructible function f , a language A exists that is decidable in O(f (n)) space but not in o(f (n)) space.
Proof: D is an O(f (n)) space algorithm that decides a language A which is not decidable in o(f (n)) space:
D = ”On input w:
(1) Let n be the length of w.
(2) Compute f (n) using space constructibility and mark off this much tape. If later stages ever attempt to use more, reject.
(3) If w is not of the form hM i10∗ for some TM M , reject.
(4) Simulate M on w while counting the number of steps used in the simulation. If this count ever exceeds 2^f (n) , reject.
(5) If M accepts, reject. If M rejects, accept.”
Corollaries:
• For any two functions f1 and f2 where f1 (n) = o(f2 (n)) and f2 is space constructible, SPACE(f1 (n)) ⊊ SPACE(f2 (n)).
• For any two real numbers 0 ≤ ε1 < ε2 , SPACE(n^ε1 ) ⊊ SPACE(n^ε2 )
• NL ⊊ PSPACE
• PSPACE ⊊ EXPSPACE
Time Hierarchy
Corollaries:
• For any two functions t1 and t2 where t1 (n) = o(t2 (n)/ log t2 (n)) and t2 is time constructible, TIME(t1 (n)) ⊊ TIME(t2 (n)).
• For any two real numbers 0 ≤ ε1 < ε2 , TIME(n^ε1 ) ⊊ TIME(n^ε2 )
• P ⊊ EXPTIME
A probabilistic TM is a TM that can flip coins by means of a pseudo-random number generator. It is similar to an NTM where
each probabilistic step has 2 equally likely legal moves. We can observe the following probabilities:
• P r[Branch b] = 2^−k , where k = the number of coin flips in b
• P r[M acc w] = ΣP r[b], where b is an accepting branch of M
• P r[M rej w] = 1 − P r[M acc w]
Markov Inequality: Let X be a random variable that takes nonnegative values and let a > 0. Then:
P r[X ≥ a] ≤ E(X)/a
Weak Law of Large Numbers: Let X1 , ..., Xn be pair-wise independent random variables (Xi and Xj are independent ∀i ≠ j)
with common mean q and common variance σ 2 . Then, ∀ε > 0:
P r[ |Σ(i=1..n) Xi − qn| ≥ εn ] ≤ σ^2 /(ε^2 n)
Chernoff Inequality: Let X1 , ..., Xn be independent random variables that take 0/1 values and let µ = E(Σi Xi ). Then, ∀δ > 0:
(1) Upper tail:
P r[ Σi Xi ≥ (1 + δ)µ ] ≤ (e^δ /(1 + δ)^(1+δ) )^µ ≤ 2^(−µ(1+δ)) if δ > 2e − 1, and ≤ 2^(−µδ^2 /4) otherwise
BPP (Bounded-error Probabilistic Poly-time) is the class of languages decided by a prob. poly-time TM with error probability 1/3.
The choice of the above value of error is explained by the Amplification lemma: Let A be decidable by a prob. poly-time TM M
with error probability 1/3; then, for any f (n) = poly(n), ∃ a prob. poly-time TM M ′ which decides A with error probability e^−f (n) .
Proof: M 0 = ”On input w:
(1) Let k = d48f (n)e.
(2) Run M on w k times.
(3) If most runs accept, accept. Otherwise, reject.”
Thus, we obtain (proof using the Chernoff inequality):
• w ∈ A =⇒ P r[M ′ rej w] ≤ e^−f (n)
• w ∉ A =⇒ P r[M ′ acc w] ≤ e^−f (n)
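The effect of majority voting can be illustrated empirically. This is a simulation sketch, not the formal Chernoff-based proof; base_decider is a hypothetical stand-in for the base machine M, modeled as a coin that answers correctly with probability 2/3.

```python
# Empirical illustration of the amplification lemma: majority vote over k
# independent runs of a decider with error 1/3 drives the error down rapidly.

import random

def base_decider(correct_answer, error=1/3):
    # hypothetical stand-in for M: returns the right answer w.p. 2/3
    return correct_answer if random.random() >= error else (not correct_answer)

def amplified(correct_answer, k):
    # M': run M k times and output the majority answer
    votes = sum(base_decider(correct_answer) for _ in range(k))
    return votes > k / 2

random.seed(0)
trials = 2000
err_1 = sum(not amplified(True, 1) for _ in range(trials)) / trials
err_31 = sum(not amplified(True, 31) for _ in range(trials)) / trials
print(err_1, err_31)  # the k=31 error is far below the k=1 error (~1/3)
```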
RP (Randomized Poly-time) is the class of languages A decided by a prob. poly-time TM M such that:
• w ∈ A =⇒ P r[M acc w] ≥ 1/2
• w ∉ A =⇒ P r[M acc w] = 0
We can also replace 1/2 with any constant or with ε = 1 − 2^−f (n) for a polynomial f (n) by running M k times and accepting if at
least one run accepts.
Notes:
• RP ⊂ BP P
• coRP ⊂ BP P
• RP ∩ coRP = ZP P (Zero-error Probabilistic Expected Poly-time)
• BP P ⊂ P SP ACE
• BP P ⊂ P/poly
Notes:
• A decision problem A is NP-hard if, ∀B ∈ N P, B ≤P A (A may not be in NP, e.g. TQBF ).
• NP-complete =⇒ NP-hard.
• Existence of a poly-time algorithm for A =⇒ P = N P .
An algorithm for an optimization problem is called a ρ−approximation algorithm if it finds a solution of cost:
• ≤ ρ(cost of minimum cost solution)
• ≥ ρ1 (cost of maximum cost solution)
Approx-VC is a 2-approximation algorithm for MIN-VERTEX-COVER, i.e. it produces a VC C such that |C| ≤ 2|C ∗ |, where C∗ is
a minimum VC of G.
Proof: Let A = the set of edges selected in step (3a). We have:
(1) |C| = 2|A|
(2) C ∗ contains at least one endpoint of each edge in A (since C ∗ is a VC and A is a set of edges).
(3) No 2 edges in A share an endpoint (by cleanup in step (4)).
By observations 2) and 3) =⇒ |C ∗ | ≥ |A|. Thus, by 1) =⇒ |C| ≤ 2|C ∗ |
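The algorithm the proof analyzes (whose listing is not included in these notes) is the standard Approx-VC procedure: repeatedly pick an uncovered edge, add both of its endpoints to C, and discard every edge they cover. A sketch under that assumption:

```python
# A sketch of the standard Approx-VC algorithm the proof refers to.
# Graph encoding (assumed): a list of undirected edges as 2-tuples.

def approx_vertex_cover(edges):
    C = set()
    A = []                                   # edges selected in step (3a)
    remaining = list(edges)
    while remaining:
        (u, v) = remaining[0]                # pick any remaining edge
        A.append((u, v))
        C.update([u, v])                     # maintains |C| = 2|A|
        # cleanup (step 4): drop every edge sharing an endpoint with (u, v)
        remaining = [e for e in remaining if u not in e and v not in e]
    return C

edges = [(1, 2), (2, 3), (3, 4), (4, 5)]
C = approx_vertex_cover(edges)
print(sorted(C))  # [1, 2, 3, 4]: size 4, while the minimum cover {2, 4} has size 2
```

On this path graph the algorithm returns a cover of size 4 against an optimum of 2, meeting the |C| ≤ 2|C ∗ | bound exactly.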
Properties of ≤P orL (Let CLASS be any of the classes we discussed, such as P, NP, coNP, PSPACE, EXPTIME, L, NL, coNL,...):
• A ≤P orL B and B ∈ CLASS =⇒ A ∈ CLASS.
• A ≤P orL B and A ∉ CLASS =⇒ B ∉ CLASS.
• A ≤P orL B, where A is NP(L)-complete and B ∈ N P (L) =⇒ B is NP(L)-complete.