Graph Isomorphism Notes
In graph theory, an isomorphism of graphs G and H is a bijection between the vertex sets
of G and H
such that any two vertices u and v of G are adjacent in G if and only if f(u) and f(v) are adjacent in H. This kind of bijection is commonly described as an "edge-preserving bijection", in
accordance with the general notion of isomorphism being a structure-preserving bijection. If
an isomorphism exists between two graphs, then the graphs are called isomorphic and
denoted as G ≅ H. In the case when the bijection is a mapping of a graph onto itself, i.e., when G and H are one and the same graph, the bijection is called an automorphism of G. If the graphs are finite with the same number of vertices, an edge-preserving vertex mapping can be shown to be a bijection by proving that it is either one-to-one or onto; there is no need to show both. Graph isomorphism is an equivalence relation on graphs and as such it partitions
the class of all graphs into equivalence classes. A set of graphs isomorphic to each other is
called an isomorphism class of graphs. The question of whether graph isomorphism can be
determined in polynomial time is a major unsolved problem in computer science, known as
the Graph Isomorphism problem.[1][2]
The two graphs G and H below are isomorphic, despite their different-looking drawings (the drawings themselves are not reproduced in these notes). An isomorphism between G and H is given by:
f(a) = 1, f(b) = 6, f(c) = 8, f(d) = 3,
f(g) = 5, f(h) = 2, f(i) = 4, f(j) = 7
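Since the drawings are lost here, the following minimal sketch uses an assumed pair of edge sets that is consistent with the mapping table above (G has vertices a–d each adjacent to three of g–j; H is drawn as two nested 4-cycles joined by spokes), and checks mechanically that f is an edge-preserving bijection:

```python
# A minimal sketch: verify that a vertex bijection is a graph isomorphism.
# The edge sets below are an assumed reconstruction consistent with the
# mapping table above, not the original article's drawings.

G_edges = {("a", "g"), ("a", "h"), ("a", "i"),
           ("b", "g"), ("b", "h"), ("b", "j"),
           ("c", "g"), ("c", "i"), ("c", "j"),
           ("d", "h"), ("d", "i"), ("d", "j")}

H_edges = {(1, 2), (2, 3), (3, 4), (1, 4),   # outer 4-cycle
           (5, 6), (6, 7), (7, 8), (5, 8),   # inner 4-cycle
           (1, 5), (2, 6), (3, 7), (4, 8)}   # spokes

f = {"a": 1, "b": 6, "c": 8, "d": 3, "g": 5, "h": 2, "i": 4, "j": 7}

def normalize(edges):
    """Represent each undirected edge as a frozenset, so {u,v} == {v,u}."""
    return {frozenset(e) for e in edges}

def is_isomorphism(f, G_edges, H_edges):
    if len(set(f.values())) != len(f):            # f must be injective
        return False
    mapped = {frozenset({f[u], f[v]}) for u, v in G_edges}
    return mapped == normalize(H_edges)           # edges correspond exactly

print(is_isomorphism(f, G_edges, H_edges))  # expected: True
```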
Variations
In the above definition, graphs are understood to be undirected, non-labeled, non-weighted graphs. However, the notion of isomorphism may be applied to all other variants of
the notion of graph, by adding the requirements to preserve the corresponding additional
elements of structure: arc directions, edge weights, etc., with the following exception.
Isomorphism of labeled graphs
For labeled graphs, two definitions of isomorphism are in use.
Under one definition, an isomorphism is a vertex bijection which is both edge-preserving and
label-preserving.[3][4]
Under another definition, an isomorphism is an edge-preserving vertex bijection which
preserves equivalence classes of labels, i.e., vertices with equivalent (e.g., the same) labels
are mapped onto the vertices with equivalent labels and vice versa; same with edge labels.[5]
For example, the graph with two vertices labelled with 1 and 2 has a single automorphism under the first definition, but under the second definition there are two automorphisms.
The second definition is assumed in certain situations when graphs are endowed with unique
labels commonly taken from the integer range 1,...,n, where n is the number of the vertices of
the graph, used only to uniquely identify the vertices. In such cases two labeled graphs are
sometimes said to be isomorphic if the corresponding underlying unlabeled graphs are
isomorphic (otherwise the definition of isomorphism would be trivial).
Motivation
The formal notion of "isomorphism", e.g., of "graph isomorphism", captures the informal
notion that some objects have "the same structure" if one ignores individual distinctions of
"atomic" components of objects in question. Whenever individuality of "atomic" components
(vertices and edges, for graphs) is important for correct representation of whatever is
modeled by graphs, the model is refined by imposing additional restrictions on the structure,
and other mathematical objects are used: digraphs, labeled graphs, colored graphs, rooted
trees and so on. The isomorphism relation may also be defined for all these generalizations
of graphs: the isomorphism bijection must preserve the elements of structure which define
the object type in question: arcs, labels, vertex/edge colors, the root of the rooted tree, etc.
The notion of "graph isomorphism" allows us to distinguish graph properties inherent to the
structures of graphs themselves from properties associated with graph representations: graph
drawings, data structures for graphs, graph labelings, etc. For example, if a graph has exactly
one cycle, then all graphs in its isomorphism class also have exactly one cycle. On the other
hand, in the common case when the vertices of a graph are (represented by) the integers 1, 2, …, n, then an expression such as
∑_{v ∈ V(G)} v · deg(v)
may be different for two isomorphic graphs.
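As a small illustration of this point, the following sketch computes such a labeling-dependent quantity for two isomorphic paths (the expression is the one reconstructed above; the two example graphs are hypothetical):

```python
# A small demonstration: the quantity sum(v * deg(v)) depends on which
# integers label the vertices, so it can differ between isomorphic graphs.
from collections import Counter

def weighted_degree_sum(edges):
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sum(v * d for v, d in deg.items())

path_a = [(1, 2), (2, 3)]   # path graph: 1 - 2 - 3
path_b = [(2, 1), (1, 3)]   # the same path, relabeled: 2 - 1 - 3

print(weighted_degree_sum(path_a))  # 1*1 + 2*2 + 3*1 = 8
print(weighted_degree_sum(path_b))  # 2*1 + 1*2 + 3*1 = 7
```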
Whitney theorem
Main article: Whitney graph isomorphism theorem
[Figure: the exception to Whitney's theorem. The two graphs K3 and K1,3 are not isomorphic but have isomorphic line graphs.]
The Whitney graph isomorphism theorem,[6] shown by Hassler Whitney, states that
two connected graphs are isomorphic if and only if their line graphs are isomorphic, with
a single exception: K3, the complete graph on three vertices, and the complete bipartite
graph K1,3, which are not isomorphic but both have K3 as their line graph. The Whitney
graph theorem can be extended to hypergraphs.[7]
Graph homomorphism
Definitions
In this article, unless stated otherwise, graphs are finite, undirected
graphs with loops allowed, but multiple edges (parallel edges) disallowed. A graph
homomorphism[4] f from a graph G = (V(G), E(G)) to a graph H = (V(H), E(H)), written
f : G → H,
is a function from V(G) to V(H) that maps endpoints of each edge in G to endpoints of an edge in H. Formally, {u, v} ∈ E(G) implies {f(u), f(v)} ∈ E(H), for all pairs of vertices u, v in V(G). If there exists any homomorphism from G to H, then G is said to be homomorphic to H or H-colorable. This is often denoted as just:
G → H .
The above definition is extended to directed graphs. Then, for a
homomorphism f : G → H, (f(u),f(v)) is an arc (directed edge) of H whenever
(u,v) is an arc of G.
There is an injective homomorphism from G to H (i.e., one that never maps
distinct vertices to one vertex) if and only if G is a subgraph of H. If a
homomorphism f : G → H is a bijection (a one-to-one correspondence between
vertices of G and H) whose inverse function is also a graph homomorphism,
then f is a graph isomorphism.[5]
Covering maps are a special kind of homomorphism that mirrors the definition
and many properties of covering maps in topology.[6] They are defined
as surjective homomorphisms (i.e., something maps to each vertex) that are also
locally bijective, that is, a bijection on the neighbourhood of each vertex. An
example is the bipartite double cover, formed from a graph by splitting each
vertex v into v0 and v1 and replacing each edge u,v with edges u0,v1 and v0,u1. The
function mapping v0 and v1 in the cover to v in the original graph is a
homomorphism and a covering map.
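The construction just described is easy to make concrete. A minimal sketch, using an assumed edge-list representation:

```python
# A minimal sketch of the bipartite double cover described above: each vertex
# v is split into (v, 0) and (v, 1), and each edge {u, v} is replaced by
# {(u, 0), (v, 1)} and {(v, 0), (u, 1)}.

def bipartite_double_cover(edges):
    cover = set()
    for u, v in edges:
        cover.add(frozenset({(u, 0), (v, 1)}))
        cover.add(frozenset({(v, 0), (u, 1)}))
    return cover

def covering_map(vertex):
    """The covering map simply forgets the 0/1 tag."""
    v, _tag = vertex
    return v

triangle = [("a", "b"), ("b", "c"), ("c", "a")]
print(sorted(tuple(sorted(e)) for e in bipartite_double_cover(triangle)))
# The double cover of a triangle is a 6-cycle.
```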
Graph homeomorphism is a different notion, not related directly to
homomorphisms. Roughly speaking, it requires injectivity, but allows mapping
edges to paths (not just to edges). Graph minors are a still more relaxed notion.
Connection to colorings
A k-coloring, for some integer k, is an assignment of one of k colors to each
vertex of a graph G such that the endpoints of each edge get different colors.
The k-colorings of G correspond exactly to homomorphisms from G to
the complete graph Kk.[12] Indeed, the vertices of Kk correspond to the k colors,
and two colors are adjacent as vertices of Kk if and only if they are different.
Hence a function defines a homomorphism to Kk if and only if it maps adjacent
vertices of G to different colors (i.e., it is a k-coloring). In particular, G is k-
colorable if and only if it is Kk-colorable.[12]
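A minimal sketch of this correspondence (the example graph, a 5-cycle, is hypothetical): a map from the vertices of G to the k colors is a homomorphism to Kk exactly when adjacent vertices receive different colors.

```python
# A sketch of the correspondence above: a k-coloring of G is exactly a
# homomorphism from G to the complete graph K_k (colors = vertices of K_k).

def is_homomorphism(f, G_edges, H_adjacent):
    """H_adjacent(x, y) should say whether x and y are adjacent in H."""
    return all(H_adjacent(f[u], f[v]) for u, v in G_edges)

def is_k_coloring(coloring, G_edges, k):
    # In K_k, two colors are adjacent iff they are different.
    kk_adjacent = lambda x, y: x != y and x in range(k) and y in range(k)
    return is_homomorphism(coloring, G_edges, kk_adjacent)

cycle5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(is_k_coloring({0: 0, 1: 1, 2: 0, 3: 1, 4: 2}, cycle5, 3))  # True
print(is_k_coloring({0: 0, 1: 1, 2: 0, 3: 1, 4: 0}, cycle5, 2))  # False: edge (4, 0)
```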
If there are two homomorphisms G → H and H → Kk, then their
composition G → Kk is also a homomorphism.[13] In other words, if a graph H can
be colored with k colors, and there is a homomorphism from G to H, then G can
also be k-colored. Therefore, G → H implies χ(G) ≤ χ(H), where χ denotes
the chromatic number of a graph (the least k for which it is k-colorable).[14]
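Since homomorphisms compose, a coloring of H pulls back to a coloring of G along any homomorphism G → H. A minimal sketch with hypothetical maps:

```python
# A minimal sketch: composing homomorphisms G -> H and H -> K_k yields a
# k-coloring of G, which is why G -> H implies chi(G) <= chi(H).

def compose(g_to_h, h_to_kk):
    """Compose two homomorphisms given as dictionaries."""
    return {v: h_to_kk[g_to_h[v]] for v in g_to_h}

# Hypothetical example: C6 maps onto K2 (it is bipartite) ...
c6_to_k2 = {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1}
# ... and K2 trivially maps into K3 (identity on two of the three colors).
k2_to_k3 = {0: 0, 1: 1}
print(compose(c6_to_k2, k2_to_k3))  # a (wasteful) 3-coloring of C6
```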
Variants
General homomorphisms can also be thought of as a kind of coloring: if the
vertices of a fixed graph H are the available colors and edges of H describe
which colors are compatible, then an H-coloring of G is an assignment of colors
to vertices of G such that adjacent vertices get compatible colors. Many notions
of graph coloring fit into this pattern and can be expressed as graph
homomorphisms into different families of graphs. Circular colorings can be
defined using homomorphisms into circular complete graphs, refining the usual
notion of colorings.[15] Fractional and b-fold coloring can be defined using
homomorphisms into Kneser graphs.[16] T-colorings correspond to homomorphisms into certain infinite graphs.[17] An oriented coloring of a directed graph is a homomorphism into any oriented graph.[18] An L(2,1)-coloring is a homomorphism into the complement of the path graph that is locally injective, meaning it is required to be injective on the neighbourhood of every vertex.[19]
Orientations without long paths
Main article: Gallai–Hasse–Roy–Vitaver theorem
Another interesting connection concerns orientations of graphs. An orientation of
an undirected graph G is any directed graph obtained by choosing one of the
two possible orientations for each edge. An example of an orientation of the complete graph Kk is the transitive tournament T⃗k, with vertices 1, 2, …, k and arcs from i to j whenever i < j. A homomorphism between orientations of
graphs G and H yields a homomorphism between the undirected
graphs G and H, simply by disregarding the orientations. On the other hand,
given a homomorphism G → H between undirected graphs, any
orientation H⃗ of H can be pulled back to an orientation G⃗ of G so that G⃗ has a homomorphism to H⃗. Therefore, a graph G is k-colorable (has a homomorphism to Kk) if and only if some orientation of G has a homomorphism to T⃗k.[20]
A folklore theorem states that for all k, a directed graph G has a homomorphism to T⃗k if and only if it admits no homomorphism from the directed path P⃗k+1.[21] Here P⃗n is the directed graph with vertices 1, 2, …, n and edges from i to i + 1, for i = 1, 2, …, n − 1. Therefore, a graph is k-colorable if and only if it has an orientation that admits no homomorphism from P⃗k+1. This statement can be strengthened slightly to say that a graph is k-colorable if and only if some orientation contains no directed path of length k (no P⃗k+1 as a subgraph). This is the Gallai–Hasse–Roy–Vitaver theorem.
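One direction of the theorem has a short constructive proof that can be sketched in code: given an acyclic orientation, coloring each vertex by the length of the longest directed path ending at it is proper, and uses one more color than that longest path length. The orientation below is a hypothetical example.

```python
# A minimal sketch for acyclic orientations: color each vertex by the length
# of the longest directed path ending at it. For an arc u -> v we have
# level(v) >= level(u) + 1, so the coloring is proper, and chi(G) is at most
# one plus the longest directed path length in the orientation.
from functools import lru_cache

def longest_path_coloring(arcs, vertices):
    preds = {v: [] for v in vertices}
    for u, v in arcs:
        preds[v].append(u)

    @lru_cache(maxsize=None)
    def level(v):  # length of the longest directed path ending at v
        return 1 + max((level(u) for u in preds[v]), default=-1)

    return {v: level(v) for v in vertices}

# An acyclic orientation of the 5-cycle (hypothetical example):
arcs = [(0, 1), (2, 1), (2, 3), (4, 3), (0, 4)]
print(longest_path_coloring(arcs, range(5)))  # uses colors 0, 1, 2; chi(C5) = 3
```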
Structure of homomorphisms
Compositions of homomorphisms are homomorphisms.[13] In particular, the
relation → on graphs is transitive (and reflexive, trivially), so it is a preorder on
graphs.[27] Let the equivalence class of a graph G under homomorphic
equivalence be [G]. The equivalence class can also be represented by the
unique core in [G]. The relation → is a partial order on those equivalence
classes; it defines a poset.[28]
Let G < H denote that there is a homomorphism from G to H, but no
homomorphism from H to G. The relation → is a dense order, meaning that for
all (undirected) graphs G, H such that G < H, there is a graph K such
that G < K < H (this holds except for the trivial cases G = K0 or K1).[29][30] For
example, between any two complete graphs (except K0, K1, K2) there are infinitely
many circular complete graphs, corresponding to rational numbers between
natural numbers.[31]
The poset of equivalence classes of graphs under homomorphisms is
a distributive lattice, with the join of [G] and [H] defined as (the equivalence class
of) the disjoint union [G ∪ H], and the meet of [G] and [H] defined as the tensor
product [G × H] (the choice of graphs G and H representing the equivalence
classes [G] and [H] does not matter).[32] The join-irreducible elements of this lattice are exactly the connected graphs. This can be shown using the fact that a
homomorphism maps a connected graph into one connected component of the
target graph.[33][34] The meet-irreducible elements of this lattice are exactly
the multiplicative graphs. These are the graphs K such that a product G × H has
a homomorphism to K only when one of G or H also does. Identifying
multiplicative graphs lies at the heart of Hedetniemi's conjecture.[35][36]
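The meet operation is easy to construct explicitly. A minimal sketch of the tensor product, with hypothetical input graphs:

```python
# A minimal sketch of the meet operation described above: the tensor
# (categorical) product G x H has vertex pairs as vertices, and
# (u, v) ~ (u', v') iff u ~ u' in G and v ~ v' in H.

def tensor_product(G_edges, H_edges):
    prod = set()
    for u, u2 in G_edges:
        for v, v2 in H_edges:
            # Each pair of edges contributes the two "diagonal" product edges.
            prod.add(frozenset({(u, v), (u2, v2)}))
            prod.add(frozenset({(u, v2), (u2, v)}))
    return prod

k2 = [(0, 1)]
k3 = [("a", "b"), ("b", "c"), ("a", "c")]
print(len(tensor_product(k2, k3)))  # K2 x K3 is the 6-cycle: 6 edges
```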
Graph homomorphisms also form a category, with graphs as objects and
homomorphisms as arrows.[37] The initial object is the empty graph, while
the terminal object is the graph with one vertex and one loop at that vertex.
The tensor product of graphs is the category-theoretic product and
the exponential graph is the exponential object for this category.[36][38] Since these
two operations are always defined, the category of graphs is a cartesian closed
category. For the same reason, the lattice of equivalence classes of graphs
under homomorphisms is in fact a Heyting algebra.[36][38]
For directed graphs the same definitions apply. In particular → is a partial
order on equivalence classes of directed graphs. It is distinct from the order →
on equivalence classes of undirected graphs, but contains it as a suborder. This
is because every undirected graph can be thought of as a directed graph where
every arc (u,v) appears together with its inverse arc (v,u), and this does not
change the definition of homomorphism. The order → for directed graphs is
again a distributive lattice and a Heyting algebra, with join and meet operations
defined as before. However, it is not dense. There is also a category with
directed graphs as objects and homomorphisms as arrows, which is again
a cartesian closed category.[39][38]
Computational complexity
In the graph homomorphism problem, an instance is a pair of graphs (G,H) and
a solution is a homomorphism from G to H. The general decision problem,
asking whether there is any solution, is NP-complete.[48] However, limiting allowed
instances gives rise to a variety of different problems, some of which are much
easier to solve. Methods that apply when restricting the left side G are very different from those for the right side H, but in each case a dichotomy (a sharp
boundary between easy and hard cases) is known or conjectured.
Homomorphisms to a fixed graph
The homomorphism problem with a fixed graph H on the right side of each
instance is also called the H-coloring problem. When H is the complete graph Kk,
this is the graph k-coloring problem, which is solvable in polynomial time for k =
0, 1, 2, and NP-complete otherwise.[49] In particular, K2-colorability of a graph G is
equivalent to G being bipartite, which can be tested in linear time. More
generally, whenever H is a bipartite graph, H-colorability is equivalent to K2-
colorability (or K0 / K1-colorability when H is empty/edgeless), hence equally easy
to decide.[50] Pavol Hell and Jaroslav Nešetřil proved that, for undirected graphs,
no other case is tractable:
Hell–Nešetřil theorem (1990): The H-coloring problem is in P when H is bipartite
and NP-complete otherwise.[51][52]
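The tractable side of this theorem is concrete: H-colorability for bipartite H reduces to K2-colorability, i.e., bipartiteness, which the following sketch tests by BFS 2-coloring (the example graphs are hypothetical):

```python
# A minimal sketch of the easy side of the dichotomy: K2-colorability
# (bipartiteness) tested in linear time by BFS 2-coloring.
from collections import deque

def is_bipartite(adj):
    """adj: dict mapping each vertex to a list of neighbours."""
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in color:
                    color[w] = 1 - color[u]
                    queue.append(w)
                elif color[w] == color[u]:
                    return False          # odd cycle found
    return True

square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(is_bipartite(square), is_bipartite(triangle))  # True False
```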
The Hell–Nešetřil theorem is also known as the dichotomy theorem for (undirected) graph homomorphisms, since it divides H-coloring problems into NP-complete problems and problems in P, with no intermediate cases. For directed graphs, the situation is
more complicated and in fact equivalent to the much more general question
of characterizing the complexity of constraint satisfaction problems.[53] It turns
out that H-coloring problems for directed graphs are just as general and as
diverse as CSPs with any other kinds of constraints.[54][55] Formally, a
(finite) constraint language (or template) Γ is a finite domain and a finite set
of relations over this domain. CSP(Γ) is the constraint satisfaction problem
where instances are only allowed to use constraints in Γ.
Theorem (Feder, Vardi 1998): For every constraint language Γ, the problem
CSP(Γ) is equivalent under polynomial-time reductions to some H-coloring
problem, for some directed graph H.[55]
Intuitively, this means that every algorithmic technique or complexity
result that applies to H-coloring problems for directed graphs H applies
just as well to general CSPs. In particular, one can ask whether the Hell–
Nešetřil theorem can be extended to directed graphs. By the above
theorem, this is equivalent to the Feder–Vardi conjecture (aka CSP
conjecture, dichotomy conjecture) on CSP dichotomy, which states that
for every constraint language Γ, CSP(Γ) is NP-complete or in P.[48] This
conjecture was proved in 2017 independently by Dmitry Zhuk and Andrei
Bulatov, leading to the following corollary:
Corollary (Bulatov 2017; Zhuk 2017): The H-coloring problem on directed
graphs, for a fixed H, is either in P or NP-complete.
Homomorphisms from a fixed family of graphs
The homomorphism problem with a single fixed graph G on the left side of input instances can be solved by brute force in time |V(H)|^O(|V(G)|), so polynomial in the size of the input graph H.[56] In other words, the
problem is trivially in P for graphs G of bounded size. The interesting
question is then what other properties of G, besides size, make
polynomial algorithms possible.
The crucial property turns out to be treewidth, a measure of how tree-
like the graph is. For a graph G of treewidth at most k and a graph H,
the homomorphism problem can be solved in time |V(H)|^O(k) with a
standard dynamic programming approach. In fact, it is enough to
assume that the core of G has treewidth at most k. This holds even if
the core is not known.[57][58]
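As an illustration of the dynamic programming idea, the following sketch handles the simplest case, treewidth 1 (G a tree); the general |V(H)|^O(k) algorithm runs the same kind of table computation over a tree decomposition. The example graphs are hypothetical, and this is a simplified sketch rather than the algorithm of [57][58]:

```python
# A minimal sketch of the DP for treewidth 1 (G a tree): for each vertex g
# of G and each candidate image h in H, decide whether the subtree rooted at
# g can be mapped homomorphically with f(g) = h.

def tree_hom_exists(tree_adj, root, H_adj):
    """tree_adj: adjacency dict of a tree G; H_adj: adjacency dict of H."""
    def feasible(g, parent):
        # ok[h] = True iff the subtree of g can be mapped with f(g) = h
        ok = {h: True for h in H_adj}
        for child in tree_adj[g]:
            if child == parent:
                continue
            child_ok = feasible(child, g)
            for h in H_adj:
                # g -> h works only if the child can map to a neighbour of h
                ok[h] = ok[h] and any(child_ok[h2] for h2 in H_adj[h])
        return ok

    return any(feasible(root, None).values())

path3 = {0: [1], 1: [0, 2], 2: [1]}      # G: a path on 3 vertices
k2 = {"x": ["y"], "y": ["x"]}            # H = K2
print(tree_hom_exists(path3, 0, k2))     # True: paths are bipartite
```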
The exponent in the |V(H)|^O(k)-time algorithm cannot be lowered significantly: no algorithm with running time |V(H)|^o(tw(G)/log tw(G)) exists, assuming the exponential time hypothesis (ETH), even if the inputs are restricted to any class of graphs of unbounded treewidth.[59] The ETH is an unproven assumption similar to P ≠ NP, but stronger.
Under the same assumption, there are also essentially no other
properties that can be used to get polynomial time algorithms. This is
formalized as follows:
Theorem (Grohe): For a computable class of graphs 𝒢, the homomorphism problem for instances (G, H) with G ∈ 𝒢 is in P if and only if the graphs in 𝒢 have cores of bounded treewidth (assuming ETH).[58]
One can ask whether the problem is at least solvable in a time
arbitrarily highly dependent on G, but with a fixed polynomial
dependency on the size of H. The answer is again positive if we
limit G to a class of graphs with cores of bounded treewidth, and
negative for every other class.[58] In the language of parameterized complexity, this formally states that the homomorphism problem for instances with G ∈ 𝒢, parameterized by the size (number of edges) of G, exhibits a dichotomy: it is fixed-parameter tractable if the graphs in 𝒢 have cores of bounded treewidth, and W[1]-complete otherwise.
The same statements hold more generally for constraint
satisfaction problems (or for relational structures, in other words).
The only assumption needed is that constraints can involve only a
bounded number of variables (all relations are of some bounded
arity, 2 in the case of graphs). The relevant parameter is then the
treewidth of the primal constraint graph.[59]
Prerequisite – Theory of Computation
Grammar:
It is a finite set of formal rules for generating syntactically correct sentences.
Constituents of Grammar:
Grammar is composed of two basic elements –
1. Terminal Symbols –
Terminal symbols are the components of the sentences generated using a grammar and are represented using lowercase letters like a, b, c, etc.
2. Non-Terminal Symbols –
Non-terminal symbols take part in the generation of the sentence but are not components of the sentence. Non-terminal symbols are also called auxiliary symbols or variables. These symbols are represented using capital letters like A, B, C, etc.
Formal Definition of Grammar:
Any grammar can be represented by the 4-tuple <N, T, P, S>, where N is the finite set of non-terminal symbols, T is the finite set of terminal symbols, P is the set of production rules, and S ∈ N is the start symbol.
There are two Pumping Lemmas, defined for (1) regular languages and (2) context-free languages.
Pumping Lemma for Regular Languages
For any regular language L, there exists an integer n such that for all x ∈ L with |x| ≥ n, there exist u, v, w ∈ Σ* such that x = uvw and
(1) |uv| ≤ n
(2) |v| ≥ 1
(3) for all i ≥ 0: u v^i w ∈ L
In simple terms, this means that if the string v is 'pumped', i.e., repeated any number of times, the resultant string still remains in L. The Pumping Lemma is used to prove the irregularity of a language: if a language is regular, it always satisfies the pumping lemma, so if there exists at least one pumped string which is not in L, then L is surely not regular. The converse need not hold: if the Pumping Lemma holds, it does not follow that the language is regular.
For example, let us prove that L01 = {0^n 1^n | n ≥ 0} is irregular. Assume that L01 is regular; then the Pumping Lemma applies with some constant n. Let x = 0^n 1^n, so x ∈ L01 and |x| ≥ n. We show that no decomposition x = uvw can satisfy (1)–(3). If (1) and (2) hold, then |uv| ≤ n and |v| ≥ 1, so u = 0^a, v = 0^b, w = 0^c 1^n, where a + b ≤ n, b ≥ 1, c ≥ 0, and a + b + c = n. But then (3) fails for i = 0: u v^0 w = uw = 0^a 0^c 1^n = 0^(a+c) 1^n ∉ L01, since a + c ≠ n.
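The case analysis above can also be checked mechanically for one illustrative value of n (the proof in the text works for arbitrary n). A minimal sketch that enumerates every legal decomposition of x = 0^n 1^n and pumps down:

```python
# A small computational companion to the argument above: for x = 0^n 1^n,
# enumerate every decomposition x = uvw with |uv| <= n and |v| >= 1, and
# confirm that pumping with i = 0 always falls outside L.

def in_L(s):                      # L = { 0^n 1^n : n >= 0 }
    zeros = s.count("0")
    return s == "0" * zeros + "1" * (len(s) - zeros) and 2 * zeros == len(s)

n = 5                             # one illustrative pumping constant
x = "0" * n + "1" * n
surviving = 0
for end_uv in range(1, n + 1):            # |uv| <= n
    for start_v in range(end_uv):         # |v| >= 1
        u, v, w = x[:start_v], x[start_v:end_uv], x[end_uv:]
        if in_L(u + w):                   # pump down: i = 0
            surviving += 1
print(surviving)  # 0: no decomposition survives, so L cannot be regular
```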
In contrast, 0^n 1^n is a CFL, since a string can be pumped at two places, one for 0 and the other for 1. Let us prove that L012 = {0^n 1^n 2^n | n ≥ 0} is not context-free. Assume that L012 is context-free; then by the Pumping Lemma for context-free languages there is a constant n such that every z ∈ L012 with |z| ≥ n can be written z = uvwxy with (1) |vwx| ≤ n, (2) |vx| ≥ 1, and (3) u v^i w x^i y ∈ L012 for all i ≥ 0. Let z = 0^n 1^n 2^n. We show that no decomposition satisfies (1)–(3). By (1), vwx cannot contain both 0's and 2's, so either vwx has no 0's, or vwx has no 2's; thus we have two cases to consider. Suppose vwx has no 0's. By (2), vx contains a 1 or a 2, so uwy has n 0's but either fewer than n 1's or fewer than n 2's. But (3) with i = 0 says uwy = u v^0 w x^0 y ∈ L012, which would require uwy to have equal numbers of 0's, 1's and 2's, a contradiction. The case where vwx has no 2's is similar and also gives a contradiction. Thus L012 is not context-free.
Source: John E. Hopcroft, Rajeev Motwani, Jeffrey D. Ullman (2003). Introduction to Automata Theory, Languages, and Computation.
A grammar is a set of production rules which are used to generate the strings of a language. In this article, we discuss how to find the language generated by a grammar, and conversely, how to find a grammar generating a given language.
Given a grammar G, its corresponding language L(G) represents the set of all strings
generated from G. Consider the following grammar,
G: S → aSb | ε
In this grammar, using S → ε, we can generate ε. Therefore, ε is part of L(G). Similarly, using S ⇒ aSb ⇒ ab, the string ab is generated; aabb can be generated in the same way.
Therefore,
L(G) = {a^n b^n | n ≥ 0}
In language L(G) discussed above, the condition n = 0 is taken to accept ε.
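A minimal sketch that enumerates derivations of this grammar up to a bounded depth, confirming the shape of L(G):

```python
# A minimal sketch: enumerate strings generated by G: S -> aSb | epsilon
# up to a depth bound, confirming L(G) = { a^n b^n : n >= 0 }.

def derive(depth):
    """Return all terminal strings derivable from S in <= depth steps."""
    if depth == 0:
        return set()
    strings = {""}                         # S -> epsilon
    for s in derive(depth - 1):            # S -> a S b
        strings.add("a" + s + "b")
    return strings

print(sorted(derive(4), key=len))
# ['', 'ab', 'aabb', 'aaabbb']
```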
Key Points –
A finite automaton (FA) is represented by the 5-tuple { Q, Σ, q, F, δ }.
FA is characterized into two types:
1) Deterministic Finite Automata (DFA):
A DFA is defined by the 5-tuple {Q, Σ, q, F, δ}:
Q : set of all states.
Σ : set of input symbols. (symbols which the machine takes as input)
q : initial state. (starting state of the machine)
F : set of final states.
δ : transition function, defined as δ : Q × Σ → Q.
In a DFA, for a particular input character, the machine goes to one state only. A
transition function is defined on every state for every input symbol. Also in DFA
null (or ε) move is not allowed, i.e., DFA cannot change state without any input
character.
For example, the DFA below, with Σ = {0, 1}, accepts all strings ending with 0 (its state diagram is not reproduced in these notes; see the sketch that follows).
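Since the diagram is lost, here is a minimal sketch of such a DFA as a transition table: two states suffice, remembering whether the last symbol read was 0 (the state names are assumptions):

```python
# A minimal sketch of the DFA described above: two states, accepting exactly
# the strings over {0, 1} that end with 0.

DFA = {
    "start": "q0",
    "accept": {"q1"},                 # q1 = "last symbol read was 0"
    "delta": {("q0", "0"): "q1", ("q0", "1"): "q0",
              ("q1", "0"): "q1", ("q1", "1"): "q0"},
}

def accepts(dfa, s):
    state = dfa["start"]
    for ch in s:
        state = dfa["delta"][(state, ch)]   # exactly one move per symbol
    return state in dfa["accept"]

for w in ["0", "10", "101", "1100", ""]:
    print(repr(w), accepts(DFA, w))   # True exactly for strings ending in 0
```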
NFA
One important thing to note is that, in an NFA, if any path for an input string leads to a final state, then the input string is accepted. For example, in the NFA shown in the original article (its diagram is not reproduced in these notes), there are multiple paths for the input string "00"; since one of the paths leads to a final state, "00" is accepted by that NFA.
Some Important Points:
Justification:
All the tuples in a DFA and an NFA are the same except for one of them, the transition function δ:
In case of DFA, δ : Q × Σ → Q
In case of NFA, δ : Q × Σ → 2^Q
Every transition function δ : Q × Σ → Q can be viewed as a special case of δ : Q × Σ → 2^Q, since each single state can be identified with the singleton set containing it; the reverse is not true. So mathematically, we can conclude that every DFA is an NFA, but not vice versa. Yet there is a way to convert an NFA to a DFA, so there exists an equivalent DFA for every NFA.
1. Both NFA and DFA have the same power, and each NFA can be translated into a DFA.
2. There can be multiple final states in both a DFA and an NFA.
3. NFA is more of a theoretical concept.
4. DFA is used in lexical analysis in compilers.
5. If the number of states in the NFA is N, then its DFA can have at most 2^N states (see the sketch below).
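Point 5 comes from the subset construction, sketched below: each DFA state is a set of NFA states, so at most 2^N distinct states can arise. The example NFA is hypothetical:

```python
# A minimal sketch of the subset construction: convert an NFA (without
# epsilon moves) to an equivalent DFA whose states are sets of NFA states.
from itertools import chain

def nfa_to_dfa(nfa_delta, start, accept, alphabet):
    """nfa_delta: dict (state, symbol) -> set of successor states."""
    start_set = frozenset({start})
    dfa_delta, todo, seen = {}, [start_set], {start_set}
    while todo:
        S = todo.pop()
        for a in alphabet:
            # Union of all NFA moves from any state in S on symbol a.
            T = frozenset(chain.from_iterable(
                nfa_delta.get((q, a), set()) for q in S))
            dfa_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    dfa_accept = {S for S in seen if S & accept}
    return dfa_delta, start_set, dfa_accept

# Hypothetical NFA: accepts strings over {0,1} whose second-to-last symbol is 1.
delta = {("p", "0"): {"p"}, ("p", "1"): {"p", "q"},
         ("q", "0"): {"r"}, ("q", "1"): {"r"}}
dfa_delta, s0, acc = nfa_to_dfa(delta, "p", {"r"}, "01")
print(len({S for S, _ in dfa_delta}))  # 4 reachable DFA states, well under 2^3
```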