
Spectral Analysis of Signed Graphs for

Clustering, Prediction and Visualization


Jérôme Kunegis

Stephan Schmidt

Andreas Lommatzsch

Jürgen Lerner

Ernesto W. De Luca

Sahin Albayrak

Abstract
We study the application of spectral clustering, predic-
tion and visualization methods to graphs with nega-
tively weighted edges. We show that several charac-
teristic matrices of graphs can be extended to graphs
with positively and negatively weighted edges, giving
signed spectral clustering methods, signed graph ker-
nels and network visualization methods that apply to
signed graphs. In particular, we review a signed variant
of the graph Laplacian. We derive our results by con-
sidering random walks, graph clustering, graph drawing
and electrical networks, showing that they all result in
the same formalism for handling negatively weighted
edges. We illustrate our methods using examples from
social networks with negative edges and bipartite rating
graphs.
1 Introduction
Many graph-theoretic data mining problems can be
solved by spectral methods which consider matrices
associated with a given network and compute their
eigenvalues and eigenvectors. Common examples are
spectral clustering, graph drawing and graph kernels
used for link prediction and link weight prediction.
In the usual setting, only graphs with unweighted or
positively weighted edges are supported. Some data
mining problems however apply to signed graphs, i.e.
graphs that contain positively and negatively weighted
edges. In this paper, we show that many spectral
machine learning methods can be extended to the signed
case using specially-defined characteristic matrices.
Intuitively, a positive edge in a graph denotes prox-
imity or similarity, and a negative edge denotes dissim-
ilarity or distance. Negative edges are found in many
types of networks: social networks may contain not just
"friend" but also "foe" links, and connections between
users and products may denote "like" and "dislike". In other
cases, negative edges may be introduced explicitly, as in

DAI-Labor, Technische Universität Berlin, Germany

Universität Konstanz, Germany


the case of constrained clustering, where some pairs of
points must or must not be in the same cluster. These
problems can all be modeled with signed graphs.
Spectral graph theory is the study of graphs using
methods of linear algebra [4]. To study a given graph, its
edge set is represented by an adjacency matrix, whose
eigenvectors and eigenvalues are then used. Alterna-
tively, the Laplacian matrix or one of several normal-
ized adjacency matrices are used. In this paper, we are
concerned with spectral graph mining algorithms that
apply to signed graphs. In order to do this, we define
variants of the Laplacian and related matrices that re-
sult in signed variants of spectral clustering, prediction
and visualization methods.
We begin the study in Section 2 by considering the
problem of drawing a graph with negative edges and
derive the signed combinatorial graph Laplacian. In
Section 3, we give the precise definitions of the various
signed graph matrices, and derive basic results about
their spectrum in Section 4. In Section 5 we derive
the normalized and unnormalized signed Laplacian from
the problem of signed spectral clustering using signed
extensions of the ratio cut and normalized cut functions.
In Section 6, we define several graph kernels that apply
to signed graphs and show how they can be applied to
link sign prediction in signed unipartite and bipartite
networks. In Section 7 we give a derivation of the signed
Laplacian using electrical networks in which resistances
admit negative values. We conclude in Section 8.
2 Signed Graph Drawing
To motivate signed spectral graph theory, we consider
the problem of drawing signed graphs, and show how
it naturally leads to the signed graph Laplacian.
Proper definitions are given in the next section. The
Laplacian matrix turns up in graph drawing when we
try to find an embedding of a graph into a plane in a way
that adjacent nodes are drawn near to each other [1].
The drawing of signed graphs is covered for instance
in [2], but using the adjacency matrix instead of the
Laplacian. In our approach, we stipulate that negative
edges should be drawn as far from each other as possible.
2.1 Positively Weighted Graphs We now describe
the general method for generating an embedding of the
nodes of a graph into the plane using the Laplacian
matrix.
Given a connected graph G = (V, E, A) with positively weighted edges, its adjacency matrix (A_ij) gives the positive edge weights when the vertices i and j are connected, and is zero otherwise. We now want to find a two-dimensional drawing of the graph in which each vertex is drawn near to its neighbors. This requirement gives rise to the following vertex equation, which states that every vertex is placed at the mean of its neighbors' coordinates, weighted by the weight of the connecting edges. For each node i, let X_i ∈ R^2 be its coordinates in the drawing; then

    X_i = (Σ_{j∼i} A_ij)^{-1} Σ_{j∼i} A_ij X_j.    (2.1)
Rearranging and aggregating the equation for all i, we arrive at

    D X = A X    (2.2)
    L X = 0,

where D is the diagonal matrix defined by D_ii = Σ_j A_ij and L = D − A is the combinatorial Laplacian of G. In other words, X should belong to the null space of L, which leads to the degenerate solution of X containing constant vectors, as the constant vector is an eigenvector of L with eigenvalue 0. To exclude that solution, we require additionally that the column vectors of X are orthogonal to the constant vector, leading to X being the eigenvectors associated with the two smallest eigenvalues of L different from zero. This solution results in a well-known satisfactory embedding of positively weighted graphs. Such an embedding is related to the resistance distance (or commute time distance) between nodes of the graph [1].
2.2 Signed Graphs We now extend the graph draw-
ing method described in the previous section to graphs
with positive and negative edge weights.
To adapt (2.1) for negative edge weights, we in-
terpret a negative edge as an indication that two ver-
tices should be placed on opposite sides of the drawing.
Therefore, we take the opposite coordinates X
j
of ver-
tices j adjacent to i through a negative edge, and use
the absolute value of edge weights to compute the mean,
as pictured in Figure 1. We will call this construction antipodal proximity.

Figure 1: Drawing a vertex at the mean coordinates of its neighbors by proximity and antipodal proximity. (a) In unsigned graphs, a vertex u is placed at the mean of its neighbors v_1, v_2, v_3. (b) In signed graphs, a vertex u is placed at the mean of its positive neighbors v_1, v_2 and the antipodal point of its negative neighbor v_3.

This leads to the vertex equation

    X_i = (Σ_{j∼i} |A_ij|)^{-1} Σ_{j∼i} A_ij X_j    (2.3)

resulting in a signed Laplacian matrix L̄ = D̄ − A in which we take D̄_ii = Σ_j |A_ij|:

    D̄ X = A X    (2.4)
    L̄ X = 0.
As with L, we must now choose the eigenvectors of L̄ whose eigenvalues are closest to zero as coordinates. As we will see in the next section, L̄ is always positive-semidefinite, and is positive-definite for graphs that are unbalanced, i.e. that contain cycles with an odd number of negative edges.

To obtain a graph drawing from L̄, we can thus distinguish three cases, assuming that G is connected.

If all edges are positive, L̄ = L and we arrive at the solution given by the ordinary Laplacian matrix.

If the graph is unbalanced, L̄ is positive-definite and we can use the eigenvectors of the two smallest eigenvalues as coordinates.

If the graph is balanced, its spectrum is equivalent to that of the corresponding unsigned Laplacian matrix, up to signs of the eigenvector components. Using the eigenvectors of the two smallest eigenvalues (including zero), we arrive at a graph drawing with all points being placed on two parallel lines, reflecting the perfect 2-clustering present in the graph.
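As a concrete sketch of the drawing procedure above (not code from the paper; the function names `signed_laplacian` and `draw_embedding` are our own), the embedding can be computed with a dense symmetric eigensolver:

```python
import numpy as np

def signed_laplacian(A):
    """Signed Laplacian L̄ = D̄ - A with D̄_ii = sum_j |A_ij| (Section 2.2)."""
    return np.diag(np.abs(A).sum(axis=1)) - A

def draw_embedding(A):
    """Coordinates for an unbalanced signed graph: eigenvectors of the two
    smallest eigenvalues of L̄ (for an all-positive graph one would instead
    skip the constant eigenvector of eigenvalue zero)."""
    _, vecs = np.linalg.eigh(signed_laplacian(A))   # ascending eigenvalues
    return vecs[:, :2]                              # one (x, y) row per vertex

# Unbalanced example: a triangle with a single negative edge.
A = np.array([[0.,  1.,  1.],
              [1.,  0., -1.],
              [1., -1.,  0.]])
X = draw_embedding(A)          # three rows of 2-D coordinates
```

For this triangle, the negative cycle makes L̄ positive-definite, so both of the returned eigenvectors correspond to strictly positive eigenvalues.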
2.3 Toy Examples Figure 2 shows example graphs with positive edges drawn in green and negative edges in red. All edges have weight ±1, and all graphs contain cycles with an odd number of negative edges. Column (a) shows all graphs drawn using the eigenvectors of the two largest eigenvalues of the adjacency matrix A. Column (b) shows the unsigned Laplacian embedding of the graphs by setting all edge weights to +1. Column (c) shows the signed Laplacian embedding. The embedding given by the eigenvectors of A is clearly not satisfactory for graph drawing. As expected, the graphs drawn using the ordinary Laplacian matrix place nodes connected by a negative edge near to each other. The signed Laplacian matrix produces a graph embedding where negative links span large distances across the drawing, as required.

Figure 2: Small toy graphs drawn using the eigenvectors of three graph matrices. (a) The adjacency matrix A. (b) The unsigned Laplacian D − A. (c) The signed Laplacian D̄ − A. All graphs shown contain negative cycles, and their signed Laplacian matrices are positive-definite. Edges have weights ±1, shown in green (+1) and red (−1).
3 Definitions
In this section, we give the definition of the combinatorial and normalized signed Laplacian matrices of a graph and derive their basic properties.

The combinatorial Laplacian matrix of signed graphs with edge weights restricted to ±1 is described in [12], where it is called the Kirchhoff matrix (of a signed graph). A different Laplacian matrix for signed graphs that is not positive-semidefinite is used in the context of knot theory [19]. That same Laplacian matrix is used in [16] to draw graphs with negative edge weights. For graphs where all edges have negative weights, the matrix D + A may be considered, and corresponds to the Laplacian of the underlying unsigned graph [5].

Let G = (V, E, A) be an undirected graph with vertex set V, edge set E, and nonzero edge weights described by the adjacency matrix A ∈ R^{V×V}. We will denote an edge between nodes i and j as (i, j) and write i ∼ j to signify that two nodes are adjacent. If (i, j) is not an edge of the graph, we set A_ij = 0. Otherwise, A_ij > 0 denotes a positive edge and A_ij < 0 denotes a negative edge. Unless otherwise noted, we assume G to be connected.
3.1 Laplacian Matrix Given a graph G with only positively weighted edges, its ordinary Laplacian matrix is a symmetric V × V matrix that, in a general sense, captures relations between individual nodes of the graph. The Laplacian matrix is positive-semidefinite, and its Moore–Penrose pseudoinverse can be interpreted as a forest count [3] and can be used to compute the resistance distance between any two nodes [14].

Definition 1. The Laplacian matrix L ∈ R^{V×V} of a graph G with nonnegative adjacency matrix A is given by

    L = D − A    (3.5)

with the diagonal degree matrix D ∈ R^{V×V} given by

    D_ii = Σ_{j∼i} A_ij.    (3.6)

The multiplicity of the Laplacian's eigenvalue zero equals the number of connected components in G. If G is connected, the eigenvalue zero has multiplicity one, and the second-smallest eigenvalue is known as the algebraic connectivity of the graph [4]. The eigenvector corresponding to that second-smallest eigenvalue is called the Fiedler vector, and has been used successfully for clustering the nodes of G [6].
3.2 Signed Laplacian Matrix If applied to signed graphs, the Laplacian of Equation (3.5) results in an indefinite matrix, and thus cannot be used as the basis for graph kernels [19]. Therefore, we use a modified degree matrix D̄ [12].

Definition 2. The signed Laplacian matrix L̄ ∈ R^{V×V} of a graph G with adjacency matrix A is given by

    L̄ = D̄ − A,    (3.7)

where the signed degree matrix D̄ ∈ R^{V×V} is the diagonal matrix given by

    D̄_ii = Σ_{j∼i} |A_ij|.    (3.8)

We prove in Section 4 that the signed Laplacian matrix is positive-semidefinite.

Two different matrices are usually called the normalized Laplacian, and both can be extended to signed graphs. We follow [20] and call these the random walk and symmetric normalized Laplacian matrices.
3.3 Random Walk Normalized Laplacian When modeling random walks on an unsigned graph G, the transition probability from node i to j is given by entries of the stochastic matrix D^{-1} A. This matrix also arises from Equation (2.2) as the eigenvalue equation x = D^{-1} A x. The matrix L_rw = I − D^{-1} A is called the random walk Laplacian and is positive-semidefinite. Its signed counterpart is given by L̄_rw = I − D̄^{-1} A.

The random walk normalized Laplacian arises when considering random walks, but also when drawing graphs and when clustering using normalized cuts, as explained in Section 5.
3.4 Symmetric Normalized Laplacian Another signed Laplacian is given by L̄_sym = I − D̄^{-1/2} A D̄^{-1/2}. As with the unnormalized Laplacian matrix, we define an ordinary and a signed variant:

    L_sym = D^{-1/2} L D^{-1/2} = I − D^{-1/2} A D^{-1/2}
    L̄_sym = D̄^{-1/2} L̄ D̄^{-1/2} = I − D̄^{-1/2} A D̄^{-1/2}

The ordinary variant only applies to graphs with positive edge weights. The normalized Laplacian matrices can be used instead of the combinatorial Laplacian matrices in most settings, with good results reported for graphs with very skewed degree distributions [8].
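The three signed matrices of this section can be assembled directly from a signed adjacency matrix; the following NumPy sketch (the function name is ours) mirrors Definitions (3.7)–(3.8) and the two normalized variants:

```python
import numpy as np

def signed_laplacians(A):
    """Return (L̄, L̄_rw, L̄_sym) for a signed adjacency matrix A."""
    d = np.abs(A).sum(axis=1)                 # signed degrees D̄_ii = Σ_j |A_ij|
    L = np.diag(d) - A                        # L̄ = D̄ - A
    L_rw = np.eye(len(A)) - A / d[:, None]    # L̄_rw = I - D̄^{-1} A
    s = 1.0 / np.sqrt(d)
    L_sym = np.eye(len(A)) - s[:, None] * A * s[None, :]  # I - D̄^{-1/2} A D̄^{-1/2}
    return L, L_rw, L_sym

# Triangle with one negative edge:
A = np.array([[0.,  1.,  1.],
              [1.,  0., -1.],
              [1., -1.,  0.]])
L, L_rw, L_sym = signed_laplacians(A)
```

Note that L̄_rw = D̄^{-1} L̄ and L̄_sym = D̄^{-1/2} L̄ D̄^{-1/2}, so the two normalized variants are similar matrices and share their spectrum.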
4 Spectral Analysis
In this section, we prove that the signed Laplacian matrix L̄ is positive-semidefinite, characterize the graphs for which it is positive-definite, and give the relationship between the eigenvalue decomposition of the signed Laplacian matrix and the eigenvalue decomposition of the corresponding unsigned Laplacian matrix. The characterization of the smallest eigenvalue of L̄ in terms of graph balance can be found in [12].
4.1 Positive-semidefiniteness

Theorem 4.1. The signed Laplacian matrix L̄ is positive-semidefinite for any graph G.

Proof. We write the Laplacian matrix as a sum over the edges of G:

    L̄ = Σ_{(i,j)∈E} L̄^{(i,j)}

where L̄^{(i,j)} ∈ R^{V×V} contains the four following nonzero entries:

    L̄^{(i,j)}_ii = L̄^{(i,j)}_jj = |A_ij|
    L̄^{(i,j)}_ij = L̄^{(i,j)}_ji = −A_ij.

Let x ∈ R^V be a vertex-vector. By considering the bilinear form x^T L̄^{(i,j)} x, we see that L̄^{(i,j)} is positive-semidefinite:

    x^T L̄^{(i,j)} x = |A_ij| x_i^2 + |A_ij| x_j^2 − 2 A_ij x_i x_j
                    = |A_ij| (x_i − sgn(A_ij) x_j)^2 ≥ 0    (4.9)

We now consider the bilinear form x^T L̄ x:

    x^T L̄ x = Σ_{(i,j)∈E} x^T L̄^{(i,j)} x ≥ 0

It follows that L̄ is positive-semidefinite.
Another way to prove that L̄ is positive-semidefinite consists of expressing it using the incidence matrix of G. Assume that for each edge (i, j), an arbitrary orientation is chosen. Then we define the incidence matrix S ∈ R^{V×E} of G as

    S_{i(i,j)} = +√|A_ij|
    S_{j(i,j)} = −sgn(A_ij) √|A_ij|.

We now consider the product S S^T ∈ R^{V×V}:

    (S S^T)_ii = Σ_{j∼i} |A_ij|
    (S S^T)_ij = −A_ij

for diagonal and off-diagonal entries, respectively. Therefore S S^T = L̄, and it follows that L̄ is positive-semidefinite. This result is independent of the orientation chosen for S.

Figure 3: The nodes of a graph without negative cycles can be partitioned into two sets such that all edges inside of each group are positive and all edges between the two groups are negative. We call such a graph balanced, and the eigenvalue decomposition of its signed Laplacian matrix can be expressed as the modified eigenvalue decomposition of the corresponding unsigned graph's Laplacian.
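The incidence-matrix construction is easy to check numerically. In this sketch (the function name is ours), each edge (i, j) is oriented from the lower to the higher index, which by the result above does not affect the product S S^T:

```python
import numpy as np

def incidence_matrix(A):
    """Signed incidence matrix S with one column per edge (i, j), i < j:
    S[i, e] = sqrt(|A_ij|),  S[j, e] = -sgn(A_ij) * sqrt(|A_ij|)."""
    n = len(A)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n) if A[i, j] != 0]
    S = np.zeros((n, len(edges)))
    for e, (i, j) in enumerate(edges):
        w = np.sqrt(abs(A[i, j]))
        S[i, e] = w
        S[j, e] = -np.sign(A[i, j]) * w
    return S

# Unbalanced triangle with one negative edge.
A = np.array([[0.,  1.,  1.],
              [1.,  0., -1.],
              [1., -1.,  0.]])
S = incidence_matrix(A)   # S @ S.T equals the signed Laplacian D̄ - A
```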
4.2 Positive-definiteness We now show that, unlike the ordinary Laplacian matrix, the signed Laplacian matrix is strictly positive-definite for some graphs, including most real-world networks.

As with the ordinary Laplacian matrix, the spectrum of the signed Laplacian matrix of a disconnected graph is the union of the spectra of its connected components. This can be seen by noting that the Laplacian matrix of an unconnected graph has block-diagonal form, with each diagonal block being the Laplacian matrix of a single component. Therefore, we will restrict ourselves to connected graphs. First, we define balanced graphs [11].

Definition 3. A connected graph with nonzero signed edge weights is balanced when its vertices can be partitioned into two groups such that all positive edges connect vertices within the same group, and all negative edges connect vertices of different groups.

Figure 3 shows a balanced graph partitioned into two vertex sets. Equivalently, unbalanced graphs can be defined as those graphs containing a cycle with an odd number of negative edges, as shown in Figure 4. To prove that the balanced graphs are exactly those that do not contain cycles with an odd number of negative edges, consider that any cycle in a balanced graph has to cross sides an even number of times. On the other hand, any balanced graph can be partitioned into two vertex sets by depth-first traversal while assigning each vertex to a partition such that the balance property is fulfilled. Any inconsistency that arises during such a labeling leads to a cycle with an odd number of negative edges.

Figure 4: An unbalanced graph contains a cycle with an odd number of negatively weighted edges. Negatively weighted edges are shown in red, positively weighted edges are shown in green, and edges that are not part of the cycle in black. The presence of such cycles results in a positive-definite Laplacian matrix.

Using this definition, we can characterize the graphs for which the signed Laplacian matrix is positive-definite.
Theorem 4.2. The signed Laplacian matrix of an unbalanced graph is positive-definite.

Proof. We show that if the bilinear form x^T L̄ x is zero for some x ≠ 0, then a bipartition of the vertices as described above exists.

Let x^T L̄ x = 0. We have seen that for every L̄^{(i,j)} and any x, x^T L̄^{(i,j)} x ≥ 0. Therefore, we have for every edge (i, j)

    x^T L̄^{(i,j)} x = 0
    |A_ij| (x_i − sgn(A_ij) x_j)^2 = 0
    x_i = sgn(A_ij) x_j

In other words, two components of x are equal if the corresponding vertices are connected by a positive edge, and opposite to each other if the corresponding vertices are connected by a negative edge. Because the graph is connected, it follows that all |x_i| must be equal. We can exclude the solution x_i = 0 for all i because x is not the zero vector. Without loss of generality, we assume that |x_i| = 1 for all i. Therefore, x gives a bipartition into vertices with x_i = +1 and vertices with x_i = −1, with the property that two vertices with the same value of x_i are in the same partition and two vertices with opposite sign of x_i are in different partitions, and therefore G is balanced.
4.3 Balanced Graphs We now show how the spectrum and eigenvectors of the signed Laplacian of a balanced graph arise from the spectrum and the eigenvectors of the corresponding unsigned graph by multiplication of eigenvector components with −1.

Let G = (V, E, A) be a balanced graph with positive and negative edge weights and Ḡ = (V, E, Ā) the corresponding graph with positive edge weights given by Ā_ij = |A_ij|. Since G is balanced, there is a vector x ∈ {−1, +1}^V such that for all edges (i, j), sgn(A_ij) = x_i x_j.
Theorem 4.3. If L̄ is the signed Laplacian matrix of the balanced graph G with bipartition x and eigenvalue decomposition L̄ = U Λ U^T, then the eigenvalue decomposition of the Laplacian matrix L of the corresponding unsigned graph Ḡ is given by L = Ū Λ Ū^T, where

    Ū_ij = x_i U_ij.    (4.10)
Proof. To see that L = Ū Λ Ū^T, note that for diagonal elements, Ū_i Λ Ū_i^T = x_i^2 U_i Λ U_i^T = U_i Λ U_i^T = L̄_ii = L_ii. For off-diagonal elements, we have Ū_i Λ Ū_j^T = x_i x_j U_i Λ U_j^T = sgn(A_ij) L̄_ij = −sgn(A_ij) A_ij = −|A_ij| = −Ā_ij = L_ij.

We now show that Ū Λ Ū^T is an eigenvalue decomposition of L by showing that Ū is orthogonal. To see that the columns of Ū are indeed orthogonal, note that for any two column indices m ≠ n, we have Ū_m^T Ū_n = Σ_{i∈V} Ū_im Ū_in = Σ_{i∈V} x_i^2 U_im U_in = U_m^T U_n = 0 because U is orthogonal. Changing signs in U does not change the norm of each column vector, and thus L = Ū Λ Ū^T is a proper eigenvalue decomposition.
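Theorem 4.3 can be verified numerically on a small balanced graph; in this sketch (variable names are ours), flipping eigenvector signs with the bipartition x recovers the unsigned Laplacian:

```python
import numpy as np

# Balanced triangle: edges (0,1)=+1, (0,2)=-1, (1,2)=-1; bipartition x = (+1,+1,-1).
A = np.array([[ 0.,  1., -1.],
              [ 1.,  0., -1.],
              [-1., -1.,  0.]])
x = np.array([1., 1., -1.])

L_signed   = np.diag(np.abs(A).sum(axis=1)) - A          # L̄ = D̄ - A
L_unsigned = np.diag(np.abs(A).sum(axis=1)) - np.abs(A)  # L of the unsigned graph

vals, U = np.linalg.eigh(L_signed)    # L̄ = U Λ U^T
U_bar = x[:, None] * U                # Ū_ij = x_i U_ij, Equation (4.10)
# Ū Λ Ū^T then reproduces the unsigned Laplacian with the same spectrum.
```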
As shown in Section 4.2, the signed Laplacian matrix of an unbalanced graph is positive-definite, and therefore its spectrum is different from that of the corresponding unsigned graph. Aggregating Theorems 4.2 and 4.3, we arrive at our main result.

Theorem 4.4. The signed Laplacian matrix of a graph is positive-definite if and only if the graph is unbalanced.

Proof. The theorem follows directly from Theorems 4.2 and 4.3.
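Theorem 4.4 yields a simple numerical balance test (a sketch; `is_balanced` is our own name): compute the smallest eigenvalue of L̄ and compare it to zero:

```python
import numpy as np

def is_balanced(A, tol=1e-8):
    """Theorem 4.4: a connected signed graph is balanced iff the smallest
    eigenvalue of its signed Laplacian L̄ = D̄ - A is zero."""
    L = np.diag(np.abs(A).sum(axis=1)) - A
    return bool(np.linalg.eigvalsh(L)[0] < tol)

# Balanced: two positive cliques {0,1} and {2,3} joined by negative edges.
A_bal = np.array([[ 0.,  1., -1., -1.],
                  [ 1.,  0., -1., -1.],
                  [-1., -1.,  0.,  1.],
                  [-1., -1.,  1.,  0.]])
# Unbalanced: triangle with a single negative edge.
A_unbal = np.array([[0.,  1.,  1.],
                    [1.,  0., -1.],
                    [1., -1.,  0.]])
```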
The spectra of several large unipartite and bipartite signed networks are plotted in Figure 5. Table 1 gives the smallest Laplacian eigenvalues for some large networks. Jester is a bipartite user–item network of joke ratings [9]. MovieLens is the bipartite user–item network of signed movie ratings from GroupLens research (http://www.grouplens.org/node/73), released in three different sizes. The Slashdot Zoo is a social network with "friend" and "foe" links extracted from the technology news website Slashdot [17]. Epinions is a signed user–user opinion network [21]. The smallest Laplacian eigenvalue is a characteristic value of the network denoting its balance. The higher the value, the more unbalanced is the network [12]. All these large networks are unbalanced, and we can for instance observe that the social networks are more balanced than the rating networks.

Figure 5: The Laplacian spectra of several large signed networks: (a) Slashdot Zoo, (b) Epinions, (c) Jester, (d) MovieLens 100k, (e) MovieLens 1M, (f) MovieLens 10M.

Table 1: Smallest Laplacian eigenvalue λ_1 > 0 of large signed networks. Smaller values indicate a more balanced network.

    Network           λ_1
    MovieLens 100k    0.4285      (most conflict)
    MovieLens 1M      0.3761
    Jester            0.06515
    MovieLens 10M     0.04735
    Slashdot Zoo      0.006183
    Epinions          0.004438    (most balance)
5 Clustering
One of the main application areas of the graph Laplacian are clustering problems. In spectral clustering, the eigenvectors of matrices associated with a graph are used to partition the vertices of the graph into well-connected groups. In this section, we show that in a graph with negatively weighted edges, spectral clustering algorithms correspond to finding clusters of vertices connected by positive edges, but not connected by negative edges.

Spectral clustering algorithms are usually derived by formulating a minimum cut problem which is then relaxed. The choice of the cut function results in different spectral clustering algorithms. In all cases, the vertices of a given graph are mapped into the space spanned by the eigenvector components of a matrix associated with the graph.

We derive a signed extension of the ratio cut, which leads to clustering with the signed combinatorial Laplacian L̄. Analogous derivations are possible for the normalized Laplacians. We restrict our proofs to the case of 2-clustering. Higher-order clusterings can be derived analogously.
5.1 Unsigned Graphs We first review the derivation of the ratio cut in unsigned graphs, which leads to a clustering based on the eigenvectors of L.

Let G = (V, E, A) be an unsigned graph. A cut of G is a partition of the vertices V = X ∪ Y. The weight of a cut is given by cut(X, Y) = Σ_{i∈X, j∈Y} A_ij. The cut measures how well two clusters are connected. Since we want to find two distinct groups of vertices, the cut must be minimized. Minimizing cut(X, Y) however leads in most cases to solutions separating very few vertices from the rest of the graph. Therefore, the cut is usually divided by the size of the clusters, giving the ratio cut:

    RatioCut(X, Y) = cut(X, Y) (1/|X| + 1/|Y|)

To get a clustering, we then solve the following optimization problem:

    min_{X⊂V} RatioCut(X, V \ X)

Let Y = V \ X; then this problem can be solved by expressing it in terms of the characteristic vector u of X defined by:

    u_i = +√(|Y|/|X|)  when i ∈ X    (5.11)
    u_i = −√(|X|/|Y|)  when i ∈ Y

We observe that u^T L u = 2|V| · RatioCut(X, Y), and that Σ_i u_i = 0, i.e. u is orthogonal to the constant vector. Denoting by U the set of vectors u of the form given in Equation (5.11), we have

    min_{u∈R^V} u^T L u   s.t. u ⊥ 1, u ∈ U

This can be relaxed by removing the constraint u ∈ U, giving as solution the eigenvector of L having the smallest nonzero eigenvalue [20]. The next subsection gives an analogous derivation for signed graphs.
5.2 Signed Graphs We now give a derivation of the ratio cut for signed graphs. Let G = (V, E, A) be a signed graph. We will write A^+ and A^− for the adjacency matrices containing only positive and negative edges. In other words, A^+_ij = max(0, A_ij) and A^−_ij = max(0, −A_ij).

For convenience we define positive and negative cuts that only count positive and negative edges respectively:

    cut^+(X, Y) = Σ_{i∈X, j∈Y} A^+_ij
    cut^−(X, Y) = Σ_{i∈X, j∈Y} A^−_ij

In these definitions, we allow X and Y to be overlapping. For a vector u ∈ R^V, we consider the bilinear form u^T L̄ u. As shown in Equation (4.9), this can be written in the following way:

    u^T L̄ u = Σ_{ij} |A_ij| (u_i − sgn(A_ij) u_j)^2

For a given partition (X, Y), let u ∈ R^V be the following vector:

    u_i = +(1/2) (√(|Y|/|X|) + √(|X|/|Y|))  when i ∈ X    (5.12)
    u_i = −(1/2) (√(|Y|/|X|) + √(|X|/|Y|))  otherwise.

The corresponding bilinear form then becomes:

    u^T L̄ u = Σ_{ij} |A_ij| (u_i − sgn(A_ij) u_j)^2
            = |V| (2 cut^+(X, Y) + cut^−(X, X) + cut^−(Y, Y)) (1/|X| + 1/|Y|)

This leads us to define the following signed cut of (X, Y):

    scut(X, Y) = 2 cut^+(X, Y) + cut^−(X, X) + cut^−(Y, Y)

and to define the signed ratio cut as follows:

    SignedRatioCut(X, Y) = scut(X, Y) (1/|X| + 1/|Y|)

Therefore, the following minimization problem solves the signed clustering problem, where U denotes the set of vectors of the form given in Equation (5.12):

    min_{X⊂V} SignedRatioCut(X, V \ X)
    min_{u∈R^V} u^T L̄ u   s.t. u ∈ U

Note that we lose the orthogonality of u to the constant vector. This can be explained by the fact that when G contains negative edges, the smallest eigenvector can always be used for clustering: if G is balanced, the smallest eigenvalue is zero and its eigenvector is the (±1)-valued bipartition vector, which gives the two clusters separated by negative edges. If G is unbalanced, the smallest eigenvalue is larger than zero, so the constant vector plays no role.

The signed cut scut(X, Y) counts the number of positive edges that connect the two groups X and Y, and the number of negative edges that remain in each of these groups. Thus, minimizing the signed cut leads to clusterings where two groups are connected by few positive edges and contain few negative edges inside each group. This signed ratio cut generalizes the ratio cut of unsigned graphs and justifies the use of the signed Laplacian L̄ for spectral clustering of signed graphs.
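The relaxed signed ratio cut suggests the following 2-clustering sketch (the function name is ours): take the eigenvector of the smallest eigenvalue of L̄ and split the vertices by sign:

```python
import numpy as np

def signed_spectral_bipartition(A):
    """Relaxed signed ratio cut: split vertices by the sign of the
    eigenvector of the smallest eigenvalue of L̄ = D̄ - A."""
    L = np.diag(np.abs(A).sum(axis=1)) - A
    _, vecs = np.linalg.eigh(L)               # ascending eigenvalues
    return vecs[:, 0] >= 0                    # boolean cluster labels

# Two positive cliques {0,1} and {2,3} joined by negative edges.
A = np.array([[ 0.,  1., -1., -1.],
              [ 1.,  0., -1., -1.],
              [-1., -1.,  0.,  1.],
              [-1., -1.,  1.,  0.]])
labels = signed_spectral_bipartition(A)       # separates {0,1} from {2,3}
```

Since this example is balanced, the smallest eigenvalue is zero and its eigenvector is the ±1 bipartition vector, so the sign split recovers the two cliques exactly.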
5.3 Normalized Laplacian When instead of normalizing with the number of vertices |X| we normalize with the number of edges vol(X), the result is a spectral clustering algorithm based on the eigenvectors of D^{-1} A, introduced by Shi and Malik [24]. The cuts normalized by vol(X) are called normalized cuts. In the signed case, the eigenvectors of D̄^{-1} A lead to the signed normalized cut:

    SignedNormalizedCut(X, Y) = scut(X, Y) (1/vol(X) + 1/vol(Y))

A similar derivation can be made for normalized cuts based on D̄^{-1/2} A D̄^{-1/2}, generalizing the spectral clustering method of Ng, Jordan and Weiss [22]. The following section gives an example of clustering a small signed graph.
5.4 Anthropological Example As an application of signed spectral clustering to real-world data, we present the dataset of [23]. This dataset describes the relations between sixteen tribal groups of the Eastern Central Highlands of New Guinea [10]. Relations between tribal groups in the Gahuku–Gama alliance structure can be friendly ("rova") or antagonistic ("hina"). We model the dataset as a graph with edge weights +1 for friendship and −1 for enmity.

The resulting graph contains cycles with an odd number of negative edges, and therefore its signed Laplacian matrix is positive-definite. We use the eigenvectors of the two smallest eigenvalues (1.04 and 2.10) to embed the graph into the plane. The result is shown in Figure 6. We observe that indeed the positive (green) edges are short, and the negative (red) edges are long.
Figure 6: The tribal groups of the Eastern Central Highlands of New Guinea from the study of Read [23], drawn using signed Laplacian graph embedding. Individual tribes are shown as vertices of the graph, with friendly relations shown as green edges and antagonistic relations shown as red edges. The three higher-order groups as described by Hage & Harary in [10] are linearly separable. (Vertex labels: Alikadzuha, Asarodzuha, Gahuku, Ove, Masilakidzuha, Ukudzuha, Gehamo, Gaveve, Gama, Kotuni, Nagamidzuha, Nagamiza, Kohika, Seuve, Uheto, Notohana.)

Looking at only the positive edges, the drawing makes
the two connected components easy to see. Looking at only the negative edges, we recognize that the tribal groups can be clustered into three groups, with no negative edges inside any group. These three groups indeed correspond to a higher-order grouping in the Gahuku–Gama society [10]. An example on a larger network is shown in the next section, using the genre of movies in a user–item graph with positive and negative ratings.
6 Graph Kernels and Link Prediction
In this section, we describe Laplacian graph kernels that apply to graphs with negatively weighted edges. We first review ordinary Laplacian graph kernels, and then extend these to signed graphs.
A kernel is a function of two variables k(x, y) that is symmetric, positive-semidefinite, and represents a measure of similarity in the space of its arguments. Graph kernels are defined on the vertices of a given graph, and are used for link prediction, classification and other machine learning problems.
6.1 Unsigned Laplacian Kernels We briefly review three graph kernels based on the ordinary Laplacian matrix.
Commute time kernel The simplest graph kernel possible using L is the Moore–Penrose pseudoinverse L^+. This kernel is called the commute time kernel due to its connection to random walks [7]. Alternatively, it can be described as the resistance distance kernel, based on interpreting the graph as a network of electrical resistances [14].
Regularized Laplacian kernel Regularizing the Laplacian kernel results in the regularized Laplacian graph kernel (I + αL)^{-1}, which is always positive-definite [13]. α is the regularization parameter.
Heat diffusion kernel By considering a process of heat diffusion on a graph one arrives at the heat diffusion kernel exp(−αL) [15].
6.2 Signed Laplacian Kernels To apply the Laplacian graph kernels to signed graphs, we replace the ordinary Laplacian L by the signed Laplacian L̄.
Signed resistance distance In graphs with negative edge weights, the commute time kernel can be extended to L̄^+. As noted in Section 4, L̄ is positive-definite for certain graphs, in which case the pseudoinverse reduces to the ordinary matrix inverse. The signed Laplacian graph kernel can also be interpreted as the signed resistance distance kernel [18]. A separate derivation of the signed resistance distance kernel is given in the next section.
Regularized Laplacian kernel For signed graphs, we define the regularized Laplacian kernel as (I + αL̄)^{-1}.
Heat diffusion kernel We extend the heat diffusion kernel to signed graphs, giving exp(−αL̄).
Because L̄ is positive-semidefinite, it follows that all three kernels are also positive-semidefinite and indeed proper kernels.
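Under these definitions, the three signed kernels follow mechanically from an eigendecomposition of L̄. The sketch below is our own illustration on a made-up unbalanced triangle; `alpha` is an arbitrary choice of the regularization/diffusion parameter, not a value from the paper.

```python
import numpy as np

def signed_laplacian(A):
    # D̄ is the diagonal matrix of absolute degrees; L̄ = D̄ − A.
    D = np.diag(np.abs(A).sum(axis=1))
    return D - A

# Toy signed graph: a triangle with one negative edge. The cycle has an
# odd number of negative edges, so L̄ is positive-definite and the
# pseudoinverse is a true inverse.
A = np.array([[0., 1., -1.],
              [1., 0., 1.],
              [-1., 1., 0.]])
L = signed_laplacian(A)
lam, U = np.linalg.eigh(L)

alpha = 0.5  # regularization / diffusion parameter (our choice)
commute = np.linalg.pinv(L)                          # signed resistance distance L̄⁺
regularized = np.linalg.inv(np.eye(3) + alpha * L)   # (I + αL̄)⁻¹
heat = U @ np.diag(np.exp(-alpha * lam)) @ U.T       # exp(−αL̄)
```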
In the following two subsections, we perform experiments in which we apply these signed Laplacian graph kernels to link sign prediction in unipartite and bipartite networks, respectively. Since these networks contain negatively weighted edges, we cannot apply the ordinary Laplacian graph kernels to them. Therefore, we compare the signed Laplacian graph kernels to other kernels defined for networks with negative edges.
6.3 Social Network Analysis In this section, we apply the signed Laplacian kernels to the task of link sign prediction in the social network of user relationships in the Slashdot Zoo. The Slashdot Zoo consists of the relationships between users of the technology news site Slashdot. On Slashdot, users can tag other users as "friends" and "foes", giving rise to a network of users connected by two edge types [17]. We model friend links as having the weight +1 and foe links as having the weight −1. We ignore link direction, resulting in an undirected graph. The resulting network has 71,523 nodes and 488,440 edges, of which 24.1% are negative.
6.3.1 Evaluation Setup As the task for evaluation, we choose the prediction of link sign. We split the edge set into a training set and a test set, the test set containing 33% of all edges. In contrast to the usual task of link prediction, we do not try to predict whether an edge is present in the test set, but only its weight.
As baseline kernels, we take the symmetric adjacency matrix A of the training set, compute its reduced eigenvalue decomposition A^(k) = UΛU^T of rank k = 9, and use it to compute the following three link sign prediction functions:
Rank reduction The reduced eigenvalue decomposition A^(k) itself can be used to compute a rank-k approximation of A. We use this approximation as the link sign prediction.
Power sum We use a polynomial of degree 4 of A^(k), the coefficients of which are determined empirically to give the most accurate link sign prediction by cross-validation.
Matrix exponential We compute the exponential graph kernel exp(A^(k)) = U exp(Λ)U^T, and use it for link prediction.
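The rank-reduction and matrix-exponential baselines can be sketched as follows. This is our own code on a made-up symmetric adjacency matrix (the paper uses k = 9 on the Slashdot data); keeping the k largest-magnitude eigenvalues is our assumption about how the truncation is done.

```python
import numpy as np

# Toy symmetric signed adjacency matrix standing in for the training set.
A = np.array([[0., 1., -1., 1.],
              [1., 0., 1., -1.],
              [-1., 1., 0., 1.],
              [1., -1., 1., 0.]])

k = 2  # rank of the reduced decomposition (the paper uses k = 9)
lam, U = np.linalg.eigh(A)
idx = np.argsort(-np.abs(lam))[:k]   # keep the k largest-magnitude eigenvalues
U_k, lam_k = U[:, idx], lam[idx]

A_k = U_k @ np.diag(lam_k) @ U_k.T              # rank-k link sign prediction
expA_k = U_k @ np.diag(np.exp(lam_k)) @ U_k.T   # exponential kernel exp(A^(k))
```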
To evaluate the signed spectral approach, we use three graph kernels based on the signed Laplacian matrix L̄. These kernels are computed with the reduced eigenvalue decomposition L̄^(k) = UΛU^T, in which only the smallest k = 9 eigenvalues are kept.
Signed resistance distance The pseudoinverse of the signed Laplacian, L̄^+ = UΛ^+U^T, gives the signed resistance distance kernel. In this dataset, odd cycles exist, and so L̄ is positive-definite. We thus use all eigenvectors of L̄.
Signed regularized Laplacian By regularization we arrive at (I + αL̄)^{-1} = U(I + αΛ)^{-1}U^T, the regularized Laplacian graph kernel.
Signed heat diffusion We compute the heat diffusion kernel given by exp(−αL̄) = U exp(−αΛ)U^T.
6.3.2 Evaluation Results As a measure of prediction accuracy, we use the root mean squared error (RMSE) [7]. The evaluation results are summarized in Figure 7. We observe that all three signed Laplacian graph kernels perform better than the baseline methods. The best performance is achieved by the signed regularized Laplacian kernel.
Figure 7: The link sign prediction accuracy of different graph kernels, measured by the root mean squared error (RMSE).

Kernel                         RMSE
Rank reduction                 0.838
Power sum                      0.840
Matrix exponential             0.839
Signed resistance distance     0.812
Signed regularized Laplacian   0.778
Signed heat diffusion          0.789

6.4 Collaborative Filtering To evaluate the signed Laplacian graph kernel on a bipartite network, we chose the task of collaborative filtering on the MovieLens 100k corpus, as published by GroupLens Research. In a collaborative filtering setting, the network consists of users, items and ratings. Each rating is represented by an edge between a user and an item. The MovieLens 100k corpus of movie ratings contains 6,040 users, 3,706 items, and about a hundred thousand ratings.
Figure 8 shows the users and items of the MovieLens 100k corpus drawn with the method described in Section 2.2. We observe that the signed Laplacian graph drawing methods place movies on lines passing through the origin. These lines correspond to clusters in the underlying graph, as illustrated by the placement of the movies in the genre "adventure" in red.
6.4.1 Rating Prediction The task of collaborative filtering consists of predicting the value of unknown ratings. To do that, we split the set of ratings into a training set and a test set. The test set contains 2% of all known ratings. Using the graph consisting of edges in the training set, we compute the signed Laplacian matrix, and use its inverse as a kernel for rating prediction. The collaborative filtering task consists of predicting the rating of edges.
6.4.2 Algorithms In the setting of collaborative filtering, the graph G = (V, E, A) is bipartite, containing user-vertices, item-vertices, and edges between them. In the MovieLens 100k corpus, ratings are given on an integer scale from 1 to 5. We scale these ratings to be evenly distributed around zero, by subtracting the average of the user and item mean ratings. Let R_ij ∈ {1, 2, 3, 4, 5} be the rating by user i of item j; then we define

A_ij = A_ji = R_ij − (μ_i + μ_j)/2
where μ_i and μ_j are the user and item mean ratings, and d_i and d_j are the number of ratings given by each user or received by each item. The resulting adjacency matrix A is symmetric. We now describe the rating prediction algorithms we evaluate.
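The centering step above can be sketched as follows. This is our own code on a tiny made-up rating table rather than the MovieLens data.

```python
# Ratings R[(user, item)] on the 1–5 scale (made-up example data).
R = {("u1", "m1"): 5, ("u1", "m2"): 3, ("u2", "m1"): 1, ("u2", "m2"): 4}

def mean_of(key_index):
    # Mean rating per user (key_index 0) or per item (key_index 1).
    sums, counts = {}, {}
    for key, r in R.items():
        k = key[key_index]
        sums[k] = sums.get(k, 0) + r
        counts[k] = counts.get(k, 0) + 1
    return {k: sums[k] / counts[k] for k in sums}

mu_user, mu_item = mean_of(0), mean_of(1)

# Centered, symmetric adjacency entries A_ij = R_ij − (μ_i + μ_j)/2.
A = {(u, i): r - (mu_user[u] + mu_item[i]) / 2 for (u, i), r in R.items()}
```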
Mean As the first baseline algorithm, we use (μ_i + μ_j)/2 as the prediction.
Figure 8: The users and items of the MovieLens 100k corpus plotted using (a) the eigenvectors of A with largest eigenvalue and (b) the eigenvectors of L̄ with smallest eigenvalue. Users are shown in yellow, movies in blue, and movies in the genre "adventure" in red. Edges are omitted for clarity. In the Laplacian plot, movies are clustered along lines through the origin, justifying the usage of the cosine similarity as described below.

Rank reduction As another baseline algorithm, we perform rank reduction on the symmetric matrix A by truncating the eigenvalue decomposition A = UΛU^T to the first k eigenvectors and eigenvalues, giving A^(k) = U_(k) Λ_(k) U_(k)^T. We vary the value of k between 1
and 120. As a prediction for the user-item pair (i, j), we use the corresponding value in A^(k). To that value, we add (μ_i + μ_j)/2 to scale the value back to the MovieLens rating scale.
Signed resistance distance kernel We use the signed Laplacian kernel K̄ = L̄^+ for prediction. We compute the kernel on a rank-reduced eigenvalue decomposition of L̄. As observed in Figure 8, clusters of vertices tend to form lines through the origin. Therefore, we suspect that the distance of a single point to the origin does not matter for rating prediction, and we use
the cosine similarity in the space spanned by the square root of the inverted Laplacian. This observation corresponds to established usage for the unsigned Laplacian graph kernel [7].

Figure 9: Root mean squared error of the rating prediction algorithms as a function of the reduced rank k, comparing the mean baseline, dimensionality reduction, and the signed Laplacian kernel. Lower values denote better prediction accuracy.
Given the signed Laplacian matrix L̄ with eigendecomposition L̄ = UΛU^T, we set M = U(Λ^+)^{1/2}, noting that K̄ = MM^T. Then, we normalize the rows of M to unit length to get the matrix N, using N_i = M_i / |M_i|. The cosine similarity between any vertices i and j is then given by N_i N_j^T, and NN^T is our new kernel. As with simple rank reduction, we scale the result to the MovieLens rating scale by adding (μ_i + μ_j)/2.
The evaluation results are shown in Figure 9. We observe that the signed Laplacian kernel has higher rating prediction accuracy than simple rank reduction and the baseline algorithm. We also note that simple rank reduction achieves better accuracy for small values of k. All kernels display overfitting.
7 Electrical Networks
In this section, we complete the work of [18] by giving a derivation of the signed Laplacian matrix L̄ in electrical networks with negative edges.
The resistance distance and heat diffusion kernels defined in the previous section are justified by physical interpretations. In the presence of signed edge weights, these interpretations still hold using the following formalism, which we describe in terms of electrical resistance networks.
A positive electrical resistance indicates that the potentials of two connected nodes will tend towards each other: the smaller the resistance, the more both potentials approach each other. If the edge has negative weight, we can interpret the connection as consisting of a resistor of the corresponding positive weight in series with an inverting amplifier that guarantees its ends to have opposite voltage, as depicted in Figure 10. In other words, two nodes connected by a negative edge will tend to opposite voltages.

Figure 10: An edge with negative weight −a is interpreted as a positive resistor of resistance a in series with an inverting component.
Thus, a positive edge with weight +a is modeled by a resistor of resistance a, and a negative edge with weight −a is modeled by a resistor of resistance a in series with a (hypothetical) electrical component that assures its ends have opposite electrical potential.
Electrical networks with such edges can be analysed in the following way. Let i be any node of an unsigned network. Then its electric potential is given by the mean of its neighbors' potentials, weighted by edge weights:

v_i = ( Σ_{j∼i} A_ij v_j ) / ( Σ_{j∼i} A_ij )
If edges have negative weights, the inversion of potentials results in the following equation:

v_i = ( Σ_{j∼i} A_ij v_j ) / ( Σ_{j∼i} |A_ij| )
Thus, electrical networks give the equation D̄v = Av, and the matrices D̄^{-1}A and D̄ − A.
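This fixed-point condition can be checked numerically. In the sketch below (our own construction, not from the paper), we pin one node of a small signed network to a fixed potential and solve the harmonic condition D̄v = Av for the remaining nodes; the negative edge indeed drives its endpoint to the opposite potential.

```python
import numpy as np

# Signed network: node 0 — node 1 (weight +2), node 1 — node 2 (weight −1).
A = np.array([[0., 2., 0.],
              [2., 0., -1.],
              [0., -1., 0.]])
Dbar = np.diag(np.abs(A).sum(axis=1))
Lbar = Dbar - A

# Pin v_0 = 1 and solve (D̄ − A) v = 0 for the free nodes 1 and 2.
free = [1, 2]
v = np.zeros(3)
v[0] = 1.0
rhs = -Lbar[np.ix_(free, [0])] @ v[[0]]
v[free] = np.linalg.solve(Lbar[np.ix_(free, free)], rhs.ravel())

# Each free node's potential is the signed-weighted mean of its neighbors,
# normalized by the absolute degree: D̄_ii v_i = Σ_j A_ij v_j.
for i in free:
    assert np.isclose(Dbar[i, i] * v[i], A[i] @ v)
```

Here node 2, attached only by a negative edge, ends up at the exact opposite of its neighbor's potential, matching the inverting-amplifier picture.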
Because such an inverting amplifier needs the definition of a specific value of zero voltage, the resulting model loses one degree of freedom, explaining why in the general case L̄ has rank one greater than L.
8 Discussion
The various example applications in this paper show that signed graphs appear in many diverse spectral graph mining applications, and that they can be approached by defining a variant of the Laplacian matrix L̄. Although we used the notation L̄ = D̄ − A throughout this paper, we propose the notation L = D̄ − A, since it does not interfere with unsigned graphs (in which D̄ = D) and because the matrix D − A is less useful for data mining applications². This signed extension exists not only for the combinatorial Laplacian L but also for the two normalized Laplacians I − D̄^{-1}A and D̄^{-1/2} L̄ D̄^{-1/2}.

² Although D − A is used for signed graphs in knot theory.
Spectral clustering of signed graphs is thus indeed possible using the intuitive measure of "cut" which counts positive edges between two clusters and negative edges inside each cluster. As for unsigned spectral clustering, the different Laplacian matrices correspond to ratio cuts and normalized cuts. For link prediction, we saw that the signed Laplacians can be used as kernels (since they are positive-semidefinite), and can replace graph kernels based on the adjacency matrix. This is especially true when the sign of edges is to be predicted. Finally, we derived the signed Laplacian using the applications of graph drawing and electrical networks. These derivations should confirm that the definition L = D̄ − A is to be preferred over L = D − A for signed graphs.
In all cases, we observed that when the graph is unbalanced, zero is not an eigenvalue of the Laplacian, and thus its eigenvector can be used directly, unlike the unsigned case, where the eigenvector of the least eigenvalue is trivial and can be ignored. For graph drawing, this results in the loss of translational invariance of the drawing, i.e. the drawing is placed relative to a point zero. For electrical networks, this results in the loss of invariance under addition of a constant to electrical potentials, since the inverting amplification depends on the chosen zero voltage.
References
[1] M. Belkin and P. Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In Advances in Neural Information Processing Systems, pages 585–591, 2002.
[2] U. Brandes, D. Fleischer, and J. Lerner. Summarizing dynamic bipolar conflict structures. Trans. on Visualization and Computer Graphics, 12(6):1486–1499, 2006.
[3] P. Chebotarev and E. V. Shamis. On proximity measures for graph vertices. Automation and Remote Control, 59(10):1443–1459, 1998.
[4] F. Chung. Spectral Graph Theory. American Mathematical Society, 1997.
[5] M. Desai and V. Rao. A characterization of the smallest eigenvalue of a graph. Graph Theory, 18(2):181–194, 1994.
[6] I. S. Dhillon, Y. Guan, and B. Kulis. Kernel k-means: Spectral clustering and normalized cuts. In Proc. Int. Conf. Knowledge Discovery and Data Mining, pages 551–556, 2004.
[7] F. Fouss, A. Pirotte, J.-M. Renders, and M. Saerens. Random-walk computation of similarities between nodes of a graph with application to collaborative recommendation. Trans. on Knowledge and Data Engineering, 19(3):355–369, 2007.
[8] C. Gkantsidis, M. Mihail, and E. Zegura. Spectral analysis of Internet topologies. In Proc. Joint Conf. IEEE Computer and Communications Societies, pages 364–374, 2003.
[9] K. Goldberg, T. Roeder, D. Gupta, and C. Perkins. Eigentaste: A constant time collaborative filtering algorithm. Information Retrieval, 4(2):133–151, 2001.
[10] P. Hage and F. Harary. Structural Models in Anthropology. Cambridge University Press, 1983.
[11] F. Harary. On the notion of balance of a signed graph. Michigan Math., 2(2):143–146, 1953.
[12] Y. Hou. Bounds for the least Laplacian eigenvalue of a signed graph. Acta Mathematica Sinica, 21(4):955–960, 2005.
[13] T. Ito, M. Shimbo, T. Kudo, and Y. Matsumoto. Application of kernels to link analysis. In Proc. Int. Conf. on Knowledge Discovery in Data Mining, pages 586–592, 2005.
[14] D. J. Klein and M. Randić. Resistance distance. Mathematical Chemistry, 12(1):81–95, 1993.
[15] R. Kondor and J. Lafferty. Diffusion kernels on graphs and other discrete structures. In Proc. Int. Conf. on Machine Learning, pages 315–322, 2002.
[16] Y. Koren, L. Carmel, and D. Harel. ACE: A fast multiscale eigenvectors computation for drawing huge graphs. In Symposium on Information Visualization, pages 137–144, 2002.
[17] J. Kunegis, A. Lommatzsch, and C. Bauckhage. The Slashdot Zoo: Mining a social network with negative edges. In Proc. Int. World Wide Web Conf., pages 741–750, 2009.
[18] J. Kunegis, S. Schmidt, C. Bauckhage, M. Mehlitz, and S. Albayrak. Modeling collaborative similarity with the signed resistance distance kernel. In Proc. European Conf. on Artificial Intelligence, pages 261–265, 2008.
[19] M. Lien and W. Watkins. Dual graphs and knot invariants. Linear Algebra and its Applications, 306(1):123–130, 2000.
[20] U. v. Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395–416, 2007.
[21] P. Massa and P. Avesani. Controversial users demand local trust metrics: An experimental study on epinions.com community. In Proc. American Association for Artificial Intelligence Conf., pages 121–126, 2005.
[22] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In Advances in Neural Information Processing Systems, pages 849–856, 2001.
[23] K. E. Read. Cultures of the Central Highlands, New Guinea. Southwestern J. of Anthropology, (1):1–43, 1954.
[24] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Trans. on Pattern Analysis and Machine Intelligence, 22(8):888–905, 2000.