
Laplacian Matrices of Graphs:

Algorithms and Applications

Daniel A. Spielman

ICML, June 21, 2016


Outline

Laplacians
    Interpolation on graphs
    Spring networks
    Clustering
    Isotonic regression

Sparsification

Solving Laplacian Equations
    Best results
    The simplest algorithm
Interpolation on Graphs (Zhu, Ghahramani, Lafferty '03)

Interpolate values of a function at all vertices
from given values at a few vertices.

Minimize  \sum_{(i,j) \in E} (x(i) - x(j))^2 = x^T L_G x
subject to the given values.

[Figure: a protein-interaction network (CDC20, ANAPC10, CDC27, ANAPC2,
ANAPC5, UBE2C) with given values 0 and 1 at two vertices; the interpolated
values at the others are 0.51, 0.53, 0.30, and 0.61.]

Take derivatives. Minimize by solving a Laplacian system.

The Laplacian Quadratic Form

\sum_{(i,j) \in E} (x(i) - x(j))^2

The Laplacian Matrix of a Graph

x^T L_G x = \sum_{(i,j) \in E} (x(i) - x(j))^2
Spring Networks

View edges as rubber bands or ideal linear springs.
Nail down some vertices, and let the rest settle.
In equilibrium, each free node is the average of its neighbors.

When a spring is stretched to length \ell,
its potential energy is \ell^2 / 2.

Physics: the positions minimize the total potential energy

  (1/2) \sum_{(i,j) \in E} (x(i) - x(j))^2

subject to the boundary constraints (nails).

So spring networks solve the interpolation problem:
minimize \sum_{(i,j) \in E} (x(i) - x(j))^2 = x^T L_G x
from given values at a few vertices.

[Figure: the protein network again, with nailed values 0 and 1;
the settled values are 0.51, 0.53, 0.30, and 0.61.]

In the solution, the free variables are the averages of their neighbors.
Drawing by Spring Networks (Tutte '63)

[Figure: a sequence of spring drawings of a planar graph.]

If the graph is planar,
then the spring drawing has no crossing edges!
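A minimal sketch of such a spring drawing (our illustration, using the
standard Tutte setup of nailing an outer face to a convex polygon; the small
planar graph, a cube graph, is a made-up example). Each coordinate of the
free vertices solves the same Laplacian system as before.

```python
import numpy as np

# Tutte drawing sketch: nail the outer face to a convex polygon and let the
# interior settle. The cube graph below is a hypothetical example.
n = 8
outer = [0, 1, 2, 3]                          # outer face, nailed down
edges = [(0, 1), (1, 2), (2, 3), (3, 0),      # outer cycle
         (4, 5), (5, 6), (6, 7), (7, 4),      # inner cycle
         (0, 4), (1, 5), (2, 6), (3, 7)]      # spokes

L = np.zeros((n, n))
for i, j in edges:
    L[i, i] += 1; L[j, j] += 1
    L[i, j] -= 1; L[j, i] -= 1

pos = np.zeros((n, 2))
pos[outer] = [(0, 0), (1, 0), (1, 1), (0, 1)]  # convex positions for nails

inner = [v for v in range(n) if v not in outer]
for c in range(2):      # one Laplacian solve per coordinate
    pos[inner, c] = np.linalg.solve(
        L[np.ix_(inner, inner)], -L[np.ix_(inner, outer)] @ pos[outer, c])
print(pos)              # interior vertices sit at the average of their neighbors
```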
Measuring boundaries of sets

Boundary: the edges leaving a set S.

[Figure: a graph with a set S marked; the vertices of S are labeled 1
and the rest are labeled 0.]

Characteristic vector of S:
  x(i) = 1 if i is in S
  x(i) = 0 if i is not in S

Then
  \sum_{(i,j) \in E} (x(i) - x(j))^2 = |boundary(S)|
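A quick numeric check of this identity (our addition; the 4-vertex graph
is made up):

```python
import numpy as np

# x^T L x counts the edges leaving S when x is the characteristic vector of S.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
L = np.zeros((n, n))
for i, j in edges:
    L[i, i] += 1; L[j, j] += 1
    L[i, j] -= 1; L[j, i] -= 1

S = {0, 1}
x = np.array([1.0 if v in S else 0.0 for v in range(n)])
print(x @ L @ x)  # 3.0 = |{(1,2), (3,0), (0,2)}|, the edges leaving S
```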
Spectral Clustering and Partitioning

Find large sets of small boundary.

Heuristic: find an x with x^T L_G x small,
by computing the eigenvector of the second-smallest eigenvalue,

  L_G v_2 = \lambda_2 v_2,

and considering its level sets.

[Figure: a graph with the entries of v_2 written at the vertices;
thresholding them cuts out a set S of small boundary.]
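A minimal sketch of this heuristic (our illustration): build a graph with
two planted clusters, take the eigenvector of the second-smallest Laplacian
eigenvalue, and threshold it. The planted-cluster generator and the median
cut are assumptions for demonstration; one would normally sweep over all
level sets.

```python
import numpy as np

# Spectral partitioning sketch: threshold the second Laplacian eigenvector.
rng = np.random.default_rng(0)
n = 20
A = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        same = (i < n // 2) == (j < n // 2)     # two planted clusters
        if rng.random() < (0.5 if same else 0.05):
            A[i, j] = A[j, i] = 1.0

L = np.diag(A.sum(axis=1)) - A
_, eigvecs = np.linalg.eigh(L)      # eigenvalues in ascending order
v2 = eigvecs[:, 1]                  # second eigenvector: L v2 = lambda_2 v2

# One natural level set: cut at the median of v2 (a full sweep over all
# level sets would pick the cut of best conductance).
S = np.where(v2 < np.median(v2))[0]
print(sorted(S.tolist()))           # roughly recovers one planted cluster
```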


The Laplacian Matrix of a Graph

[Figure: a graph on vertices 1..6.]

L_G = (  3  -1  -1  -1   0   0 )     Symmetric
      ( -1   2   0   0   0  -1 )
      ( -1   0   3  -1  -1   0 )     Non-positive
      ( -1   0  -1   4  -1  -1 )     off-diagonals
      (  0   0  -1  -1   3  -1 )
      (  0  -1   0  -1  -1   3 )     Diagonally dominant
The Laplacian Matrix of a Graph

x^T L_G x = \sum_{(i,j) \in E} (x(i) - x(j))^2

since  x(i) - x(j) = ( 1  -1 ) ( x(i) )
                               ( x(j) )

so  (x(i) - x(j))^2 = ( x(i) )^T (  1 ) ( 1  -1 ) ( x(i) )
                      ( x(j) )   ( -1 )           ( x(j) )

                    = ( x(i) )^T (  1  -1 ) ( x(i) )
                      ( x(j) )   ( -1   1 ) ( x(j) )
Laplacian Matrices of Weighted Graphs

x^T L_G x = \sum_{(i,j) \in E} w_{i,j} (x(i) - x(j))^2

L_G = \sum_{(i,j) \in E} w_{i,j} b_{i,j} b_{i,j}^T   where  b_{i,j} = e_i - e_j

Equivalently, L_G = B^T W B, where

B is the signed edge-vertex adjacency matrix,
with one row for each b_{i,j}, and

W is the diagonal matrix of the weights w_{i,j}.


Laplacian Matrices of Weighted Graphs

L_G = \sum_{(i,j) \in E} w_{i,j} b_{i,j} b_{i,j}^T = B^T W B

For the example graph on vertices 1..6, one row per edge:

B = (  1  -1   0   0   0   0 )    edge (1,2)
    (  1   0  -1   0   0   0 )    edge (1,3)
    (  1   0   0  -1   0   0 )    edge (1,4)
    (  0   1   0   0   0  -1 )    edge (2,6)
    (  0   0   1  -1   0   0 )    edge (3,4)
    (  0   0   1   0  -1   0 )    edge (3,5)
    (  0   0   0   1  -1   0 )    edge (4,5)
    (  0   0   0   1   0  -1 )    edge (4,6)
    (  0   0   0   0   1  -1 )    edge (5,6)
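A small check that the two definitions agree on this example (our code;
vertices are 0-indexed and all weights are 1):

```python
import numpy as np

# Verify L_G = B^T W B for the 6-vertex example graph, with unit weights.
edges = [(0, 1), (0, 2), (0, 3), (1, 5), (2, 3), (2, 4), (3, 4), (3, 5), (4, 5)]
n = 6

B = np.zeros((len(edges), n))      # signed edge-vertex incidence matrix
for r, (i, j) in enumerate(edges):
    B[r, i], B[r, j] = 1, -1

W = np.eye(len(edges))             # diagonal weight matrix (all ones here)
L = B.T @ W @ B
print(L.astype(int))               # matches the Laplacian displayed earlier
```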
Quickly Solving Laplacian Equations

S, Teng '04: using low-stretch trees and sparsifiers,
  O(m \log^c n \log \epsilon^{-1})

Koutis, Miller, Peng '11: low-stretch trees and sampling,
  \tilde{O}(m \log n \log \epsilon^{-1})

Cohen, Kyng, Pachocki, Peng, Rao '14:
  \tilde{O}(m \log^{1/2} n \log \epsilon^{-1})

where m is the number of non-zeros and n is the dimension.

Good code:
  LAMG (lean algebraic multigrid) – Livne & Brandt
  CMG (combinatorial multigrid) – Koutis
Quickly Solving Laplacian Equations

An \epsilon-accurate solution to L_G x = b is an x satisfying

  \|x - x^*\|_{L_G} \le \epsilon \|x^*\|_{L_G},

where \|v\|_{L_G} = \sqrt{v^T L_G v} = \|L_G^{1/2} v\|.

This allows fast computation of eigenvectors
corresponding to small eigenvalues.
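As a small sketch (the helper names are ours), the definition translates
directly into code; in practice x* is unknown, so a check like this is only
usable when testing a solver against a reference solution:

```python
import numpy as np

def lap_norm(L, v):
    """The L_G-norm ||v||_L = sqrt(v^T L v)."""
    return np.sqrt(v @ L @ v)

def is_eps_accurate(L, x, x_star, eps):
    """Check ||x - x*||_L <= eps * ||x*||_L."""
    return lap_norm(L, x - x_star) <= eps * lap_norm(L, x_star)
```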
Laplacians in Linear Programming

Laplacians appear when solving linear programs on graphs
by interior point methods:

Lipschitz learning: regularized interpolation on graphs
  (Kyng, Rao, Sachdeva, S '15)

Maximum and min-cost flow (Daitch, S '08; Mądry '13)

Shortest paths (Cohen, Mądry, Sankowski, Vladu '16)

Isotonic regression (Kyng, Rao, Sachdeva '15)


Isotonic Regression (Ayer et al. '55)

A function x : V -> R is isotonic with respect to a
directed acyclic graph if x increases along edges.

[Figure: a small DAG whose vertices carry the values
3.7, 3.6, 4.0, 3.2, 3.9, 3.2, and 2.5.]
Isotonic Regression (Ayer et al. '55)

Example: predicting College GPA from SAT score and High-school GPA.

[Figure: a scatter plot with axes SAT and High-school GPA, each point
labeled with a College GPA: 3.7, 3.6, 4.0, 3.2, 3.9, 3.2, 2.5.]

Estimate by nearest neighbor?
We want the estimate to be monotonically increasing in both coordinates.

Given y : V -> R, find the isotonic x minimizing \|x - y\|.

[Figure: the isotonic fit replaces the three out-of-order values
3.7, 4.0, and 3.9 by their average, 3.866; the other values are unchanged.]
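For intuition, here is a minimal sketch of the classic special case:
isotonic regression on a path (a total order), solved by the
pool-adjacent-violators algorithm associated with Ayer et al. The general
DAG case needs the linear-programming machinery that follows; this code is
our illustration, not the slides' algorithm.

```python
import numpy as np

# Pool-adjacent-violators: isotonic least-squares regression on a path.
def pava(y):
    blocks = []                        # each block is [sum, count]
    for v in y:
        blocks.append([v, 1])
        # Merge while the previous block's average exceeds the last one's.
        while len(blocks) > 1 and \
              blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    x = []
    for s, c in blocks:                # each block becomes a run of its average
        x.extend([s / c] * c)
    return np.array(x)

print(pava([1.0, 3.0, 2.0, 4.0]))      # [1.0, 2.5, 2.5, 4.0]
```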


Fast IPM for Isotonic Regression (Kyng, Rao, Sachdeva '15)

Given y : V -> R, find the isotonic x minimizing \|x - y\|_1,
or \|x - y\|_p for any p > 1,

in time O(m^{3/2} \log^3 m).
Linear Program for Isotonic Regression

Signed edge-vertex incidence matrix, one row per edge:

B = ( 1  0  0 -1  0  0  0 )
    ( 1  0  0  0 -1  0  0 )
    ( 0  1 -1  0  0  0  0 )
    ( 0  0  1  0 -1  0  0 )
    ( 0  0  1  0  0 -1  0 )
    ( 0  0  0  1  0  0 -1 )
    ( 0  0  0  0  1  0 -1 )
    ( 0  0  0  0  0  1 -1 )

x is isotonic if Bx \le 0.
Linear Program for Isotonic Regression

Given y, minimize \|x - y\|_1 subject to Bx \le 0.

Introduce a variable r_i for each |x_i - y_i|:

  minimize \sum_i r_i
  subject to  Bx \le 0  and  |x_i - y_i| \le r_i

(relaxing |x_i - y_i| = r_i to an inequality changes nothing at the
optimum). Each constraint |x_i - y_i| \le r_i splits into two linear ones:

  x_i - y_i \le r_i
  -(x_i - y_i) \le r_i
Linear Program for Isotonic Regression

Minimize \sum_i r_i subject to

  (  0   B ) ( r )      (  0 )
  ( -I   I ) ( x )  <=  (  y )
  ( -I  -I )            ( -y )

The IPM solves a sequence of equations of the form

  (  0   B )^T ( S_0   0    0  ) (  0   B )
  ( -I   I )   (  0   S_1   0  ) ( -I   I )
  ( -I  -I )   (  0    0   S_2 ) ( -I  -I )

with positive diagonal matrices S_0, S_1, S_2.


Linear Program for Isotonic Regression

  (  0   B )^T ( S_0   0    0  ) (  0   B )     ( S_1 + S_2              S_2 - S_1 )
  ( -I   I )   (  0   S_1   0  ) ( -I   I )  =  ( S_2 - S_1   B^T S_0 B + S_1 + S_2 )
  ( -I  -I )   (  0    0   S_2 ) ( -I  -I )

A Laplacian!  (S_0, S_1, S_2 are positive diagonal.)

Kyng, Rao, Sachdeva '15:
reduce to solving Laplacians to constant accuracy.
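A numeric check of this block identity (our code; B and the diagonals are
random stand-ins):

```python
import numpy as np

# Check A^T S A = [[S1+S2, S2-S1], [S2-S1, B^T S0 B + S1+S2]]
# for A = [[0, B], [-I, I], [-I, -I]].
rng = np.random.default_rng(0)
m, n = 5, 4                              # edges, vertices (arbitrary sizes)
B = rng.integers(-1, 2, size=(m, n)).astype(float)
I = np.eye(n)

A = np.block([[np.zeros((m, n)), B],
              [-I,  I],
              [-I, -I]])

s0, s1, s2 = rng.random(m), rng.random(n), rng.random(n)
S = np.diag(np.concatenate([s0, s1, s2]))
S0, S1, S2 = np.diag(s0), np.diag(s1), np.diag(s2)

lhs = A.T @ S @ A
rhs = np.block([[S1 + S2, S2 - S1],
                [S2 - S1, B.T @ S0 @ B + S1 + S2]])
print(np.allclose(lhs, rhs))             # True
```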
Spectral Sparsification

Every graph can be approximated
by a sparse graph with a similar Laplacian.
Approximating Graphs

A graph H is an \epsilon-approximation of G if

  1/(1+\epsilon) \le (x^T L_H x)/(x^T L_G x) \le 1 + \epsilon   for all x,

written L_H \approx_\epsilon L_G.

This preserves the boundary of every set,
and solutions to linear equations are similar:

  L_H \approx_\epsilon L_G  <=>  L_H^{-1} \approx_\epsilon L_G^{-1}
Spectral Sparsification (Batson, S, Srivastava '09)

Every graph G has an \epsilon-approximation H
with n(2 + \epsilon)^2/\epsilon^2 edges.

Random regular graphs approximate complete graphs.


Fast Spectral Sparsification (S & Srivastava '08)

If we sample each edge with probability
inversely proportional to its effective spring constant,
only O(n \log n / \epsilon^2) samples are needed.
Takes time O(m \log^2 n) (Koutis, Levin, Peng '12).

Lee & Sun '15: can find an \epsilon-approximation with O(n/\epsilon^2)
edges in time O(n^{1+c}) for every c > 0.
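A minimal dense sketch of this sampling scheme (our illustration): compute
effective resistances, the inverse effective spring constants, via the
pseudoinverse, then sample edges proportionally to weight times resistance.
The real algorithm approximates resistances in nearly-linear time; the
constant 9 below is an arbitrary choice.

```python
import numpy as np

# Sparsify by effective-resistance sampling (dense, for illustration only).
def sparsify(edges, weights, n, eps=0.5, seed=0):
    rng = np.random.default_rng(seed)
    weights = np.asarray(weights, dtype=float)
    L = np.zeros((n, n))
    for (i, j), w in zip(edges, weights):
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w
    Lp = np.linalg.pinv(L)                       # dense pseudoinverse
    # Effective resistance of (i,j) is (e_i - e_j)^T L^+ (e_i - e_j);
    # the leverage scores w * R sum to n - 1 on a connected graph.
    lev = np.array([w * (Lp[i, i] + Lp[j, j] - 2 * Lp[i, j])
                    for (i, j), w in zip(edges, weights)])
    p = lev / lev.sum()
    k = int(9 * n * np.log(n) / eps**2)          # O(n log n / eps^2) samples
    counts = rng.multinomial(k, p)
    new_w = counts * weights / (k * p)           # unbiased reweighting
    kept = counts > 0
    return [e for e, keep in zip(edges, kept) if keep], new_w[kept]
```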
Approximate Gaussian Elimination (Kyng & Sachdeva '16)

Gaussian elimination: compute an upper triangular U so that

  L_G = U^T U

Approximate Gaussian elimination:
compute a sparse upper triangular U so that

  L_G \approx U^T U
Gaussian Elimination

M = ( 16  -4  -8  -4 )
    ( -4   5   0  -1 )
    ( -8   0  14   0 )
    ( -4  -1   0   7 )

1. Find the rank-1 matrix that agrees with M on the first row and column:

( 16  -4  -8  -4 )   (  4 )
( -4   1   2   1 ) = ( -1 ) ( 4  -1  -2  -1 )
( -8   2   4   2 )   ( -2 )
( -4   1   2   1 )   ( -1 )

2. Subtract it:

( 16  -4  -8  -4 )   ( 16  -4  -8  -4 )   ( 0   0   0   0 )
( -4   5   0  -1 ) - ( -4   1   2   1 ) = ( 0   4  -2  -2 )
( -8   0  14   0 )   ( -8   2   4   2 )   ( 0  -2  10  -2 )
( -4  -1   0   7 )   ( -4   1   2   1 )   ( 0  -2  -2   6 )

3. Repeat.
Gaussian Elimination

1. Find the rank-1 matrix that agrees on the next row and column:

( 0   0   0   0 )   (  0 )
( 0   4  -2  -2 ) = (  2 ) ( 0  2  -1  -1 )
( 0  -2   1   1 )   ( -1 )
( 0  -2   1   1 )   ( -1 )

2. Subtract it:

( 0   0   0   0 )   ( 0   0   0   0 )   ( 0  0   0   0 )
( 0   4  -2  -2 ) - ( 0   4  -2  -2 ) = ( 0  0   0   0 )
( 0  -2  10  -2 )   ( 0  -2   1   1 )   ( 0  0   9  -3 )
( 0  -2  -2   6 )   ( 0  -2   1   1 )   ( 0  0  -3   5 )
Gaussian Elimination

Continuing gives a decomposition into rank-1 terms:

( 16  -4  -8  -4 )
( -4   5   0  -1 ) = v_1 v_1^T + v_2 v_2^T + v_3 v_3^T + v_4 v_4^T
( -8   0  14   0 )
( -4  -1   0   7 )

with v_1 = (4, -1, -2, -1)^T,  v_2 = (0, 2, -1, -1)^T,
     v_3 = (0, 0, 3, -1)^T,    v_4 = (0, 0, 0, 2)^T.

Stacking the v_k^T as rows gives an upper triangular U with M = U^T U:

M = (  4   0   0   0 ) ( 4  -1  -2  -1 )
    ( -1   2   0   0 ) ( 0   2  -1  -1 )
    ( -2  -1   3   0 ) ( 0   0   3  -1 )
    ( -1  -1  -1   2 ) ( 0   0   0   2 )

The computation time is proportional to the sum of the squares
of the numbers of nonzeros in these vectors.
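The rank-1 process above is exactly a Cholesky factorization; a minimal
sketch (our code) that reproduces the slide's 4x4 example:

```python
import numpy as np

# Gaussian elimination as repeated rank-1 subtraction: at step k, peel off
# the rank-1 matrix agreeing with the remainder on row/column k.
M = np.array([[16., -4., -8., -4.],
              [-4.,  5.,  0., -1.],
              [-8.,  0., 14.,  0.],
              [-4., -1.,  0.,  7.]])

n = M.shape[0]
A = M.copy()
U = np.zeros_like(M)
for k in range(n):
    v = A[k] / np.sqrt(A[k, k])   # rank-1 factor agreeing on row/column k
    U[k] = v
    A -= np.outer(v, v)           # subtract it and repeat

print(np.round(U, 3))             # rows (4,-1,-2,-1), (0,2,-1,-1), ...
print(np.allclose(M, U.T @ U))    # True
```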
Gaussian Elimination of Laplacians

If the matrix is a Laplacian, then so is what remains
after subtracting the rank-1 term:

( 16  -4  -8  -4 )   (  4 ) (  4 )^T   ( 0   0   0   0 )
( -4   5   0  -1 ) - ( -1 ) ( -1 )   = ( 0   4  -2  -2 )
( -8   0  14   0 )   ( -2 ) ( -2 )     ( 0  -2  10  -2 )
( -4  -1   0   7 )   ( -1 ) ( -1 )     ( 0  -2  -2   6 )

When we eliminate a node, we add a clique on its neighbors.

[Figure: eliminating vertex 1, which has weighted edges to three neighbors,
removes those edges and adds a weighted triangle on the neighbors.]
Approximate Gaussian Elimination (Kyng & Sachdeva '16)

1. When we eliminate a node, we add a clique on its neighbors.

2. Sparsify that clique, without ever constructing it.

[Figure: the clique added on the neighbors is replaced by a sparse sample.]


Approximate Gaussian Elimination (Kyng & Sachdeva '16)

0. Initialize by randomly permuting the vertices and
making O(\log^2 n) copies of every edge.

1. When eliminating a node of degree d,
add d edges at random between its neighbors,
sampled with probability proportional to
the weight of the edge to the eliminated node.

Total time is O(m \log^3 n).
This can be improved by sacrificing some simplicity.
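A toy sketch of the sampling rule in step 1 (our simplification, not the
Kyng-Sachdeva algorithm: it skips the edge copying, the exact reweighting,
and the error analysis):

```python
import numpy as np

# Toy version of one elimination: replace the clique a vertex would create
# by d sampled edges among its neighbors, weighted by edges to the vertex.
def eliminate(adj, v, rng=np.random.default_rng(0)):
    """adj is a dict of dicts: adj[u][w] = edge weight. Removes v in place."""
    nbrs = list(adj[v].items())                  # [(neighbor, weight), ...]
    d = len(nbrs)
    total = sum(w for _, w in nbrs)
    p = np.array([w for _, w in nbrs]) / total
    if d >= 2:
        for _ in range(d):                       # d random fill edges
            i, j = rng.choice(d, size=2, replace=False, p=p)
            u, wu = nbrs[i]
            x, wx = nbrs[j]
            fill = wu * wx / total               # exact clique weight; a faithful
            adj[u][x] = adj[u].get(x, 0.0) + fill   # version rescales for sampling
            adj[x][u] = adj[x].get(u, 0.0) + fill
    for u, _ in nbrs:                            # delete v and its edges
        del adj[u][v]
    del adj[v]
```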


Approximate Gaussian Elimination (Kyng & Sachdeva '16)

Analysis by random matrix theory:

Write U^T U as a sum of random matrices, with

  E[ U^T U ] = L_G

The random permutation and the edge copying
control the variances of the random matrices.

Apply the matrix Freedman inequality (Tropp '11).


Recent Developments

Other families of linear systems (Kyng, Lee, Peng, Sachdeva, S '16):

complex-weighted Laplacians:
  (       1        -e^{i\theta} )
  ( -e^{-i\theta}        1      )

connection Laplacians:
  (  I    -Q )
  ( -Q^T   I )

Laplacians.jl
To learn more

My web page on:
Laplacian linear equations, sparsification, local graph
clustering, low-stretch spanning trees, and so on.

My class notes from
"Graphs and Networks" and "Spectral Graph Theory".

Lx = b, by Nisheeth Vishnoi
