
Distributed Computing Seminar

Lecture 5: Graph Algorithms & PageRank

Christophe Bisciglia, Aaron Kimball, & Sierra Michels-Slettvet
Summer 2007


Except as otherwise noted, the content of this presentation is © 2007 Google Inc. and licensed under the Creative Commons Attribution 2.5 License.

Outline
Motivation
Graph Representations
Breadth-First Search & Shortest-Path Finding
PageRank

Motivating Concepts
Performing computation on a graph data structure requires processing at each node.
Each node contains node-specific data as well as links (edges) to other nodes.
Computation must traverse the graph and perform the computation step at each node.

How do we traverse a graph in MapReduce? How do we represent the graph for this?

Breadth-First Search
Breadth-First Search is an iterated algorithm over graphs.
The frontier advances from the origin by one level with each pass.
(Figure: example graph with each node labeled by its BFS level as the frontier expands from the origin.)

Breadth-First Search & MapReduce


Problem: This iteration doesn't fit naturally into a single MapReduce job.
Solution: Use iterated passes through MapReduce: each pass maps some nodes, and its result includes additional nodes that are fed into successive MapReduce passes.

Breadth-First Search & MapReduce


Problem: Sending the entire graph to a map task (or to hundreds or thousands of map tasks) involves an enormous amount of memory.
Solution: Carefully consider how we represent graphs.

Graph Representations
The most straightforward representation of graphs uses references from each node to its neighbors

Direct References
Structure is inherent to the object.
Iteration requires a linked list threaded through the graph.
Requires a common view of shared memory (synchronization!).
Not easily serializable.

    class GraphNode {
      Object data;                    // node-specific payload
      Vector<GraphNode> out_edges;    // direct references to neighbor nodes
      GraphNode iter_next;            // linked list threaded through the graph for iteration
    }

Adjacency Matrices
Another classic graph representation: M[i][j] = 1 implies a link from node i to node j. Naturally encapsulates iteration over nodes.

Example: a four-node graph and its adjacency matrix (row = source node, column = destination):

        1  2  3  4
    1   0  1  0  1
    2   1  0  1  0
    3   0  1  0  1
    4   1  1  0  0

Adjacency Matrices: Sparse Representation


The adjacency matrix for most large graphs (e.g., the web) will be overwhelmingly full of zeros, and each row of the matrix is absurdly long.

Sparse matrices only include non-zero elements

Sparse Matrix Representation


1: (3, 1), (18, 1), (200, 1)
2: (6, 1), (12, 1), (80, 1), (400, 1)
3: (1, 1), (14, 1)

Sparse Matrix Representation


1: 3, 18, 200
2: 6, 12, 80, 400
3: 1, 14
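As a concrete illustration (not from the original slides), the weight-free sparse representation above might be held in a Java adjacency list, where each node ID maps to the list of node IDs it links to:

    import java.util.*;

    // Sparse adjacency-list representation of the example above. Because every
    // weight is 1, only the destination ("column") numbers are stored.
    public class SparseGraph {
        public static void main(String[] args) {
            Map<Integer, List<Integer>> outEdges = new HashMap<>();
            outEdges.put(1, Arrays.asList(3, 18, 200));
            outEdges.put(2, Arrays.asList(6, 12, 80, 400));
            outEdges.put(3, Arrays.asList(1, 14));

            // Iterating over a row touches only its non-zero entries.
            for (Map.Entry<Integer, List<Integer>> row : outEdges.entrySet()) {
                System.out.println(row.getKey() + ": " + row.getValue());
            }
        }
    }

Each row is small enough to serialize and ship to a map task on its own, which is what the later slides rely on.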

Finding the Shortest Path


A common graph search application is finding the shortest path from a start node to one or more target nodes.
Commonly done on a single machine with Dijkstra's Algorithm.
Can we use BFS to find the shortest path via MapReduce?
This is called the single-source shortest path (SSSP) problem.

Finding the Shortest Path: Intuition

We can define the solution to this problem inductively:


DistanceTo(startNode) = 0
For all nodes n directly reachable from startNode, DistanceTo(n) = 1
For all nodes n reachable from some other set of nodes S, DistanceTo(n) = 1 + min(DistanceTo(m), m ∈ S)

From Intuition to Algorithm

A map task receives a node n as a key, and (D, points-to) as its value:
D is the distance to the node from the start.
points-to is a list of nodes reachable from n.
For each p in points-to, emit (p, D+1).
The reduce task gathers all possible distances to a given p and selects the minimum one.
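A minimal, framework-free Java sketch of this map/reduce pair (an illustration with simplified types, not the lecture's code; INF marks a node whose distance is not yet known, and the refinements discussed on the following slides are noted in comments):

    import java.util.*;

    // Illustrative sketch of one shortest-path pass as plain map and reduce
    // functions, independent of any particular MapReduce framework.
    public class ShortestPathStep {

        static final int INF = Integer.MAX_VALUE;   // "distance not yet known"

        // Map: key = node n, value = (D, points-to).
        // For each p in points-to, emit (p, D + 1).
        // (With weighted edges, emit D + w_p instead of D + 1.)
        static void map(int n, int dist, List<Integer> pointsTo, List<int[]> emitted) {
            if (dist != INF) {
                for (int p : pointsTo) {
                    emitted.add(new int[] {p, dist + 1});
                }
            }
            // Re-emit (n, D) so the node's current distance reaches the reducer.
            // (The points-to list should also be re-emitted so the graph structure
            // survives into the next iteration; omitted here for brevity.)
            emitted.add(new int[] {n, dist});
        }

        // Reduce: gather all candidate distances for a node and keep the minimum.
        static int reduce(int node, List<Integer> candidateDistances) {
            int best = INF;
            for (int d : candidateDistances) {
                best = Math.min(best, d);
            }
            return best;
        }
    }

A driver outside MapReduce would group the emitted pairs by node, call reduce once per node, and feed the result back in as the next pass's input.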

What This Gives Us


This MapReduce task can advance the known frontier by one hop.
To perform the whole BFS, a non-MapReduce component then feeds the output of this step back into the MapReduce task for another iteration.

Problem: Where'd the points-to list go?
Solution: The mapper emits (n, points-to) as well.

Blow-up and Termination


This algorithm starts from one node.
Subsequent iterations include many more nodes of the graph as the frontier advances.
Does this ever terminate?

Yes!

Eventually, routes between nodes will stop being discovered and no better distances will be found.
When every distance stays the same from one pass to the next, we stop.
The mapper should also emit (n, D) to ensure that the node's current distance is carried into the reducer.

Adding weights
Weighted-edge shortest path is more useful than the cost == 1 approach.
Simple change: the points-to list in the map task includes a weight w for each pointed-to node.
Emit (p, D + w_p) instead of (p, D + 1) for each node p.
Works for positively weighted graphs.

Comparison to Dijkstra
Dijkstra's algorithm is more efficient because at any step it only pursues edges out of the minimum-cost node on the frontier.
The MapReduce version explores all paths in parallel; it is not as efficient overall, but the architecture is more scalable.
Equivalent to Dijkstra for the weight = 1 case.

PageRank: Random Walks Over The Web


If a user starts at a random web page and surfs by clicking links and randomly entering new URLs, what is the probability that s/he will arrive at a given page?
The PageRank of a page captures this notion.
More popular or worthwhile pages get a higher rank.

PageRank: Visually
(Figure: example web pages — www.cnn.com, en.wikipedia.org, www.nytimes.com — and the links between them.)

PageRank: Formula
Given page A, and pages T1 through Tn linking to A, PageRank is defined as:
PR(A) = (1-d) + d (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn))
C(P) is the cardinality (out-degree) of page P.
d is the damping (random URL) factor.

PageRank: Intuition

Calculation is iterative: PR_{i+1} is based on PR_i.
Each page distributes its PR_i to all pages it links to.
Linkees add up their awarded rank fragments to find their PR_{i+1}.
d is a tunable parameter (usually 0.85) encapsulating the random jump factor.

PR(A) = (1-d) + d (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn))
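As a worked example with made-up numbers: suppose page A is linked to by T1 (PR = 0.5, two outgoing links) and T2 (PR = 0.25, five outgoing links), and d = 0.85. Then PR(A) = (1 - 0.85) + 0.85 * (0.5/2 + 0.25/5) = 0.15 + 0.85 * 0.3 = 0.405.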

PageRank: Issues
Is PageRank guaranteed to converge? How quickly? What is the correct value of d, and how sensitive is the algorithm to it? What is an efficient algorithm to solve this?

PageRank: First Implementation


Create two tables, 'current' and 'next', holding the PageRank for each page. Seed 'current' with initial PR values.
Iterate over all pages in the graph, distributing PR from 'current' into 'next' of linkees.
current := next; next := fresh_table();
Go back to the iteration step, or end if converged.
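A single-machine Java sketch of this loop, using a made-up three-page graph and a fixed iteration count in place of a real convergence test:

    import java.util.*;

    // Illustrative sequential implementation of the 'current'/'next' iteration.
    public class PageRankSequential {
        static final double D = 0.85;   // damping factor

        public static void main(String[] args) {
            // Hypothetical link structure: page -> pages it links to.
            Map<String, List<String>> links = new HashMap<>();
            links.put("A", Arrays.asList("B", "C"));
            links.put("B", Arrays.asList("C"));
            links.put("C", Arrays.asList("A"));

            // Seed 'current' with initial PR values.
            Map<String, Double> current = new HashMap<>();
            for (String page : links.keySet()) current.put(page, 1.0);

            for (int iter = 0; iter < 20; iter++) {   // fixed count stands in for a convergence test
                Map<String, Double> next = new HashMap<>();
                for (String page : links.keySet()) next.put(page, 1.0 - D);   // the (1-d) term

                // Distribute PR from 'current' into 'next' of linkees.
                for (Map.Entry<String, List<String>> e : links.entrySet()) {
                    double share = current.get(e.getKey()) / e.getValue().size();
                    for (String target : e.getValue()) {
                        next.put(target, next.get(target) + D * share);
                    }
                }
                current = next;   // current := next; a fresh 'next' is built on the next pass
            }
            System.out.println(current);
        }
    }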

Distribution of the Algorithm

Key insights allowing parallelization:
The 'next' table depends on 'current', but not on any other rows of 'next'.
Individual rows of the adjacency matrix can be processed in parallel.
Sparse matrix rows are relatively small.

Distribution of the Algorithm

Consequences of insights:
We can map each row of 'current' to a list of PageRank fragments to assign to linkees.
These fragments can be reduced into a single PageRank value for a page by summing.
The graph representation can be even more compact; since each element is simply 0 or 1, only transmit the column numbers where it's 1.

Map step: break page rank into even fragments to distribute to link targets

Reduce step: add together fragments into next PageRank

Iterate for next step...

Phase 1: Parse HTML

The map task takes (URL, page content) pairs and maps them to (URL, (PR_init, list-of-urls)):
PR_init is the seed PageRank for URL.
list-of-urls contains all pages pointed to by URL.
The reduce task is just the identity function.
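A sketch of what the Phase 1 mapper might look like in Java; extractLinks is a hypothetical stand-in for real HTML link extraction, and the seed value of 1.0 is an arbitrary choice for illustration:

    import java.util.*;

    // Illustrative Phase 1 map step: (URL, page content) -> (URL, (PR_init, list-of-urls)).
    public class Phase1Parse {

        // Value carried between phases: a rank plus the page's outgoing links.
        static class RankAndLinks {
            final double rank;
            final List<String> urls;
            RankAndLinks(double rank, List<String> urls) { this.rank = rank; this.urls = urls; }
        }

        static final double PR_INIT = 1.0;   // seed PageRank (arbitrary for this sketch)

        static RankAndLinks map(String url, String pageContent) {
            return new RankAndLinks(PR_INIT, extractLinks(pageContent));
        }

        // Hypothetical helper: a real implementation would parse <a href="..."> tags.
        static List<String> extractLinks(String html) {
            return new ArrayList<>();
        }
    }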

Phase 2: PageRank Distribution

The map task takes (URL, (cur_rank, url_list)):
For each u in url_list, emit (u, cur_rank/|url_list|).
Emit (URL, url_list) to carry the points-to list along through iterations.

PR(A) = (1-d) + d (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn))

Phase 2: PageRank Distribution

The reduce task gets (URL, url_list) and many (URL, val) values:
Sum the vals and fix up with the damping factor d: new_rank = (1-d) + d * sum.
Emit (URL, (new_rank, url_list)).

PR(A) = (1-d) + d (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn))
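A framework-free Java sketch of these two Phase 2 steps with simplified types (an illustration only; a real job would use the framework's own key/value classes):

    import java.util.*;

    // Illustrative Phase 2 map and reduce functions.
    public class Phase2Distribute {
        static final double D = 0.85;   // damping factor

        // Map: for each u in url_list emit (u, cur_rank / |url_list|),
        // plus (URL, url_list) so the link structure survives the iteration.
        static void map(String url, double curRank, List<String> urlList,
                        List<Object[]> emitted) {
            for (String u : urlList) {
                emitted.add(new Object[] {u, curRank / urlList.size()});
            }
            emitted.add(new Object[] {url, urlList});
        }

        // Reduce: sum the rank fragments for URL, apply new_rank = (1-d) + d * sum,
        // and re-attach the url_list for the next iteration.
        @SuppressWarnings("unchecked")
        static Object[] reduce(String url, List<Object> values) {
            double sum = 0.0;
            List<String> urlList = Collections.emptyList();
            for (Object v : values) {
                if (v instanceof List) {
                    urlList = (List<String>) v;   // the carried points-to list
                } else {
                    sum += (Double) v;            // a rank fragment
                }
            }
            double newRank = (1.0 - D) + D * sum;
            return new Object[] {url, newRank, urlList};
        }
    }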

Finishing up...
A non-parallelizable component determines whether convergence has been achieved (a fixed number of iterations? comparison of rank values between iterations?).
If so, write out the PageRank lists - done!
Otherwise, feed the output of Phase 2 into another Phase 2 iteration.
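One way that convergence test might be written (an assumption for illustration, not the lecture's method): compare the old and new rank tables and stop once no page has moved by more than a small threshold.

    import java.util.Map;

    // Illustrative convergence test over two successive rank tables.
    public class ConvergenceCheck {
        static boolean converged(Map<String, Double> current,
                                 Map<String, Double> next,
                                 double epsilon) {
            for (Map.Entry<String, Double> e : current.entrySet()) {
                double updated = next.getOrDefault(e.getKey(), 0.0);
                if (Math.abs(updated - e.getValue()) > epsilon) {
                    return false;
                }
            }
            return true;
        }
    }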

Conclusions
MapReduce isn't the greatest at iterated computation, but it still helps run the heavy lifting.
The key element in parallelization is that the PageRank computations in a given step are independent of one another.
Parallelization requires thinking about the minimum data partitions to transmit (e.g., compact representations of graph rows).
Even the implementation shown today doesn't actually scale to the whole Internet, but it works for intermediate-sized graphs.
