
MapReduce: Theory and Implementation

CSE 490H Intro to Distributed Computing, modified by George Lee


Except as otherwise noted, the content of this presentation is licensed under the Creative Commons Attribution 2.5 License.

Outline
Lisp/ML map/fold review
MapReduce overview
Example

Map
map f lst: ('a -> 'b) -> ('a list) -> ('b list)
Creates a new list by applying f to each element of the input list; returns output in order.
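For concreteness, a minimal Python rendering of the same idea (my_map is just an illustrative name):

def my_map(f, lst):
    # Apply f to every element; build a new list, preserving order.
    return [f(x) for x in lst]

print(my_map(lambda x: x * x, [1, 2, 3]))  # [1, 4, 9]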

Fold
fold f x0 lst: ('a * 'b -> 'b) -> 'b -> ('a list) -> 'b
Moves across a list, applying f to each element plus an accumulator; f returns the next accumulator value, which is then combined with the next element of the list.
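A minimal Python rendering of fold; functools.reduce is the built-in near-equivalent, though it passes the accumulator first where the ML signature above puts the element first.

from functools import reduce

def my_fold(f, x0, lst):
    # f combines each element with the running accumulator,
    # starting from x0, matching ('a * 'b -> 'b) above.
    acc = x0
    for x in lst:
        acc = f(x, acc)
    return acc

print(my_fold(lambda x, acc: x + acc, 0, [1, 2, 3, 4]))  # 10
print(reduce(lambda acc, x: acc + x, [1, 2, 3, 4], 0))   # 10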

Implicit Parallelism In map

In a purely functional setting, the elements of a list being computed by map cannot see the effects of the computations on the other elements.
Because f is applied to each element independently, the order of application does not matter: we can reorder or parallelize execution.
This is the secret that MapReduce exploits.
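Python's multiprocessing can exploit exactly this freedom; a small sketch, where square is an invented pure function:

from multiprocessing import Pool

def square(x):
    # Pure function: no side effects, so the order (and place) of
    # application is irrelevant.
    return x * x

if __name__ == "__main__":
    with Pool(4) as pool:
        # Partitions the work across 4 processes, yet returns the
        # results in the original list order.
        print(pool.map(square, range(10)))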

MapReduce

Motivation: Large Scale Data Processing


Want to process lots of data (> 1 TB)
Want to parallelize across hundreds/thousands of CPUs
Want to make this easy

MapReduce
Automatic parallelization & distribution
Fault-tolerant
Provides status and monitoring tools
Clean abstraction for programmers

Programming Model
Borrows from functional programming. Users implement an interface of two functions:

map (in_key, in_value) -> (out_key, intermediate_value) list


reduce (out_key, intermediate_value list) -> out_value list

map
Records from the data source (lines out of files, rows of a database, etc.) are fed into the map function as key/value pairs: e.g., (filename, line). map() produces one or more intermediate values along with an output key from the input.

reduce
After the map phase is over, all the intermediate values for a given output key are combined together into a list. reduce() combines those intermediate values into one or more final values for that same output key (in practice, usually only one final value per key).

Parallelism
map() functions run in parallel, creating different intermediate values from different input data sets.
reduce() functions also run in parallel, each working on a different output key.
All values are processed independently.
Bottleneck: the reduce phase can't start until the map phase is completely finished.
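To make the phase structure concrete, here is a toy single-machine driver in Python; run_job and its calling conventions are my own sketch, not the API of any real MapReduce library.

from collections import defaultdict
from concurrent.futures import ProcessPoolExecutor

def run_job(map_fn, reduce_fn, inputs, workers=4):
    # Toy driver: map_fn takes one input record and returns a list of
    # (key, value) pairs; reduce_fn takes a (key, value_list) tuple
    # and returns a (key, result) tuple.
    with ProcessPoolExecutor(workers) as pool:
        # Map phase: each input record is processed independently.
        mapped = list(pool.map(map_fn, inputs))
        # Barrier + shuffle: group intermediate values by key; this is
        # why no reduce can start before every map task has finished.
        groups = defaultdict(list)
        for pairs in mapped:
            for key, value in pairs:
                groups[key].append(value)
        # Reduce phase: each key is reduced independently.
        return dict(pool.map(reduce_fn, groups.items()))

Note that map_fn and reduce_fn must be module-level functions so the worker processes can unpickle them.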

Example: Count word occurrences


map(String input_key, String input_value):
  // input_key: document name
  // input_value: document contents
  for each word w in input_value:
    EmitIntermediate(w, "1");

reduce(String output_key, Iterator intermediate_values):
  // output_key: a word
  // intermediate_values: a list of counts
  int result = 0;
  for each v in intermediate_values:
    result += ParseInt(v);
  Emit(AsString(result));
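For readers who want to run it, here is a rough Python transcription of the pseudo-code above; the inline grouping loop stands in for the shuffle the framework normally performs, and the sample documents are invented.

from collections import defaultdict

def map_fn(input_key, input_value):
    # input_key: document name; input_value: document contents
    return [(w, 1) for w in input_value.split()]

def reduce_fn(output_key, intermediate_values):
    # output_key: a word; intermediate_values: a list of counts
    return sum(intermediate_values)

docs = {"doc1": "the quick brown fox", "doc2": "the lazy dog"}

groups = defaultdict(list)              # shuffle: group counts by word
for name, text in docs.items():
    for k, v in map_fn(name, text):
        groups[k].append(v)

counts = {k: reduce_fn(k, vs) for k, vs in groups.items()}
print(counts)  # {'the': 2, 'quick': 1, 'brown': 1, 'fox': 1, 'lazy': 1, 'dog': 1}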

Example vs. Actual Source Code


Example is written in pseudo-code
Actual implementation is in C++, using a MapReduce library
Bindings for Python and Java exist via interfaces
True code is somewhat more involved (defines how the input key/values are divided up and accessed, etc.)

Locality
Master program divvies up tasks based on the location of data: it tries to schedule map() tasks on the same machine as the physical file data, or at least on the same rack. map() task inputs are divided into 64 MB blocks: the same size as Google File System chunks.

Fault Tolerance

Master detects worker failures
  Re-executes completed & in-progress map() tasks
  Re-executes in-progress reduce() tasks

Master notices particular input key/values cause crashes in map(), and skips those values on re-execution.

Effect: can work around bugs in third-party libraries!

Optimizations

No reduce can start until map is complete:
  A single slow disk controller can rate-limit the whole process

Master redundantly executes slow-moving map tasks; uses the results of whichever copy finishes first

Optimizations
Combiner functions can run on the same machine as a mapper. This causes a mini-reduce phase to occur before the real reduce phase, to save bandwidth.

Under what conditions is it sound to use a combiner?
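One way to see the answer for word count: pre-summing is sound because the reduce operation (integer addition) is associative and commutative, so partial sums can themselves be re-reduced without changing the result. Below is a minimal Python sketch of that idea; map_with_combiner is an invented name, not a real framework hook.

from collections import Counter

def map_with_combiner(input_key, input_value):
    # Pre-sum the counts for this one input split before anything
    # crosses the network; each word is emitted once with a partial
    # sum instead of once per occurrence.
    return list(Counter(input_value.split()).items())

print(map_with_combiner("doc1", "to be or not to be"))
# [('to', 2), ('be', 2), ('or', 1), ('not', 1)]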

MapReduce Conclusions

MapReduce has proven to be a useful abstraction
Greatly simplifies large-scale computations at Google
Functional programming paradigm can be applied to large-scale applications
Fun to use: focus on problem, let library deal w/ messy details

PageRank and MapReduce


PageRank: Formula
Given page A, and pages T1 through Tn linking to A, PageRank is defined as:
PR(A) = (1-d) + d (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn))

C(P) is the cardinality (out-degree) of page P
d is the damping (random URL) factor
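As a worked example with invented numbers: suppose d = 0.85 and page A has two in-links, T1 with PR(T1) = 0.5 and C(T1) = 2, and T2 with PR(T2) = 0.3 and C(T2) = 3. Then PR(A) = 0.15 + 0.85 * (0.5/2 + 0.3/3) = 0.15 + 0.85 * 0.35 = 0.4475.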

PageRank: Intuition

Calculation is iterative: PR_{i+1} is based on PR_i.
Each page distributes its PR_i to all pages it links to.
Linkees add up their awarded rank fragments to find their PR_{i+1}.
d is a tunable parameter (usually 0.85) encapsulating the random jump factor.

PR(A) = (1-d) + d (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn))

PageRank: First Implementation


1. Create two tables, 'current' and 'next', holding the PageRank for each page; seed 'current' with initial PR values
2. Iterate over all pages in the graph (represented as a sparse adjacency matrix), distributing PR from 'current' into 'next' of linkees
3. current := next; next := fresh_table()
4. Go back to step 2, or end if converged (a minimal sketch follows)
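A minimal sequential sketch of this loop in Python; the three-page graph is invented and a fixed iteration count stands in for a real convergence test.

# 'graph' maps each page to the pages it links to (its linkees).
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
d = 0.85

current = {page: 1.0 for page in graph}            # seed PR values
for _ in range(20):                                # fixed iteration count
    next_table = {page: 0.0 for page in graph}     # fresh 'next' table
    for page, links in graph.items():
        share = current[page] / len(links)         # PR(P) / C(P)
        for linkee in links:
            next_table[linkee] += share            # distribute fragments
    # fix up with the damping factor, then swap the tables
    current = {p: (1 - d) + d * r for p, r in next_table.items()}

print(current)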

Distribution of the Algorithm

Key insights allowing parallelization:


The 'next' table depends on 'current', but not on any other rows of 'next'
Individual rows of the adjacency matrix can be processed in parallel
Sparse matrix rows are relatively small

Distribution of the Algorithm

Consequences of insights:
We can map each row of 'current' to a list of PageRank fragments to assign to linkees
These fragments can be reduced into a single PageRank value for a page by summing
Graph representation can be even more compact: since each element is simply 0 or 1, only transmit column numbers where it's 1

Phase 1: Parse HTML

Map task takes (URL, page content) pairs and maps them to (URL, (PR_init, list-of-urls))
  PR_init is the seed PageRank for URL
  list-of-urls contains all pages pointed to by URL

Reduce task is just the identity function
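A rough Python sketch of this phase. The regex-based link extraction and the PR_INIT seed constant are illustrative assumptions; real HTML parsing is considerably more involved.

import re

PR_INIT = 1.0  # assumed seed value; the slides do not fix one

def phase1_map(url, page_content):
    # Crude link extraction, for illustration only.
    list_of_urls = re.findall(r'href="([^"]+)"', page_content)
    return (url, (PR_INIT, list_of_urls))

def phase1_reduce(url, value):
    return value  # identity: pass (PR_init, list-of-urls) through

print(phase1_map("a.html", '<a href="b.html">b</a> <a href="c.html">c</a>'))
# ('a.html', (1.0, ['b.html', 'c.html']))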

Phase 2: PageRank Distribution

Map task takes (URL, (cur_rank, url_list))


For each u in url_list, emit (u, cur_rank/|url_list|)
Emit (URL, url_list) to carry the points-to list along through iterations

PR(A) = (1-d) + d (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn))

Phase 2: PageRank Distribution

Reduce task gets (URL, url_list) and many (URL, val) values
Sum vals and fix up with d
Emit (URL, (new_rank, url_list))

PR(A) = (1-d) + d (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn))
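A hedged Python sketch of both Phase 2 tasks. The 'frag'/'links' tagging that lets reduce tell rank fragments apart from the carried-along url_list is my own convention; the slides do not specify one.

d = 0.85

def phase2_map(url, value):
    cur_rank, url_list = value
    # Award cur_rank/|url_list| to each linkee...
    out = [(u, ("frag", cur_rank / len(url_list))) for u in url_list]
    # ...and re-emit the points-to list so it survives the iteration.
    out.append((url, ("links", url_list)))
    return out

def phase2_reduce(url, values):
    url_list, total = [], 0.0
    for tag, v in values:
        if tag == "frag":
            total += v           # sum the awarded rank fragments
        else:
            url_list = v         # recover the points-to list
    new_rank = (1 - d) + d * total   # fix up with the damping factor
    return (url, (new_rank, url_list))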

Finishing up...
A non-parallelizable component determines whether convergence has been achieved (fixed number of iterations? comparison of key values?). If so, write out the PageRank lists and we're done! Otherwise, feed the output of Phase 2 into another Phase 2 iteration.
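One simple convergence test, sketched under the assumption that successive rank tables are compared entry by entry; epsilon is an invented tolerance.

def converged(current, next_table, epsilon=1e-4):
    # Stop when no page's rank moved by more than epsilon.
    return all(abs(next_table[p] - current[p]) < epsilon for p in current)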

Conclusions
MapReduce isn't the greatest at iterated computation, but still helps run the heavy lifting
Key element in parallelization is independent PageRank computations in a given step
Parallelization requires thinking about minimum data partitions to transmit (e.g., compact representations of graph rows)
Even the implementation shown today doesn't actually scale to the whole Internet, but it works for intermediate-sized graphs

Controversial Views
http://databasecolumn.vertica.com/database-innovation/mapreduce-a-major-step-backwards/
http://databasecolumn.vertica.com/database-innovation/mapreduce-ii/
