
PARLab Parallel Boot Camp

Cloud Computing with MapReduce and Hadoop

Matei Zaharia
Electrical Engineering and Computer Sciences
University of California, Berkeley
What is Cloud Computing?

• “Cloud” refers to large Internet services like Google, Yahoo, etc. that run on 10,000’s of machines

• More recently, “cloud computing” refers to services by these companies that let external customers rent computing cycles on their clusters
– Amazon EC2: virtual machines at 10¢/hour, billed hourly
– Amazon S3: storage at 15¢/GB/month

• Attractive features:
– Scale: up to 100’s of nodes
– Fine-grained billing: pay only for what you use
– Ease of use: sign up with credit card, get root access
What is MapReduce?

• Simple data-parallel programming model designed for scalability and fault-tolerance

• Pioneered by Google
– Processes 20 petabytes of data per day

• Popularized by open-source Hadoop project
– Used at Yahoo!, Facebook, Amazon, …
What is MapReduce used for?

• At Google:
– Index construction for Google Search
– Article clustering for Google News
– Statistical machine translation
• At Yahoo!:
– “Web map” powering Yahoo! Search
– Spam detection for Yahoo! Mail
• At Facebook:
– Data mining
– Ad optimization
– Spam detection
Example: Facebook Lexicon

www.facebook.com/lexicon
What is MapReduce used for?

• In research:
– Astronomical image analysis (Washington)
– Bioinformatics (Maryland)
– Analyzing Wikipedia conflicts (PARC)
– Natural language processing (CMU)
– Particle physics (Nebraska)
– Ocean climate simulation (Washington)
– <Your application here>
Outline

• MapReduce architecture
• Example applications
• Getting started with Hadoop
• Higher-level languages over Hadoop: Pig and Hive
• Amazon Elastic MapReduce
MapReduce Design Goals

1. Scalability to large data volumes:
– 1000’s of machines, 10,000’s of disks

2. Cost-efficiency:
– Commodity machines (cheap, but unreliable)
– Commodity network
– Automatic fault-tolerance (fewer administrators)
– Easy to use (fewer programmers)
Typical Hadoop Cluster

(Diagram: rack switches uplinked to an aggregation switch)

• 40 nodes/rack, 1000-4000 nodes in cluster
• 1 Gbps bandwidth within rack, 8 Gbps out of rack
• Node specs (Yahoo terasort):
8 x 2GHz cores, 8 GB RAM, 4 disks (= 4 TB?)

Image from http://wiki.apache.org/hadoop-data/attachments/HadoopPresentations/attachments/YahooHadoopIntro-apachecon-us-2008.pdf


Typical Hadoop Cluster

Image from http://wiki.apache.org/hadoop-data/attachments/HadoopPresentations/attachments/aw-apachecon-eu-2009.pdf


Challenges

1. Cheap nodes fail, especially if you have many
– Mean time between failures for 1 node = 3 years
– Mean time between failures for 1000 nodes = 1 day
– Solution: Build fault-tolerance into system

2. Commodity network = low bandwidth
– Solution: Push computation to the data

3. Programming distributed systems is hard
– Solution: Data-parallel programming model: users write “map” & “reduce” functions, system distributes work and handles faults
Hadoop Components

• Distributed file system (HDFS)
– Single namespace for entire cluster
– Replicates data 3x for fault-tolerance

• MapReduce framework
– Executes user jobs specified as “map” and “reduce”
functions
– Manages work distribution & fault-tolerance
Hadoop Distributed File System

• Files split into 128MB blocks
• Blocks replicated across several datanodes (usually 3)
• Single namenode stores metadata (file names, block locations, etc)
• Optimized for large files, sequential reads
• Files are append-only

(Diagram: namenode maps File1 to blocks 1-4; each block is stored on three of the four datanodes)
MapReduce Programming Model

• Data type: key-value records

• Map function:
(Kin, Vin) → list(Kinter, Vinter)

• Reduce function:
(Kinter, list(Vinter)) → list(Kout, Vout)
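This model is easy to emulate on a single machine. The sketch below is not Hadoop’s API, just a toy Python illustration of the record flow: map each input record to intermediate (key, value) pairs, group by key (the “shuffle”), then reduce each group.

from collections import defaultdict

def run_mapreduce(records, map_fn, reduce_fn):
    # Map phase: each input record may emit any number of (key, value) pairs
    intermediate = defaultdict(list)
    for k_in, v_in in records:
        for k, v in map_fn(k_in, v_in):
            intermediate[k].append(v)
    # "Shuffle & sort" is the grouping above; reduce runs once per distinct key
    out = []
    for k in sorted(intermediate):
        out.extend(reduce_fn(k, intermediate[k]))
    return out

# Example use (word count, previewing the next slide):
docs = [("doc1", "the quick brown fox"), ("doc2", "the fox ate the mouse")]
print(run_mapreduce(docs,
                    lambda name, text: [(w, 1) for w in text.split()],
                    lambda word, counts: [(word, sum(counts))]))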
Example: Word Count

def mapper(line):
foreach word in line.split():
output(word, 1)

def reducer(key, values):
output(key, sum(values))
Word Count Execution

(Diagram: Input → Map → Shuffle & Sort → Reduce → Output. The three input lines “the quick brown fox”, “the fox ate the mouse”, and “how now brown cow” each go to a Map task that emits (word, 1) pairs; the shuffle groups the pairs by word across two Reduce tasks, which output the final counts: ate 1, brown 2, cow 1, fox 2, how 1, mouse 1, now 1, quick 1, the 3)
MapReduce Execution Details

• Single master controls job execution on multiple slaves

• Mappers preferentially placed on same node or same rack as their input block
– Minimizes network usage

• Mappers save outputs to local disk before serving them to reducers
– Allows recovery if a reducer crashes
– Allows having more reducers than nodes
An Optimization: The Combiner

• A combiner is a local aggregation function for repeated keys produced by same map
• Works for associative functions like sum, count, max
• Decreases size of intermediate data

• Example: map-side aggregation for Word Count:

def combiner(key, values):
output(key, sum(values))
Word Count with Combiner

(Diagram: Input → Map & Combine → Shuffle & Sort → Reduce → Output. Same dataflow as the previous slide, except the second Map task’s combiner merges its two “the” occurrences and emits (the, 2) instead of two (the, 1) pairs, shrinking the intermediate data; the final counts are unchanged)
Fault Tolerance in MapReduce

1. If a task crashes:
– Retry on another node
» OK for a map because it has no dependencies
» OK for reduce because map outputs are on disk
– If the same task fails repeatedly, fail the job or ignore
that input block (user-controlled)

• Note: For these fault tolerance features to work, your map and reduce tasks must be side-effect-free
Fault Tolerance in MapReduce

2. If a node crashes:
– Re-launch its current tasks on other nodes
– Re-run any maps the node previously ran
» Necessary because their output files were lost along
with the crashed node
Fault Tolerance in MapReduce

3. If a task is going slowly (straggler):
– Launch second copy of task on another node (“speculative execution”)
– Take the output of whichever copy finishes first, and kill the other

• Surprisingly important in large clusters
– Stragglers occur frequently due to failing hardware, software bugs, misconfiguration, etc
– Single straggler may noticeably slow down a job
Takeaways

• By providing a data-parallel programming model, MapReduce can control job execution in useful ways:
– Automatic division of job into tasks
– Automatic placement of computation near data
– Automatic load balancing
– Recovery from failures & stragglers

• User focuses on application, not on complexities of distributed computing
Outline

• MapReduce architecture
• Example applications
• Getting started with Hadoop
• Higher-level languages over Hadoop: Pig and Hive
• Amazon Elastic MapReduce
1. Search

• Input: (lineNumber, line) records
• Output: lines matching a given pattern

• Map:
if(line matches pattern):
output(line)

• Reduce: identity function
– Alternative: no reducer (map-only job)
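A minimal runnable version of this map-only job, written as a Hadoop Streaming mapper in Python (the pattern “error” is an arbitrary placeholder; with no reducer specified, the matching lines are simply written out):

# grep_mapper.py — emit only the input lines that match the pattern
import sys, re

PATTERN = re.compile(r"error")   # placeholder pattern; substitute your own

for line in sys.stdin:
    if PATTERN.search(line):
        sys.stdout.write(line)   # identity output; no reduce step needed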
2. Sort

• Input: (key, value) records
• Output: same records, sorted by key

• Map: identity function
• Reduce: identity function

• Trick: Pick partitioning function h such that k1<k2 => h(k1)<h(k2)

(Diagram: identity Map tasks send records such as ant, bee, cow, pig, aardvark, elephant, sheep, yak, zebra to two Reduce tasks partitioned by key range; reducer [A-M] outputs aardvark, ant, bee, cow, elephant and reducer [N-Z] outputs pig, sheep, yak, zebra)
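In Hadoop itself this trick is a custom Java Partitioner (terasort, for example, builds a range partitioner from a sample of the keys); the toy Python function below only illustrates the required property — h is monotone in the key, so each reducer receives a contiguous key range:

# Two reducers, split [A-M] / [N-Z] as in the diagram above
def h(key, num_reducers=2):
    return 0 if key[0].lower() <= 'm' else 1

# Because h never decreases as keys grow, reducer 0's sorted output followed
# by reducer 1's sorted output is a totally sorted list.
words = ["cow", "ant", "zebra", "pig", "bee", "yak"]
parts = [sorted(w for w in words if h(w) == r) for r in range(2)]
print(parts[0] + parts[1])   # ['ant', 'bee', 'cow', 'pig', 'yak', 'zebra']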
3. Inverted Index

• Input: (filename, text) records
• Output: list of files containing each word

• Map:
foreach word in text.split():
output(word, filename)

• Combine: uniquify filenames for each word

• Reduce:
def reduce(word, filenames):
output(word, sort(filenames))
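A local sketch of the same logic in plain Python, using the two files from the example on the next slide (on a cluster the grouping by word would be done by the shuffle):

files = {"hamlet.txt": "to be or not to be",
         "12th.txt": "be not afraid of greatness"}

# Map: emit (word, filename) for every word occurrence
pairs = [(word, name) for name, text in files.items() for word in text.split()]

# Combine + Reduce: keep each filename once per word, then sort the lists
index = {}
for word, name in pairs:
    index.setdefault(word, set()).add(name)
for word in sorted(index):
    print(word, sorted(index[word]))   # e.g. be ['12th.txt', 'hamlet.txt']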
Inverted Index Example

(Diagram: hamlet.txt, “to be or not to be”, maps to (to, hamlet.txt), (be, hamlet.txt), (or, hamlet.txt), (not, hamlet.txt); 12th.txt, “be not afraid of greatness”, maps to (be, 12th.txt), (not, 12th.txt), (afraid, 12th.txt), (of, 12th.txt), (greatness, 12th.txt); the reduce output is afraid → (12th.txt), be → (12th.txt, hamlet.txt), greatness → (12th.txt), not → (12th.txt, hamlet.txt), of → (12th.txt), or → (hamlet.txt), to → (hamlet.txt))
4. Most Popular Words

• Input: (filename, text) records
• Output: top 100 words occurring in the most files

• Two-stage solution:
– Job 1:
» Create inverted index, giving (word, list(file)) records
– Job 2:
» Map each (word, list(file)) to (count, word)
» Sort these records by count as in sort job

• Optimizations:
– Map to (word, 1) instead of (word, file) in Job 1
– Count files in job 1’s reducer rather than job 2’s mapper
– Estimate count distribution in advance and drop rare words
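The same two-stage pipeline can be prototyped locally in a few lines of plain Python (the file contents here are made up; on a cluster each stage would be a separate MapReduce job, with the first optimization above applied):

from collections import defaultdict

files = {"a.txt": "to be or not to be",
         "b.txt": "be not afraid",
         "c.txt": "not now"}

# Job 1: count how many files contain each word (emit (word, 1) once per file)
file_counts = defaultdict(int)
for name, text in files.items():
    for word in set(text.split()):
        file_counts[word] += 1

# Job 2: re-key by count and sort descending, then keep the top 100
top = sorted(((c, w) for w, c in file_counts.items()), reverse=True)[:100]
print(top)   # [(3, 'not'), (2, 'be'), (1, 'to'), ...]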
5. Numerical Integration

• Input: (start, end) records for sub-ranges to integrate
– Easy using custom InputFormat
• Output: integral of f(x) dx over entire range

• Map:
def map(start, end):
sum = 0
for(x = start; x < end; x += step):
sum += f(x) * step
output("", sum)
• Reduce:
def reduce(key, values):
output(key, sum(values))
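A local version of the same computation, using f(x) = x² as an arbitrary example integrand (on Hadoop each (start, end) record would go to one map task, and the single shared key sends every partial sum to one reducer):

def f(x):
    return x * x          # example integrand

step = 0.001
ranges = [(0.0, 0.5), (0.5, 1.0)]   # one record per map task

# "Map": integrate each sub-range independently
partial_sums = []
for start, end in ranges:
    s, x = 0.0, start
    while x < end:
        s += f(x) * step
        x += step
    partial_sums.append(s)

# "Reduce": add up the partial sums; ~0.333 for x^2 over [0, 1]
print(sum(partial_sums))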
Outline

• MapReduce architecture
• Example applications
• Getting started with Hadoop
• Higher-level languages over Hadoop: Pig and Hive
• Amazon Elastic MapReduce
Getting Started with Hadoop

• Download from hadoop.apache.org
• To install locally, unzip and set JAVA_HOME
• Details: hadoop.apache.org/core/docs/current/quickstart.html

• Three ways to write jobs:
– Java API
– Hadoop Streaming (for Python, Perl, etc)
– Pipes API (C++)
Word Count in Java

public class MapClass extends MapReduceBase
implements Mapper<LongWritable, Text, Text, IntWritable> {

private final static IntWritable ONE = new IntWritable(1);

public void map(LongWritable key, Text value,
OutputCollector<Text, IntWritable> out,
Reporter reporter) throws IOException {
String line = value.toString();
StringTokenizer itr = new StringTokenizer(line);
while (itr.hasMoreTokens()) {
out.collect(new Text(itr.nextToken()), ONE);
}
}
}
Word Count in Java

public class ReduceClass extends MapReduceBase
implements Reducer<Text, IntWritable, Text, IntWritable> {

public void reduce(Text key, Iterator<IntWritable> values,
OutputCollector<Text, IntWritable> out,
Reporter reporter) throws IOException {
int sum = 0;
while (values.hasNext()) {
sum += values.next().get();
}
out.collect(key, new IntWritable(sum));
}
}
Word Count in Java
public static void main(String[] args) throws Exception {
JobConf conf = new JobConf(WordCount.class);
conf.setJobName("wordcount");

conf.setMapperClass(MapClass.class);
conf.setCombinerClass(ReduceClass.class);
conf.setReducerClass(ReduceClass.class);

FileInputFormat.setInputPaths(conf, args[0]);
FileOutputFormat.setOutputPath(conf, new Path(args[1]));

conf.setOutputKeyClass(Text.class); // out keys are words (strings)
conf.setOutputValueClass(IntWritable.class); // values are counts

JobClient.runJob(conf);
}
Word Count in Python with Hadoop Streaming

Mapper.py:
import sys
for line in sys.stdin:
    for word in line.split():
        print(word.lower() + "\t" + "1")

Reducer.py:
import sys
counts = {}
for line in sys.stdin:
    word, count = line.split("\t")
    counts[word] = counts.get(word, 0) + int(count)
for word, count in counts.items():
    print(word + "\t" + str(count))
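These scripts are launched with the streaming jar bundled with Hadoop; the jar path and the HDFS input/output paths below are placeholders that vary by installation:

hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-*-streaming.jar \
  -input /user/me/input -output /user/me/output \
  -mapper Mapper.py -reducer Reducer.py \
  -file Mapper.py -file Reducer.py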
Outline

• MapReduce architecture
• Example applications
• Getting started with Hadoop
• Higher-level languages over Hadoop: Pig and Hive
• Amazon Elastic MapReduce
Motivation

• Many parallel algorithms can be expressed by a series of MapReduce jobs

• But MapReduce is fairly low-level: must think about keys, values, partitioning, etc

• Can we capture common “job building blocks”?
Pig

• Started at Yahoo! Research
• Runs about 30% of Yahoo!’s jobs
• Features:
– Expresses sequences of MapReduce jobs
– Data model: nested “bags” of items
– Provides relational (SQL) operators (JOIN, GROUP BY, etc)
– Easy to plug in Java functions
– Pig Pen development environment for Eclipse
An Example Problem

Suppose you have user data in one file, page view data in another, and you need to find the top 5 most visited pages by users aged 18-25.

(Dataflow: Load Users, Load Pages → Filter by age → Join on name → Group on url → Count clicks → Order by clicks → Take top 5)

Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt


In MapReduce
(Slide shows roughly 170 lines of Java MapReduce code for this task: five chained jobs — Load Pages, Load and Filter Users, Join Users and Pages, Group URLs, Top 100 sites — wired together with JobControl. The point is the amount of boilerplate compared to the Pig Latin version on the next slide.)
Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt


In Pig Latin

Users = load 'users' as (name, age);
Filtered = filter Users by age >= 18 and age <= 25;
Pages = load 'pages' as (user, url);
Joined = join Filtered by name, Pages by user;
Grouped = group Joined by url;
Summed = foreach Grouped generate group,
count(Joined) as clicks;
Sorted = order Summed by clicks desc;
Top5 = limit Sorted 5;

store Top5 into 'top5sites';

Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt


Ease of Translation

Notice how naturally the components of the job translate into Pig Latin.

Load Users / Load Pages → Users = load … ; Pages = load …
Filter by age → Filtered = filter …
Join on name → Joined = join …
Group on url → Grouped = group …
Count clicks → Summed = … count() …
Order by clicks → Sorted = order …
Take top 5 → Top5 = limit …

Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt


Ease of Translation

Notice how naturally the components of the job translate into Pig Latin.

Job 1: Load Users / Load Pages, Filter by age, Join on name → Users = load … ; Pages = load … ; Filtered = filter … ; Joined = join …
Job 2: Group on url, Count clicks → Grouped = group … ; Summed = … count() …
Job 3: Order by clicks, Take top 5 → Sorted = order … ; Top5 = limit …

Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt


Hive

• Developed at Facebook
• Used for majority of Facebook jobs
• “Relational database” built on Hadoop
– Maintains list of table schemas
– SQL-like query language (HQL)
– Can call Hadoop Streaming scripts from HQL
– Supports table partitioning, clustering, complex
data types, some optimizations
Sample Hive Queries
• Find top 5 pages visited by users aged 18-25:
SELECT p.url, COUNT(1) as clicks
FROM users u JOIN page_views p ON (u.name = p.user)
WHERE u.age >= 18 AND u.age <= 25
GROUP BY p.url
ORDER BY clicks DESC
LIMIT 5;

• Filter page views through Python script:
SELECT TRANSFORM(p.user, p.date)
USING 'map_script.py'
AS dt, uid CLUSTER BY dt
FROM page_views p;
Outline

• MapReduce architecture
• Example applications
• Getting started with Hadoop
• Higher-level languages over Hadoop: Pig and Hive
• Amazon Elastic MapReduce
Amazon Elastic MapReduce

• Provides a web-based interface and command-line tools for running Hadoop jobs on Amazon EC2
• Data stored in Amazon S3
• Monitors job and shuts down machines after use
• Small extra charge on top of EC2 pricing

• If you want more control over how Hadoop runs, you can launch a Hadoop cluster on EC2 manually using the scripts in src/contrib/ec2
Elastic MapReduce Workflow
(Screenshots stepping through the Elastic MapReduce web interface)
Conclusions

• MapReduce programming model hides the complexity of work distribution and fault tolerance

• Principal design philosophies:
– Make it scalable, so you can throw hardware at problems
– Make it cheap, lowering hardware, programming and admin costs

• MapReduce is not suitable for all problems, but when it works, it may save you quite a bit of time

• Cloud computing makes it straightforward to start using Hadoop (or other parallel software) at scale
Resources
• Hadoop: http://hadoop.apache.org/core/
• Pig: http://hadoop.apache.org/pig
• Hive: http://hadoop.apache.org/hive
• Video tutorials: http://www.cloudera.com/hadoop-training

• Amazon Web Services: http://aws.amazon.com/

• Amazon Elastic MapReduce guide:
http://docs.amazonwebservices.com/ElasticMapReduce/latest/GettingStartedGuide/

• My email: [email protected]
