ParLab Parallel Boot Camp: Cloud Computing with MapReduce and Hadoop


What is MapReduce?

• Simple data-parallel programming model designed for scalability and fault-tolerance

• Pioneered by Google
– Processes 20 petabytes of data per day

• Popularized by open-source Hadoop project


– Used at Yahoo!, Facebook, Amazon, …
What is MapReduce used for?
• At Google:
– Index construction for Google Search
– Article clustering for Google News
– Statistical machine translation
• At Yahoo!:
– “Web map” powering Yahoo! Search
– Spam detection for Yahoo! Mail
• At Facebook:
– Data mining
– Ad optimization
– Spam detection
What is MapReduce used for?
• In research:
– Astronomical image analysis (Washington)
– Bioinformatics (Maryland)
– Analyzing Wikipedia conflicts (PARC)
– Natural language processing (CMU)
– Particle physics (Nebraska)
– Ocean climate simulation (Washington)
– <Your application here>
MapReduce Design Goals
1. Scalability to large data volumes:
– 1000’s of machines, 10,000’s of disks

2. Cost-efficiency:
– Commodity machines (cheap, but unreliable)
– Commodity network
– Automatic fault-tolerance (fewer administrators)
– Easy to use (fewer programmers)
Typical Hadoop Cluster
(Diagram: racks of nodes, each rack connected through a rack switch to an aggregation switch)

• 40 nodes/rack, 1000-4000 nodes in cluster
• 1 Gbps bandwidth within rack, 8 Gbps out of rack
• Node specs (Yahoo terasort): 8 x 2GHz cores, 8 GB RAM, 4 disks (= 4 TB?)

Image from http://wiki.apache.org/hadoop-data/attachments/HadoopPresentations/attachments/YahooHadoopIntro-apachecon-us-2008.pdf


Typical Hadoop Cluster

Image from http://wiki.apache.org/hadoop-data/attachments/HadoopPresentations/attachments/aw-apachecon-eu-2009.pdf


Challenges
1. Cheap nodes fail, especially if you have many
– Mean time between failures for 1 node = 3 years
– Mean time between failures for 1000 nodes = 1 day
– Solution: Build fault-tolerance into system

2. Commodity network = low bandwidth
– Solution: Push computation to the data

3. Programming distributed systems is hard
– Solution: Data-parallel programming model: users write
“map” & “reduce” functions, system distributes work
and handles faults
Hadoop Components
• Distributed file system (HDFS)
– Single namespace for entire cluster
– Replicates data 3x for fault-tolerance

• MapReduce framework
– Executes user jobs specified as “map” and
“reduce” functions
– Manages work distribution & fault-tolerance
Hadoop Distributed File System
• Files split into 128MB blocks
• Blocks replicated across several datanodes (usually 3)
• Single namenode stores metadata (file names, block locations, etc)
• Optimized for large files, sequential reads
• Files are append-only

(Diagram: the namenode records that File1 consists of blocks 1-4; each block is stored on several datanodes)
MapReduce Programming Model
• Data type: key-value records

• Map function:
(Kin, Vin) → list(Kinter, Vinter)

• Reduce function:
(Kinter, list(Vinter)) → list(Kout, Vout)
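
• A minimal sketch (plain Python, not the Hadoop API) of a map/reduce pair matching these signatures, using word count; the function names are illustrative only:

def map_fn(key, value):
    # key: line offset (Kin), value: line of text (Vin)
    return [(word, 1) for word in value.split()]   # list of (Kinter, Vinter)

def reduce_fn(key, values):
    # key: a word (Kinter), values: all counts emitted for it (list of Vinter)
    return [(key, sum(values))]                    # list of (Kout, Vout)

The framework groups the intermediate (word, 1) pairs by key and calls the reduce function once per distinct word, as the next slide illustrates.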
Word Count Execution
Input → Map → Shuffle & Sort → Reduce → Output

Input lines (one per mapper): “the quick brown fox”, “the fox ate the mouse”, “how now brown cow”
Map: each mapper emits (word, 1) for every word in its line
Shuffle & Sort: pairs are grouped by key and routed to reducers
Reduce: one reducer outputs brown 2, fox 2, how 1, now 1, the 3; the other outputs ate 1, cow 1, mouse 1, quick 1
MapReduce Execution Details
• Single master controls job execution on multiple slaves

• Mappers preferentially placed on same node or same rack as their input block
– Minimizes network usage

• Mappers save outputs to local disk before serving them to reducers
– Allows recovery if a reducer crashes
– Allows having more reducers than nodes
An Optimization: The Combiner
• A combiner is a local aggregation function for
repeated keys produced by same map
• Works for associative functions like sum, count,
max

• Decreases size of intermediate data

• Example: map-side aggregation for Word Count:


def combiner(key, values):
output(key, sum(values))
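
• As a quick sanity check, here is a local Python sketch (not Hadoop code; the sample map outputs are made up) showing why a sum combiner is safe: pre-summing repeated keys inside each mapper gives the same final counts as shipping every individual (word, 1) pair:

from collections import Counter

mapper_outputs = [
    [("the", 1), ("quick", 1), ("brown", 1), ("fox", 1)],
    [("the", 1), ("fox", 1), ("ate", 1), ("the", 1), ("mouse", 1)],
]

# Without a combiner: the reducer sums every individual pair.
no_combine = Counter()
for out in mapper_outputs:
    for word, n in out:
        no_combine[word] += n

# With a combiner: each mapper first sums its own repeated keys,
# then the reducer sums the (fewer) partial results.
with_combine = Counter()
for out in mapper_outputs:
    partial = Counter()
    for word, n in out:
        partial[word] += n        # combiner: local sum per mapper
    for word, n in partial.items():
        with_combine[word] += n   # reducer: sum of partial sums

assert no_combine == with_combine  # same result, less intermediate data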
Word Count with Combiner
Input → Map & Combine → Shuffle & Sort → Reduce → Output

Same input as before, but each mapper combines its own repeated keys before the shuffle: the mapper for “the fox ate the mouse” emits (the, 2) instead of two (the, 1) records.
The final output is unchanged (brown 2, fox 2, how 1, now 1, the 3, ate 1, cow 1, mouse 1, quick 1), but less intermediate data is shuffled.
Fault Tolerance in MapReduce
1. If a task crashes:
– Retry on another node
• OK for a map because it has no dependencies
• OK for reduce because map outputs are on disk
– If the same task fails repeatedly, fail the job or ignore that input
block (user-controlled)

Note: For these fault tolerance features to work,
your map and reduce tasks must be side-effect-free
Fault Tolerance in MapReduce
2. If a node crashes:
– Re-launch its current tasks on other nodes
– Re-run any maps the node previously ran
• Necessary because their output files were lost along with the
crashed node
Fault Tolerance in MapReduce
3. If a task is going slowly (straggler):
– Launch second copy of task on another node (“speculative
execution”)
– Take the output of whichever copy finishes first, and kill the
other

Surprisingly important in large clusters


– Stragglers occur frequently due to failing hardware, software
bugs, misconfiguration, etc
– Single straggler may noticeably slow down a job
Takeaways
• By providing a data-parallel programming model, MapReduce can control job execution in useful ways:
– Automatic division of job into tasks
– Automatic placement of computation near data
– Automatic load balancing
– Recovery from failures & stragglers

• User focuses on application, not on complexities of distributed computing
Outline
• MapReduce architecture
• Example applications
• Getting started with Hadoop
• Higher-level languages over Hadoop: Pig and
Hive
• Amazon Elastic MapReduce
1. Search
• Input: (lineNumber, line) records
• Output: lines matching a given pattern

• Map:
if(line matches pattern):
output(line)

• Reduce: identity function


– Alternative: no reducer (map-only job)
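
• A rough sketch of the Search job as a map-only Hadoop Streaming script; the pattern ("error") and the script name are illustrative, not from the slides:

# grep_mapper.py: emit only the lines that match a pattern (no reducer needed)
import re
import sys

PATTERN = re.compile(r"error")   # assumed example pattern

for line in sys.stdin:
    if PATTERN.search(line):
        sys.stdout.write(line)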
2. Sort
• Input: (key, value) records
• Output: same records, sorted by key

• Map: identity function
• Reduce: identity function

• Trick: pick a partitioning function h such that
k1 < k2 => h(k1) < h(k2), so each reducer receives a
contiguous key range and the concatenated reducer
outputs are globally sorted (see the sketch below)

(Diagram: mappers emit animal names; keys in [A-M] go to one reducer,
which outputs aardvark, ant, bee, cow, elephant; keys in [N-Z] go to
another, which outputs pig, sheep, yak, zebra)
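
• A minimal sketch of the partitioning trick, assuming lowercase word keys and two reducers as in the diagram; h sends whole key ranges to each reducer, so concatenating the reducer outputs yields one globally sorted file:

NUM_REDUCERS = 2

def h(key):
    # Range partitioner: keys starting with a-m go to reducer 0, n-z to reducer 1.
    return 0 if key[0].lower() <= 'm' else 1

# Reducer 0 sorts aardvark..elephant, reducer 1 sorts pig..zebra.
assert h("ant") == 0 and h("zebra") == 1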
3. Inverted Index
• Input: (filename, text) records
• Output: list of files containing each word

• Map:
foreach word in text.split():
output(word, filename)

• Combine: uniquify filenames for each word

• Reduce:
def reduce(word, filenames):
output(word, sort(filenames))
Inverted Index Example
hamlet.txt (“to be or not to be”) → (to, hamlet.txt), (be, hamlet.txt), (or, hamlet.txt), (not, hamlet.txt), …
12th.txt (“be not afraid of greatness”) → (be, 12th.txt), (not, 12th.txt), (afraid, 12th.txt), (of, 12th.txt), (greatness, 12th.txt)

Reduce output:
afraid, (12th.txt)
be, (12th.txt, hamlet.txt)
greatness, (12th.txt)
not, (12th.txt, hamlet.txt)
of, (12th.txt)
or, (hamlet.txt)
to, (hamlet.txt)
4. Most Popular Words
• Input: (filename, text) records
• Output: top 100 words occurring in the most files

• Two-stage solution:
– Job 1:
• Create inverted index, giving (word, list(file)) records
– Job 2:
• Map each (word, list(file)) to (count, word)
• Sort these records by count as in sort job

• Optimizations:
– Map to (word, 1) instead of (word, file) in Job 1
– Count files in job 1’s reducer rather than job 2’s mapper
– Estimate count distribution in advance and drop rare words
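
• A rough Python sketch of Job 2's mapper and the final top-100 selection, assuming Job 1 produced (word, list(file)) records; the names and the heapq-based selection are illustrative only:

import heapq

def job2_map(word, filenames):
    # Invert to (count, word) so the sort/selection can run on counts.
    return [(len(set(filenames)), word)]

def top_k(count_word_pairs, k=100):
    # Equivalent of sorting by count and keeping the top k records.
    return heapq.nlargest(k, count_word_pairs)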
5. Numerical Integration
• Input: (start, end) records for sub-ranges to integrate
– Easy using custom InputFormat
• Output: integral of f(x) dx over entire range

• Map:
def map(start, end):
sum = 0
for(x = start; x < end; x += step):
sum += f(x) * step
output(“”, sum)
• Reduce:
def reduce(key, values):
output(key, sum(values))
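
• The same pseudocode can be run locally as ordinary Python to check the idea; here f(x) = x*x and the step size are assumptions, since the slide leaves them unspecified:

STEP = 0.001

def f(x):
    return x * x          # assumed integrand

def map_fn(start, end):
    total, x = 0.0, start
    while x < end:
        total += f(x) * STEP
        x += STEP
    return [("", total)]  # single empty key so one reducer sums everything

def reduce_fn(key, values):
    return [(key, sum(values))]

# Two sub-range "records" covering [0, 1); the exact integral of x^2 is 1/3.
parts = map_fn(0.0, 0.5) + map_fn(0.5, 1.0)
print(reduce_fn("", [v for _, v in parts]))   # approximately 0.333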
Outline
• MapReduce architecture
• Example applications
• Getting started with Hadoop
• Higher-level languages over Hadoop: Pig and
Hive
• Amazon Elastic MapReduce
Getting Started with Hadoop
• Download from hadoop.apache.org
• To install locally, unzip and set JAVA_HOME
• Details: hadoop.apache.org/core/docs/current/quickstart.html

• Three ways to write jobs:


– Java API
– Hadoop Streaming (for Python, Perl, etc)
– Pipes API (C++)
Word Count in Java
public class MapClass extends MapReduceBase
implements Mapper<LongWritable, Text, Text, IntWritable> {

private final static IntWritable ONE = new IntWritable(1);

public void map(LongWritable key, Text value,


OutputCollector<Text, IntWritable> out,
Reporter reporter) throws IOException {
String line = value.toString();
StringTokenizer itr = new StringTokenizer(line);
while (itr.hasMoreTokens()) {
out.collect(new Text(itr.nextToken()), ONE);
}
}
}
Word Count in Java
public class ReduceClass extends MapReduceBase
implements Reducer<Text, IntWritable, Text, IntWritable> {

public void reduce(Text key, Iterator<IntWritable> values,


OutputCollector<Text, IntWritable> out,
Reporter reporter) throws IOException {
int sum = 0;
while (values.hasNext()) {
sum += values.next().get();
}
out.collect(key, new IntWritable(sum));
}
}
Word Count in Java
public static void main(String[] args) throws Exception {
JobConf conf = new JobConf(WordCount.class);
conf.setJobName("wordcount");

conf.setMapperClass(MapClass.class);
conf.setCombinerClass(ReduceClass.class);
conf.setReducerClass(ReduceClass.class);

FileInputFormat.setInputPaths(conf, args[0]);
FileOutputFormat.setOutputPath(conf, new Path(args[1]));

conf.setOutputKeyClass(Text.class); // out keys are words (strings)


conf.setOutputValueClass(IntWritable.class); // values are counts

JobClient.runJob(conf);
}
Word Count in Python with Hadoop Streaming

Mapper.py:
import sys
for line in sys.stdin:
    for word in line.split():
        print(word.lower() + "\t" + "1")

Reducer.py:
import sys
counts = {}
for line in sys.stdin:
    word, count = line.split("\t")
    counts[word] = counts.get(word, 0) + int(count)
for word, count in counts.items():
    print(word + "\t" + str(count))
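
These two scripts are typically launched with the Hadoop Streaming jar, using its -input and -output options for the HDFS paths, -mapper and -reducer for the scripts, and -file to ship them to the cluster; the exact jar location and invocation are covered in the streaming documentation linked from the quickstart page above.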
Outline
• MapReduce architecture
• Example applications
• Getting started with Hadoop
• Higher-level languages over Hadoop: Pig and
Hive
• Amazon Elastic MapReduce
Motivation
• Many parallel algorithms can be expressed by a series of MapReduce jobs

• But MapReduce is fairly low-level: must think about keys, values, partitioning, etc

• Can we capture common “job building blocks”?
Pig
• Started at Yahoo! Research
• Runs about 30% of Yahoo!’s jobs
• Features:
– Expresses sequences of MapReduce jobs
– Data model: nested “bags” of items
– Provides relational (SQL) operators (JOIN, GROUP
BY, etc)
– Easy to plug in Java functions
– Pig Pen development environment for Eclipse
An Example Problem
Suppose you have user data in one file and page view data in another, and you need to find the top 5 most visited pages by users aged 18-25.

Pipeline: Load Users, Load Pages → Filter by age → Join on name → Group on url → Count clicks → Order by clicks → Take top 5

Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt


In MapReduce
(The original slide reproduces the equivalent program written directly against the MapReduce API: well over a hundred lines of Java defining LoadPages, LoadAndFilterUsers, Join, LoadJoined, ReduceUrls, LoadClicks, and LimitClicks, wired into five chained jobs with JobControl. It is shown only to contrast its size with the Pig Latin version on the next slide.)

Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt


In Pig Latin
Users = load ‘users’ as (name, age);
Filtered = filter Users by
age >= 18 and age <= 25;
Pages = load ‘pages’ as (user, url);
Joined = join Filtered by name, Pages by user;
Grouped = group Joined by url;
Summed = foreach Grouped generate group,
count(Joined) as clicks;
Sorted = order Summed by clicks desc;
Top5 = limit Sorted 5;

store Top5 into ‘top5sites’;

Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt


Ease of Translation
Notice how naturally the components of the job translate into Pig Latin:

Load Users → Users = load …
Load Pages → Pages = load …
Filter by age → Filtered = filter …
Join on name → Joined = join …
Group on url → Grouped = group …
Count clicks → Summed = … count() …
Order by clicks → Sorted = order …
Take top 5 → Top5 = limit …

Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt


Ease of Translation
Notice how naturally the components of the job translate into Pig Latin, and how Pig groups them into MapReduce jobs:

Job 1: Load Users, Load Pages, Filter by age, Join on name (Users = load …, Pages = load …, Filtered = filter …, Joined = join …)
Job 2: Group on url, Count clicks (Grouped = group …, Summed = … count() …)
Job 3: Order by clicks, Take top 5 (Sorted = order …, Top5 = limit …)

Example from http://wiki.apache.org/pig-data/attachments/PigTalksPapers/attachments/ApacheConEurope09.ppt


Hive
• Developed at Facebook
• Used for majority of Facebook jobs
• “Relational database” built on Hadoop
– Maintains list of table schemas
– SQL-like query language (HQL)
– Can call Hadoop Streaming scripts from HQL
– Supports table partitioning, clustering, complex
data types, some optimizations

Sample Hive Queries
Find top 5 pages visited by users aged 18-25:
SELECT p.url, COUNT(1) as clicks
FROM users u JOIN page_views p ON (u.name = p.user)
WHERE u.age >= 18 AND u.age <= 25
GROUP BY p.url
ORDER BY clicks DESC
LIMIT 5;

• Filter page views through Python script:


SELECT TRANSFORM(p.user, p.date)
USING 'map_script.py'
AS dt, uid CLUSTER BY dt
FROM page_views p;
Outline
• MapReduce architecture
• Example applications
• Getting started with Hadoop
• Higher-level languages over Hadoop: Pig and
Hive
• Amazon Elastic MapReduce
Amazon Elastic MapReduce
• Provides a web-based interface and command-line tools for
running Hadoop jobs on Amazon EC2
• Data stored in Amazon S3
• Monitors job and shuts down machines after use
• Small extra charge on top of EC2 pricing

• If you want more control over how your Hadoop cluster runs, you
can launch a Hadoop cluster on EC2 manually using the
scripts in src/contrib/ec2
Elastic MapReduce Workflow
Conclusions
• MapReduce programming model hides the complexity of work distribution and fault tolerance

• Principal design philosophies:
– Make it scalable, so you can throw hardware at problems
– Make it cheap, lowering hardware, programming and admin costs

• MapReduce is not suitable for all problems, but when it works, it may save you quite a bit of time

• Cloud computing makes it straightforward to start using Hadoop (or other parallel software) at scale

Resources
• Hadoop: http://hadoop.apache.org/core/
• Pig: http://hadoop.apache.org/pig
• Hive: http://hadoop.apache.org/hive
• Video tutorials: http://www.cloudera.com/hadoop-training

• Amazon Web Services: http://aws.amazon.com/
• Amazon Elastic MapReduce guide:
http://docs.amazonwebservices.com/ElasticMapReduce/latest/GettingStartedGuide/

• My email: [email protected]
