
Introduction to

Hadoop & MapReduce


Dimitris Kotzinos

Content obtained from many sources, notably Jimmy Lin’s course on MapReduce.
Our Plan Today
1. Background: Cloud and Distributed Computing
2. Foundations of MapReduce
i. Back to functional programming
3. MapReduce Concretely
4. Programming MapReduce with Hadoop
The datacenter is the computer
“Big Ideas”
• Scale “out”, not “up”
– Limits of SMP and large shared-memory machines
• Move processing to the data
– Clusters have limited bandwidth
• Process data sequentially, avoid random access
– Seeks are expensive, disk throughput is reasonable
• Seamless scalability
– From the mythical man-month to the tradable machine-hour
Source: NY Times (6/14/2006)
Source: Harper’s (Feb, 2008)
Source: Bonneville Power
Building Blocks

Source: Barroso and Urs Hölzle (2009)


Storage Hierarchy

Funny story about sense of scale…


Source: Barroso and Urs Hölzle (2009)
Anatomy of a Datacenter

Source: Barroso and Urs Hölzle (2009)


Why commodity machines?

Source: Barroso and Urs Hölzle (2009); performance figures from late 2007
What about communication?
• Nodes need to talk to each other!
– SMP: latencies ~100 ns
– LAN: latencies ~100 µs
• Scaling “up” vs. scaling “out”
– Smaller cluster of SMP machines vs. larger cluster of commodity machines
– E.g., 8 128-core machines vs. 128 8-core machines
– Note: no single SMP machine is big enough
• Let’s model communication overhead…

Source: analysis on this and subsequent slides from Barroso and Urs Hölzle (2009)
Modeling Communication Costs
• Simple execution cost model:
– Total cost = cost of computation + cost to access global data
– Fraction of local access inversely proportional to size of cluster
– n nodes (ignore cores for now)

1 ms + f × [100 ns × (1/n) + 100 µs × (1 − 1/n)]

– Light communication: f = 1
– Medium communication: f = 10
– Heavy communication: f = 100

• What are the costs in parallelization?
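To get a feel for these numbers before the parallelization cost figures, the model can be evaluated directly. A minimal sketch (not part of the original slides; the class name and the chosen values of n and f are illustrative) that plugs a few cluster sizes and communication factors into the formula above:

// Sketch: total cost = 1 ms + f * [100 ns * (1/n) + 100 us * (1 - 1/n)]
public class CommCostModel {

    static double totalCostMs(int n, double f) {
        double computeMs = 1.0;          // 1 ms of computation
        double localNs   = 100.0;        // local (SMP-style) access: ~100 ns
        double remoteNs  = 100000.0;     // remote (LAN) access: ~100 us
        double accessNs  = localNs * (1.0 / n) + remoteNs * (1.0 - 1.0 / n);
        return computeMs + f * accessNs / 1.0e6;   // ns -> ms
    }

    public static void main(String[] args) {
        int[] nodes = {1, 8, 128, 1024};
        double[] comm = {1, 10, 100};    // light, medium, heavy communication
        for (double f : comm)
            for (int n : nodes)
                System.out.printf("f=%3.0f  n=%4d  cost=%8.3f ms%n", f, n, totalCostMs(n, f));
    }
}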


Cost of Parallelization
Advantages of scaling “up”

So why not?
Seeks vs. Scans
• Consider a 1 TB database with 100-byte records
– We want to update 1 percent of the records
• Scenario 1: random access
– Each update takes ~30 ms (seek, read, write)
– 1% updates = ~35 days
• Scenario 2: rewrite all records
– Assume 100 MB/s throughput
– Time = 5.6 hours(!)
• Lesson: avoid random seeks!

Source: Ted Dunning, on the Hadoop mailing list
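The arithmetic behind these two scenarios is easy to reproduce; a minimal sketch (not from the slides, class name made up) using the figures quoted above:

// Sketch: random updates vs. a full sequential rewrite of a
// 1 TB database of 100-byte records (numbers from the slide above).
public class SeeksVsScans {
    public static void main(String[] args) {
        double totalBytes = 1e12;                        // 1 TB
        double records    = totalBytes / 100;            // 10^10 records of 100 bytes

        // Scenario 1: update 1% of the records with random access, ~30 ms each
        double randomSeconds = records * 0.01 * 0.030;
        System.out.printf("Random updates: %.1f days%n", randomSeconds / 86400);   // ~35 days

        // Scenario 2: read and rewrite the whole database at 100 MB/s
        double scanSeconds = 2 * totalBytes / 100e6;     // one read pass + one write pass
        System.out.printf("Full rewrite:   %.1f hours%n", scanSeconds / 3600);     // ~5.6 hours
    }
}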


Justifying the “Big Ideas”
• Scale “out”, not “up”
– Limits of SMP and large shared-memory machines
• Move processing to the data
– Clusters have limited bandwidth
• Process data sequentially, avoid random access
– Seeks are expensive, disk throughput is reasonable
• Seamless scalability
– From the mythical man-month to the tradable machine-hour
Numbers Everyone Should Know*

L1 cache reference 0.5 ns

Branch mispredict 5 ns
L2 cache reference 7 ns
Mutex lock/unlock 25 ns
Main memory reference 100 ns
Send 2K bytes over 1 Gbps network 20,000 ns
Read 1 MB sequentially from memory 250,000 ns
Round trip within same datacenter 500,000 ns
Disk seek 10,000,000 ns
Read 1 MB sequentially from disk 20,000,000 ns
Send packet CA → Netherlands → CA 150,000,000 ns

* According to Jeff Dean (LADIS 2009 keynote)


Hadoop & Map Reduce
Foundations
What Is Hadoop?
• Distributed computing framework
– For clusters of computers
– Thousands of compute nodes
– Petabytes of data
• Open source, Java
• Google’s MapReduce inspired Yahoo’s Hadoop.
• Now an Apache project
Map and Reduce
• The ideas of Map and Reduce are 40+ years old
– Present in all functional programming languages
– See, e.g., APL, Lisp and ML
• Alternate names for Map: Apply-All
• Higher-order functions
– take function definitions as arguments, or
– return a function as output
• Map and Reduce are higher-order functions.
Map: A Higher Order Function
• F(x: int) returns r: int
• Let V be an array of integers.
• W = map(F, V)
– W[i] = F(V[i]) for all i
– i.e., apply F to every element of V
Map Examples in Haskell
• map (+1) [1,2,3,4,5]
  == [2, 3, 4, 5, 6]
• map toLower "abcDEFG12!@#"
  == "abcdefg12!@#"
• map (`mod` 3) [1..10]
  == [1, 2, 0, 1, 2, 0, 1, 2, 0, 1]
reduce: A Higher Order Function
• reduce is also known as fold, accumulate, compress or inject
• Reduce/fold takes in a function and folds it in between the elements of a list.
Fold-Left in Haskell
• Definition
– foldl f z [] = z
– foldl f z (x:xs) = foldl f (f z x) xs
• Examples
– foldl (+) 0 [1..5] == 15
– foldl (+) 10 [1..5] == 25
– foldl (div) 7 [34,56,12,4,23] == 0
Fold-Right in Haskell
• Definition
– foldr f z [] = z
– foldr f z (x:xs) = f x (foldr f z xs)
• Example
– foldr (div) 7 [34,56,12,4,23] == 8
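Since Hadoop itself is written in Java, the same higher-order functions can also be illustrated with Java streams; a small sketch (not from the slides) mirroring the Haskell examples above:

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class MapFoldDemo {
    public static void main(String[] args) {
        List<Integer> v = Arrays.asList(1, 2, 3, 4, 5);

        // map (+1) [1,2,3,4,5] == [2,3,4,5,6]
        List<Integer> mapped = v.stream().map(x -> x + 1).collect(Collectors.toList());
        System.out.println(mapped);

        // foldl (+) 0 [1..5] == 15  (reduce with identity 0 and (+))
        System.out.println(v.stream().reduce(0, Integer::sum));

        // foldl (+) 10 [1..5] == 25
        System.out.println(v.stream().reduce(10, Integer::sum));
    }
}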
Examples of
Map Reduce Computation
Word Count Example
• Read text files and count how often words occur.
– The input is text files
– The output is a text file: each line holds word, tab, count
• Map: produce pairs of (word, count = 1) from the files
• Reduce: for each word, sum up the counts (i.e., fold).
Grep Example
• Search input files for a given pattern
• Map: emits a line if the pattern is matched
• Reduce: copies results to output
Inverted Index Example
(this was Google's original use case)

• Generate an inverted index of words from a given set of files
• Map: parses a document and emits <word, docId> pairs
• Reduce: takes all pairs for a given word, sorts the docId values, and emits a <word, list(docId)> pair
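A possible sketch of this job against Hadoop's classic org.apache.hadoop.mapred API (not taken from the slides). It assumes the document id arrives as the map input key, e.g., via KeyValueTextInputFormat or a custom InputFormat, and the class names are illustrative:

import java.io.IOException;
import java.util.Iterator;
import java.util.TreeSet;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;

public class InvertedIndex {

  public static class IndexMapper extends MapReduceBase
      implements Mapper<Text, Text, Text, Text> {
    public void map(Text docId, Text doc, OutputCollector<Text, Text> out,
                    Reporter reporter) throws IOException {
      for (String term : doc.toString().split("\\s+"))
        if (!term.isEmpty())
          out.collect(new Text(term), docId);           // emit <word, docId>
    }
  }

  public static class IndexReducer extends MapReduceBase
      implements Reducer<Text, Text, Text, Text> {
    public void reduce(Text term, Iterator<Text> docIds,
                       OutputCollector<Text, Text> out, Reporter reporter)
        throws IOException {
      TreeSet<String> sorted = new TreeSet<String>();   // sorts and de-duplicates the docIds
      while (docIds.hasNext())
        sorted.add(docIds.next().toString());
      out.collect(term, new Text(sorted.toString()));   // emit <word, list(docId)>
    }
  }
}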
MapReduce principle
applied to BigData
Adapt MapReduce for BigData
1. Always map/reduce over lists of key/value pairs
2. Map/Reduce execute in parallel on a cluster
3. Fault tolerance is built into the framework
4. Specific system/implementation aspects matter
– How is data partitioned as input to map
– How is data serialized between processes
5. Cloud-specific improvements:
– Handle elasticity
– Take cluster topology (e.g., node proximity, node size) into account
Execution on Clusters
1. Input files split (M splits)
2. Assign Master & Workers
3. Map tasks
4. Writing intermediate data to disk (R
regions)
5. Intermediate data read & sort
6. Reduce tasks
7. Return
MapReduce in Hadoop (1)
MapReduce in Hadoop (2)
MapReduce in Hadoop (3)
Data Flow in a MapReduce Program in Hadoop
InputFormat → 1:many
Map function
Partitioner
Sorting & Merging
Combiner
Shuffling
Merging
Reduce function
OutputFormat
Map/Reduce Cluster Implementation

[Diagram: input files are divided into M splits; M map tasks write intermediate files; each intermediate file is divided into R partitions; R reduce tasks each consume one partition and write one output file.]

• Several map or reduce tasks can run on a single computer
• Each intermediate file is divided into R partitions, by the partitioning function
• Each reduce task corresponds to one partition
Execution
Automatic Parallel Execution in MapReduce (Google)

Handles failures automatically: e.g., restarts tasks if a node fails, and runs multiple copies of the same task to avoid a slow task slowing down the whole job.
Fault Recovery
ñWorkers are pinged by master periodically
-Non-responsive workers are marked as failed
-All tasks in-progress or completed by failed
worker become eligible for rescheduling
ñMaster could periodically checkpoint
-Current implementations abort on master
failure
Component Overview
ñhttp://hadoop.apache.org/
ñOpen source Java
ñScale
- Thousands of nodes and
- petabytes of data
ñWe will use v.1 and v.2
Hadoop
ñMapReduce and Distributed File System
framework for large commodity clusters
ñMaster/Slave relationship
-JobTracker handles all scheduling & data flow
between TaskTrackers
-TaskTracker handles all worker tasks on a
node
-Individual worker task runs map or reduce
operation
ñIntegrates with HDFS for data locality
Hadoop Supported File Systems
ñHDFS: Hadoop's own file system.
ñAmazon S3 file system.
- Targeted at clusters hosted on the Amazon Elastic
Compute Cloud server-on-demand infrastructure
- Not rack-aware
ñCloudStore
- previously Kosmos Distributed File System
- like HDFS, this is rack-aware.
ñFTP Filesystem
- stored on remote FTP servers.
ñRead-only HTTP and HTTPS file systems.
"Rack awareness"
ñoptimization which takes into account the
geographic clustering of servers
ñnetwork traffic between servers in different
geographic clusters is minimized.
Goals of HDFS
Very Large Distributed File System
– 10K nodes, 100 million files, 10 PB
Assumes Commodity Hardware
– Files are replicated to handle hardware failure
– Detects failures and recovers from them
Optimized for Batch Processing
– Data locations exposed so that computations can
move to where data resides
– Provides very high aggregate bandwidth
User Space, runs on heterogeneous OS
HDFS: Hadoop Distr File System
ñDesigned to scale to petabytes of storage, and
run on top of the file systems of the underlying
OS.
ñMaster (“NameNode”) handles replication,
deletion, creation
ñSlave (“DataNode”) handles data retrieval
ñFiles stored in many blocks
- Each block has a block Id
- Block Id associated with several nodes hostname:port
(depending on level of replication)
HDFS Architecture
[Diagram: a Client talks to the NameNode (with a Secondary NameNode alongside); the DataNodes report cluster membership to the NameNode and serve block data to the Client.]

NameNode: maps a file to a file-id and a list of blocks and their DataNodes
DataNode: maps a block-id to a physical location on disk
SecondaryNameNode: periodic merge of the transaction log
Distributed File System
Single Namespace for entire cluster
Data Coherency
– Write-once-read-many access model
– Client can only append to existing files
Files are broken up into blocks
– Typically 128 MB block size
– Each block replicated on multiple DataNodes
Intelligent Client
– Client can find location of blocks
– Client accesses data directly from DataNode
NameNode Metadata
Meta-data in Memory
– The entire metadata is in main memory
– No demand paging of meta-data
Types of Metadata
– List of files
– List of Blocks for each file
– List of DataNodes for each block
– File attributes, e.g., creation time, replication factor
A Transaction Log
– Records file creations, file deletions, etc.
DataNode
A Block Server
– Stores data in the local file system (e.g. ext3)
– Stores meta-data of a block (e.g. CRC)
– Serves data and meta-data to Clients
Block Report
– Periodically sends a report of all existing blocks to the
NameNode
Facilitates Pipelining of Data
– Forwards data to other specified DataNodes
Block Placement
Current Strategy
-- One replica on local node
-- Second replica on a remote rack
-- Third replica on same remote rack
-- Additional replicas are randomly placed
Clients read from nearest replica
Would like to make this policy pluggable
Data Correctness
Use Checksums to validate data
– Use CRC32
File Creation
– Client computes a checksum per 512 bytes
– DataNode stores the checksum
File access
– Client retrieves the data and checksum from
DataNode
– If Validation fails, Client tries other replicas
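A rough illustration (not HDFS's actual implementation) of what "one CRC32 checksum per 512 bytes" means, using the JDK's CRC32 class; names are made up:

import java.util.zip.CRC32;

public class ChunkChecksums {
    // One CRC32 value per 512-byte chunk of the data; a reader would
    // recompute these on retrieval and compare them to what was stored.
    static long[] checksums(byte[] data, int bytesPerChecksum) {
        int chunks = (data.length + bytesPerChecksum - 1) / bytesPerChecksum;
        long[] sums = new long[chunks];
        CRC32 crc = new CRC32();
        for (int i = 0; i < chunks; i++) {
            int off = i * bytesPerChecksum;
            int len = Math.min(bytesPerChecksum, data.length - off);
            crc.reset();
            crc.update(data, off, len);
            sums[i] = crc.getValue();
        }
        return sums;
    }
}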
NameNode Failure
A single point of failure
Transaction Log stored in multiple directories
– A directory on the local file system
– A directory on a remote file system (NFS/CIFS)
Need to develop a real HA solution
Data Pipelining
Client retrieves a list of DataNodes on which to place
replicas of a block
Client writes block to the first DataNode
The first DataNode forwards the data to the next
DataNode in the Pipeline
When all replicas are written, the Client moves on to
write the next block in file
Rebalancer
Goal: % disk full on DataNodes should be similar
Usually run when new DataNodes are added
Cluster is online when Rebalancer is active
Rebalancer is throttled to avoid network congestion
Command line tool
HDFS Limitations
ñ“Almost” GFS (Google FS)
- No file update options (record append, etc); all
files are write-once
ñDoes not implement demand replication
ñDesigned for streaming
- Random seeks devastate performance
NameNode
ñ“Head” interface to HDFS cluster
ñRecords all global metadata
Secondary NameNode
ñNot a failover NameNode!
ñRecords metadata snapshots from “real”
NameNode
- Can merge update logs in flight
- Can upload snapshot back to primary
NameNode Death
ñNo new requests can be served while
NameNode is down
- Secondary will not fail over as new primary

ñSo why have a secondary at all?


NameNode Death, cont’d
ñIf NameNode dies from software glitch,
just reboot
ñBut if machine is hosed, metadata for
cluster is irretrievable!
Bringing the Cluster Back
• If the original NameNode can be restored, the secondary can re-establish the most current metadata snapshot
• If not, create a new NameNode, use the secondary to copy metadata to the new primary, and restart the whole cluster :-(
• Is there another way…?
Keeping the Cluster Up
ñProblem: DataNodes “fix” the address of
the NameNode in memory, can’t switch in
flight
ñSolution: Bring new NameNode up, but
use DNS to make cluster believe it’s the
original one
Further Reliability Measures
ñNamenode can output multiple copies of
metadata files to different directories
- Including an NFS mounted one
- May degrade performance; watch for NFS
locks
Hadoop v. ‘MapReduce’
ñMapReduce is also the name of a
framework developed by Google
ñHadoop was initially developed by Yahoo
and now part of the Apache group.
ñHadoop was inspired by Google's
MapReduce and Google File System
(GFS) papers.
MapReduce v. Hadoop
                           MapReduce      Hadoop
Organization               Google         Yahoo/Apache
Implementation             C++            Java
Distributed file system    GFS            HDFS
Database                   Bigtable       HBase
Distributed lock manager   Chubby         ZooKeeper


Mechanics of Programming
Hadoop Jobs
Job Launch: Client
ñClient program creates a JobConf
- Identify classes implementing Mapper and
Reducer interfaces
ñsetMapperClass(), setReducerClass()
- Specify inputs, outputs
ñsetInputPath(), setOutputPath()
- Optionally, other options too:
ñsetNumReduceTasks(), setOutputFormat()…
Job Launch: JobClient
ñPass JobConf to
- JobClient.runJob() // blocks
- JobClient.submitJob() // does not block
ñJobClient:
- Determines proper division of input into
InputSplits
- Sends job data to master JobTracker server
Job Launch: JobTracker
ñJobTracker:
- Inserts jar and JobConf (serialized to XML) in
shared location
- Posts a JobInProgress to its run queue
Job Launch: TaskTracker
ñTaskTrackers running on slave nodes
periodically query JobTracker for work
ñRetrieve job-specific jar and config
ñLaunch task in separate instance of Java
- main() is provided by Hadoop
Job Launch: Task
ñTaskTracker.Child.main():
- Sets up the child TaskInProgress attempt
- Reads XML configuration
- Connects back to necessary MapReduce
components via RPC
- Uses TaskRunner to launch user process
Job Launch: TaskRunner
ñTaskRunner, MapTaskRunner,
MapRunner work in a daisy-chain to
launch Mapper
- Task knows ahead of time which InputSplits it
should be mapping
- Calls Mapper once for each record retrieved
from the InputSplit
ñRunning the Reducer is much the same
Creating the Mapper
ñYour instance of Mapper should extend
MapReduceBase
ñOne instance of your Mapper is initialized
by the MapTaskRunner for a
TaskInProgress
- Exists in separate process from all other
instances of Mapper – no data sharing!
Mapper
void map (
WritableComparable key,
Writable value,
OutputCollector output,
Reporter reporter
)
What is Writable?
• Hadoop defines its own “box” classes for strings (Text), integers (IntWritable), etc.
• All values are instances of Writable
• All keys are instances of WritableComparable
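Beyond the built-in box classes, you can define your own types by implementing Writable (or WritableComparable for keys). A minimal sketch, not from the slides; the WordYear type is hypothetical:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.WritableComparable;

// Hypothetical key type: a (word, year) pair usable as a MapReduce key.
public class WordYear implements WritableComparable<WordYear> {
    private String word = "";
    private int year;

    public void write(DataOutput out) throws IOException {    // serialize
        out.writeUTF(word);
        out.writeInt(year);
    }

    public void readFields(DataInput in) throws IOException { // deserialize
        word = in.readUTF();
        year = in.readInt();
    }

    public int compareTo(WordYear other) {                    // needed to sort keys
        int c = word.compareTo(other.word);
        return (c != 0) ? c : Integer.compare(year, other.year);
    }
}

A real key type would also override hashCode() and equals() so that partitioning and grouping behave sensibly.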
Writing For Cache Coherency
while (more input exists) {
myIntermediate = new intermediate(input);
myIntermediate.process();
export outputs;
}
Getting Data To The Mapper
[Diagram: each input file is divided by the InputFormat into InputSplits; a RecordReader reads each split and feeds (k, v) records to a Mapper, which emits intermediate pairs.]


Reading Data
ñData sets are specified by InputFormats
- Defines input data (e.g., a directory)
- Identifies partitions of the data that form an
InputSplit
- Factory for RecordReader objects to extract
(k, v) records from the input source
FileInputFormat and Friends
ñTextInputFormat
- Treats each ‘\n’-terminated line of a file as a value
ñKeyValueTextInputFormat
- Maps ‘\n’- terminated text lines of “k SEP v”
ñSequenceFileInputFormat
- Binary file of (k, v) pairs with some add’l metadata
ñSequenceFileAsTextInputFormat
- Same, but maps (k.toString(), v.toString())
Filtering File Inputs
• FileInputFormat will read all files out of a specified directory and send them to the mapper
• Delegates filtering this file list to a method subclasses may override
– e.g., create your own “xyzFileInputFormat” to read *.xyz from the directory list
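One way to express such a filter is a PathFilter that accepts only *.xyz files; a sketch, not from the slides. How the filter is hooked in (e.g., FileInputFormat.setInputPathFilter(), or an overridden listStatus() in your own "XyzFileInputFormat") varies by Hadoop version, so treat the registration route mentioned in the comment as an assumption:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;

// Accept only files whose names end in ".xyz". Register it with
// FileInputFormat.setInputPathFilter(conf, XyzPathFilter.class) where available,
// or apply it from listStatus() in a custom FileInputFormat subclass.
public class XyzPathFilter implements PathFilter {
    public boolean accept(Path path) {
        return path.getName().endsWith(".xyz");
    }
}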
Record Readers
ñEach InputFormat provides its own
RecordReader implementation
- Provides (unused?) capability multiplexing
ñLineRecordReader
- Reads a line from a text file
ñKeyValueRecordReader
- Used by KeyValueTextInputFormat
Input Split Size
ñFileInputFormat will divide large files into
chunks
- Exact size controlled by mapred.min.split.size
ñRecordReaders receive file, offset, and
length of chunk
ñCustom InputFormat implementations may
override split size
- e.g., “NeverChunkFile”
Sending Data To Reducers
ñMap function receives OutputCollector
object
- OutputCollector.collect() takes (k, v) elements
ñAny (WritableComparable, Writable) can
be used
WritableComparator
ñCompares WritableComparable data
- Will call WritableComparable.compare()
- Can provide fast path for serialized data
ñJobConf.setOutputValueGroupingComparator()
Sending Data To The Client
ñReporter object sent to Mapper allows
simple asynchronous feedback
- incrCounter(Enum key, long amount)
- setStatus(String msg)
ñAllows self-identification of input
- InputSplit getInputSplit()
Partition And Shuffle

[Diagram: each Mapper's intermediate pairs pass through a Partitioner; shuffling then routes each partition to the Reducer responsible for it.]


Partitioner
• int getPartition(key, val, numPartitions)
– Outputs the partition number for a given key
– One partition == values sent to one Reduce task
• HashPartitioner used by default
– Uses key.hashCode() to return partition num
• JobConf sets Partitioner implementation
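For illustration, a hypothetical Partitioner (not from the slides) that sends every key starting with the same letter to the same reduce task; it would be registered with JobConf.setPartitionerClass():

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

// Partition by the first character of the key, so all words starting
// with the same letter land in the same partition (and hence reducer).
public class FirstLetterPartitioner implements Partitioner<Text, IntWritable> {

    public void configure(JobConf job) { }   // no configuration needed

    public int getPartition(Text key, IntWritable value, int numPartitions) {
        String s = key.toString();
        int c = s.isEmpty() ? 0 : Character.toLowerCase(s.charAt(0));
        return (c & Integer.MAX_VALUE) % numPartitions;   // keep the result non-negative
    }
}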
Reduction
ñreduce( WritableComparable key,
Iterator values,
OutputCollector output,
Reporter reporter)
ñKeys & values sent to one partition all go
to the same reduce task
ñCalls are sorted by key – “earlier” keys are
reduced and output before “later” keys
Finally: Writing The Output

[Diagram: each Reducer writes its output through a RecordWriter, obtained from the OutputFormat, to its own output file.]


OutputFormat
ñAnalogous to InputFormat
ñTextOutputFormat
- Writes “key val\n” strings to output file
ñSequenceFileOutputFormat
- Uses a binary format to pack (k, v) pairs
ñNullOutputFormat
- Discards output
Making Hadoop Work
ñBasic configuration involves pointing
nodes at master machines
- mapred.job.tracker
- fs.default.name
- dfs.data.dir, dfs.name.dir
- hadoop.tmp.dir
- mapred.system.dir
ñSee “Hadoop Quickstart” in online
documentation
Configuring for Performance
ñConfiguring Hadoop performed in “base
JobConf” in conf/hadoop-site.xml
ñContains 3 different categories of settings
- Settings that make Hadoop work
- Settings for performance
- Optional flags/bells & whistles
Number of Tasks
• Controlled by two parameters:
– mapred.tasktracker.map.tasks.maximum
– mapred.tasktracker.reduce.tasks.maximum
• Two degrees of freedom in mapper run time: number of tasks/node, and size of InputSplits
• Current conventional wisdom: 2 map tasks/core, less for reducers
• See http://wiki.apache.org/lucene-hadoop/HowManyMapsAndReduces
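As a small illustration (not from the slides), the per-job side of this tuning is exposed on JobConf; the tasktracker maxima above live in the cluster configuration files, and the values used here are arbitrary:

import org.apache.hadoop.mapred.JobConf;

public class TaskCountConfig {
    public static JobConf configure(Class<?> jobClass) {
        JobConf conf = new JobConf(jobClass);
        // A hint for the number of map tasks; the framework may adjust it,
        // since the real count is driven by the number of InputSplits.
        conf.setNumMapTasks(200);
        // The number of reduce tasks, by contrast, is taken literally.
        conf.setNumReduceTasks(8);
        return conf;
    }
}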
Dead Tasks
ñJobs would “run away”, admin restart
needed
ñVery often stuck in huge shuffle process
- Developers did not know about Partitioner
class, may have had non-uniform distribution
- Did not use many Reducer tasks
- Lesson: Design algorithms to use Combiners
where possible
Working With the Scheduler
ñRemember: Hadoop has a FIFO job
scheduler
- No notion of fairness, round-robin
ñDesign your tasks to “play well” with one
another
- Decompose long tasks into several smaller
ones which can be interleaved at Job level
Additional Languages &
Components
Hadoop and C++
ñHadoop Pipes
- Library of bindings for native C++ code
- Operates over local socket connection
ñStraight computation performance may be
faster
ñDownside: Kernel involvement and context
switches
Hadoop and Python
ñOption 1: Use Jython
- Caveat: Jython is a subset of full Python
ñOption 2: HadoopStreaming
HadoopStreaming
ñEffectively allows shell pipe ‘|’ operator to
be used with Hadoop
ñYou specify two programs for map and
reduce
- (+) stdin and stdout do the rest
- (-) Requires serialization to text, context
switches…
- (+) Reuse Linux tools: “cat | grep | sort | uniq”
Eclipse Plugin
ñSupport for Hadoop in Eclipse IDE
- Allows MapReduce job dispatch
- Panel tracks live and recent jobs
• http://www.alphaworks.ibm.com/tech/mapreducetools
References
• http://hadoop.apache.org/
• Jeffrey Dean and Sanjay Ghemawat, "MapReduce: Simplified Data Processing on Large Clusters." Usenix OSDI '04, 2004. http://www.usenix.org/events/osdi04/tech/full_papers/dean/dean.pdf
• David DeWitt and Michael Stonebraker, "MapReduce: A major step backwards", craig-henderson.blogspot.com
• http://scienceblogs.com/goodmath/2008/01/databases_are_hammers_mapreduc.php
wordCount

A Simple Hadoop Example


http://wiki.apache.org/hadoop/WordCount
Word Count Example
ñRead text files and count how often words
occur.
- The input is text files
- The output is a text file
ñeach line: word, tab, count
ñMap: Produce pairs of (word, count)
ñReduce: For each word, sum up the
counts.
Word Count over a Given Set of Web Pages

Input documents: "see bob throw" and "see spot run"
Map output: (see, 1), (bob, 1), (throw, 1), (see, 1), (spot, 1), (run, 1)
Reduce output: bob 1, run 1, see 2, spot 1, throw 1

Can we do word count in parallel?


WordCount Overview
3 import ...
12 public class WordCount {
13
14 public static class Map extends MapReduceBase implements Mapper ... {
17
18 public void map ...
26 }
27
28 public static class Reduce extends MapReduceBase implements Reducer ...
{
29
30 public void reduce ...
37 }
38
39 public static void main(String[] args) throws Exception {
40 JobConf conf = new JobConf(WordCount.class);
41 ...
53 FileInputFormat.setInputPaths(conf, new Path(args[0]));
54 FileOutputFormat.setOutputPath(conf, new Path(args[1]));
55
56 JobClient.runJob(conf);
57 }
58
59 }
wordCount Reducer
28 public static class Reduce
extends MapReduceBase
implements Reducer
<Text, IntWritable, Text, IntWritable>
{
29
30 public void reduce(
Text key,
Iterator<IntWritable> values,
OutputCollector<Text,
IntWritable> output,
Reporter reporter)
throws IOException
{
31 int sum = 0;
32 while (values.hasNext()) {
33 sum += values.next().get();
34 }
35 output.collect(key, new IntWritable(sum));
36 }
37 }
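The slides show the Reducer and the driver but not the Mapper body; for completeness, here is (essentially) the Mapper from the WordCount example referenced above, which emits (word, 1) for every token:

// (uses java.io.IOException, java.util.StringTokenizer,
//  org.apache.hadoop.io.*, org.apache.hadoop.mapred.*)
public static class Map extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, IntWritable> {

  private final static IntWritable one = new IntWritable(1);
  private Text word = new Text();

  public void map(LongWritable key, Text value,
                  OutputCollector<Text, IntWritable> output,
                  Reporter reporter) throws IOException {
    String line = value.toString();
    StringTokenizer tokenizer = new StringTokenizer(line);
    while (tokenizer.hasMoreTokens()) {
      word.set(tokenizer.nextToken());
      output.collect(word, one);        // emit (word, 1) for each token
    }
  }
}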
wordCount JobConf
40 JobConf conf = new JobConf(WordCount.class);
41 conf.setJobName("wordcount");
42
43 conf.setOutputKeyClass(Text.class);
44 conf.setOutputValueClass(IntWritable.class);
45
46 conf.setMapperClass(Map.class);
47 conf.setCombinerClass(Reduce.class);
48 conf.setReducerClass(Reduce.class);
49
50 conf.setInputFormat(TextInputFormat.class);
51 conf.setOutputFormat(TextOutputFormat.class);
WordCount main
39 public static void main(String[] args) throws Exception {
40 JobConf conf = new JobConf(WordCount.class);
41 conf.setJobName("wordcount");
42
43 conf.setOutputKeyClass(Text.class);
44 conf.setOutputValueClass(IntWritable.class);
45
46 conf.setMapperClass(Map.class);
47 conf.setCombinerClass(Reduce.class);
48 conf.setReducerClass(Reduce.class);
49
50 conf.setInputFormat(TextInputFormat.class);
51 conf.setOutputFormat(TextOutputFormat.class);
52
53 FileInputFormat.setInputPaths(conf, new Path(args[0]));
54 FileOutputFormat.setOutputPath(conf, new Path(args[1]));
55
56 JobClient.runJob(conf);
57 }
Invocation of wordcount
1. /usr/local/bin/hadoop dfs -mkdir <hdfs-dir>
2. /usr/local/bin/hadoop dfs -copyFromLocal
<local-dir> <hdfs-dir>
3. /usr/local/bin/hadoop
jar hadoop-*-examples.jar
wordcount
[-m <#maps>]
[-r <#reducers>]
<in-dir>
<out-dir>
Lifecycle of a MapReduce Job

[Figure: a program consists of a Map function and a Reduce function; this program is then run as a MapReduce job.]
Lifecycle of a MapReduce Job
[Timeline: over time, the input splits are consumed by successive waves of map tasks (Wave 1, Wave 2), followed by waves of reduce tasks (Wave 1, Wave 2).]

How are the number of splits, the number of map and reduce tasks, the memory allocation to tasks, etc., determined?
Job Configuration Parameters
190+ parameters in Hadoop
Set manually, or defaults are used
