Hadoop - MapReduce
Due to the advent of new technologies, devices, and communication means like social
networking sites, the amount of data produced by mankind is growing rapidly every year. The
amount of data produced by us from the beginning of time till 2003 was 5 billion gigabytes.
If you pile up the data in the form of disks, it may fill an entire football field. The same
amount was created every two days in 2011, and every ten minutes in 2013. This rate is
still growing enormously. Though all this information produced is meaningful and can be
useful when processed, it is being neglected.
Thus Big Data includes huge volume, high velocity, and an extensive variety of data. The data in it will
be of three types.
Structured data: Relational data.
Semi Structured data: XML data.
Unstructured data: Word, PDF, Text, Media Logs
Big data technologies are important in providing more accurate analysis, which may lead to
more concrete decision-making resulting in greater operational efficiencies, cost reductions,
and reduced risks for the business. To harness the power of big data, you would require an
infrastructure that can manage and process huge volumes of structured and unstructured data
in real-time and can protect data privacy and security. There are various technologies in the
market from different vendors including Amazon, IBM, Microsoft, etc., to handle big data.
While looking into the technologies that handle big data, we examine the following two
classes of technology: Operational and Analytical.
To fulfill the above challenges, organizations normally take the help of enterprise servers.
Chapter – 2
2.3. Hadoop
Using the solution provided by Google, Doug Cutting and his team developed an Open
Source Project called HADOOP.
Hadoop runs applications using the MapReduce algorithm, where the data is processed in
parallel on different nodes. In short, Hadoop is used to develop applications that can perform
complete statistical analysis on huge amounts of data.
3. Introduction to Hadoop
Hadoop is an Apache open-source framework written in Java that allows distributed
processing of large datasets across clusters of computers using simple programming
models. The Hadoop framework application works in an environment that provides
distributed storage and computation across clusters of computers. Hadoop is designed to
scale up from a single server to thousands of machines, each offering local computation
and storage.
Fig.3.1
3.2. Hadoop Distributed File System (HDFS)
The Hadoop Distributed File System (HDFS) is based on the Google File System (GFS) and
provides a distributed file system that is designed to run on commodity hardware. It has many
similarities with existing distributed file systems. However, the differences from other
distributed file systems are significant. It is highly fault-tolerant and is designed to be
deployed on low-cost hardware. It provides high throughput access to application data and is
suitable for applications having large datasets.
Apart from the two core components mentioned above (MapReduce and HDFS), the Hadoop framework also includes the
following two modules:
Hadoop Common: These are Java libraries and utilities required by other
Hadoop modules.
Hadoop YARN: This is a framework for job scheduling and cluster resource
management.
The Hadoop framework allows the user to quickly write and test distributed systems. It is
efficient, and it automatically distributes the data and work across the machines and, in turn,
utilizes the underlying parallelism of the CPU cores.
Hadoop does not rely on hardware to provide fault-tolerance and high availability (FTHA),
rather Hadoop library itself has been designed to detect and handle failures at the application
layer.
Servers can be added or removed from the cluster dynamically and Hadoop continues to
operate without interruption.
Another big advantage of Hadoop is that, apart from being open source, it is compatible with all
platforms since it is Java based.
Chapter – 4
4. Environment Setup
Hadoop is supported by the GNU/Linux platform and its flavors. Therefore, we have to install a
Linux operating system for setting up the Hadoop environment. In case you have an OS other
than Linux, you can install VirtualBox and run Linux inside it.
Open the Linux terminal and type the following commands to create a user.
$ su
password:
# useradd hadoop
# passwd hadoop
New passwd:
Retype new passwd
If java is not installed in your system, then follow the steps given below for installing java.
Step 1
Download java (JDK <latest version> - X64.tar.gz) by visiting the following link
http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html.
Then jdk-7u71-linux-x64.tar.gz will be downloaded into your system.
Step 2
Generally you will find the downloaded java file in Downloads folder. Verify it and extract
the jdk-7u71-linux-x64.gz file using the following commands.
$ cd Downloads/
$ ls
jdk-7u71-linux-x64.gz
$ tar zxf jdk-7u71-linux-x64.gz
$ ls
jdk1.7.0_71 jdk-7u71-linux-x64.gz
Step 3
To make java available to all the users, you have to move it to the location “/usr/local/”. Open
root, and type the following commands.
$ su
password:
# mv jdk1.7.0_71 /usr/local/
# exit
Step 4
For setting up the PATH and JAVA_HOME variables, add the following commands to the
~/.bashrc file.
export JAVA_HOME=/usr/local/jdk1.7.0_71
export PATH=$PATH:$JAVA_HOME/bin
Now apply all the changes into the currently running system.
$ source ~/.bashrc
Step 5
Use the following commands to configure the java alternatives.
# alternatives --install /usr/bin/java java /usr/local/jdk1.7.0_71/bin/java 2
# alternatives --install /usr/bin/javac javac /usr/local/jdk1.7.0_71/bin/javac 2
# alternatives --install /usr/bin/jar jar /usr/local/jdk1.7.0_71/bin/jar 2
# alternatives --set java /usr/local/jdk1.7.0_71/bin/java
# alternatives --set javac /usr/local/jdk1.7.0_71/bin/javac
# alternatives --set jar /usr/local/jdk1.7.0_71/bin/jar
Now verify the installation using the command java -version from the terminal as explained
above.
4.3.Downloading Hadoop
Download and extract Hadoop 2.4.1 from the Apache Software Foundation using the following
commands.
$ su
password:
# cd /usr/local
# wget http://apache.claz.org/hadoop/common/hadoop-2.4.1/hadoop-2.4.1.tar.gz
# tar xzf hadoop-2.4.1.tar.gz
# mkdir hadoop
# mv hadoop-2.4.1/* hadoop/
# exit
Before proceeding further, you need to make sure that Hadoop is working fine. Just issue
the following command:
$ hadoop version
If everything is fine with your setup, then you should see the following result:
Hadoop 2.4.1
Subversion https://svn.apache.org/repos/asf/hadoop/common -r 1529768
Compiled by hortonmu on 2013-10-07T06:28Z
Compiled with protoc 2.5.0
From source with checksum 79e53ce7994d1628b240f09af91e1af4
It means your Hadoop's standalone mode setup is working fine. By default, Hadoop is
configured to run in a non-distributed mode on a single machine.
Example
Let's check a simple example of Hadoop. The Hadoop installation delivers the following
example MapReduce jar file, which provides basic functionality of MapReduce and can
be used for calculations such as the value of Pi, word counts in a given list of files, and so on.
$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar
Let's have an input directory where we will push a few files; our requirement is to
count the total number of words in those files. To calculate the total number of words, we
do not need to write our own MapReduce program, since the .jar file contains the implementation
for word count. You can try other examples using the same .jar file; just issue the
following command to check the supported MapReduce functional programs in the
hadoop-mapreduce-examples-2.4.1.jar file.
$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar
Step 1
Create temporary content files in the input directory. You can create this input directory anywhere
you would like to work.
$ mkdir input
$ cp $HADOOP_HOME/*.txt input
$ ls -l input
These files have been copied from the Hadoop installation home directory. For your experiment, you
can have different and large sets of files.
Step 2
Let's start the Hadoop process to count the total number of words in all the files available in
the input directory, as follows:
$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar wordcount input output
Step 3
Step 2 will do the required processing and save the output in the output/part-r-00000 file, which
you can check by using:
$ cat output/*
It will list down all the words along with their total counts available in all the files available in
the input directory.
"AS 4 "Contribution" 1
"Contributor" 1
"Derivative 1
"Legal 1
"License" 1
"License"); 1
"Licensor" 1
"NOTICE” 1 "Not 1
"Object" 1
"Source” 1
"Work” 1
"You" 1
"Your") 1
"[]" 1
"control" 1
"printed 1
"submitted" 1
(50%) 1
(BIS), 1
(C) 1
(Don't) 1
(ECCN) 1
(INCLUDING 2
(INCLUDING, 2
.............
core-site.xml
The core-site.xml file contains information such as the port number used for the Hadoop
instance, the memory allocated for the file system, the memory limit for storing the data, and
the size of read/write buffers. Open core-site.xml and add the following properties in between
the <configuration>, </configuration> tags.
<configuration>
   <property>
      <name>fs.default.name</name>
      <value>hdfs://localhost:9000</value>
   </property>
</configuration>
hdfs-site.xml
The hdfs-site.xml file contains information such as the replication value, the namenode
path, and the datanode paths of your local file systems, that is, the place where you want
to store the Hadoop infrastructure.
Let us assume the following data.
dfs.replication (data replication value) = 1
(In the paths given below, /hadoop/ is the user name, and hadoopinfra/hdfs/namenode is the
directory created by the HDFS file system.)
namenode path = file:///home/hadoop/hadoopinfra/hdfs/namenode
datanode path = file:///home/hadoop/hadoopinfra/hdfs/datanode
Open this file and add the following properties in between the <configuration>, </configuration> tags
in this file.
<configuration>
   <property>
      <name>dfs.replication</name>
      <value>1</value>
   </property>
   <property>
      <name>dfs.name.dir</name>
      <value>file:///home/hadoop/hadoopinfra/hdfs/namenode</value>
   </property>
   <property>
      <name>dfs.data.dir</name>
      <value>file:///home/hadoop/hadoopinfra/hdfs/datanode</value>
   </property>
</configuration>
Note: In the above file, all the property values are user-defined and you can make changes according
to your Hadoop infrastructure.
yarn-site.xml
This file is used to configure YARN in Hadoop. Open the yarn-site.xml file and add
the following properties in between the <configuration>, </configuration> tags in this
file.
<configuration>
   <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
   </property>
</configuration>
mapred-site.xml
This file is used to specify which MapReduce framework we are using. By default,
Hadoop contains a template of mapred-site.xml. First of all, it is required to copy the file
from mapred-site.xml.template to mapred-site.xml using the following
command.
$ cp mapred-site.xml.template mapred-site.xml
Open mapred-site.xml file and add the following properties in between the
<configuration>, </configuration> tags in this file.
<configuration>
   <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
   </property>
</configuration>
4.7. Verifying Hadoop Installation
The following steps are used to verify the Hadoop installation.
Step 1: Name Node Setup
Set up the namenode using the command "hdfs namenode -format" as follows.
$ cd ~
$ hdfs namenode -format
Fig.4.1
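Assuming the scripts under $HADOOP_HOME/sbin are on the PATH, the HDFS and YARN daemons are started first; a sketch of the usual sequence:
$ start-dfs.sh
$ start-yarn.sh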
The default port number to access all the applications of the cluster is 8088. Use the following
URL to visit this service.
http://localhost:8088/
Chapter -5
5. HDFS Overview
Hadoop File System was developed using a distributed file system design. It runs on
commodity hardware. Unlike other distributed systems, HDFS is highly fault-tolerant and
designed using low-cost hardware. HDFS holds a very large amount of data and provides easier
access. To store such huge data, the files are stored across multiple machines. These files are
stored in a redundant fashion to rescue the system from possible data losses in case of failure.
HDFS also makes applications available for parallel processing.
5.1.Features of HDFS
It is suitable for distributed storage and processing.
Hadoop provides a command interface to interact with HDFS.
The built-in servers of namenode and datanode help users to easily check the
status of the cluster.
Streaming access to file system data.
HDFS provides file permissions and authentication.
5.2.HDFS Architecture
Given below is the architecture of a Hadoop File System.
Fig.5.1
HDFS follows the master-slave architecture and it has the following elements.
5.2.1. Namenode
The namenode is the commodity hardware that contains the GNU/Linux operating
system and the namenode software. It is software that can be run on commodity
hardware. The system having the namenode acts as the master server and it does
the following tasks:
Manages the file system namespace.
Regulates client’s access to files.
It also executes file system operations such as renaming, closing, and
opening files and directories.
5.2.2. Datanode
The datanode is commodity hardware having the GNU/Linux operating system and
datanode software. For every node (commodity hardware/system) in a cluster, there will be a
datanode. These nodes manage the data storage of their system.
Datanodes perform read-write operations on the file systems, as per client request.
They also perform operations such as block creation, deletion, and replication
according to the instructions of the namenode.
5.2.3. Block
Generally the user data is stored in the files of HDFS. The file in a file system is
divided into one or more segments and/or stored in individual data nodes. These file
segments are called blocks. In other words, the minimum amount of data that HDFS
can read or write is called a block. The default block size is 64 MB, but it can be
increased as needed by changing the HDFS configuration.
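For illustration, such a change is made in hdfs-site.xml; a minimal sketch, assuming the Hadoop 2.x property name dfs.blocksize and a 128 MB (134217728 bytes) target, would be:
<property>
   <name>dfs.blocksize</name>
   <value>134217728</value>
</property>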
5.3.Goals of HDFS
Fault detection and recovery: Since HDFS includes a large number of commodity
hardware nodes, failure of components is frequent. Therefore HDFS should have mechanisms
for quick and automatic fault detection and recovery.
Huge datasets: HDFS should have hundreds of nodes per cluster to manage the
applications having huge datasets.
Hardware at data: A requested task can be done efficiently, when the computation
takes place near the data. Especially where huge datasets are involved, it reduces the
network traffic and increases the throughput.
Chapter – 6
6. HDFS Operations
After formatting the HDFS, start the distributed file system. The following command will start the
namenode as well as the data nodes as a cluster.
$ start-dfs.sh
After loading the information in the server, we can find the list of files in a directory,
the status of a file, and so on, using 'ls'. Given below is the syntax of ls; you can pass a directory
or a filename as an argument.
$ $HADOOP_HOME/bin/hadoop fs -ls <args>
Step 2
Transfer and store a data file from local systems to the Hadoop file system using the put
command.
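A typical invocation looks like the following sketch, assuming a local file named file.txt in the current directory and the /user/input directory verified in the next step:
$ $HADOOP_HOME/bin/hadoop fs -put file.txt /user/input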
Step 3
You can verify the file using ls command.
$ $HADOOP_HOME/bin/hadoop fs -ls /user/input
Step 1
Initially, view the data from HDFS using cat command.
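A sketch of such a command, where the file name under /user/output depends on what your job produced:
$ $HADOOP_HOME/bin/hadoop fs -cat /user/output/<filename>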
Step 2
Get the file from HDFS to the local file system using get command.
$ $HADOOP_HOME/bin/hadoop fs -get /user/output/ /home/hadoop_tp/
You can shut down the HDFS using the following command.
$ stop-dfs.sh
Chapter – 7
7. Command Reference
"<path>" means any file or directory name. "<path>..." means one or more file or directory
names. "<file>" means any filename. "<src>" and "<dest>" are path names in a directed
operation. "<localSrc>" and "<localDest>" are paths as above, but on the local file system.
All other files and path names refer to the objects inside HDFS.
Command: Description
-du <path>: Shows disk usage, in bytes, for all the files which match path; filenames are reported with the full HDFS protocol prefix.
-put <localSrc> <dest>: Copies the file or directory from the local file system identified by localSrc to dest within the DFS.
-copyFromLocal <localSrc> <dest>: Identical to -put.
-moveFromLocal <localSrc> <dest>: Copies the file or directory from the local file system identified by localSrc to dest within HDFS, and then deletes the local copy on success.
-get [-crc] <src> <localDest>: Copies the file or directory in HDFS identified by src to the local file system path identified by localDest.
-getmerge <src> <localDest>: Retrieves all files that match the path src in HDFS, and copies them to a single, merged file in the local file system identified by localDest.
-moveToLocal <src> <localDest>: Works like -get, but deletes the HDFS copy on success.
-setrep [-R] [-w] rep <path>: Sets the target replication factor for files identified by path to rep. (The actual replication factor will move toward the target over time.)
-chmod [-R] mode,mode,... <path>...: Changes the file permissions associated with one or more objects identified by path.... Performs changes recursively with -R. mode is a 3-digit octal mode, or {augo}+/-{rwxX}. Assumes a (all) if no scope is specified, and does not apply an umask.
-chown [-R] [owner][:[group]] <path>...: Sets the owning user and/or group for files or directories identified by path.... Sets owner recursively if -R is specified.
-chgrp [-R] group <path>...: Sets the owning group for files or directories identified by path.... Sets group recursively if -R is specified.
Chapter – 8
8. MapReduce
MapReduce is a framework using which we can write applications to process huge amounts of data, in
parallel, on large clusters of commodity hardware in a reliable manner.
8.1.What is MapReduce?
MapReduce is a processing technique and a programming model for distributed computing based on Java.
The MapReduce algorithm contains two important tasks, namely Map and Reduce. Map takes a set of
data and converts it into another set of data, where individual elements are broken down into tuples
(key/value pairs). The reduce task then takes the output from a map as an input and combines
those data tuples into a smaller set of tuples. As the sequence of the name MapReduce implies, the
reduce task is always performed after the map job.
The major advantage of MapReduce is that it is easy to scale data processing over multiple computing
nodes. Under the MapReduce model, the data processing primitives are called mappers and reducers.
Decomposing a data processing application into mappers and reducers is sometimes nontrivial. But,
once we write an application in the MapReduce form, scaling the application to run over hundreds,
thousands, or even tens of thousands of machines in a cluster is merely a configuration change. This
simple scalability is what has attracted many programmers to use the MapReduce model.
8.2.The Algorithm
Generally the MapReduce paradigm is based on sending the computation to where the data resides.
MapReduce program executes in three stages, namely map stage, shuffle stage, and reduce
stage.
Map stage: The map or mapper’s job is to process the input data. Generally the input data is
in the form of file or directory and is stored in the Hadoop file system (HDFS). The input file
is passed to the mapper function line by line. The mapper processes the data and creates
several small chunks of data.
Reduce stage: This stage is the combination of the Shuffle stage and the Reduce stage. The
Reducer’s job is to process the data that comes from the mapper. After processing, it produces
a new set of output, which will be stored in the HDFS.
During a MapReduce job, Hadoop sends the Map and Reduce tasks to the appropriate servers
in the cluster.
The framework manages all the details of data-passing such as issuing tasks, verifying task
completion, and copying data around the cluster between the nodes.
Most of the computing takes place on nodes with data on local disks that reduces the network
traffic.
After completion of the given tasks, the cluster collects and reduces the data to form an
appropriate result, and sends it back to the Hadoop server.
Fig.8.1
8.3.Inputs and Outputs (Java Perspective)
The MapReduce framework operates on <key, value> pairs, that is, the framework views the input to
the job as a set of <key, value> pairs and produces a set of <key, value> pairs as the output of the job,
conceivably of different types.
The key and value classes should be serializable by the framework and hence need to
implement the Writable interface. Additionally, the key classes have to implement the
WritableComparable interface to facilitate sorting by the framework. Input and output types of a MapReduce
job: (Input) <k1, v1> -> map -> <k2, v2> -> reduce -> <k3, v3> (Output).
8.4.Terminology
PayLoad - Applications implement the Map and the Reduce functions, and form the core of
the job.
Mapper - Mapper maps the input key/value pairs to a set of intermediate key/value pairs.
NamedNode - Node that manages the Hadoop Distributed File System (HDFS).
DataNode - Node where data is presented in advance before any processing takes place.
MasterNode - Node where JobTracker runs and which accepts job requests from clients.
SlaveNode - Node where Map and Reduce program runs.
JobTracker - Schedules jobs and tracks the assigned jobs to the Task Tracker.
Task Tracker - Tracks the task and reports status to JobTracker.
Job - A program is an execution of a Mapper and Reducer across a dataset.
Task - An execution of a Mapper or a Reducer on a slice of data.
Task Attempt - A particular instance of an attempt to execute a task on a SlaveNode.
8.5.Example Scenario
Given below is the data regarding the electrical consumption of an organization. It contains the
monthly electrical consumption and the annual average for various years.
If the above data is given as input, we have to write applications to process it and produce results such
as finding the year of maximum usage, the year of minimum usage, and so on. This is a walkover for
programmers with a finite number of records. They will simply write the logic to produce the required
output, and pass the data to the application written.
But, think of the data representing the electrical consumption of all the large-scale industries of a
particular state, since its formation.
When we write applications to process such bulk data,
They will take a lot of time to execute.
There will be heavy network traffic when we move data from the source to the network server and
so on.
Example Program
Given below is the program for the sample data, using the MapReduce framework.
package hadoop;

import java.util.*;
import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.*;

public class ProcessUnits
{
   //Mapper class
   public static class E_EMapper extends MapReduceBase implements
   Mapper<LongWritable, /*Input key Type */
          Text,         /*Input value Type*/
          Text,         /*Output key Type*/
          IntWritable>  /*Output value Type*/
   {
      //Map function: emits (year, annual average) for each input line
      public void map(LongWritable key, Text value,
      OutputCollector<Text, IntWritable> output,
      Reporter reporter) throws IOException
      {
         String line = value.toString();
         String lasttoken = null;
         StringTokenizer s = new StringTokenizer(line, "\t");
         String year = s.nextToken();

         // The last token of each line holds the annual average
         while (s.hasMoreTokens()) { lasttoken = s.nextToken(); }
         int avgprice = Integer.parseInt(lasttoken);
         output.collect(new Text(year), new IntWritable(avgprice));
      }
   }

   //Reducer class
   public static class E_EReduce extends MapReduceBase implements
   Reducer<Text, IntWritable, Text, IntWritable>
   {
      //Reduce function: emits the values above the threshold for each year
      public void reduce(Text key, Iterator<IntWritable> values,
      OutputCollector<Text, IntWritable> output,
      Reporter reporter) throws IOException
      {
         int maxavg = 30; // threshold above which a year's value is reported
         int val = Integer.MIN_VALUE;
         while (values.hasNext())
         {
            if ((val = values.next().get()) > maxavg)
            {
               output.collect(key, new IntWritable(val));
            }
         }
      }
   }

   //Main function
   public static void main(String args[]) throws Exception
   {
      JobConf conf = new JobConf(ProcessUnits.class);

      conf.setJobName("max_eletricityunits");
      conf.setOutputKeyClass(Text.class);
      conf.setOutputValueClass(IntWritable.class);
      conf.setMapperClass(E_EMapper.class);
      conf.setCombinerClass(E_EReduce.class);
      conf.setReducerClass(E_EReduce.class);
      conf.setInputFormat(TextInputFormat.class);
      conf.setOutputFormat(TextOutputFormat.class);

      FileInputFormat.setInputPaths(conf, new Path(args[0]));
      FileOutputFormat.setOutputPath(conf, new Path(args[1]));

      JobClient.runJob(conf);
   }
}
Save the above program as ProcessUnits.java. The compilation and execution of the program
is explained below.
Follow the steps given below to compile and execute the above program.
Step 1
The following command is to create a directory to store the compiled java classes.
$ mkdir units
Step 2
Make sure the Hadoop client libraries needed to compile the MapReduce program are available; in
Hadoop 2.x they ship under $HADOOP_HOME/share/hadoop.
Step 3
The following commands are used for compiling the ProcessUnits.java program and creating
a jar for the program.
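One way to do this, sketched here under the assumptions that the hadoop command is on the PATH (so hadoop classpath can supply the compile classpath) and that the jar is named units.jar:
$ javac -classpath "$(hadoop classpath)" -d units ProcessUnits.java
$ jar -cvf units.jar -C units/ .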
Step 4
The following command is used to create an input directory in HDFS.
$HADOOP_HOME/bin/hadoop fs -mkdir input_dir
Step 5
The following command is used to copy the input file named sample.txt in the input directory
of HDFS.
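Assuming sample.txt sits in the current working directory, the copy would look like this:
$HADOOP_HOME/bin/hadoop fs -put sample.txt input_dir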
Step 6
The following command is used to verify the files in the input directory.
$HADOOP_HOME/bin/hadoop fs -ls input_dir/
Step 7
The following command is used to run the ProcessUnits application by taking the input files
from the input directory.
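A plausible invocation, assuming the units.jar built in the compilation sketch above and the hadoop.ProcessUnits driver class from the listing:
$HADOOP_HOME/bin/hadoop jar units.jar hadoop.ProcessUnits input_dir output_dir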
Wait for a while until the file is executed. After execution, as shown below, the output will
contain the number of input splits, the number of Map tasks, the number of reducer tasks, etc.
14/10/31 06:02:52 INFO mapreduce.Job: Job job_1414748220717_0002 completed successfully
14/10/31 06:02:52 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=61
FILE: Number of bytes written=279400
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=546
HDFS: Number of bytes written=40
HDFS: Number of read operations=9
HDFS: Number of large read operations=0
HDFS: Number of write operations=2
Job Counters
Launched map tasks=2
Launched reduce tasks=1
Data-local map tasks=2
Total time spent by all maps in occupied slots (ms)=146137
Total time spent by all reduces in occupied slots (ms)=441
Total time spent by all map tasks (ms)=14613
Total time spent by all reduce tasks (ms)=44120
Total vcore-seconds taken by all map tasks=146137
Total vcore-seconds taken by all reduce tasks=44120
Step 8
The following command is used to verify the resultant files in the output folder.
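A likely command, reusing the output_dir path from the run above:
$HADOOP_HOME/bin/hadoop fs -ls output_dir/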
Step 9
The following command is used to see the output in the Part-00000 file. This file is generated by
the MapReduce job and stored in HDFS.
$HADOOP_HOME/bin/hadoop fs -cat output_dir/part-00000
1981 34
1984 40
1985 45
Step 10
The following command is used to copy the output folder from HDFS to the local file system
for analyzing.
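A likely command, reusing the /home/hadoop_tp/ local directory from the HDFS operations chapter:
$HADOOP_HOME/bin/hadoop fs -get output_dir/ /home/hadoop_tp/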
The following GENERIC_OPTIONS are available for interacting with MapReduce jobs through the hadoop job command.
GENERIC_OPTIONS: Description
-submit <job-file>: Submits the job.
-status <job-id>: Prints the map and reduce completion percentage and all job counters.
-counter <job-id> <group-name> <counter-name>: Prints the counter value.
-kill <job-id>: Kills the job.
-events <job-id> <from-event-#> <#-of-events>: Prints the events' details received by the jobtracker for the given range.
-history [all] <jobOutputDir>: Prints job details, and failed and killed tip details. More details about the job, such as successful tasks and task attempts made for each task, can be viewed by specifying the [all] option.
Chapter – 9
9. Streaming
Hadoop streaming is a utility that comes with the Hadoop distribution. This utility allows you to
create and run Map/Reduce jobs with any executable or script as the mapper and/or the reducer.
9.1.Example using Python
For Hadoop streaming, we consider the word-count problem. Any job in Hadoop must
have two phases: mapper and reducer. We have written code for the mapper and the reducer as
Python scripts to run them under Hadoop. One can also write the same in Perl and Ruby.
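A minimal word-count mapper in the same style (a sketch; the file name mapper.py matches the execution command shown later) could be:
#!/usr/bin/python
import sys

# Read lines from standard input
for myline in sys.stdin:
    # Remove leading/trailing whitespace and split the line into words
    words = myline.strip().split()
    # Emit each word with a count of 1, tab-separated, for the reducer
    for myword in words:
        print '%s\t%s' % (myword, 1)
The reducer below (reducer.py) then sums these per-word counts.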
#!/usr/bin/python
from operator import itemgetter
import sys

current_word = ""
current_count = 0
word = ""

# Input takes from standard input
for myline in sys.stdin:
    # Remove whitespace either side
    myline = myline.strip()
    # Split the input we got from mapper.py
    word, count = myline.split('\t', 1)
    # Convert count variable to integer
    try:
        count = int(count)
    except ValueError:
        # Count was not a number, so silently ignore this line
        continue
    if current_word == word:
        current_count += count
    else:
        if current_word:
            # Write result to standard output
            print '%s\t%s' % (current_word, current_count)
        current_count = count
        current_word = word

# Do not forget to output the last word if needed!
if current_word == word:
    print '%s\t%s' % (current_word, current_count)
Save the mapper and reducer code as mapper.py and reducer.py in the Hadoop home directory.
Make sure these files have execution permission (chmod +x mapper.py and chmod +x
reducer.py). As Python is indentation sensitive, the same code can be downloaded from the
link below.
9.1.3. Execution of WordCount Program
$ $HADOOP_HOME/bin/hadoop jar contrib/streaming/hadoop-streaming-1.2.1.jar \
-input input_dirs \
-output output_dir \
-mapper <path>/mapper.py \
-reducer <path>/reducer.py
Where "\" is used for line continuation for clear readability.
For example,
./bin/hadoop jar contrib/streaming/hadoop-streaming-1.2.1.jar -input myinput -output
myoutput -mapper /home/expert/hadoop-1.2.1/mapper.py -reducer /home/expert/hadoop-
1.2.1/reducer.py
9.3.Important Commands
Parameter (Required/Optional): Description
-input directory/file-name (Required): Input location for the mapper.
-output directory-name (Required): Output location for the reducer.
-mapper executable or script or JavaClassName (Required): Mapper executable.
-reducer executable or script or JavaClassName (Required): Reducer executable.
-file file-name (Optional): Makes the mapper, reducer, or combiner executable available locally on the compute nodes.
-inputformat JavaClassName (Optional): The class you supply should return key/value pairs of Text class. If not specified, TextInputFormat is used as the default.
-outputformat JavaClassName (Optional): The class you supply should take key/value pairs of Text class. If not specified, TextOutputFormat is used as the default.
-partitioner JavaClassName (Optional): Class that determines which reduce a key is sent to.
-combiner streamingCommand or JavaClassName (Optional): Combiner executable for map output.
-cmdenv name=value (Optional): Passes the environment variable to streaming commands.
-inputreader (Optional): For backwards-compatibility: specifies a record reader class (instead of an input format class).
-verbose (Optional): Verbose output.
-lazyOutput (Optional): Creates output lazily. For example, if the output format is based on FileOutputFormat, the output file is created only on the first call to output.collect (or Context.write).
-numReduceTasks (Optional): Specifies the number of reducers.
-mapdebug (Optional): Script to call when a map task fails.
-reducedebug (Optional): Script to call when a reduce task fails.
Chapter – 10
10. Multi-Node Cluster
This chapter explains the setup of a Hadoop Multi-Node cluster in a distributed environment.
As the whole cluster cannot be demonstrated, we explain the Hadoop cluster environment
using three systems (one master and two slaves); given below are their IP addresses.
Hadoop Master: 192.168.1.15 (hadoop-master)
Hadoop Slave: 192.168.1.16 (hadoop-slave-1)
Hadoop Slave: 192.168.1.17 (hadoop-slave-2)
Follow the steps given below to have Hadoop Multi-Node cluster setup.
If java is not installed in your system, then follow the given steps for installing java.
Step 1
Download java (JDK <latest version> - X64.tar.gz) by visiting the following link
http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html.
Step 2
Generally you will find the downloaded java file in Downloads folder. Verify it and extract the jdk-
7u71-linux-x64.gz file using the following commands.
$ cd Downloads/
$ ls
jdk-7u71-Linux-x64.gz
$ tar zxf jdk-7u71-Linux-x64.gz
$ ls
jdk1.7.0_71 jdk-7u71-Linux-x64.gz
Step 3
To make java available to all the users, you have to move it to the location “/usr/local/”. Open the
root, and type the following commands.
$ su
password:
# mv jdk1.7.0_71 /usr/local/
# exit
Step 4
For setting up PATH and JAVA_HOME variables, add the following commands to ~/.bashrc file.
export JAVA_HOME=/usr/local/jdk1.7.0_71
export PATH=$PATH:$JAVA_HOME/bin
Now verify the java -version command from the terminal as explained above.
Follow the above process and install java in all your cluster nodes.
Create a system user account on both the master and slave systems to use the Hadoop installation.
# useradd hadoop
# passwd hadoop
core-site.xml
Open the core-site.xml file and edit it as shown below.
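A minimal sketch of its contents, assuming the hadoop-master host name used throughout this chapter and the default HDFS port 9000:
<configuration>
   <property>
      <name>fs.default.name</name>
      <value>hdfs://hadoop-master:9000/</value>
   </property>
</configuration>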
hdfs-site.xml
<configuration>
   <property>
      <name>dfs.data.dir</name>
      <value>/opt/hadoop/hadoop/dfs/name/data</value>
      <final>true</final>
   </property>
   <property>
      <name>dfs.name.dir</name>
      <value>/opt/hadoop/hadoop/dfs/name</value>
      <final>true</final>
   </property>
   <property>
      <name>dfs.replication</name>
      <value>1</value>
   </property>
</configuration>
mapred-site.xml
Open the mapred-site.xml file and edit it as shown below.
<configuration>
   <property>
      <name>mapred.job.tracker</name>
      <value>hadoop-master:9001</value>
   </property>
</configuration>
hadoop-env.sh
Open the hadoop-env.sh file and set JAVA_HOME, HADOOP_CONF_DIR, and HADOOP_OPTS as per
your system configuration.
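A minimal sketch, assuming the JDK location used earlier and the /opt/hadoop/hadoop layout used in this chapter:
export JAVA_HOME=/usr/local/jdk1.7.0_71
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
export HADOOP_CONF_DIR=/opt/hadoop/hadoop/etc/hadoop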
Next, switch to the hadoop user on the master and configure the masters file, which lists the master node.
# su hadoop
$ cd /opt/hadoop/hadoop
$ vi etc/hadoop/masters
hadoop-master
10.9.1. Networking
Add new nodes to an existing Hadoop cluster with some appropriate network configuration. Assume
the following network configuration.
For the new node, assume the following configuration:
IP address : 192.168.1.103
netmask : 255.255.255.0
hostname : slave3.in
Verify that the new node can reach the master:
ping master.in
10.12.Start the DataNode on the New Node
Start the datanode daemon on the new node manually, as sketched below, and check its status with the
jps command.
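A typical way to start it (a sketch; Hadoop 2.x keeps the daemon script under sbin, older releases under bin):
$ $HADOOP_HOME/sbin/hadoop-daemon.sh start datanode
The jps output on the new node should then show the DataNode process: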
$ jps
7141 DataNode
10312 Jps
Step 1
Login to master.
Login to master machine user where Hadoop is installed.
$ su hadoop
Step 2
Change cluster configuration.
An exclude file must be configured before starting the cluster. Add
a key named dfs.hosts.exclude to our $HADOOP_HOME/etc/hadoop/hdfs-site.xml
file. The value associated with this key provides the full path to a file on the
NameNode's local file system which contains a list of machines which are not
permitted to connect to HDFS.
For example, add these lines to etc/hadoop/hdfs-site.xml file.
<property>
   <name>dfs.hosts.exclude</name>
   <value>/home/hadoop/hadoop-1.2.1/hdfs_exclude.txt</value>
   <description>DFS exclude</description>
</property>
Step 3
Determine the hosts to decommission. Add each machine to be decommissioned to the exclude file
(hdfs_exclude.txt above), one hostname per line. For example:
slave2.in
Step 4
Force configuration reload.
Run the command "$HADOOP_HOME/bin/hadoop dfsadmin -refreshNodes"
without the quotes.
$ $HADOOP_HOME/bin/hadoop dfsadmin -refreshNodes
This will force the NameNode to re-read its configuration, including the newly updated ‘excludes’
file. It will decommission the nodes over a period of time, allowing time for each node's blocks to be
replicated onto machines which are scheduled to remain active.
On slave2.in, check the jps command output. After some time, you will see that the DataNode process is
shut down automatically.
Step 5
Shutdown nodes.
After the decommission process has been completed, the decommissioned hardware
can be safely shut down for maintenance. Run the report command to dfsadmin to
check the status of decommission. The following command will describe the status of
the decommission node and the connected nodes to the cluster.
$ $HADOOP_HOME/bin/hadoop dfsadmin -report
Step 6
Edit the excludes file again. Once the machines have been decommissioned, they can be removed from
the excludes file; running $HADOOP_HOME/bin/hadoop dfsadmin -refreshNodes again will read the
excludes file back into the NameNode, allowing the DataNodes to rejoin the cluster after their
maintenance has been completed.