
GURU TEGH BAHADUR INSTITUTE OF TECHNOLOGY

BIG DATA ANALYTICS PRACTICAL FILE

Submitted By - Palak

Class - IT-2 (8th Semester)

Enrollment No. - 20813203119
Program 1: How to Use Hadoop Cluster
Solution

There are three modes in which you can run Hadoop.

Standalone mode:

In this mode you need an IDE such as Eclipse and the Hadoop library files (which you can
download from the Apache website). You can create your MapReduce program and run it
on your local machine. You can verify the logic of the code and catch syntax errors using
some sample data, but you will not get the full experience of Hadoop.

Pseudo-distributed mode:

In this mode all the Hadoop daemons run on a single machine. You can get a VM from
Cloudera or Hortonworks, which is a plug-and-play setup with all the necessary tools
installed and configured. In this mode you can scale up your data to check how your code
performs, and optimize it accordingly to get the job done in the required time.

Fully-distributed mode:

In this mode all the daemons run on different machines. This is mostly used in the
production stage of a project: once you have verified your code, you implement it in this
mode.

If you want to practice Hadoop code without access to an online cluster, install Eclipse on
your PC, download the Hadoop libraries, and start coding in standalone mode.
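
As a minimal sketch of a standalone run (assuming the hadoop command is on your PATH
and the WordCount class from Program 10 is saved as WordCount.java; the jar and file
names below are illustrative):

mkdir -p classes
javac -classpath `hadoop classpath` -d classes WordCount.java
jar -cvf wordcount.jar -C classes/ .
hadoop jar wordcount.jar WordCount input.txt output

In standalone mode the input and output paths refer to the local file system, so the results
appear under the local output directory.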

File Management tasks in Hadoop

Program 2. Create a directory in HDFS at given path(s).


Usage:

hadoop fs -mkdir <paths>

Example:

hadoop fs -mkdir /user/saurzcode/dir1 /user/saurzcode/dir2
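
If the parent directories do not exist yet, the -p option (available in Hadoop 2.x and later)
creates them along the way (subdir1 below is an illustrative name):

hadoop fs -mkdir -p /user/saurzcode/dir1/subdir1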


Program 3. Upload and download a file in HDFS.
Upload:

hadoop fs -put:
Copies a single src file, or multiple src files, from the local file system to the Hadoop distributed file system
Usage:
hadoop fs -put <localsrc> ... <HDFS_dest_path>
Example:

hadoop fs -put /home/saurzcode/Samplefile.txt /user/saurzcode/dir3/

Download: hadoop fs -get:


Copies/Downloads files to the local file system
Usage:
hadoop fs -get <HDFS_src> <localdst>
Example: hadoop fs -get /user/saurzcode/dir3/Samplefile.txt /home/
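
Closely related shell commands are -copyFromLocal and -copyToLocal, which behave like
-put and -get but are restricted to a local source or destination respectively. For example:

hadoop fs -copyToLocal /user/saurzcode/dir3/Samplefile.txt /home/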

Program 4. See the contents of a file (same as the Unix cat command).


Usage: hadoop fs -cat <path>
Example: hadoop fs -cat /user/saurzcode/dir1/abc.txt
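
Since -cat streams the entire file to stdout, for large files it is common to pipe the output
through a local tool such as head:

hadoop fs -cat /user/saurzcode/dir1/abc.txt | head -n 10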

Program 5. Copy a file from source to destination


This command allows multiple sources as well, in which case the destination must be a
directory.
Usage:
hadoop fs -cp <source> ... <dest>
Example:

hadoop fs -cp /user/saurzcode/dir1/abc.txt /user/saurzcode/dir2


Program 6. Remove a file or directory in HDFS.
Removes the files specified as arguments. Deletes a directory only when it is empty.
Usage: hadoop fs -rm <path>
Example: hadoop fs -rm /user/saurzcode/dir1/abc.txt

Recursive version of delete:


Usage:
hadoop fs -rmr <path>
Example: hadoop fs -rmr /user/saurzcode/
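
Note that in Hadoop 2.x and later, -rmr is deprecated; the equivalent recursive form is:

hadoop fs -rm -r /user/saurzcode/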

Program 7. Copy a file from source to destination


This command allows multiple sources as well, in which case the destination must be a
directory.
Usage:
hadoop fs -cp <source> ... <dest>
Example:
hadoop fs -cp /user/saurzcode/dir1/abc.txt /user/saurzcode/dir2

Program 8. Move file from source to destination.


Note: Moving files across file systems is not permitted.


Usage:
hadoop fs -mv <src> <dest>
Example:
hadoop fs -mv /user/saurzcode/dir1/abc.txt /user/saurzcode/dir2
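
Because -mv works within a single file system, it is also the usual way to rename a file in
place (renamed.txt below is an illustrative name):

hadoop fs -mv /user/saurzcode/dir1/abc.txt /user/saurzcode/dir1/renamed.txt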

Program 9. Display the aggregate length of a file.


Usage:

hadoop fs -du <path>
Example:
hadoop fs -du /user/saurzcode/dir1/abc.txt
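
In Hadoop 2.x and later, -du -s summarizes a whole directory and -h prints sizes in a
human-readable form:

hadoop fs -du -s -h /user/saurzcode/dir1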

Program 10. Implement a Word Count MapReduce program to understand
the MapReduce paradigm.

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.conf.Configuration;

import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import org.apache.hadoop.fs.Path;

public class WordCount


{
public static class Map extends Mapper<LongWritable, Text, Text, IntWritable>

{
public void map(LongWritable key, Text value,Context context) throws
IOException,InterruptedException
{
String line = value.toString();
StringTokenizer tokenizer = new StringTokenizer(line);
while (tokenizer.hasMoreTokens())

{
value.set(tokenizer.nextToken());
context.write(value, new IntWritable(1));
}
}

}
public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable>
{
public void reduce(Text key, Iterable<IntWritable> values, Context context) throws
IOException,InterruptedException
{
int sum = 0;
for (IntWritable x : values)

{
sum+=x.get();
}
context.write(key, new IntWritable(sum));
}

}
public static void main(String[] args) throws Exception
{
Configuration conf= new Configuration();
Job job = new Job(conf,"My Word Count Program");

job.setJarByClass(WordCount.class);
job.setMapperClass(Map.class);
job.setReducerClass(Reduce.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);

job.setInputFormatClass(TextInputFormat.class);
job.setOutputFormatClass(TextOutputFormat.class);
Path outputPath = new Path(args[1]);

//Configuring the input/output path from the filesystem into the job

FileInputFormat.addInputPath(job, new Path(args[0]));


FileOutputFormat.setOutputPath(job, new Path(args[1]));

//deleting the output path automatically from HDFS so that we don't have to delete it explicitly
outputPath.getFileSystem(conf).delete(outputPath, true);

//exit with status 0 if the job completed successfully, 1 otherwise
System.exit(job.waitForCompletion(true) ? 0 : 1);

}
}

The entire MapReduce program can be fundamentally divided into three parts:
• Mapper Phase Code

• Reducer Phase Code


• Driver Code
We will understand the code for each of these three parts sequentially.
Mapper code:
public static class Map extends Mapper<LongWritable, Text, Text, IntWritable>
{
public void map(LongWritable key, Text value, Context context) throws
IOException,InterruptedException
{
String line = value.toString();
StringTokenizer tokenizer = new StringTokenizer(line);
while (tokenizer.hasMoreTokens())
{
value.set(tokenizer.nextToken());
context.write(value, new IntWritable(1));
}
}
}
• We have created a class Map that extends the class Mapper, which is already defined in the
MapReduce framework.
• We define the data types of the input and output key/value pairs after the class declaration
using angle brackets.
• Both the input and output of the Mapper is a key/value pair.

• Input:
◦ The key is nothing but the offset of each line in the text file: LongWritable
◦ The value is each individual line: Text
• Output:
◦ The key is the tokenized word: Text
◦ The value is hardcoded in our case to 1: IntWritable
◦ Example – Dear 1, Bear 1, etc.

• We have written Java code that tokenizes each word and assigns it a hardcoded value
equal to 1.
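
For instance, if the input line at offset 0 is "Dear Bear River", the mapper emits the pairs
(Dear, 1), (Bear, 1) and (River, 1).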

Reducer Code:
public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable>
{
public void reduce(Text key, Iterable<IntWritable> values, Context context) throws
IOException,InterruptedException
{

int sum = 0;
for (IntWritable x : values)


{
sum+=x.get();
}
context.write(key, new IntWritable(sum));
}

}
• We have created a class Reduce which extends the class Reducer, like that of the Mapper.
• We define the data types of the input and output key/value pairs after the class declaration
using angle brackets, as done for the Mapper.
• Both the input and the output of the Reducer is a key/value pair.
• Input:
◦ The key is nothing but those unique words which have been generated after the sorting
and shuffling phase: Text

◦ The value is a list of integers corresponding to each key: IntWritable


◦ Example – Bear, [1, 1], etc.

• Output:
◦ The key is all the unique words present in the input text file: Text

◦ The value is the number of occurrences of each of the unique words: IntWritable
◦ Example – Bear, 2; Car, 3, etc.
• We have aggregated the values in the list corresponding to each key and produced the
final answer.
• In general, all the values for a single unique word are processed by one reduce call; the
number of reducers for the job can be configured in mapred-site.xml.
Driver Code:
Configuration conf= new Configuration();
Job job = new Job(conf,"My Word Count Program");

job.setJarByClass(WordCount.class);
job.setMapperClass(Map.class);
job.setReducerClass(Reduce.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
job.setInputFormatClass(TextInputFormat.class);

job.setOutputFormatClass(TextOutputFormat.class);
Path outputPath = new Path(args[1]);

//Configuring the input/output path from the filesystem into the job
FileInputFormat.addInputPath(job, new Path(args[0]));

FileOutputFormat.setOutputPath(job, new Path(args[1]));

• In the driver class, we set the configuration of our MapReduce job to run in Hadoop.
• We specify the name of the job and the data types of the input/output of the mapper and reducer.
• We also specify the names of the mapper and reducer classes.

• The paths of the input and output folders are also specified.


• The method setInputFormatClass() specifies how a Mapper will read the input data, i.e.,
what the unit of work will be. Here we have chosen TextInputFormat, so that a single line is
read by the mapper at a time from the input text file.
• The main () method is the entry point for the driver. In this method, we instantiate a new
Configuration object for the job.
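
As a side note (an observation about the Hadoop 2.x API, not part of the original listing),
the new Job(conf, ...) constructor used above is deprecated in newer releases; the
recommended replacement is the factory method:

Job job = Job.getInstance(conf, "My Word Count Program");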
Run the MapReduce code:
The command for running a MapReduce code is:
hadoop jar hadoop-mapreduce-example.jar WordCount /sample/input /sample/output
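
For example, if /sample/input contains a file with the single line below (the contents are
illustrative):

Dear Bear River Car Car River Dear Car Bear

then the part-r-00000 file under /sample/output would contain tab-separated counts:

Bear    2
Car     3
Dear    2
River   2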
