Sum of even and odd numbers in MapReduce using Cloudera Distribution Hadoop (CDH)
Counting even and odd numbers and finding their sums is trivial in languages like C, C++, Python, or Java. MapReduce programs are also written in Java, and the task is straightforward once you know the syntax. This is a basic MapReduce program, comparable to the "Hello World" program in other languages, so it is a good way to learn how to write and run a MapReduce job. The steps below show how to write a MapReduce job that counts and sums the even and odd numbers in an input file.
Example:
Input:
1 2 3 4 5 6 7 8 9
Output:
EVEN 20 // sum of even numbers
EVEN 4 // count of even numbers
ODD 25 // sum of odd numbers
ODD 5 // count of odd numbers
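Before writing the MapReduce version, here is a minimal plain-Java sketch (no Hadoop) of the same computation, just to make the expected result concrete. The class name EvenOddLocal is only illustrative and is not part of the project built below.
Java
// Plain-Java sketch of the computation (no Hadoop); for reference only
public class EvenOddLocal {
    public static void main(String[] args) {
        int[] numbers = { 1, 2, 3, 4, 5, 6, 7, 8, 9 };
        int evenSum = 0, evenCount = 0, oddSum = 0, oddCount = 0;
        for (int n : numbers) {
            if (n % 2 == 0) {
                evenSum += n;    // sum of even numbers
                evenCount++;     // count of even numbers
            } else {
                oddSum += n;     // sum of odd numbers
                oddCount++;      // count of odd numbers
            }
        }
        System.out.println("EVEN " + evenSum + " " + evenCount); // EVEN 20 4
        System.out.println("ODD " + oddSum + " " + oddCount);    // ODD 25 5
    }
}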
Steps:
- First, open Eclipse -> then select File -> New -> Java Project -> name it EvenOdd -> then click Finish.

- Create three Java classes inside the project. Name them EODriver (containing the main function), EOMapper and EOReducer.
- You have to include two reference libraries for that:
Right-click on the project -> then select Build Path -> click on Configure Build Path

- In the Configure Build Path window, you can see the Add External JARs option on the right-hand side. Click on it and add the files mentioned below. You can find these files in /usr/lib/
1. /usr/lib/hadoop-0.20-mapreduce/hadoop-core-2.6.0-mr1-cdh5.13.0.jar
2. /usr/lib/hadoop/hadoop-common-2.6.0-cdh5.13.0.jar
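The exact jar file names depend on the CDH version installed; if the versions above do not match your system, list the contents of these directories to find the corresponding jars, for example:
ls /usr/lib/hadoop/ /usr/lib/hadoop-0.20-mapreduce/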
Mapper Code: Copy and paste this program into the EOMapper Java class file.
Java
// Importing libraries
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
public class EOMapper extends MapReduceBase implements Mapper<LongWritable,
Text, Text, IntWritable> {
@Override
// Map function
public void map(LongWritable key, Text value, OutputCollector<Text,
IntWritable> output, Reporter rep)
throws IOException
{
// Splitting the line on spaces
String data[] = value.toString().split(" ");
for (String num : data)
{
int number = Integer.parseInt(num);
if (number % 2 == 1)
{
// For Odd Numbers
output.collect(new Text("ODD"), new IntWritable(number));
}
else
{
// For Even Numbers
output.collect(new Text("EVEN"),
new IntWritable(number));
}
}
}
}
Reducer Code: Copy and paste this program into the EOReducer Java class file.
Java
// Importing libraries
import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
public class EOReducer extends MapReduceBase implements Reducer<Text,
IntWritable, Text, IntWritable> {
@Override
// Reduce Function
public void reduce(Text key, Iterator<IntWritable> value,
OutputCollector<Text, IntWritable> output, Reporter rep)
throws IOException
{
// The same sum and count variables serve both keys, since
// reduce() is called separately for each key (EVEN and ODD)
int sum = 0, count = 0;
// key is a Text, so compare its String value
if (key.toString().equals("ODD"))
{
while (value.hasNext())
{
IntWritable i = value.next();
// Finding sum and count of ODD Numbers
sum += i.get();
count++;
}
}
else
{
while (value.hasNext())
{
IntWritable i = value.next();
// Finding sum and count of EVEN Numbers
sum += i.get();
count++;
}
}
// First sum then count is printed
output.collect(key, new IntWritable(sum));
output.collect(key, new IntWritable(count));
}
}
Driver Code: Copy and paste this program into the EODriver Java class file.
Java
// Importing libraries
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
public class EODriver extends Configured implements Tool {
@Override
public int run(String[] args) throws Exception
{
if (args.length < 2)
{
System.out.println("Please enter valid arguments");
return -1;
}
JobConf conf = new JobConf(EODriver.class);
FileInputFormat.setInputPaths(conf, new Path(args[0]));
FileOutputFormat.setOutputPath(conf, new Path(args[1]));
conf.setMapperClass(EOMapper.class);
conf.setReducerClass(EOReducer.class);
conf.setMapOutputKeyClass(Text.class);
conf.setMapOutputValueClass(IntWritable.class);
conf.setOutputKeyClass(Text.class);
conf.setOutputValueClass(IntWritable.class);
JobClient.runJob(conf);
return 0;
}
// Main Method
public static void main(String args[]) throws Exception
{
int exitcode = ToolRunner.run(new EODriver(), args);
System.out.println(exitcode);
}
}
- Now you have to make a jar file. Right-click on the project -> click on Export -> select Jar File as the export destination -> name the jar file (EvenOdd.jar) -> click on Next -> finally, click on Finish. Now copy this file into the workspace directory of Cloudera.



- Open the terminal on CDH and change the directory to the workspace. You can do this by using the "cd workspace/" command. Now create a text file (EOFile.txt) with the input numbers and move it to HDFS using the commands shown below (remember, you should be in the same directory as the jar file you just created).
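For example, the sample input shown at the top of this article can be written to EOFile.txt with a command like:
echo "1 2 3 4 5 6 7 8 9" > EOFile.txt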

- Now, run this command to copy the input file into HDFS.
hadoop fs -put EOFile.txt EOFile.txt

- Now run the jar file using the following syntax: "hadoop jar JarFilename DriverClassName TextFileName OutputFolderName"
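With the names used in this article (jar file EvenOdd.jar, driver class EODriver, input file EOFile.txt and output folder EOOutput), the command would be:
hadoop jar EvenOdd.jar EODriver EOFile.txt EOOutput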

- After executing the code, you can see the result in the EOOutput folder or by running the following command in the terminal.
hadoop fs -cat EOOutput/part-00000
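For the sample input above, the output should look like this (each key is followed first by its sum and then by its count; key and value are tab-separated by default):
EVEN    20
EVEN    4
ODD     25
ODD     5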