Hadoop – Architecture

As we all know, Hadoop is a framework written in Java that uses a large cluster of commodity hardware to store and maintain big data. Hadoop works on the MapReduce programming model, which was introduced by Google. Today, many big-brand companies such as Facebook, Yahoo, Netflix, and eBay use Hadoop in their organizations to deal with big data. The Hadoop architecture mainly consists of 4 components:

MapReduce
HDFS (Hadoop Distributed File System)
YARN (Yet Another Resource Negotiator)
Common Utilities or Hadoop Common


Let’s understand the role of each of these components in detail.

1. MapReduce

MapReduce is essentially an algorithm, or a data-processing pattern, that runs on top of the YARN framework. Its major feature is performing distributed processing in parallel across a Hadoop cluster, which is what makes Hadoop so fast. When you are dealing with big data, serial processing is no longer of any use. MapReduce mainly has 2 tasks, divided phase-wise: in the first phase Map is used, and in the next phase Reduce is used.

Here, we can see that the input is provided to the Map() function, then its output is used as the input to the Reduce() function, and after that we receive our final output. Let’s understand what Map() and Reduce() do.

As we can see, an input is provided to Map(); since we are dealing with big data, that input is a set of data blocks. The Map() function breaks these data blocks into tuples, which are nothing but key-value pairs. These key-value pairs are then sent as input to Reduce(). The Reduce() function combines these tuples based on their key, forms a new set of tuples, and performs operations such as sorting or summation, which are then sent to the final output node. Finally, the output is obtained.

The data processing done in the Reducer always depends on the business requirement of the industry. This is how first Map() and then Reduce() are used, one after the other.

Let’s understand the Map Task and Reduce Task in detail.

Map Task:

RecordReader: The purpose of the RecordReader is to break the records and provide key-value pairs to the Map() function. The key is the locational information of the record, and the value is the data associated with it.
Map: A map is nothing but a user-defined function whose work is to process the tuples obtained from the RecordReader. The Map() function may generate no key-value pairs at all, or multiple pairs, for each input tuple.
Combiner: The combiner is used for grouping the data in the Map workflow. It is similar to a local reducer: the intermediate key-value pairs generated in the Map are combined with its help. Using a combiner is optional.
Partitioner: The partitioner is responsible for fetching the key-value pairs generated in the Mapper phase and generating the shards corresponding to each reducer. It takes the hash code of each key and computes its modulus with the number of reducers: key.hashCode() % (number of reducers). A minimal sketch of such a partitioner follows this list.
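
As a sketch of that last step, the formula can be written as a custom partitioner in Java. The class name here is illustrative, but the body mirrors the behavior of Hadoop's default HashPartitioner:

```java
import org.apache.hadoop.mapreduce.Partitioner;

// A minimal hash partitioner sketch: the key's hash code, modulo the
// number of reducers, decides which reducer (shard) a pair is sent to.
public class SimpleHashPartitioner<K, V> extends Partitioner<K, V> {
    @Override
    public int getPartition(K key, V value, int numReduceTasks) {
        // Mask off the sign bit so the partition index is never negative.
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}
```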

Reduce Task:

Shuffle and Sort: The task of the Reducer starts with this step. The process in which the Mapper generates the intermediate key-value pairs and transfers them to the Reducer task is known as shuffling. Using the shuffling process, the system can sort the data by its key value.

Shuffling begins as soon as some of the map tasks are done; it does not wait for the Mapper to complete all its work, which is why it is a faster process.
Reduce: The main task of Reduce is to gather the tuples generated by Map and then perform some sorting and aggregation on those key-value pairs depending on their key element.
OutputFormat: Once all the operations are performed, the key-value pairs are written into a file with the help of the RecordWriter, each record on a new line, with the key and value separated by a tab by default. A complete word-count example tying these pieces together follows below.
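
To see the Map task and Reduce task working together, here is the classic word-count job written against the Hadoop MapReduce Java API, a minimal sketch assuming the Hadoop client libraries are on the classpath. The mapper emits a (word, 1) pair per word, the combiner and reducer sum the counts, and the default OutputFormat writes one key-value pair per line:

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map task: the RecordReader hands map() one line at a time as a
  // (byte offset, line text) pair; the mapper emits a (word, 1) tuple per word.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce task: after shuffle and sort, all counts for a given word arrive
  // together; the reducer sums them and the RecordWriter writes the result.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    // The reducer doubles as the combiner (the "local reducer" above), which
    // is valid here because addition is associative and commutative.
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Run it with two arguments, an input directory and a not-yet-existing output directory, e.g. hadoop jar wordcount.jar WordCount /in /out (the jar name and paths here are illustrative).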

2. HDFS

HDFS (Hadoop Distributed File System) is used as the storage layer. It is mainly designed to work on commodity hardware devices (inexpensive devices) and follows a distributed file system design. HDFS is built on the idea of storing data in a few large blocks rather than many small ones.

HDFS in Hadoop provides fault tolerance and high availability to the storage layer and the other devices present in the Hadoop cluster. The data storage nodes in HDFS are:

NameNode (Master)
DataNode (Slave)

NameNode: The NameNode works as the master in a Hadoop cluster and guides the DataNodes (slaves). The NameNode is mainly used for storing metadata, i.e. the data about the data. Metadata can be the transaction logs that keep track of user activity in the Hadoop cluster.

Metadata can also be the name of a file, its size, and information about the location (block number, block IDs) of its blocks on the DataNodes, which the NameNode uses to find the closest DataNode for faster communication. The NameNode instructs the DataNodes with operations like delete, create, replicate, etc.

DataNode: DataNodes work as slaves. DataNodes are mainly used for storing data in the Hadoop cluster; the number of DataNodes can range from 1 to 500 or even more. The more DataNodes the cluster has, the more data it can store, so it is advised that each DataNode have a high storage capacity to hold a large number of file blocks.
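
To make the NameNode's role concrete, here is a small sketch using the HDFS Java client (the class name and file path are illustrative, and fs.defaultFS in the loaded configuration is assumed to point at the cluster). It asks for a file's block locations; the answer, block offsets, lengths, and the DataNodes holding each replica, comes entirely from the NameNode's metadata, with no file data being read:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListBlockLocations {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    try (FileSystem fs = FileSystem.get(conf)) {
      Path file = new Path(args[0]); // e.g. /data/input.txt
      FileStatus status = fs.getFileStatus(file);
      // The NameNode answers this from metadata: one entry per block,
      // listing which DataNodes hold a replica of it.
      BlockLocation[] blocks =
          fs.getFileBlockLocations(status, 0, status.getLen());
      for (BlockLocation block : blocks) {
        System.out.printf("offset=%d length=%d hosts=%s%n",
            block.getOffset(), block.getLength(),
            String.join(",", block.getHosts()));
      }
    }
  }
}
```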

High Level Architecture Of Hadoop


File Blocks in HDFS: Data in HDFS is always stored in terms of blocks. A single file is divided into multiple blocks of 128 MB each by default, and you can also change this size manually.

Let's understand this concept of breaking a file into blocks with an example. Suppose you upload a 400 MB file to HDFS. The file gets divided as 128 MB + 128 MB + 128 MB + 16 MB = 400 MB, meaning 4 blocks are created, each of 128 MB except the last one. Hadoop doesn't know, or doesn't care, what data is stored in these blocks, so it treats the final file block as a partial record. In the Linux file system, the size of a file block is about 4 KB, very much less than the default block size in the Hadoop file system. As we all know, Hadoop is mainly configured for storing large data, on the petabyte scale; this is what makes the Hadoop file system different from other file systems, as it can be scaled. Nowadays, file blocks of 128 MB to 256 MB are used in Hadoop.
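
The block size can be changed cluster-wide in hdfs-site.xml. A minimal sketch, assuming a Hadoop 2.x or later cluster where the property is named dfs.blocksize (the 256 MB value is only an example):

```xml
<!-- hdfs-site.xml: override the default 128 MB block size.
     The value is in bytes; 268435456 bytes = 256 MB. -->
<property>
  <name>dfs.blocksize</name>
  <value>268435456</value>
</property>
```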

Replication in HDFS: Replication ensures the availability of the data. Replication means making a copy of something, and the number of times you make a copy of that particular thing is its replication factor. As we saw in File Blocks, HDFS stores the data in the form of various blocks, and Hadoop is also configured to make copies of those file blocks.

By default, the replication factor for Hadoop is set to 3, and it can be configured manually as per your requirement. In the example above we made 4 file blocks, which means 3 replicas or copies of each file block exist, for a total of 4 × 3 = 12 blocks kept for backup purposes.

This is because Hadoop runs on commodity hardware (inexpensive system hardware), which can crash at any time; we are not using supercomputers for our Hadoop setup. That is why HDFS needs a feature that makes copies of file blocks for backup purposes; this is known as fault tolerance.

One thing to note is that after making so many replicas of our file blocks we use a lot of extra storage, but for big-brand organizations the data is far more important than the storage, so nobody worries about this extra cost. You can configure the replication factor in your hdfs-site.xml file, as sketched below.
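
A minimal hdfs-site.xml entry for this looks as follows; the value 3 matches the default, so change it to whatever factor your cluster needs:

```xml
<!-- hdfs-site.xml: replication factor applied to newly created files -->
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
```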

Rack Awareness: A rack is nothing but a physical collection of nodes in our Hadoop cluster (maybe 30 to 40). A large Hadoop cluster consists of many racks. With the help of this rack information, the NameNode chooses the closest DataNode, achieving maximum performance while performing reads and writes, which reduces network traffic.

HDFS Architecture


3. YARN (Yet Another Resource Negotiator)

YARN is the framework on which MapReduce works. YARN performs 2 operations: job scheduling and resource management. The purpose of the job scheduler is to divide a big task into small jobs so that each job can be assigned to various slaves in the Hadoop cluster and processing can be maximized. The job scheduler also keeps track of which job is important, which job has more priority, dependencies between the jobs, and all the other information like job timing, etc. And the resource manager is used to manage all the resources that are made available for running the Hadoop cluster.
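
As a concrete illustration of the scheduling side, the scheduler the ResourceManager uses is pluggable and is selected in yarn-site.xml. A sketch choosing the CapacityScheduler, which is commonly the default in recent Hadoop releases:

```xml
<!-- yarn-site.xml: which scheduler implementation the ResourceManager runs -->
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>
```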

Features of YARN

Multi-Tenancy
Scalability
Cluster-Utilization
Compatibility

4. Hadoop Common or Common Utilities

Hadoop Common, or the common utilities, is nothing but the Java library and Java files that all the other components present in a Hadoop cluster need. These utilities are used by HDFS, YARN, and MapReduce for running the cluster. Hadoop Common takes it as a given that hardware failure in a Hadoop cluster is common, so failures need to be handled automatically, in software, by the Hadoop framework.


Last Updated: 03 Jan, 2023
