Module 2
Syllabus:
Introduction to Hadoop (T1): Introduction, Hadoop and its Ecosystem, Hadoop Distributed File
System, MapReduce Framework and Programming Model, Hadoop Yarn, Hadoop Ecosystem Tools.
Hadoop Distributed File System Basics (T2): HDFS Design Features, Components, HDFS User
Commands.
Essential Hadoop Tools (T2): Using Apache Pig, Hive, Sqoop, Flume, Oozie, HBase.
Introduction to Hadoop:
Introduction:
Hadoop is an Apache open-source framework, written in Java, that allows distributed processing of large datasets across clusters of computers using simple programming models. A Hadoop framework application works in an environment that provides distributed storage and computation across clusters of computers. Hadoop is designed to scale up from a single server to thousands of machines, each offering local computation and storage.
A conventional programming model is centralized computing of data, in which the data is transferred from multiple distributed data sources to a central server. Analyzing, reporting, visualizing and business-intelligence tasks compute centrally, and the data are the inputs to the central server.
An enterprise collects and analyzes data at the enterprise level.
Big Data Store Model:
The model for the Big Data store is as follows: data is stored in a file system consisting of data blocks (the physical division of data). The data blocks are distributed across multiple nodes. Data nodes are placed at the racks of a cluster. Racks are scalable. A rack has multiple data nodes (data servers), and each cluster is arranged in a number of racks.
The Hadoop system uses this data store model, in which files are stored in data nodes, in racks, in the clusters: storage is organized at the levels of clusters, racks, data nodes and data blocks. Data blocks are replicated at the DataNodes, so that a failure of a link leads to access of the data block from other nodes replicated at the same or other racks.
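To make the block and replica placement concrete, the following minimal sketch (Java, using the standard Hadoop FileSystem API) prints which DataNodes hold each block of a file already stored in HDFS. It assumes a reachable cluster configured via core-site.xml; the file path is only a hypothetical example.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocationsSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();   // picks up fs.defaultFS from core-site.xml
        FileSystem fs = FileSystem.get(conf);

        // Hypothetical file already stored in HDFS.
        FileStatus status = fs.getFileStatus(new Path("/user/demo/big_dataset.csv"));
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());

        // Each BlockLocation lists the DataNodes that hold one (replicated) block.
        for (BlockLocation block : blocks) {
            System.out.printf("offset=%d length=%d hosts=%s%n",
                    block.getOffset(), block.getLength(),
                    String.join(",", block.getHosts()));
        }
        fs.close();
    }
}

On a multi-rack cluster, each block is reported on several hosts, which is exactly the replication across DataNodes and racks described above.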
Big Data Programming Model:
The Big Data programming model is one in which application jobs and tasks (or sub-tasks) are scheduled on the same servers that store the data for processing.
A job means running an assignment of a set of instructions for processing. For example, processing the queries in an application and sending the results back to the application is a job. Another example: running the instructions for sorting examination performance data is a job.
Hadoop and its Ecosystem:
Apache initiated the project for developing a storage and processing framework for Big Data. Doug Cutting and Mike Cafarella, the creators, named the framework Hadoop. Cutting's son had a stuffed toy elephant named Hadoop, and this is how the name Hadoop was derived.
The project consisted of two components: one for storing data in blocks across the clusters, and the other for performing computations at each individual cluster in parallel.
Hadoop components are written in Java, with part of the native code in C. The command-line utilities are written in shell scripts.
The infrastructure consists of cloud-based clusters. A cluster consists of sets of computers or PCs. The Hadoop platform provides a low-cost Big Data platform, which is open source and uses cloud services. Processing terabytes of data takes just a few minutes. Hadoop enables distributed processing of large datasets (above 10 million bytes) across clusters of computers using a programming model called MapReduce.
The system characteristics are scalability, self-manageability, self-healing and a distributed file system.
Hadoop core components:
The following diagram shows the core components of the Apache Software Foundation's Hadoop framework: the Hadoop Common libraries, HDFS, the MapReduce framework and YARN, alongside the ecosystem tools described later.
Figure: A Hadoop cluster example and the replication of data blocks in racks for two students with IDs 96 and 1025.
Hadoop Physical Organization:
A few nodes in a Hadoop cluster act as NameNodes. These nodes are termed MasterNodes, or simply masters. The masters have a different configuration, supporting high DRAM and processing power, and have much less local storage. The majority of the nodes in a Hadoop cluster act as DataNodes and TaskTrackers. These nodes are referred to as slave nodes, or slaves. The slaves have lots of disk storage and moderate amounts of processing capability and DRAM. Slaves are responsible for storing the data and processing the computation tasks submitted by the clients.
The following Figure shows the client, master NameNode, primary and secondary MasterNodes and
slave nodes in the Hadoop physical architecture.
Clients, as the users, run applications with the help of Hadoop ecosystem projects; for example, Hive, Mahout and Pig are ecosystem projects. They are not required to be present at the Hadoop cluster.
In small to medium-sized clusters, a single MasterNode provides HDFS, MapReduce and HBase services using threads. When the cluster size is large, multiple servers are used to balance the load. The secondary NameNode provides NameNode management services, and ZooKeeper is used by HBase for metadata storage.
MapReduce Framework and Programming Model:
The map job takes a set of data and converts it into another set of data, in which the individual elements are broken down into tuples (key/value pairs).
The reduce job takes the output from a map as input and combines the data tuples into a smaller set
of tuples.
Map and reduce jobs run in isolation from one another. As the sequence of the name MapReduce
implies, the reduce job is always performed after the map job.
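As an illustration of this model, the classic word-count program written against the Hadoop MapReduce Java API is sketched below: the map task emits (word, 1) pairs and the reduce task combines the pairs for each word into a single total. The input and output paths are passed as arguments and are assumed to be HDFS directories.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map: break each input line into (word, 1) key/value pairs.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce: combine all the counts for one word into a single (word, total) pair.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory, must not already exist
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}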
Hadoop YARN:
YARN is a resource management platform. It manages the computing resources of the cluster and schedules the running of the sub-tasks.
Hadoop 2 Execution model:
The following figure shows the YARN-based execution model and its components: Client, Resource Manager (RM), Node Manager (NM), Application Master (AM) and Containers.
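As a small, hedged illustration of the Client, RM and NM roles, the sketch below uses the YarnClient API to ask the Resource Manager for a report of the Node Managers it currently tracks. It assumes a running cluster whose yarn-site.xml is on the classpath.

import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.NodeReport;
import org.apache.hadoop.yarn.api.records.NodeState;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class YarnNodesSketch {
    public static void main(String[] args) throws Exception {
        // YarnConfiguration picks up yarn-site.xml, which names the Resource Manager address.
        Configuration conf = new YarnConfiguration();

        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(conf);
        yarnClient.start();

        // Ask the Resource Manager for the Node Managers that are currently running.
        List<NodeReport> nodes = yarnClient.getNodeReports(NodeState.RUNNING);
        for (NodeReport node : nodes) {
            System.out.printf("%s  containers=%d  capability=%s%n",
                    node.getNodeId(), node.getNumContainers(), node.getCapability());
        }

        yarnClient.stop();
    }
}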
Sqoop:
The loading of data into Hadoop clusters becomes an important task during data analytics. Apache Sqoop is a tool built for efficiently transferring voluminous amounts of data between Hadoop and external data stores. Sqoop first parses the arguments passed on the command line and prepares a map task. The map task initializes multiple Mappers, depending on the number supplied by the user on the command line. Each Mapper is assigned a part of the data to be imported, based on the key defined on the command line; Sqoop distributes the input data equally among the Mappers. Each Mapper then creates a connection to the database using JDBC, fetches the part of the data assigned to it by Sqoop, and writes it into HDFS, Hive or HBase as per the choice provided on the command line.
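The following is a simplified, conceptual sketch of what one such Mapper does; it is not Sqoop's actual implementation. It opens a JDBC connection, fetches its assigned slice of rows, and writes them as comma-delimited lines into a part file in HDFS. The JDBC URL, credentials, table, columns and split boundaries are all hypothetical, and the matching JDBC driver is assumed to be on the classpath.

import java.io.BufferedWriter;
import java.io.OutputStreamWriter;
import java.nio.charset.StandardCharsets;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SqoopLikeImportSketch {
    public static void main(String[] args) throws Exception {
        // One "split" of the source table, e.g. rows with id 1..50000; Sqoop derives such
        // ranges from the key named on the command line and gives one range to each Mapper.
        long lo = 1, hi = 50_000;

        FileSystem fs = FileSystem.get(new Configuration());
        Path part = new Path("/user/demo/employees/part-m-00000");   // hypothetical output file

        try (Connection db = DriverManager.getConnection(
                     "jdbc:mysql://dbhost:3306/payroll", "user", "secret");   // hypothetical RDBMS
             PreparedStatement ps = db.prepareStatement(
                     "SELECT id, name, salary FROM employees WHERE id BETWEEN ? AND ?");
             BufferedWriter out = new BufferedWriter(
                     new OutputStreamWriter(fs.create(part, true), StandardCharsets.UTF_8))) {

            ps.setLong(1, lo);
            ps.setLong(2, hi);
            try (ResultSet rs = ps.executeQuery()) {
                // Comma-delimited fields, one record per line (Sqoop's default text layout).
                while (rs.next()) {
                    out.write(rs.getLong("id") + "," + rs.getString("name") + "," + rs.getDouble("salary"));
                    out.newLine();
                }
            }
        }
        fs.close();
    }
}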
HDFS Design Features:
Due to the sequential nature of the data, there is no local caching mechanism; the large block and file sizes make it more efficient to re-read data from HDFS than to try to cache the data. A principal design aspect of Hadoop MapReduce is the emphasis on moving the computation to the data rather than moving the data to the computation. In other high-performance systems, a parallel file system exists on hardware separate from the compute hardware, and data is moved to and from the compute components via high-speed interfaces to the parallel file system array. Finally, Hadoop clusters assume that node failure will occur at some point. To deal with this situation, HDFS has a redundant design that can tolerate system failure and still provide the data needed by the compute part of the program.
HDFS Components:
The design of HDFS is based on two types of nodes: a NameNode and multiple DataNodes. In a basic design, the NameNode manages all the metadata needed to store and retrieve the actual data from the DataNodes; no data is actually stored on the NameNode. The design is a master/slave architecture in which the master (NameNode) manages the file system namespace and regulates access to files by clients. File system namespace operations, such as opening, closing and renaming files and directories, are all managed by the NameNode. The NameNode also determines the mapping of blocks to DataNodes and handles DataNode failures.
The slaves (DataNodes) are responsible for serving read and write requests from the file system's clients. The NameNode manages block creation, deletion and replication. When a client writes data, it first communicates with the NameNode and requests to create a file. The NameNode determines how many blocks are needed and provides the client with the DataNodes that will store the data. As part of the storage process, the data blocks are replicated after they are written to the assigned node.
When reading, the client requests a file from the NameNode, which returns the best DataNodes from which to read the data. The client then accesses the data directly from the DataNodes. Thus, once the metadata has been delivered to the client, the NameNode steps back and lets the conversation between the client and the DataNodes proceed. While data transfer is progressing, the NameNode also monitors the DataNodes by listening for heartbeats sent from the DataNodes. The lack of a heartbeat signal indicates a node failure; the NameNode then routes around the failed DataNode and begins re-replicating the now-missing blocks. The mappings between data blocks and physical DataNodes are not kept in persistent storage on the NameNode; the NameNode stores all metadata in memory. In almost all Hadoop deployments there is a SecondaryNameNode (Checkpoint Node). It is not an active failover node and cannot replace the primary NameNode in case of its failure.
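A minimal client-side sketch of this interaction, using the Hadoop FileSystem Java API, is shown below. It assumes fs.defaultFS in core-site.xml points at the NameNode; the file path is hypothetical. The create and open calls talk to the NameNode, while the file bytes flow directly between the client and the DataNodes.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsClientSketch {
    public static void main(String[] args) throws Exception {
        // Load core-site.xml / hdfs-site.xml from the classpath; fs.defaultFS names the NameNode.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        Path file = new Path("/user/demo/notes.txt");   // hypothetical path

        // Write: the client asks the NameNode to create the file, then streams the
        // blocks directly to the DataNodes that the NameNode assigned.
        try (FSDataOutputStream out = fs.create(file, true)) {
            out.writeUTF("hello hdfs");
        }

        // Read: the NameNode returns the block locations; the data itself is read
        // directly from the DataNodes.
        try (FSDataInputStream in = fs.open(file)) {
            System.out.println(in.readUTF());
        }

        fs.close();
    }
}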
Apache Sqoop:
Sqoop is a tool designed to transfer data between Hadoop and relational databases.
Sqoop is used to
- import data from a relational database management system (RDBMS) into the Hadoop Distributed File System (HDFS),
- transform the data in Hadoop, and
- export the data back into an RDBMS.
Sqoop import method:
The data import is done in two steps:
1) Sqoop examines the database to gather the necessary metadata for the data to be imported.
2) A map-only Hadoop job transfers the actual data using the metadata.
The imported data is saved in an HDFS directory; Sqoop uses the table name for the directory by default, or the user can specify an alternative directory where the files should be populated. By default, these files contain comma-delimited fields, with new lines separating different records.
Sqoop export method:
The export also proceeds in two steps: Sqoop examines the database for metadata, and a map-only Hadoop job then writes (exports) the data from HDFS back into the database.
Apache Flume:
Apache Flume is an independent agent designed to collect, transport, and store data into HDFS.
Data transport involves a number of Flume agents that may traverse a series of machines and locations.
Flume is often used for log files, social media-generated data, email messages, and just about any
continuous data source.