Homework 2 (22 11 2022)
Q2. What are the various Hadoop daemons and their roles in a Hadoop cluster?
NameNode (master daemon): maintains and manages the DataNodes, records metadata (file names, block locations, permissions), and receives heartbeats and block reports from the DataNodes. DataNode (slave daemon): runs on each slave machine, stores the actual data blocks, and serves read and write requests from clients.
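The heartbeat/block-report relationship between the two daemons can be sketched as follows. This is an illustrative model, not Hadoop's actual API; the class, method, and variable names here are all hypothetical.

```python
import time

class NameNode:
    """Toy master daemon: tracks DataNode liveness and reported blocks."""

    def __init__(self, heartbeat_timeout=10.0):
        self.heartbeat_timeout = heartbeat_timeout
        self.last_heartbeat = {}   # DataNode id -> time of last heartbeat
        self.block_map = {}        # DataNode id -> set of block ids it reported

    def receive_heartbeat(self, datanode_id, now=None):
        # A heartbeat tells the master "this DataNode is alive".
        self.last_heartbeat[datanode_id] = now if now is not None else time.time()

    def receive_block_report(self, datanode_id, block_ids):
        # A block report tells the master which blocks the DataNode holds.
        self.block_map[datanode_id] = set(block_ids)

    def live_datanodes(self, now=None):
        # A DataNode is considered live if its last heartbeat is recent enough.
        now = now if now is not None else time.time()
        return [dn for dn, t in self.last_heartbeat.items()
                if now - t <= self.heartbeat_timeout]

nn = NameNode()
nn.receive_heartbeat("dn1", now=100.0)
nn.receive_block_report("dn1", ["blk_1", "blk_2"])
nn.receive_heartbeat("dn2", now=85.0)   # stale: more than 10 s old at t=100
print(nn.live_datanodes(now=100.0))     # ['dn1']
```

In real HDFS the roles are the same, only at scale: DataNodes heartbeat every few seconds, and the NameNode marks a DataNode dead if heartbeats stop arriving.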
Q3. Why does one remove or add nodes in a Hadoop cluster frequently?
A striking feature of the Hadoop framework is the ease with which it scales in step with rapid growth in data volume. Because of this, one of the most common tasks of a Hadoop administrator is to commission (add) and decommission (remove) DataNodes in a Hadoop cluster.
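Commissioning and decommissioning are driven by include/exclude host files that `hdfs-site.xml` points at via the `dfs.hosts` and `dfs.hosts.exclude` properties. A minimal sketch (the file paths below are examples, not required locations):

```xml
<!-- hdfs-site.xml: host lists consulted when commissioning or
     decommissioning DataNodes. The paths here are illustrative. -->
<property>
  <name>dfs.hosts</name>
  <value>/etc/hadoop/conf/dfs.include</value>
</property>
<property>
  <name>dfs.hosts.exclude</name>
  <value>/etc/hadoop/conf/dfs.exclude</value>
</property>
```

After editing the include/exclude files, the administrator runs `hdfs dfsadmin -refreshNodes` so the NameNode re-reads them; a node listed in the exclude file is then decommissioned gracefully (its blocks are re-replicated before it is removed).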
Q4. What happens when two clients try to access the same file in the HDFS?
HDFS supports only exclusive writes: when one client is already writing a file, the NameNode rejects any other client's request to open the same file in write mode. (Concurrent reads are allowed.)
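The single-writer rule can be sketched as a lease manager that grants at most one write lease per file. This is an illustrative model, not HDFS's real client API; all names below are hypothetical.

```python
class LeaseManager:
    """Toy NameNode-side lease table: at most one writer per file path."""

    def __init__(self):
        self.leases = {}  # path -> client id currently holding the write lease

    def open_for_write(self, path, client_id):
        holder = self.leases.get(path)
        if holder is not None and holder != client_id:
            # Second writer is rejected while the lease is held.
            raise PermissionError(f"{path} is already being written by {holder}")
        self.leases[path] = client_id

    def close(self, path, client_id):
        # Releasing the lease lets another client open the file for writing.
        if self.leases.get(path) == client_id:
            del self.leases[path]

lm = LeaseManager()
lm.open_for_write("/data/file.txt", "client-A")      # succeeds
try:
    lm.open_for_write("/data/file.txt", "client-B")  # rejected
except PermissionError as e:
    print("rejected:", e)
lm.close("/data/file.txt", "client-A")
lm.open_for_write("/data/file.txt", "client-B")      # now succeeds
```

Real HDFS works similarly via NameNode-managed leases, with the added wrinkle that a lease expires if the writer dies, so the file does not stay locked forever.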
Q9. How do you define “block” in HDFS? What is the default block size in
Hadoop 1 and in Hadoop 2? Can it be changed?
Each file in HDFS is stored as a sequence of blocks. The default block size is 64 MB in Hadoop 1 and 128 MB in Hadoop 2. Yes, it can be changed, via the dfs.blocksize property in hdfs-site.xml.
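The effect of the default block size on how a file is split can be shown with a small calculation; `num_blocks` is a helper written for this illustration, not a Hadoop function. Note that the last block only occupies as much disk space as the data it actually contains.

```python
import math

def num_blocks(file_size_bytes, block_size_bytes):
    """Number of HDFS blocks a file occupies (the last block may be partial)."""
    return max(1, math.ceil(file_size_bytes / block_size_bytes))

MB = 1024 * 1024
# A 300 MB file under Hadoop 2's default 128 MB block size:
print(num_blocks(300 * MB, 128 * MB))  # 3 blocks (128 + 128 + 44 MB)
# The same file under Hadoop 1's default 64 MB block size:
print(num_blocks(300 * MB, 64 * MB))   # 5 blocks
```

The same arithmetic explains why very small files are wasteful in HDFS: a 1 KB file still consumes one block's worth of NameNode metadata, even though it occupies only 1 KB on disk.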