
Experiment No. 3.1

Name: Kumar Harsh    UID: 21BCS11423
Branch: BE-CSE    Sec/Group: CC-651/B    Semester: 6th
Sub. Name: Cloud Computing and Distributed Systems Lab    Subject Code: 21CSP-378

Aim: Install a Hadoop single-node cluster and run applications such as word count.

Objective: To install a Hadoop single-node cluster.

Procedure:
1) JAVA - Java JDK (installed)
2) HADOOP - Hadoop package (downloaded)
• Step 1: Verify that Java is installed
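The step above can be done from a command window. Assuming the JDK's bin directory is already on PATH, the following prints the installed Java version (if the command is not recognized, install the JDK first):

```shell
:: Verify the Java installation (run in cmd)
java -version
```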

• Step 2: Extract Hadoop at C:\Hadoop

• Step 3: Setting up the HADOOP_HOME variable


Use the Windows environment variable settings to set the Hadoop path.
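As a sketch of the same step done from the command line instead of the Environment Variables dialog (the path C:\hadoop-2.8.0 is an assumption matching the paths used in the configuration files below; adjust it if your extraction directory differs):

```shell
:: Persist HADOOP_HOME for the current user (cmd)
setx HADOOP_HOME "C:\hadoop-2.8.0"
```

Note that setx only takes effect in command windows opened after it runs.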

• Step 4: Set the JAVA_HOME variable
Use the Windows environment variable settings to set the Java path.

• Step 5: Set the Hadoop and Java bin directory paths
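A command-line sketch of this step, assuming HADOOP_HOME and JAVA_HOME were set in Steps 3 and 4:

```shell
:: Append the Hadoop and Java bin directories to the user PATH (cmd)
setx PATH "%PATH%;%HADOOP_HOME%\bin;%HADOOP_HOME%\sbin;%JAVA_HOME%\bin"
```

Be aware that setx truncates values longer than 1024 characters; on machines with a long PATH, the graphical Environment Variables dialog is the safer route.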

• Step 6: Hadoop Configuration:


For Hadoop configuration we need to modify the six items listed below:
1. core-site.xml
2. mapred-site.xml
3. hdfs-site.xml
4. yarn-site.xml
5. hadoop-env.cmd
6. Create two folders, datanode and namenode
• Step 6.1: core-site.xml configuration
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>

• Step 6.2: mapred-site.xml configuration


<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>

• Step 6.3: hdfs-site.xml configuration


<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>C:\hadoop-2.8.0\data\namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>C:\hadoop-2.8.0\data\datanode</value>
</property>
</configuration>

• Step 6.4: yarn-site.xml configuration
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</configuration>

• Step 6.5: hadoop-env.cmd configuration


In hadoop-env.cmd, set JAVA_HOME to the JDK installation directory: set JAVA_HOME=C:\Java (here C:\Java is the path to the JDK 1.8.0 installation)

• Step 6.6: Create datanode and namenode folders


1. Create folder "data" under "C:\Hadoop-2.8.0"
2. Create folder "datanode" under "C:\Hadoop-2.8.0\data"
3. Create folder "namenode" under "C:\Hadoop-2.8.0\data"
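The three folder-creation steps above can be done in one short cmd session; mkdir creates intermediate directories automatically, so the "data" folder does not need a separate command:

```shell
:: Create the datanode and namenode folders (and "data" implicitly)
mkdir C:\Hadoop-2.8.0\data\datanode
mkdir C:\Hadoop-2.8.0\data\namenode
```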

• Step 7: Format the namenode folder


Open a command window (cmd) and run the command "hdfs namenode -format".

• Step 8: Testing the setup
Open a command window (cmd) and run the command "start-all.cmd".

• Step 8.1: Testing the setup:


Ensure that the NameNode, DataNode, and ResourceManager daemons are running.
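One way to confirm the daemons are up is the JDK's jps tool, which lists the Java processes currently running:

```shell
:: List running Java processes; NameNode, DataNode, ResourceManager
:: and NodeManager should appear among them after start-all.cmd
jps
```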

• Step 9: Open: http://localhost:8088

• Step 10: Open: http://localhost:50070
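The aim also calls for running a word-count application. A sketch using the example jar bundled with Hadoop 2.8.0, run from the cluster set up above (the input file C:\input.txt is a placeholder for any local text file):

```shell
:: Put a local text file into HDFS and run the bundled WordCount example
hdfs dfs -mkdir -p /input
hdfs dfs -put C:\input.txt /input
hadoop jar %HADOOP_HOME%\share\hadoop\mapreduce\hadoop-mapreduce-examples-2.8.0.jar wordcount /input /output
:: Print the word counts produced by the job
hdfs dfs -cat /output/part-r-00000
```

The job writes its results to /output/part-r-00000 in HDFS; the /output directory must not exist before the job runs, or it will fail.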
