
Hadoop Installation

The document provides instructions for installing Hadoop on a Linux system. It involves downloading and installing JDK and Hadoop, configuring environment variables and Hadoop configuration files, and starting the Hadoop services. Key steps include: downloading and extracting JDK and Hadoop files, configuring environment variables, editing configuration files like hadoop-env.sh, core-site.xml and hdfs-site.xml, formatting the namenode, and starting all Hadoop services using start-all.sh.


HADOOP INSTALLATION

Download JDK and install. (jdk-6u35-linux-i586.bin)
Download Hadoop and install. (hadoop-1.2.1.tar)

Note:
Download the JDK for Linux with the .bin extension.
Download Hadoop with the .tar extension.

1. Create a directory:
mkdir <directory name>
2. Change mode:
chmod 777 *.*
3. ./jdk-6u35-linux-i586.bin
4. gunzip hadoop-1.2.1.tar.gz
5. tar xvf hadoop-1.2.1.tar
6. sudo apt-get install openssh-client
7. sudo apt-get install openssh-server
8. ssh-keygen -t rsa -P ""
9. cat $HOME/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
10. ssh localhost
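Steps 8-9 above can be dry-run safely before touching your real account. The sketch below performs the same authorized_keys append against a scratch directory (with a fake public key standing in for what ssh-keygen would generate), so you can see the effect without needing openssh installed; the directory name and key content are placeholders, not values from this document.

```shell
#!/bin/sh
# Dry-run sketch of the passwordless-SSH setup in steps 8-9, using a
# scratch directory instead of the real ~/.ssh.
set -e

SSH_DIR=$(mktemp -d)/.ssh            # stand-in for $HOME/.ssh
mkdir -p "$SSH_DIR"
chmod 700 "$SSH_DIR"

# Real command from step 8 (needs openssh-client; empty passphrase):
#   ssh-keygen -t rsa -P "" -f "$SSH_DIR/id_rsa"
# For the dry-run, fake the public key ssh-keygen would produce:
echo "ssh-rsa AAAAB3FAKEKEY user@localhost" > "$SSH_DIR/id_rsa.pub"

# Step 9: authorize the key for login to localhost.
cat "$SSH_DIR/id_rsa.pub" >> "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"

grep -c "ssh-rsa" "$SSH_DIR/authorized_keys"   # prints 1
```

The chmod 600 on authorized_keys matters on a real machine: sshd refuses keys in a group- or world-writable file.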
Install vim (used below to edit the configuration files).
11. sudo vi /etc/sysctl.conf
Add the following lines to disable IPv6:
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
REBOOT YOUR MACHINE (the changes take effect only after you reboot Linux).
12. To cross-check that IPv6 is disabled:
sudo cat /proc/sys/net/ipv6/conf/all/disable_ipv6
Note:
If 0, not disabled.
If 1, disabled.
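The 0/1 check in step 12 can be wrapped in a small helper so the flag value is interpreted for you. This is an illustrative function, not part of Hadoop; on a real machine you would feed it the value read from /proc.

```shell
#!/bin/sh
# Step 12 as a helper: report whether IPv6 is disabled given the flag
# value from /proc (0 = not disabled, 1 = disabled).
ipv6_status() {
    if [ "$1" = "1" ]; then
        echo "disabled"
    else
        echo "not disabled"
    fi
}

# On a real machine:
#   ipv6_status "$(cat /proc/sys/net/ipv6/conf/all/disable_ipv6)"
ipv6_status 1   # prints "disabled"
ipv6_status 0   # prints "not disabled"
```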
13. cd ~
14. vi .bashrc

# setting java path
export JAVA_HOME=/home/user/dirname/jdk1.6.0_35
export PATH=$JAVA_HOME/bin:$PATH
# setting Hadoop path
export HADOOP_PREFIX=/home/user/dirname/hadoop-1.2.1
export PATH=$HADOOP_PREFIX/bin:$PATH
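If you re-run step 14 the exports get appended to .bashrc again; a grep guard keeps repeated runs from duplicating them. The sketch below shows this against a temp file standing in for ~/.bashrc, with the hypothetical install paths used in this document.

```shell
#!/bin/sh
# Idempotent version of step 14: append the exports to .bashrc only once.
set -e

BASHRC=$(mktemp)        # stand-in for ~/.bashrc so this is safe to dry-run

add_exports() {
    # Skip the append if the marker variable is already present.
    if ! grep -q "HADOOP_PREFIX" "$1"; then
        cat >> "$1" <<'EOF'
# setting java path
export JAVA_HOME=/home/user/dirname/jdk1.6.0_35
export PATH=$JAVA_HOME/bin:$PATH
# setting Hadoop path
export HADOOP_PREFIX=/home/user/dirname/hadoop-1.2.1
export PATH=$HADOOP_PREFIX/bin:$PATH
EOF
    fi
}

add_exports "$BASHRC"
add_exports "$BASHRC"   # second run is a no-op
grep -c "HADOOP_PREFIX" "$BASHRC"   # prints 2 (the two lines naming it)
```

After editing the real .bashrc, run `source ~/.bashrc` (or open a new terminal) so the new PATH takes effect.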
15. Edit these configuration files: hadoop-env.sh, mapred-site.xml, hdfs-site.xml, core-site.xml
(your directory -> hadoop-1.2.1 -> conf)
hadoop-env.sh:
# The java implementation to use. Required.
export JAVA_HOME=/home/user/yoga/jdk1.6.0_35
# Hadoop_Prefix
export HADOOP_HOME=/home/user/yoga/hadoop-1.2.1
NOTE: Replace yoga with your directory name.
mapred-site.xml:
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>hdfs://localhost:50001</value>
</property>
</configuration>
hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>

<property>
<name>dfs.name.dir</name>
<value>/home/ubuntu/hadoop-dir/name-dir</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/home/ubuntu/hadoop-dir/data-dir</value>
</property>
</configuration>
core-site.xml:
<configuration>
<!-- global properties -->
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:50000</value>
</property>
</configuration>
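A frequent source of startup failures is a typo in these XML files, so it is worth a quick sanity check before formatting the namenode. The sketch below greps a scratch copy of core-site.xml for the property it must contain; point it at your real <directory>/hadoop-1.2.1/conf instead of the temp directory used here for the dry-run.

```shell
#!/bin/sh
# Sanity check for step 15: confirm core-site.xml names the NameNode URI.
set -e

CONF_DIR=$(mktemp -d)   # stand-in for <directory>/hadoop-1.2.1/conf
cat > "$CONF_DIR/core-site.xml" <<'EOF'
<configuration>
<!-- global properties -->
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:50000</value>
</property>
</configuration>
EOF

# fs.default.name must match the URI that HDFS clients and the
# JobTracker (mapred-site.xml) connect to.
grep -q "fs.default.name" "$CONF_DIR/core-site.xml"
grep -q "hdfs://localhost:50000" "$CONF_DIR/core-site.xml"
echo "core-site.xml looks ok"
```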
cd dirname/hadoop-1.2.1/bin
hadoop namenode -format
./start-all.sh
jps
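After start-all.sh, jps should list the five Hadoop 1.x daemons (NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker) plus Jps itself. The helper below checks a jps listing for them; it is fed a sample listing here so it can be dry-run without a live cluster, and the process IDs shown are made up.

```shell
#!/bin/sh
# Check a jps listing for the Hadoop 1.x daemons started by start-all.sh.
check_daemons() {
    for d in NameNode DataNode SecondaryNameNode JobTracker TaskTracker; do
        if ! echo "$1" | grep -q "$d"; then
            echo "missing: $d"
            return 1
        fi
    done
    echo "all daemons running"
}

SAMPLE="1234 NameNode
2345 DataNode
3456 SecondaryNameNode
4567 JobTracker
5678 TaskTracker
6789 Jps"

check_daemons "$SAMPLE"            # prints "all daemons running"
# On a real machine: check_daemons "$(jps)"
```

If a daemon is missing, its log under hadoop-1.2.1/logs is the place to look.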
