Cloud Computing LAB
Ex.No.1 CREATION OF VIRTUAL MACHINES
Date:
AIM:
To find the procedure to run virtual machines of different configurations and to check
how many virtual machines can be utilized at a particular time.
PROCEDURE:
Step 1:
Open the Oracle VirtualBox Manager and click New to create a new virtual machine.
Step 2:
Provide a name for the virtual machine and select the hard disk size for the virtual
machine.
Select the storage type as dynamically allocated, choose the disk size, and click OK.
Step 3:
Select the ISO file of the virtual OS Ubuntu and click Start.
Step 4:
The virtual OS Ubuntu is opened successfully. Now type “gedit” in the search
box to open the text editor in Ubuntu.
Step 5:
Type your desired C program in the text editor and save it with the extension (.c).
Step 6:
PROCEDURE (Windows 7 virtual machine):
Step 1:
Open the Oracle VirtualBox Manager and click New to create a new virtual machine. Provide
a name for the operating system and select the memory size to be allocated to it.
Step 2:
Select the ISO file of the virtual OS Windows 7 and click Start.
Step 3:
Select the language to use in the Operating System and click Install Now.
Step 4:
Select the installation type as Custom (for a new installation) and allocate disk
space as required. Click Next to start the installation.
Step 5:
Provide a user name and password (optional) to gain access to the OS.
Step 7:
Set the time and date for the new Operating System.
Step 8:
Thus the new operating system Windows 7 opens as the virtual machine.
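For reference, the same kind of virtual machine can also be created from the host's command line with VBoxManage (a minimal sketch; the VM name, memory size, disk size and ISO path are illustrative values, not part of the original procedure):
VBoxManage createvm --name "Win7-VM" --ostype Windows7 --register
VBoxManage modifyvm "Win7-VM" --memory 2048
VBoxManage createhd --filename Win7-VM.vdi --size 25000
VBoxManage storagectl "Win7-VM" --name "SATA" --add sata
VBoxManage storageattach "Win7-VM" --storagectl "SATA" --port 0 --device 0 --type hdd --medium Win7-VM.vdi
VBoxManage storageattach "Win7-VM" --storagectl "SATA" --port 1 --device 0 --type dvddrive --medium windows7.iso
VBoxManage startvm "Win7-VM"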
RESULT:
Thus the procedure to run different virtual machines on a single system using
Oracle VirtualBox is studied and implemented successfully.
EX NO:2 TO ATTACH VIRTUAL BLOCK TO THE VIRTUAL MACHINE AND CHECK WHETHER IT HOLDS THE DATA EVEN AFTER THE RELEASE OF THE VIRTUAL MACHINE
AIM:
To find a procedure to attach a virtual block to the virtual machine and check whether it
holds the data even after the release of the virtual machine.
PROCEDURE:
Step 1: Open VirtualBox and create a new virtual machine - Windows 7 (32-bit).
Step 2: In the Storage tab, select Controller: IDE and then choose the installation disk.
Step 3: In the Storage tab, select Controller: SATA and then choose "Create new disk" to add a virtual block device.
Step 4: Now start the virtual machine and install Windows.
Step 5: Go to Start in the virtual machine and select Control Panel.
Step 6: In Control Panel select "System and Security" and then "Administrative Tools".
Steps 7-9: Open "Computer Management" -> "Disk Management", initialize the newly attached virtual disk, then right-click its unallocated space and select "New Simple Volume".
Step 10: Click Next, choose the disk name (drive letter), click Next again, and finally click Finish to close the wizard.
Step 11: Select the Start menu in the virtual machine, choose the Computer icon, and the new memory partition will be displayed.
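The extra virtual block can likewise be created and attached from the host's command line with VBoxManage (a sketch; the VM name, file name and size are illustrative):
VBoxManage createhd --filename extra-block.vdi --size 1024
VBoxManage storageattach "Windows 7(32bit)" --storagectl "SATA" --port 1 --device 0 --type hdd --medium extra-block.vdi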
RESULT:
Thus the virtual block has been attached to the virtual machine, and it has been
verified that the block holds its data even after the virtual machine is released.
Ex.No.3 EXECUTION OF A SAMPLE PROGRAM IN A VIRTUAL MACHINE
AIM:
To find a procedure to use the C compiler in the virtual machine and execute a
sample program.
PROCEDURE:
Step 1:
Open the virtual machine in which you want to run the C program.
Step 2:
The text editor used by the Ubuntu operating system is gedit.
Step 3:
Type your desired C program in the text editor and save it as a C file using the (.c) extension.
Step 4:
Open a terminal and compile the saved program using the C compiler (gcc).
Step 5:
Run the compiled executable and verify the program's output.
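As an illustration only (the file name and program below are examples, not taken from the original procedure), a minimal hello.c created in gedit and the terminal commands to compile and run it could look like this:
/* hello.c - minimal sample program */
#include <stdio.h>
int main(void) {
    printf("Hello from the virtual machine\n");
    return 0;
}
$ gcc hello.c -o hello
$ ./hello
Hello from the virtual machine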
RESULT:
Thus the procedure to use the C compiler in the virtual machine and execute a
sample program is implemented successfully.
EX.NO.:4 VIRTUAL MACHINE MIGRATION
DATE:
AIM:
To show virtual machine migration from one node to another based on a certain
condition.
PROCEDURE:
Step7: Go to File->Computer:/home/sam/Documents/
Step8: Type the neighbour's URL: sftp://[email protected]._/
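The transfer in Steps 7-8 is done through the file manager over SFTP; an equivalent terminal command would look roughly like the following, where the user name, neighbour address and file name are placeholders rather than values from the procedure:
$ scp /home/sam/Documents/<file-to-migrate> <user>@<neighbour-address>:/home/<user>/Documents/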
RESULT:
Thus the migration of a virtual machine from one node to another based on the given condition is performed successfully.
Date:
AIM:
To find the procedure to set up a virtual datacenter (vDC), configure usage quotas, and prepare templates and images for the vDC users.
PROCEDURE:
The cloud administrator can set usage quotas for the vDC. In this case, we will put a limit
of 10 VMs.
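As an illustration only (assuming an OpenNebula-style cloud, which is not stated explicitly here), such a limit can be expressed as a quota template applied to the vDC's group:
# quota.txt - restrict the vDC to at most 10 running VMs
VM = [
  VMS = 10
]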
At this point, the cloud administrator can also prepare working Templates and
Images for the vDC users.
RESULT:
Thus the usage quotas, templates and images for the virtual datacenter (vDC) are configured successfully.
Ex.No.6 INSTALLATION OF A SINGLE NODE HADOOP CLUSTER
AIM:
To find the procedure to install Hadoop 2.6 on Ubuntu 14.04 as a single-node cluster.
PROCEDURE:
STEP:1
CS117@user:~$ cd ~
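Hadoop 2.6 needs a working Java installation; that step is not shown above, but on Ubuntu 14.04 it is typically done as follows (matching the JAVA_HOME used later in hadoop-env.sh):
CS117@user:~$ sudo apt-get update
CS117@user:~$ sudo apt-get install openjdk-7-jdk
CS117@user:~$ java -version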
STEP:2
Adding a dedicated Hadoop user. Create a hadoop group and an hduser account that belongs to it (the prompts below come from addgroup/adduser):
CS117@user:~$ sudo addgroup hadoop
Done.
CS117@user:~$ sudo adduser --ingroup hadoop hduser
Other []:
Is the information correct? [Y/n] Y
STEP:3
Installing SSH
ssh is pre-enabled on Linux, but in order to start the sshd daemon we need to install
ssh first. Use this command to do that:
CS117@user:~$ sudo apt-get install ssh
This will install ssh on our machine. We can confirm that it is set up properly by
checking that both the ssh client and the sshd daemon are available (for example with which ssh and which sshd).
CS117@user:~$ su hduser
Password:
CS117@user:~$ ssh-keygen -t rsa -P ""
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hduser/.ssh/id_rsa):
Created directory '/home/hduser/.ssh'.
Your identification has been saved in /home/hduser/.ssh/id_rsa.
Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.
The key fingerprint is:
50:6b:f3:fc:0f:32:bf:30:79:c2:41:71:26:cc:7d:e3 hduser@laptop
The key's randomart image is:
+--[ RSA 2048]----+
| .oo.o |
| . .o=. o |
| .+. o.|
| o= E |
| S+ |
| .+ |
| O+ |
| Oo |
| o.. |
+-----------------+
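The new public key is then appended to the authorized keys so that Hadoop can reach localhost over ssh without a password (this step is not shown in the transcript above):
hduser@laptop:~$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
hduser@laptop:~$ ssh localhost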
STEP:4
Install Hadoop
hduser@laptop:~/hadoop-2.6.0$ su k
Password:
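The download of Hadoop and the move into /usr/local are not reproduced here; they typically look like the following (the mirror URL is one possibility; the su k above is needed because hduser is not yet a sudoer):
hduser@laptop:~$ wget http://archive.apache.org/dist/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
hduser@laptop:~$ tar xvzf hadoop-2.6.0.tar.gz
k@laptop:/home/hduser$ sudo adduser hduser sudo
hduser@laptop:~/hadoop-2.6.0$ sudo mkdir -p /usr/local/hadoop
hduser@laptop:~/hadoop-2.6.0$ sudo mv * /usr/local/hadoop
hduser@laptop:~/hadoop-2.6.0$ sudo chown -R hduser:hadoop /usr/local/hadoop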
STEP:5
The following files will have to be modified to complete the Hadoop setup:
1. ~/.bashrc
2. /usr/local/hadoop/etc/hadoop/hadoop-env.sh
3. /usr/local/hadoop/etc/hadoop/core-site.xml
4. /usr/local/hadoop/etc/hadoop/mapred-site.xml.template
5. /usr/local/hadoop/etc/hadoop/hdfs-site.xml
1. ~/.bashrc:
hduser@laptop:~$ vi ~/.bashrc
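The exact lines appended to ~/.bashrc are not reproduced here; typical additions for this layout (assuming Hadoop in /usr/local/hadoop and OpenJDK 7, as used elsewhere in this setup) are:
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
After saving the file, run source ~/.bashrc so the new variables take effect.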
2. /usr/local/hadoop/etc/hadoop/hadoop-env.sh
hduser@laptop:~$ vi /usr/local/hadoop/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
3. /usr/local/hadoop/etc/hadoop/core-site.xml:
hduser@laptop:~$ vi /usr/local/hadoop/etc/hadoop/core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>
4. /usr/local/hadoop/etc/hadoop/mapred-site.xml
hduser@laptop:~$ cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template
/usr/local/hadoop/etc/hadoop/mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<description>The host and port that the MapReduce job tracker runs
at. If "local", then jobs are run in-process as a single map
and reduce task.
</description>
</property>
</configuration>
5. /usr/local/hadoop/etc/hadoop/hdfs-site.xml
hduser@laptop:~$ vi /usr/local/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create time.
</description>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/datanode</value>
</property>
</configuration>
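The directories named in core-site.xml and hdfs-site.xml above have to exist and be writable by hduser before the filesystem is formatted; this is usually done with commands along these lines (run from a sudo-capable account):
$ sudo mkdir -p /app/hadoop/tmp
$ sudo mkdir -p /usr/local/hadoop_store/hdfs/namenode /usr/local/hadoop_store/hdfs/datanode
$ sudo chown -R hduser:hadoop /app/hadoop/tmp /usr/local/hadoop_store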
STEP:6
Format the New Hadoop Filesystem
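The format command itself does not appear above; with this setup it is typically run once as:
hduser@laptop:~$ hadoop namenode -format
(on Hadoop 2.x the equivalent hdfs namenode -format may be used instead). Note that formatting destroys any data already present in the HDFS store.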
STEP:7
Starting Hadoop
@laptop:~$ cd /usr/local/hadoop/sbin
CS117@user:/usr/local/hadoop/sbin$ ls
distribute-exclude.sh start-all.cmd stop-balancer.sh
hadoop-daemon.sh start-all.sh stop-dfs.cmd
hadoop-daemons.sh start-balancer.sh stop-dfs.sh
hdfs-config.cmd start-dfs.cmd stop-secure-dns.sh
hdfs-config.sh start-dfs.sh stop-yarn.cmd
httpfs.sh start-secure-dns.sh stop-yarn.sh
kms.sh start-yarn.cmd yarn-daemon.sh
mr-jobhistory-daemon.sh start-yarn.sh yarn-daemons.sh
refresh-namenodes.sh stop-all.cmd
slaves.sh stop-all.sh
hduser@laptop:/usr/local/hadoop/sbin$ start-all.sh
hduser@laptop:~$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
15/04/18 16:43:13 WARN util.NativeCodeLoader: Unable to load native-hadoop library
for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-
namenode-laptop.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-
laptop.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-
secondarynamenode-laptop.out
15/04/18 16:43:58 WARN util.NativeCodeLoader: Unable to load native-hadoop library
for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-
resourcemanager-laptop.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-
nodemanager-laptop.out
STEP:8
hduser@laptop:/usr/local/hadoop/sbin$ jps
9026 NodeManager
7348 NameNode
9766 Jps
8887 ResourceManager
7507 DataNode
Stopping Hadoop
$ pwd
/usr/local/hadoop/sbin
$ ls
hduser@laptop:/usr/local/hadoop/sbin$ stop-all.sh
Result:
Thus the installation of Hadoop 2.6 on Ubuntu 14.04 as a single-node
cluster is executed successfully.
Ex.No:7 MOUNT THE ONE NODE HADOOP CLUSTER USING FUSE.
Date:
Aim:
To find a procedure to mount the one node Hadoop cluster using FUSE.
Procedure:
HDFS cannot be mounted directly as a regular filesystem; however, one can leverage
FUSE to write a userland application that exposes HDFS via a traditional filesystem
interface. fuse-dfs is one such FUSE-based application which allows you to mount HDFS as if it were a traditional Linux filesystem.
If you would like to mount HDFS on Linux, you can install fuse-dfs, along with FUSE
as follows:
wget http://archive.cloudera.com/one-click-install/maverick/cdh3-repository_1.0_all.deb
sudo dpkg -i cdh3-repository_1.0_all.deb
sudo apt-get update
sudo apt-get install hadoop-0.20-fuse
Once fuse-dfs is installed, go ahead and mount HDFS using FUSE as follows.
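The mount commands themselves are not reproduced here; with the hadoop-0.20-fuse package they typically take the form below, where <mount_point> and the NameNode address are placeholders:
$ sudo mkdir -p <mount_point>
$ sudo hadoop-fuse-dfs dfs://<namenode_hostname>:<namenode_port> <mount_point>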
Once HDFS has been mounted at <mount_point>, you can use most of the
traditional filesystem operations (e.g., cp, rm, cat, mv, mkdir, rmdir, more, scp).
However, random write operations (and tools that depend on them, such as rsync) and
permission-related operations such as chmod and chown are not supported on FUSE-mounted HDFS.
Result:
Thus the one node Hadoop cluster is mounted successfully using FUSE.
Ex.No.8 APIs OF HADOOP TO INTERACT WITH IT – TO DISPLAY FILE CONTENT OF A FILE EXISTING IN HDFS
Aim:
To write a program that uses the Hadoop APIs to display the content of a file
existing in HDFS.
Procedure:
/home/hduser/HadoopFScat.java:
import java.io.InputStream;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
public class HadoopFScat {
  public static void main(String[] args) throws Exception {
    String uri = args[0];  // HDFS path of the file to display
    Configuration conf = new Configuration();
    FileSystem fileSystem = FileSystem.get(URI.create(uri), conf);  // obtain the FileSystem for the given URI
    InputStream inputStream = null;
    try {
      inputStream = fileSystem.open(new Path(uri));              // open the file stored in HDFS
      IOUtils.copyBytes(inputStream, System.out, 4096, false);   // copy its contents to stdout in 4 KB buffers
    } finally {
      IOUtils.closeStream(inputStream);
    }
  }
}
Download the jar file:
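The compile and run commands themselves are not listed here; a typical sequence (the jar name and the HDFS file path are illustrative) is:
hduser@laptop:~$ javac -classpath $(hadoop classpath) -d . /home/hduser/HadoopFScat.java
hduser@laptop:~$ jar cf hadoopfscat.jar HadoopFScat.class
hduser@laptop:~$ hadoop jar hadoopfscat.jar HadoopFScat hdfs://localhost:54310/user/hduser/sample.txt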
RESULT:
Thus a program that uses the Hadoop APIs to display the content of a file
existing in HDFS is created and executed successfully.
EX:NO:9 WORD COUNT PROGRAM
Date:
AIM:
To write a word count program to demonstrate the use of Map and Reduce tasks.
PROCEDURE:
Step 1:
user@cs117-HP-Pro-3330-MT:/home/cs1-17$ cd\
user@cs117-HP-Pro-3330-MT:~$ start-all.sh
user@cs117-HP-Pro-3330-MT:~$ jps
9551 NodeManager
8924 NameNode
9857 Jps
9076 DataNode
9265 SecondaryNameNode
9420 ResourceManager
Step 2:
Create a directory named ip1 on the Desktop. In the ip1 directory, create a two.txt file to be
used as word-count input. Create a directory named op1 on the Desktop.
user@cs117-HP-Pro-3330-MT:~$ cd /usr/local/hadoop1
user@cs117-HP-Pro-3330-MT:/usr/local/hadoop1$ bin/hdfs dfs -put '/home/cs1-17/Desktop/op1' /user2
16/09/20 11:02:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library
for your platform... using builtin-java classes where applicable
Step 3:
16/09/20 11:02:13 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
16/09/20 11:02:13 INFO mapred.Merger: Down to the last merge-pass, with 1 segments
left of total size: 29 bytes
16/09/20 11:02:13 INFO mapred.Merger: Down to the last merge-pass, with 1 segments
left of total size: 29 bytes
16/09/20 11:02:13 INFO mapred.LocalJobRunner: 2 / 2 copied.
Map-Reduce Framework
Spilled Records=6
Shuffled Maps =2
Failed Shuffles=0
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
Bytes Read=42
Bytes Written=23
Step 4:
Found 2 items
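"Found 2 items" is the first line of an HDFS directory listing; the command that produced it is not shown, but would typically be (assuming the /user2 directory used in Step 2):
user@cs117-HP-Pro-3330-MT:/usr/local/hadoop1$ bin/hdfs dfs -ls /user2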
Step 5:
user@cs117-HP-Pro-3330-MT:/usr/local/hadoop1$ /usr/local/hadoop1/bin/hadoop fs -cat
op1/result.txt
Step 6:
Step 7:
user@cs117-HP-Pro-3330-MT:/usr/local/hadoop1$ /usr/local/hadoop1/bin/hadoop fs -
cat op1/*
hello 3
helo 1
world 3
Step 8:
user@cs117-HP-Pro-3330-MT:/usr/local/hadoop1$ /usr/local/hadoop1/bin/hadoop fs -
cat op1/result.txt
Step 9:
user@cs117-HP-Pro-3330-MT:/usr/local/hadoop1$ /usr/local/hadoop1/bin/hadoop fs
op1/result.txt
Step 10:
user@cs117-HP-Pro-3330-MT:/usr/local/hadoop1$ /usr/local/hadoop1/bin/hadoop fs -
cat op1/>>result.txt
Step 11:
user@cs117-HP-Pro-3330-MT:/usr/local/hadoop1$ /usr/local/hadoop1/bin/hadoop fs -
cat >> op1/result.txt
Step 12:
user@cs117-HP-Pro-3330-MT:/usr/local/hadoop1$ stop-all.sh
stopping resourcemanager
no proxyserver to stop
Step 13:
user@cs117-HP-Pro-3330-MT:/usr/local/hadoop1$ cd\
>
user@cs117-HP-Pro-3330-MT:~$
Wordcount.java
//package org.myorg;
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
public class Wordcount {
  // Mapper: emits (word, 1) for every token of each input line
  public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();
    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
      StringTokenizer tokenizer = new StringTokenizer(value.toString());
      while (tokenizer.hasMoreTokens()) {
        word.set(tokenizer.nextToken());
        context.write(word, one);
      } } }
  // Reducer: sums the counts emitted for each word
  public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) { sum += val.get(); }
      context.write(key, new IntWritable(sum));
    } }
  // Driver: configures and submits the job; args[0] = input path, args[1] = output path
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "wordcount");
    job.setJarByClass(Wordcount.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    job.setMapperClass(Map.class);
    job.setReducerClass(Reduce.class);
    job.setInputFormatClass(TextInputFormat.class);
    job.setOutputFormatClass(TextOutputFormat.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    job.waitForCompletion(true);
  } }
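For reference, the job whose output appears in the steps above is typically compiled and submitted as follows (the jar name is illustrative; /user2 and op1 match the input and output paths used earlier):
user@cs117-HP-Pro-3330-MT:/usr/local/hadoop1$ javac -classpath $(bin/hadoop classpath) -d . Wordcount.java
user@cs117-HP-Pro-3330-MT:/usr/local/hadoop1$ jar cf wordcount.jar Wordcount*.class
user@cs117-HP-Pro-3330-MT:/usr/local/hadoop1$ bin/hadoop jar wordcount.jar Wordcount /user2 op1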
Result:
Thus a word count program to demonstrate the use of Map and Reduce tasks is
created and executed successfully.