Cloud Computing Lab Record
INDEX
S.NO DATE EXPERIMENT NAME MARKS SIGN
1. Create a simple cloud software application and provide it as
a service using any Cloud Service Provider to demonstrate
Software as a Service (SaaS).
2. Create a Virtual Machine with 1 vCPU, 2GB RAM and
15GB storage disk using a Type 2 Virtualization Software
3. Create a Virtual Hard Disk and allocate the storage using
VMware Workstation
4. Create a Snapshot and Cloning of a VM and Test it by
loading the Previous Version/Cloned VM
5. Demonstrate Infrastructure as a Service (IaaS) by Creating a
Virtual Machine using a Public Cloud Service Provider
(Azure/GCP/AWS), configure with minimum CPU, RAM,
and Storage and Launch the VM image.
6. Create a Simple Web Application using Java or Python and
host it in any Public Cloud Service Provider
(Azure/GCP/AWS) to demonstrate Platform as a Service
(PaaS)
7. Create a Storage service using any Public Cloud Service
Provider (Azure/GCP/AWS) and check the public
accessibility of the stored file to demonstrate Storage as a
Service
8. Create a SQL storage service and perform a basic query
using any Public Cloud Service Provider (Azure/GCP/AWS)
to demonstrate Database as a Service (DaaS)
9. Perform the basic configuration setup for Installing Hadoop
2.x like Creating the HDUSER and SSH localhost
10. Install Hadoop 2.x and configure the Name Node and Data
Node.
11. Launch the Hadoop 2.x and perform MapReduce Program
for a Word Count problem
EXP NO 1: CREATE A SIMPLE CLOUD SOFTWARE APPLICATION AND
PROVIDE IT AS A SERVICE USING ANY CLOUD SERVICE PROVIDER TO
DEMONSTRATE SOFTWARE AS A SERVICE (SAAS).
DATE:
AIM: To create a simple cloud software application and provide it as a service using a cloud service provider, demonstrating Software as a Service (SaaS).
PROCEDURE:
IMPLEMENTATION:
STEP 1: Go to zoho.com.
STEP 2: Log in to zoho.com.
RESULT:
EXP NO 2: CREATE A VIRTUAL MACHINE WITH 1 vCPU, 2GB RAM AND 15GB STORAGE DISK USING A TYPE 2 VIRTUALIZATION SOFTWARE
DATE:
AIM: To create a virtual machine with 1 vCPU, 2GB RAM, and a 15GB storage disk using Type 2 virtualization software.
PROCEDURE:
IMPLEMENTATION:
STEP 1: Download VMware Workstation and install it as a Type 2 hypervisor.
STEP 2: Download Ubuntu or Tiny OS as an ISO image file.
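For reference, the sizing chosen in the New Virtual Machine wizard ends up in the VM's .vmx configuration file. A minimal sketch of the relevant entries, assuming an Ubuntu guest (memsize is in MB; the 15GB disk size itself is recorded in the .vmdk descriptor):
numvcpus = "1"
memsize = "2048"
guestOS = "ubuntu-64"
scsi0:0.fileName = "Ubuntu.vmdk"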
RESULT:
EXP 3: CREATE A VIRTUAL HARD DISK AND ALLOCATE THE STORAGE USING VMWARE WORKSTATION
DATE:
AIM: To create a virtual hard disk and allocate storage using VMware Workstation.
PROCEDURE:
IMPLEMENTATION:
STEP 1: Go to VMware Workstation.
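Workstation also bundles the vmware-vdiskmanager command-line utility, which can create the same virtual disk without the GUI. A sketch, assuming a growable single-file disk named newdisk.vmdk:
$ vmware-vdiskmanager -c -s 15GB -a lsilogic -t 0 newdisk.vmdk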
RESULT:
EXP NO 4: CREATE A SNAPSHOT AND CLONING OF A VM AND TEST IT BY LOADING THE PREVIOUS VERSION/CLONED VM
DATE:
AIM: To create a snapshot and a clone of a VM and test them by loading the previous version/cloned VM.
PROCEDURE:
IMPLEMENTATION:
STEP 1: Go to VMware Workstation.
CLONING OF A VM
STEP 1: Select the VM, go to VM > Manage, and click Clone.
STEP 2: Click Clone.
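The same snapshot, revert, and clone operations can be scripted with the vmrun tool shipped with Workstation. A sketch, assuming a hypothetical .vmx path and snapshot name:
$ vmrun -T ws snapshot ~/vmware/Ubuntu/Ubuntu.vmx BeforeChange
$ vmrun -T ws revertToSnapshot ~/vmware/Ubuntu/Ubuntu.vmx BeforeChange
$ vmrun -T ws clone ~/vmware/Ubuntu/Ubuntu.vmx ~/vmware/UbuntuClone/UbuntuClone.vmx full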
RESULT:
EXP NO 5: DEMONSTRATE INFRASTRUCTURE AS A SERVICE (IAAS) BY CREATING A VIRTUAL MACHINE USING A PUBLIC CLOUD SERVICE PROVIDER (AZURE/GCP/AWS), CONFIGURE WITH MINIMUM CPU, RAM, AND STORAGE AND LAUNCH THE VM IMAGE.
DATE:
AIM: To demonstrate Infrastructure as a Service (IaaS) by creating a virtual machine with minimum CPU, RAM, and storage using a public cloud service provider (Azure/GCP/AWS) and launching the VM image.
PROCEDURE:
IMPLEMENTATION:
STEP 1: Create an account in Microsoft Azure.
STEP 7: Now connect to the virtual machine and download the RDP file to open your Windows virtual machine.
STEP 8: A new Windows virtual machine has been created.
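The same VM can be provisioned from the Azure CLI instead of the portal. A minimal sketch, assuming hypothetical resource group and VM names and a small burstable size:
$ az group create --name myLabRG --location eastus
$ az vm create --resource-group myLabRG --name myLabVM --image Win2019Datacenter --size Standard_B1s --admin-username azureuser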
RESULT:
EXP NO 6: CREATE A SIMPLE WEB APPLICATION USING JAVA OR PYTHON AND HOST IT IN ANY PUBLIC CLOUD SERVICE PROVIDER (AZURE/GCP/AWS) TO DEMONSTRATE PLATFORM AS A SERVICE (PAAS)
DATE:
AIM: To create a simple web application and host it in a public cloud service provider (Azure/GCP/AWS), demonstrating Platform as a Service (PaaS).
PROCEDURE:
IMPLEMENTATION:
STEP 1: First, go to App Service to create a web app.
STEP 2: Enter the resource group, web app name, and region, and select the Linux OS.
STEP 3: After entering all the necessary details, click Review + create, then click Create to create the web app.
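An equivalent Azure CLI sketch for the App Service plan and web app (names are hypothetical, and the exact runtime string format varies slightly between CLI versions):
$ az appservice plan create --name myLabPlan --resource-group myLabRG --sku B1 --is-linux
$ az webapp create --name my-lab-webapp --resource-group myLabRG --plan myLabPlan --runtime "PYTHON|3.9"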
RESULT:
EXP NO 7: CREATE A STORAGE SERVICE USING ANY PUBLIC CLOUD SERVICE PROVIDER (AZURE/GCP/AWS) AND CHECK THE PUBLIC ACCESSIBILITY OF THE STORED FILE TO DEMONSTRATE STORAGE AS A SERVICE
DATE:
AIM: To create a storage service using a public cloud service provider (Azure/GCP/AWS) and check the public accessibility of the stored file, demonstrating Storage as a Service.
PROCEDURE:
IMPLEMENTATION:
STEP 2: Enter the resource group and storage account name, then click Review + create and Create; the storage account will be deployed successfully.
STEP 3: The storage account is now created.
STEP 8: Return to the Static website page and open the primary endpoint link; your web page is now live.
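A corresponding Azure CLI sketch for the storage account, static website, and public-accessibility check (account and file names are hypothetical):
$ az storage account create --name mylabstorage --resource-group myLabRG --location eastus --sku Standard_LRS
$ az storage blob service-properties update --account-name mylabstorage --static-website --index-document index.html
$ az storage blob upload --account-name mylabstorage --container-name '$web' --name index.html --file index.html
$ az storage account show --name mylabstorage --query "primaryEndpoints.web" --output tsv
$ curl <primary-web-endpoint-URL>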
RESULT:
EXP NO 8: CREATE A SQL STORAGE SERVICE AND PERFORM A BASIC QUERY USING ANY PUBLIC CLOUD SERVICE PROVIDER (AZURE/GCP/AWS) TO DEMONSTRATE DATABASE AS A SERVICE (DAAS)
DATE:
AIM: To create a SQL storage service and perform a basic query using a public cloud service provider (Azure/GCP/AWS), demonstrating Database as a Service (DaaS).
PROCEDURE:
IMPLEMENTATION:
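The implementation details were not recorded; as a sketch, a basic Azure SQL server and database can be created from the CLI and queried with sqlcmd (server, database, and credential names are hypothetical):
$ az sql server create --name mylabsqlserver --resource-group myLabRG --location eastus --admin-user labadmin --admin-password '<password>'
$ az sql db create --resource-group myLabRG --server mylabsqlserver --name labdb --service-objective Basic
$ az sql server firewall-rule create --resource-group myLabRG --server mylabsqlserver --name allow-client --start-ip-address <your-ip> --end-ip-address <your-ip>
$ sqlcmd -S mylabsqlserver.database.windows.net -d labdb -U labadmin -P '<password>' -Q "SELECT GETDATE() AS now;"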
RESULT:
EXP. 9: PERFORM THE BASIC CONFIGURATION SETUP FOR INSTALLING
HADOOP 2.X LIKE CREATING THE HDUSER AND SSH LOCALHOST
AIM: To perform the basic configuration setup for installing Hadoop 2.x, such as creating the hduser and enabling SSH to localhost.
PROCEDURE:
Step 1 – System Update
$ sudo apt-get update
Step 2 – Install Java
// Install Oracle Java or, alternatively, the default JDK.
$ sudo apt-get install default-jdk
// After the installation is finished, run the java command to check the version and vendor.
$ java -version
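Step 3 – Create the dedicated Hadoop user
The record omits the user-creation commands; a minimal sketch, assuming the hduser account and hadoop group referenced in the following steps:
$ sudo addgroup hadoop
$ sudo adduser --ingroup hadoop hduser
$ sudo adduser hduser sudo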
Step 4 – Switch to hduser and generate SSH keys
$ su hduser
$ ssh-keygen -t rsa -P ""
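Step 5 – Enable passwordless SSH to localhost
A sketch of the standard key-authorization commands implied by the "SSH localhost" part of this experiment:
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys
$ ssh localhost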
IMPLEMENTATION:
RESULT:
EXP. 10: INSTALL HADOOP 2.X AND CONFIGURE THE NAME NODE AND DATA
NODE.
AIM: To install Hadoop 2.x and configure the NameNode and DataNode.
PROCEDURE:
Edit the following configuration files:
1. ~/.bashrc
2. /usr/local/hadoop/hadoop-2.7.2/etc/hadoop/hadoop-env.sh
3. /usr/local/hadoop/hadoop-2.7.2/etc/hadoop/core-site.xml
4. /usr/local/hadoop/hadoop-2.7.2/etc/hadoop/hdfs-site.xml
5. /usr/local/hadoop/hadoop-2.7.2/etc/hadoop/yarn-site.xml
6. /usr/local/hadoop/hadoop-2.7.2/etc/hadoop/mapred-site.xml.template
// Edit ~/.bashrc and append the following lines at the end
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export HADOOP_HOME=/usr/local/hadoop/hadoop-2.7.2
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export PATH=$PATH:/usr/local/hadoop/hadoop-2.7.2/bin
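// After saving ~/.bashrc, reload it so the new variables take effect (a standard step, assumed here):
$ source ~/.bashrc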
// Edit hdfs-site.xml
$ sudo nano hdfs-site.xml
// Add the following lines between <configuration> …… </configuration>
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop_tmp/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop_tmp/hdfs/datanode</value>
</property>
</configuration>
// Edit core-site.xml
$ sudo nano core-site.xml
// Add the following lines between <configuration> …… </configuration>
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
// Edit yarn-site.xml
$ sudo nano yarn-site.xml
// Add the following lines between <configuration> …… </configuration>
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</configuration>
// Edit mapred-site.xml
$ sudo cp /usr/local/hadoop/hadoop-2.7.2/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/hadoop-2.7.2/etc/hadoop/mapred-site.xml
$ sudo nano mapred-site.xml
// Add the following lines between <configuration> …… </configuration>
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
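Before the daemons are started, the NameNode and DataNode directories declared in hdfs-site.xml must exist and be owned by hduser, and HDFS must be formatted once. A sketch of these standard steps, using the paths configured above:
$ sudo mkdir -p /usr/local/hadoop_tmp/hdfs/namenode
$ sudo mkdir -p /usr/local/hadoop_tmp/hdfs/datanode
$ sudo chown -R hduser:hadoop /usr/local/hadoop_tmp
$ hdfs namenode -format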
RESULT:
EXP NO 11: LAUNCH HADOOP 2.X AND PERFORM A MAPREDUCE PROGRAM FOR A WORD COUNT PROBLEM
DATE:
AIM: To launch Hadoop 2.x and perform a MapReduce program for a word count problem.
PROCEDURE:
Step 1 - Open Terminal
$ su hduser
Password:
Step 2 - Start the HDFS and YARN services
$ cd /usr/local/hadoop/hadoop-2.7.2/sbin
$ start-dfs.sh
$ start-yarn.sh
$ jps
Step 3 - Run jps and verify that the NameNode, DataNode, ResourceManager, and NodeManager daemons are listed. The MapReduce example and library jars are available under the /usr/local/hadoop/hadoop-2.7.2/share/hadoop/mapreduce folder.
WordCount.java
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import org.apache.hadoop.io.Text;
public class WordCount extends Configured implements Tool {
@Override
public int run(String[] arg0) throws Exception {
if (arg0.length < 2) {
System.out.println("check the command line arguments");
return -1; // exit early when the input/output paths are missing
}
JobConf conf=new JobConf(WordCount.class);
FileInputFormat.setInputPaths(conf, new Path(arg0[0]));
FileOutputFormat.setOutputPath(conf, new Path(arg0[1]));
conf.setMapperClass(WordCountMapper.class);
conf.setReducerClass(WordCountReducer.class);
conf.setOutputKeyClass(Text.class);
conf.setOutputValueClass(IntWritable.class);
JobClient.runJob(conf);
return 0;
}
public static void main(String args[]) throws Exception
{
int exitcode=ToolRunner.run(new WordCount(), args);
System.exit(exitcode);
}
}
WordCountMapper.java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.Mapper;
public class WordCountMapper extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
@Override
public void map(LongWritable arg0, Text arg1, OutputCollector<Text, IntWritable> arg2, Reporter arg3) throws IOException {
// Emit (word, 1) for every whitespace-separated token in the line
String s=arg1.toString();
for(String word:s.split(" "))
{
arg2.collect(new Text(word),new IntWritable(1));
}
}
}
WordCountReducer.java
import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.io.Text;
public class WordCountReducer extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {
@Override
public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
// Sum the counts emitted by the mapper for each word
int sum = 0;
while (values.hasNext()) {
sum += values.next().get();
}
output.collect(key, new IntWritable(sum));
}
}
Now click on the File tab and select Export. Under Java, select Runnable JAR file.
In Launch Config, select the config file you created in Step 9 (WordCountConfig).
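Once the runnable JAR is exported, the job can be submitted to the cluster. A sketch, assuming the JAR was exported as WordCount.jar and a sample input file exists (file names and HDFS paths are illustrative):
$ hdfs dfs -mkdir -p /user/hduser/input
$ hdfs dfs -put sample.txt /user/hduser/input
$ hadoop jar WordCount.jar /user/hduser/input /user/hduser/output
$ hdfs dfs -cat /user/hduser/output/part-00000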
RESULT: