Ccs335-Cloud Computing Lab Record
1. Install Virtualbox / VMware Workstation with different flavours of Linux or Windows OS on top of Windows 7 or 8.
2. Install a C compiler in the virtual machine created using VirtualBox and execute simple programs.
3. Install Google App Engine. Create hello world app and other simple web applications using python/java.
4. Use GAE launcher to launch the web applications.
5. Simulate a cloud scenario using CloudSim and run a scheduling algorithm that is not present in CloudSim.
6. Find a procedure to transfer the files from one virtual machine to another virtual machine.
7. Install Hadoop single node cluster and run simple applications like wordcount.
EX.NO.:1
DATE:
Install Virtualbox / VMware Workstation with different flavours of
linux or windows OS on top of windows7 or 8.
Aim:
To Install Virtualbox / VMware Workstation with different flavours of linux or windows
OS on top of windows7 or 8.
PROCEDURE:
Log in to the OpenNebula Sunstone web interface with the username "oneadmin" and its password.
APPLICATIONS:
There are various applications of cloud computing in today's networked world. Many search engines and social websites use the concept of cloud computing, such as amazon.com, hotmail.com, facebook.com, linkedin.com, etc. The advantages of cloud computing with respect to scalability include reduced risk, low-cost testing, the ability to segment the customer base, and auto-scaling based on application load.
RESULT:
Thus virtual machines of different configurations were created and run successfully.
EX.NO.:2
DATE:
Install a C compiler in the virtual machine created using virtual box
and execute Simple Programs
Aim:
To install a C compiler in the virtual machine created using VirtualBox and execute simple programs.
PROCEDURE:
APPLICATIONS:
Compiling and running programs inside a virtual machine, as required in grid and cloud environments.
RESULT:
Thus the C compiler was installed in the virtual machine and simple programs were executed successfully.
EX.NO.:3
DATE:
Install Google App Engine. Create hello world app and other simple web applications using python/java.
Aim:
To Install Google App Engine. Create hello world app and other simple web
applications using python/java.
Procedure:
Figure – Deselect the "Google Web Toolkit" option, and link your GAE Java SDK via the "configure SDK" link.
Click Finish, and the Google Plugin for Eclipse will generate a sample project automatically.
3. Hello World
Review the generated project directory.
Nothing special, a standard Java web project structure.
HelloWorld/
  src/
    ...Java source code...
    META-INF/
      ...other configuration...
  war/
    ...JSPs, images, data files...
    WEB-INF/
      ...app configuration...
      lib/
        ...JARs for libraries...
      classes/
        ...compiled classes...
The extra item is the file appengine-web.xml; Google App Engine needs it to run and deploy the application.
File : appengine-web.xml
4. Run it local
Right click on the project and run it as "Web Application".
Eclipse console :
//...
INFO: The server is running at http://localhost:8888/
30 Mar 2012 11:13:01 PM com.google.appengine.tools.development.DevAppServerImpl start
INFO: The admin console is running at http://localhost:8888/_ah/admin
Access the URL http://localhost:8888/ to see the output, and also the hello world servlet at http://localhost:8888/helloworld
In this demonstration, I created an application ID named "mkyong123" and put it in appengine-web.xml.
File : appengine-web.xml
<?xml version="1.0" encoding="utf-8"?>
<appengine-web-app xmlns="http://appengine.google.com/ns/1.0">
<application>mkyong123</application>
<version>1</version>
</appengine-web-app>
To deploy, follow these steps:
Figure 1.2 – Sign in with your Google account and click on the Deploy button.
Figure 1.3 – If everything is fine, the hello world web application will be deployed to this URL
– http://mkyong123.appspot.com/
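Alternatively to the Eclipse Deploy button, the App Engine Java SDK also bundles an appcfg tool that can upload the same project from a terminal. This is only a sketch: the SDK path and the war directory shown are assumptions based on the project layout above.

```
$ appengine-java-sdk/bin/appcfg.sh update HelloWorld/war
```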
Result:
Thus a hello world app and other simple web applications were created and deployed using Google App Engine successfully.
Aim:
To Simulate a cloud scenario using CloudSim and run a scheduling algorithm
that is not present in CloudSim.
Steps:
CloudSimExample1 finished!
RESULT:
Thus a cloud scenario was simulated using CloudSim and the scheduling algorithm was executed successfully.
Aim:
To Use GAE launcher to launch the web applications.
Steps:
Making your First Application
Paste http://localhost:8080 into your browser and you should see your application as follows:
Just for fun, edit the index.py to change the name "Chuck" to your own name and press Refresh in the browser to verify your updates.
Each time you press Refresh in your browser, you can see it retrieving the output with a GET request.
Dealing With Errors
With two files to edit, there are two general categories of errors that you may encounter. If you make a mistake in the app.yaml file, the App Engine will not start and your launcher will show a yellow icon near your application:
To get more detail on what is going wrong, take a look at the log for the application:
In this instance, the mistake is mis-indenting the last line in the app.yaml (line 8).
If you make a syntax error in the index.py file, a Python traceback error will appear in your browser.
The error you need to see is likely to be in the last few lines of the output; in this case I made a Python syntax error on line one of our one-line application.
Reference: http://en.wikipedia.org/wiki/Stack_trace
When you make a mistake in the app.yaml file, you must fix the mistake and attempt to start the application again.
If you make a mistake in a file like index.py, you can simply fix the file and
press refresh in your browser – there is no need to restart the server.
Result:
Thus the web applications were launched using the GAE launcher successfully.
Aim:
To Find a procedure to transfer the files from one virtual machine
to another virtual machine.
Steps:
1. You can copy a few (or more) lines using the copy & paste mechanism.
For this you need to share the clipboard between the host OS and the guest OS, by installing Guest Additions on both virtual machines (and probably setting the clipboard to bidirectional and restarting them). You copy from the guest OS into the clipboard that is shared with the host OS.
Then you paste from the host OS into the second guest OS.
2. You can enable drag and drop too with the same method (click on the machine: Settings, General, Advanced, Drag and Drop: set to Bidirectional).
3. You can have common Shared Folders on both virtual machines and use one of the shared directories as a buffer for copying.
Installing Guest Additions also gives you the possibility to set Shared Folders.
When you put a file into a shared folder from the host OS or from a guest OS, it is immediately visible to the other. (Keep in mind that problems can arise with the date/time of the files when the clock settings differ between the virtual machines.)
If you share the same folder on several machines you can exchange files directly by copying them into this folder.
4. You can use the usual method to copy files between two different computers, a client-server application (e.g. scp with sshd active for Linux, WinSCP, etc.; you can find more information about SSH servers online).
You need an active server (sshd) on the receiving machine and a client on the sending machine. Of course you need to have the authorization set up (via password or, better, via an automatic authentication method).
Note: many Linux/Ubuntu distributions install sshd by default; you can check whether it is running with pgrep sshd from a shell. You can install it with sudo apt-get install openssh-server.
5. You can mount part of the file system of one virtual machine via NFS or SSHFS on the other, or you can share files and directories with Samba.
You may find interesting the article "Sharing files between guest and host without VirtualBox shared folders", which has detailed step-by-step instructions.
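As a concrete sketch of method 4, the scp transfer could look like the following; the address 192.168.0.18 and the user oneadmin are placeholders for your own receiving VM:

```
$ scp myfile.txt [email protected]:/home/oneadmin/
$ scp -r myfolder/ [email protected]:/home/oneadmin/
```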
You should remember that you are dealing with a small network of machines with different operating systems, and in particular:
Each virtual machine has its own operating system running on it and acts as a physical machine.
Each virtual machine is an instance of a program owned by a user in the host operating system and is subject to the restrictions of that user in the host OS.
E.g., let us say that Hastur and Meow are users of the host machine, but they have not allowed each other to see their directories (no read/write/execute authorization). When each of them runs a virtual machine, for the host OS those virtual machines are two normal programs owned by Hastur and Meow, and they cannot see the private directory of the other user. This is a restriction due to the host OS. It is easy to overcome: it is enough to grant read/write/execute authorization on a directory, or to choose a different directory in which both users can read/write/execute.
Windows likes the mouse and Linux the keyboard. :-)
That is, I suggest you enable Drag & drop to be comfortable with the Windows machines, and Shared Folders to be comfortable with Linux.
When you need to be fast with Linux you will feel the need for ssh-keygen, and to generate SSH keys once so that you can copy files to/from a remote machine without typing a password anymore. This way bash auto-completion works remotely too!
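The passwordless setup mentioned above can be sketched as follows, run on the sending machine; the remote user and host are placeholders:

```
$ ssh-keygen -t rsa                       # accept defaults, empty passphrase
$ ssh-copy-id [email protected]
$ scp myfile.txt [email protected]:~/   # no password prompt now
```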
PROCEDURE:
Steps:
1. Open Browser, type localhost:9869
2. Login using username: oneadmin, password: opennebula
3. Then follow the steps to migrate VMs
a. Click on infrastructure
b. Select clusters and enter the cluster name
c. Then select host tab, and select all host
d. Then select Vnets tab, and select all vnet
e. Then select datastores tab, and select all datastores
f. And then choose host under infrastructure tab
g. Click on + symbol to add new host, name the host then click on create.
4. On Instances, select the VMs to migrate, then follow these steps:
a. Click on the 8th icon; the drop-down list is displayed
b. Select Migrate; the popup window is displayed
c. There, select the target host to migrate to, then click on Migrate.
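The same migration can also be done from the OpenNebula command line instead of Sunstone; a sketch, assuming VM id 0 and the target host SACET from the screenshots below:

```
$ onevm list
$ onevm migrate 0 SACET
```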
Before migration
Host:SACET
Host:one-sandbox
Migrate Virtual Machine
After Migration:
Host:one-sandbox
Host:SACET
APPLICATIONS:
Easily migrate your virtual machines from one PC to another.
Result:
Thus, the file transfer between VM was successfully completed.
EX NO.:7
DATE :
Install Hadoop single node cluster and run simple
applications like wordcount.
Aim:
To Install Hadoop single node cluster and run simple
applications like wordcount.
Steps:
Install Hadoop
Step 1: Download the Java 8 package and save the file in your home directory.
Step 5: Add the Hadoop and Java paths in the bash file (.bashrc). Open the .bashrc file. Now, add the Hadoop and Java paths as shown below.
Command: vi .bashrc
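The additions this step describes typically look like the following sketch; both install paths are assumptions, so adjust them to wherever you unpacked Hadoop and the JDK:

```shell
# Sketch of the .bashrc additions; HADOOP_HOME and JAVA_HOME paths
# are assumptions -- point them at your actual install directories.
export HADOOP_HOME="$HOME/hadoop-2.7.3"
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH="$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$JAVA_HOME/bin"
```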
For applying all these changes to the current Terminal, execute the source command.
Command: source .bashrc
To make sure that Java and Hadoop have been properly installed on your system and can be
accessed through the Terminal, execute the java -version and hadoop version commands.
Command: cd hadoop-2.7.3/etc/hadoop/
Command: ls
All the Hadoop configuration files are located in hadoop-2.7.3/etc/hadoop directory as you can
see in the snapshot below:
core-site.xml informs Hadoop daemon where NameNode runs in the cluster. It contains
configuration settings of Hadoop core such as I/O settings that are common to HDFS &
MapReduce.
Command: vi core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
Step 8: Edit hdfs-site.xml and edit the property mentioned below inside
configuration tag:
Command: vi hdfs-site.xml
Fig: Hadoop Installation – Configuring hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.permission</name>
<value>false</value>
</property>
</configuration>
Step 9: Edit the mapred-site.xml file and edit the property mentioned below
inside configuration tag:
In some cases, the mapred-site.xml file is not available. So, we have to create the mapred-site.xml file using the mapred-site.xml template.
Command: vi mapred-site.xml
Fig: Hadoop Installation – Configuring mapred-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
Step 10: Edit yarn-site.xml and edit the property mentioned below inside the
configuration tag:
Command: vi yarn-site.xml
<?xml version="1.0"?>
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.auxservices.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</configuration>
Step 11: Edit hadoop-env.sh and add the Java Path as mentioned below:
hadoop-env.sh contains the environment variables that are used in the script to run Hadoop, like the Java home path, etc.
Command: vi hadoop-env.sh
Command: cd
Command: cd hadoop-2.7.3
This formats the HDFS via the NameNode. This command is executed only the first time.
Formatting the file system means initializing the directory specified by the dfs.name.dir variable.
Never format an up-and-running Hadoop filesystem: you will lose all your data stored in the HDFS.
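The format command itself appears to have dropped out of the text above; for Hadoop 2.7.3 it is usually run once from the Hadoop home directory as:

```
$ bin/hadoop namenode -format
```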
Step 13: Once the NameNode is formatted, go to hadoop-2.7.3/sbin directory and start all the daemons.
Command: cd hadoop-2.7.3/sbin
Either you can start all daemons with a single command or do it individually.
Command: ./start-all.sh
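If you prefer starting the daemons individually rather than with start-all.sh, the equivalent commands in hadoop-2.7.3/sbin are roughly:

```
$ ./hadoop-daemon.sh start namenode
$ ./hadoop-daemon.sh start datanode
$ ./yarn-daemon.sh start resourcemanager
$ ./yarn-daemon.sh start nodemanager
$ ./mr-jobhistory-daemon.sh start historyserver
```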
The NameNode is the centerpiece of an HDFS file system. It keeps the directory tree of all files stored in the HDFS and tracks all the files stored across the cluster.
On startup, a DataNode connects to the NameNode and responds to requests from the NameNode for different operations.
Start ResourceManager:
ResourceManager is the master that arbitrates all the available cluster resources and
thus helps in managing the distributed applications running on the YARN system.
Its work is to manage each NodeManager and each application's ApplicationMaster.
Start NodeManager:
The NodeManager in each machine framework is the agent which is responsible for
managing containers, monitoring their resource usage and reporting the same to the
ResourceManager.
Start JobHistoryServer:
JobHistoryServer is responsible for servicing all job-history-related requests from clients.
Step 14: To check that all the Hadoop services are up and running, run the below
command.
Command: jps
Result:
Thus the Hadoop single node cluster was installed and simple applications were executed successfully.
EX NO. : 8
DATE:
Creating and Executing Your First Container Using Docker.
AIM:
To create and execute your first container using Docker.
$ docker -h
Flag shorthand -h has been deprecated, please use --help
...
Management Commands:
builder Manage builds
config Manage Docker configs
container Manage containers
engine Manage the docker engine
image Manage images
network Manage networks
node Manage Swarm nodes
plugin Manage plugins
secret Manage Docker secrets
service Manage services
stack Manage Docker stacks
swarm Manage Swarm
system Manage Docker
trust Manage trust on Docker images
volume Manage volumes
The Docker command line can be used to manage several features of the Docker Engine. In this lab, we
will mainly focus on the container command.
If podman is installed, you can run the alternative command for comparison.
sudo podman -h
docker version
Client:
Version: 19.03.6
...
We are going to use the Docker CLI to run our first container.
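The run command this paragraph refers to seems to be missing from the text; a likely reconstruction, which starts an Ubuntu container running top in your terminal, is:

```
$ docker container run -it ubuntu top
```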
top is a linux utility that prints the processes on a system and orders them by resource
consumption. Notice that there is only a single process in this output: it is the top process itself.
We don't see other processes from our host in this list because of the PID namespace isolation.
Containers use linux namespaces to provide isolation of system resources from other containers or
the host. The PID namespace provides isolation for process IDs. If you run top while inside the
container, you will notice that it shows the processes within the PID namespace of the container,
which is much different than what you can see if you ran top on the host.
Even though we are using the ubuntu image, it is important to note that our container does not have its own kernel. It uses the kernel of the host, and the ubuntu image is used only to provide the file system and tools available on an Ubuntu system.
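One quick way to see that the container shares the host kernel is to compare the kernel version reported inside and outside a container; both commands should print the same version string:

```
$ uname -r                                # on the host
$ docker container run ubuntu uname -r    # inside an ubuntu container
```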
3. Inspect the container with docker container exec
The docker container exec command is a way to "enter" a running container's namespaces with a
new process.
Open a new terminal. On cognitiveclass.ai, select Terminal > New Terminal.
If you are using play-with-docker.com, to open a new terminal connected to node1, click "Add New Instance" on the lefthand side, then ssh from node2 into node1 using the IP that is listed by node1. For example:
[node2] (local) [email protected] ~
$ ssh 192.168.0.18
[node1] (local) [email protected] ~
$
In the new terminal, use the docker container ls command to get the ID of the running container
you just created.
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS
PORTS NAMES
b3ad2a23fab3 ubuntu "top" 29 minutes ago Up 29 minutes
goofy_nobel
Then use that id to run bash inside that container using the docker container exec command. Since we are using bash and want to interact with this container from our terminal, use the -it flags to run using interactive mode while allocating a pseudo-terminal.
$ docker container exec -it b3ad2a23fab3 bash
root@b3ad2a23fab3:/#
And Voila! We just used the docker container exec command to "enter" our container's
namespaces with our bash process. Using docker container exec with bash is a common pattern to
inspect a docker container.
Notice the change in the prefix of your terminal. e.g. root@b3ad2a23fab3:/. This is an indication
that we are running bash "inside" of our container.
From the same terminal, run ps -ef to inspect the running processes.
root@b3ad2a23fab3:/# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 20:34 ? 00:00:00 top
root 17 0 0 21:06 ? 00:00:00 bash
root 27 17 0 21:14 ? 00:00:00 ps -ef
You should see only the top process, the bash process and our ps process.
For comparison, exit the container, and run ps -ef or top on the host. These commands will work on Linux or Mac. For Windows, you can inspect the running processes using tasklist.
root@b3ad2a23fab3:/# exit
exit
$ ps -ef
# Lots of processes!
Technical Deep Dive: PID is just one of the Linux namespaces that provide containers with isolation of system resources. Other Linux namespaces include:
- MNT - Mount and unmount directories without affecting other namespaces
- NET - Containers have their own network stack
- IPC - Isolated interprocess communication mechanisms such as message queues
- USER - Isolated view of users on the system
- UTS - Set hostname and domain name per container
These namespaces together provide the isolation for containers that allows them to run together securely and without conflict with other containers running on the same system. Next, we will demonstrate different uses of containers and the benefit of isolation as we run multiple containers on the same host.
4. Clean up the container running the top process by typing <ctrl>-c, then list all containers and remove them by their id.
docker ps -a
We are using a couple of new flags here. The --detach flag will run this container in the background. The --publish flag publishes port 80 in the container (the default port for nginx) via port 8080 on our host. Remember that the NET namespace gives the processes of the container their own network stack. The --publish flag is a feature that allows us to expose networking through the container onto the host.
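The nginx run command being described appears to have dropped out of the text; putting the two flags together, it would look like:

```
$ docker container run --detach --publish 8080:80 nginx
```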
3. Access the nginx server on localhost:8080.
curl localhost:8080
will return the HTML home page of Nginx,
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
4. If you are using play-with-docker, look for the 8080 link near the top of the page; or, if you are running a Docker client with access to a local browser, open localhost:8080 in the browser.
5. Run a mongo DB server
Now, run a mongoDB server. We will use the official mongoDB image from the Docker Hub.
Instead of using the latest tag (which is the default if no tag is specified), we will use a specific
version of the mongo image: 4.4.
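Combining the flags above with the 4.4 tag, the mongoDB run command would look like the following; the 8081 host port matches the container listing shown later, and 27017 is mongoDB's default port:

```
$ docker container run --detach --publish 8081:27017 mongo:4.4
```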
Step 3: Clean Up
Completing this lab results in a bunch of running containers on your host. Let's clean these up.
1. First get a list of the containers running using docker container ls.
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS
PORTS NAMES
d6777df89fea nginx "nginx -g 'daemon ..." 3 minutes ago Up 3 minutes
0.0.0.0:8080->80/tcp nginx
ead80a0db505 mongo "docker-entrypoint..." 3 minutes ago Up 3 minutes
0.0.0.0:8081->27017/tcp mongo
af549dccd5cf ubuntu "top" 8 minutes ago Up 8 minutes
priceless_kepler
2. Next, run docker container stop [container id] for each container in the list. You can also use the
names of the containers that you specified before.
$ docker container stop d67 ead af5
d67
ead
af5
3. Remove the stopped containers
docker system prune is a really handy command to clean up your system. It will remove any
stopped containers, unused volumes and networks, and dangling images.
$ docker system prune
WARNING! This will remove:
- all stopped containers
- all volumes not used by at least one container
- all networks not used by at least one container
- all dangling images
Are you sure you want to continue? [y/N] y
Deleted Containers:
7872fd96ea4695795c41150a06067d605f69702dbcb9ce49492c9029f0e1b44b
60abd5ee65b1e2732ddc02b971a86e22de1c1c446dab165462a08b037ef7835c
31617fdd8e5f584c51ce182757e24a1c9620257027665c20be75aa3ab6591740
RESULT:
Thus the first containers (Ubuntu, Nginx and MongoDB) were created and executed using Docker successfully.
EX NO. : 9
DATE:
Run a Container from Docker Hub
AIM:
To run a container from Docker Hub.
PROCEDURE:
The following section contains step-by-step instructions on how to get started with Docker Hub.
To create a repository:
1. Sign in to Docker Hub.
2. On the Repositories page, select Create repository.
3. Name it <your-username>/my-private-repo.
4. Set the visibility to Private.
5. Select Create.
You've created your first repository.
You need to download Docker Desktop to build, push, and pull container images.
2. Sign in to Docker Desktop using the Docker ID you created in step one.
Step 4: Pull and run a container image from Docker Hub
i. In your terminal, run docker pull hello-world to pull the image from Docker Hub. You should
see output similar to:
ii. Run docker run hello-world to run the image locally. You should see output similar to:
4. The Docker daemon streamed that output to the Docker client, which sent
it to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
Step 5: Build and push a container image to Docker Hub from your computer
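Step 5 as printed jumps straight to running the image; before steps iii and iv you would normally build and tag it from your project directory, roughly as below (the Dockerfile location in the current directory is an assumption):

```
$ docker build -t <your_username>/my-private-repo .
```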
iii. Run docker run <your_username>/my-private-repo to test your Docker image locally.
iv. Run docker push <your_username>/my-private-repo to push your Docker image to Docker
Hub. You should see output similar to:
v. Your repository in Docker Hub should now display a new latest tag under Tags:
RESULT:
Thus a container image was pulled from Docker Hub, run locally, and an image was pushed to Docker Hub successfully.