CCS335 - Cloud Computing Lab Manual
BACHELOR OF ENGINEERING
2023 - 2024
FIFTH SEMESTER
CERTIFICATE
Certified that this is a bonafide record of work done by
Name :
University Reg.No :
Semester :
Branch :
Year :
DATE:
Install VirtualBox / VMware Workstation with different flavours of Linux or
Windows OS on top of Windows 7 or 8.
Aim:
To install VirtualBox / VMware Workstation with different flavours of Linux or Windows
OS on top of Windows 7 or 8.
PROCEDURE:
6. Once the installation completes, the VirtualBox icon is shown on the desktop screen.
Steps to import the OpenNebula sandbox:
1. Open VirtualBox.
2. Select File > Import Appliance.
3. Browse to the OpenNebula-Sandbox-5.0.ova file.
4. Then go to Settings, select USB and choose USB 1.1.
5. Then start the OpenNebula VM.
6. Login using username: root, password: opennebula
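If you prefer the host's command line, the same import and first boot can be done with VBoxManage. The sketch below is an assumption: the .ova path and the imported VM name may differ on your machine.
# Import the appliance, confirm it is registered, and boot it without a GUI window
VBoxManage import OpenNebula-Sandbox-5.0.ova
VBoxManage list vms
VBoxManage startvm "OpenNebula-Sandbox-5.0" --type headless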
Steps to create a virtual machine through OpenNebula:
1. Open a browser and type localhost:9869.
2. Login using username: oneadmin, password: opennebula
3. Click on Instances, select VMs, then follow the steps to create a virtual machine:
a. Expand the + symbol.
b. Select the user oneadmin.
c. Then enter the VM name, number of instances and CPU.
d. Then click on the Create button.
e. Repeat steps c and d to create more than one VM.
APPLICATIONS:
There are various applications of cloud computing in today's networked world. Many search engines and social
websites use the concept of cloud computing, such as www.amazon.com, hotmail.com, facebook.com,
linkedin.com, etc. The advantages of cloud computing with respect to scalability are reduced risk, low-cost testing,
the ability to segment the customer base, and auto-scaling based on application load.
RESULT:
Thus the procedure to run virtual machines of different configurations was carried out successfully.
EX.NO.:2
DATE:
Install a C compiler in the virtual machine created using VirtualBox and
execute simple programs.
Aim:
To install a C compiler in the virtual machine created using VirtualBox and
execute simple programs.
PROCEDURE:
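A minimal command-line sketch for installing gcc and running a test program in an Ubuntu/Debian guest is shown below. The package manager, file name and program text are assumptions; Red Hat-based guests would use yum or dnf instead.
# Install the GNU C compiler inside the guest OS
sudo apt-get update && sudo apt-get install -y gcc
# Create a small test program (hypothetical file name hello.c)
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) {
    printf("Hello from the virtual machine!\n");
    return 0;
}
EOF
# Compile and run it
gcc hello.c -o hello
./hello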
APPLICATIONS:
All kinds of simple programs can be compiled and run in the virtualized (grid) environment just as on a physical machine.
RESULT:
Thus a C compiler was installed in the virtual machine and simple programs were executed successfully.
EX.NO.:3 Install Google App Engine and Create Hello World and Other Simple Web Applications Using Python/Java
DATE:
Aim:
To install Google App Engine and create a hello world app and other simple web
applications using Python/Java.
Procedure:
Figure – Deselect "Google Web Toolkit", and link your GAE Java SDK via the "Configure
SDK" link.
Click Finish; the Google Plugin for Eclipse will generate a sample project automatically.
3. Hello World
Review the generated project directory.
Nothing special, a standard Java web project structure.
HelloWorld/
src/
...Java source code...
META-INF/
...other configuration...
war/
...JSPs, images, data files...
WEB-INF/
...app configuration...
lib/
...JARs for libraries...
classes/
...compiled classes...
The extra file is "appengine-web.xml"; Google App Engine needs it to run and deploy the
application.
File : appengine-web.xml (a minimal skeleton; the application ID element is filled in later, before deployment)
<?xml version="1.0" encoding="utf-8"?>
<appengine-web-app xmlns="http://appengine.google.com/ns/1.0">
<application></application>
<version>1</version>
</appengine-web-app>
4. Run it locally
Right click on the project and run as “Web Application“.
Eclipse console :
//...
INFO: The server is running at http://localhost:8888/
30 Mac 2012 11:13:01 PM com.google.appengine.tools.development.DevAppServerImpl start
INFO: The admin console is running at http://localhost:8888/_ah/admin
Access the URL http://localhost:8888/ to see the output,
and also the hello world servlet at http://localhost:8888/helloworld.
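Both URLs can also be checked from a terminal while the development server is running (a quick sanity check; curl is assumed to be available):
curl http://localhost:8888/
curl http://localhost:8888/helloworld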
In this demonstration, I created an application ID, named “mkyong123”, and put it in appengine-
web.xml.
File : appengine-web.xml (with the application ID mkyong123 set)
<?xml version="1.0" encoding="utf-8"?>
<appengine-web-app xmlns="http://appengine.google.com/ns/1.0">
<application>mkyong123</application>
<version>1</version>
</appengine-web-app>
To deploy, see the following steps:
Figure 1.2 – Sign in with your Google account and click on the Deploy button.
Figure 1.3 – If everything is fine, the hello world web application will be deployed to this URL –
http://mkyong123.appspot.com/
Result:
Thus Google App Engine was installed, and a hello world app and other simple web applications were created and deployed successfully.
EX.NO.:4 Use GAE Launcher to Launch the Web Applications
DATE:
Aim:
To use the GAE launcher to launch the web applications.
Steps:
Now you need to create a simple application. We could use the “+” option to have the
launcher make us an application – but instead we will do it by hand to get a better sense of
what is going on.
Make a folder for your Google App Engine applications. I am going to make the folder
on my Desktop called "apps" – the path to this folder is:
Paste http://localhost:8080 into your browser and you should see your
application as follows:
Just for fun, edit the index.py to change the name “Chuck” to your own name
and press Refresh in the browser to verify your updates.
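For reference, a minimal two-file application of the kind the launcher expects can be created by hand as sketched below. The folder name, application id and file contents are assumptions (legacy Python runtime with a CGI-style handler), not the exact worksheet files.
# Create the application folder inside the "apps" folder on the Desktop
mkdir -p ~/Desktop/apps/ae-01-trivial && cd ~/Desktop/apps/ae-01-trivial
# app.yaml tells App Engine which runtime to use and how to route requests
cat > app.yaml <<'EOF'
application: ae-01-trivial
version: 1
runtime: python
api_version: 1

handlers:
- url: /.*
  script: index.py
EOF
# index.py prints a trivial page; change "Chuck" to your own name and refresh
cat > index.py <<'EOF'
print('Content-Type: text/html')
print('')
print('<pre>')
print('Hello there Chuck')
print('</pre>')
EOF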
You can watch the internal log of the actions that the web server is performing
when you are interacting with your application in the browser. Select your
application in the Launcher and press the Logs button to bring up a log window:
Each time you press Refresh in your browser – you can see it retrieving the
output with a GET request.
Dealing With Errors
With two files to edit, there are two general categories of errors that you may
encounter. If you make a mistake on the app.yaml file, the App Engine will not start
and your launcher will show a yellow icon near your application:
To get more detail on what is going wrong, take a look at the log for the application:
In this instance – the mistake is mis-indenting the last line in the app.yaml (line 8).
If you make a syntax error in the index.py file, a Python trace back error will appear in
your browser.
The error you need to see is likely to be the last few lines of the output – in this
case I made a Python syntax error on line one of our one-line application.
Reference: http://en.wikipedia.org/wiki/Stack_trace
When you make a mistake in the app.yaml file – you must fix the mistake
and attempt to start the application again.
If you make a mistake in a file like index.py, you can simply fix the file and
press refresh in your browser – there is no need to restart the server.
Result:
Thus the web applications were launched successfully using the GAE launcher.
EX.NO.:5 Simulate a Cloud Scenario Using CloudSim and Run a Scheduling Algorithm Not Present in CloudSim
DATE:
Aim:
To Simulate a cloud scenario using CloudSim and run a scheduling algorithm
that is not present in CloudSim.
Steps:
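A sketch of compiling and running the bundled first example from a terminal is shown below; the jar name, version and directory layout are assumptions for a CloudSim 3.0.x download. A custom scheduling algorithm would be added as another .java file under the examples tree and compiled the same way.
# Compile and run CloudSimExample1 against the CloudSim jar (paths are assumptions)
mkdir -p classes
javac -cp jars/cloudsim-3.0.3.jar -d classes examples/org/cloudbus/cloudsim/examples/CloudSimExample1.java
java -cp jars/cloudsim-3.0.3.jar:classes org.cloudbus.cloudsim.examples.CloudSimExample1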
The simulation run ends with the output line: CloudSimExample1 finished!
RESULT:
Thus a cloud scenario was simulated using CloudSim and the scheduling algorithm was executed successfully.
EX.NO.:6 Find a Procedure to Transfer the Files from One Virtual Machine to Another Virtual Machine
DATE:
Aim:
To Find a procedure to transfer the files from one virtual machine
to another virtual machine.
Steps:
1. You can copy a few (or more) lines with the copy & paste mechanism.
For this you need to share the clipboard between the host OS and the guest OS by installing
Guest Additions on both virtual machines (setting the clipboard to bidirectional
and restarting them). You copy from the first guest OS into the clipboard that is shared
with the host OS.
Then you paste from the host OS into the second guest OS.
2. You can enable drag and drop too with the same method (click on the
machine, Settings, General, Advanced, Drag and Drop: set to Bidirectional).
3. You can have common Shared Folders on both virtual machines and
use one of the shared directories as a buffer for copying.
Installing Guest Additions also gives you the possibility to set up Shared Folders.
As soon as you put a file in a shared folder from the host OS or from a guest OS, it is
immediately visible to the other. (Keep in mind that problems can arise with the
date/time of the files when the clocks are set differently on the
different virtual machines.)
If you share the same folder on several machines, you can exchange files
directly by copying them into this folder.
4. You can use the usual methods to copy files between two different computers with a
client-server application (e.g. scp with sshd active for Linux, WinSCP, etc.; you
can get some information about SSH servers, for example, here).
You need an active server (sshd) on the receiving machine and a client on
the sending machine, and of course the authorization must be set up
(via password or, better, via an automatic authentication method); an scp sketch is given after this list.
Note: many Linux/Ubuntu distributions install sshd by default; you can see if
it is running with pgrep sshd from a shell. You can install it with sudo apt-get
install openssh-server.
5. You can mount part of the file system of one virtual machine via NFS or
SSHFS on the other, or you can share files and directories with Samba.
You may find interesting the article Sharing files between guest and
host without VirtualBox shared folders with detailed step by step
instructions.
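As an illustration of method 4 above, a typical scp transfer looks like this (the host address, user names and paths are assumptions):
# Copy a file from the current machine to the receiving VM (sshd must be running there)
scp /home/user/report.txt user@192.168.1.101:/home/user/
# Copy in the opposite direction
scp user@192.168.1.101:/home/user/report.txt /home/user/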
You should remember that you are dealing with a small network of machines
with different operating systems, and in particular:
Each virtual machine has its own operating system running on it and acts
as a physical machine.
Each virtual machine is an instance of a program owned by a user on the
hosting operating system and is subject to the restrictions of that user in the
hosting OS.
E.g. let us say that Hastur and Meow are users of the hosting machine, but
they do not allow each other to see their directories (no read/write/execute
authorization). When each of them runs a virtual machine, for the hosting OS
those virtual machines are two normal programs owned by Hastur and Meow
and cannot see the private directory of the other user. This is a restriction due
to the hosting OS. It is easy to overcome: it is enough to grant
read/write/execute authorization on a directory, or to choose a different directory in which both
users can read/write/execute.
Windows likes the mouse and Linux the fingers. :-)
I mean that I suggest you enable Drag & drop to be comfortable with the Windows
machines, and the Shared Folders to be comfortable with Linux.
When you need to be fast with Linux, you will feel the need for ssh-keygen and
for generating SSH keys once, so you can copy files to/from a remote machine without typing
a password any more. This way bash auto-completion works remotely too!
PROCEDURE:
Steps:
1. Open a browser and type localhost:9869.
2. Login using username: oneadmin, password: opennebula
3. Then follow the steps to migrate VMs:
a. Click on Infrastructure.
b. Select Clusters and enter the cluster name.
c. Then select the Hosts tab, and select all hosts.
d. Then select the VNets tab, and select all vnets.
e. Then select the Datastores tab, and select all datastores.
f. And then choose Hosts under the Infrastructure tab.
g. Click on the + symbol to add a new host, name the host, then click on Create.
4. Click on Instances, select the VMs to migrate, then follow these steps (a CLI alternative is sketched after this list):
a. Click on the 8th icon; a drop-down list is displayed.
b. Select Migrate from it; a popup window is displayed.
c. In the popup, select the target host to migrate to, then click on Migrate.
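The same migration can also be triggered from the OpenNebula CLI on the front-end. The VM ID and target host ID below are assumptions; check the real IDs first with onevm list and onehost list.
# List VMs and hosts, then live-migrate VM 0 to host 1 (IDs are placeholders)
onevm list
onehost list
onevm migrate 0 1 --live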
Before migration
Host:SACET
Host:one-sandbox
After Migration:
Host:one-sandbox
Host:SACET
APPLICATIONS:
Easily migrate your virtual machine from one PC to another.
Result:
Thus the file transfer and migration between virtual machines were completed successfully.
EX NO.:7
DATE :
Install Hadoop single node cluster and run simple
applications like wordcount.
Aim:
To Install Hadoop single node cluster and run simple
applications like wordcount.
Steps:
Install Hadoop
Step 1: Download the Java 8 package. Save this file in your home
directory.
Step 5: Add the Hadoop and Java paths in the bash file (.bashrc). Open .bashrc:
Command: vi .bashrc
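The lines added to .bashrc are typically of the following form; the exact install locations are assumptions, matching a JDK and Hadoop 2.7.3 extracted into the home directory.
# Appended to ~/.bashrc (paths are assumptions)
export JAVA_HOME=$HOME/jdk1.8.0_101
export PATH=$PATH:$JAVA_HOME/bin
export HADOOP_HOME=$HOME/hadoop-2.7.3
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin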
For applying all these changes to the current Terminal, execute the source command.
Command: source .bashrc
To make sure that Java and Hadoop have been properly installed on your system and can be
accessed through the Terminal, execute the java -version and hadoop version commands.
Command: cd hadoop-2.7.3/etc/hadoop/
Command: ls
All the Hadoop configuration files are located in hadoop-2.7.3/etc/hadoop directory as you can
see in the snapshot below:
core-site.xml informs Hadoop daemon where NameNode runs in the cluster. It contains
configuration settings of Hadoop core such as I/O settings that are common to HDFS &
MapReduce.
Command: vi core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
Step 8: Edit hdfs-site.xml and edit the property mentioned below inside
configuration tag:
Command: vi hdfs-site.xml
Fig: Hadoop Installation – Configuring hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.permissions.enabled</name>
<value>false</value>
</property>
</configuration>
Step 9: Edit the mapred-site.xml file and edit the property mentioned below
inside the configuration tag:
In some cases, the mapred-site.xml file is not available, so we have to create the mapred-site.xml
file from the mapred-site.xml.template file.
Command: vi mapred-site.xml
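The property usually added here tells MapReduce jobs to run on YARN; the exact value is an assumption, since the file is not reproduced above. A sketch of creating the file from the bundled template:
# Create mapred-site.xml from the template and write the standard property
cp mapred-site.xml.template mapred-site.xml
cat > mapred-site.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
EOF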
Step 10: Edit yarn-site.xml and edit the property mentioned below inside the
configuration tag:
Command: vi yarn-site.xml
<?xml version="1.0"?>
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</configuration>
Step 11: Edit hadoop-env.sh and add the Java path as mentioned below:
hadoop-env.sh contains the environment variables that are used in the script to run Hadoop,
such as the Java home path.
Command: vi hadoop-env.sh
Command: cd
Command: cd hadoop-2.7.3
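Step 12: Format the NameNode. The command itself is an assumption here (the standard form, run from the hadoop-2.7.3 directory):
Command: bin/hadoop namenode -format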
This formats the HDFS via NameNode. This command is only executed for the first time.
Formatting the file system means initializing the directory specified by the dfs.name.dir
variable.
Never format a Hadoop file system that is up and running; you will lose all the data stored in
HDFS.
Step 13: Once the NameNode is formatted, go to hadoop-2.7.3/sbin directory and start all the daemons.
Command: cd hadoop-2.7.3/sbin
Either you can start all daemons with a single command or do it individually.
Command: ./start-all.sh
Start NameNode:
The NameNode is the centerpiece of an HDFS file system. It keeps the directory tree of all files
stored in HDFS and tracks all the files stored across the cluster.
Start DataNode:
On startup, a DataNode connects to the Namenode and it responds to the requests from
the Namenode for different operations.
Start ResourceManager:
ResourceManager is the master that arbitrates all the available cluster resources and
thus helps in managing the distributed applications running on the YARN system.
Its job is to manage each NodeManager and each application's
ApplicationMaster.
Start NodeManager:
The NodeManager in each machine framework is the agent which is responsible for
managing containers, monitoring their resource usage and reporting the same to the
ResourceManager.
Start JobHistoryServer:
JobHistoryServer is responsible for servicing all job history related requests from client.
Step 14: To check that all the Hadoop services are up and running, run the below
command.
Command: jps
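To run the wordcount application mentioned in the aim, the usual sequence is sketched below. The input file name and HDFS directories are assumptions; the examples jar path matches the hadoop-2.7.3 layout, and the commands are run from the hadoop-2.7.3 directory.
# Put a local text file into HDFS, run wordcount, and view the result
hdfs dfs -mkdir -p /input
hdfs dfs -put ~/sample.txt /input
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount /input /output
hdfs dfs -cat /output/part-r-00000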
Result:
Thus the Hadoop single node cluster was installed and simple applications like wordcount were executed
successfully.
EX.NO:8 Creating and Executing Your First Container Using Docker
DATE:
Aim:
To find a procedure for creating and executing the first container using Docker.
Steps:
Prerequisites
You must have access to a Docker client, either on localhost, by using a terminal from Theia - Cloud IDE at
https://labs.cognitiveclass.ai/tools/theiadocker, or by using Play with Docker, for example.
Get Started
$ docker -h
Flag shorthand -h has been deprecated, please use --help
...
Management Commands:
builder Manage builds
config Manage Docker configs
container Manage containers
engine Manage the docker engine
image Manage images
network Manage networks
node Manage Swarm nodes
plugin Manage plugins
secret Manage Docker secrets
service Manage services
stack Manage Docker stacks
swarm Manage Swarm
system Manage Docker
trust Manage trust on Docker images
volume Manage volumes
The Docker command line can be used to manage several features of the Docker Engine. In this lab, we will
mainly focus on the container command.
If podman is installed, you can run the alternative command for comparison.
sudo podman -h
docker version
Client:
Version: 19.03.6
...
You note that Docker installs both a Client and a Server: Docker Engine. For instance, if you run the same
command for podman, you will see only a CLI version, because podman runs daemonless and relies on an
OCI compliant container runtime (runc, crun, runv etc) to interface with the OS to create the running
containers.
We are going to use the Docker CLI to run our first container.
Use the docker container run command to run a container with the ubuntu image using the top command. The
-it flags allocate a pseudo-TTY and keep STDIN open, which top needs to work correctly.
$ docker container run -it ubuntu top
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
aafe6b5e13de: Pull complete
0a2b43a72660: Pull complete
18bdd1e546d2: Pull complete
8198342c3e05: Pull complete
f56970a44fd4: Pull complete
Digest: sha256:f3a61450ae43896c4332bda5e78b453f4a93179045f20c8181043b26b5e79028
The docker run command will result first in a docker pull to download the ubuntu image onto your host.
Once it is downloaded, it will start the container. The output for the running container should look like this:
top - 20:32:46 up 3 days, 17:40, 0 users, load average: 0.00, 0.01, 0.00
Tasks: 1 total, 1 running, 0 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.0 us, 0.1 sy, 0.0 ni, 99.9 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 2046768 total, 173308 free, 117248 used, 1756212 buff/cache
KiB Swap: 1048572 total, 1048572 free, 0 used. 1548356 avail Mem
top is a linux utility that prints the processes on a system and orders them by resource consumption. Notice
that there is only a single process in this output: it is the top process itself. We don't see other processes from
our host in this list because of the PID namespace isolation.
Containers use linux namespaces to provide isolation of system resources from other containers or the host.
The PID namespace provides isolation for process IDs. If you run top while inside the container, you will
notice that it shows the processes within the PID namespace of the container, which is much different than
what you can see if you ran top on the host.
Even though we are using the ubuntu image, it is important to note that our container does not have its own
kernel. It uses the kernel of the host, and the ubuntu image is used only to provide the file system and tools
available on an Ubuntu system.
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS
PORTS NAMES
b3ad2a23fab3 ubuntu "top" 29 minutes ago Up 29 minutes
goofy_nobel
Then use that id to run bash inside that container using the docker container exec command. Since we are
using bash and want to interact with this container from our terminal, use the -it flags to run using interactive
mode while allocating a pseudo-terminal.
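For example, with the container id taken from the docker container ls output above:
$ docker container exec -it b3ad2a23fab3 bash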
And Voila! We just used the docker container exec command to "enter" our container's namespaces with our
bash process. Using docker container exec with bash is a common pattern to inspect a docker container.
Notice the change in the prefix of your terminal. e.g. root@b3ad2a23fab3:/. This is an indication that we are
running bash "inside" of our container.
Note: This is not the same as ssh'ing into a separate host or a VM. We don't need an ssh server to connect
with a bash process. Remember that containers use kernel-level features to achieve isolation and that
containers run on top of the kernel. Our container is just a group of processes running in isolation on the
same host, and we can use docker container exec to enter that isolation with the bash process. After running
docker container exec, the group of processes running in isolation (i.e. our container) include top and bash.
From the same terminal, run ps -ef to inspect the running processes.
root@b3ad2a23fab3:/# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 20:34 ? 00:00:00 top
root 17 0 0 21:06 ? 00:00:00 bash
root 27 17 0 21:14 ? 00:00:00 ps -ef
You should see only the top process, bash process and our ps process.
For comparison, exit the container, and run ps -ef or top on the host. These commands will work on linux or
mac. For windows, you can inspect the running processes using tasklist.
root@b3ad2a23fab3:/# exit
exit
$ ps -ef
# Lots of processes!
Technical Deep Dive: PID is just one of the Linux namespaces that provides containers with isolation to
system resources. Other Linux namespaces include:
- MNT - Mount and unmount directories without affecting other namespaces
- NET - Containers have their own network stack
- IPC - Isolated interprocess communication mechanisms such as message queues
- USER - Isolated view of users on the system
- UTS - Set hostname and domain name per container
These namespaces together provide the isolation for containers that allow them to run together securely and
without conflict with other containers running on the same system. Next, we will demonstrate different uses
of containers and the benefit of isolation as we run multiple containers on the same host.
Note: Namespaces are a feature of the linux kernel. But Docker allows you to run containers on Windows
and Mac... how does that work? The secret is that embedded in the Docker product or Docker engine is a
linux subsystem. Docker open-sourced this linux subsystem to a new project: LinuxKit. Being able to run
containers on many different platforms is one advantage of using the Docker tooling with containers.
In addition to running linux containers on Windows using a linux subsystem, native Windows containers are
now possible due to the creation of container primitives on the Windows OS. Native Windows containers can
be run on Windows 10 or Windows Server 2016 or newer.
Note: if you run this exercise in a containerized terminal and execute the ps -ef command in the terminal, e.g.
in https://labs.cognitiveclass.ai, you will still see a limited set of processes after exiting the exec command.
You can try to run the ps -ef command in a terminal on your local machine to see all processes.
Clean up the container running the top process by typing <ctrl>-c, then list all containers and remove
them by their id:
docker ps -a
docker rm <CONTAINER ID>
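Next, run an Nginx web server container in the background. The command below is a sketch reconstructed from the flags described in the following paragraph; the image name and port mapping follow that description.
$ docker container run --detach --publish 8080:80 --name nginx nginx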
We are using a couple of new flags here. The --detach flag will run this container in the background.
The publish flag publishes port 80 in the container (the default port for nginx), via port 8080 on our host.
Remember that the NET namespace gives processes of the container their own network stack. The --publish
flag is a feature that allows us to expose networking through the container onto the host.
How do you know port 80 is the default port for nginx? Because it is listed in the documentation on the
Docker Hub. In general, the documentation for the verified images is very good, and you will want to refer to
them when running containers using those images.
We are also specifying the --name flag, which names the container. Every container has a name, if you don't
specify one, Docker will randomly assign one for you. Specifying your own name makes it easier to run
subsequent commands on your container since you can reference the name instead of the id of the container.
For example: docker container inspect nginx instead of docker container inspect 5e1.
Since this is the first time you are running the nginx container, it will pull down the nginx image from the
Docker Store. Subsequent containers created from the Nginx image will use the existing image located on
your host.
Nginx is a lightweight web server. You can access it on port 8080 on your localhost.
Access the nginx server on localhost:8080.
curl localhost:8080
will return the HTML home page of Nginx,
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
If you are using play-with-docker, look for the 8080 link near the top of the page, or if you run a Docker
client with access to a local browser, open localhost:8080 in the browser.
Run a mongo DB server
Now, run a mongoDB server. We will use the official mongoDB image from the Docker Hub. Instead of
using the latest tag (which is the default if no tag is specified), we will use a specific version of the mongo
image: 4.4.
$ docker container run --detach --publish 8081:27017 --name mongo mongo:4.4
Unable to find image mongo:4.4 locally
4.4: Pulling from library/mongo
Status: Downloaded newer image for mongo:4.4
Again, since this is the first time we are running a mongo container, we will pull down the mongo image
from the Docker Store. We are using the --publish flag to expose the 27017 mongo port on our host. We have
to use a port other than 8080 for the host mapping, since that port is already exposed on our host. Again refer
to the official docs on the Docker Hub to get more details about using the mongo image.
Access localhost:8081 to see some output from mongo.
curl localhost:8081
which will return a warning from MongoDB,
It looks like you are trying to access MongoDB over HTTP on the native driver port.
If you are using play-with-docker, look for the 8081 link near the top of the page.
Check your running containers with docker container ls:
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d6777df89fea nginx "nginx -g 'daemon ..." Less than a second ago Up 2 seconds 0.0.0.0:8080->80/tcp nginx
ead80a0db505 mongo "docker-entrypoint..." 17 seconds ago Up 19 seconds 0.0.0.0:8081->27017/tcp mongo
af549dccd5cf ubuntu "top" 5 minutes ago Up 5 minutes priceless_kepler
You should see that you have an Nginx web server container and a MongoDB container running on
your host. Note that we have not configured these containers to talk to each other.
You can see the "nginx" and "mongo" names that we gave to our containers, and the random name (in
my case "priceless_kepler") that was generated for the ubuntu container. You can also see the
port mappings that we specified with the --publish flag. For more detailed information on these running
containers you can use the docker container inspect [container id] command.
One thing you might notice is that the mongo container is running the docker-entrypoint command.
This is the name of the executable that is run when the container is started. The mongo image requires
some prior configuration before kicking off the DB process. You can see exactly what the script does
by looking at it on github. Typically, you can find the link to the github source from the image
description page on the Docker Store website.
Containers are self-contained and isolated, which means we can avoid potential conflicts between
containers with different system or runtime dependencies. For example: deploying an app that uses
Java 7 and another app that uses Java 8 on the same host. Or running multiple nginx containers that
all have port 80 as their default listening ports (if exposing on the host using the --publish flag, the
ports selected for the host will need to be unique). Isolation benefits are possible because of Linux
Namespaces.
Note: You didn't have to install anything on your host (other than Docker) to run these processes!
Each container includes the dependencies that it needs within the container, so you don't need to
install anything on your host directly.
Running multiple containers on the same host gives us the ability to fully utilize the resources (cpu,
memory, etc) available on single host. This can result in huge cost savings for an enterprise.
While running images directly from the Docker Hub can be useful at times, it is more useful to create
custom images, and refer to official images as the starting point for these images. We will dive into
building our own custom images in Lab 2.
Step 3: Clean Up
Completing this lab results in a bunch of running containers on your host. Let's clean these up.
1. First get a list of the containers running using docker container ls.
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS
PORTS NAMES
d6777df89fea nginx "nginx -g 'daemon ..." 3 minutes ago Up 3 minutes 0.0.0.0:8080-
>80/tcp nginx
ead80a0db505 mongo "docker-entrypoint..." 3 minutes ago Up 3 minutes
0.0.0.0:8081->27017/tcp mongo
af549dccd5cf ubuntu "top" 8 minutes ago Up 8 minutes
priceless_kepler
Next, run docker container stop [container id] for each container in the list. You can also use the names of
the containers that you specified before.
$ docker container stop d67 ead af5
Note: You only have to reference enough digits of the ID to be unique. Three digits is almost always enough.
Remove the stopped containers
docker system prune is a really handy command to clean up your system. It will remove any stopped
containers, unused volumes and networks, and dangling images.
$ docker system prune
WARNING! This will remove:
- all stopped containers
- all volumes not used by at least one container
- all networks not used by at least one container
- all dangling images
Are you sure you want to continue? [y/N] y
Deleted Containers:
Total reclaimed space: 12B
Result:
Thus creating and executing the first container using Docker was completed
successfully.
EX.NO:9 Run a Container from Docker Hub
DATE:
Aim:
To find a procedure to run a container from Docker Hub.
Steps:
The following section contains step-by-step instructions on how to get started with Docker Hub.
A Docker ID grants you access to Docker Hub repositories and lets you explore available images
from the community and verified publishers. You also need a Docker ID to share images on Docker
Hub.
To create a repository:
You need to download Docker Desktop to build, push, and pull container images.
1. In your terminal, run docker pull hello-world to pull the image from Docker Hub. You should see
output similar to:
$ docker pull hello-world
Using default tag: latest
latest: Pulling from library/hello-world
Status: Downloaded newer image for hello-world:latest
docker.io/library/hello-world:latest
2. Run docker run hello-world to run the image locally. You should see output similar to:
$ docker run hello-world
Hello from Docker!
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent
it to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
Step 5: Build and push a container image to Docker Hub from your computer
1. Start by creating a Dockerfile to specify your application as shown below:
# syntax=docker/dockerfile:1
FROM busybox
CMD echo "Hello world! This is my first Docker image."
2. Run docker build -t <your_username>/my-private-repo . to build your Docker image.
3. Run docker run <your_username>/my-private-repo to test your Docker image locally.
4. Run docker push <your_username>/my-private-repo to push your Docker image to Docker Hub. You
should see output similar to:
Note
You must be signed in to Docker Hub through Docker Desktop or the command line, and you must
also name your images correctly, as per the above steps.
Your repository in Docker Hub should now display a new latest tag under Tags
You've successfully:
Signed up for a Docker account
Created your first repository
Pulled an existing container image from Docker Hub
Built your own container image on your computer
Pushed it successfully to Docker Hub
Result:
Thus running a container from Docker Hub was completed successfully.