
Assignment No 4

Title: Create a VM using Docker, write your own Dockerfile, and publish it.

References: https://docs.docker.com/get-started/overview/

Note:- This assignment has 2 parts:


1) Install and use Docker
2) Create your own image and share it or publish it. (Image containing base: Ubuntu and software
installed: Apache web server)

 What is Docker:
Docker is a set of platform as a service (PaaS) products that use OS-level virtualization to deliver software in
packages called containers. Containers are isolated from one another and bundle their own software, libraries and
configuration files; they can communicate with each other through well-defined channels. All containers are run by a
single operating system kernel and therefore use fewer resources than virtual machines.

 What is the use of Docker:


Fast, consistent delivery of your applications
Docker streamlines the development lifecycle by allowing developers to work in standardized environments using local
containers which provide your applications and services. Containers are great for continuous integration and continuous
delivery (CI/CD) workflows.

Consider the following example scenario:

 Your developers write code locally and share their work with their colleagues using Docker containers.
 They use Docker to push their applications into a test environment and execute automated and manual tests.
 When developers find bugs, they can fix them in the development environment and redeploy them to the test
environment for testing and validation.
 When testing is complete, getting the fix to the customer is as simple as pushing the updated image to the
production environment.

Responsive deployment and scaling

Docker’s container-based platform allows for highly portable workloads. Docker containers can run on a developer’s
local laptop, on physical or virtual machines in a data center, on cloud providers, or in a mixture of environments.

Docker’s portability and lightweight nature also make it easy to dynamically manage workloads, scaling up or tearing
down applications and services as business needs dictate, in near real time.

Running more workloads on the same hardware

Docker is lightweight and fast. It provides a viable, cost-effective alternative to hypervisor-based virtual machines, so
you can use more of your compute capacity to achieve your business goals. Docker is perfect for high density
environments and for small and medium deployments where you need to do more with fewer resources.
 Docker Architecture / Docker Engine

Fig: Docker Engine

Docker Engine is a client-server application with these major components:


 A server which is a type of long-running program called a daemon process (the dockerd
command).
 A REST API which specifies interfaces that programs can use to talk to the daemon and
instruct it what to do.
 A command line interface (CLI) client (the docker command).

Fig: Docker Architecture


The Docker daemon
The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images,
containers, networks, and volumes. A daemon can also communicate with other daemons to manage Docker services.

The Docker client


The Docker client (docker) is the primary way that many Docker users interact with Docker. When you use
commands such as docker run, the client sends these commands to dockerd, which carries them out. The
docker command uses the Docker API. The Docker client can communicate with more than one daemon.
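As an illustration of this client/daemon split, the same docker CLI can be pointed at a different daemon with the global -H flag; the remote hostname and port below are made-up examples, not values from this assignment:

$ docker ps                               # request goes to the local dockerd
$ docker -H tcp://remote-host:2375 ps     # request goes to a dockerd running on another machine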

Docker registries
A Docker registry stores Docker images. Docker Hub is a public registry that anyone can use, and Docker is configured
to look for images on Docker Hub by default. You can even run your own private registry.

When you use the docker pull or docker run commands, the required images are pulled from your configured
registry. When you use the docker push command, your image is pushed to your configured registry.
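For example (the repository name myuser/myapp below is hypothetical):

$ docker pull ubuntu              # pulled from Docker Hub, the default registry
$ docker push myuser/myapp:1.0    # pushed to the registry implied by the image name (Docker Hub here)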

Docker objects
When you use Docker, you are creating and using images, containers, networks, volumes, plugins, and other objects.
This section is a brief overview of some of those objects.

Images
An image is a read-only template with instructions for creating a Docker container. Often, an image is based on another
image, with some additional customization. For example, you may build an image which is based on the ubuntu
image, but installs the Apache web server and your application, as well as the configuration details needed to make your
application run.

You might create your own images or you might only use those created by others and published in a registry. To build
your own image, you create a Dockerfile with a simple syntax for defining the steps needed to create the image and run
it. Each instruction in a Dockerfile creates a layer in the image. When you change the Dockerfile and rebuild the image,
only those layers which have changed are rebuilt. This is part of what makes images so lightweight, small, and fast,
when compared to other virtualization technologies.
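As a concrete sketch for this assignment (Ubuntu base with the Apache web server), a minimal Dockerfile might look like the following; the Ubuntu tag, and the assumption that an index.html file sits next to the Dockerfile, are illustrative choices rather than part of the original text:

# Base image
FROM ubuntu:20.04
# Install the Apache web server (apache2 is the Ubuntu package name)
RUN apt-get update && apt-get install -y apache2 && rm -rf /var/lib/apt/lists/*
# Copy a sample page into Apache's default document root
COPY index.html /var/www/html/
# Expose the default HTTP port
EXPOSE 80
# Run Apache in the foreground so the container keeps running
CMD ["apachectl", "-D", "FOREGROUND"]

Each instruction (FROM, RUN, COPY, EXPOSE, CMD) produces a layer, which is why rebuilding after a small change to the Dockerfile is fast.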

Containers
A container is a runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker
API or CLI. You can connect a container to one or more networks, attach storage to it, or even create a new image based
on its current state.
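For instance, a typical container lifecycle with the CLI looks like this (the image name mywebimage and the port mapping are assumptions for illustration):

$ sudo docker run -d --name web -p 8080:80 mywebimage   # create and start a container
$ sudo docker stop web                                   # stop it
$ sudo docker start web                                  # start it again
$ sudo docker rm -f web                                  # remove it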

 Steps to install Docker on Ubuntu (https://docs.docker.com/engine/install/ubuntu/)

You can install docker in 3 ways:


1) Install using the repository (This is the recommended approach)
2) Install from a package
3) Install using the convenience script (the easiest way)

We will use the first method, i.e. install using the repository.

Step 1: Before starting the installation, remove any old installation by using the command:
$ sudo apt-get remove docker docker-engine docker.io containerd runc
Step 2: Update the package index by using the command:
$ sudo apt-get update

Step 3: Install packages to allow apt to use a repository over HTTPS:


$ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common

Step 4: Add Docker’s official GPG key:

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -


Step 5: Verify that you now have the key with the fingerprint 9DC8 5822 9FC7 DD38 854A E2D8 8D81 803C 0EBF CD88, by searching
for the last 8 characters of the fingerprint.

$ sudo apt-key fingerprint 0EBFCD88

Step 6: Use the following command to set up the stable repository. To add the nightly or test repository, add the
word nightly or test (or both) after the word stable in the commands below.

$ sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
Install Docker Engine
Step 7: Update the apt package index and install the latest version of Docker Engine and containerd:

$ sudo apt-get update


$ sudo apt-get install docker-ce docker-ce-cli containerd.io

Step 8: Verify that Docker Engine is installed correctly by running the hello-world image:

$ sudo docker run hello-world
Uninstall Docker Engine
Step 1: Uninstall the Docker Engine, CLI, and Containerd packages:

$ sudo apt-get purge docker-ce docker-ce-cli containerd.io

Step 2: Images, containers, volumes, or customized configuration files on your host are not automatically removed. To delete all images, containers, and volumes:
$ sudo rm -rf /var/lib/docker
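The official uninstall instructions also remove containerd's state; if that applies to your setup, the following may be needed as well (path taken from the Docker docs):

$ sudo rm -rf /var/lib/containerd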

Docker Compose
Compose is a tool for defining and running multi-container Docker applications.
With Compose, you use a YAML file to configure your application’s services.
Then, with a single command, you create and start all the services from your
configuration. To learn more about all the features of Compose, see the list of
features.

Compose works in all environments: production, staging, development, testing, as well as CI workflows. You can learn
more about each case in Common Use Cases.

Using Compose is basically a three-step process:

1. Define your app’s environment with a Dockerfile so it can be reproduced anywhere.


2. Define the services that make up your app in docker-compose.yml so they can be run together in an
isolated environment.

3. Run docker-compose up and Compose starts and runs your entire app.
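A minimal docker-compose.yml sketch for a single web service is shown below; the nginx image and the 8080:80 port mapping are illustrative assumptions, not values from this assignment:

version: "3"
services:
  web:
    image: nginx
    ports:
      - "8080:80"

With this file in the current directory, docker-compose up starts the service and docker-compose down stops and removes it.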
Docker Desktop
Docker Desktop is an easy-to-install application for your Mac or Windows
environment that enables you to build and share containerized applications and
microservices. Docker Desktop includes Docker Engine, Docker CLI client, Docker
Compose, Notary, Kubernetes, and Credential Helper.

Docker Command Line (commands)


Ref: https://docs.docker.com/engine/reference/commandline/docker/

Commands

1) docker -v - Display the Docker version
2) docker info - Display system-wide information

docker info [OPTIONS]


3) docker history - Show the history of an image

docker history [OPTIONS] IMAGE

4) docker image - Manage images

a) docker ps - List containers

docker ps [OPTIONS]

b) docker images - List images

docker images [OPTIONS] [REPOSITORY[:TAG]]

c) docker rmi -Remove one or more images

docker rmi [OPTIONS] IMAGE [IMAGE...]


Install the TensorFlow image, or any other image you want, from Docker Hub.

Here, I am pulling the plain nginx image from Docker Hub.


Steps

1) Log in to your Docker Hub account.

2) Search for the image name in the search box (I have searched for the nginx image).
3) Open the image link.

4) Copy the command.


5) Run the copied command in the terminal:

docker pull nginx


6) List the images by using the command: sudo docker images

7) Run the downloaded image by using the command (the ID below is the image ID shown by docker images):

sudo docker run -i -t d85a2075702d /bin/bash

-i => interactive (keep STDIN open)

-t => allocate a pseudo-terminal
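As a side note (not part of the original steps), a web-server image like nginx is more commonly run detached with a published port so it can be reached from the host; the 8080:80 mapping here is an illustrative assumption:

$ sudo docker run -d --name mynginx -p 8080:80 nginx
$ curl http://localhost:8080    # should return the nginx welcome page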

5) docker rm - Remove one or more containers

docker rm [OPTIONS] CONTAINER [CONTAINER...]

Docker Hub
Ref: https://docs.docker.com/docker-hub/

Docker Hub is a service provided by Docker for finding and sharing container
images with your team. It provides the following major features:
 Repositories: Push and pull container images.
 Teams & Organizations: Manage access to private repositories of container images.
 Official Images: Pull and use high-quality container images provided by Docker.
 Publisher Images: Pull and use high-quality container images provided by external vendors.
 Builds: Automatically build container images from GitHub and Bitbucket and push them to Docker Hub.
 Webhooks: Trigger actions after a successful push to a repository to integrate Docker Hub with other services.

Step 1: Sign up for Docker Hub


Start by creating an account.

Step 2: Write your own Dockerfile, build the image, and push it.


1. Run docker build -t reverseproxy .

2. Run docker-compose up


3. Open your browser and navigate to http://localhost:8080 to make sure our html page is being served
correctly.

4. Open your browser and navigate to http://localhost:8081 to make sure our html page is being served
correctly.
5. To stop the containers gracefully: docker-compose stop

6. To remove (and stop) the containers created by docker-compose up: docker-compose down

7. Push the image to Docker Hub.
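A minimal sketch of the publish step, assuming the image was built with the tag reverseproxy as in step 1; the Docker Hub username your-dockerhub-username and the repository name ubuntu-apache are placeholders, not values from the original text:

$ sudo docker login
$ sudo docker tag reverseproxy your-dockerhub-username/ubuntu-apache:latest
$ sudo docker push your-dockerhub-username/ubuntu-apache:latest

After the push completes, the image appears under your repositories on Docker Hub and (for public repositories) can be pulled by anyone with docker pull your-dockerhub-username/ubuntu-apache.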
