Synopsis-Docker Container

This paper discusses the growing security concerns associated with Docker images in software development and presents a CI/CD system to validate their security throughout the development lifecycle. It highlights the importance of both static and dynamic analysis in identifying vulnerabilities within Docker containers and emphasizes the need for user-friendly tools to enhance security. The findings demonstrate that automated checks can help developers make informed decisions regarding base images and improve the overall security of applications.


Abstract—Docker is popular within the software development community due to the versatility, portability, and scalability of containers. However, concerns over vulnerabilities have grown as the security of applications becomes increasingly dependent on the security of the images that serve as the applications' building blocks. As more development processes migrate to the cloud, validating the security of images pulled from various repositories is paramount. In this paper, we describe a continuous integration and continuous deployment (CI/CD) system that validates the security of Docker images throughout the software development life cycle. We introduce images with vulnerabilities and measure the effectiveness of our approach at identifying them. In addition, we use dynamic analysis to assess the security of Docker containers based on their behavior and show that it complements the static analyses typically used for security assessments.

I. INTRODUCTION
Containers have gained significant traction within the software development community because they allow developers to avoid the time-consuming configuration of libraries and dependencies. An image is a file that contains the code, configuration, and libraries required to run an application. Containers are instances of these images, with every instance having the same underlying dependencies. Because they encompass only the instructions and code necessary for the application to run, containers are lightweight compared to alternative approaches such as virtual machines (VMs). Multiple containers can run on a single physical or virtual machine, making them ideal for many phases of the software development cycle.

A microservice is a software architectural paradigm that is commonly used with containers. Microservices aim to make software development easier through optimal use of various resources. In essence, each software function is provisioned in a single application component. The components then use an application programming interface (API) to interact with each other. Microservices serve as building blocks to design and develop scalable and maintainable software. This approach decreases the dependencies between major software functions and minimizes the code base with which developers interact.

Docker is a popular Platform as a Service (PaaS) for containers. Many organizations such as Google and Amazon are incorporating Docker into their software development life cycle, as Docker allows developers to rapidly design, develop, test, and deploy applications while optimizing resource usage. Due to Docker's popularity, the development community has created repositories of Docker images, such as Docker Hub, to increase reusability and to encourage file sharing.

With the paradigm presented by containers and microservices, software development is increasingly dependent on small, reusable components that are developed independently and distributed by different organizations. This dependency, in turn, raises concerns regarding the security of the entire Docker image distribution pipeline. Architects at Docker now encourage developers and publishers to include risk analyses that treat the entire distribution pipeline itself as actively malicious. Introducing a multi-layered security mechanism at the Docker registry level may help prevent vulnerabilities from being introduced into Docker repositories.

BACKGROUND:
Malware analysis usually takes the form of examining files or executables to detect compromises. There are two categories of this analysis: static analysis and dynamic (or behavioral) analysis. This section describes both types of analysis as well as how they pertain to Docker images. Historically, software and hardware vendors used various scoring metrics to measure software vulnerabilities. The resulting lack of uniformity eventually led to the creation of the Common Vulnerabilities and Exposures (CVE) system [2]. CVEs provide a framework to quantify and assess vulnerabilities and exposures, and they also enable such information to be shared publicly. One common way to prevent vulnerabilities from being introduced into the distribution pipeline is to regularly scan Docker images against CVEs. Detecting vulnerabilities within Docker images encourages actions to address them [3].
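As a toy illustration of this kind of CVE check, the sketch below matches an image's package inventory against a vulnerability list. The inventory and the CVE mapping here are hard-coded placeholders; a real scanner would extract packages from the image layers and query an up-to-date vulnerability feed.

```python
# Hypothetical stand-in for a CVE database, mapping (package, version)
# pairs to known vulnerability identifiers. The two entries are real,
# well-known CVEs used purely as examples.
KNOWN_CVES = {
    ("openssl", "1.0.1f"): ["CVE-2014-0160"],  # Heartbleed
    ("bash", "4.3"): ["CVE-2014-6271"],        # Shellshock
}

def scan_image(packages):
    """Return the CVE IDs matching an image's (name, version) packages."""
    findings = []
    for pkg in packages:
        findings.extend(KNOWN_CVES.get(pkg, []))
    return findings
```

A scan of an image containing OpenSSL 1.0.1f would report the Heartbleed CVE, prompting the developer to rebuild on a patched base image.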
Scanning for CVEs can be considered part of static analysis, but the term covers a broader set of actions. In static analysis, the content of data is examined without executing the instructions captured in the data. Static analysis can detect bugs in source code such as unreachable code, variable misuse, uncalled functions, improper memory usage, and boundary value violations. It also uses signatures based on file names, hashes, and file types to indicate whether a file is malicious. In comparison, dynamic analysis observes a container's behavior. Some methods of dynamic analysis are port scans before or after execution, process monitoring, recording changes in firewall rules, registry changes, and network activity monitoring. While dynamic analysis typically takes longer than static analysis, the results may be more intuitive. However, Docker containers must be launched in a confined sandbox so that other services and resources in production are not impacted by the container. Although there have been some efforts across the Docker community to encourage security analysis by users, they are often ignored. Thus, it would be ideal to incorporate security analysis tools into the development cycle of Docker images. Due to rising concerns over vulnerabilities introduced by Docker images, several open-source tools are available that may be incorporated into such a process. CoreOS Clair is one such tool that performs static analysis of image vulnerabilities. Another available tool is Anchore Engine, which includes CVE-based reporting. Anchore Engine's security policies give users fine-grained control over security enforcement by allowing customized policies, helping users achieve NIST 800-190 compliance [4].
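The hash-based signatures mentioned above can be reduced to a short sketch: digest each file and compare against a blocklist of known-bad hashes. The blocklist entry below is the SHA-256 of the empty byte string, used only so the example is reproducible; it is not a real malware signature.

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of known-malicious files.
# This single entry is the digest of the empty byte string, a placeholder.
MALICIOUS_DIGESTS = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def file_digest(data: bytes) -> str:
    """SHA-256 of a file's contents, as used in hash-based signatures."""
    return hashlib.sha256(data).hexdigest()

def is_flagged(data: bytes) -> bool:
    """True if the file's digest matches a known-malicious signature."""
    return file_digest(data) in MALICIOUS_DIGESTS
```

In practice a scanner would walk every file in the image's layers and check each digest, alongside the file-name and file-type signatures the text describes.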

DOCKER ARCHITECTURE:
Docker follows a client-server architecture. This architecture has four main components: the Docker client and server, Docker containers, Docker images, and Docker registries. Each component is described below.
Docker Client & Server
As shown in Figure 1, Docker can be explained simply as a client and server application. When a request from the Docker client arrives at the Docker server, the server takes the corresponding action. The client communicates with the server through a RESTful API, over UNIX sockets or a network interface, using the command-line client binary. The client and server can run on the same machine, or a local Docker client can connect to a remote server. Another Docker client is Docker Compose, which helps manage applications consisting of a set of containers. The Docker daemon (server) listens for requests and manages the other Docker objects, which include networks, containers, images, and volumes. A server can also communicate with other servers to manage Docker services, and a Docker client can communicate with one or more daemons.
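The client-daemon exchange above can be sketched in a few lines: the client sends a plain HTTP request over the daemon's UNIX socket. This is only an illustrative sketch; the socket path is the Linux default, `/version` is a Docker Engine API endpoint, and `ping_daemon` will only succeed on a machine with a running Docker daemon.

```python
import socket

DOCKER_SOCKET = "/var/run/docker.sock"  # default daemon socket on Linux

def build_request(path, host="localhost"):
    """Build the raw HTTP request a client sends to the daemon's REST API."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii")

def ping_daemon():
    """Ask the daemon for its version; requires a running Docker daemon."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(DOCKER_SOCKET)
        s.sendall(build_request("/version"))
        return s.recv(4096).decode("utf-8", errors="replace")
```

The `docker` CLI binary performs essentially this exchange for every command, which is why a local client can equally well target a remote daemon over a network interface.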

Docker Images
Docker images are created using two techniques. The first technique builds an image from multiple read-only layers. Every image in Docker starts with a base image; base images are typically operating system images such as Fedora 20 or Ubuntu 14.04 LTS. A container created from an operating system image can use the full capabilities of that operating system. Note that base images can also be built from scratch. Starting from a base image, developers add whichever applications they need by making changes to the base image, but every change requires creating a new image. This process of building a new image is known as "committing a change." The second method of creating an image is to write a Dockerfile, which is a set of pre-written commands in a text file. When the "docker build" command is run against this Dockerfile, Docker follows all of the instructions inside the file and forms an image. The second technique is thus a way to build images automatically.
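As a sketch of the second technique, a minimal Dockerfile might look like the following. The base image, package, and file names are illustrative, not taken from the paper:

```dockerfile
# Start from a base image (an operating system image)
FROM ubuntu:14.04

# Add an application on top of the base image; each instruction
# produces a new read-only layer
RUN apt-get update && apt-get install -y python3

# Copy the (hypothetical) application code into the image
COPY app.py /opt/app.py

# Command executed when a container is started from this image
CMD ["python3", "/opt/app.py"]
```

Running `docker build -t myapp .` in the directory containing this file follows the instructions line by line and produces a layered image, automating the "commit a change" steps of the first technique.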
Docker Registry
All Docker images are stored in Docker registries. Images are pushed to or pulled from a registry, which works much like a source code repository. In Docker, there are two main types of registries: public and private. In a public registry, users can pull the available images, and can push their own images without building an image from scratch, meaning no base image is needed. This public registry is named Docker Hub, and its features are used to distribute images to their intended destinations. Another public registry is Docker Cloud. Users who do not want to rely on a public registry can run their own private registry. By default, Docker pulls images from Docker Hub, but for security and for optimizing image builds, a private registry (such as an Artifactory Docker registry) may be preferable.
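Docker's default name-resolution behavior (Docker Hub as the default registry, the "library/" namespace for official images, and "latest" as the default tag) can be sketched as below. This is a simplified model of the documented rules, not Docker's actual parser:

```python
def parse_image_reference(ref):
    """Resolve a short image name roughly the way Docker does by default.

    "ubuntu" -> ("docker.io", "library/ubuntu", "latest")
    A leading component containing a dot or colon (e.g. "registry.local:5000")
    is treated as a registry host instead of Docker Hub.
    """
    registry = "docker.io"  # Docker Hub is the default registry
    remainder = ref
    first, _, rest = ref.partition("/")
    if rest and ("." in first or ":" in first or first == "localhost"):
        registry, remainder = first, rest
    # Split off the tag; default to "latest" when none is given
    name, _, tag = remainder.partition(":")
    if not tag:
        tag = "latest"
    # Official Docker Hub images live under the "library/" namespace
    if registry == "docker.io" and "/" not in name:
        name = "library/" + name
    return registry, name, tag
```

This is why `docker pull ubuntu` fetches `docker.io/library/ubuntu:latest` from Docker Hub, while a fully qualified name targets a private registry instead.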

Docker Container
Docker containers are created from Docker images: a container is a runnable instance of an image, and an application may be built from one or more images. Operations such as create, start, stop, move, and delete can be performed on containers using the Docker API or CLI. Containers are very helpful because they contain the whole kit of packages that an application requires to run, including libraries and dependencies, along with allotted CPU, memory, and network resources. A Docker container helps ensure that software can run in any environment: because it ships without its own operating system and uses the host OS for functioning, it is a more efficient, portable, and lightweight system.
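The lifecycle operations listed above can be modeled as a small state machine. This is only an illustrative sketch of the create/start/stop/remove transitions, not Docker's actual implementation:

```python
class Container:
    """Toy model of a Docker container's lifecycle states."""

    # Allowed transitions, loosely mirroring `docker create/start/stop/rm`
    TRANSITIONS = {
        ("created", "start"): "running",
        ("running", "stop"): "exited",
        ("exited", "start"): "running",
        ("created", "remove"): "removed",
        ("exited", "remove"): "removed",
    }

    def __init__(self, image):
        self.image = image      # the image this container instantiates
        self.state = "created"  # `docker create` leaves a container here

    def apply(self, action):
        """Apply a lifecycle action, enforcing the allowed transitions."""
        key = (self.state, action)
        if key not in self.TRANSITIONS:
            raise ValueError(f"cannot {action} a {self.state} container")
        self.state = self.TRANSITIONS[key]
        return self.state
```

In this toy model, as with plain `docker rm`, a running container must be stopped before it can be removed.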

ADVANTAGES:
Docker containers provide many benefits and have quickly become popular for enhanced, in-demand virtualization with Linux containers. Some of the main advantages of Docker are portability, scalability, speed, density, and rapid delivery.
• Portability: Containers provide a portable environment for the applications created inside them. Bundled applications can be moved around as a single unit without affecting the performance of the container.
• Scalability: Docker can run on any system with Linux and can be installed across physical servers, cloud platforms, data centers, and more. Containers can be transferred rapidly back and forth between cloud environments and desktops. Scaling is easy and can be handled by the user as needed: a user can scale up or down, from one container to many or from many to one.
• Speed: Small containers require less time to build, so processes become faster. Speed can be considered a chief advantage of containers: testing, deployment, and development proceed rapidly because containers are small. Once a container is built, it is pushed to the testing phase and from there to the production phase.
• Density: Docker containers consume fewer resources and use the available resources effectively because they do not use a hypervisor. Compared to virtual machines, more containers can run on a single host. Docker gives high performance because of this higher density and because no resources are wasted on overhead.
• Rapid delivery: Docker's standardized container format frees programmers and software teams from worrying about other tasks. Administrators and programmers have separate responsibilities: deploying and maintaining the servers that host containers, and taking care of the applications located inside the containers, respectively. Tested applications inside containers can work in any environment because all of the required dependencies are embedded in them.
Other advantages of Docker containers: it is open source and free, consistent, isolated, and its security is well maintained.

CONCLUSION
Developers using containers are currently vulnerable to malware and lack tools that
effectively quantify this risk. Existing tools, while effective, are time consuming and
challenging to implement and may introduce new risks if not implemented correctly. Our
work addresses these issues by creating user-friendly tools to detect vulnerabilities and
malicious code. Our results show that virus scans and dynamic analysis are effective at
detecting malicious behavior in Docker containers. In particular, safe images show few file
modifications, running processes, and DNS queries whereas malicious images tend to
download and execute files not initially present in the image. By using our API service for
dynamic analysis, developers are better equipped to make decisions regarding which base
images to use. Automating the static and runtime checks also frees developers to build more
secure applications.
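The behavioral signals above (file modifications, running processes, DNS queries, and downloading and executing new files) can be combined into a simple triage heuristic. The thresholds and weights below are invented for illustration and are not the paper's actual criteria:

```python
def classify_container(file_mods, processes, dns_queries,
                       downloaded_and_executed):
    """Toy triage of a container's observed runtime behavior.

    All thresholds are hypothetical; real dynamic analysis would compare
    the counts against a baseline for the specific base image.
    """
    if downloaded_and_executed:
        # Fetching and running files not initially present in the image
        # is the strongest malicious indicator described above
        return "malicious"
    score = 0
    if file_mods > 10:
        score += 1
    if processes > 5:
        score += 1
    if dns_queries > 20:
        score += 1
    return "suspicious" if score >= 2 else "likely-safe"
```

A safe image with few modifications, processes, and queries scores low, while noisy runtime behavior pushes an image toward manual review.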

REFERENCES
[1] S. Winkel, “Security Assurance of Docker Containers: Part 1,” ISSA Journal, April 2017.
[2] P. Mell, K. Scarfone, and S. Romanosky, “The Common Vulnerability Scoring System
(CVSS) and Its Applicability to Federal Agency Systems,” National Institute of Standards
and Technology, Tech. Rep. Interagency Report 7435, August 2007.
[3] V. Adethyaa and T. Jernigan, "Scanning Docker Images for Vulnerabilities using Clair, Amazon ECS, ECR, and AWS CodePipeline," AWS Compute Blog, November 2018, online: https://aws.amazon.com/blogs/compute/scanning-docker-images-for-vulnerabilities-using-clair-amazon-ecs-ecr-aws-codepipeline/.
[4] J. Valance, "Using Anchore Policies to Help Achieve the CIS Docker Benchmark," Anchore Blog, May 2019, online: https://anchore.com/cisdocker-benchmark/.
[5] ——, "Adding Container Security and Compliance Scanning to your AWS CodeBuild pipeline," Anchore Blog, February 2019, online: https://anchore.com/adding-container-security-and-compliancescanning-to-your-aws-codebuild-pipeline/.
[6] J. Blackthorne, A. Bulazel, A. Fasano, P. Biernat, and B. Yener, "AVLeak: Fingerprinting Antivirus Emulators through Black-Box Testing," in 10th USENIX Workshop on Offensive Technologies. Austin, TX: USENIX Association, Aug. 2016. [Online]. Available: https://www.usenix.org/conference/woot16/workshop-program/presentation/blackthorne
