
1. Getting Started with Container Technology

Goal: Describe how software can run in containers orchestrated by Red Hat OpenShift Container Platform.

Objectives:

- Describe the architecture of Linux containers.
- Describe how containers are implemented using Docker.
- Describe the architecture of a Kubernetes cluster running on the Red Hat OpenShift Container Platform.

Sections:

- Overview of the Container Architecture (and Quiz)
- Overview of the Docker Architecture (and Quiz)
- Container Orchestration with Kubernetes and OpenShift (and Quiz)
- Summary

Lesson 1 (of 7)

Objectives
After completing this section, students should be able to:

Describe the architecture of Linux containers.

Describe the characteristics of software applications.

List the approaches to using containers.

Containerized Applications
Software applications are typically deployed as a single set of libraries and configuration files to a runtime environment. They are traditionally deployed to an operating system with a
set of services running, such as a database server or an HTTP server, but they can also be deployed to any environment that can provide the same services, such as a virtual machine or
a physical host.

The major drawback of this traditional deployment model is that the application becomes entangled with its runtime environment: any updates or patches applied to the base OS might break the application. For example, an OS update might include multiple dependency updates, including libraries (that is, operating system libraries shared by multiple programming languages) whose incompatible changes might affect the running application.

Moreover, if another application shares the same host OS and the same set of libraries, there is a risk that an update that fixes the libraries for the first application prevents the second application from running properly.

Therefore, for a company developing typical software applications, any maintenance of the running environment might require a full set of tests to guarantee that an OS update does not also affect the application.

Depending on the complexity of the application, this regression verification might not be an easy task and might require a major project. Furthermore, any update normally requires a full application stop. This usually implies an environment with high-availability features enabled to minimize the impact of any downtime, and it increases the complexity of the deployment process. Maintenance becomes cumbersome, and any deployment or update becomes a complex process. The figure below describes the difference between applications running as containers and applications running on the host operating system.

Container versus operating system differences

Alternatively, a system administrator can work with containers, which are a kind of isolated partition inside a single operating system. Containers provide many of the same benefits as virtual machines, such as security, storage, and network isolation, while requiring far fewer hardware resources and being quicker to launch and terminate. They also isolate the libraries and the runtime environment (such as CPU and storage) used by an application, minimizing the impact of any OS update to the host OS.

The use of containers helps not only with the efficiency, elasticity, and reusability of the hosted applications, but also with portability of the platform and applications. There are many
container providers available, such as Rocket, Drawbridge, and LXC, but one of the major providers is Docker.

Some of the major advantages of containers are listed below.

Low hardware footprint


Containers use OS internal features to create an isolated environment where resources are managed using OS facilities such as namespaces and cgroups (a minimal example follows this list). This approach minimizes the amount of CPU and memory overhead compared to a virtual machine hypervisor. Running an application in a VM is another way to isolate it from the running environment, but it requires a heavy layer of services and cannot achieve the low hardware footprint of container isolation.

Environment isolation
Works in a closed environment where changes made to the host OS or other applications do not affect the container. Because the libraries needed by a container are self-contained, the application can run without
disruption. For example, each application can exist in its own container with its own set of libraries. An update made to one container does not affect other containers, which might not work with the update.

Quick deployment
Deploys any container quickly because there is no need to install the entire underlying operating system. Normally, to support the isolation, a new OS installation is required on a physical host or VM, and any
simple update might require a full OS restart. A container only requires a restart without stopping any services on the host OS.

Multiple environment deployment


In a traditional deployment scenario using a single host, any environment differences might potentially break the application. Using containers, however, the differences and incompatibilities are mitigated because
the same container image is used.

Reusability
The same container can be reused by multiple applications without the need to set up a full OS. A database container can be used to create a set of tables for a software application, and it can be quickly destroyed
and recreated without the need to run a set of housekeeping tasks. Additionally, the same database container can be used by the production environment to deploy an application.
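The resource management described under Low hardware footprint can be seen directly from the Docker CLI. The following is a minimal sketch, assuming a host with a running docker daemon and a reasonably recent docker client; the container name, image, and limit values are arbitrary examples.

# Start a container whose memory and CPU usage are capped through kernel cgroups.
docker run -d --name limited-httpd --memory=256m --cpus=0.5 httpd

# Show the live resource usage of the container against those limits.
docker stats --no-stream limited-httpd

# Clean up the example container.
docker rm -f limited-httpd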

Often, a software application with all of its dependent services (databases, messaging, file systems) is made to run in a single container. However, container characteristics and agility requirements might make this approach challenging or ill-advised. In these instances, a multi-container deployment may be more suitable. Additionally, be aware that some application actions may not be suited for a containerized environment. For example, applications accessing low-level hardware information, such as memory, file systems, and devices, may fail due to container constraints.

Finally, containers boost the microservices development approach because they provide a lightweight and reliable environment to create and run services that can be deployed to a
production or development environment without the complexity of a multiple machine environment.

Lesson 2 (of 7)

Quiz: Overview of the Container Architecture


Choose the correct answers to the following questions:

1. Which two options are examples of software applications that might run in a container? (Select two.)

- An I/O monitoring tool responsible for analyzing the traffic and block data transfer.
- A database-driven Python application accessing services such as a MySQL database, a file transfer protocol (FTP) server, and a web server on a single physical host. (correct)
- A Java Enterprise Edition application, with an Oracle database and a message broker, running on a single VM. (correct)
- A memory dump application tool capable of taking snapshots from all the memory CPU caches for debugging purposes.

2. Which two of the following use cases are better suited for containers? (Select two.)

- A financial company is implementing a CPU-intensive risk analysis tool on their own containers to minimize the number of processors needed.
- A software provider needs to distribute software that can be reused by other companies in a fast and error-free way. (correct)
- A company is deploying applications on a physical host and would like to improve its performance by using containers.
- A data center is looking for alternatives to shared hosting for database applications to minimize the amount of hardware processing needed. (correct)

3. A company is migrating their PHP and Python applications running on the same host to a new architecture. Due to internal policies, both are using a set of custom-made shared libraries from the OS, but the latest update applied to them as a result of a Python development team request broke the PHP application. Which two architectures would provide the best support for both applications? (Select two.)

- Deploy each application to different containers and apply the custom-made shared libraries to all containers.
- Deploy each application to different VMs and apply the custom-made shared libraries to all VM hosts.
- Deploy each application to different VMs and apply the custom-made shared libraries individually to each VM host. (correct)
- Deploy each application to different containers and apply the custom-made shared libraries individually to each container. (correct)
Lesson 3 (of 7)

Objectives
After completing this section, students should be able to:

Describe how containers are implemented using Docker.

List the key components of the Docker architecture.

Describe the architecture behind the Docker command-line interface (CLI).

Describing the Docker Architecture


Docker is one of the container implementations available for deployment, and it is supported by companies such as Red Hat in their Red Hat Enterprise Linux Atomic Host platform. Docker Hub provides a large set of container images developed by the community.

Docker uses a client-server architecture, as described below:

Client
The command-line tool (docker) is responsible for communicating with a server using a RESTful API to request operations.

Server
This service, which runs as a daemon on an operating system, does the heavy lifting of building, running, and downloading container images.

The daemon can run either on the same system as the docker client or remotely.


For this course, both the client and the server will be running on the workstation machine.


In a Red Hat Enterprise Linux environment, the daemon is represented by a systemd unit called docker.service.
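As a quick check of this client-server split, the commands below use the docker client to query the daemon. This is a minimal sketch, assuming a Red Hat Enterprise Linux host where the docker package is installed and the service is active.

# The server side is a systemd-managed daemon.
systemctl status docker.service

# The client reports its own version plus the version of the daemon it talks to.
docker version

# General daemon information: storage driver, number of images and containers, and so on.
docker info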

Docker Core Elements


Docker depends on three major elements:

Images
Images are read-only templates that contain a runtime environment that includes application libraries and applications. Images are used to create containers. Images can be created, updated, or downloaded for
immediate consumption.

Registries
Registries store images for public or private use. The best-known public registry is Docker Hub, which stores multiple images developed by the community, but private registries can be created to support internal image development at a company's discretion. This course runs on a private registry in a virtual machine where all the required images are stored for faster consumption.

Containers
Containers are segregated user-space environments for running applications isolated from other applications sharing the same host OS.


Docker Hub website


In a RHEL environment, the registry is represented by a systemd unit called docker-registry.service.
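A typical workflow touches all three elements: an image is pulled from a registry, a container is started from it, and the local state is inspected. The sketch below uses the public httpd image from Docker Hub as an arbitrary example; in the classroom environment, images come from the private registry instead.

# Pull an image from the configured registry.
docker pull httpd

# List the images stored locally.
docker images

# Start a container from the image, in the background, with a chosen name.
docker run -d --name web httpd

# List the running containers.
docker ps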

Containers and the Linux Kernel


Containers created by Docker, from Docker-formatted container images, are isolated from each other by several standard features of the Linux kernel. These include:

Namespaces
The kernel can place specific system resources that are normally visible to all processes into a namespace. Inside a namespace, only processes that are members of that namespace can see those resources. Resources that can be placed into a namespace include network interfaces, the process ID list, mount points, IPC resources, and the system's own hostname information. As an example, two processes in two different mount namespaces have different views of what the mounted root file system is. Each container is added to a specific set of namespaces, which are only used by that container.

Control groups (cgroups)


Control groups partition sets of processes and their children into groups in order to manage and limit the resources they consume. Control groups place restrictions on the amount of system resources the
processes belonging to a specific container might use. This keeps one container from using too many resources on the container host.

SELinux
SELinux is a mandatory access control system that is used to protect containers from each other and to protect the container host from its own running containers. Standard SELinux type enforcement is used to
protect the host system from running containers. Container processes run as a confined SELinux type that has limited access to host system resources. In addition, sVirt uses SELinux Multi-Category
Security (MCS) to protect containers from each other. Each container's processes are placed in a unique category to isolate them from each other.
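These kernel features can be observed from the container host. The following is a minimal sketch, assuming a running container named web (as in the earlier example) on a host with SELinux enabled.

# Find the host PID of the container's main process.
PID=$(docker inspect --format '{{ .State.Pid }}' web)

# Namespaces: each link represents a namespace the process is a member of.
ls -l /proc/$PID/ns

# Control groups: the cgroup hierarchies the process has been placed in.
cat /proc/$PID/cgroup

# SELinux: the process runs under a confined type with its own MCS category.
ps -Z -p $PID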

Docker Container Images


Each image in Docker consists of a series of layers that are combined into what is seen by the containerized application as a single virtual file system. Docker images are immutable; any extra layer added over the preexisting layers overrides their contents without changing them directly. Therefore, any change made to a running container is discarded unless a new image is generated that incorporates the extra layer. The UnionFS file system provides containers with a single file system view of the multiple image layers.


UnionFS wiki page

In summary, there are two approaches to create a new image:

Using a running container: An immutable image is used to start a new container instance, and any changes or updates needed by this container are made to a read/write extra layer. Docker commands can be issued to store that read/write layer over the existing image to generate a new image. Due to its simplicity, this approach is the easiest way to create images, but it is not recommended because the image size might become large due to unnecessary files, such as temporary files and logs.

Using a Dockerfile: Alternatively, container images can be built from a base image using a set of steps called instructions. Each instruction creates a new layer on the image that is used to build the final
container image. This is the suggested approach to building images, because it controls which files are added to each layer.
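Both approaches can be sketched with a few commands. The image names, the file contents, and the Dockerfile below are illustrative only, not part of the course environment.

# Approach 1: change a running container, then commit its read/write layer as a new image.
docker run -d --name web httpd
docker exec web sh -c 'echo "hello" > /usr/local/apache2/htdocs/index.html'
docker commit web example/web-custom:1.0

# Approach 2 (recommended): describe each layer as a Dockerfile instruction and build the image.
echo "hello" > index.html
cat > Dockerfile <<'EOF'
FROM httpd
COPY index.html /usr/local/apache2/htdocs/
EOF
docker build -t example/web-custom:2.0 .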

Lesson 4 (of 7)

Quiz: Overview of the Docker Architecture


Choose the correct answers to the following questions:

1. Which of the following three tasks are managed by a component other than the Docker client? (Select three.)

- Building a container image. (correct)
- Searching for images from a registry. (correct)
- Downloading container image files from a registry. (correct)
- Requesting a container image deployment from a server.

2. Which of the following best describes a container image?

- The container's index file used by a registry.
- A container blueprint from which a container will be created. (correct)
- A runtime environment where an application will run.
- A virtual machine image from which a container will be created.

3. Which two kernel components does Docker use to create and manage the runtime environment for any container? (Choose two.)

- NUMA support
- LVM
- iSCSI
- Control groups (correct)
- Namespaces (correct)

4. An existing image of a WordPress blog was updated on a developer's machine to include new homemade extensions. Which is the best approach to create a new image with those updates provided by the developer? (Select one.)

- The updates made to the developer's custom WordPress should be copied and transferred to the production WordPress, and all the patches should be made within the image.
- The updates made to the developer's custom WordPress should be assembled as a new image using a Dockerfile to rebuild the container image. (correct)
- A diff should be executed on the production and the developer's WordPress image, and all the binary differences should be applied to the production image.
- Copy the updated files from the developer's image to the /tmp directory of the production environment and request an image update.
Lesson 5 (of 7)

Objectives
After completing this section, students should be able to:

Describe the architecture of a Kubernetes cluster running on the Red Hat OpenShift Container Platform (OCP).

List the main resource types provided by Kubernetes and OCP.

Identify the network characteristics of Docker, Kubernetes, and OCP.

List mechanisms to make a pod externally available.

OpenShift Terminology
Red Hat OpenShift Container Platform (OCP) is a set of modular components and services built on top of Red Hat Enterprise Linux and Docker. OCP adds PaaS capabilities such as remote management, multitenancy, increased security, monitoring and auditing, application life-cycle management, and self-service interfaces for developers.

Throughout this course, the terms OCP and OpenShift are used to refer to the Red Hat OpenShift Container Platform. The figure below illustrates the OpenShift Container Platform stack.

OpenShift architecture
In the figure, going from bottom to top, and from left to right, the basic container infrastructure is shown, integrated and enhanced by Red Hat:

The base OS is Red Hat Enterprise Linux (RHEL).

Docker provides the basic container management API and the container image file format.

Kubernetes manages a cluster of hosts, physical or virtual, that run containers. It uses resources that describe multicontainer applications composed of multiple resources, and how they interconnect. If Docker is the "core" of OCP, Kubernetes is the "heart" that orchestrates the core.

Etcd is a distributed key-value store, used by Kubernetes to store configuration and state information about the containers and other resources inside the Kubernetes cluster.

OpenShift adds the capabilities required to provide a production PaaS platform to the Docker and Kubernetes container infrastructure. Continuing from bottom to top and from left to
right:

OCP-Kubernetes extensions are additional resource types stored in Etcd and managed by Kubernetes. These additional resource types form the OCP internal state and configuration.

Containerized services fulfill many PaaS infrastructure functions, such as networking and authorization. OCP leverages the basic container infrastructure from Docker and Kubernetes for most internal
functions. That is, most OCP internal services run as containers orchestrated by Kubernetes.

Runtimes and xPaaS are base container images ready for use by developers, each preconfigured with a particular runtime language or database. The xPaaS offering is a set of base images for JBoss
middleware products such as JBoss EAP and ActiveMQ.

DevOps tools and user experience: OCP provides Web and CLI management tools for managing user applications and OCP services. The OpenShift Web and CLI tools are built from REST APIs which can be
used by external tools such as IDEs and CI platforms.

A Kubernetes cluster is a set of node servers that run containers and are centrally managed by a set of master servers. A server can act as both a master and a node, but those roles are usually segregated for increased stability.

Master: A server that manages the workload and communications in a Kubernetes cluster.

Node: A server that hosts applications in a Kubernetes cluster.

Label: A key/value pair that can be assigned to any Kubernetes resource. A selector uses labels to filter eligible resources for scheduling and other operations.

OpenShift and Kubernetes architecture


An OpenShift cluster is a Kubernetes cluster that can be managed the same way, but using the management tools provided by OpenShift, such as the command-line interface or the web console. This allows for more productive workflows and makes common tasks much easier.
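The master/node and label concepts map directly to everyday oc commands. A minimal sketch, assuming a cluster administrator session; the node name and labels are hypothetical.

# List the cluster nodes, including the labels assigned to them.
oc get nodes --show-labels

# Assign a label to a node; selectors can then use it for scheduling decisions.
oc label node node2.lab.example.com region=apps

# Use a label selector to list only the pods carrying a given label.
oc get pods -l app=hello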

Kubernetes Resource Types


Kubernetes has five main resource types that can be created and configured using a YAML or a JSON file, or using OpenShift management tools:

Pods
Represent a collection of containers that share resources, such as IP addresses and persistent storage volumes. It is the basic unit of work for Kubernetes.

Services
Define a single IP/port combination that provides access to a pool of pods. By default, services connect clients to pods in a round-robin fashion.

Replication Controllers
A framework for defining pods that are meant to be horizontally scaled. A replication controller includes a pod definition that is to be replicated, and the pods created from it can be scheduled to different nodes.

Persistent Volumes (PV)


Provision persistent networked storage to pods that can be mounted inside a container to store data.

Persistent Volume Claims (PVC)


Represent a request for storage by a pod to Kubernetes.


For the purpose of this course, the PVs are provisioned on local storage, not on networked storage. This is a valid approach for development purposes, but it is not a recommended approach for a
production environment.

Although Kubernetes pods can be created standalone, they are usually created by high-level resources such as replication controllers.
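As a minimal sketch of these resource types, the definitions below create a standalone pod and a service that forwards traffic to it. The names, labels, ports, and image are illustrative only; in practice the pod would usually be created by a replication controller.

# Create a pod running a single container.
oc create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello
spec:
  containers:
  - name: hello
    image: httpd
    ports:
    - containerPort: 80
EOF

# Create a service that selects the pod by label and exposes a stable IP/port.
oc create -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  selector:
    app: hello
  ports:
  - port: 8080
    targetPort: 80
EOF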

OpenShift Resource Types


The main resource types added by OpenShift Container Platform to Kubernetes are as follows:

Deployment Configurations (dc)


Represent a set of pods created from the same container image, managing workflows such as rolling updates. A dc also provides a basic but extensible continuous delivery workflow.

Build Configurations (bc)


Used by the OpenShift Source-to-Image (S2I) feature to build a container image from application source code stored in a Git server. A bc works together with a dc to provide a basic but extensible continuous integration and continuous delivery workflow.

Routes
Represent a DNS host name recognized by the OpenShift router as an ingress point for applications and microservices.

Although Kubernetes replication controllers can be created standalone in OpenShift, they are usually created by higher-level resources such as deployment configurations.
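A common way to exercise these resource types is the oc new-app workflow sketched below. The Git repository URL, application name, and route host name are hypothetical.

# S2I: build an image from source in Git (creates a bc, a dc, and a service).
oc new-app php~https://github.com/example/hello-php.git --name=hello-php

# Create a route so the OpenShift router forwards external requests to the service.
oc expose service hello-php --hostname=hello-php.apps.example.com

# Review the resources that were created.
oc get bc,dc,svc,routes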

Networking
Each container deployed by a docker daemon has an IP address assigned from an internal network that is accessible only from the host running the container. Because of the
container's ephemeral nature, IP addresses are constantly assigned and released.

Kubernetes provides a software-defined network (SDN) that spans the internal container networks from multiple nodes and allows containers from any pod, inside any host, to access pods from other hosts. Access to the SDN works only from inside the same Kubernetes cluster.

Containers inside Kubernetes pods are not supposed to connect to each other's dynamic IP address directly. It is recommended that they connect to the more stable IP addresses
assigned to services, and thus benefit from scalability and fault tolerance.

External access to containers, without OpenShift, requires redirecting a port from the router to the host, then to the internal container IP address, or from the node to a service IP
address in the SDN. A Kubernetes service can specify a NodePort attribute that is a network port redirected by all the cluster nodes to the SDN. Unfortunately, none of these
approaches scale well.

OpenShift makes external access to containers both scalable and simpler by defining route resources. HTTP and TLS access to a route is forwarded to service addresses inside the Kubernetes SDN. The only requirement is that the desired DNS host names are mapped to the external IP addresses of the OCP router nodes.
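The different address layers can be inspected as sketched below; the container, service, and route names refer to the earlier illustrative examples.

# The container IP assigned by the docker daemon is reachable only from its host.
docker inspect --format '{{ .NetworkSettings.IPAddress }}' web

# The service cluster IP is stable and reachable from anywhere inside the SDN.
oc get svc hello-php

# The route maps an external DNS host name to that service.
oc get route hello-php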


Docker documentation website

Kubernetes documentation website

OpenShift documentation website

Lesson 6 (of 7)

Quiz: Describing Kubernetes and OpenShift


Choose the correct answers to the following questions:

1. Which three sentences are correct regarding Kubernetes architecture? (Choose three.)

- Kubernetes masters schedule pods to specific nodes. (correct)
- Kubernetes masters manage pod scaling. (correct)
- A pod is a set of containers managed by Kubernetes as a single unit. (correct)
- Kubernetes tools cannot be used to manage resources in an OpenShift cluster.
- Kubernetes nodes can be managed without a master.

2. Which two sentences are correct regarding Kubernetes and OpenShift resource types? (Choose two.)

- A route is responsible for providing IP addresses for external access to pods. (correct)
- Containers created from Kubernetes pods cannot be managed using standard Docker tools.
- A replication controller is responsible for monitoring and maintaining the number of pods for a particular application. (correct)
- All pods generated from the same replication controller have to run in the same node.
- A pod is responsible for provisioning its own persistent storage.

3. Which two statements are true regarding Kubernetes and OpenShift networking? (Select two.)

- A replication controller is responsible for routing external requests to the pods.
- A route is responsible for providing DNS names for external access. (correct)
- Kubernetes is responsible for providing internal IP addresses for each container.
- Kubernetes is responsible for providing a fully qualified domain name for a pod.
- A Kubernetes service can provide an IP address to access a set of pods. (correct)

4. Which statement is correct regarding persistent storage in OpenShift and Kubernetes?

- A PVC represents the amount of memory that can be allocated on a node, so that a developer can state how much memory he requires for his application to run.
- A PVC represents a storage area that can be requested by a pod to store data but is provisioned by the cluster administrator. (correct)
- A PVC represents a storage area that a pod can use to store data and is provisioned by the application developer.
- A PVC represents the number of CPU processing units that can be allocated on a node, subject to a limit managed by the cluster administrator.
Lesson 2 (of 7)

Quiz: Overview of the Container Architecture (question 4)

4. Which three kinds of applications can be packaged as containers for immediate consumption? (Select three.)

- A web server. (correct)
- Blog software, such as WordPress. (correct)
- A local file system recovery tool.
- A database. (correct)
- A virtual machine hypervisor.
Lesson 7 (of 7)
In this chapter, you learned:

Containers are an isolated application runtime created with very little overhead.

A container image packages an application with all its dependencies, making it easier to run the application in different environments.

Docker creates containers using features of the standard Linux kernel.

Container image registries are the preferred mechanism for distributing container images to multiple users and hosts.

OpenShift orchestrates applications composed of multiple containers using Kubernetes.

Kubernetes manages load balancing, high availability and persistent storage for containerized applications.

OpenShift adds multitenancy, security, ease of use, and continuous integration and continuous delivery features to Kubernetes.

OpenShift routes are key to exposing containerized applications to external users in a manageable way.
