Introduction to Containers, Kubernetes, and Red Hat OpenShift (v3.9) - DO180

Chapter 1. Getting Started with Container Technology

Goal: Describe how software can run in containers orchestrated by Red Hat OpenShift Container Platform.

Objectives:

- Describe the architecture of Linux containers.
- Describe how containers are implemented using Docker.
- Describe the architecture of a Kubernetes cluster running on the Red Hat OpenShift Container Platform.

Sections:

- Overview of the Container Architecture (and Quiz)
- Overview of the Docker Architecture (and Quiz)
- Describing Kubernetes and OpenShift (and Quiz)
- Summary
Overview of the Container Architecture

Objectives

After completing this section, students should be able to:

- Describe the architecture of Linux containers.
Containerized Applications
Software applications are typically deployed as a single set of libraries and configuration files to a runtime environment. They are traditionally deployed to an operating system with a
set of services running, such as a database server or an HTTP server, but they can also be deployed to any environment that can provide the same services, such as a virtual machine or
a physical host.
The major drawback of this approach is that the application becomes entangled with its runtime environment: any updates or patches applied to the base OS might break the application. For example, an OS update might include multiple dependency updates, including libraries (that is, operating system libraries shared by multiple programming languages) that introduce incompatible changes and affect the running application.

Moreover, if another application shares the same host OS and the same set of libraries, an update that fixes the libraries for the first application risks breaking the second one.
Therefore, for a company developing typical software applications, any maintenance of the running environment might require a full set of tests to guarantee that an OS update does not break the application.
Depending on the complexity of the application, the regression verification might not be an easy task and might require a major project. Furthermore, any update normally requires a full application stop. Normally, this implies an environment with high-availability features enabled to minimize the impact of any downtime, and it increases the complexity of the deployment process. Maintenance might become cumbersome, and any deployment or update might become a complex process. The rest of this section describes the difference between applications running as containers and applications running directly on the host operating system.
Alternatively, a system administrator can work with containers, which are a kind of isolated partition inside a single operating system. Containers provide many of the same benefits as virtual machines, such as security, storage, and network isolation, while requiring far fewer hardware resources and being quicker to launch and terminate. They also isolate the libraries and the runtime resources (such as CPU and storage) used by an application, minimizing the impact of any OS update on the host OS.

The use of containers helps not only with the efficiency, elasticity, and reusability of the hosted applications, but also with the portability of the platform and the applications. There are many container providers available, such as Rocket, Drawbridge, and LXC, but one of the major providers is Docker. Some of the major advantages of containers are described below.
Environment isolation
Works in a closed environment where changes made to the host OS or other applications do not affect the container. Because the libraries needed by a container are self-contained, the application can run without
disruption. For example, each application can exist in its own container with its own set of libraries. An update made to one container does not affect other containers, which might not work with the update.
Quick deployment
Deploys any container quickly because there is no need to install the entire underlying operating system. Normally, to support the isolation, a new OS installation is required on a physical host or VM, and any
simple update might require a full OS restart. A container only requires a restart without stopping any services on the host OS.
Reusability
The same container can be reused by multiple applications without the need to set up a full OS. A database container can be used to create a set of tables for a software application, and it can be quickly destroyed
and recreated without the need to run a set of housekeeping tasks. Additionally, the same database container can be used by the production environment to deploy an application.
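To make the quick deployment and reusability benefits above concrete, the following sketch starts a throwaway database container, destroys it, and recreates it. The image name, container name, and environment variable are illustrative assumptions (they presume the official mysql image is reachable from a configured registry).

# Start a database container from an existing image; no OS installation is needed.
docker run -d --name appdb -e MYSQL_ROOT_PASSWORD=secret mysql:5.7

# Destroy the container and recreate it in seconds, with no housekeeping
# tasks required on the host operating system.
docker rm -f appdb
docker run -d --name appdb -e MYSQL_ROOT_PASSWORD=secret mysql:5.7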
Often, a software application with all of its dependent services (databases, messaging, file systems) is made to run in a single container. However, container characteristics and agility requirements might make this approach challenging or ill-advised. In these instances, a multi-container deployment may be more suitable. Additionally, be aware that some application actions may not be suited to a containerized environment. For example, applications accessing low-level hardware information, such as memory, file systems, and devices, may fail due to container constraints.
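As a sketch of such a multi-container deployment, the application and its database can run as two cooperating containers on a shared container network instead of being packed into one container. The image names (myapp, mysql:5.7) and the network name are hypothetical.

# Create a dedicated network for the application stack.
docker network create appnet

# Run the database and the application as separate containers
# that can reach each other by name over the shared network.
docker run -d --name appdb --network appnet -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
docker run -d --name app --network appnet -p 8080:8080 myapp:latest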
Finally, containers boost the microservices development approach because they provide a lightweight and reliable environment to create and run services that can be deployed to a
production or development environment without the complexity of a multiple machine environment.
Quiz: Overview of the Container Architecture
1. Which two options are examples of software applications that might run in a container? (Select two.)

- An I/O monitoring tool responsible for analyzing the traffic and block data transfer.
- A database-driven Python application accessing services such as a MySQL database, a file transfer protocol (FTP) server, and a web server on a single physical host. (Correct)
- A Java Enterprise Edition application, with an Oracle database, and a message broker running on a single VM. (Correct)
- A memory dump application tool capable of taking snapshots from all the memory CPU caches for debugging purposes.
2. Which two of the following use cases are better suited for containers? (Select two.)

- A financial company is implementing a CPU-intensive risk analysis tool on their own containers to minimize the number of processors needed.
- A software provider needs to distribute software that can be reused by other companies in a fast and error-free way. (Correct)
- A company is deploying applications on a physical host and would like to improve its performance by using containers.
- A data center is looking for alternatives to shared hosting for database applications to minimize the amount of hardware processing needed. (Correct)
3. A company is migrating their PHP and Python applications, running on the same host, to a new architecture. Due to internal policies, both use a set of custom-made shared libraries from the OS, but the latest update applied to those libraries, requested by the Python development team, broke the PHP application. Which two architectures would provide the best support for both applications? (Select two.)

- Deploy each application to different containers and apply the custom-made shared libraries to all containers.
- Deploy each application to different VMs and apply the custom-made shared libraries to all VM hosts.
- Deploy each application to different VMs and apply the custom-made shared libraries individually to each VM host. (Correct)
- Deploy each application to different containers and apply the custom-made shared libraries individually to each container. (Correct)
Overview of the Docker Architecture

Objectives

After completing this section, students should be able to:

- Describe how containers are implemented using Docker.
Docker Architecture

Docker relies on a client-server architecture, with the following main components:

Client
The command-line tool (docker) is responsible for communicating with a server using a RESTful API to request operations.
Server
This service, which runs as a daemon on an operating system, does the heavy lifting of building, running, and downloading container images.
The daemon can run either on the same system as the docker client or remotely.
For this course, both the client and the server will be running on the workstation machine.
In a Red Hat Enterprise Linux environment, the daemon is represented by a systemd unit called docker.service.
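The split between client and server can be verified on a Red Hat Enterprise Linux host. This is a sketch that assumes the docker package is installed and the current user is allowed to talk to the daemon.

# Server side: the daemon runs as a systemd service.
systemctl status docker.service

# Client side: the docker CLI calls the daemon's RESTful API and
# reports version information for both the client and the server.
docker version
docker info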
Images
Images are read-only templates that contain a runtime environment that includes application libraries and applications. Images are used to create containers. Images can be created, updated, or downloaded for
immediate consumption.
Registries
Registries store images for public or private use. The well-known public registry is Docker Hub, and it stores multiple images developed by the community, but private registries can be created to support internal
image development under a company's discretion. This course runs on a private registry in a virtual machine where all the required images are stored for faster consumption.
Containers
Containers are segregated user-space environments for running applications isolated from other applications sharing the same host OS.
In a RHEL environment, the registry is represented by a systemd unit called docker-registry.service.
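The relationship between registries, images, and containers can be illustrated with a few client commands; the httpd image name is an example and assumes the image is available in a reachable registry.

# Download (pull) an image from a registry into the local image cache.
docker pull httpd

# List the images stored locally.
docker images

# Create and start a container from the image, then list running containers.
docker run -d --name web httpd
docker ps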
Docker containers rely on features of the Linux kernel and its security subsystems, including the following:

Namespaces
The kernel can place specific system resources that are normally visible to all processes into a namespace. Inside a namespace, only processes that are members of that namespace can see those resources. Resources that can be placed into a namespace include network interfaces, the process ID list, mount points, IPC resources, and the system's own hostname information. As an example, two processes in two different mount namespaces have different views of what the mounted root file system is. Each container is added to a specific set of namespaces, which are used only by that container.
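The namespaces assigned to a container can be observed from the host. The sketch below assumes a running container named web and root privileges on the container host.

# Find the host PID of the container's main process.
PID=$(docker inspect --format '{{.State.Pid}}' web)

# Each link below identifies one namespace (mnt, net, pid, uts, ipc, ...)
# that the container's processes belong to.
sudo ls -l /proc/$PID/ns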
SELinux
SELinux is a mandatory access control system that is used to protect containers from each other and to protect the container host from its own running containers. Standard SELinux type enforcement is used to
protect the host system from running containers. Container processes run as a confined SELinux type that has limited access to host system resources. In addition, sVirt uses SELinux Multi-Category
Security (MCS) to protect containers from each other. Each container's processes are placed in a unique category to isolate them from each other.
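The SELinux confinement can also be inspected from the host. This is only a sketch: the exact SELinux type and the MCS category pair shown depend on the policy packages installed on the container host.

# Show the SELinux context of container processes. Each container's
# processes run with a confined type and a unique MCS category pair
# (for example, ...:c123,c456) that isolates it from other containers.
ps -eZ | grep container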
Container images are built as a stack of layers managed by a union file system. New images can be created in two ways:

Using a running container: An immutable image is used to start a new container instance, and any changes or updates needed by that container are made to an extra read/write layer. Docker commands can be issued to store that read/write layer over the existing image, generating a new image. Because of its simplicity, this is the easiest way to create images, but it is not a recommended approach, because the image size might become large due to unnecessary files, such as temporary files and logs.

Using a Dockerfile: Alternatively, container images can be built from a base image using a set of steps called instructions. Each instruction creates a new layer on the image, and the layers are combined to build the final container image. This is the suggested approach to building images, because it controls exactly which files are added to each layer.
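The two approaches map to different client commands. In the sketch below, the running container web, the image names, and the index.html file are hypothetical; both paths end with a new image in the local cache.

# Approach 1: save the read/write layer of a running container as a new image.
docker commit web mywebserver:v1

# Approach 2 (recommended): describe each layer in a Dockerfile and build it.
cat > Dockerfile <<'EOF'
FROM httpd
COPY index.html /usr/local/apache2/htdocs/index.html
EOF
docker build -t mywebserver:v2 .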
Quiz: Overview of the Docker Architecture
1. Which of the following three tasks are managed by a component other than the Docker client? (Select three.)
3. Which two kernel components does Docker use to create and manage the runtime environment for any container? (Choose two.)

- NUMA support
- LVM
- iSCSI
4. An existing image of a WordPress blog was updated on a developer's machine to include new homemade extensions. Which is the best approach to create a new image with those updates provided by the developer? (Select one.)

- The updates made to the developer's custom WordPress should be copied and transferred to the production WordPress, and all the patches should be made within the image.
- The updates made to the developer's custom WordPress should be assembled as a new image using a Dockerfile to rebuild the container image. (Correct)
- A diff should be executed on the production and the developer's WordPress images, and all the binary differences should be applied to the production image.
- Copy the updated files from the developer's image to the /tmp directory of the production environment and request an image update.
Describing Kubernetes and OpenShift

Objectives

After completing this section, students should be able to:

- Describe the architecture of a Kubernetes cluster running on the Red Hat OpenShift Container Platform (OCP).
OpenShift Terminology
Red Hat OpenShift Container Platform (OCP) is a set of modular components and services built on top of Red Hat Enterprise Linux and Docker. OCP adds PaaS capabilities such as remote management, multitenancy, increased security, monitoring and auditing, application life-cycle management, and self-service interfaces for developers.

Throughout this course, the terms OCP and OpenShift are used to refer to the Red Hat OpenShift Container Platform.

Figure: OpenShift architecture (the OpenShift Container Platform stack).

In the figure, going from bottom to top and from left to right, the basic container infrastructure is shown, integrated and enhanced by Red Hat:
Docker provides the basic container management API and the container image file format.
Kubernetes manages a cluster of hosts, physical or virtual, that run containers. It uses resources that describe multicontainer applications and how they interconnect. If Docker is the "core" of OCP, Kubernetes is the "heart" that orchestrates the core.
Etcd is a distributed key-value store, used by Kubernetes to store configuration and state information about the containers and other resources inside the Kubernetes cluster.
OpenShift adds the capabilities required to provide a production PaaS platform to the Docker and Kubernetes container infrastructure. Continuing from bottom to top and from left to
right:
OCP-Kubernetes extensions are additional resource types stored in Etcd and managed by Kubernetes. These additional resource types form the OCP internal state and configuration.
Containerized services fulfill many PaaS infrastructure functions, such as networking and authorization. OCP leverages the basic container infrastructure from Docker and Kubernetes for most internal
functions. That is, most OCP internal services run as containers orchestrated by Kubernetes.
Runtimes and xPaaS are base container images ready for use by developers, each preconfigured with a particular runtime language or database. The xPaaS offering is a set of base images for JBoss
middleware products such as JBoss EAP and ActiveMQ.
DevOps tools and user experience: OCP provides Web and CLI management tools for managing user applications and OCP services. The OpenShift Web and CLI tools are built from REST APIs which can be
used by external tools such as IDEs and CI platforms.
A Kubernetes cluster is a set of node servers that run containers and are centrally managed by a set of master servers. A server can act as both a master and a node, but those roles are usually segregated for increased stability.
The following terms describe the main Kubernetes and OpenShift resource types:

Master
A server that manages the workload and communications in a Kubernetes cluster.

Label
A key/value pair that can be assigned to any Kubernetes resource. A selector uses labels to filter eligible resources for scheduling and other operations.

Pod
A collection of containers that share resources, such as IP addresses and persistent storage volumes. The pod is the basic unit of work for Kubernetes.

Note: Although Kubernetes pods can be created standalone, they are usually created by higher-level resources such as replication controllers.

Service
A single IP/port combination that provides access to a pool of pods. By default, services connect clients to pods in a round-robin fashion.

Replication Controller
A framework for defining pods that are meant to be horizontally scaled. A replication controller includes a pod definition that is to be replicated, and the pods created from it can be scheduled to different nodes.

Note: Although Kubernetes replication controllers can be created standalone in OpenShift, they are usually created by higher-level resources such as deployment configurations.

Route
A DNS host name recognized by the OpenShift router as an ingress point for applications and microservices.

Note: For the purposes of this course, persistent volumes (PVs) are provisioned on local storage, not on networked storage. This is a valid approach for development purposes, but it is not a recommended approach for a production environment.
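As a sketch of how these resource types are written, the following minimal pod and service definitions could be created with the OpenShift command-line client. The names, the label, and the openshift/hello-openshift image are illustrative only.

# Create a pod and a service that selects it by label.
oc create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello
spec:
  containers:
  - name: hello
    image: openshift/hello-openshift
    ports:
    - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  selector:
    app: hello
  ports:
  - port: 8080
    targetPort: 8080
EOF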
Networking
Each container deployed by a docker daemon has an IP address assigned from an internal network that is accessible only from the host running the container. Because of the
container's ephemeral nature, IP addresses are constantly assigned and released.
Kubernetes provides a software-defined network (SDN) that spans the internal container networks across multiple nodes and allows containers from any pod, on any host, to access pods on other hosts. Access to the SDN works only from inside the same Kubernetes cluster.
Containers inside Kubernetes pods are not supposed to connect to each other's dynamic IP address directly. It is recommended that they connect to the more stable IP addresses
assigned to services, and thus benefit from scalability and fault tolerance.
External access to containers, without OpenShift, requires redirecting a port from the router to the host, then to the internal container IP address, or from the node to a service IP
address in the SDN. A Kubernetes service can specify a NodePort attribute that is a network port redirected by all the cluster nodes to the SDN. Unfortunately, none of these
approaches scale well.
OpenShift makes external access to containers both scalable and simpler by defining route resources. HTTP and TLS access to a route is forwarded to service addresses inside the Kubernetes SDN. The only requirement is that the desired DNS host names are mapped to the OCP router nodes' external IP addresses.
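Exposing the hypothetical hello-svc service from the previous sketch through a route takes a single command with the OpenShift client; the host name is an example and must resolve to the router nodes' external IP addresses. A plain Kubernetes NodePort service is shown for comparison.

# OpenShift: create a route that forwards HTTP traffic for the host name
# to the service inside the SDN.
oc expose service hello-svc --hostname=hello.apps.example.com

# Plain Kubernetes alternative: redirect a port on every cluster node
# to the service (does not scale as well as routes).
kubectl expose pod hello-pod --type=NodePort --port=8080 --name=hello-nodeport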
Quiz: Describing Kubernetes and OpenShift
1. Which three sentences are correct regarding Kubernetes architecture? (Choose three.)
2. Which two sentences are correct regarding Kubernetes and OpenShift resource types? (Choose two.)

- A replication controller is responsible for monitoring and maintaining the number of pods for a particular application. (Correct)
- All pods generated from the same replication controller have to run on the same node.
3. Which two statements are true regarding Kubernetes and OpenShift networking? (Select two.)

- A route is responsible for providing DNS names for external access. (Correct)
- Kubernetes is responsible for providing internal IP addresses for each container.
- Kubernetes is responsible for providing a fully qualified domain name for a pod.
4. Which statement correctly describes a persistent volume claim (PVC)? (Select one.)

- A PVC represents the amount of memory that can be allocated on a node, so that a developer can state how much memory is required for an application to run.
- A PVC represents a storage area that can be requested by a pod to store data, but it is provisioned by the cluster administrator. (Correct)
- A PVC represents a storage area that a pod can use to store data and is provisioned by the application developer.
- A PVC represents the number of CPU processing units that can be allocated on a node, subject to a limit managed by the cluster administrator.
Quiz: Overview of the Container Architecture (continued)

4. Which three kinds of applications can be packaged as containers for immediate consumption? (Select three.)

- A web server. (Correct)
- Blog software, such as WordPress. (Correct)
- A local file system recovery tool.
- A database. (Correct)
- A virtual machine hypervisor.
Summary

In this chapter, you learned:

- Containers are an isolated application runtime environment created with very little overhead.
- A container image packages an application with all of its dependencies, making it easier to run the application in different environments.
- Container image registries are the preferred mechanism for distributing container images to multiple users and hosts.
- Kubernetes manages load balancing, high availability, and persistent storage for containerized applications.
- OpenShift adds to Kubernetes multitenancy, security, ease of use, and continuous integration and continuous deployment features.
- OpenShift routes are key to exposing containerized applications to external users in a manageable way.