Kubernetes
Managing Containers Effectively with Kubernetes
Introduction to Kubernetes
There are two major container orchestration tools on the market:
Docker Swarm
Kubernetes
Among other things, Kubernetes offers:
Automated Scheduling
Self-Healing Capabilities
Introduction to Kubernetes
Features of Kubernetes
Application-centric management
Auto-scalable infrastructure
Introduction to Kubernetes
Kubernetes Basics
Kubernetes Architecture
Kubernetes Architecture
Master Node:
The master node is the first and most vital component, responsible for the
management of the Kubernetes cluster.
It is the entry point for all kinds of administrative tasks.
There might be more than one master node in the cluster for fault tolerance.
The master node has various components like API Server, Controller
Manager, Scheduler, and ETCD.
Kubernetes Architecture
API Server:
The API server acts as an entry point for all the REST commands used for
controlling the cluster.
Scheduler:
The scheduler assigns tasks (pods) to the worker (slave) nodes.
Kubernetes Architecture
Scheduler:
It also helps you to track how the workload is distributed across cluster nodes.
It places workloads on nodes that have available resources and can accept them.
Etcd:
etcd stores the configuration details and the state of the cluster.
Kubelet:
This gets the configuration of a Pod from the API server and ensures that
the described containers are up and running.
Kubernetes Architecture
Docker Container:
The Docker container runtime runs on each of the worker nodes and runs the
configured pods.
Kube-proxy:
Kube-proxy acts as a network proxy and load balancer, implementing the
service abstraction on each worker node.
Pods:
A pod is a combination of one or more containers that logically run
together on a node.
Kubernetes - Other Key Terminologies
Replication Controllers
A replication controller is an object which defines a pod template and the
desired number of replicas of that pod.
Replication Sets
Replication sets are an iteration on the replication controller design, with
more flexibility in how the controller recognizes the pods it is meant to manage.
They are replacing replication controllers because of their greater replica
selection capability, as sketched below.
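A minimal ReplicaSet sketch illustrating the label-based selection; the name, label, and image are placeholders, not from the slides:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend-rs        # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:           # label-based selection of the pods to manage
      app: frontend
  template:
    metadata:
      labels:
        app: frontend      # must match the selector above
    spec:
      containers:
      - name: frontend
        image: nginx       # placeholder image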
Kubernetes - Other Key Terminologies
Deployments
A deployment is a common workload which can be directly created and
managed.
Deployments use replication sets as a building block, adding life cycle
management features.
Stateful Sets
A stateful set is a specialized pod controller which offers ordering and
uniqueness guarantees.
Kubernetes vs. Docker Swarm
Introduction to Kubernetes
Advantages of Kubernetes:
Easy organization of services with pods
Installing Kubernetes on a local machine
When developing a containerized application that is to be hosted on
Kubernetes, it is important to be able to run the application (with its
containers) on your local machine, before deploying it on remote
Kubernetes production clusters.
In order to install a Kubernetes cluster locally, there are several
solutions, which are as follows:
The first solution is to use Docker Desktop.
Installing Kubernetes on a local machine
1. In Docker Desktop's settings, open the Kubernetes tab and check the
Enable Kubernetes option.
2. After clicking on the Apply button, Docker Desktop will install a mini
Kubernetes cluster, and the kubectl client tool, on the local machine.
The second solution is to install Minikube, which also installs a
simplified Kubernetes cluster locally.
Following the local installation of Kubernetes, check its installation by
executing the following command in a Terminal:
kubectl version --short
Installing the Kubernetes dashboard
After installing our Kubernetes cluster, we need another element: the
Kubernetes dashboard.
In order to install the Kubernetes dashboard, which is a pre-packaged
containerized web application that will be deployed in our cluster, we will run
the following command in a Terminal:
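A typical invocation (the dashboard release version here is an assumption; substitute the current one):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml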
Installing the Kubernetes dashboard
To open the dashboard and connect to it from our local machine, we first
create a proxy between the Kubernetes cluster and our machine by
performing the following steps:
1. To create the proxy, we execute the kubectl proxy command in a Terminal.
The proxy is open on the localhost address (127.0.0.1) with the 8001 port.
Installing the Kubernetes dashboard
2. Then, in a web browser, open the URL
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login
This is a local URL (localhost and 8001) that is created by the proxy, and
that points to the Kubernetes dashboard application.
Installing the Kubernetes dashboard
After clicking on the SIGN IN button, the dashboard is displayed.
First example of Kubernetes application deployment
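A sketch of such a Deployment, consistent with the explanation below (two replicas, a Docker Hub image with name and tag, an exposed container port); the names and image are placeholders, not the course's actual manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp             # hypothetical name
spec:
  replicas: 2              # two replicas, as discussed below
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: nginx:latest      # placeholder Docker Hub image (name and tag)
        ports:
        - containerPort: 80      # the port the container exposes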
In this example, we chose two replicas, which can, at the very least,
distribute the traffic load of the application (add more replicas if
there is a high volume of load).
They also ensure the proper functioning of the application:
if one of the two pods has a problem, the other, which is
an identical replica, will keep the application working.
Then, in the containers section, we indicate the image (from the
Docker Hub) with its name and tag.
Finally, the ports property indicates the port that the container will expose.
First example of Kubernetes application deployment
But, for the moment, the application is only accessible inside the cluster.
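To expose it outside the cluster, a NodePort service sketched as follows can be applied with kubectl apply -f; the service name is a placeholder, and the 31000 node port matches the URL used next:
apiVersion: v1
kind: Service
metadata:
  name: webapp-service     # hypothetical name
spec:
  type: NodePort
  selector:
    app: webapp            # matches the deployment's pod labels above
  ports:
  - port: 80
    targetPort: 80
    nodePort: 31000        # the port used in http://localhost:31000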
The execution of the command creates the service within the cluster, and,
to test the application, we open a web browser with the
http://localhost:31000 URL; the application page is displayed.
Automatic deployment of the application in Kubernetes
○ Finally, choose the directory, coming from the artifacts, which contains
the YAML specification files.
● This command displays the list of pods and services that are present
in our AKS Kubernetes cluster.
● We can see our two deployed web application pods and the
NodePort service that exposes our applications outside the cluster.
● Then, we open a web browser with the http://localhost:31000 URL,
and our application is displayed correctly.
Kubernetes Core Concepts
● Kubernetes is built on a few basic objects, each of which can be manipulated
via kubectl and yaml files.
● Common Components
○ When writing Kubernetes .yaml you will always use a "metadata header" that
looks like this:
kind: KIND
apiVersion: VERSION
metadata:
  name: NAME
● Basic Objects
○ Containers
○ A container is a standardized group of software packaged as a unit and
run on a container platform.
Note: containers are always embedded in a pod spec.
name: nginx
image: nginx:1.10
ports:
- containerPort: 80
Kubernetes Core Concepts
● Pods
○ A pod is a logical unit of one or more containers which represents a single
unit of deployment.
kind: Pod
apiVersion: v1
metadata:
  name: my-favorite-pod
  namespace: ckad-notes
  labels:
    app: my-app
spec:
  containers:
  - name: first-container
    image: busybox
    command: ["echo"]
    args: ["Hello World"]
  - name: second-container
    image: nginx
    ports:
    - containerPort: 80
● Services
○ A service is a logical group of pods together with a policy for accessing
them. It is a networking abstraction that allows the containers in a pod to
be accessed via DNS entries and TCP or UDP channels.
kind: Service
apiVersion: v1
metadata:
  name: my-unique-name
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  selector:
    app: random-app
Kubernetes Core Concepts
● Volumes
○ A volume is a local filesystem which is accessible to the containers in a pod
as though it were present in a local directory. The lifetime of a volume is tied
to the lifetime of the pod it is attached to.
○ There is no "generic" volume spec. The specification is highly dependent on
the source of the filesystem, which will probably be a cloud provider.
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
Kubernetes Core Concepts
● Namespaces
o A namespace is a group of distinctly named cluster entities. No two entities of the
same type in the same namespace can share the same name.
o It can be useful to create these for scratch space when playing around.
o One-liner to change namespace:
kubectl config set-context $(kubectl config current-context) --namespace=<insert-namespace-name-here>
● Controllers
o Controllers build on the basic objects and are where the power of Kubernetes lies.
Most allow for horizontal scaling of pods to meet demand.
Kubernetes Core Concepts
● Deployments
o Deployments manage a group of replicated pods which use the same spec.
They can be scaled up and down on demand.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Kubernetes Core Concepts
● StatefulSets
o A StatefulSet is paired with a headless Service (here apiVersion: v1, kind:
Service, name: web), which it references through spec.serviceName; its pod
template exposes containerPort: 80.
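A minimal sketch reconstructed around those surviving fields; the replica count, labels, and image are assumptions:
apiVersion: v1
kind: Service
metadata:
  name: web                # the headless Service referenced by spec.serviceName
spec:
  clusterIP: None          # headless: no cluster IP, DNS resolves to pod IPs
  selector:
    app: web
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web         # ties the pods to the headless Service above
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx       # assumed image
        ports:
        - containerPort: 80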
Kubernetes Core Concepts
● DaemonSet
o A DaemonSet ensures that a copy of a pod runs on every node (or on a
selected set of nodes); the fluentd-elasticsearch log collector named in the
manifest below is the classic example.
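A completion of the fluentd-elasticsearch manifest header from the slide (apiVersion: apps/v1, kind: DaemonSet, name: fluentd-elasticsearch, namespace: kube-system); the selector, template labels, and image tag are assumptions:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2   # assumed image tag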
Kubernetes Core Concepts
● Job
o A Job is a temporary deployment which Kubernetes will start and
ensure that a required number of them successfully terminate.
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4
Kubernetes Health check
● Kubernetes is the leading orchestration platform for containerized applications. To
manage containers effectively, Kubernetes needs a way to check their health to
see if they are working correctly and receiving traffic. Kubernetes uses health
checks—also known as probes—to determine if instances of your app are running
and responsive.
● Probes are the mechanism by which Kubernetes applications expose information
about their internal state for monitoring. They let your cluster monitor the health
of the running pods (containers) and make sure that only the healthy Pods serve
the traffic.
● As you deploy and operate distributed applications, containers are created,
started, run, and terminated. To check a container's health in the different stages
of its lifecycle, Kubernetes uses different types of probes.
Why Probes are Important
Types of Probes
● Liveness probes
○ Allow Kubernetes to check if your app is alive. The kubelet agent that runs on each node uses
the liveness probes to ensure that the containers are running as expected. If a container app is
no longer serving requests, kubelet will intervene and restart the container.
● Readiness probes
○ Readiness probes run during the entire lifecycle of the container. Kubernetes uses this probe to
know when the container is ready to start accepting traffic. If a readiness probe fails,
Kubernetes will stop routing traffic to the pod until the probe passes again.
● Startup probes
○ Startup probes are used to determine when a container application has been initialized
successfully. If a startup probe fails, the pod is restarted. When pod containers take too long to
become ready, readiness probes may fail repeatedly. In this case, containers risk being
terminated by kubelet before they are up and running. This is where the startup probe comes to
the rescue.
Creating Probes
Probes are defined per container in the pod spec; each probe uses one of three
mechanisms: an exec command run inside the container, an httpGet request, or a
tcpSocket connection attempt.
Configure Probes
You can further utilise different options in these probe specs to control the behaviour
of liveness and readiness probes (see the HTTP probe sketch below):
• initialDelaySeconds: Number of seconds after the container has started before liveness or
readiness probes are initiated. Defaults to 0 seconds. Minimum value is 0.
• periodSeconds: How often (in seconds) to perform the probe. Defaults to 10 seconds.
Minimum value is 1.
• timeoutSeconds: Number of seconds after which the probe times out. Defaults to 1
second. Minimum value is 1.
• successThreshold: Minimum consecutive successes for the probe to be considered
successful after having failed. Defaults to 1. Must be 1 for liveness and startup
probes. Minimum value is 1.
• failureThreshold: When a probe fails, Kubernetes will try failureThreshold times before
giving up. Giving up in case of a liveness probe means restarting the container; in case
of a readiness probe, the Pod will be marked Unready. Defaults to 3. Minimum value is 1.
HTTP probes
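A sketch of an HTTP liveness probe, placed under a container spec; the path, port, and timing values are illustrative assumptions, not from the slides:
livenessProbe:
  httpGet:
    path: /healthz        # assumed health endpoint served by the app
    port: 8080            # assumed container port
  initialDelaySeconds: 3  # wait 3s after the container starts
  periodSeconds: 10       # probe every 10s
  timeoutSeconds: 1       # fail the attempt after 1s without a response
  failureThreshold: 3     # restart the container after 3 consecutive failures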
livenessProbe
● Allow Kubernetes to check if your app is alive. The kubelet agent that runs on each node uses the liveness
probes to ensure that the containers are running as expected. If a container app is no longer serving
requests, kubelet will intervene and restart the container.
● For example, if an application is not responding and cannot make progress because of a deadlock, the
liveness probe detects that it is faulty. Kubelet then terminates and restarts the container. Even if the
application carries defects that cause further deadlocks, the restart will increase the container's availability.
It also gives your developers time to identify the defects and resolve them later.
● Liveness probes can catch situations when the application is no longer responding or unable to make
progress and restart it. We address the case of HTTP liveness probes that send a request to the application’s
back-end (e.g., some server) and decide whether the application is healthy based on its response.
● Liveness probes let Kubernetes know if your app is alive or dead. If your app is alive, then Kubernetes leaves
it alone. If your app is dead, Kubernetes removes the Pod and starts a new one to replace it.
● Restart Policy: we can define the restartPolicy in the pod spec to instruct the kubelet about the conditions
required to restart the pod's containers.
○ Always: always restart the pod when it terminates.
○ OnFailure: restart the pod only when it terminates with failure.
○ Never: never restart the pod after it terminates.
livenessProbe: Hands on
● In this yaml file we will define the liveness probe and no restartPolicy.
liveness.yml
kind: Pod
apiVersion: v1
metadata:
  name: liveness-probe
spec:
  containers:
  - name: ubuntu-container
    image: ubuntu
    command:
    - /bin/bash
    - -ec
    - touch /tmp/live; sleep 30; rm /tmp/live; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/live
      initialDelaySeconds: 5
      periodSeconds: 5
Liveness probe: Hands on
Pod configuration
• Creating a container with the ubuntu image
• When the container starts it will create a file /tmp/live, then sleep for
30 seconds, and at last remove the file /tmp/live
• This means the file will be available only for 30 seconds, and after that
it is no longer available in the container
• In the liveness configuration — the probe will try to find the file every
5 seconds, with an initial delay of 5 seconds
initialDelaySeconds: number of seconds the kubelet will wait before
launching the probe
periodSeconds: number of seconds after which the probe will be repeated
periodically
Create the pod:
kubectl create -f liveness.yml
kubectl describe pod liveness-probe
You will see the liveness probe succeed, because the command is executed
successfully.
Now wait for 30 seconds and then run kubectl describe pod liveness-probe
again: once the file is removed, the probe fails and the container is restarted.
readinessProbe
● Readiness Probe indicates whether the Container is ready to service requests. If the readiness
probe fails, the endpoints controller removes the Pod's IP address from the endpoints of all
Services that match the Pod. The default state of readiness before the initial delay is Failure. If a
Container does not provide a readiness probe, the default state is Success.
● Let’s imagine that your app takes a minute to warm up and start. Your service won’t work until it
is up and running, even though the process has started. You will also have issues if you want to
scale up this deployment to have multiple copies. A new copy shouldn’t receive traffic until it is
fully ready, but by default Kubernetes starts sending it traffic as soon as the process inside the
container starts. By using a readiness probe, Kubernetes waits until the app is fully started before
it allows the service to send traffic to the new copy.
● Readiness probes are configured similarly to liveness probes. Readiness probes run on the
container during its whole lifecycle.
● Caution: Incorrect implementation of readiness probes may result in an ever growing number of
processes in the container, and resource starvation if this is left unchecked.
Busybox Container with a Kubernetes Readiness Probe
readiness.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: readiness
  name: readiness-exec
spec:
  containers:
  - name: readiness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 30
    readinessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
kubectl apply -f readiness.yml
kubectl describe pod readiness-exec
kubectl get pod readiness-exec
kubectl get events --watch
After a few seconds, again execute kubectl describe pod readiness-exec;
after 40 seconds, execute it once more.
Pod configuration
● Creating a container with the busybox image
● On startup, the container executes the touch /tmp/healthy command; while
that file exists, the readiness probe succeeds and the kubelet considers the
container ready to accept and serve traffic.
● The container then sleeps for 30 seconds and removes the file, which means
the file is available for only 30 seconds.
● The job of the kubelet here is to run the cat /tmp/healthy command
in the container to execute the readiness probe.
● Below is the output when you perform a kubectl get events --watch command:
● In the readiness configuration — the probe will try to find the file every 5 seconds, with an initial
delay of 5 seconds. Once the file is removed, the probe fails and Kubernetes marks the pod
Unready and stops routing traffic to it (a failing readiness probe does not restart the container).
Creating readiness Probes for a Node.js application
● Step 1: Creating a Node JS app prepared for readiness probes
○ To implement a working readiness probe, we designed a containerized
application capable of processing it. We containerized a simple Node JS web
server with two routes configured to process requests from the readiness
probes. The application was containerized using the Docker container
runtime and pushed to the public Docker repository. The code that implements
basic server functionality and routing is located in the app.js file:
const express = require('express');
const app = express();

app.get('/', (req, res) => {
  res.send(`
    <h1>Kubernetes-Demo app!</h1>
    <p>Try sending a request to /error and see what happens</p>
  `);
});

app.get('/error', (req, res) => {
  process.exit(1);
});

app.get('/healthz', (req, res) => {
  res.send(`
    <h1>Kubernetes-Demo app!</h1>
    <p>health check</p>
  `);
});

app.listen(8080);
Creating readiness Probes for a Node.js application
● We’ve configured three server routes responding to client GET requests. The first one
serves requests to the server’s web root path /, sending a basic greeting from the server.
● The second path, named /error, terminates the process with exit code 1, indicating that
the container shut down and telling kubelet that the application has crashed or deadlocked.
● The third path, named /healthz, returns a 200 HTTP success status, telling a readiness
probe that our application is healthy and running. By default, any HTTP status code
greater than or equal to 200 and less than 400 indicates success; status codes of 400 or
greater indicate failure.
● This application is just a simple example to illustrate how you can configure your
server to respond to readiness/liveness probes. All you need to implement HTTP
readiness probes is to allocate some paths in your application and expose your
server’s port to Kubernetes.
Creating readiness Probes for a Node.js application
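A sketch of a deployment that wires the /healthz route above to an HTTP readiness probe; the names and image are assumptions, not the course's actual manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-demo-app               # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: k8s-demo-app
  template:
    metadata:
      labels:
        app: k8s-demo-app
    spec:
      containers:
      - name: k8s-demo-app
        image: your-dockerhub-user/k8s-demo-app:latest   # placeholder image
        ports:
        - containerPort: 8080      # the port app.listen(8080) uses
        readinessProbe:
          httpGet:
            path: /healthz         # the route defined in app.js
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5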
Startup Probes
● Startup probes are a newly developed feature, supported as a beta in Kubernetes v1.18. These probes
are very useful for slow-starting applications; using one is much better than increasing initialDelaySeconds on
readiness or liveness probes. A startup probe allows our application to become ready; joined with
readiness and liveness probes, it can dramatically increase our applications' availability.
● It is concerned with the application deployed inside the container: it indicates whether that application
started successfully.
● Kubernetes can determine whether your software, which executes within a container inside a pod, has
properly started using startup probes.
● To review, the liveness probe will restart a container when it becomes unresponsive and the
readiness probe is used to decide when a container is ready to start or stop accepting traffic
and startup probe, which allows your containers to notify the Kubernetes when they start up
and are prepared to be evaluated for liveness and readiness.
● Use startup probes to decouple liveness and readiness checks from application initialization and
ultimately make services more reliable.
Creating a Startup Probe
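A sketch consistent with the explanation that follows (periodSeconds: 10 and failureThreshold: 10), placed under a container spec; the probe endpoint and port are assumptions:
startupProbe:
  httpGet:
    path: /healthz        # assumed endpoint
    port: 8080            # assumed port
  periodSeconds: 10       # check every ten seconds...
  failureThreshold: 10    # ...up to ten times (about a hundred seconds to start)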
Creating a Startup Probe
● In the example above, a periodSeconds of 10 and
a failureThreshold of 10 means the container will have up to a
hundred seconds in which to start—up to ten checks with ten
seconds between them. The container will be restarted if the probe
still doesn’t succeed after this time.
● You can use the other config parameters to further tune your probe.
If you know a container has a minimum startup time,
setting initialDelaySeconds will prevent it from being probed
immediately after creation, when you know the check will fail.
Creating a Startup Probe
A pod with a startup probe that will fail: in this case, the probe looks at
/etc/foobar, which doesn’t exist in the container. The probe will run every
ten seconds, as specified by the value of periodSeconds. Up to ten attempts
will be made, as allowed by failureThreshold.
apiVersion: v1
kind: Pod
metadata:
  name: startup-probe-demo
spec:
  containers:
  - name: startup-probe-demo
    image: busybox:latest
    args:
    - /bin/sh
    - -c
    - sleep 300
    startupProbe:
      exec:
        command:
        - cat
        - /etc/foobar
      periodSeconds: 10
      failureThreshold: 10
Creating a Startup Probe
If the container creates /etc/foobar before the last attempt, the probe will succeed, and Kubernetes will begin to direct liveness
and readiness probes to the container. Otherwise, the startup probe will be marked as failed, and the container will be killed.
This event log shows that the startup probe failed because of the
missing /etc/foobar file. After ten attempts, the container’s status changed
to Killing, and a restart was scheduled. Looking for failed startup probe lines
in your pod’s logs will help you find containers that have been restarted for
this reason.
Kubernetes Networking
● The communication between pods, services and external services to the ones in a cluster brings in
the concept of Kubernetes networking.
● Kubernetes networking allows Kubernetes components to communicate with each other
and with other applications. The Kubernetes platform is different from other networking
platforms because it is based on a flat network structure that eliminates the need to map host
ports to container ports.
● When pods are created, they are assigned an IP address. You use this IP to access the pod from
anywhere within the Kubernetes cluster. Containers inside a pod share the same network space,
which means that, within the pod, containers can communicate with each other by using the
localhost address.
● A Kubernetes cluster might be split across different nodes. A node is a physical machine where
resources run. A cluster is a logical view of a set of nodes. These nodes are different machines, but
they work together as a logical unit. This makes it easier to work with different machines at the
same time because you can simply deploy resources to the cluster and not to individual nodes.
Kubernetes Networking Visualized
Networking in a Kubernetes cluster
● There are mainly 4 problems to solve with the networking concepts.
○ Container to container communication
○ Pod to pod Communication
○ Pod to service communication
○ External to service Communication
Container to Container communication on the same pod
Communication between two different Pods within the same machine (Node)
Try Yourself?
Create 2 Pods on the same node (as sketched below).
pod1_nw.yml, pod2_nw.yml
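The contents of pod1_nw.yml and pod2_nw.yml are not included in the slides; minimal sketches (names and images are assumptions) could be:
# pod1_nw.yml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: nginx
    image: nginx

# pod2_nw.yml
apiVersion: v1
kind: Pod
metadata:
  name: pod2
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]   # keep the container running
After both pods are up, kubectl get pods -o wide shows their IPs; from pod2, wget -qO- <pod1-IP> verifies pod-to-pod connectivity on the node.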
Pod-to-Service Communication
● A Kubernetes service maps a single IP address to a pod group to address the issue of a pod's changing address. The service
creates a single endpoint, an immutable IP address and hostname, and forwards requests to a pod in that service.
● Basically, services are a type of resource that configures a proxy to forward requests to a set of pods; the
set that receives traffic is determined by the selector. Once the service is created, it has an assigned IP
address, on which it will accept requests on its port.
● A service in Kubernetes can be considered a virtual load balancer for a set of pods in the cluster, so any request
coming to the Kubernetes cluster will be redirected to pods through a service. The matching between service
and pod is done through a service selector and a pod label. Routing requests between different pods in the
Kubernetes cluster is also done through services.
● Now, there are various service types that give you the option for exposing a service outside of your cluster IP address.
Types of Services
● ClusterIP: This is the default service type which exposes the service on a cluster-internal IP by making the service only
reachable within the cluster.
● NodePort: This exposes the service on each Node’s IP at a static port. A ClusterIP service, to which
the NodePort service will route, is automatically created, and we can contact the NodePort service from
outside the cluster.
● LoadBalancer: This is the service type which exposes the service externally using a cloud provider’s load balancer. So,
the NodePort and ClusterIP services, to which the external load balancer will route, are automatically created.
● ExternalName: This service type maps the service to the contents of the externalName field by returning a CNAME record
with its value.
ClusterIP Service type
The ClusterIP service type is used when we want to expose pods only inside the
Kubernetes cluster, so the pods will not be available to the outside world.
Database pods are usually managed by ClusterIP services, as we don’t want
database endpoints to be exposed to the outside world.
As you can see in the declarative definition for a service below, this service has
a selector of (layer: db), so only pods with the label (layer: db) will match the
service, meaning any request coming to this service will be redirected to one of
these pods:
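A sketch of that definition; the service name and port are assumptions, while the selector follows the slide's description:
apiVersion: v1
kind: Service
metadata:
  name: db-service         # hypothetical name
spec:
  type: ClusterIP          # the default service type
  selector:
    layer: db              # only pods labeled layer: db match this service
  ports:
  - port: 5432             # assumed database port
    targetPort: 5432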
NodePort Service type
The NodePort service type exposes the service to the outside world through
<NodeIP>:<NodePort>. The default range of the nodePort is 30000–32767; it is
an optional field in the definition of the service. Each node of the cluster
proxies that port to the Kubernetes service port.
Don’t get confused here about the difference between port, targetPort and
nodePort: “port” is the port of the service, and “targetPort” is the port of the
pod. The request flow starts at the nodePort, is proxied to the port, and is then
forwarded to the targetPort, as sketched below.
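A sketch of a NodePort definition showing port, targetPort and nodePort (all values illustrative):
apiVersion: v1
kind: Service
metadata:
  name: web-service        # hypothetical name
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80               # port of the service
    targetPort: 8080       # port of the pod
    nodePort: 30080        # in the default 30000–32767 range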
LoadBalancer service type
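As described earlier, this type asks the cloud provider to provision an external load balancer in front of the service; a minimal sketch (names and ports are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: web-lb             # hypothetical name
spec:
  type: LoadBalancer       # the cloud provider allocates an external IP
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080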
ExternalName
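A minimal sketch mapping a service name to an external DNS name via a CNAME record (the external hostname is illustrative):
apiVersion: v1
kind: Service
metadata:
  name: my-database
spec:
  type: ExternalName
  externalName: db.example.com   # resolved via a CNAME record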
Try Yourself?
● Deploying NGINX on a Kubernetes cluster and exposing it:
kubectl create deployment nginx --image=nginx
kubectl create service nodeport nginx --tcp=80:80
kubectl get svc
Kubernetes Service Discovery
● In Kubernetes, an application
deployment consists of a pod or set
of pods. Those pods are dynamic in
nature, meaning that the IP
addresses and ports change
constantly. This constant change
makes service discovery a significant
challenge in the Kubernetes world.
● One way Kubernetes provides service
discovery is through its endpoints
API. With the endpoints API, client
software can discover the IP and
ports of pods in an application.
Try Yourself?
Create a test namespace:
kubectl create ns demo
Create a service object for the
deployment using the kubectl
expose command.
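The exact commands from the slides are missing; hypothetical equivalents, with the deployment name assumed to match the nslookup/curl targets below, would be:
kubectl create deployment nginx-deployment --image=nginx -n demo
kubectl expose deployment nginx-deployment --port=80 -n demo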
Now, let’s create a client pod to
connect to the application deployment.
We will test service discovery by doing an
nslookup on the service name and by looking
at the auto-created environment
variables related to service discovery.
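A hypothetical way to start such a client pod (the image is an assumption; any image with nslookup and curl works):
kubectl run client --rm -it --image=radial/busyboxplus:curl -n demo -- sh
Then, inside the pod: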
nslookup nginx-deployment
curl nginx-deployment
Kubernetes volumes
● Create a Pod that runs two containers. The two containers share a volume
that they can use to communicate.
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: debian-container
    image: debian
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
    command: ["/bin/sh"]
    args: ["-c", "echo Hello from the debian container > /pod-data/index.txt"]
• kubectl apply -f deploy_multicontainer.yaml
• kubectl describe po two-containers
• kubectl exec two-containers -c nginx-container -- /bin/cat
/usr/share/nginx/html/index.txt
• kubectl exec --stdin --tty two-containers -c nginx-container
-- /bin/sh