Kubernetes Commands (CKAD)

The document provides various Kubernetes commands and YAML configurations for creating and managing Pods, Services, Deployments, and other resources. It includes examples of using imperative commands, generating YAML files with dry-run options, and configuring resource limits, probes, and persistent storage. Additionally, it covers core concepts, configuration, multi-container Pods, observability, and sample questions for Kubernetes certification preparation.


Create a Pod in a specific namespace:

kubectl run <pod-name> --image=<image-name> --namespace=<namespace>

Service YAML (redis-service.yaml):

apiVersion: v1
kind: Service
metadata:
  name: redis-service
  namespace: finance
spec:
  selector:
    app: redis
  ports:
  - protocol: TCP
    port: 6379
    targetPort: 6379
  type: ClusterIP

ReplicationController YAML (redis-rc.yaml):

apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-rc
  namespace: finance
spec:
  replicas: 3
  selector:
    app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379

Deployment YAML (redis-deployment.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
  namespace: finance
spec:
  replicas: 3
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379

kubectl run nginx --image=nginx --dry-run=client -o yaml > nginx-pod.yaml

Certification Tip: Imperative Commands


While you would be working mostly the declarative way - using definition files,
imperative commands can help in getting one-time tasks done quickly, as well as in
generating a definition template easily. This can save a considerable amount of
time during your exams.

Before we begin, familiarize yourself with the two options that can come in handy
while working with the below commands:

--dry-run: By default, as soon as the command is run, the resource will be created.
If you simply want to test your command, use the --dry-run=client option. This will
not create the resource; instead, it will tell you whether the resource can be
created and whether your command is correct.

-o yaml: This will output the resource definition in YAML format on the screen.

Use the above two in combination with Linux output redirection to quickly generate
a resource definition file that you can then modify and use to create resources as
required, instead of writing the files from scratch.

kubectl run nginx --image=nginx --dry-run=client -o yaml > nginx-pod.yaml

POD
Create an NGINX Pod

kubectl run nginx --image=nginx

Generate POD Manifest YAML file (-o yaml). Don't create it (--dry-run)

kubectl run nginx --image=nginx --dry-run=client -o yaml

Deployment
Create a deployment

kubectl create deployment --image=nginx nginx


Generate Deployment YAML file (-o yaml). Don't create it (--dry-run)

kubectl create deployment --image=nginx nginx --dry-run=client -o yaml

Generate Deployment with 4 Replicas

kubectl create deployment nginx --image=nginx --replicas=4

You can also scale deployment using the kubectl scale command.

kubectl scale deployment nginx --replicas=4

Another way to do this is to save the YAML definition to a file and modify it:

kubectl create deployment nginx --image=nginx --dry-run=client -o yaml > nginx-deployment.yaml

You can then update the YAML file with the replicas or any other field before
creating the deployment.
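
For example, a trimmed sketch of the generated file with the replicas field edited
(empty generated fields such as creationTimestamp and status are omitted here):

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 4             # edited from the generated value of 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx

kubectl create -f nginx-deployment.yaml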

Service
Create a Service named redis-service of type ClusterIP to expose pod redis on port
6379

kubectl expose pod redis --port=6379 --name redis-service --dry-run=client -o yaml

(This will automatically use the pod's labels as selectors)

Or

kubectl create service clusterip redis --tcp=6379:6379 --dry-run=client -o yaml


(This will not use the pod's labels as selectors; instead, it will assume the
selector app=redis. You cannot pass in selectors as an option, so it does not work
well if your pod has a different label set. Generate the file and modify the
selectors before creating the service.)
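
A trimmed sketch of the generated file with the selector edited to match the pod's
actual labels (tier=db is a hypothetical label used only for illustration):

apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  type: ClusterIP
  selector:
    tier: db            # edited: replace the assumed app=redis with the pod's real labels
  ports:
  - port: 6379
    targetPort: 6379
    protocol: TCP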

Create a Service named nginx of type NodePort to expose pod nginx's port 80 on port
30080 on the nodes:

kubectl expose pod nginx --port=80 --name nginx-service --type=NodePort --dry-run=client -o yaml

(This will automatically use the pod's labels as selectors, but you cannot specify
the node port. You have to generate a definition file and then add the node port
manually before creating the service.)
Or

kubectl create service nodeport nginx --tcp=80:80 --node-port=30080 --dry-run=client -o yaml

(This will not use the pods' labels as selectors)

Both of the above commands have their own challenges. While one of them cannot
accept a selector, the other cannot accept a node port. I would recommend going
with the `kubectl expose` command. If you need to specify a node port, generate a
definition file using the same command and manually input the node port before
creating the service.
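
For example, the file generated by the `kubectl expose` command above can be edited
to pin the node port before creating the service. A trimmed sketch (the run=nginx
selector assumes the pod was created with kubectl run, which applies that label by
default):

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    run: nginx          # copied from the pod's labels by kubectl expose
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080     # added manually; kubectl expose cannot set this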

Reference:

https://kubernetes.io/docs/reference/kubectl/conventions/

---------------------

1. Core Concepts (13%)

Understand Kubernetes API primitives:
- What are Pods, Services, Deployments, and StatefulSets?
- How do you use ConfigMaps and Secrets?
- How do you use Namespaces and Labels?

Create and configure basic Pods:
- How do you create a Pod using kubectl run and kubectl create?
- How do you configure a Pod with environment variables?

2. Configuration (18%)

Understand ConfigMaps:
- How do you create and use ConfigMaps?
- How do you mount a ConfigMap as a volume?

Understand Secrets:
- How do you create and use Secrets?
- How do you mount a Secret as a volume?

Define and create applications using Kubernetes resources:
- How do you use kubectl apply to create resources?
- How do you create and manage Deployments, StatefulSets, and DaemonSets?

3. Multi-Container Pods (10%)

Understand Pod design:
- How do you create multi-container Pods?
- What are the different types of multi-container Pod patterns (e.g., sidecar, ambassador, adapter)?

4. Observability (18%)

Understand Liveness and Readiness Probes:
- How do you configure Liveness and Readiness probes in a Pod?

Understand container logging:
- How do you access logs for a container using kubectl logs?

Understand monitoring and debugging:
- How do you use kubectl describe to inspect resources?
- How do you use kubectl exec to interact with a container?

5. Pod Design (20%)

Understand Labels, Selectors, and Annotations:
- How do you use labels and selectors to organize resources?
- How do you use annotations to store non-identifying metadata?

Understand resource requests and limits:
- How do you set resource requests and limits for Pods?

Understand service discovery and networking:
- How do you create and manage Services?
- What are the different types of Services (ClusterIP, NodePort, LoadBalancer)?

6. State Persistence (8%)

Understand PersistentVolumeClaims for state persistence:
- How do you create and use PersistentVolumes and PersistentVolumeClaims?

7. Services & Networking (13%)

Understand Services:
- How do you create and manage Services (ClusterIP, NodePort, LoadBalancer)?
- How do you use Network Policies to control network access?

Understand Ingress:
- How do you configure and use Ingress resources for HTTP and HTTPS routing?

Sample Questions:

Create a Pod with a specific image:


kubectl run nginx --image=nginx --restart=Never

Expose a Deployment as a Service:

kubectl expose deployment my-deployment --type=NodePort --port=80

Set resource limits on a Pod:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: my-image
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
      requests:
        memory: "64Mi"
        cpu: "250m"

Create a ConfigMap and use it in a Pod:

kubectl create configmap my-config --from-literal=key1=value1

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: my-image
    envFrom:
    - configMapRef:
        name: my-config

Add a Liveness and Readiness probe to a Pod:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: my-image
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
    readinessProbe:
      httpGet:
        path: /readiness
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3

Scale a Deployment to 5 replicas:

kubectl scale deployment redis-deployment --replicas=5

Create an Ingress resource for HTTP and HTTPS routing:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80

Create a Pod with a liveness probe:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-pod
spec:
  containers:
  - name: my-container
    image: nginx
    livenessProbe:
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 3
      periodSeconds: 3

Create a PersistentVolume and PersistentVolumeClaim and mount it in a Pod:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pv-pod
spec:
  containers:
  - name: my-container
    image: nginx
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: pv-storage
  volumes:
  - name: pv-storage
    persistentVolumeClaim:
      claimName: pv-claim

Set resource requests and limits on a Pod:

apiVersion: v1
kind: Pod
metadata:
  name: resource-pod
spec:
  containers:
  - name: my-container
    image: nginx
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"


Get logs from a running Pod:

kubectl logs <pod-name>


Expose a Deployment as a ClusterIP service:

kubectl expose deployment redis-deployment --port=6379 --target-port=6379 --type=ClusterIP

Create a Deployment with 3 replicas using the redis image:

kubectl create deployment redis-deployment --image=redis --replicas=3

Create a Pod with an Nginx container:

kubectl run nginx --image=nginx --restart=Never

Create a Deployment and roll out an update:

kubectl create deployment my-deployment --image=nginx:1.14


kubectl set image deployment/my-deployment nginx=nginx:1.15

Rollback a Deployment update:

kubectl rollout undo deployment/my-deployment

Create a Pod with a readiness probe:


apiVersion: v1
kind: Pod
metadata:
  name: readiness-pod
spec:
  containers:
  - name: my-container
    image: nginx
    readinessProbe:
      httpGet:
        path: /ready
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5

Create a ServiceAccount and use it in a Pod:

kubectl create serviceaccount my-serviceaccount

apiVersion: v1
kind: Pod
metadata:
  name: serviceaccount-pod
spec:
  serviceAccountName: my-serviceaccount
  containers:
  - name: my-container
    image: nginx

Create a Horizontal Pod Autoscaler (HPA):

kubectl autoscale deployment my-deployment --cpu-percent=50 --min=1 --max=5

Create a Custom Resource Definition (CRD):


apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
              image:
                type: string
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
    shortNames:
    - ct

Create a Pod that uses a PersistentVolume and PersistentVolumeClaim:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pvc-pod
spec:
  containers:
  - name: my-container
    image: nginx
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: my-volume
  volumes:
  - name: my-volume
    persistentVolumeClaim:
      claimName: my-pvc

Create a Network Policy to allow traffic only from a specific namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-namespace
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: my-project

Create a DaemonSet to run a Pod on every node:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset-example
spec:
  selector:
    matchLabels:
      name: daemonset-example
  template:
    metadata:
      labels:
        name: daemonset-example
    spec:
      containers:
      - name: daemonset-container
        image: nginx

Create a Pod with a sidecar container:

apiVersion: v1
kind: Pod
metadata:
  name: sidecar-pod
spec:
  containers:
  - name: main-container
    image: nginx
  - name: sidecar-container
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo hello; sleep 10; done"]

Create a Service of type LoadBalancer:

apiVersion: v1
kind: Service
metadata:
  name: loadbalancer-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: LoadBalancer

Create a Pod that prints environment variables:

apiVersion: v1
kind: Pod
metadata:
  name: env-printer
spec:
  containers:
  - name: my-container
    image: busybox
    command: ["sh", "-c", "printenv"]
    env:
    - name: MY_VAR
      value: "Hello"

Create a Pod that has access to the secret data through a Volume:

apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  containers:
  - name: test-container
    image: nginx
    volumeMounts:
    # name must match the volume name below
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  # The secret data is exposed to Containers in the Pod through a Volume.
  volumes:
  - name: secret-volume
    secret:
      secretName: test-secret

kubectl exec -i -t secret-test-pod -- /bin/bash

-------------------------

A quick note on editing Pods and Deployments


Edit a POD
Remember, you CANNOT edit specifications of an existing POD other than the below.

spec.containers[*].image

spec.initContainers[*].image

spec.activeDeadlineSeconds

spec.tolerations

For example, you cannot edit the environment variables, service accounts, or
resource limits (all of which we will discuss later) of a running pod. But if you
really want to, you have two options:

1. Run the kubectl edit pod <pod name> command. This will open the pod
specification in an editor (vi editor). Then edit the required properties. When you
try to save it, you will be denied. This is because you are attempting to edit a
field on the pod that is not editable.

A copy of the file with your changes is saved to a temporary location (the path is
shown in the message when the save is rejected).

You can then delete the existing pod by running the command:

kubectl delete pod webapp

Then create a new pod with your changes using the temporary file

kubectl create -f /tmp/kubectl-edit-ccvrq.yaml

2. The second option is to extract the pod definition in YAML format to a file
using the command

kubectl get pod webapp -o yaml > my-new-pod.yaml

Then make the changes to the exported file using an editor (vi editor). Save the
changes

vi my-new-pod.yaml

Then delete the existing pod

kubectl delete pod webapp

Then create a new pod with the edited file


kubectl create -f my-new-pod.yaml

Edit Deployments
With Deployments, you can easily edit any field/property of the POD template. Since
the pod template is a child of the deployment specification, with every change the
deployment will automatically delete and create a new pod with the new changes. So
if you are asked to edit a property of a POD that is part of a deployment, you may
do that simply by running the command

kubectl edit deployment my-deployment

ConfigMaps
A ConfigMap is an API object used to store non-confidential data in key-value
pairs. Pods can consume ConfigMaps as environment variables, command-line
arguments, or as configuration files in a volume.

A ConfigMap allows you to decouple environment-specific configuration from your
container images, so that your applications are easily portable.
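
As a sketch, mounting a ConfigMap as a volume looks like this (the ConfigMap name
my-config matches the sample questions above; the mount path is an arbitrary
example):

apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-pod
spec:
  containers:
  - name: my-container
    image: nginx
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config      # each key in the ConfigMap becomes a file here
  volumes:
  - name: config-volume
    configMap:
      name: my-config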

Patch a Pod's container resource requests and limits:

kubectl -n qos-example patch pod qos-demo-5 --patch '{"spec":{"containers":[{"name":"qos-demo-ctr-5", "resources":{"requests":{"cpu":"800m"}, "limits":{"cpu":"800m"}}}]}}'

Pod Status and Conditions

A pod has a pod status and some conditions. The pod status tells us where the pod
is in its lifecycle. When a pod is first created, it is in a Pending state. This is
when the scheduler tries to figure out where to place the pod. If the scheduler
cannot find a node to place the pod on, it remains in a Pending state. To find out
why it is stuck in a Pending state, run the kubectl describe pod command and it
will tell you exactly why.

Once the pod is scheduled, it goes into a ContainerCreating status, where the
images required for the application are pulled and the containers start. Once all
the containers in a pod start, it goes into a Running state, where it continues to
be until the program completes successfully or is terminated. You can see the pod
status in the output of the kubectl get pods command. So remember, at any point in
time the pod status can only be one of these values, and it only gives us a
high-level summary of the pod.

However, at times you may want additional information. Conditions complement the
pod status. They are an array of true or false values that tell us the state of a
pod. When a pod is scheduled on a node, the PodScheduled condition is set to true.
When the pod is initialized, the Initialized condition is set to true. We know that
a pod can have multiple containers; when all the containers in the pod are ready,
the ContainersReady condition is set to true, and finally the pod itself is
considered to be Ready. To see the state of the pod conditions, run the kubectl
describe pod command and look for the Conditions section. You can also see the
ready state of the pod in the output of the kubectl get pods command.
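
For example (a minimal sketch of the commands mentioned above):

# Conditions section lists Initialized, Ready, ContainersReady, PodScheduled
kubectl describe pod <pod-name>

# The READY column shows ready containers vs. total containers (e.g. 1/1)
kubectl get pods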


And that is the condition we are interested in for this lecture. The Ready
condition indicates that the application inside the pod is running and is ready to
accept user traffic. What does that really mean?

The containers could be running different kinds of applications. It could be a
simple script that performs a job, a database service, or a large web server
serving front-end users. The script may take a few milliseconds to get ready, the
database service may take a few seconds to power up, and some web servers could
take several minutes to warm up. If you try to run an instance of a Jenkins server,
you will notice that it takes about 10 to 15 seconds for the server to initialize
before a user can access the web UI. Even after the web UI is initialized, it takes
a few seconds for the server to warm up and be ready to serve users. During this
wait period, if you look at the state of the pod, it continues to indicate that the
pod is ready, which is not actually true. So why is that happening, and how does
Kubernetes know whether the application inside the container is actually running or
not?

But before we get into that discussion, why does it matter if the state is reported
incorrectly? Let's look at a simple scenario where you create a pod and expose it
to external users using a service. The service will route traffic to the pod
immediately. The service relies on the pod's Ready condition to route traffic. By
default, Kubernetes assumes that as soon as the container is created, it is ready
to serve user traffic, so it sets the value of the Ready condition for each
container to true. But if the application within the container takes longer to get
ready, the service is unaware of it and sends traffic through, as the container is
already in a ready state, causing users to hit a pod that isn't yet running a live
application.

What we need here is a way to tie the Ready condition to the actual state of the
application inside the container. As the developer of the application, you know
better what it means for the application to be ready. There are different ways you
can define whether an application inside a container is actually ready: you can set
up different kinds of tests, or probes, which is the appropriate term. In the case
of a web application, it could be when the API server is up and running, so you
could run an HTTP test to see if the API server responds. In the case of a
database, you may test to see if a particular TCP socket is listening. Or you may
simply execute a command within the container to run a custom script that exits
successfully if the application is ready.

So how do you configure that test? In the pod definition file, add a new field
called readinessProbe and use the httpGet option; specify the port and the ready
API path. Now, when the container is created, Kubernetes does not immediately set
the Ready condition on the container to true. Instead, it performs the test to see
if the API responds positively. Until then, the service does not forward any
traffic to the pod, as it sees that the pod is not ready.

There are different ways a probe can be configured. For HTTP, use the httpGet
option with the path and the port. For TCP, use the tcpSocket option with the port.
And for executing a command, specify the exec option with the command and options
in an array format. There are some additional options as well. If you know that
your application will take a minimum of, say, 10 seconds to warm up, you can add an
additional delay to the probe with the initialDelaySeconds option. If you'd like to
specify how often to probe, you can do that using the periodSeconds option. By
default, if the application is not ready after three attempts, the probe will stop;
if you'd like to make more attempts, use the failureThreshold option. We will look
through more options in the documentation walkthrough.
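
A sketch of the three probe styles and the options mentioned above (the path,
ports, and command are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: readiness-example
spec:
  containers:
  - name: web
    image: nginx
    readinessProbe:
      httpGet:                  # HTTP test: ready when this endpoint responds positively
        path: /api/ready
        port: 8080
      initialDelaySeconds: 10   # extra delay before the first probe
      periodSeconds: 5          # how often to probe
      failureThreshold: 8       # allow more attempts than the default of 3
      # For a TCP test, replace httpGet with:
      #   tcpSocket:
      #     port: 3306
      # For a command test, replace httpGet with:
      #   exec:
      #     command: ["cat", "/app/is_ready"]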

Finally, let us look at how readiness probes are useful in a multi-pod setup. Say
you have a ReplicaSet or Deployment with multiple pods, and a service serving
traffic to all the pods. There are two pods already serving users. Say you were to
add an additional pod, and let's say the pod takes a minute to warm up. Without the
readiness probe configured correctly, the service would immediately start routing
traffic to the new pod. That will result in service disruption for at least some of
the users. Instead, if the pods were configured with the correct readiness probe,
the service will continue to serve traffic only to the older pods and wait until
the new pod is ready. Once ready, traffic will be routed to the new pod as well,
ensuring no users are affected.
