Kubernetes_Notes
===========
Minikube:
========
Minikube is a single-node cluster that runs all the k8s components on one machine.
Minikube is used for learning and development purposes.
What is kubernetes:
==================
Kubernetes is also called k8s.
k8s is a popular open-source orchestration tool, originally developed by Google and used by them in
production environments.
There are different orchestration technologies such as Docker Swarm, Kubernetes and
Mesos.
Advantages of k8s:
==================
With the help of orchestration our application is highly available, as it is
deployed on multiple instances.
We can scale up the number of pods if demand increases, and also scale up the nodes if we are
out of resources.
Kubernetes Architecture:
=======================
We have two kinds of machines in k8s:
1) Master node
2) Worker node
Master Components:
==================
1) kube-apiserver
2) etcd
3) Controller manager
4) Scheduler
Every k8s manifest has four top-level fields:
apiVersion
kind
metadata
spec
=====================
kind          apiVersion
=====================
Pod           v1
Service       v1
ReplicaSet    apps/v1
Deployment    apps/v1
=====================
apiVersion: v1
kind: Pod
metadata:
  name: firstpod
  labels:
    env: prod
spec:
  containers:
  - name: nginx
    image: nginx
How will we know what fields we can use in metadata and spec?
We can get different options using
kubectl explain pods --recursive
kubectl run nginx --dry-run=client --image=nginx -o yaml
Imperative:
You have to manage the different resources like pods, services, replica sets, etc. on
your own.
Imperative object configuration helps us modify objects directly, but these
changes are not stored in the yaml.
The kubectl create command creates a resource from a file or from stdin. JSON and
YAML formats are accepted.
If the resource already exists, kubectl create will error.
Declarative:
k8s will take care of all the resources; you only have to specify what your
actual requirement is.
Declarative object configuration helps us modify the yaml file and apply it again.
The kubectl apply command applies a configuration to a resource by file name or
stdin. The resource name must be specified in the manifest.
The resource will be created if it doesn't exist yet. If the resource already
exists, this command will not error. JSON and YAML formats are accepted.
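A minimal sketch of both styles, assuming a local pod.yaml file containing the Pod manifest shown below:
# Imperative: create and manage the object directly with commands
kubectl run firstpod --image=nginx
kubectl delete pod firstpod
# Imperative object configuration: create from a file, errors if it already exists
kubectl create -f pod.yaml
# Declarative: desired state lives in the file, apply creates or updates as needed
kubectl apply -f pod.yaml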
apiVersion: v1
kind: Pod
metadata:
  name: firstpod
  labels:
    env: prod
    name: sabair
spec:
  containers:
  - name: nginx
    image: nginx
    env:
    - name: myname
      value: sabair
    - name: City
      value: Hyderabad
    args: ["sleep", "50"]
Note: If we add a second container to this pod, both containers share the same network
namespace and can communicate with each other over localhost.
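A minimal two-container pod sketch for this test; the second container and its name are assumptions, not from the original manifest:
apiVersion: v1
kind: Pod
metadata:
  name: firstpod
spec:
  containers:
  - name: firstcontainer        # assumed name, matches the exec command in step 1
    image: nginx
  - name: secondcontainer       # assumed helper container, shares the pod network
    image: busybox
    command: ["sleep", "3600"]  # keep it running; exec into it with sh (busybox has no bash)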
How to verify?
1) Use the exec command to log in to both the containers:
kubectl exec firstpod -it -c firstcontainer -- bash
2) Check the open ports using netstat -nltp --> we can see port 80 is open from
both containers.
3) Try to open port 8000 using netcat -l -p 8000
4) Verify again using the netstat -nltp command.
5) Use telnet localhost 8000 from the second container and type anything to verify it reaches the listener.
Services in k8s:
===============
Services are used to expose our pods to the outside world.
For example, if we have deployed an apache webserver and want to access it from our
computer/an outside network,
then it is possible with services only.
Services are also objects in k8s, like nodes and pods.
The default service type in k8s is ClusterIP:
if we don't specify a type in spec then it is assumed to be ClusterIP.
ClusterIP:
ClusterIP is used to communicate within the cluster.
We may have some pods running the frontend, backend and database.
In microservices the frontend will be talking to the backend and databases.
We can create a service for the backend and the database and group the pods belonging to each
microservice.
Each service will have an IP, called the ClusterIP, which helps it communicate
with other services.
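A minimal ClusterIP service sketch; the backend pods labelled app: backend and listening on port 8080 are assumptions, not from the original notes:
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  type: ClusterIP            # default type, could also be omitted
  ports:
  - port: 80                 # port exposed by the service inside the cluster
    targetPort: 8080         # container port on the backend pods
  selector:
    app: backend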
NodePort:
NodePort is used to access a pod from outside the cluster, e.g. via a browser.
The nodePort can be assigned in the range 30000 - 32767.
service.yml
===========
apiVersion: v1
kind: Service
metadata:
  name: firstservice
spec:
  type: NodePort
  ports:
  - nodePort: 32000
    port: 9000
    targetPort: 80
  selector:
    type: app
Replication Controller:
======================
The Replication Controller (RC) ensures that the specified number of pods are always running on the
cluster.
If for any reason a pod goes down, the RC will notice it and spin up a new pod
for us.
The RC gets attached to pods based on labels and selectors.
Service.yml
apiVersion: v1
kind: Service
metadata:
  name: firstservice
spec:
  type: NodePort
  ports:
  - nodePort: 32000
    port: 9000
    targetPort: 80
  selector:
    env: prod
This will only delete the RC and the pods will keep running; if one of those pods is then deleted it will
not be restarted, as
we no longer have any RC configured for it.
If we create a new RC with the same label while the existing pods are still
assigned to another RC,
new pods will be created.
spec:
  replicas: 4
  selector:            # selector (not compulsory to pass in RC) should match the pod labels
    env: prod          # if we don't pass the selector field then the template labels are used by default
  template:
    metadata:
      name: firstpod
      labels:
        env: prod
    spec:
      containers:
      - name: firstcontainer
        image: nginx
        env:
        - name: myname
Replica set:
===========
RS and RC serve the same purpose but they are not the same.
RC is the older technology and k8s recommends using RS.
RS is just the updated version of RC.
Let us create two pods, one with the label "prod" and one with the label "test".
rs.yaml:
=======
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: firstrc
  labels:
    appname: testapp
spec:
  replicas: 2
  selector:
    matchExpressions:
    - key: env
      operator: In        # commonly In or NotIn; Exists and DoesNotExist are also allowed
      values:
      - prod
      - test
  template:
    metadata:
      name: firstpod
      labels:
        env: prod
    spec:
      containers:
      - name: firstcontainer
        image: nginx
        env:
        - name: myname
Suppose we now have 3 pods: one with the label "prod", one with the label "test", and one pod with the two
labels "prod" and "backend",
and we want to ignore a specific pod — how can we do that with this RS?
rs.yaml:
=======
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: firstrc
  labels:
    appname: testapp
spec:
  replicas: 3
  selector:
    matchExpressions:
    - key: env
      operator: In
      values:
      - prod
      - test
    - key: type
      operator: NotIn       # ignore the pod with the label backend
      values:
      - backend
  template:
    metadata:
      name: firstpod
      labels:
        env: prod
    spec:
      containers:
      - name: firstcontainer
        image: nginx
        env:
        - name: myname
Deployments:
===========
Deployments are used to deploy, upgrade, undo changes, and pause and resume
rollouts.
Deployments use a rolling update by default, which means the new version slowly replaces
the older version
and users can access our application without downtime.
Deployment strategy:
===================
There are two types of deployment strategy in k8s.
1) Recreate
This will destroy the existing pods and create new ones.
We can see downtime with the Recreate deployment strategy.
2) Rolling update
This is the default deployment strategy.
There is no downtime with the rolling update strategy.
For example, if we have 5 pods then 1 will be destroyed and a newer version will be created in its place.
We can see the difference when we use the kubectl describe deployment deployment_name
command.
In Recreate it will scale the old RS down to 0 and then scale the new one up to 5.
In rolling update it will scale down to 4, bring one new pod up, and so on until all pods are replaced.
spec:
  replicas: 3
  selector:
    matchExpressions:
    - key: env
      operator: In
      values:
      - prod
  template:
    metadata:
      name: firstpod
      labels:
        env: prod
    spec:
      containers:
      - name: firstcontainer
        image: nginx
        env:
        - name: myname
Commands:
kubectl create -f deployment.yml --> create the deployment.
kubectl get deployments --> list the deployments.
kubectl apply -f deployment.yml --> update the deployment.
kubectl rollout status deployment/myapp-deployment --> check the status of the
rollout.
kubectl rollout undo deployment/myapp-deploy --> roll back to the previous version.
Note: a rollout is triggered only when there is a change in the container
configuration (the pod template).
Max Surge: this defines how many pods can exist on top of the total
replicas mentioned in the deployment.
For example, if the replicas in the deployment are set to 3, then when a rolling update
kicks in this property
defines how many extra pods can be created at that point in time.
spec:
  replicas: 6
  minReadySeconds: 30        --> a new pod must stay ready for 30 seconds before it counts as available
  strategy:
    rollingUpdate:           --> strategy used for the deployment
      maxSurge: 2            --> 2 extra pods (replicas 6 and maxSurge 2 means up to 8 pods)
      maxUnavailable: 1      --> at most 1 pod can be unavailable during the update
  selector:
    matchExpressions:
    - key: env
      operator: In
      values:
      - prod
  template:
    metadata:
      name: firstpod
      labels:
        env: prod
    spec:
      containers:
      - name: firstcontainer
        image: nginx
        env:
        - name: myname
Note: maxUnavailable and maxSurge values can also be given as percentages.
Without annotations in deployment.yaml, the rollout history shows revision 1 (the first rollout) with
CHANGE-CAUSE <none>.
We can set a custom change-cause via an annotation:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: firstdeployment
  labels:
    appname: testapp
  annotations:
    kubernetes.io/change-cause: Custom message
spec:
  replicas: 4
  selector:
    matchExpressions:
    - key: env
      operator: In
      values:
      - prod
  template:
    metadata:
      name: firstpod
      labels:
        env: prod
    spec:
      containers:
      - name: firstcontainer
        image: nginx
        env:
        - name: myname
Output:
PS C:\Users\0075\Desktop\k8spractical> kubectl rollout history deployment firstdeployment
deployment.apps/firstdeployment
REVISION  CHANGE-CAUSE
1         Custom message
K8s Rollback:
============
Rollback helps us go back to an older revision of the deployment.
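A minimal sketch of a rollback, assuming the firstdeployment from above already has more than one revision:
kubectl rollout history deployment firstdeployment                  # list the revisions
kubectl rollout undo deployment firstdeployment                     # go back to the previous revision
kubectl rollout undo deployment firstdeployment --to-revision=1     # go back to a specific revision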
Recreate Strategy:
=================
With the Recreate strategy we can see some downtime, as Recreate deletes all
the pods at once and then
creates the new ones.
spec:
  replicas: 6
  strategy:
    type: Recreate           --> Recreate deployment strategy
  selector:
    matchExpressions:
    - key: env
      operator: In
      values:
      - prod
  template:
    metadata:
      name: firstpod
      labels:
        env: prod
    spec:
      containers:
      - name: firstcontainer
        image: nginx
        env:
        - name: myname
Resource Request:
================
When we schedule a pod on a node without any resource request, it will
simply consume whatever resources
are available on the node.
A resource request specifies the RAM and CPU the container asks for.
Note:
CPU:
requests.cpu: the amount of CPU the container requests. It can be
specified in millicores (m) or cores (1).
limits.cpu: the maximum amount of CPU the container can use.
Memory:
requests.memory: the amount of memory the container requests. It can be specified
in bytes, kibibytes (Ki), mebibytes (Mi), or gibibytes (Gi), or the decimal forms K, M and G.
limits.memory: the maximum amount of memory the container can use.
Resource Limits:
===============
A resource limit ensures that the container does not use more than the specified RAM
and CPU.
Example template:
===============
apiVersion: v1
kind: Pod
metadata:
  name: thirdpod
  labels:
    env: prod
spec:
  containers:
  - name: firstcontainer
    image: nginx
    env:
    - name: myname
    resources:
      requests:
        memory: 100M
        cpu: 1
      limits:
        cpu: 2
        memory: 200M
Namespaces in k8s:
==================
In Kubernetes, namespaces provide a way to divide a cluster into virtual partitions
or segments.
They act as a logical boundary that isolates and separates resources within a
cluster.
A cluster can be divided into multiple namespaces, and each namespace
can have its own set of resources.
This helps organize and manage the applications and services running in the cluster.
By default, the pods we create are placed in the "default" namespace.
1) default
Objects are created in this namespace when we don't mention any
namespace.
2) kube-node-lease
A node lease is the mechanism by which worker nodes tell the master about their health status and
that they are ready to take workloads.
The lease should be renewed every 60 seconds; if it is not renewed, the master may
consider the node unhealthy or unresponsive.
3) kube-public
Serves as a central location for storing public resources that need to be accessed
by all users and service accounts within a Kubernetes cluster.
4) kube-system
All the pods and management-related components that make up the cluster itself run in kube-system.
Uses of Namespaces?
1) If multiple teams are working on deployments then we can create separate namespaces
for their projects.
2) Apps for multiple environments can be deployed in different namespaces.
3) We can restrict the number of objects that can be created in a namespace.
4) We can also set resource limits per namespace.
5) We can apply RBAC to namespaces for different users.
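A minimal sketch of working with a namespace; the namespace name test is an assumption, not from the original notes:
kubectl create namespace test                 # create the namespace
kubectl run nginx --image=nginx -n test       # create a pod inside it
kubectl get pods -n test                      # list pods in that namespace only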
Service DNS:
===========
In k8s, the service DNS, also known as service discovery DNS,
is a built-in mechanism that allows communication between services using their
names instead of their
IP addresses. Each service deployed in a Kubernetes cluster is assigned a DNS name
that can be used by
other services to access it.
Use case:
We can use the service DNS to communicate between pods in two different namespaces.
Ex:
Create one pod in the default namespace and attach a NodePort service to it.
Create another pod in a custom namespace, log in to its container and curl the service DNS name.
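The DNS name format is <service>.<namespace>.svc.cluster.local. A sketch under assumptions: the firstservice NodePort service from earlier lives in the default namespace on port 9000, and the pod in the custom test namespace is the nginx pod created above:
kubectl exec -n test -it nginx -- bash                # log in to the pod in the custom namespace
curl firstservice.default.svc.cluster.local:9000      # reach the service in the default namespace by DNS name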
Resource Quota:
===============
After creating a compute-based quota in a namespace, trying to create a pod without any
requests/limits will throw an error:
it is then mandatory to set requests and limits when creating the pod.
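A minimal ResourceQuota sketch; the quota name, namespace and values are assumptions, not from the original notes:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota        # assumed name
  namespace: test            # assumed namespace
spec:
  hard:
    requests.cpu: "1"        # total CPU requests allowed in the namespace
    requests.memory: 1Gi     # total memory requests allowed
    limits.cpu: "2"          # total CPU limits allowed
    limits.memory: 2Gi       # total memory limits allowed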
apiVersion: v1
kind: Pod
metadata:
  name: testpod
spec:
  containers:
  - name: firstcontainer
    image: nginx
    env:
    - name: myname
    resources:
      requests:
        memory: 250Mi
        cpu: 0.1
      limits:
        cpu: 0.5
        memory: 500Mi
Note:
If we haven't set the requests in the pod, then by default they are set to the
values configured in limits.
So we can also run our pod without setting the requests at all.
apiVersion: v1
kind: Pod
metadata:
  name: testpod
spec:
  containers:
  - name: firstcontainer
    image: nginx
    env:
    - name: myname
    resources:
    # requests:
    #   memory: 250Mi
    #   cpu: 0.1
      limits:
        cpu: 0.5
        memory: 500Mi
Adding resource requests and limits to the YAML every time is a headache.
To avoid adding them each time we can use a LimitRange in k8s.
LimitRange in k8s:
==================
The LimitRange resource in Kubernetes allows you to define default and maximum
resource limits for
containers running within a namespace.
It helps ensure resource fairness and prevents containers from consuming excessive
resources.
How to check?
kubectl describe ns test
apiVersion: v1
kind: LimitRange
metadata:
  name: testlimit
spec:
  limits:
  - default:
      cpu: 200m
      memory: 500Mi
    defaultRequest:
      cpu: 100m
      memory: 250Mi
    type: Container
Example:
apiVersion: v1
kind: LimitRange
metadata:
  name: testlimit
spec:
  limits:
  - default:
      cpu: 200m
      memory: 500Mi
    defaultRequest:
      cpu: 100m
      memory: 250Mi
    min:
      cpu: 100m
      memory: 250Mi
    max:
      cpu: 500m
      memory: 700Mi
    type: Container
Now try to create a pod using the yaml below; it will fail because the memory limit (1000Mi) is above the max (700Mi) allowed by the LimitRange.
apiVersion: v1
kind: Pod
metadata:
  name: testpod
spec:
  containers:
  - name: firstcontainer
    image: nginx
    env:
    - name: myname
    resources:
      requests:
        memory: 100Mi
      limits:
        memory: 1000Mi
Config Maps:
===========
ConfigMaps in Kubernetes are used to store non-sensitive configuration data that
can be consumed by pods or other Kubernetes objects. They provide a way to decouple configuration from the container image.
Example:
Create a directory configmaps.
Create a file named application.properties and add all the properties.
Create a configmap using the command below:
kubectl create cm <Config_map_name> --from-file=<file_name>
If we have 50 files, it will be difficult to add them one by one using --from-literal
or --from-file,
so in such cases we can point --from-file at a whole directory instead.
Example:
Create a directory properties.
Create multiple files with any name and add all the properties.
Create a configmap using the command below:
kubectl create cm <Config_map_name> --from-file=<folder_name>
Contents of the two files (test1 and test2):
variable1=value1
variable2=value2
variable1=value1
variable2=value2
The resulting ConfigMap looks like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap
data:
  test1: "#environment variables from file test1\r\n\r\nvariable1=value1\r\nvariable2=value2\r\n"
  test2: "#environment variables from file test2\r\n\r\nvariable1=value1\r\nvariable2=value2"
apiVersion: v1
kind: Pod
metadata:
  name: firstpod
  labels:
    env: prod
spec:
  containers:
  - name: firstcontainer
    image: nginx
    imagePullPolicy: Never       # do not try to pull the image from the internet
    env:
    - name: variablefromcm       # name of the environment variable inside the container
      valueFrom:
        configMapKeyRef:
          key: variable2         # which key's value from the configmap you want to use
          name: env              # configmap name
apiVersion: v1
kind: Pod
metadata:
  name: firstpod
spec:
  containers:
  - name: firstcontainer
    image: nginx
    imagePullPolicy: Never
    volumeMounts:                # mount point of the volume
    - mountPath: "/config"       # directory inside the pod where the variables will be copied
      name: test                 # name of our volume
      readOnly: true
  volumes:
  - name: test                   # volume name
    configMap:
      name: env                  # configmap name
Secrets:
=======
Secrets in k8s are used to store small amounts of sensitive data like passwords.
"Small amount" here means up to 1MB in size.
Whenever we create a secret, the data is base64 encoded and then
stored.
Secrets are of 3 types:
1) docker-registry
2) generic
3) tls
The most commonly used type is generic.
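A minimal sketch of creating the generic secret shown below from the command line (the key names match the manifest; note that base64 is an encoding, not encryption):
echo -n 'sabair' | base64                      # c2FiYWly -- the encoded value stored under the name key
kubectl create secret generic first --from-literal=name=sabair --from-literal=password='<your-password>'
kubectl get secret first -o yaml               # view the base64-encoded data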
apiVersion: v1
kind: Secret
metadata:
  name: first
data:
  name: c2FiYWly
  password: QWxpbmFAMDUwOQ==
apiVersion: v1
kind: Pod
metadata:
  name: firstpod
spec:
  containers:
  - name: firstcontainer
    image: nginx
    imagePullPolicy: Never
    volumeMounts:                # mount point of the volume
    - mountPath: "/secret"       # directory inside the pod where the secret keys will be copied
      name: test                 # name of our volume
      readOnly: true
  volumes:
  - name: test                   # volume name (must match the volumeMounts name above)
    secret:
      secretName: first          # secret name
Taints and Tolerations:
=======================
Example:
Nodes with special hardware: if you have nodes with special hardware (e.g. GPUs),
you want to repel pods that do not need this hardware and attract pods that do need
it.
This can be done by tainting the nodes that have the specialized hardware
(e.g. kubectl taint nodes nodename special=true:NoSchedule) and adding a
corresponding toleration
to the pods that must use this special hardware.
With the Exists operator, the toleration matches if any taint with the key "special" is present, regardless of its value.
apiVersion: v1
kind: Pod
metadata:
  name: firstpod
spec:
  containers:
  - name: firstcontainer
    image: nginx
    imagePullPolicy: Never
  tolerations:
  - key: "special"
    operator: "Exists"           # operator value can be Equal or Exists
    effect: "NoExecute"
    tolerationSeconds: 60        # the pod tolerates the taint for 60 seconds and is then evicted
Node Selector:
=============
The node selector helps us choose the node where a pod can be scheduled.
It works based on the labels attached to the nodes.
How to attach a label to a node?
kubectl label nodes <node_name> env=prod
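A minimal nodeSelector pod sketch, assuming a node that carries the env=prod label from the command above (the pod name is an assumption):
apiVersion: v1
kind: Pod
metadata:
  name: nodeselectorpod          # assumed name, not from the original notes
spec:
  nodeSelector:
    env: prod                    # pod is scheduled only on nodes with this label
  containers:
  - name: firstcontainer
    image: nginx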
Node Affinity:
=============
Node affinity is a set of rules used by the scheduler to determine where a pod can
be placed.
apiVersion: v1
kind: Pod
metadata:
  name: secondpod
spec:
  containers:
  - name: firstcontainer
    image: nginx
    imagePullPolicy: Never
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: env             # label name
            operator: In         # operator can be In, NotIn, Exists, DoesNotExist, Gt or Lt
            values:
            - test               # label value
K8s Volumes:
===========
Volumes in k8s are used to store persistent data.
We have two types of pods: stateful and stateless.
Stateful: in simple terms, stateful applications store data.
Stateless: in simple terms, stateless applications do not store any data.
1) EmptyDir:
===========
An emptyDir volume is created when the pod is created and lives as long as the pod runs on the node;
when the pod is deleted, the emptyDir data is deleted with it.
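A minimal emptyDir pod sketch for the test below; the pod and volume names are assumptions, not from the original notes:
apiVersion: v1
kind: Pod
metadata:
  name: emptydirpod              # assumed name
spec:
  containers:
  - name: firstcontainer
    image: nginx
    volumeMounts:
    - mountPath: /data           # directory inside the container backed by the emptyDir
      name: first-volume
  volumes:
  - name: first-volume
    emptyDir: {}                 # lives as long as the pod lives on the node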
How to test:
1) Log in to the container and create some random files in the /data location.
2) Create some random files in the /tmp location.
3) Stop the nginx service to terminate the container.
4) The pod will create a new container; log in to the new container.
5) Validate the files in both the locations.
6) The files in the /data directory will still be visible.
Note:
The problem with emptyDir is that if the pod gets deleted we lose all our
data.
2) Hostpath:
===========
With hostPath we create the volume on the host, which means the volume lives outside the pod.
If the pod gets deleted and a new pod is created on the same node, it can still access the volume
on the host.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: firstpod
spec:
  containers:
  - name: firstcontainer
    image: nginx
    volumeMounts:
    - mountPath: /data           # directory inside the container
      name: first-volume         # any logical name
  volumes:
  - name: first-volume
    hostPath:
      path: /tmp/data            # path on the host machine (minikube)
How to test:
1) Log in to the container and create some random files in the /data location.
2) Log in to minikube and check that the files are available in /tmp/data.
3) Now delete the pod and create a new pod.
4) Log in to the new pod and check that the files are available in the /data directory.
Note:
If we have multiple nodes, the other nodes will not be able to access a hostPath volume
created on one node.
3) Amazon Elastic Block Storage:
===============================
If we have a multi-node k8s cluster, then we need to keep our volume
outside the cluster.
If our pod gets scheduled on another node, the volume should also move to that
node.
We have already set up our 3-node k8s cluster.
1) Log in to AWS and create an EBS volume in the same region as the cluster.
2) Create one IAM role for the aws-ebs-csi-driver:
search for AmazonEBSCSIDriverPolicy, create a role with that policy and attach the role to all the
nodes in the k8s cluster.
Then create a PersistentVolume (PV) that references the EBS volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-ebs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: aws-ebs
  awsElasticBlockStore:
    volumeID: <your-ebs-volume-id>     # EBS volume ID
    fsType: ext4
5) We need to create a PersistentVolumeClaim (PVC) for our EBS PV.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-ebs-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: aws-ebs
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
    volumeMounts:
    - name: data-volume
      mountPath: /data                 # directory where we can write data
  volumes:
  - name: data-volume
    persistentVolumeClaim:
      claimName: my-ebs-pvc            # PVC name
How to test:
1) Once the pod is created, check the volume in AWS; it will show an "in-use" status.
2) Delete the pod and check the status of the volume again.
3) Create a new pod again; this time, if it is scheduled on another node, the
volume will be attached to that node.
Note:
Here we can map multiple pods to the volume over time, but we cannot map two pods to the same
volume at the same point in time.
Access Modes:
1)ReadWriteOnce (RWO):
This mode allows the volume to be mounted as read-write by a single Node.
This is the mode used for the EBS volume above.
2)ReadOnlyMany (ROX):
This mode allows the volume to be mounted as read-only by multiple Nodes
simultaneously.
It is suitable for scenarios where multiple Nodes need read-only access to the
volume,
such as for shared logs or configuration files.
3)ReadWriteMany (RWX):
This mode allows the volume to be mounted as read-write by multiple Nodes
simultaneously.
It is suitable for scenarios where multiple Nodes need read-write access to the
volume,
such as shared file systems or distributed applications.
Reclaim Policies:
1)Retain:
With this policy, the PV is not automatically deleted when the PVC is deleted; the data is kept
and the PV must be reclaimed manually.
2)Delete:
With this policy, the PV is automatically deleted and the underlying storage
resources are released when the
PVC is deleted.
The data in the PV will be permanently lost.
3)Recycle (deprecated):
This policy is deprecated and is not recommended for use.
With this policy, the volume's contents are scrubbed with a basic cleanup when the claim is released,
and the PV is then made available again for a new PVC.
4) Amazon Elastic File System (EFS):
===================================
Create a PV backed by the EFS file system (via the EFS CSI driver):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc                 # storage class name
  csi:
    driver: efs.csi.aws.com                # EFS CSI driver name
    volumeHandle: <efs-file-system-id>::<efs-access-point-id>   # file system id and access point id
5) We need to create a PersistentVolumeClaim (PVC) for our EFS PV.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-pvc
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
apiVersion: v1
kind: Pod
metadata:
  name: efs-pod
spec:
  containers:
  - name: my-app
    image: nginx
    volumeMounts:
    - name: efs-volume
      mountPath: /efs-mount              # mount directory inside the container
  volumes:
  - name: efs-volume
    persistentVolumeClaim:
      claimName: efs-pvc                 # PVC name
How to test:
1) Create one pod, log in to it and create some random files in the /efs-mount
directory.
2) Create one more pod and check whether the files are available.
3) We can now connect multiple pods to the same volume.
Note:
Here we can map multiple pods to the same EFS volume.
EFS costs roughly three times more than EBS.
DaemonSets:
===========
A DaemonSet ensures that a copy of the pod runs on every node in the cluster (for example log collectors or monitoring agents).
Example:
========
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-daemonset
spec:
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
      - name: samplenginx
        image: nginx
Liveness Probe:
==============
A liveness probe can be used in the pod configuration to check the health of the container.
If the container is not healthy for any reason, the liveness probe will cause it to be restarted.
Example:
=======
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-app
    image: nginx                       # image name
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:                         # GET request; we can also use tcpSocket or exec
        path: /                        # health check path
        port: 80
      initialDelaySeconds: 15          # wait 15 seconds before the first liveness check
      periodSeconds: 10                # check liveness every 10 seconds
How to test?
Create a pod using the yaml above and use kubectl describe to check the liveness events,
or create a yaml with the wrong container port, as below.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-app
    image: nginx                       # image name
    ports:
    - containerPort: 8080
    livenessProbe:
      httpGet:                         # GET request; we can also use tcpSocket or exec
        path: /                        # health check path
        port: 8080                     # wrong port: nginx listens on 80, so the probe fails
      initialDelaySeconds: 15          # wait 15 seconds before the first liveness check
      periodSeconds: 10                # check liveness every 10 seconds
Readiness Probe:
===============
A readiness probe can be used in the pod configuration to check whether the application running in
the pod is ready to serve traffic.
If the pod is not ready for any reason, it is removed from the service
endpoints and shows a "not ready" status.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-app
    image: nginx
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /index.html
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 5
How to test?
Log in to the pod and delete the index.html file.
Execute kubectl get pods --watch
We can now see the pod in a not-ready status (READY 0/1).
Ingress:
========
Ingress helps us expose applications and manage
external access by providing HTTP/HTTPS routing rules to the services
within a k8s cluster.
Ingress controller:
==================
An Ingress controller is essentially a load balancer / reverse proxy that is responsible for implementing and managing
Ingress resources.
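On minikube (which these notes use), the NGINX ingress controller can be enabled as an addon; a minimal sketch:
minikube addons enable ingress
kubectl get pods -n ingress-nginx      # verify the controller pod is running (namespace may differ by minikube version)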
Deployment.yml:
===============
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
Service.yml
===========
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30001
  type: NodePort
Ingress.yml
===========
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80
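A sketch of how this Ingress could be tested, assuming the ingress controller above is running on minikube; the Host header matches the example.com rule without editing /etc/hosts:
curl http://$(minikube ip)/ -H "Host: example.com"     # request is routed to nginx-service via the Ingress rule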