Mar 2023 7-40AM K8s Running Notes
Installation
============
kubeadm --> We can set up a multi-node K8s cluster using kubeadm.
kubespray --> We can set up a multi-node K8s cluster using kubespray (Ansible playbooks are used internally by kubespray).
KOPS --> Kubernetes Operations is software with which we can create production-ready, highly available Kubernetes clusters in a cloud such as AWS. KOPS leverages cloud services like AWS Auto Scaling & Launch Configurations to set up the K8s master & workers. It creates 2 ASGs & Launch Configs, one for the master and one for the workers. These Auto Scaling Groups manage the EC2 instances.
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-strong-getting-started-strong-
Namespaces
==========
ex:
# Create namespace using command (imperative)
kubectl create ns test-ns

# Namespace manifest (declarative)
apiVersion: v1
kind: Namespace
metadata:
  name: <namespaceName>
  labels: # Labels are key/value pairs (metadata)
    <key>: <value>
# Example
apiVersion: v1
kind: Namespace
metadata:
  name: test-ns
  labels:
    team: testingteam
# Command to apply
kubectl apply -f <fileName>.yaml
ex:
# If we don't mention a namespace, the Pod will be created in the default (current) namespace.
kubectl run nginxpod --image=nginx --labels=app=nginx --port=80
# Pod manifest
apiVersion: v1
kind: Pod
metadata:
  name: <podName>
  labels:
    <key>: <value>
  namespace: <namespaceName>
spec:
  containers:
  - name: <nameOfTheContainer>
    image: <imageName>
    ports:
    - containerPort: <portOfContainer>
Example Pod
-----------
apiVersion: v1
kind: Pod
metadata:
  name: javawebapp
  labels:
    app: javawebapp
  namespace: test-ns
spec:
  containers:
  - name: javawebapp
    image: dockerhandson/java-web-app:1
    ports:
    - containerPort: 8080
Another Example
---------------
apiVersion: v1
kind: Pod
metadata:
  name: mavenwebapppod
  namespace: test-ns
  labels:
    app: mavenwebapp
spec:
  containers:
  - name: mavenwebapp
    image: dockerhandson/maven-web-application:1
    ports:
    - containerPort: 8080
ex:
kubectl get pods -n test-ns
POD --> A Pod is the smallest building block we can deploy in K8s. A Pod represents a running process. A Pod contains one or more containers; these containers share the same network, storage, and any other specifications. Each Pod gets a unique IP address in the K8s cluster.
Pods
Single-container pods --> the Pod has only one container.
Multi-container pods --> the Pod has more than one container, as in the example below.
ex:
apiVersion: v1
kind: Pod
metadata:
  name: nodeapppod
  labels:
    app: nodeapp
spec:
  containers:
  - name: nodeapp
    image: dockerhandson/nodejs-app-mss:2
    ports:
    - containerPort: 9981
  - name: nginxapp
    image: nginx
    ports:
    - containerPort: 80
Service
=======
apiVersion: v1
kind: Service
metadata:
  name: <serviceName>
  namespace: <namespace>
spec:
  type: <ClusterIP/NodePort>
  selector:
    <key>: <value>
  ports:
  - port: <servicePort> # defaults to 80
    targetPort: <containerPort>
Example
-------
apiVersion: v1
kind: Pod
metadata:
  name: nodeapp
  labels:
    app: nodeapp
spec:
  containers:
  - name: nodeapp
    image: dockerhandson/node-app-mss:1
    ports:
    - containerPort: 9981
---
apiVersion: v1
kind: Service
metadata:
  name: nodeappsvc
spec:
  type: NodePort
  selector:
    app: nodeapp
  ports:
  - port: 80
    targetPort: 9981
Example:
--------
apiVersion: v1
kind: Pod
metadata:
  name: mavenwebapppod
  labels:
    app: mavenwebapp
spec:
  containers:
  - name: mavenwebapp
    image: dockerhandson/maven-web-application:1
    ports:
    - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: mavenwebappsvc
spec:
  type: ClusterIP
  selector:
    app: mavenwebapp
  ports:
  - port: 80
    targetPort: 8080
apiVersion: v1
kind: Pod
metadata:
  name: javawebapp
  labels:
    app: javawebapp
  namespace: test-ns
spec:
  containers:
  - name: javawebapp
    image: dockerhandson/java-web-app:1
    ports:
    - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: javawebappsvc
  namespace: test-ns
spec:
  type: NodePort
  selector:
    app: javawebapp
  ports:
  - port: 80
    targetPort: 8080
What is FQDN?
Fully Qualified Domain Name.
If one Pod needs access to a service that is in a different namespace, we have to use the FQDN of the service.
Syntax: <serviceName>.<namespace>.svc.cluster.local
ex: mavenwebappsvc.test-ns.svc.cluster.local
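The FQDN pattern above can be sketched with a quick shell one-liner (the service and namespace names are the ones from the examples in these notes):

```shell
# Build the in-cluster FQDN of a service:
# <serviceName>.<namespace>.svc.cluster.local
service=mavenwebappsvc
namespace=test-ns
printf '%s.%s.svc.cluster.local\n' "$service" "$namespace"
```

Inside the cluster, you can verify resolution by running nslookup against that name from any Pod.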
We should not create Pods directly for deploying applications: if the node on which the Pods are running goes down, the Pods will not be rescheduled.
We have to create Pods with the help of controllers, which manage the Pod lifecycle.
Kubernetes pods have a defined lifecycle. For example, once a pod is running in
your cluster then a critical fault on the node where that pod is running means that
all the pods on that node fail. Kubernetes treats that level of failure as final:
you would need to create a new Pod to recover, even if the node later becomes
healthy.
However, to make life considerably easier, you don't need to manage each Pod
directly. Instead, you can use workload resources that manage a set of pods on your
behalf. These resources configure controllers that make sure the right number of
the right kind of pods are running, to match the state you specified.
StatefulSet lets you run one or more related Pods that do track state somehow. For
example, if your workload records data persistently, you can run a StatefulSet that
matches each Pod with a PersistentVolume. Your code, running in the Pods for that
StatefulSet, can replicate data to other Pods in the same StatefulSet to improve
overall resilience.
Controllers
===========
ReplicationController
ReplicaSet
DaemonSet
Deployment
StatefulSet
# ReplicationController
apiVersion: v1
kind: ReplicationController
metadata:
  name: <replicationControllerName>
  namespace: <namespaceName>
spec:
  replicas: <noOfReplicas>
  selector:
    <key>: <value>
  template: # Pod template
    metadata:
      name: <podName>
      labels:
        <key>: <value>
    spec:
      containers:
      - name: <nameOfTheContainer>
        image: <imageName>
        ports:
        - containerPort: <containerPort>
Example:
========
apiVersion: v1
kind: ReplicationController
metadata:
  name: mavenwebapprc
  namespace: test-ns
spec:
  replicas: 2
  selector:
    app: mavenwebapp
  template:
    metadata:
      labels:
        app: mavenwebapp
    spec:
      containers:
      - name: mavenwebapp
        image: dockerhandson/maven-web-application:1
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: mavenwebappsvc
  namespace: test-ns
spec:
  type: NodePort
  selector:
    app: mavenwebapp
  ports:
  - port: 80
    targetPort: 8080
# Another Application
----------------------
apiVersion: v1
kind: ReplicationController
metadata:
  name: pythonrc
  namespace: test-ns
spec:
  replicas: 2
  selector:
    app: pythonapp
  template: # Pod template
    metadata:
      name: pythonapppod
      labels:
        app: pythonapp
    spec:
      containers:
      - name: pythonappcontainer
        image: dockerhandson/python-flask-api:1
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: pythonappsvc
  namespace: test-ns
spec:
  type: NodePort
  selector:
    app: pythonapp
  ports:
  - port: 80
    targetPort: 5000
ReplicaSet:
It's the next generation of ReplicationController. Both manage Pod replicas. The only difference as of now is selector support.
RS --> supports equality-based selectors and also set-based selectors.
Set based:
key in (value1,value2,value3)
key notin (value1)
selector:
  matchLabels: # equality based
    key: value
  matchExpressions: # set based
  - key: app
    operator: In
    values:
    - javawebapp
# Manifest file: RS
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: <rsName>
spec:
  replicas: <noOfPodReplicas>
  selector: # to match Pod labels
    matchLabels: # equality-based selector
      <key>: <value>
    matchExpressions: # set-based selector
    - key: <key>
      operator: <In/NotIn>
      values:
      - <value1>
      - <value2>
  template:
    metadata:
      name: <podName>
      labels:
        <key>: <value>
    spec:
      containers:
      - name: <nameOfTheContainer>
        image: <imageName>
        ports:
        - containerPort: <containerPort>
Example:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: javawebapp
  namespace: test-ns
spec:
  replicas: 2
  selector:
    matchLabels:
      app: javawebapp
  template:
    metadata:
      name: javawebapp
      labels:
        app: javawebapp
    spec:
      containers:
      - name: javawebapp
        image: dockerhandson/java-web-app:2
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: javawebappsvc
  namespace: test-ns
spec:
  type: NodePort
  selector:
    app: javawebapp
  ports:
  - port: 80
    targetPort: 8080
Another Example:
----------------
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pythonapp
  namespace: test-ns
spec:
  replicas: 2
  selector:
    matchLabels:
      app: pythonapp
  template:
    metadata:
      labels:
        app: pythonapp
    spec:
      containers:
      - name: pythonapp
        image: dockerhandson/python-flask-app:1
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: pythonappsvc
  namespace: test-ns
spec:
  type: NodePort
  selector:
    app: pythonapp
  ports:
  - port: 80
    targetPort: 5000
kubectl get rs
kubectl get rs -n <namespace>
kubectl get all
kubectl scale rs <rsName> --replicas <noOfReplicas>
kubectl describe rs <rsName>
kubectl delete rs <rsName>
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: <dsName>
spec:
  selector: # to match Pod labels
    matchLabels: # equality-based selector
      <key>: <value>
    matchExpressions: # set-based selector
    - key: <key>
      operator: <In/NotIn>
      values:
      - <value1>
      - <value2>
  template:
    metadata:
      name: <podName>
      labels:
        <key>: <value>
    spec:
      containers:
      - name: <nameOfTheContainer>
        image: <imageName>
        ports:
        - containerPort: <containerPort>
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginxds
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
kubectl get ds
kubectl get ds -n <namespace>
kubectl get all
# Deployment: Recreate
---------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: javawebappdeployment
  namespace: test-ns
spec:
  replicas: 2
  selector:
    matchLabels:
      app: javawebapp
  strategy:
    type: Recreate
  template:
    metadata:
      name: javawebapppod
      labels:
        app: javawebapp
    spec:
      containers:
      - name: javawebappcontainer
        image: dockerhandson/java-web-app:1
        ports:
        - containerPort: 8080
# Rolling Update
----------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: javawebappdeployment
  namespace: test-ns
spec:
  replicas: 2
  selector:
    matchLabels:
      app: javawebapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  minReadySeconds: 30 # spec-level field, not part of strategy
  template:
    metadata:
      name: javawebapppod
      labels:
        app: javawebapp
    spec:
      containers:
      - name: javawebappcontainer
        image: dockerhandson/java-web-app:1
        ports:
        - containerPort: 8080
It's also possible for applications to take up more resources than they should. This could be caused by a team spinning up more replicas than they need to artificially decrease latency, or by a bad configuration change that causes a program to go out of control and use 100% of the available CPU or memory. Regardless of whether the issue is caused by a bad developer, bad code, or bad luck, the best practice to avoid such issues is to use resource requests & limits to control resource allocation & consumption.
Requests and limits are the mechanisms Kubernetes uses to control resources such as
CPU and memory. Requests are what the container is guaranteed to get. If a
container requests a resource, Kubernetes will only schedule it on a node that can
give it that resource. Limits, on the other hand, make sure a container never goes
above a certain value. The container is only allowed to go up to the limit, and
then it is restricted.
It is important to remember that the limit can never be lower than the request. If
you try this, Kubernetes will throw an error and won’t let you run the container.
Requests and limits are on a per-container basis. While Pods usually contain a
single container, it’s common to see Pods with multiple containers as well. Each
container in the Pod gets its own individual limit and request, but because Pods
are always scheduled as a group, you need to add the limits and requests for each
container together to get an aggregate value for the Pod.
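As a quick sketch of that aggregation (the numbers here are hypothetical, not taken from any manifest in these notes): a Pod with two containers requesting 200m and 100m of CPU has an aggregate request of 300m, which is the amount the scheduler must find free on a single node.

```shell
# Aggregate CPU request for a two-container Pod (hypothetical values, in millicores)
c1=200
c2=100
echo "$((c1 + c2))m"   # the scheduler needs a node with at least 300m free
```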
Resource request:
-----------------
A request is the amount of that resources that the system will guarantee for the
container, and Kubernetes will use this value to decide on which node to place the
pod.
Resource Limit:
---------------
A limit is the maximum amount of resources that Kubernetes will allow the container
to use.
ex:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mavenwebappdeployment
  namespace: test-ns
spec:
  replicas: 2
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: mavenwebapp
  template:
    metadata:
      name: mavenwebappod
      labels:
        app: mavenwebapp
    spec:
      containers:
      - name: mavenwebapp
        image: dockerhandson/maven-web-application:22
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: 200m
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
  name: mavenwebappsvc
  namespace: test-ns
spec:
  type: NodePort
  selector:
    app: mavenwebapp
  ports:
  - port: 80
    targetPort: 8080
Another App:
-----------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodeapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nodeapp
  template:
    metadata:
      labels:
        app: nodeapp
    spec:
      containers:
      - name: nodeapp
        image: dockerhandson/node-app-mss:1
        ports:
        - name: nodeappport
          containerPort: 9981
        resources:
          requests:
            cpu: 200m
            memory: 256Mi
          limits:
            cpu: 0.5
            memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
  name: nodeappsvc
spec:
  type: NodePort
  selector:
    app: nodeapp
  ports:
  - port: 80
    targetPort: nodeappport # a named container port can be used as targetPort
POD AutoScaler
==============
Pod autoscaling --> Kubernetes Pod autoscaling will make sure you have a minimum number of Pod replicas available at any time, and based on the observed CPU/memory utilization of the Pods it can scale the Pods. The HPA will scale Pod replicas of a Deployment/ReplicaSet/ReplicationController up or down based on observed CPU & memory utilization against the target specified.
AWS AutoScaling --> it will make sure you have enough nodes (servers). It always maintains the minimum number of nodes, and based on the observed CPU/memory utilization of the nodes it can scale the nodes.
Note: deploy the metrics server as a K8s addon; it fetches the metrics the HPA needs. Follow the below links to deploy the metrics server. Once deployed, you can verify it with: kubectl top nodes / kubectl top pods -n <namespace>
https://github.com/MithunTechnologiesDevOps/metrics-server
https://github.com/kubernetes-sigs/metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpadeployment
spec:
  replicas: 2
  selector:
    matchLabels:
      name: hpapod
  template:
    metadata:
      labels:
        name: hpapod
    spec:
      containers:
      - name: hpacontainer
        image: k8s.gcr.io/hpa-example
        ports:
        - name: http
          containerPort: 80
        resources:
          requests:
            cpu: "100m"
            memory: "64Mi"
          limits:
            cpu: "100m"
            memory: "256Mi"
---
apiVersion: v1
kind: Service
metadata:
  name: hpaclusterservice
  labels:
    name: hpaservice
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    name: hpapod
  type: NodePort
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hpadeploymentautoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hpadeployment
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 40
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 40
# Create a temp Pod using the below command interactively and increase the load on the demo app by accessing the service (this is the standard load-generator approach from the Kubernetes HPA walkthrough):
kubectl run -i --tty load-generator --rm --image=busybox -- /bin/sh -c "while true; do wget -q -O- http://hpaclusterservice; done"
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: javawebappdeploymenthpa
  namespace: test-ns
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: javawebappdeployment
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 90
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 90
---
apiVersion: v1
kind: Service
metadata:
  name: javawebappsvc
  namespace: test-ns
spec:
  type: NodePort
  selector:
    app: javawebapp
  ports:
  - port: 80
    targetPort: 8080
apiVersion: apps/v1
kind: Deployment
metadata:
  name: springapp
  namespace: test-ns
spec:
  replicas: 2
  selector:
    matchLabels:
      app: springapp
  template:
    metadata:
      labels:
        app: springapp
    spec:
      containers:
      - name: springappcontainer
        image: dockerhandson/spring-boot-mongo:1
        resources:
          requests:
            cpu: 200m
            memory: 256Mi
          limits:
            memory: "512Mi"
            cpu: "500m"
        ports:
        - containerPort: 8080
        env:
        - name: MONGO_DB_HOSTNAME
          value: mongosvc
        - name: MONGO_DB_USERNAME
          value: devdb
        - name: MONGO_DB_PASSWORD
          value: devdb@123
---
apiVersion: v1
kind: Service
metadata:
  name: springappsvc
  namespace: test-ns
spec:
  type: NodePort
  selector:
    app: springapp
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: mongo
  namespace: test-ns
spec:
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongo
        image: mongo
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          value: devdb
        - name: MONGO_INITDB_ROOT_PASSWORD
          value: devdb@123
        volumeMounts:
        - name: mongodbhostvol
          mountPath: /data/db
      volumes:
      - name: mongodbhostvol
        hostPath:
          path: /mongodata
---
apiVersion: v1
kind: Service
metadata:
  name: mongosvc
  namespace: test-ns
spec:
  type: ClusterIP
  selector:
    app: mongo
  ports:
  - port: 27017
    targetPort: 27017
Step 1: update the package index so we install the latest available versions from the Ubuntu repositories, then install the NFS Kernel Server (the commands below assume Ubuntu):
sudo apt update
sudo apt install nfs-kernel-server
Step 2: export the share by adding an entry for the shared directory to the exports file, then re-export:
sudo vi /etc/exports
sudo exportfs -a
PVC
If a Pod requires access to storage (a PV), it gets access using a PVC. The PVC will be bound to a PV.
Access modes:
RWO - ReadWriteOnce
ROX - ReadOnlyMany
RWX - ReadWriteMany
Claim Policies
A Persistent Volume can have one of several reclaim policies associated with it, including Retain, Recycle, and Delete.
Commands
kubectl get pv
kubectl get pvc
kubectl get storageclass
kubectl describe pvc <pvcName>
kubectl describe pv <pvName>
https://github.com/MithunTechnologiesDevOps/Kubernates-Manifests/tree/master/pv-pvc
Static Volumes
1) Create PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-hostpath # PVs are cluster-scoped; they do not take a namespace
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mongodata"
2) Create PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongopvc
  namespace: test-ns
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
Commands
========
kubectl get pv
kubectl get pvc
Note: configure a StorageClass for dynamic volumes based on your infrastructure, and make it the default storage class.
NFS Provisioner
Prerequisites:
1) NFS server
2) Install the NFS client software on all K8s nodes.
https://raw.githubusercontent.com/MithunTechnologiesDevOps/Kubernates-Manifests/master/pv-pvc/nfsstorageclass.yml
Update your NFS server IP address (you need to update the IP address in 2 places) and the path of the NFS share, then apply.
Dynamic Volumes
https://raw.githubusercontent.com/MithunTechnologiesDevOps/Kubernates-Manifests/master/SpringBoot-Mongo-DynamicPV.yml
1) Create a PVC (if we don't mention a storageclass name it will use the default storage class which is configured). It will create the PV.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-nfs-pvc
  namespace: test-ns
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: mongodb
  namespace: test-ns
spec:
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      name: mongodbpod
      labels:
        app: mongodb
    spec:
      volumes:
      - name: mongodb-pvc
        persistentVolumeClaim:
          claimName: mongodb-nfs-pvc
      containers:
      - name: mongodbcontainer
        image: mongo
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          value: devdb
        - name: MONGO_INITDB_ROOT_PASSWORD
          value: devdb@123
        volumeMounts:
        - name: mongodb-pvc
          mountPath: /data/db
# Complete manifest: a single yml defining the Deployment & Service for the Spring app, a PVC (with the NFS dynamic StorageClass), and a ReplicaSet & Service for Mongo.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: springappdeployment
  namespace: test-ns
spec:
  replicas: 2
  selector:
    matchLabels:
      app: springapp
  template:
    metadata:
      name: springapppod
      labels:
        app: springapp
    spec:
      containers:
      - name: springappcontainer
        image: dockerhandson/spring-boot-mongo
        ports:
        - containerPort: 8080
        env:
        - name: MONGO_DB_USERNAME
          value: devdb
        - name: MONGO_DB_PASSWORD
          value: devdb@123
        - name: MONGO_DB_HOSTNAME
          value: mongosvc
---
apiVersion: v1
kind: Service
metadata:
  name: springapp
  namespace: test-ns
spec:
  selector:
    app: springapp
  ports:
  - port: 80
    targetPort: 8080
  type: NodePort
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodbpvc
  namespace: test-ns
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: mongodb
  namespace: test-ns
spec:
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      name: mongodbpod
      labels:
        app: mongodb
    spec:
      volumes:
      - name: pvc
        persistentVolumeClaim:
          claimName: mongodbpvc
      containers:
      - name: mongodbcontainer
        image: mongo
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          value: devdb
        - name: MONGO_INITDB_ROOT_PASSWORD
          value: devdb@123
        volumeMounts:
        - name: pvc
          mountPath: /data/db
---
apiVersion: v1
kind: Service
metadata:
  name: mongosvc
  namespace: test-ns
spec:
  type: ClusterIP
  selector:
    app: mongodb
  ports:
  - port: 27017
    targetPort: 27017
We can create ConfigMaps & Secrets in the cluster using commands or using yml.
ConfigMap using command
=======================
kubectl create configmap springappconfig --from-literal=mongodbusername=devdb -n test-ns
Or using yml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: springappconfig
  namespace: test-ns
data: # We can define multiple key/value pairs.
  mongodbusername: devdb
Using yml:
apiVersion: v1
kind: Secret
metadata:
  name: springappsecret
  namespace: test-ns
type: Opaque
stringData: # We can define multiple key/value pairs.
  mongodbpassword: devdb@123
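Note that `stringData` accepts plain text, which Kubernetes stores base64-encoded under `data`. If you write the `data` field yourself, you must encode the value first; a sketch of the round trip:

```shell
# Encode a secret value the way Kubernetes stores it under .data
printf '%s' 'devdb@123' | base64        # prints ZGV2ZGJAMTIz
# Decode it back, e.g. after reading it out with kubectl
printf '%s' 'ZGV2ZGJAMTIz' | base64 -d  # prints devdb@123
```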
apiVersion: v1
kind: ConfigMap
metadata:
  name: springappconfig
  namespace: test-ns
data: # We can define multiple key/value pairs.
  mongodbusername: proddb
Using yml:
apiVersion: v1
kind: Secret
metadata:
  name: springappsecret
  namespace: test-ns
type: Opaque
stringData: # We can define multiple key/value pairs.
  mongodbpassword: prodb@123
apiVersion: apps/v1
kind: Deployment
metadata:
  name: springappdeployment
  namespace: test-ns
spec:
  replicas: 2
  selector:
    matchLabels:
      app: springapp
  template:
    metadata:
      labels:
        app: springapp
    spec:
      containers:
      - name: springappcontainer
        image: dockerhandson/spring-boot-mongo:1
        env:
        - name: MONGO_DB_HOSTNAME
          value: mongosvc
        - name: MONGO_DB_USERNAME
          valueFrom:
            configMapKeyRef:
              name: springappconfig
              key: mongodbusername
        - name: MONGO_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: springappsecret
              key: mongodbpassword
        resources:
          limits:
            memory: "512Mi"
            cpu: "500m"
          requests:
            cpu: "200m"
            memory: "256Mi"
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: springappsvc
  namespace: test-ns
spec:
  type: NodePort
  selector:
    app: springapp
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: mongodb
  namespace: test-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongocontainer
        image: mongo
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            configMapKeyRef:
              name: springappconfig
              key: mongodbusername
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: springappsecret
              key: mongodbpassword
        volumeMounts:
        - name: mongodbhostvol
          mountPath: /data/db
      volumes:
      - name: mongodbhostvol
        hostPath:
          path: /tmp/mongo
---
apiVersion: v1
kind: Service
metadata:
  name: mongosvc
  namespace: test-ns
spec:
  selector:
    app: mongo
  ports:
  - port: 27017
    targetPort: 27017
apiVersion: v1
kind: ConfigMap
metadata:
  name: javawebappconfig
data:
  tomcat-users.xml: |
    <?xml version='1.0' encoding='utf-8'?>
    <tomcat-users xmlns="http://tomcat.apache.org/xml"
                  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xsi:schemaLocation="http://tomcat.apache.org/xml tomcat-users.xsd"
                  version="1.0">
      <user username="tomcat" password="tomcat" roles="admin-gui,manager-gui"/>
    </tomcat-users>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: javawebdeployment
spec:
  replicas: 2
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: javawebapp
  template:
    metadata:
      name: javawebappod
      labels:
        app: javawebapp
    spec:
      containers:
      - name: javawebappcontainer
        image: dockerhandson/java-web-app:3
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: tomcatusersconfig
          mountPath: "/usr/local/tomcat/conf/tomcat-users.xml"
          subPath: "tomcat-users.xml"
      volumes:
      - name: tomcatusersconfig
        configMap:
          name: javawebappconfig
          items:
          - key: "tomcat-users.xml"
            path: "tomcat-users.xml"
ex:
Docker Hub: --docker-server is optional in the case of Docker Hub.
ECR: # Get the ECR password using the AWS CLI and use that password below. If it's an EKS cluster we just need to attach the ECR policies (permissions) to an IAM role and attach that role to the EKS nodes. No need to create a secret and use it as an imagePullSecret.
# Nexus
kubectl create secret docker-registry nexuscred --docker-server=172.31.106.247:8083 --docker-username=admin --docker-password=admin123
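The created secret is then referenced from the Pod spec via imagePullSecrets; a minimal sketch (the image path below is a hypothetical example for the Nexus registry above):

```yaml
    spec:
      imagePullSecrets:
      - name: nexuscred
      containers:
      - name: javawebappcontainer
        image: 172.31.106.247:8083/java-web-app:1 # hypothetical image path on the Nexus registry
```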
apiVersion: apps/v1
kind: Deployment
metadata:
  name: javawebappdeployment
spec:
  replicas: 2
  revisionHistoryLimit: 10
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: javawebapp
  template:
    metadata:
      name: javawebapppod
      labels:
        app: javawebapp
    spec:
      containers:
      - name: javawebappcontainer
        image: dockerhandson/java-web-app:1
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: 200m
            memory: 256Mi
          limits:
            cpu: 1
            memory: 1Gi
        livenessProbe:
          httpGet:
            path: /java-web-app
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /java-web-app
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 10
          timeoutSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: javawebappsvc
spec:
  type: NodePort
  selector:
    app: javawebapp
  ports:
  - port: 80
    targetPort: 8080
StatefulSet:
Manages the deployment and scaling of a set of Pods, and provides guarantees about the ordering and uniqueness of these Pods.
#######MongoDB StatefulSet###########
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongod
  namespace: test-ns
spec:
  selector:
    matchLabels:
      app: mongod
  serviceName: mongodb-service
  replicas: 3
  template:
    metadata:
      labels:
        app: mongod
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongodbcontainer
        image: mongo
        command:
        - "mongod"
        - "--bind_ip"
        - "0.0.0.0"
        - "--replSet"
        - "MainRepSet"
        resources:
          requests:
            cpu: 200m
            memory: 128Mi
          limits:
            cpu: 200m
            memory: 256Mi
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongodb-persistent-storage-claim
          mountPath: "/data/db"
  volumeClaimTemplates:
  - metadata:
      name: mongodb-persistent-storage-claim
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  namespace: test-ns
spec:
  clusterIP: None # headless service
  selector:
    app: mongod
  ports:
  - port: 27017
    targetPort: 27017
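Because the StatefulSet is named mongod with 3 replicas behind the headless service mongodb-service, each Pod gets a stable DNS name of the form <podName>.<serviceName>.<namespace>.svc.cluster.local. This loop just prints the names the replica set members would use:

```shell
# Stable DNS names of the mongod StatefulSet pods (mongod-0..mongod-2)
for i in 0 1 2; do
  printf 'mongod-%d.mongodb-service.test-ns.svc.cluster.local\n' "$i"
done
```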
# Set up the MongoDB replica set, add the members, and create the administrator for MongoDB
mongosh
######Spring App#######
apiVersion: apps/v1
kind: Deployment
metadata:
  name: springappdeployment
  namespace: test-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: springapp
  template:
    metadata:
      name: springapppod
      labels:
        app: springapp
    spec:
      containers:
      - name: springappcontainer
        image: dockerhandson/spring-boot-mongo:1
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: 200m
            memory: 256Mi
          limits:
            cpu: 300m
            memory: 256Mi
        env:
        - name: MONGO_DB_USERNAME
          value: devdb
        - name: MONGO_DB_PASSWORD
          value: devdb123
        - name: MONGO_DB_HOSTNAME
          value: mongodb-service
---
apiVersion: v1
kind: Service
metadata:
  name: springappsvc
  namespace: test-ns
spec:
  selector:
    app: springapp
  ports:
  - port: 80
    targetPort: 8080
  type: NodePort
# Node Selector
apiVersion: apps/v1
kind: Deployment
metadata:
  name: javawebapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: javawebapp
  strategy:
    type: Recreate
  template:
    metadata:
      name: javawebapp
      labels:
        app: javawebapp
    spec:
      nodeSelector:
        name: workerOne
      containers:
      - name: javawebapp
        image: dockerhandson/java-web-app:3
        ports:
        - containerPort: 8080
---
# requiredDuringSchedulingIgnoredDuringExecution (hard rule)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: javawebapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: javawebapp
  strategy:
    type: Recreate
  template:
    metadata:
      name: javawebapp
      labels:
        app: javawebapp
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: "node"
                operator: In
                values:
                - workerOne
      containers:
      - name: javawebapp
        image: dockerhandson/java-web-app:3
        ports:
        - containerPort: 8080
---
# preferredDuringSchedulingIgnoredDuringExecution (soft rule)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: javawebapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: javawebapp
  strategy:
    type: Recreate
  template:
    metadata:
      name: javawebapp
      labels:
        app: javawebapp
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: name
                operator: In
                values:
                - workerone
      containers:
      - name: javawebapp
        image: dockerhandson/java-web-app:3
        ports:
        - containerPort: 8080
Pod Affinity
------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginxdeployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      name: nginxpod
      labels:
        app: nginx
    spec:
      containers:
      - name: nginxcontainer
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: javawebappdeployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: javawebapp
  strategy:
    type: Recreate
  template:
    metadata:
      name: javawebappod
      labels:
        app: javawebapp
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: javawebappcontainer
        image: dockerhandson/java-web-app:4
        ports:
        - containerPort: 8080
Pod AntiAffinity
----------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: javawebappdeployment
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: javawebapp
  template:
    metadata:
      name: javawebapppod
      labels:
        app: javawebapp
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: javawebapp
        image: dockerhandson/java-web-app:3
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 1
            memory: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: javawebappsvc
spec:
  type: NodePort
  selector:
    app: javawebapp
  ports:
  - port: 80
    targetPort: 8080
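A quick way to check affinity/anti-affinity behavior after applying the manifests above (the file name is illustrative): with a required anti-affinity rule, the javawebapp pod must land on a node that runs no `app=nginx` pod, and it stays Pending if no such node exists.

```shell
kubectl apply -f podantiaffinity.yaml    # file name is illustrative

# NODE column shows whether the pods were kept apart (or together, for podAffinity)
kubectl get pods -o wide

# If the pod is stuck Pending, the Events section explains which rule blocked scheduling
kubectl describe pod -l app=javawebapp
```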
Network policies
=================
Network policies are Kubernetes resources that control the traffic between pods
and/or network endpoints. They use labels to select pods and define rules that
specify which traffic is allowed to reach those pods. Most CNI plugins support
network policies; if the plugin in use does not, any NetworkPolicy resource we
create is silently ignored.
The most popular CNI plugins with network policy support are:
Calico
Weave
Cilium
Romana
By default, pods in Kubernetes can communicate with each other and accept
traffic from any source. With a NetworkPolicy we can restrict traffic to any
number of selected pods, while unselected pods in the namespace continue to
accept traffic from anywhere. The NetworkPolicy resource has the mandatory
fields apiVersion, kind, metadata and spec. Its spec field contains all the
settings that define the network restrictions within a given namespace:
# Applying a network policy to the MongoDB pods: once this rule is in place,
# only the Spring app pods can connect to the MongoDB pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: mongodb-network-pol
  namespace: test-ns
spec:
  podSelector:
    matchLabels:
      app: mongodb
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: springapp
    ports:
    - protocol: TCP
      port: 27017
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: mongodb
  namespace: test-ns
spec:
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      name: mongodbpod
      labels:
        app: mongodb
    spec:
      volumes:
      - name: mongodb-pvc
        persistentVolumeClaim:
          claimName: mongodb-nfs-pvc
      containers:
      - name: mongodbcontainer
        image: mongo
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          value: devdb
        - name: MONGO_INITDB_ROOT_PASSWORD
          value: devdb@123
        volumeMounts:
        - name: mongodb-pvc
          mountPath: /data/db
---
apiVersion: v1
kind: Service
metadata:
  name: mongosvc
  namespace: test-ns
spec:
  selector:
    app: mongodb    # must match the pod label defined in the ReplicaSet template
  ports:
  - port: 27017
    targetPort: 27017
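The policy above can be sanity-checked with two throwaway test pods: one carrying the `app=springapp` label (allowed by the ingress rule) and one without it (blocked). Pod names and the busybox image are illustrative:

```shell
# Pod labeled app=springapp: the connection to MongoDB should succeed
kubectl -n test-ns run tester --image=busybox --labels app=springapp \
  --rm -it --restart=Never -- nc -zv -w 3 mongosvc 27017

# Unlabeled pod: the connection should time out once the policy is applied
kubectl -n test-ns run tester2 --image=busybox \
  --rm -it --restart=Never -- nc -zv -w 3 mongosvc 27017
```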
Resource Quotas:
===============
When several users or teams share a cluster with a fixed number of nodes, there is
a concern that one team could use more than its fair share of resources.
Users create resources (pods, services, etc.) in a namespace, and the quota
system tracks usage to ensure it does not exceed the hard resource limits defined
in a ResourceQuota.
If quota is enabled in a namespace for compute resources like CPU and memory,
users must specify requests or limits for those values; otherwise, the quota
system may reject pod creation.
Hint: Use the LimitRange admission controller to force defaults for pods that make
no compute resource requirements.
apiVersion: v1
kind: Namespace
metadata:
  name: test-ns
  labels:
    team: testing
---
# Resource Quota
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-ns-quota
  namespace: test-ns
spec:
  hard:
    requests.cpu: "1"
    requests.memory: "1Gi"
    limits.cpu: "2"
    limits.memory: "4Gi"
    pods: "2"
    count/deployments.apps: "1"
---
# LimitRange
apiVersion: v1
kind: LimitRange
metadata:
  name: testns-limit-range
  namespace: test-ns
spec:
  limits:
  - default:
      cpu: 500m
      memory: 512Mi
    defaultRequest:
      cpu: 200m
      memory: 256Mi
    type: Container
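After applying the quota and limit range, usage can be inspected against the hard limits (the manifest file name is illustrative):

```shell
kubectl apply -f resourcequota.yaml      # file name is illustrative

# Shows Used vs Hard for each tracked resource in the namespace
kubectl -n test-ns describe resourcequota test-ns-quota

# Shows the default request/limit injected into containers that specify none
kubectl -n test-ns describe limitrange testns-limit-range
```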
#########
EKS Setup
#########
1) Create a dedicated VPC for the EKS cluster using CloudFormation.
https://amazon-eks.s3.us-west-2.amazonaws.com/cloudformation/2020-08-12/amazon-eks-vpc-private-subnets.yaml
2) Create an IAM role for the EKS cluster (EKS – Cluster).
3) Create the EKS cluster.
4) Create an IAM role for the EKS worker nodes with these policies:
AmazonEKSWorkerNodePolicy
AmazonEKS_CNI_Policy
AmazonEC2ContainerRegistryReadOnly
AmazonEBSCSIDriverPolicy
5) Create the worker nodes.
6) Create an instance (if one does not exist) and install the AWS CLI, IAM
Authenticator and kubectl. Configure the AWS CLI using a root or IAM user access
key & secret key, or attach an IAM role with the required policies.
chmod +x ./kubectl
# Download the AWS CLI
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
# Extract the zip
unzip awscliv2.zip
# Install
sudo ./aws/install -i /usr/local/aws-cli -b /usr/local/bin
# Verify
aws --version
######## Configure AWS CLI using Access Key & Secret Key ########
aws configure
##### Get the kubeconfig file #####
# Install git (if not already installed) before executing the command below,
# as we apply the EBS CSI driver's k8s manifests directly from GitHub.
sudo yum install git -y
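A common way to fetch the kubeconfig for the cluster is via the AWS CLI. The region and cluster name below are placeholders (`EKS-Demo` is the cluster name used later in the autoscaler tag); substitute your own:

```shell
# Write/merge the cluster's kubeconfig into ~/.kube/config
aws eks update-kubeconfig --region us-east-1 --name EKS-Demo

# Verify connectivity and that the worker nodes have joined
kubectl get nodes
```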
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mavenwebapp
  namespace: test-ns
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mavenwebapp
  template:
    metadata:
      name: mavenwebappod
      labels:
        app: mavenwebapp
    spec:
      containers:
      - name: mavenwebapp
        image: dockerhandson/maven-web-application:1
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
          limits:
            cpu: 1
            memory: 2Gi
        livenessProbe:
          httpGet:
            path: /maven-web-application
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /maven-web-application
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: mavenwebappsvc
  namespace: test-ns
spec:
  type: LoadBalancer
  selector:
    app: mavenwebapp
  ports:
  - port: 80
    targetPort: 8080
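On EKS, the LoadBalancer service above provisions an AWS ELB; the external address appears in the service listing once the load balancer is ready (the manifest file name is illustrative):

```shell
kubectl apply -f mavenwebapp.yaml        # file name is illustrative

# EXTERNAL-IP shows the ELB DNS name once provisioning completes
kubectl -n test-ns get svc mavenwebappsvc

# Then the app is reachable at:
#   http://<elb-dns-name>/maven-web-application
```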
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeAutoScalingInstances",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:DescribeScalingActivities",
        "autoscaling:DescribeTags",
        "ec2:DescribeInstanceTypes",
        "ec2:DescribeLaunchTemplateVersions"
      ],
      "Resource": ["*"]
    },
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:SetDesiredCapacity",
        "autoscaling:TerminateInstanceInAutoScalingGroup",
        "ec2:DescribeImages",
        "ec2:GetInstanceTypesFromInstanceRequirements",
        "eks:DescribeNodegroup"
      ],
      "Resource": ["*"]
    }
  ]
}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
  name: cluster-autoscaler
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
rules:
- apiGroups: [""]
  resources: ["events", "endpoints"]
  verbs: ["create", "patch"]
- apiGroups: [""]
  resources: ["pods/eviction"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["pods/status"]
  verbs: ["update"]
- apiGroups: [""]
  resources: ["endpoints"]
  resourceNames: ["cluster-autoscaler"]
  verbs: ["get", "update"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["watch", "list", "get", "update"]
- apiGroups: [""]
  resources:
  - "namespaces"
  - "pods"
  - "services"
  - "replicationcontrollers"
  - "persistentvolumeclaims"
  - "persistentvolumes"
  verbs: ["watch", "list", "get"]
- apiGroups: ["extensions"]
  resources: ["replicasets", "daemonsets"]
  verbs: ["watch", "list", "get"]
- apiGroups: ["policy"]
  resources: ["poddisruptionbudgets"]
  verbs: ["watch", "list"]
- apiGroups: ["apps"]
  resources: ["statefulsets", "replicasets", "daemonsets"]
  verbs: ["watch", "list", "get"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses", "csinodes", "csidrivers", "csistoragecapacities"]
  verbs: ["watch", "list", "get"]
- apiGroups: ["batch", "extensions"]
  resources: ["jobs"]
  verbs: ["get", "list", "watch", "patch"]
- apiGroups: ["coordination.k8s.io"]
  resources: ["leases"]
  verbs: ["create"]
- apiGroups: ["coordination.k8s.io"]
  resourceNames: ["cluster-autoscaler"]
  resources: ["leases"]
  verbs: ["get", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create", "list", "watch"]
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["cluster-autoscaler-status", "cluster-autoscaler-priority-expander"]
  verbs: ["delete", "get", "update", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-autoscaler
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-autoscaler
subjects:
- kind: ServiceAccount
  name: cluster-autoscaler
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    k8s-addon: cluster-autoscaler.addons.k8s.io
    k8s-app: cluster-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cluster-autoscaler
subjects:
- kind: ServiceAccount
  name: cluster-autoscaler
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    app: cluster-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '8085'
    spec:
      priorityClassName: system-cluster-critical
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534
        fsGroup: 65534
        seccompProfile:
          type: RuntimeDefault
      serviceAccountName: cluster-autoscaler
      containers:
      - image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.26.2
        name: cluster-autoscaler
        resources:
          limits:
            cpu: 100m
            memory: 600Mi
          requests:
            cpu: 100m
            memory: 600Mi
        command:
        - ./cluster-autoscaler
        - --v=4
        - --stderrthreshold=info
        - --cloud-provider=aws
        - --skip-nodes-with-local-storage=false
        - --expander=least-waste
        - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/EKS-Demo
        volumeMounts:
        - name: ssl-certs
          mountPath: /etc/ssl/certs/ca-certificates.crt # /etc/ssl/certs/ca-bundle.crt for Amazon Linux worker nodes
          readOnly: true
        imagePullPolicy: "Always"
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
          readOnlyRootFilesystem: true
      volumes:
      - name: ssl-certs
        hostPath:
          path: "/etc/ssl/certs/ca-bundle.crt"
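Once the manifests above are applied, the autoscaler's scale-up/scale-down decisions can be followed in its logs (the manifest file name is illustrative):

```shell
kubectl apply -f cluster-autoscaler.yaml   # file name is illustrative

# Confirm the pod is running and watch its decisions
kubectl -n kube-system get pods -l app=cluster-autoscaler
kubectl -n kube-system logs -f deployment/cluster-autoscaler
```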
cd kubernetes-ingress/deployments
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: <name>
  namespace: <nsname>
spec:
  ingressClassName: nginx
  rules:
  - host: <domain>
    http:
      paths:
      - pathType: Prefix
        path: "/<path>"
        backend:
          service:
            name: <serviceName>
            port:
              number: <servicePort>
      - pathType: Prefix
        path: "/<path>"
        backend:
          service:
            name: <serviceName>
            port:
              number: <servicePort>
Example App
-----------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: javawebappdeployment
  namespace: test-ns
spec:
  replicas: 2
  selector:
    matchLabels:
      app: javawebapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  minReadySeconds: 30
  template:
    metadata:
      name: javawebapppod
      labels:
        app: javawebapp
    spec:
      containers:
      - name: javawebappcontainer
        image: dockerhandson/java-web-app:4
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /java-web-app
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /java-web-app
            port: 8080
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        resources:
          requests:
            cpu: 500m
            memory: 256Mi
          limits:
            cpu: 1
            memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
  name: javawebappsvc
  namespace: test-ns
spec:
  type: ClusterIP
  selector:
    app: javawebapp
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: javaweabppingressrule
  namespace: test-ns
spec:
  ingressClassName: nginx
  rules:
  - host: javawebapp.mithuntechdevops.co.in
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: javawebappsvc
            port:
              number: 80
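Without a DNS record for the host, the ingress rule can still be exercised by sending the Host header directly to the ingress controller's address (the address below is a placeholder):

```shell
# Find the ingress controller's external address (namespace/name vary by install)
kubectl get svc -A | grep -i ingress

# Hit the rule by spoofing the Host header
curl -H "Host: javawebapp.mithuntechdevops.co.in" http://<ingress-controller-address>/java-web-app
```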
Another App
-----------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mavenwebapp
  namespace: test-ns
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mavenwebapp
  template:
    metadata:
      name: mavenwebapppod
      labels:
        app: mavenwebapp
    spec:
      containers:
      - image: dockerhandson/maven-web-application:1
        name: mavenwebappcontainer
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: mavenwebappsvc
  namespace: test-ns
spec:
  type: ClusterIP
  selector:
    app: mavenwebapp
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mavenweabppingressrule
  namespace: test-ns
spec:
  ingressClassName: nginx
  rules:
  - host: mavenwebapp.mithuntechdevops.co.in
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: mavenwebappsvc
            port:
              number: 80
As an Admin:
============
1) In the AWS IAM console, create an IAM user with an IAM policy that allows
creating/managing access keys and listing & reading the EKS cluster (needed to
get the kubeconfig file).
apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::921483055369:role/EKS_Node_Role
      username: system:node:{{EC2PrivateDNSName}}
  mapUsers: |
    - userarn: arn:aws:iam::935840844891:user/Balaji # Update with your user ARN
      username: Balaji # Update with your user name
kind: ConfigMap
metadata:
  creationTimestamp: "2020-10-19T03:35:20Z"
  name: aws-auth
  namespace: kube-system
  resourceVersion: "792449"
  selfLink: /api/v1/namespaces/kube-system/configmaps/aws-auth
  uid: 8135dcd1-90e6-4dfb-872f-636601475aca
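The aws-auth ConfigMap lives in the kube-system namespace; mapping additional IAM users or roles means editing it in place:

```shell
# Review the current IAM-to-Kubernetes identity mappings
kubectl -n kube-system get configmap aws-auth -o yaml

# Edit to add entries under mapUsers / mapRoles
kubectl -n kube-system edit configmap aws-auth
```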
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: readonly
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "create", "delete", "update"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "daemonsets"]
  verbs: ["get", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: readonly-role-binding   # resource names must be DNS-compatible; underscores are not allowed
  namespace: default
subjects:
- kind: User
  name: Balaji # Map with the username
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: readonly
  apiGroup: rbac.authorization.k8s.io
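The effective permissions of the mapped user can be checked with `kubectl auth can-i` impersonation:

```shell
# Allowed by the role above (pods: get/list/create/delete/update)
kubectl auth can-i list pods --namespace default --as Balaji

# Deployments only grant get/list, so creation should be denied
kubectl auth can-i create deployments --namespace default --as Balaji
```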
Client Side:
============
1) Install kubectl & the AWS CLI.
2) Configure the AWS CLI (with the access key & secret key of the IAM user
created in AWS IAM).
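The client-side steps above can be sketched as follows (region and cluster name are placeholders):

```shell
aws configure                    # enter the IAM user's access key & secret key
aws sts get-caller-identity      # confirm which IAM identity is active

# Fetch the kubeconfig; access is then governed by aws-auth + the RBAC role binding
aws eks update-kubeconfig --region <region> --name <cluster-name>
kubectl get pods
```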