Understanding Kubernetes in a Visual Way
Aurélie Vache
Special thanks to Laurent, my husband, Alexandre, my son, and all.
Reviewers
Thanks to Gaede Aaas, Denis Germain & Stéphane Philippart, who took the time to review this book.
Changelog
Release Date: 31/05/2020
Release Version: 8
Licence
Creative Commons BY-NC-ND 3.0
http://creativecommons.org/licenses/by-nc-nd/3.0/
Table of Contents
Kubernetes Components
Kubeconfig file
Namespace
Resource Quota
Pod
→ Lifecycle
→ Deletion
Job
→ CronJob
ConfigMap
Secret
Deployment
→ Rolling Update
Pull images configuration
ReplicaSet
DaemonSet
Service
Ingress
PV, PVC & StorageClass
RBAC
Pod Security Policy (PSP)
Lifecycle
Node
Operations
→ Debugging / Troubleshooting
→ Kubectl convert
Tools
→ Kubectx
→ Kubens
→ Stern
→ Krew
→ k9s
→ Kiss
→ Skaffold
→ Kustomize
Tips
→ Kubeseal
→ Trivy
→ Popeye
→ Kyverno
Changes
→ Kubernetes 1.19
→ Kubernetes 1.20
→ Kubernetes 1.21
→ Kubernetes 1.22
→ Kubernetes 1.23
→ Kubernetes 1.24
→ Kubernetes 1.25
→ Kubernetes 1.26
→ Docker deprecation
Glossary
Kubernetes Components

etcd:
→ Distributed key-value database
→ Single source of truth of the cluster

7

API Server:
→ Exposes the Kubernetes API

8
Scheduler:
→ Responsible for finding the best Node for newly created Pods

"I want a Node for this new Pod!"
"No problem, my mission is to find a suitable Node for this new Pod."

Controller Manager:
→ Responsible for making changes in order to move the current state towards the desired state

9
→ Runs several separate controller processes:

Node controller:
→ Watches Nodes & takes action when they are unhealthy

Replication controller:
→ Ensures the desired number of Pods are running
→ Can create and delete Pods
"Oh, oh! A Pod failed! I need a new one!"

Endpoints controller:
→ Populates the Endpoints objects (responsible for Services & Pods connections)

Service Account & Token controllers:
→ Create default accounts and API access tokens
Kube-proxy:
→ Runs on each Node
Kubeconfig file

"In my-ns namespace... OK, but in which cluster?"

→ kubectl knows where (in which cluster) to run your commands thanks to the kubeconfig file
→ Kubeconfig files are structured YAML files

12

How are kubeconfig files loaded?
① --kubeconfig flag, if specified
② KUBECONFIG environment variable
③ $HOME/.kube/config, by default
Tips:

It's possible to specify several files in $KUBECONFIG:

$ export KUBECONFIG=file1:file2:file3
13
kubectl version

"Which version of kubectl should I install?"

→ kubectl is supported within one minor version (older or newer) of the API Server.

Example:
API Server in 1.23 → kubectl 1.22, 1.23 or 1.24

14
HowTo:

} Scale deployment "my-deploy" to 5 replicas
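Assuming the standard kubectl syntax (my-deploy is a placeholder name), the scale command looks like:

$ kubectl scale deploy my-deploy --replicas=5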
} Delete all Pods that have the status != "Running"
$ kubectl delete po --field-selector=status.phase!='Running'

15
} Get the namespaces' names only, without headers

$ kubectl get ns -o custom-columns="NAME":".metadata.name" --no-headers
16
-w (watch): this option aims to listen for changes to a particular object.

Example:
$ kubectl get pod -w

Example:
$ kubectl get pod my-pod -o wide

Example:
$ kubectl get secret -l sealedsecrets.bitnami.com/sealed-secrets-key -o name

17
--cascade=false: this option allows to delete only the resource, not its dependents.

Example:
$ kubectl delete job my-job --cascade=false -n my-namespace

18
Namespace

→ A way of isolation:
◦ per project
◦ per team
◦ per family of components

(e.g. my-svc & my-deploy live in the my-ns namespace)

19
→ Each resource can appear in only one namespace
→ Namespaced resources (Deployment, Pod, ConfigMap, Secret, PSP...) can't be shared between namespaces
→ A Pod in namespace A can't read a Secret from namespace B

20
Special namespaces:

kube-public:
→ reserved mainly for cluster use

kube-node-lease:
→ holds the Lease objects used for Node heartbeats
→ Heartbeats help determine the availability of a Node
→ 2 forms of heartbeats:
◦ updates of NodeStatus
◦ Lease object

21
HowTo:

} Switch to the my-ns namespace

$ kubens my-ns

} List the resources that are namespaced

$ kubectl api-resources --namespaced=true

22
"
}
"
Delete a
namespace
stucked in Termina ting state
Ëa please help me µ
'
ter-inat.no#y.stucked-nsKubernetesversion(y
bb
-
-
Works eren in Old
23
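A common way to unstick such a namespace is to clear its finalizers; a sketch, assuming jq is installed and "stucked-ns" stands in for the stuck namespace:

$ kubectl get ns stucked-ns -o json \
  | jq '.spec.finalizers = []' \
  | kubectl replace --raw "/api/v1/namespaces/stucked-ns/finalize" -f -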
Resource Quota

→ If a ResourceQuota is enabled in a namespace for CPU & memory, you need to specify requests & limits for your Pods.

24
25
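Such a quota could be declared like this (a minimal sketch; the name and amounts are illustrative):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: my-quota
  namespace: my-namespace
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi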
Pod

→ Smallest deployable unit
→ Can contain several containers
→ One IP address per Pod

26
HowTo:

} Copy a file stored locally to a Pod

27
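With the standard kubectl cp syntax (the paths and names are placeholders):

$ kubectl cp ./my-file.txt my-pod:/tmp/my-file.txt -n my-namespace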
Pod Lifecycle

28
Pending:
Container images have not yet been created.

Running:
Bound to a Node. All containers are created. At least one is running.

Succeeded:
All containers are terminated with success.

Failed:
At least one container is in failure (exited with a non-zero code or terminated by the system).

29

Unknown:
The status of the Pod could not be obtained. "Why?"

30
Pod Deletion

→ Delete a Pod (with a new configuration or code, for example)
→ The deletion can be manual or forced

31
HowTo:

} Delete a Pod gracefully

$ kubectl delete pod my-pod

"I removed it!" — and the Pod name is freed

32
} Delete a Pod instantly (manually forced)

→ When you force a Pod deletion, the name is automatically freed from the API Server.

33
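The usual flags for an instant, forced deletion are:

$ kubectl delete pod my-pod --grace-period=0 --force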
Liveness & Readiness Probes

→ Liveness & Readiness probes should be configured

"Are you alive?" — kubelet
→ livenessProbe: OK

34

→ livenessProbe: NOK → the container is restarted

→ kubelet uses liveness probes to know when to restart a container
→ initialDelaySeconds: time to wait before performing the first probe

35
→ Different types of liveness probes exist:

httpGet:
→ performs an HTTP GET request on the server, on port 8080, on /healthz
→ OK if the HTTP code is >= 200 (and < 400)
apiVersion: v1
kind: Pod
…
spec:
containers:
- name: my-container
image: my-image:1.0
livenessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 60
periodSeconds: 20
36
exec:
→ Defines a liveness probe that executes a command
→ OK if the return code == 0
apiVersion: v1
kind: Pod
…
spec:
containers:
- name: my-container
image: my-image:1.0
livenessProbe:
exec:
command:
- cat
- /tmp/healthy
initialDelaySeconds: 5
periodSeconds: 5
37
tcpSocket:
→ Defines a liveness probe which connects to port 8080
apiVersion: v1
kind: Pod
…
spec:
containers:
- name: my-container
image: my-image:1.0
livenessProbe:
tcpSocket:
port: 8080
initialDelaySeconds: 60
periodSeconds: 30
"Are you ready?" — kubelet
→ readinessProbe: OK → traffic from the Service is routed to the Pod

38

→ readinessProbe: NOK → the Pod is removed from the Service endpoints

→ kubelet uses readiness probes to know when a container is ready to start accepting traffic.

39
Startup Probe

"What about slow-starting Pods? Should I use a readiness probe?"
→ Use a startup probe for slow-starting Pods.

40

} Don't test for liveness until the HTTP endpoint is available
apiVersion: v1
kind: Pod
metadata:
name: my-pod
labels:
app: my-app
spec:
containers:
- name: my-container
image: my-image:1.0
ports:
- name: liveness-port
containerPort: 8080
livenessProbe:
httpGet:
path: /healthz
port: liveness-port
failureThreshold: 1
periodSeconds: 10
startupProbe:
  httpGet:
    path: /healthz
    port: liveness-port
  failureThreshold: 30   # the app has 5 min (30 x 10s)
  periodSeconds: 10      # to finish its startup

41
Container Lifecycle Hooks

◦ postStart: executed right after the container is created ("Not as fast as you... sorry!")
◦ preStop: executed right before the container is terminated

44
HowTo:

} Create a Deployment with 3 replicas, with a postStart hook that checks that the proxy (envoy) is ready
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
labels:
app: my-app
spec:
replicas: 3
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: my-app
image: my-image:latest
lifecycle:
postStart:
httpGet:
path: /healthz/ready
port: 15020
45
} Create a Deployment with a container that executes a command at startup (postStart)
…
spec:
containers:
- name: my-app
image: my-image:latest
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "echo Hello Kubernetes lovers"]
46
"Wait, you need to say goodbye before terminating!"

HowTo:

} Create a Deployment that gracefully kills my-app before the Pod terminates (preStop)
…
spec:
containers:
- name: lifecycle-demo-container
image: nginx
lifecycle:
preStop:
exec:
command: ["/bin/sh", "-c", "until my-app -kill; do sleep 1; done"]
47
kubectl exec

→ The kubectl exec command allows you to run commands inside a Pod, in a container:

$ kubectl exec my-pod -c my-container -it -- sh

$ ls -l
$ cat path/to/a/file

} Connect to a specific container & open an interactive shell:

> ls -l
> tail -f /var/log/debug.log

48
Tips:

} It's possible to specify a default container:
apiVersion: v1
kind: Pod
metadata:
name: my-pod
annotations:
kubectl.kubernetes.io/default-container: my-container-2
spec:
containers:
- name: my-container
image: my-image
- name: my-container-2
image: my-image-2
command: ["/bin/sh", "-c"]
args:
- while true; do
date >> /html/index.html;
sleep 1;
done
(no need to specify the -c option, thanks to the annotation)

49
Init Containers

→ Init containers run before app containers in a Pod
→ A Pod can have one or more init containers & one or more app containers

50

→ Init containers run sequentially, & then the app containers:

① Init container 1 ✓
② Init container 2 ✓
→ then the app containers run

→ Each init container needs to start (and end) successfully before executing the next one.

51
→ If an init container fails, kubelet restarts it until it succeeds:

① Init container 1 ✓
② Init container 2 ✗ → restarted until it succeeds

52

③ Once all init containers succeed, the app containers start.

53
→ Useful for:
◦ Init DB schemas
◦ Set up permissions
◦ Wait for a Service to be available

54
HowTo:

} Create a Pod with an init container that waits until my-svc is available
apiVersion: v1
kind: Pod
metadata:
name: my-app
spec:
containers:
- name: my-container
image: my-app:1.0
command: ['sh', '-c', 'echo The app is running! && sleep 3600']
initContainers:
- name: check-service
image: busybox
command: ['sh', '-c', "until nslookup my-svc.$(cat /var/run/
secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local;
do echo waiting for my-svc; sleep 5; done"]
55
} Create a Pod with:
◦ a shared volume
◦ an init container that clones the my-website git repository (https://github.com/scraly/my-website.git)
◦ an nginx container that serves the website
apiVersion: v1
kind: Pod
metadata:
name: my-website
spec:
initContainers:
- name: clone-repo
image: alpine/git
command:
- git
- clone
- --progress
- https://github.com/scraly/my-website.git
- /usr/share/nginx/html
volumeMounts:
- name: website-content
mountPath: "/usr/share/nginx/html"
containers:
- name: nginx
image: nginx
ports:
- name: http
containerPort: 80
volumeMounts:
- name: website-content
mountPath: "/usr/share/nginx/html"
volumes:
- name: website-content
  emptyDir: {}

56
→ A Pod is mortal, so every time a new one is created, the init container clones the repository again & the nginx Docker image is pulled.

57
Job

} A process that runs for a certain time to completion:
◦ batch process
◦ backup
◦ database migration / clean

58

→ When the Pod launched by a Job is finished, the Job is completed.
→ A Job can be one-time or run in parallel.

CronJob:
→ Based on Cron format

59
HowTo:

} Create a Job

apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
spec:
  backoffLimit: 5              # optional
  activeDeadlineSeconds: 120   # optional
  completions: 1               # optional
  parallelism: 1               # optional
  template:
    spec:
      containers:
      - name: busybox
        image: busybox
      restartPolicy: OnFailure   # required

60
} Create a Job that will be deleted automatically after the defined time

} Delete a Job only (without deleting its Pods)

$ kubectl delete job my-job --cascade=false

61
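The automatic deletion relies on the ttlSecondsAfterFinished field (a sketch; 100 seconds is illustrative):

apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
spec:
  ttlSecondsAfterFinished: 100   # the Job is deleted 100s after it finishes
  template:
    spec:
      containers:
      - name: busybox
        image: busybox
      restartPolicy: OnFailure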
restartPolicy:

→ If the Pod failed, the Job controller starts a new Pod.
→ With restartPolicy: OnFailure, the Pod stays on the Node but its containers will be re-run.

62

→ restartPolicy is applied to the Pod, not to the Job.
→ When a Pod terminates with success, the Job is completed.

backoffLimit:
→ By default, backoffLimit is equal to 6.

activeDeadlineSeconds:
→ Once a Job reaches the activeDeadlineSeconds, all of its running Pods are terminated (even if backoffLimit is not yet reached).

completions & parallelism:
→ A Job will not deploy additional Pods once completions is reached.
→ By default, completions is equal to 1, which means the Job is complete when one Pod terminates successfully.
→ parallelism defines how many Pods can run at the same time.

64

→ If completions is not defined, it is equal to the parallelism number.

65
CronJob

→ Creates Jobs on a time-based schedule, or periodically ("It's time to wake up!")

66

→ Based on Cron format
HowTo:

} Create a CronJob from the busybox image (declarative way)
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  failedJobsHistoryLimit: 3       # number of failed Jobs to keep
  successfulJobsHistoryLimit: 1   # number of successful Jobs to keep
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello readers
          restartPolicy: OnFailure

67
Tips:

→ If startingDeadlineSeconds is set to less than 10 seconds, the CronJob may not be scheduled.

68
ConfigMap

→ Don't hardcode configuration data in your app: put non-sensitive data in a ConfigMap instead!
→ Allows to deploy the same app in different environments (e.g. my-cm-stag in staging, my-cm-prod in production)

69
→ 3 ways to create a ConfigMap:
◦ from key and value
◦ from a file
◦ from an env file

70
HowTo:

} Create a ConfigMap from key and value
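With kubectl's --from-literal flag (the key and value are placeholders):

$ kubectl create cm my-cm --from-literal=my-key=my-value -n my-namespace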
} Create a ConfigMap from a file
$ kubectl create cm my-cm-1 --from-file=my-file.txt -n my-namespace
} Create a ConfigMap from an env file

$ kubectl create cm my-cm-1 --from-env-file=my-envfile.txt -n my-namespace
} Attach the my-cm ConfigMap to a Pod as an environment variable:
apiVersion: v1
kind: Pod
metadata:
name: my-pod-2
spec:
containers:
- name: my-container
image: busybox
env:
- name: MY-ENV-KEY
valueFrom:
configMapKeyRef:
name: my-cm
key: my-key
72
Secret

→ Save sensitive data in Secrets
→ Values are base64-encoded (not encrypted!)
→ Automatically decoded when attached to a Pod

73
→ 3 ways to create a Secret:
◦ from key and value
◦ from a file
◦ from an env file

→ Several types of Secret exist: generic, docker-registry, TLS...

74
HowTo:

} Create a Secret from key and value & decode it

75
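Assuming the standard kubectl flags (the names are placeholders), creation and decoding look like:

$ kubectl create secret generic my-secret --from-literal=my-key=my-value
$ kubectl get secret my-secret -o jsonpath='{.data.my-key}' | base64 -d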
Deployment

→ Deployments are responsible for Pods:
Deployment → ReplicaSet → Pods

76

→ Features: scaling, rolling updates, deployment history & rollback

77
HowTo:

} Create a Deployment that creates one Pod with the busybox image (imperative way)
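The imperative form is most likely the classic create command:

$ kubectl create deploy my-deploy --image=busybox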
} Create a Deployment that creates 3 replicas of a Pod (declarative way)
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deploy
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: my-deploy
template:
metadata:
labels:
app: my-deploy
spec:
containers:
- name: nginx
image: nginx
78
HowTo:

} Scale a Deployment to 5 replicas

79
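With the standard kubectl scale syntax:

$ kubectl scale deploy my-deploy --replicas=5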
Rolling Update

→ A Deployment update incrementally updates Pod instances by creating new ones
→ Can be reverted to a previous version (rollback)

80
① Create: the Pods run with image: busybox
② Update: the image is updated to busybox:1.29.3

81

③ A new Pod is created

82

④ The old Pod is removed

83

→ Same thing for the 2nd replica / Pod

84

→ Rollback to the 2nd revision

85
HowTo:

① Create a Deployment in a file named my-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deploy
  labels:
    app: busybox
spec:
  replicas: 3
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
  strategy:               # 100% of the Pods will be created
    rollingUpdate:        # & then the old Pods will be deleted
      maxSurge: 100%
      maxUnavailable: 0%
    type: RollingUpdate

86
③ Update the Deployment container's image name/tag

$ kubectl set image deploy my-deploy busybox=busybox:1.29.3 --record
④ Show the Deployment rolling update status

$ kubectl rollout status deploy my-deploy

87
} Restart a Deployment (kill the existing Pods and recreate them)

88
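Assuming the standard rollout subcommand:

$ kubectl rollout restart deploy my-deploy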
Pull Images Configuration

→ Concerns every resource type that includes a container spec: Deployment, CronJob, StatefulSet, Job...

89
imagePullPolicy:

"Thanks to this configuration, I know when I need to pull the container image." — kubelet
containers:
- name: my-container
image: my-registry.com/my-app:tag
imagePullPolicy: Always
◦ IfNotPresent (default value): the image is pulled only if it is not already present locally
◦ Always: the image is pulled every time the Pod starts
◦ Never: no attempt is made to pull the image

90
imagePullSecrets:

→ Needed to access a private image registry

91
HowTo:

} Pull an image from a private registry using a Secret
…
containers:
- name: my-container
image: my-registry.com/my-app:tag
imagePullPolicy: Always
imagePullSecrets:
- name: registry-secret
→ The registry Secret can be created from a Docker config JSON file.

92
ReplicaSet

→ It's useful to understand them, but it's not mandatory to manage them manually (Deployments do it for you).

93
DaemonSet

→ The Kubernetes scheduler ignores DaemonSet Pods: the DaemonSet controller will handle them instead.

94
→ When you deploy a DaemonSet, Pods will be created automatically: one Pod per Node.
→ When Nodes are removed, the DaemonSet Pods are killed; when Nodes are added, new ones are created automatically.
→ Existing Pods need to be manually deleted in order to be replaced.

95
HowTo:

① Create a DaemonSet that creates one Pod per Node (master Nodes included)
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: my-app
spec:
selector:
matchLabels:
name: my-app
template:
metadata:
labels:
name: my-app
spec:
  tolerations:                            # the DaemonSet will run
  - key: node-role.kubernetes.io/master   # on master Nodes too
    effect: NoSchedule
containers:
- name: busybox
image: busybox
96
② Create a DaemonSet that creates Pods only on Nodes with label my-key: my-value
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: my-app
spec:
selector:
matchLabels:
name: my-app
template:
metadata:
labels:
name: my-app
spec:
nodeSelector:
my-key: my-value
containers:
- name: busybox
image: busybox
97
Troubleshooting:

"The DaemonSet wants to create Pods on each Node, but a Pod is stuck!"

98

Quick solutions:
→ List all the Nodes in the cluster
→ List the Pods created by the DaemonSet
→ Compare the lists in order to find the Node in trouble

99
Service

→ Assigns a unique DNS name to a group of Pods (e.g. the my-awesome-app Pods)

100
→ The set of Pods targeted by a Service is usually determined by a selector (e.g. app: my-app):

my-awesome-app.my-ns.svc.cluster.local → Pods with label app: my-app

101
→ Several kinds of Services exist:
◦ ClusterIP
◦ NodePort
◦ LoadBalancer
◦ ExternalName

102
ClusterIP:

→ Reachable from inside the cluster:

$ curl <spec.clusterIP>:<spec.ports[0].port>

103
NodePort:

→ Reachable from outside the cluster by requesting <NodeIP>:<spec.ports[0].nodePort>

104
LoadBalancer:

→ Assigns a fixed external IP address
→ Only for managed clusters (cloud providers)

105
ExternalName:

→ Maps the Service to a DNS name:
my-service.my-namespace.svc.cluster.local → scraly.io

106
Headless Service:

→ Useful in some cases, when you need a service discovery mechanism without load-balancing
→ kube-proxy doesn't handle the Service
→ spec.clusterIP should be equal to "None"

107
HowTo:

} Create a Service that exposes a Deployment on port 80
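With the standard kubectl expose syntax (my-deploy is a placeholder):

$ kubectl expose deploy my-deploy --port=80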
→ You can't edit the ClusterIP of an existing Service ("None" included), because the clusterIP field is immutable.

108
Labels

→ Key/value pairs attached to an object (e.g. app: my-app)
→ Several objects/resources can have the same label

→ The kubernetes.io/ and k8s.io/ prefixes in label keys are reserved for Kubernetes core components.

109
HowTo:

> Create a Pod with two labels
apiVersion: v1
kind: Pod
metadata:
name: my-app
labels:
app: my-app
version: v1
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
110
Selectors

→ Selectors use labels to filter or select objects/resources
→ Selectors can be:
◦ equality-based — exact match: =, ==, !=
◦ set-based — match expressions: in, notin, exists

111
HowTo:

> Show the labels of all of my Pods
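With the standard --show-labels flag:

$ kubectl get pods --show-labels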
> Create a Deployment that will manage Pods with label app: my-app
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deploy
  labels:
    app: my-app
spec:
  selector:
    matchLabels:      # equality-based
      app: my-app

112
> Create a Deployment that manages Pods that have label app = my-app & a version value equal to v1 or v2
apiVersion: v1
kind: Deployment
metadata:
name: my-deploy
labels:
app: my-app
spec:
selector:
matchLabels:
app: my-app
matchExpressions:    # set-based
- {key: version, operator: In, values: ["v1","v2"]}

113
Ingress

→ Allows access to your Services from outside the cluster

114
→ An Ingress is implemented by a 3rd party: an Ingress Controller
↳ extends the specs to support additional features
→ Consolidates several routes in a single resource

115
HowTo:

> Create a Nginx-based Ingress which defines two routes:
◦ /my-api/v1 to Service my-api-v1
◦ /my-api/v2 to Service my-api-v2
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:              # if no host is specified,
- host: scraly.com  # rules are applied to all
  http:             # inbound HTTP traffic
    paths:
    - path: /my-api/v1
      pathType: Prefix
backend:
service:
name: my-api-v1
port:
number: 80
- path: /my-api/v2
pathType: Prefix
backend:
service:
name: my-api-v2
port:
number: 80
116
Tips:

→ In order to specify which Ingress should be handled by which controller, use the ingress.class annotation:
…
metadata:
name: my-gce-ingress
annotations:
kubernetes.io/ingress.class: gce
pathType:

◦ Exact: matches the URL path exactly, with case sensitivity
◦ Prefix: /my-api/v1 matches /my-api/v1 and /my-api/v1/toto

117
> Create an Ingress secured through TLS
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress-with-tls
spec:
tls:
- hosts:
- scraly.com
secretName: secret-tls
rules:
- host: scraly.com
…
118
→ But be careful: each Ingress Controller has its own resources & specificities. For example, Contour's name for Ingress is IngressRoute.

"I'm not sure you have understood it..." — it depends on the chosen controller!

119
PV, PVC & StorageClass

In short:
→ Pods are mortal by default & Nodes will not live forever too
→ To store data permanently, use Persistent Volumes

120
Static provisioning (without StorageClass):
→ the administrator creates the PersistentVolume backed by real storage, a PersistentVolumeClaim binds to it, and the Pod (in a namespace) mounts the PVC.

121
Dynamic provisioning:
→ the PersistentVolumeClaim is linked to a StorageClass, which dynamically creates the PersistentVolume and the underlying storage.

122
→ PersistentVolumes provide a storage location that has a lifetime independent of any Pod or Node
→ A PV is not sticked in a namespace (it's cluster-scoped)
→ Backed by NFS, cloud-provider-specific storage systems...
→ 2 sorts of provisioning: static & dynamic (through a StorageClass name)

123
Access Modes:

◦ ROX (ReadOnlyMany): read-only by many Nodes
◦ RWX (ReadWriteMany): read & write by many Nodes
◦ RWO (ReadWriteOnce): read & write by a single Node
◦ RWOP (ReadWriteOncePod, since 1.22): read or write by a single Pod

124
Reclaim Policy:

125
HowTo:

} Create a PV with NFS type, linked to AWS EFS
apiVersion: v1
kind: PersistentVolume
metadata:
name: my-pv
spec:
capacity:
storage: 100Gi
accessModes:
- ReadWriteMany
nfs:
path: /
server: fs-xxx.efs.eu-central-1.amazon.com
readOnly: false
126
StorageClass

→ In order to set up dynamic provisioning: GCE PersistentDisk, Azure Disk, AWS EBS...
→ volumeBindingMode controls when volume binding & dynamic provisioning happen

HowTo:
> Create a StorageClass with SSD disks (GCE)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: faster
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-ssd
volumeBindingMode: Immediate
127
PersistentVolumeClaim

HowTo:

} Create a PVC with ReadWriteMany access mode
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-pvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 100Gi
storageClassName: ""
volumeName: my-pv

128
Attach to a Pod:
→ The volume is accessible to all containers in the Pod.

HowTo:

} Create a Pod that mounts the PVC with read-only access
apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
containers:
- name: my-container
image: nginx
volumeMounts:
- mountPath: "/data"
name: nfs-storage
readOnly: true
volumes:
- name: nfs-storage
persistentVolumeClaim:
claimName: my-pvc
129
HPA (Horizontal Pod Autoscaler)

→ Scales Pods based on observed CPU utilization
→ API versions: autoscaling/v1, autoscaling/v2beta2...

131
→ Available for ReplicaSet, Deployment & StatefulSet
→ Not available for DaemonSet

132
HowTo:

> Autoscale / create a HPA for a Deployment that maintains an average CPU usage across all Pods of 80%

133
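With the standard kubectl autoscale syntax (the min/max values are illustrative):

$ kubectl autoscale deploy my-deploy --cpu-percent=80 --min=3 --max=10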
New features:

> Create a HPA with behavior (since 1.18):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: my-hpa
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: my-deploy
minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
  behavior:
    scaleUp:
      policies:
      # scale up by at most 90% more Pods every 15s
      - type: Percent
        value: 90
        periodSeconds: 15
scaleDown:
policies:
# scale down 1 Pod every 10 min
- type: Pods
value: 1
periodSeconds: 600
134
Limit Range

→ Defines default memory request/limit values for containers in a namespace

HowTo:

> Create a LimitRange
apiVersion: v1
kind: LimitRange
metadata:
name: memory-limit-range
spec:
limits:
- default:
memory: 512Mi
defaultRequest:
memory: 256Mi
type: Container
135
① Pod creation, with one container, without memory request & limit

→ The Pod is scheduled on a Node.
"A LimitRange exists in your namespace, so I append your configuration with the default values; now I can schedule it." — LimitRange admission controller

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
    resources:         # added by the LimitRange
      requests:        # admission controller
        memory: 256Mi
      limits:
        memory: 512Mi

136
② Pod creation, with one container, with a memory limit only

→ The Pod is scheduled on a Node.
"No memory request is defined: I add a memory request equal to the limit." — LimitRange admission controller

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
    resources:
      limits:
        memory: 512Mi
      requests:        # added by the LimitRange
        memory: 512Mi  # admission controller

137
③ Pod creation, with one container, with a memory request only

→ The Pod is scheduled on a Node.
"No memory limit is defined: I add a memory limit equal to the default one defined in the LimitRange." — LimitRange admission controller

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
    resources:
      requests:
        memory: 100Mi
      limits:          # added by the LimitRange
        memory: 512Mi  # admission controller

138
Requests & Limits

→ If Pods use too much memory, the OOM Killer can destroy them. "TERMINATED!"

139
→ An OOM (Out Of Memory) Killer runs on each Node.
→ You should define requests and limits for CPU & memory:

resources:
  requests:
    memory: 64Mi
    cpu: 250m
  limits:
    memory: 128Mi
    cpu: 500m

140
→ The Scheduler uses this information to decide on which Node to place the Pod.

141
"
r.
between request & limit ?
-
REQUEST
(nininumneededforpodt-b.hu
=
DLiM, =
142
<betæeny? difference
Is there a
[µ
Et
€0 , , 1
"
heep running -
milliCore
Defined in : l Core = 1000m
- - -
- - -
- - - - - -
- - - - -
.
- -
- . _
A A
ÆÆËÆÆ
IN
Defined in
bytes .
143
Defining[Ëy
applications performances Yo
144
HowTo:

> Create a Pod with defined requests & limits
apiVersion: v1
kind: Pod
metadata:
name: my-app
spec:
containers:
- name: my-container
image: my-image:1.0.0
resources:
  requests:
    memory: 64Mi
    cpu: 250m
  limits:           # a limit can't be lower
    memory: 128Mi   # than its request
    cpu: 500m

145
Tips:

→ Requests and limits should be defined for each individual Pod, in order for them to be scaled properly.

146
PDB (Pod Disruption Budget)

"I defined 3 replicas for my Deployment; is it sufficient for my application's availability?"
"Unfortunately, no."

→ A PDB is a way to increase application availability
→ A PDB provides protection against voluntary evictions:
◦ Node drain
◦ Rolling upgrade
◦ Delete a Deployment
◦ and... delete a Pod

147
(Deployment: spec.replicas: 3) "Eviction refused: I can't evict this Pod, because maxUnavailable = 1!"

→ A PDB cannot prevent involuntary disruptions.

148
HowTo:

…
metadata:
  name: ingress-nginx-controller-pdb
spec:
  minAvailable: 1     # how many Pods must always be available
  maxUnavailable: 1   # how many Pods can be evicted
                      # (use only one of minAvailable / maxUnavailable)
  selector:
    matchLabels:
      app: my-app     # match only Pods with label app: my-app

→ It's also possible to define minAvailable as a percentage: minAvailable: 50%

} Show the PDBs in my namespace

$ kubectl get pdb -n my-namespace

149
QoS (Quality of Service)

→ The Scheduler uses requests to make decisions about scheduling Pods:
"This Pod needs... OK, I can put it in this Node."
"This app has no limits; I don't know if it's a big app or not!"

150
→ Depending on the requests and limits defined, a Pod is classified in a certain QoS class.

→ 3 QoS classes:

Guaranteed:
☆ Pods are guaranteed to not be killed until they exceed their limits

151
"
☒
Burstab.LT/
☆ When they reached their limit ,
R☒
BestEf
☆ Containers can use
any
amount of free Memory
and CPU in the Node .
152
Assign a specific QoS class to a Pod:

Guaranteed:
☆ Requests and limits must be equal in all containers (even init containers)
apiVersion: v1
kind: Pod
metadata:
name: my-app
spec:
containers:
- name: my-container
image: nginx
ports:
- containerPort: 80
resources:
  requests:
    memory: "64Mi"
    cpu: "20m"
  limits:
    memory: "64Mi"
    cpu: "20m"

153
Burstable:
☆ At least one container in the Pod must have a memory or CPU request defined
apiVersion: v1
kind: Pod
metadata:
name: my-app
spec:
containers:
- name: my-container
image: nginx
ports:
- containerPort: 80
resources:
  requests:
    memory: "64Mi"
  limits:
    memory: "128Mi"

154
BestEffort:
☆ If requests and limits are not set in any container, the Pod gets the BestEffort QoS class.

155
HowTo:

> Get the QoS class of my Pod

156
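The QoS class is exposed in the Pod's status; with the standard jsonpath syntax:

$ kubectl get pod my-pod -o jsonpath='{.status.qosClass}'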
Network Policies

→ Scoped by namespaces

157

→ Limited to IP addresses (no domain name)
→ Prerequisites: a network plugin that supports NetworkPolicy

"OK, you are allowed!"

158
HowTo:

① Create a NetworkPolicy for Pods with label role: my-role
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: test-network-policy
namespace: my-namespace
spec:
  podSelector:        # if podSelector is empty:
    matchLabels:      # select all the Pods in the namespace
      role: my-role
policyTypes:
- Ingress
- Egress
ingress:
- from:
  - podSelector:          # allow connections from all Pods
      matchLabels:        # with label role: allow-role
        role: allow-role
  ports:                  # to communicate to our Pods
  - protocol: TCP
    port: 6379

159
② Allow egress traffic for our Pods (role: my-role)

egress:
- to:
  - podSelector:               # allow our Pods to communicate
      matchLabels:             # to Pods with label
        role: allow-to-role    # role: allow-to-role
  ports:
  - protocol: TCP
    port: 5978

→ When a NetworkPolicy targets Pods, the targeted Pods become isolated & reject all traffic that is not defined in any NetworkPolicy.

160
③ Deny all ingress traffic to our Pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-ingress
namespace: my-namespace
spec:
podSelector: {}   # all Pods in my-namespace
policyTypes:
- Ingress
④ Allow all egress traffic from our Pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-all-egress
namespace: my-namespace
spec:
podSelector: {}
policyTypes:
- Egress
egress:
- {}   # allow all

161
Tips:

→ In ingress & egress rules, several selectors exist:
◦ podSelector
◦ namespaceSelector
◦ ipBlock

162
RBAC (Role-Based Access Control)

→ A method of regulating access to resources, based on roles
→ Useful when you want to control what your users can do, for which kind of resources, in your Kubernetes cluster

→ Introduces 4 objects: Role, ClusterRole, RoleBinding, ClusterRoleBinding
→ Verbs: create, list, get, update...

Role:
◦ namespaced resources (like Pods)

ClusterRole:
◦ cluster-scoped resources (Nodes...)
◦ non-resource endpoints (/healthz)
◦ namespaced resources across all namespaces
"Yes, Role & ClusterRole objects do the same things, but their scopes are different."

RoleBinding:
→ A RoleBinding can reference any Role in the same namespace & grants the permissions defined in it, in that namespace
→ A RoleBinding binds a Role to subjects: users, groups or ServiceAccounts

Example: assigning the "view" Role to my-user in my-namespace

165
ClusterRoleBinding:
→ Same as a RoleBinding, but for all namespaces (cluster-wide)

Example: assigning my-group the right to read Secrets in any namespace

→ Several ClusterRoles can be aggregated thanks to aggregationRule

166
HowTo:

} Create a Role to read Pods in my-namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: my-namespace
name: pod-reader
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list"]
} Bind read access to a user
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: read-pods
namespace: my-namespace
subjects:
- kind: User
name: my-user
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
name: pod-reader
apiGroup: rbac.authorization.k8s.io
} Create a ClusterRole to read Secrets
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: secret-reader   # no metadata.namespace in a ClusterRole
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "watch", "list"]
④ Create a ClusterRoleBinding that binds the ClusterRole to a group:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: read-secrets-global
subjects:
- kind: Group
name: my-group
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: secret-reader
apiGroup: rbac.authorization.k8s.io
168
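To check what a binding actually allows, `kubectl auth can-i` is handy. A sketch using the user and group from the examples above (requires a live cluster):

```shell
# Can my-user list Pods in my-namespace?
# (should be allowed by the read-pods RoleBinding)
kubectl auth can-i list pods --as=my-user -n my-namespace

# Can a member of my-group read Secrets in all namespaces?
# (should be allowed by the read-secrets-global ClusterRoleBinding)
kubectl auth can-i get secrets --as=my-user --as-group=my-group -A
```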
You can test a ServiceAccount linked to a Role: access the cluster as the ServiceAccount with kubectl.

① Create a "test-ns" namespace and a ServiceAccount:

$ export TEAM_SA=my-sa-test
$ export NAMESPACE_SA=test-ns
170
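The remaining steps of this test setup are presumably kubectl one-liners. A sketch using the exported variables, assuming a pod-reader Role also exists in that namespace (binding name is illustrative):

```shell
# Create the namespace and the ServiceAccount
kubectl create namespace "$NAMESPACE_SA"
kubectl create serviceaccount "$TEAM_SA" -n "$NAMESPACE_SA"

# Bind an existing Role to the ServiceAccount
kubectl create rolebinding my-sa-binding \
  --role=pod-reader \
  --serviceaccount="$NAMESPACE_SA:$TEAM_SA" \
  -n "$NAMESPACE_SA"

# Verify what the ServiceAccount can do
kubectl auth can-i list pods \
  --as="system:serviceaccount:$NAMESPACE_SA:$TEAM_SA" \
  -n "$NAMESPACE_SA"
```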
Pod Security Policy (PSP)    ⚠ Deprecated

→ Set of conditions / rules a Pod must follow in order to run:
  ◦ types of volumes
  ◦ privileged Pods
  ◦ read-only filesystem
  ◦ hostPort
  ◦ privilege elevation (sudo)

→ So, first you need to create policies, then activate the PSP admission controller on your cluster in order to use them

→ When PSPs are activated, every Pod that wants to run must be allowed by one of them

In practice

→ A Pod is linked to a PSP thanks to its ServiceAccount
→ A RoleBinding grants permissions within a namespace; a ClusterRoleBinding (cluster-wide) associates a ClusterRole's permissions to a desired SA
172
?⃝
→ Only one PSP can be applied to a Pod

→ Non-mutating policies don't change the Pod. Mutating policies adapt / change a few things ("Hey! You don't respect the rules, let's change them so you'll have the right to run").

How does it work?

① Non-mutating policies are applied to the Pod first
② If several mutating policies match, one is chosen, ordered by name (alphabetically)
174
How to?

① Define a policy named "my-psp" that prevents the creation of privileged Pods:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: my-psp
spec:
privileged: false → Don't allow the creation
  seLinux:
    rule: RunAsAny         } of privileged Pods
supplementalGroups:
rule: RunAsAny
  runAsUser:
    rule: RunAsAny         } other control
  fsGroup:                 } aspects
    rule: RunAsAny
volumes:
- '*' }
Allow access to all available volumes
② Create a ClusterRole that grants access to my-psp:

  verbs:
  - use                → grant access
  resourceNames:
  - my-psp             → to my-psp
175
③ Create a RoleBinding linked to a desired ServiceAccount (SA):

subjects:
- kind: ServiceAccount       } to the "default" SA
  name: default              } in the "my-namespace"
  namespace: my-namespace    } namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: my-rolebinding
namespace: my-namespace
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: my-cluster-role
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
  name: system:serviceaccounts
176
④ Enable the PodSecurityPolicy admission controller in an existing GKE cluster.
177
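On GKE this was done with a cluster update flag (the feature was beta there at the time; cluster name is illustrative):

```shell
# Enable the PSP admission controller on an existing GKE cluster
gcloud beta container clusters update my-cluster \
  --enable-pod-security-policy
```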
Node & Pod (Anti-)Affinity

→ By default, when you deploy a Pod, Kubernetes schedules it on the Node the scheduler finds the most suitable.

But did you know that you can control on which Node(s) your Pods will run?
178
→ Several ways to handle Node and Pod affinities:
  ◦ nodeSelector
  ◦ Node Affinity / Anti-Affinity
  ◦ Pod Affinity / Anti-Affinity
179

nodeSelector

→ By default, each Node already has labels, but you can add your own labels to force a Pod to run on a chosen Node.
180
Node Affinity

→ Like nodeSelector, but more expressive: you define match criteria
181

Allowed operators: In, Exists, Gt, Lt, NotIn, DoesNotExist

Node "Anti-Affinity"

→ Achieved with the NotIn and DoesNotExist operators. If the labels on a Node change at runtime, Pods already running on it are not evicted: affinity only applies at scheduling time.
182
Pod Affinity

→ Allows running a Pod on the same Node as another Pod.

Useful for Pods that need to run on the same machine.
183

Pod Anti-Affinity

→ Allows running the replicas of a Deployment on different Nodes.

You will be able to distribute your Pods on different Nodes. If one Node dies, the app will still be available.
184
How to?

☆ List all Nodes with their labels:

$ kubectl get nodes --show-labels

☆ Create a Pod that will run on a Node with a given label:
apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
containers:
- name: nginx
image: nginx
  nodeSelector:          } run the Pod on a Node
    node: my-node        } with the given label
185
☆ Create a Pod with Node Affinity, that will run on a Node that has a specific label, if possible:
apiVersion: v1
kind: Pod
metadata:
name: my-pod-with-node-affinity
spec:
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: app
operator: In
values:
- my-app
containers:
- name: my-container
image: busybox
186
☆ Create a Pod that will run next to Pods with the label friend="true":
apiVersion: v1
kind: Pod
metadata:
name: my-pod-with-pod-affinity
spec:
affinity:
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: friend
operator: In
values:
        - "true"
topologyKey: topology.kubernetes.io/zone
containers:
- name: my-container
image: busybox
} both Pods are scheduled on the same Node
187
☆ Create a Deployment with 3 replicas; Pod Anti-Affinity spreads the replicas on different Nodes:
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-db
spec:
replicas: 3
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- my-app
topologyKey: kubernetes.io/hostname
containers:
- image: my-image:1.0
name: my-container
188
Node

→ A Node is a physical or virtual machine (VM)
189
→ Container runtime: the component responsible for running containers on a Node:

① pull the image from a registry
② unpack the container
③ run the app
190
→ The image is pulled only if it doesn't already exist on the Node.

→ Node conditions describe the status of a Node: Ready? Out of Disk? ...
191

→ A cordoned Node has the status "Ready,SchedulingDisabled"
192
How to?

} Show more information about a Node (conditions, capacity, ...):
$ kubectl describe node my-node
193
Node Operations

] Why? Upgrade a Node, Node maintenance, ...

Drain

→ Evict all Pods from a Node & mark the Node as unschedulable:

$ kubectl drain my-node

Taints

NoSchedule

→ Pods that don't tolerate the taint can't be scheduled on the Node
195
PreferNoSchedule

→ Kubernetes avoids scheduling Pods that don't tolerate the taint

NoExecute

→ Pods are evicted from the Node if they are already running; otherwise they can't be scheduled
How to?

} Pause a Node (don't accept new workloads on it):

$ kubectl cordon my-node

} Unpause a Node:

$ kubectl uncordon my-node

} Add a toleration to a Pod:
...
spec:
tolerations:
- key: "specialkey"
operator: "Equal"
value: "specialvalue"
effect: "NoExecute"
...
spec:
tolerations:
- key: "my-key"
operator: "Equal"
value: "my-value"
effect: "NoExecute"
tolerationSeconds: 6000 } Delay Pod eviction
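The matching taint is set on the Node side. A sketch with the key/value used in the toleration above:

```shell
# Taint the node: Pods without a matching toleration are evicted
kubectl taint nodes my-node my-key=my-value:NoExecute

# Remove the taint (note the trailing dash)
kubectl taint nodes my-node my-key=my-value:NoExecute-
```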
⚠ The Node controller can automatically taint a Node, without a manual action.

When?
☐ Node is not ready
☐ Node is unreachable
☐ Out of disk
☐ Network unavailable
197
Debugging / Troubleshooting
198

① Show more information about your Pods

} Display all Pods in the "my-ns" namespace, with their status:

$ kubectl get pods -n my-ns
199
② My Pod is stuck in "Pending" status

→ The Pod can't be scheduled on a Node.

Solutions: look at the Pod's events, check the Nodes' available resources...
200
③ Show cluster events

→ Display all events for the namespace "my-ns":

$ kubectl get events -n my-ns

⚠ Events are also shown when you execute the kubectl describe command. Events are namespaced.
201
④ My Pod is stuck in "ImagePullBackOff" status

→ Kubernetes can't pull the container image of my-app.

Questions:
- Are the image name, tag & URL good?
- Can you pull it?
202
Since 1.18 (alpha):

→ Run an ephemeral container near the Pod we want to debug

Pre-requisite: active feature gate EphemeralContainers=true

→ The debug container shares a process namespace with a container inside the Pod, so it can see all processes created by my-pod
203
⑤ My Pod has been restarted multiple times

] If Pods use too much memory, the OOM Killer can destroy them.

Solution: set appropriate resource requests & limits.

⚠ When you link a Pod to a ConfigMap and/or a Secret, make sure they exist.
⑥ My Pod is running, but I don't know why it's not working

→ You can simply watch Pod logs in order to try to understand.

⚠ When a Pod is evicted or a Node terminated, logs are no longer available.
205
⑦ My container is restarting

→ A liveness probe is used to tell if the container is healthy.

Possible issue:
- the application takes a long time to start
⑧ I want to access my Pod without an external load-balancer

→ kubectl port-forward forwards a local port of your computer to a port of the Pod:

$ kubectl port-forward my-pod 8080:8080

⚠ If the remote port and the local port are the same, specify just one.

$ curl localhost:8080/my-api
207
Kubectl Convert

→ Since 1.21, kubectl convert is provided as a plugin.

→ Aim: update manifests to a specific (newer) API version, for example when an API version is deprecated.
208
How to?

} Convert a manifest to another API version
209
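A sketch of the usual invocation (file name is illustrative):

```shell
# Convert a manifest to a given API version
kubectl convert -f my-deployment.yaml --output-version apps/v1

# Without --output-version, convert to the latest supported version
kubectl convert -f my-deployment.yaml
```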
Tools

kubectx

« Manage & switch between kubectl contexts »
https://github.com/ahmetb/kubectx
210
"
Et
habens
a
Manage & switch
between
namespace )
s
https://github.com/ahmetb/kubectx
$ kubens
Switch to a
namespace
$ kubens my-namespace
211
"
Et
Stern
"
Kubectl logs
Lender steroids )
s
https://github.com/wercker/stern
Display logs of Pods whose names start with "my-pod-start":
$ stern my-pod-start
212
"
Et
krew
ITË Ê:ËÏË÷ÏËÊ ËÊÏÏË Ë÷:
"
Package manager
for kubectl plugins )
s
https://github.com/kubernetes-sigs/krew
⚠ After installation via krew, kubectx and kubens are available as:
$ kubectl ctx and $ kubectl ns

Add the "scraly" private index:
$ kubectl krew index add scraly ...

Install the "season" plugin:
$ kubectl krew install season

Upgrade the "season" plugin:
$ kubectl krew upgrade season
214
"
Et
le 9s
ˌȥƥËᵉËÆ*ËÊÆ
LE
https://github.com/derailed/k9s
Launch fr95
$ k9s
Run k9s in a namespace:
$ k9s -n my-namespace
215
"
Et
k 3s
CC
Lightweight Kubernetes
cluster »
https://k3s.io
216
skaffold
https://skaffold.dev
Init your project (and create a skaffold.yaml):
$ skaffold init
Deploy image
$ skaffold deploy
kustomize
https://github.com/kubernetes-sigs/kustomize
→ Built-in to kubectl since v1.14
→ Like a cake with layers: a base, plus "mix-in" layers (secrets, replicas, ...)

→ The aim is to add layers of modifications on top of a base.
How to?

① Define a deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
name: kustomize-app
labels:
app: kustomize
spec:
replicas: 1
selector:
matchLabels:
app: kustomize
template:
metadata:
labels:
app: kustomize
spec:
containers:
- name: app
image: gcr.io/foo/kustomize:latest
ports:
- containerPort: 8080
name: http
protocol: TCP
219
② Create a file called custom-env.yaml, overriding only what changes:
apiVersion: apps/v1
kind: Deployment
metadata:
name: kustomize-app
labels:
app: kustomize
spec:
replicas: 1
selector:
matchLabels:
app: kustomize
template:
metadata:
labels:
app: kustomize
spec:
containers:
- name: app
        env:
        - name: MESSAGE_BODY
          value: "..."
        - name: MESSAGE_FROM
          value: "..."        } overlay "custom-env" added by
                                Kustomize on top of our base
③ In a kustomization.yaml file:

resources:
- ../../base              } base = our deployment
patchesStrategicMerge:
- custom-env.yaml         } patches to apply
④ Apply
$ kubectl apply -k /src/main/k8s/overlay/prod
220
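Before applying, you can render the merged manifests without touching the cluster. A sketch with the same overlay path:

```shell
# Print the result of base + overlay to stdout
kubectl kustomize /src/main/k8s/overlay/prod

# Equivalent with the standalone CLI
kustomize build /src/main/k8s/overlay/prod
```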
You can also use the kustomize CLI:

① Set / change the image tag:
$ kustomize edit set image
my-image=my-repo/project/my-image:tag
② Create a secret with kustomize edit; the generated kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base
patchesStrategicMerge:
- custom-env.yaml
- replica.yaml
secretGenerator:
- literals:
- password=toto
name: my-secret
type: kubernetes.io/dockerconfigjson
③ Add a prefix and a suffix to resources' names:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base
namePrefix: my-
nameSuffix: -v1
222
④ Create a kustomization.yaml which creates a ConfigMap, from a file or from literals:
configMapGenerator:
- name: my-configmap
  files:
  - application.properties

configMapGenerator:
- name: my-configmap2
  literals:
  - key=value
⑤ Disable the hash suffix in generated ConfigMap and Secret resource names:
generatorOptions:
disableNameSuffixHash: true
⚠ generatorOptions changes the behavior of all ConfigMap & Secret generators.
223
kubeseal

« Encrypt your Secrets »
https://github.com/bitnami-labs/sealed-secrets
Create a Secret
224
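A typical sealing flow, assuming the sealed-secrets controller is installed in the cluster (file and secret names are illustrative):

```shell
# Generate a Secret manifest locally (not applied to the cluster)
kubectl create secret generic my-secret \
  --from-literal=password=toto \
  --dry-run=client -o yaml > secret.yaml

# Encrypt it with the controller's public key
kubeseal -o yaml < secret.yaml > sealed-secret.yaml

# The SealedSecret is safe to commit; apply it to the cluster
kubectl apply -f sealed-secret.yaml
```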
""
sealed secret
" .
q
secret
Y -
""
÷!
"
← "
ü
q
÷ ,
§
.
secret
my -
225
"
Et
" ""
ËÊ ↓☒ÊçË¥Ë
"
" " ""
Security "
scanner clusters
'
»
for .
ooo
https://github.com/aquasecurity/trivy
→ Vulnerability detection:
  ☐ OS packages
  ☐ Application dependencies (npm, yarn, cargo, pipenv, composer, bundler, ...)

→ Misconfiguration detection (Kubernetes, Docker, Terraform, ...)

→ Secret detection

→ Simple and fast scanner

→ Easy integration in CI

→ A Kubernetes operator
226
Scan an image:
$ trivy image python:3.4-alpine
227
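The misconfiguration and secret detection mentioned above have their own subcommands. A sketch using flags from recent Trivy versions (directory paths are illustrative):

```shell
# Scan IaC files (Kubernetes manifests, Dockerfiles, Terraform...)
trivy config ./k8s-manifests

# Scan a filesystem for vulnerabilities and secrets
trivy fs --scanners vuln,secret .
```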
↓ "
G⑤T=Iˢ
.
https://velero.io
Create a full backup:
$ velero backup create my-backup
Schedule a backup:
Schedule backup
$ velero schedule create daily-backup
--schedule="0 10 * * *"
228
Tips:

→ Create a backup before a cluster upgrade

→ As it is a good practice, back up your resources daily / weekly in sensitive clusters

→ Enable the versioning / revisions feature on the backup location bucket
229
popeye
https://github.com/derailed/popeye
→ Scans your cluster and outputs a report (with a score)

→ Several ways to install it (locally, Docker, in the cluster, ...)

Scan only a list of specified resources
230
Scan with a
config file
$ popeye -f spinach.yaml
231
| kyverno
https://kyverno.io
→ Policies can:
  ◦ validate
  ◦ generate
  ◦ mutate
resources in your cluster.
NAME WEBHOOKS
AGE
validatingwebhookconfiguration.admissionregistration.k8s.io/kyverno-policy-validating-webhook-cfg 1
52s
validatingwebhookconfiguration.admissionregistration.k8s.io/kyverno-resource-validating-webhook-cfg 2
52s
232
→ Kyverno runs as an admission controller (webhooks) in your cluster.

How to?
} Create a policy that disallows deploying Pods in the default namespace:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: disallow-default-namespace
spec:
validationFailureAction: enforce
rules:
- name: validate-namespace
match:
resources:
kinds:
- Pod
validate:
message: "Using \"default\" namespace is not
allowed."
pattern:
metadata:
namespace: "!default"
- name: require-namespace
match:
resources:
kinds:
- Pod
validate:
message: "A namespace is required."
pattern:
metadata:
namespace: "?*"
234
⚠ Set validationFailureAction to "enforce" to block non-conforming resources; "audit" only reports them.
} Create a policy that creates a ConfigMap in all namespaces except kube-system.

⚠ Set synchronize to true to propagate changes across namespaces.
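The generate policy itself might look like this sketch (ConfigMap name and data are illustrative):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: generate-configmap
spec:
  rules:
  - name: generate-configmap
    match:
      resources:
        kinds:
        - Namespace          # trigger on namespace creation
    exclude:
      resources:
        namespaces:
        - kube-system        # skip kube-system
    generate:
      kind: ConfigMap
      name: my-configmap
      namespace: "{{request.object.metadata.name}}"
      synchronize: true      # propagate future changes
      data:
        data:
          my-key: my-value
```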
} Create a policy that adds the label app=my-awesome-app to Pods, Services, ConfigMaps & Secrets in the team-a namespace:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: add-label
spec:
rules:
- name: add-label
match:
resources:
kinds:
- Pod
- Service
- ConfigMap
- Secret
namespaces:
- team-a
mutate:
patchStrategicMerge:
metadata:
labels:
app: my-awesome-app
236
} Display existing policies for all namespaces:
$ kubectl get cpol -A
NAME BACKGROUND ACTION READY
add-label true audit
disallow-default-namespace true enforce true
⚠ You can display policy results in a nicer way with Policy Reporter:
https://github.com/kyverno/policy-reporter
237
Kubernetes 1.18

GENERAL
◦ First release of 2020
◦ Own logo

INGRESS CLASS

→ New ingressClassName field in the Ingress resource, replacing the deprecated ingress.class annotation.
238
KUBECTL DEBUG (Alpha)

→ Run an ephemeral debug container near the Pod you want to debug:

$ kubectl alpha debug -it my-pod --image=busybox \
  --target=my-pod --container=my-debug-container

⚠ The debug container can see all processes created by my-pod.
239
HPA BEHAVIOR (Alpha)

→ New "behavior" field to configure how fast to scale up & down (fragment of the HPA spec):
maxReplicas: 10
targetCPUUtilizationPercentage: 80
behavior:
scaleUp:
policies:
- type: Percent
value: 90
periodSeconds: 15
scaleDown:
policies:
# scale down 1 Pod every 10 min
- type: Pods
value: 1
periodSeconds: 600
240
IMMUTABLE SECRETS & CONFIGMAPS (Alpha)

→ New field "immutable: true" makes a Secret or ConfigMap immutable.

KUBECTL RUN

| The kubectl run command now creates only a Pod. Aim: one command, one usage.
241
Kubernetes 1.19

GENERAL
◦ Longest delivery cycle, due to COVID
◦ 34 enhancements

IMMUTABLE SECRETS & CONFIGMAPS

→ Allows you to not edit sensitive data by mistake:
apiVersion: v1
kind: ConfigMap
metadata:
name: my-cm
immutable: true
data:
my-key: my-value
242
243
SECCOMP (GA)

→ Seccomp (Secure Computing Mode) is a Linux kernel security feature used to restrict the system calls a container can make.

◦ Provides sandboxing
apiVersion: v1
kind: Pod
metadata:
name: my-audit-pod
labels:
app: my-audit-pod
spec:
securityContext:
seccompProfile:
type: Localhost
localhostProfile: profiles/audit.json
containers:
- name: test-container
image: hashicorp/http-echo:0.2.3
args:
- "-text=just made some syscalls!"
securityContext:
allowPrivilegeEscalation: false
244
Kubernetes 1.20

GENERAL
◦ 42 enhancements

DOCKER DEPRECATION

[ Docker support in kubelet is now deprecated.

⚠ But don't panic: your Docker-produced images will continue to work.
245
EXEC PROBE TIMEOUT

→ Exec probes now respect the timeoutSeconds field:
apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
containers:
- name: my-container
image: my-image:1.0
livenessProbe:
exec:
command:
- cat
- /tmp/healthy
initialDelaySeconds: 5
periodSeconds: 5
timeoutSeconds: 5
246
STARTUP PROBE

| Holds off all the other probes until the Pod finishes its startup:
apiVersion: v1
kind: Pod
metadata:
name: my-pod
labels:
app: my-app
spec:
containers:
- name: my-container
image: my-image:1.0
ports:
- name: liveness-port
containerPort: 8080
livenessProbe:
httpGet:
path: /healthz
port: liveness-port
failureThreshold: 1
periodSeconds: 10
startupProbe:
httpGet:
path: /healthz
port: liveness-port
    failureThreshold: 30    } give the app 5 min (30 × 10s)
    periodSeconds: 10       } to finish its startup
247
Kubernetes 1.21

GENERAL
◦ 51 enhancements

PSP DEPRECATION

[ PSP is deprecated & will continue to be available and fully functional until 1.25.
https://kubernetes.io/blog/2021/04/06/podsecuritypolicy-deprecation-past-
present-and-future/
248
?⃝
CRONJOB

◦ The new CronJob (batch/v1) is now Generally Available (GA).
249
DEFAULT CONTAINER ANNOTATION

→ New annotation to set the default container targeted by kubectl commands:
apiVersion: v1
kind: Pod
metadata:
name: my-pod
annotations:
    kubectl.kubernetes.io/default-container: my-container-2
spec:
containers:
- name: my-container
image: my-image
- name: my-container-2
image: my-image-2
command: ["/bin/sh", "-c"]
args:
- while true; do
date >> /html/index.html;
sleep 1;
done
250
Kubernetes 1.22

GENERAL
◦ The release cadence has changed: 3 releases per year.
◦ 53 enhancements
251
WARNING MECHANISM

| Kubernetes returns warning messages when you use deprecated API versions.

IMMUTABLE LABEL

| By default, namespaces are not guaranteed to have any identifying label. Now an immutable label, kubernetes.io/metadata.name, containing the namespace name, is added to all namespaces.

→ Useful, for example, to select namespaces by name in a NetworkPolicy namespaceSelector.
252
Kubernetes 1.23

GENERAL
◦ Last release of 2021
◦ 47 enhancements

HPA API (GA)

→ The autoscaling/v2 API, including the "behavior" field, is now stable:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: my-hpa
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: my-deploy
minReplicas: 3
maxReplicas: 10
targetCPUUtilizationPercentage: 80
  behavior:
scaleUp:
policies:
- type: Percent
value: 90
periodSeconds: 15
253
EPHEMERAL CONTAINERS (Beta)

→ Run an ephemeral container near the Pod you want to debug; the debug container can see all processes created by my-pod.
254
JOB TTL (GA)

| Welcome to the new field ttlSecondsAfterFinished in the Job spec, that will delete the Job once it has finished. No need to schedule a cleanup of Jobs anymore!
apiVersion: batch/v1
kind: Job
metadata:
name: my-job-with-ttl
spec:
ttlSecondsAfterFinished: 100
template:
metadata:
name: my-job-with-ttl
spec:
containers:
- name: busybox
image: busybox
255
POD SECURITY ADMISSION (Beta)

→ Replaces PSP (which will be removed in 1.25): prevents Pods from ever having dangerous capabilities.
"°^ÛᵐW
-
fAll clusters
at the same
now
time
support
.
IP v4 and IP v6
duae.stackcap.se?F-pE..I
www.a.yg.e.UW
To use this feature ,
Nodes must have IP v4 &
IP v6 network interfaces .
So
you
must use a
256
?⃝
Kubernetes 1.24

GENERAL
◦ First release of 2022
◦ 46 enhancements

DOCKERSHIM REMOVAL

[ Docker support in kubelet (dockershim) is removed.

⚠ Docker-built images will continue to work in your cluster.
257
GRPC PROBES (Beta)

→ Aim: configure startup / liveness / readiness probes for gRPC applications:
apiVersion: v1
kind: Pod
metadata:
name: etcd-with-grpc
spec:
containers:
  - name: etcd
image: k8s.gcr.io/etcd:3.5.1-0
command: [ "/usr/local/bin/etcd", "--data-dir",
"/var/lib/etcd", "--listen-client-urls", "http://
0.0.0.0:2379", "--advertise-client-urls", "http://
127.0.0.1:2379", "--log-level", "debug"]
ports:
- containerPort: 2379
livenessProbe:
grpc:
port: 2379
initialDelaySeconds: 10
LOADBALANCERCLASS (GA)

| New field loadBalancerClass in Service: choose the type of load balancer you want.
258
NEW KUBELET METRIC

→ Welcome to a new metric in kubelet.
259
ALPHA APIS DISABLED BY DEFAULT

[ Alpha APIs are disabled by default and can be enabled explicitly.

CREATE SA TOKEN SECRET

→ Secrets containing ServiceAccount tokens are no longer auto-generated; you can create one manually:
apiVersion: v1
kind: Secret
metadata:
name: my-sa-secret
annotations:
kubernetes.io/service-account.name: mysa
type: kubernetes.io/service-account-token
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: read-only
namespace: default
secrets:
- name: my-sa-secret
260
Kubernetes 1.25

GENERAL
◦ 40 enhancements

PSP REMOVAL

→ PSP has been removed because of its usability problems.

Migration guide:
https://kubernetes.io/docs/tasks/configure-pod-
container/migrate-from-psp/
261
USER NAMESPACES (Alpha)

[ The fact is that there are a lot of vulnerabilities linked to the privileges given to a Pod. Welcome to a long-awaited feature: user namespaces support in Kubernetes!

Pre-requisite: active feature gate UserNamespacesStatelessPodsSupport=true

How to:
apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
hostUsers: false
containers:
- name: my-container
image: my-image
262
CGROUPS V2 (GA)

→ cgroups v2 support is now stable and built-in.

CONTAINER CHECKPOINTING (Alpha)

→ Create a snapshot of a running container, that can be transferred to another Node.

Pre-requisite: active feature gate ContainerCheckpoint=true
263
JOB: POD FAILURE POLICY (Alpha)

→ Thanks to this feature you can define how Pod failures are handled in a Job.

Pre-requisite: active feature gate JobPodFailurePolicy=true

EPHEMERAL CONTAINERS (GA)

→ Run an ephemeral container near the Pod we want to debug; the debug container can see all processes created by my-pod.
264
264
Kubernetes 1.26

GENERAL
◦ 37 enhancements
◦ The power of the community!
POD DISRUPTION BUDGET & HEALTHY PODS (Alpha)

| PDB only takes Running Pods into account, but a Pod can be Running and not Ready. This feature allows you to prevent the eviction of such unhealthy Pods.

Pre-requisite: active feature gate PDBUnhealthyPodEvictionPolicy=true
265
i
PROVISION FROM VOLUMESNAPSHOT ACROSS NAMESPACES (Alpha)

| Aim: create a PersistentVolumeClaim from a VolumeSnapshot located in another namespace.
266
MIXED PROTOCOLS IN LOADBALANCER SERVICE (GA)

| Aim: create a Service of type LoadBalancer with several port definitions using different protocols:
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
type: LoadBalancer
ports:
- name: tcp
port: 5555
targetPort: 5555
protocol: TCP
- name: udp
port: 5556
targetPort: 5556
protocol: UDP
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
267
WINDOWS PRIVILEGED CONTAINERS (GA)

→ Aim: run privileged (HostProcess) containers on Windows Nodes.

CRI V1ALPHA2 REMOVAL

→ kubelet now requires a container runtime implementing CRI v1 (e.g. containerd 1.6+).
268
Docker deprecation

Since 1.20:

→ Docker support in kubelet is now deprecated & will be removed in a future release.

Previous architecture:
kubelet → dockershim → Docker → containerd

Problem:

→ Docker includes so many components (networking, volumes, UX enhancements, ...) that Kubernetes doesn't need.

New architecture:
kubelet → containerd

⚠ The container runtime is responsible for pulling & running your container images.
271
Glossary

> CM: ConfigMap
> CNI: Container Network Interface
> CRI: Container Runtime Interface
> HPA: Horizontal Pod Autoscaler
> NP: Network Policy
> NS: Namespace
> OCI: Open Container Initiative
> PDB: Pod Disruption Budget
> PSP: Pod Security Policy
> PV: Persistent Volume
> PVC: Persistent Volume Claim
272
Who am I?

Aurélie Vache, DevRel & DevOps, 16+ years of experience

→ Google Developer Expert in Cloud technologies
→ CNCF Ambassador
→ Docker Captain
→ Duchess France / Women in tech association
→ Technical writer, Speaker, Sketchnoter

Contact me!

Abstract

Understanding Kubernetes can be difficult or time-consuming. This book tries to explain the technology in a visual way.

Inside:
- Kubernetes Components
- Resources
- Concrete examples
- Tips & Tools