Whyk8s
Uploaded by
Damian
Overview

Kubernetes is a portable, extensible, open source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.

* 1.4: Namespaces
* 1.5: Annotations
* 1.6: Field Selectors
* 1.7: Finalizers
* 1.8: Owners and Dependents
* 1.9: Recommended Labels
* 2: Kubernetes Components
* 3: The Kubernetes API

This page is an overview of Kubernetes.

The name Kubernetes originates from Greek, meaning helmsman or pilot. K8s as an abbreviation results from counting the eight letters between the "K" and the "s". Google open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of Google's experience running production workloads at scale with best-of-breed ideas and practices from the community.

Going back in time

Let's take a look at why Kubernetes is so useful by going back in time.

Traditional deployment era: Early on, organizations ran applications on physical servers. There was no way to define resource boundaries for applications in a physical server, and this caused resource allocation issues. For example, if multiple applications run on a physical server, there can be instances where one application would take up most of the resources, and as a result, the other applications would underperform. A solution for this would be to run each application on a different physical server. But this did not scale as resources were underutilized, and it was expensive for organizations to maintain many physical servers.

Virtualized deployment era: As a solution, virtualization was introduced.
It allows you to run multiple Virtual Machines (VMs) on a single physical server's CPU. Virtualization allows applications to be isolated between VMs and provides a level of security, as the information of one application cannot be freely accessed by another application.

Virtualization allows better utilization of resources in a physical server and allows better scalability because an application can be added or updated easily, reduces hardware costs, and much more. With virtualization you can present a set of physical resources as a cluster of disposable virtual machines. Each VM is a full machine running all the components, including its own operating system, on top of the virtualized hardware.

Container deployment era: Containers are similar to VMs, but they have relaxed isolation properties to share the Operating System (OS) among the applications. Therefore, containers are considered lightweight. Similar to a VM, a container has its own filesystem, share of CPU, memory, process space, and more. As they are decoupled from the underlying infrastructure, they are portable across clouds and OS distributions.

Containers have become popular because they provide extra benefits, such as:

* Agile application creation and deployment: increased ease and efficiency of container image creation compared to VM image use.
* Continuous development, integration, and deployment: provides for reliable and frequent container image build and deployment with quick and efficient rollbacks (due to image immutability).
* Dev and Ops separation of concerns: create application container images at build/release time rather than deployment time, thereby decoupling applications from infrastructure.
* Observability: not only surfaces OS-level information and metrics, but also application health and other signals.
* Environmental consistency across development, testing, and production: runs the same on a laptop as it does in the cloud.
* Cloud and OS distribution portability: runs on Ubuntu, RHEL, CoreOS, on-premises, on major public clouds, and anywhere else.
* Application-centric management: raises the level of abstraction from running an OS on virtual hardware to running an application on an OS using logical resources.
* Loosely coupled, distributed, elastic, liberated micro-services: applications are broken into smaller, independent pieces and can be deployed and managed dynamically, not a monolithic stack running on one big single-purpose machine.
* Resource isolation: predictable application performance.
* Resource utilization: high efficiency and density.

Why you need Kubernetes and what it can do

Containers are a good way to bundle and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to start. Wouldn't it be easier if this behavior was handled by a system?

That's how Kubernetes comes to the rescue! Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system.

Kubernetes provides you with:

* Service discovery and load balancing: Kubernetes can expose a container using the DNS name or using their own IP address. If traffic to a container is high, Kubernetes is able to load balance and distribute the network traffic so that the deployment is stable.
* Storage orchestration: Kubernetes allows you to automatically mount a storage system of your choice, such as local storages, public cloud providers, and more.
* Automated rollouts and rollbacks: You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate.
For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers and adopt all their resources to the new container.
* Automatic bin packing: You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks. You tell Kubernetes how much CPU and memory (RAM) each container needs. Kubernetes can fit containers onto your nodes to make the best use of your resources.
* Self-healing: Kubernetes restarts containers that fail, replaces containers, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve.
* Secret and configuration management: Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.

What Kubernetes is not

Kubernetes is not a traditional, all-inclusive PaaS (Platform as a Service) system. Since Kubernetes operates at the container level rather than at the hardware level, it provides some generally applicable features common to PaaS offerings, such as deployment, scaling, load balancing, and lets users integrate their logging, monitoring, and alerting solutions. However, Kubernetes is not monolithic, and these default solutions are optional and pluggable. Kubernetes provides the building blocks for building developer platforms, but preserves user choice and flexibility where it is important.

Kubernetes:

* Does not limit the types of applications supported. Kubernetes aims to support an extremely diverse variety of workloads, including stateless, stateful, and data-processing workloads. If an application can run in a container, it should run great on Kubernetes.
* Does not deploy source code and does not build your application.
Continuous Integration, Delivery, and Deployment (CI/CD) workflows are determined by organization cultures and preferences as well as technical requirements.
* Does not provide application-level services, such as middleware (for example, message buses), data-processing frameworks (for example, Spark), databases (for example, MySQL), caches, nor cluster storage systems (for example, Ceph) as built-in services. Such components can run on Kubernetes, and/or can be accessed by applications running on Kubernetes through portable mechanisms, such as the Open Service Broker.
* Does not dictate logging, monitoring, or alerting solutions. It provides some integrations as proof of concept, and mechanisms to collect and export metrics.
* Does not provide nor mandate a configuration language/system (for example, Jsonnet). It provides a declarative API that may be targeted by arbitrary forms of declarative specifications.
* Does not provide nor adopt any comprehensive machine configuration, maintenance, management, or self-healing systems.
* Additionally, Kubernetes is not a mere orchestration system. In fact, it eliminates the need for orchestration. The technical definition of orchestration is execution of a defined workflow: first do A, then B, then C. In contrast, Kubernetes comprises a set of independent, composable control processes that continuously drive the current state towards the provided desired state. It shouldn't matter how you get from A to C. Centralized control is also not required. This results in a system that is easier to use and more powerful, robust, resilient, and extensible.

What's next

* Take a look at the Kubernetes Components
* Take a look at the Kubernetes API
* Take a look at the Cluster Architecture
* Ready to get started?

1 - Objects In Kubernetes

Kubernetes objects are persistent entities in the Kubernetes system. Kubernetes uses these entities to represent the state of your cluster.
Learn about the Kubernetes object model and how to work with these objects. This page explains how Kubernetes objects are represented in the Kubernetes API, and how you can express them in .yaml format.

Understanding Kubernetes objects

Kubernetes objects are persistent entities in the Kubernetes system. Kubernetes uses these entities to represent the state of your cluster. Specifically, they can describe:

* What containerized applications are running (and on which nodes)
* The resources available to those applications
* The policies around how those applications behave, such as restart policies, upgrades, and fault-tolerance

A Kubernetes object is a "record of intent": once you create the object, the Kubernetes system will constantly work to ensure that object exists. By creating an object, you're effectively telling the Kubernetes system what you want your cluster's workload to look like; this is your cluster's desired state.

To work with Kubernetes objects, whether to create, modify, or delete them, you'll need to use the Kubernetes API. When you use the kubectl command-line interface, for example, the CLI makes the necessary Kubernetes API calls for you. You can also use the Kubernetes API directly in your own programs using one of the client libraries.

Object spec and status

Almost every Kubernetes object includes two nested object fields that govern the object's configuration: the object spec and the object status. For objects that have a spec, you have to set this when you create the object, providing a description of the characteristics you want the resource to have: its desired state.

The status describes the current state of the object, supplied and updated by the Kubernetes system and its components. The Kubernetes control plane continually and actively manages every object's actual state to match the desired state you supplied.

For example: in Kubernetes, a Deployment is an object that can represent an application running on your cluster.
When you create the Deployment, you might set the Deployment spec to specify that you want three replicas of the application to be running. The Kubernetes system reads the Deployment spec and starts three instances of your desired application, updating the status to match your spec. If any of those instances should fail (a status change), the Kubernetes system responds to the difference between spec and status by making a correction, in this case, starting a replacement instance.

For more information on the object spec, status, and metadata, see the Kubernetes API Conventions.

Describing a Kubernetes object

When you create an object in Kubernetes, you must provide the object spec that describes its desired state, as well as some basic information about the object (such as a name). When you use the Kubernetes API to create the object (either directly or via kubectl), that API request must include that information as JSON in the request body. Most often, you provide the information to kubectl in a file known as a manifest. By convention, manifests are YAML (you could also use JSON format). Tools such as kubectl convert the information from a manifest into JSON or another supported serialization format when making the API request over HTTP.

Here's an example manifest that shows the required fields and object spec for a Kubernetes Deployment:

application/deployment.yaml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
```

One way to create a Deployment using a manifest file like the one above is to use the kubectl apply command in the kubectl command-line interface, passing the .yaml file as an argument. Here's an example:

```shell
kubectl apply -f https://k8s.io/examples/application/deployment.yaml
```

The output is similar to this:

```
deployment.apps/nginx-deployment created
```

Required fields

In the manifest (YAML or JSON file) for the Kubernetes object you want to create, you'll need to set values for the following fields:

* apiVersion - Which version of the Kubernetes API you're using to create this object
* kind - What kind of object you want to create
* metadata - Data that helps uniquely identify the object, including a name string, UID, and optional namespace
* spec - What state you desire for the object

The precise format of the object spec is different for every Kubernetes object, and contains nested fields specific to that object. The Kubernetes API Reference can help you find the spec format for all of the objects you can create using Kubernetes.

For example, see the spec field for the Pod API reference. For each Pod, the .spec field specifies the pod and its desired state (such as the container image name for each container within that pod). Another example of an object specification is the spec field for the StatefulSet API. For StatefulSet, the .spec field specifies the StatefulSet and its desired state. Within the .spec of a StatefulSet is a template for Pod objects. That template describes Pods that the StatefulSet controller will create in order to satisfy the StatefulSet specification. Different kinds of object can also have different .status; again, the API reference pages detail the structure of that .status field, and its content for each different type of object.

Note: See Configuration Best Practices for additional information on writing YAML configuration files.

Server side field validation

Starting with Kubernetes v1.25, the API server offers server side field validation that detects unrecognized or duplicate fields in an object. It provides all the functionality of kubectl --validate on the server side.
The kubectl tool uses the --validate flag to set the level of field validation. It accepts the values ignore, warn, and strict while also accepting the values true (equivalent to strict) and false (equivalent to ignore). The default validation setting for kubectl is --validate=true.

* Strict: Strict field validation; errors on validation failure.
* Warn: Field validation is performed, but errors are exposed as warnings rather than failing the request.
* Ignore: No server side field validation is performed.

When kubectl cannot connect to an API server that supports field validation it will fall back to using client-side validation. Kubernetes 1.27 and later versions always offer field validation; older Kubernetes releases might not. If your cluster is older than v1.27, check the documentation for your version of Kubernetes.

What's next

If you're new to Kubernetes, read more about the following:

* Pods which are the most important basic Kubernetes objects.
* Deployment objects.
* Controllers in Kubernetes.
* kubectl and kubectl commands.

Kubernetes Object Management explains how to use kubectl to manage objects. You might need to install kubectl if you don't already have it available.

To learn about the Kubernetes API in general, visit:

* Kubernetes API overview

To learn about objects in Kubernetes in more depth, read other pages in this section:

1.1 - Kubernetes Object Management

The kubectl command-line tool supports several different ways to create and manage Kubernetes objects. This document provides an overview of the different approaches. Read the Kubectl book for details of managing objects by Kubectl.

Management techniques

Warning: A Kubernetes object should be managed using only one technique. Mixing and matching techniques for the same object results in undefined behavior.

| Management technique | Operates on | Recommended environment | Supported writers | Learning curve |
|---|---|---|---|---|
| Imperative commands | Live objects | Development projects | 1+ | Lowest |
| Imperative object configuration | Individual files | Production projects | 1 | Moderate |
| Declarative object configuration | Directories of files | Production projects | 1+ | Highest |

Imperative commands

When using imperative commands, a user operates directly on live objects in a cluster. The user provides operations to the kubectl command as arguments or flags. This is the recommended way to get started or to run a one-off task in a cluster. Because this technique operates directly on live objects, it provides no history of previous configurations.

Examples

Run an instance of the nginx container by creating a Deployment object:

```shell
kubectl create deployment nginx --image nginx
```

Trade-offs

Advantages compared to object configuration:

* Commands are expressed as a single action word.
* Commands require only a single step to make changes to the cluster.

Disadvantages compared to object configuration:

* Commands do not integrate with change review processes.
* Commands do not provide an audit trail associated with changes.
* Commands do not provide a source of records except for what is live.
* Commands do not provide a template for creating new objects.

Imperative object configuration

In imperative object configuration, the kubectl command specifies the operation (create, replace, etc.), optional flags and at least one file name. The file specified must contain a full definition of the object in YAML or JSON format. See the API reference for more details on object definitions.

Warning: The imperative replace command replaces the existing spec with the newly provided one, dropping all changes to the object missing from the configuration file. This approach should not be used with resource types whose specs are updated independently of the configuration file. Services of type LoadBalancer, for example, have their externalIPs field updated independently from the configuration by the cluster.

Examples

Create the objects defined in a configuration file:

```shell
kubectl create -f nginx.yaml
```

Delete the objects defined in two configuration files:

```shell
kubectl delete -f nginx.yaml -f redis.yaml
```

Update the objects defined in a configuration file by overwriting the live configuration:

```shell
kubectl replace -f nginx.yaml
```

Trade-offs

Advantages compared to imperative commands:

* Object configuration can be stored in a source control system such as Git.
* Object configuration can integrate with processes such as reviewing changes before push and audit trails.
* Object configuration provides a template for creating new objects.

Disadvantages compared to imperative commands:

* Object configuration requires basic understanding of the object schema.
* Object configuration requires the additional step of writing a YAML file.

Advantages compared to declarative object configuration:

* Imperative object configuration behavior is simpler and easier to understand.
* As of Kubernetes version 1.5, imperative object configuration is more mature.

Disadvantages compared to declarative object configuration:

* Imperative object configuration works best on files, not directories.
* Updates to live objects must be reflected in configuration files, or they will be lost during the next replacement.

Declarative object configuration

When using declarative object configuration, a user operates on object configuration files stored locally, however the user does not define the operations to be taken on the files. Create, update, and delete operations are automatically detected per-object by kubectl. This enables working on directories, where different operations might be needed for different objects.

Note: Declarative object configuration retains changes made by other writers, even if the changes are not merged back to the object configuration file. This is possible by using the patch API operation to write only observed differences, instead of using the replace API operation to replace the entire object configuration.

Examples

Process all object configuration files in the configs directory, and create or patch the live objects.
You can first run kubectl diff to see what changes are going to be made, and then apply:

```shell
kubectl diff -f configs/
kubectl apply -f configs/
```

Recursively process directories:

```shell
kubectl diff -R -f configs/
kubectl apply -R -f configs/
```

Trade-offs

Advantages compared to imperative object configuration:

* Changes made directly to live objects are retained, even if they are not merged back into the configuration files.
* Declarative object configuration has better support for operating on directories and automatically detecting operation types (create, patch, delete) per-object.

Disadvantages compared to imperative object configuration:

* Declarative object configuration is harder to debug and understand results when they are unexpected.
* Partial updates using diffs create complex merge and patch operations.

What's next

* Managing Kubernetes Objects Using Imperative Commands
* Imperative Management of Kubernetes Objects Using Configuration Files
* Declarative Management of Kubernetes Objects Using Configuration Files
* Declarative Management of Kubernetes Objects Using Kustomize
* Kubectl Command Reference
* Kubectl Book
* Kubernetes API Reference

1.2 - Object Names and IDs

Each object in your cluster has a Name that is unique for that type of resource. Every Kubernetes object also has a UID that is unique across your whole cluster.

For example, you can only have one Pod named myapp-1234 within the same namespace, but you can have one Pod and one Deployment that are each named myapp-1234.

For non-unique user-provided attributes, Kubernetes provides labels and annotations.

Names

A client-provided string that refers to an object in a resource URL, such as /api/v1/pods/some-name.

Only one object of a given kind can have a given name at a time. However, if you delete the object, you can make a new object with the same name.

Names must be unique across all API versions of the same resource. API resources are distinguished by their API group, resource type, namespace (for namespaced resources), and name. In other words, API version is irrelevant in this context.
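The name constraints described in the next section can be sketched as a regular-expression check. The following Python sketch is illustrative only; the function name is hypothetical and not part of any Kubernetes client library, and the pattern is an assumption derived from the RFC 1123 rules below.

```python
import re

# One RFC 1123 DNS label: lowercase alphanumerics, '-' allowed internally,
# must start and end with an alphanumeric character.
DNS_LABEL = r"[a-z0-9]([-a-z0-9]*[a-z0-9])?"
# A DNS subdomain is one or more labels joined by dots.
DNS_SUBDOMAIN = re.compile(rf"^{DNS_LABEL}(\.{DNS_LABEL})*$")

def is_valid_dns_subdomain(name: str) -> bool:
    """True if name satisfies the RFC 1123 DNS-subdomain form, max 253 chars."""
    return len(name) <= 253 and DNS_SUBDOMAIN.fullmatch(name) is not None

print(is_valid_dns_subdomain("myapp-1234"))  # True
print(is_valid_dns_subdomain("My_App"))      # False: uppercase and underscore
```

Note this checks only the most common (DNS subdomain) constraint; individual labels within a name would additionally be capped at 63 characters under the RFC 1123 label rule.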
Note: In cases when objects represent a physical entity, like a Node representing a physical host, when the host is re-created under the same name without deleting and re-creating the Node, Kubernetes treats the new host as the old one, which may lead to inconsistencies.

Below are four types of commonly used name constraints for resources.

DNS Subdomain Names

Most resource types require a name that can be used as a DNS subdomain name as defined in RFC 1123. This means the name must:

* contain no more than 253 characters
* contain only lowercase alphanumeric characters, '-' or '.'
* start with an alphanumeric character
* end with an alphanumeric character

RFC 1123 Label Names

Some resource types require their names to follow the DNS label standard as defined in RFC 1123. This means the name must:

* contain at most 63 characters
* contain only lowercase alphanumeric characters or '-'
* start with an alphanumeric character
* end with an alphanumeric character

RFC 1035 Label Names

Some resource types require their names to follow the DNS label standard as defined in RFC 1035. This means the name must:

* contain at most 63 characters
* contain only lowercase alphanumeric characters or '-'
* start with an alphabetic character
* end with an alphanumeric character

Path Segment Names

Some resource types require their names to be able to be safely encoded as a path segment. In other words, the name may not be "." or ".." and the name may not contain "/" or "%".

Here's an example manifest for a Pod named nginx-demo:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
```

Note: Some resource types have additional restrictions on their names.

UIDs

A Kubernetes systems-generated string to uniquely identify objects. Every object created over the whole lifetime of a Kubernetes cluster has a distinct UID. It is intended to distinguish between historical occurrences of similar entities.

Kubernetes UIDs are universally unique identifiers (also known as UUIDs).
UUIDs are standardized as ISO/IEC 9834-8 and as ITU-T X.667.

What's next

* Read about labels and annotations in Kubernetes.
* See the Identifiers and Names in Kubernetes design document.

1.3 - Labels and Selectors

Labels are key/value pairs that are attached to objects such as Pods. Labels are intended to be used to specify identifying attributes of objects that are meaningful and relevant to users, but do not directly imply semantics to the core system. Labels can be used to organize and to select subsets of objects. Labels can be attached to objects at creation time and subsequently added and modified at any time. Each object can have a set of key/value labels defined. Each key must be unique for a given object.

```json
"metadata": {
  "labels": {
    "key1": "value1",
    "key2": "value2"
  }
}
```

Labels allow for efficient queries and watches and are ideal for use in UIs and CLIs. Non-identifying information should be recorded using annotations.

Motivation

Labels enable users to map their own organizational structures onto system objects in a loosely coupled fashion, without requiring clients to store these mappings.

Service deployments and batch processing pipelines are often multi-dimensional entities (e.g., multiple partitions or deployments, multiple release tracks, multiple tiers, multiple micro-services per tier). Management often requires cross-cutting operations, which breaks encapsulation of strictly hierarchical representations, especially rigid hierarchies determined by the infrastructure rather than by users.

Example labels:

* "tier" : "frontend", "tier" : "backend", "tier" : "cache"

These are examples of commonly used labels; you are free to develop your own conventions. Keep in mind that label key must be unique for a given object.

Syntax and character set

Labels are key/value pairs. Valid label keys have two segments: an optional prefix and name, separated by a slash (/).
The name segment is required ‘and must be 63 characters or less, beginning and ending with an alphanumeric character { (2-z0-88-2) ) with dashes (- ), underscores ( _ ), dots (.), and alphanumerics between, The prefix is optional. I specified, the prefix must be a DNS subdomain: a series of DNS labels separated by dots (.), not longer than 253 characters in total, followed by a slash ( /)Ifthe prefix is omittea, the label Key is presumed to be private co the user. ‘Automated system components (eg. kube-seneduier , kube-cont manager , kubo-apiserver , kubectl , oF other third-party automation) which add labels to end-user objects must specify a prefix. The kubernetes.to/ and kas.40/ prefixes are reserved for Kubernetes, core components, Valid label value: ‘+ must be 63 characters or less (can be empty), + unless emply, must begin and end with an alphanumeric character ((9-20-98-71) + could contain dashes ( - ), underscores (_), dots. },and alphanumerics between. For example, here's a manifest for a Pod that has two labels environeent: netadata: rane: Label-dero app: eginx spec containers = nate: nginx nage: nginx:t. 6.2 ports: = containerPort: 80 Label selectors Unlike names and UIDs, labels do not provide uniqueness. In general, we ‘expect many abjects to carry the same label(). Via a label selector, the client/user can identify a set of objects, The label selector is the core grouping primitive in Kubernetes. ‘The API currently supports two types of selectors: equolity-based and set based. A label selector can be made of multiple requirements which are ‘comma-separated, Inthe case of muliple requirements, all must be satisfied so the comma separator acts as a logical AND (aa ) operator, ‘The semantics of empty or non-specified selectors are dependent on the context, and API types that use selectors should document the validity and meaning of them, Note: For some AP! 
types, such as ReplicaSets, the label selectors of two instances must not overlap within a namespace, or the controller can see that as conflicting instructions and fall to determine how many replicas should be present. Caution: For both equality-based and set-based conditions there Is no logical OR (|) operator. Ensure your filter statements are structured accordingly,Equality-based requirement Equallty-or inequality-based requirements allow fikering by label keys and values. Matching objects must satisfy all of the specified label constraints, ‘though they may have additional labels as well. Three kinds of operators, are admitted =, ==, 1= The first two represent equality (and are synonyms), while the latter represents inequality, For example environment = production ‘er I frontend The former selects all resources with key equal to environment and value ‘equal to preauction . The latter selects all resources with key equal 10 tier and value distinct from Frontera , and all resources with no labels with the ‘tier key. One could filter for resources in production excluding frontend using the comma operator: environent=preduction,tier!=frontend ‘One usage scenario for equality-based label requirement is for Pods to specify node selection criteria, For example, the sample Pod below selects ‘nodes with the label" accelerator-nvidia-tesia-p100 ". apiversion: vi kind: Pot spec = mane: cuda-test Anage: “registry KBs. io/cuda-vector-add:v0.1 Latte: rvidia.con/epu nodeselector accelerator: avidia-tesia-p100 Set-based requirement Set-based label requirements allow fikering keys according to a set of values. Three kinds of operators are supported: in, notin and exists (only the key identifien. For example: enviroment in (production, qa) ter notin (Frontend, backend) partition partition + The first example selects all resources with key equal to. 
environment land value equal to production or aa + The second example selects all resources with key equal to tier and values other than frontend and backend , and all resources with no labels with the tier key. + The third example selects all resources including a label with key partition ;no values are checked + The fourth example selects all resources without a label with key partition ;no values are checked ‘Similarly the comma separator acts as an AND operator. So filtering resources with a partition key (no matter the value) and with environment different than ga can be achieved usingpartition ervinensent notin (qa) . The set-based label selector is a general form of equality since ervirennent-preduction Is equivalent fo environment {in (production); similarly for t= and_ notin Set-based requirements can be mixed with equality-bosed requirements. For example: partition in (cu WPA, customer) ,environnent =qa API LIST and WATCH filtering LIST and WATCH operations may specify abel selectors to filter the sets of objects returned using a query parameter. Both requirements are Permitted (presented here as they would appear in a URL query string *# equalty-based requirements: + abel Selector-environnentX0prodct on, tierk30frontend + setbosed requirements: ? 
labelSelector=environment+in+%28production%2Cqa%29%2Ctier+in+%28frontend%29

Both label selector styles can be used to list or watch resources via a REST client. For example, targeting apiserver with kubectl and using equality-based requirements one may write:

    kubectl get pods -l environment=production,tier=frontend

or using set-based requirements:

    kubectl get pods -l 'environment in (production),tier in (frontend)'

As already mentioned, set-based requirements are more expressive. For instance, they can implement the OR operator on values:

    kubectl get pods -l 'environment in (production, qa)'

or restrict negative matching via the notin operator:

    kubectl get pods -l 'environment,environment notin (frontend)'

Set references in API objects

Some Kubernetes objects, such as services and replicationcontrollers, also use label selectors to specify sets of other resources, such as pods.

Service and ReplicationController

The set of pods that a service targets is defined with a label selector. Similarly, the population of pods that a replicationcontroller should manage is also defined with a label selector.

Label selectors for both objects are defined in json or yaml files using maps, and only equality-based requirement selectors are supported:

    "selector": {
        "component": "redis"
    }

or

    selector:
      component: redis

This selector (respectively in json or yaml format) is equivalent to component=redis or component in (redis).

Resources that support set-based requirements

Newer resources, such as Job, Deployment, ReplicaSet, and DaemonSet, support set-based requirements as well:

    selector:
      matchLabels:
        component: redis
      matchExpressions:
        - { key: tier, operator: In, values: [cache] }
        - { key: environment, operator: NotIn, values: [dev] }

matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". matchExpressions is a list of pod selector requirements.
Valid operators include In, NotIn, Exists, and DoesNotExist. The values set must be non-empty in the case of In and NotIn. All of the requirements, from both matchLabels and matchExpressions, are ANDed together; they must all be satisfied in order to match.

Selecting sets of nodes

One use case for selecting over labels is to constrain the set of nodes onto which a pod can schedule. See the documentation on node selection for more information.

Using labels effectively

You can apply a single label to any resource, but this is not always the best practice. There are many scenarios where multiple labels should be used to distinguish resource sets from one another.

For instance, different applications would use different values for the app label, but a multi-tier application, such as the guestbook example, would additionally need to distinguish each tier. The frontend could carry the following labels:

    labels:
      app: guestbook
      tier: frontend

while the Redis master and replica would have different tier labels, and perhaps even an additional role label:

    labels:
      app: guestbook
      tier: backend
      role: master

and

    labels:
      app: guestbook
      tier: backend
      role: replica

The labels allow for slicing and dicing the resources along any dimension specified by a label:

    kubectl apply -f examples/guestbook/all-in-one/guestbook-all-in-one.yaml
    kubectl get pods -Lapp -Ltier -Lrole

    NAME                           READY  STATUS   RESTARTS  AGE  APP        TIER      ROLE
    guestbook-fe-4nlpb             1/1    Running  0         1m   guestbook  frontend  <none>
    guestbook-fe-ght6d             1/1    Running  0         1m   guestbook  frontend  <none>
    guestbook-fe-jpy62             1/1    Running  0         1m   guestbook  frontend  <none>
    guestbook-redis-master-5pg3b   1/1    Running  0         1m   guestbook  backend   master
    guestbook-redis-replica-2q2yf  1/1    Running  0         1m   guestbook  backend   replica
    guestbook-redis-replica-qgazl  1/1    Running  0         1m   guestbook  backend   replica
    my-nginx-divi2                 1/1    Running  0         29m  nginx      <none>    <none>
    my-nginx-o0ef1                 1/1    Running  0         29m  nginx      <none>    <none>

    kubectl get pods -lapp=guestbook,role=replica

    NAME                           READY  STATUS   RESTARTS  AGE
    guestbook-redis-replica-2q2yf  1/1    Running  0         3m
    guestbook-redis-replica-qgazl  1/1    Running  0         3m

Updating labels

Sometimes you may want to relabel existing pods and other resources before creating new resources.
This can be done with kubectl label. For example, if you want to label all your NGINX Pods as frontend tier, run:

    kubectl label pods -l app=nginx tier=fe

    pod/my-nginx-2035384211-j5fhi labeled
    pod/my-nginx-2035384211-u2c7e labeled
    pod/my-nginx-2035384211-u3t6x labeled

This first filters all pods with the label "app=nginx", and then labels them with "tier=fe". To see the pods you labeled, run:

    kubectl get pods -l app=nginx -L tier

    NAME                       READY  STATUS   RESTARTS  AGE  TIER
    my-nginx-2035384211-j5fhi  1/1    Running  0         23m  fe
    my-nginx-2035384211-u2c7e  1/1    Running  0         23m  fe
    my-nginx-2035384211-u3t6x  1/1    Running  0         23m  fe

This outputs all app=nginx pods, with an additional label column of the pods' tier (specified with -L or --label-columns).

For more information, please see kubectl label.

What's next

- Learn how to add a label to a node
- Find Well-known labels, Annotations and Taints
- See Recommended labels
- Enforce Pod Security Standards with Namespace Labels
- Read a blog on Writing a Controller for Pod Labels

1.4 - Namespaces

In Kubernetes, namespaces provide a mechanism for isolating groups of resources within a single cluster. Names of resources need to be unique within a namespace, but not across namespaces. Namespace-based scoping is applicable only for namespaced objects (e.g. Deployments, Services, etc.) and not for cluster-wide objects (e.g. StorageClass, Nodes, PersistentVolumes, etc.).

When to Use Multiple Namespaces

Namespaces are intended for use in environments with many users spread across multiple teams, or projects. For clusters with a few to tens of users, you should not need to create or think about namespaces at all. Start using namespaces when you need the features they provide.

Namespaces provide a scope for names. Names of resources need to be unique within a namespace, but not across namespaces. Namespaces cannot be nested inside one another and each Kubernetes resource can only be in one namespace.

Namespaces are a way to divide cluster resources between multiple users (via resource quota).
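Namespaces are themselves ordinary API objects, so they can be created declaratively. A minimal sketch of such a manifest follows; the name team-a and its label are hypothetical examples:

```yaml
# A minimal Namespace manifest; "team-a" is a hypothetical name.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
  labels:
    team: a   # optional, purely illustrative label
```

Applying this manifest (for example with kubectl apply -f) creates the namespace if it does not already exist.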
It is not necessary to use multiple namespaces to separate slightly different resources, such as different versions of the same software: use labels to distinguish resources within the same namespace.

Note: For a production cluster, consider not using the default namespace. Instead, make other namespaces and use those.

Initial namespaces

Kubernetes starts with four initial namespaces:

default
    Kubernetes includes this namespace so that you can start using your new cluster without first creating a namespace.
kube-node-lease
    This namespace holds Lease objects associated with each node. Node leases allow the kubelet to send heartbeats so that the control plane can detect node failure.
kube-public
    This namespace is readable by all clients (including those not authenticated). This namespace is mostly reserved for cluster usage, in case that some resources should be visible and readable publicly throughout the whole cluster. The public aspect of this namespace is only a convention, not a requirement.
kube-system
    The namespace for objects created by the Kubernetes system.

Working with Namespaces

Creation and deletion of namespaces are described in the Admin Guide documentation for namespaces.

Note: Avoid creating namespaces with the prefix kube-, since it is reserved for Kubernetes system namespaces.

Viewing namespaces

You can list the current namespaces in a cluster using:

    kubectl get namespace

    NAME              STATUS   AGE
    default           Active   1d
    kube-node-lease   Active   1d
    kube-public       Active   1d
    kube-system       Active   1d

Setting the namespace for a request

To set the namespace for a current request, use the --namespace flag. For example:

    kubectl run nginx --image=nginx --namespace=<insert-namespace-name-here>
    kubectl get pods --namespace=<insert-namespace-name-here>

Setting the namespace preference

You can permanently save the namespace for all subsequent kubectl commands in that context:

    kubectl config set-context --current --namespace=<insert-namespace-name-here>
    # Validate it
    kubectl config view --minify | grep namespace:

Namespaces and DNS

When you create a Service, it creates a corresponding DNS entry. This entry is of the form
<service-name>.<namespace-name>.svc.cluster.local, which means that if a container only uses <service-name>
, it will resolve to the service which is local to a namespace. This is useful for using the same configuration across multiple namespaces such as Development, Staging and Production. If you want to reach across namespaces, you need to use the fully qualified domain name (FQDN). As a result, all namespace names must be valid RFC 1123 DNS labels.

Warning: By creating namespaces with the same name as public top-level domains, Services in these namespaces can have short DNS names that overlap with public DNS records. Workloads from any namespace performing a DNS lookup without a trailing dot will be redirected to those services, taking precedence over public DNS. To mitigate this, limit privileges for creating namespaces to trusted users. If required, you could additionally configure third-party security controls, such as admission webhooks, to block creating any namespace with the name of public TLDs.

Not all objects are in a namespace

Most Kubernetes resources (e.g. pods, services, replication controllers, and others) are in some namespaces. However namespace resources are not themselves in a namespace. And low-level resources, such as nodes and persistentVolumes, are not in any namespace.

To see which Kubernetes resources are and aren't in a namespace:

    # In a namespace
    kubectl api-resources --namespaced=true

    # Not in a namespace
    kubectl api-resources --namespaced=false

Automatic labelling

FEATURE STATE: Kubernetes 1.22 [stable]

The Kubernetes control plane sets an immutable label kubernetes.io/metadata.name on all namespaces. The value of the label is the namespace name.

What's next

- Learn more about creating a new namespace.
- Learn more about deleting a namespace.

1.5 - Annotations

You can use Kubernetes annotations to attach arbitrary non-identifying metadata to objects.
Clients such as tools and libraries can retrieve this metadata.

Attaching metadata to objects

You can use either labels or annotations to attach metadata to Kubernetes objects. Labels can be used to select objects and to find collections of objects that satisfy certain conditions. In contrast, annotations are not used to identify and select objects. The metadata in an annotation can be small or large, structured or unstructured, and can include characters not permitted by labels.

Annotations, like labels, are key/value maps:

    "metadata": {
      "annotations": {
        "key1": "value1",
        "key2": "value2"
      }
    }

Note: The keys and the values in the map must be strings. In other words, you cannot use numeric, boolean, list or other types for either the keys or the values.

Here are some examples of information that could be recorded in annotations:

- Fields managed by a declarative configuration layer. Attaching these fields as annotations distinguishes them from default values set by clients or servers, and from auto-generated fields and fields set by auto-sizing or auto-scaling systems.
- Build, release, or image information like timestamps, release IDs, git branch, PR numbers, image hashes, and registry address.
- Pointers to logging, monitoring, analytics, or audit repositories.
- Client library or tool information that can be used for debugging purposes: for example, name, version, and build information.
- User or tool/system provenance information, such as URLs of related objects from other ecosystem components.
- Lightweight rollout tool metadata: for example, config or checkpoints.
- Phone or pager numbers of persons responsible, or directory entries that specify where that information can be found, such as a team web site.
- Directives from the end-user to the implementations to modify behavior or engage non-standard features.
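To make the kinds of metadata listed above concrete, here is a hedged sketch of a Pod carrying a few such annotations. Every key and value below is an invented example under a made-up example.com/ prefix, not a Kubernetes convention:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: build-info-demo                                           # hypothetical name
  annotations:
    example.com/git-branch: "main"                                # build/release info
    example.com/release-id: "2024-01-15.1"                        # hypothetical release ID
    example.com/logs: "https://logs.example.com/build-info-demo"  # pointer to a log repository
    example.com/oncall: "https://example.com/teams/web"           # where to find the responsible team
spec:
  containers:
    - name: app
      image: nginx:1.14.2
```

None of these keys affect scheduling or selection; they exist only for tools and humans that read the object.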
Instead of using annotations, you could store this type of information in an external database or directory, but that would make it much harder to produce shared client libraries and tools for deployment, management, introspection, and the like.

Syntax and character set

Annotations are key/value pairs. Valid annotation keys have two segments: an optional prefix and name, separated by a slash (/). The name segment is required and must be 63 characters or less, beginning and ending with an alphanumeric character ([a-z0-9A-Z]) with dashes (-), underscores (_), dots (.), and alphanumerics between. The prefix is optional. If specified, the prefix must be a DNS subdomain: a series of DNS labels separated by dots (.), not longer than 253 characters in total, followed by a slash (/).

If the prefix is omitted, the annotation key is presumed to be private to the user. Automated system components (e.g. kube-scheduler, kube-controller-manager, kube-apiserver, kubectl, or other third-party automation) which add annotations to end-user objects must specify a prefix.

The kubernetes.io/ and k8s.io/ prefixes are reserved for Kubernetes core components.

For example, here's a manifest for a Pod that has the annotation imageregistry: https://hub.docker.com/ :

    apiVersion: v1
    kind: Pod
    metadata:
      name: annotations-demo
      annotations:
        imageregistry: "https://hub.docker.com/"
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80

What's next

- Learn more about Labels and Selectors.
- Find Well-known labels, Annotations and Taints.

1.6 - Field Selectors

Field selectors let you select Kubernetes objects based on the value of one or more resource fields. Here are some examples of field selector queries:

    metadata.name=my-service
    metadata.namespace!=default
    status.phase=Pending

This kubectl command selects all Pods for which the value of the status.phase field is
Running:

    kubectl get pods --field-selector status.phase=Running

Note: Field selectors are essentially resource filters. By default, no selectors/filters are applied, meaning that all resources of the specified type are selected. This makes the kubectl queries kubectl get pods and kubectl get pods --field-selector "" equivalent.

Supported fields

Supported field selectors vary by Kubernetes resource type. All resource types support the metadata.name and metadata.namespace fields. Using unsupported field selectors produces an error. For example:

    kubectl get ingress --field-selector foo.bar=baz

    Error from server (BadRequest): Unable to find "ingresses" that match label selector "", field selector "foo.bar=baz"

Supported operators

You can use the =, ==, and != operators with field selectors (= and == mean the same thing). This kubectl command, for example, selects all Kubernetes Services that aren't in the default namespace:

    kubectl get services --all-namespaces --field-selector metadata.namespace!=default

Note: Set-based operators (in, notin, exists) are not supported for field selectors.

Chained selectors

As with label and other selectors, field selectors can be chained together as a comma-separated list. This kubectl command selects all Pods for which the status.phase does not equal Running and the spec.restartPolicy field equals Always:

    kubectl get pods --field-selector=status.phase!=Running,spec.restartPolicy=Always

Multiple resource types

You can use field selectors across multiple resource types. This kubectl command selects all Statefulsets and Services that are not in the default namespace:

    kubectl get statefulsets,services --all-namespaces --field-selector metadata.namespace!=default

1.7 - Finalizers

Finalizers are namespaced keys that tell Kubernetes to wait until specific conditions are met before it fully deletes resources marked for deletion. Finalizers alert controllers to clean up resources the deleted object owned.
When you tell Kubernetes to delete an object that has finalizers specified for it, the Kubernetes API marks the object for deletion by populating .metadata.deletionTimestamp, and returns a 202 status code (HTTP "Accepted"). The target object remains in a terminating state while the control plane, or other components, take the actions defined by the finalizers. After these actions are complete, the controller removes the relevant finalizers from the target object. When the metadata.finalizers field is empty, Kubernetes considers the deletion complete and deletes the object.

You can use finalizers to control garbage collection of resources by alerting controllers to perform specific cleanup tasks before deleting the target resource. For example, you can define a finalizer to clean up related resources or infrastructure before the controller deletes the target object.

Finalizers don't usually specify the code to execute. Instead, they are typically lists of keys on a specific resource, similar to annotations. Kubernetes specifies some finalizers automatically, but you can also specify your own.

How finalizers work

When you create a resource using a manifest file, you can specify finalizers in the metadata.finalizers field. When you attempt to delete the resource, the API server handling the delete request notices the values in the finalizers field and does the following:

- Modifies the object to add a metadata.deletionTimestamp field with the time you started the deletion.
- Prevents the object from being removed until all items are removed from its metadata.finalizers field.
- Returns a 202 status code (HTTP "Accepted").

The controller managing that finalizer notices the update to the object setting the metadata.deletionTimestamp, indicating deletion of the object has been requested. The controller then attempts to satisfy the requirements of the finalizers specified for that resource.
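As a sketch of declaring a finalizer in a manifest, the snippet below attaches a custom finalizer key to a ConfigMap. The key example.com/cleanup-db is hypothetical; a controller you write would be responsible for doing the cleanup and then removing the key:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config                 # hypothetical name
  finalizers:
    - example.com/cleanup-db       # hypothetical custom finalizer key
data:
  db: "records"
```

Deleting this object only sets metadata.deletionTimestamp; the object persists until some controller removes example.com/cleanup-db from the finalizers list.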
Each time a finalizer's condition is satisfied, the controller removes that key from the resource's finalizers field. When the finalizers field is emptied, an object with a deletionTimestamp field set is automatically deleted. You can also use finalizers to prevent deletion of unmanaged resources.

A common example of a finalizer is kubernetes.io/pv-protection, which prevents accidental deletion of PersistentVolume objects. When a PersistentVolume object is in use by a Pod, Kubernetes adds the pv-protection finalizer. If you try to delete the PersistentVolume, it enters a Terminating status, but the controller can't delete it because the finalizer exists. When the Pod stops using the PersistentVolume, Kubernetes clears the pv-protection finalizer, and the controller deletes the volume.

Note:
- When you DELETE an object, Kubernetes adds the deletion timestamp for that object and then immediately starts to restrict changes to the .metadata.finalizers field for the object that is now pending deletion. You can remove existing finalizers (deleting an entry from the finalizers list) but you cannot add a new finalizer. You also cannot modify the deletionTimestamp for an object once it is set.
- After the deletion is requested, you can not resurrect this object. The only way is to delete it and make a new similar object.

Owner references, labels, and finalizers

Like labels, owner references describe the relationships between objects in Kubernetes, but are used for a different purpose. When a controller manages objects like Pods, it uses labels to track changes to groups of related objects. For example, when a Job creates one or more Pods, the Job controller applies labels to those pods and tracks changes to any Pods in the cluster with the same label.

The Job controller also adds owner references to those Pods, pointing at the Job that created the Pods. If you delete the Job while these Pods are running, Kubernetes uses the owner references (not labels) to determine which Pods in the cluster need cleanup.

Kubernetes also processes finalizers when it identifies owner references on a resource targeted for deletion.

In some situations, finalizers can block the deletion of dependent objects, which can cause the targeted owner object to remain for longer than expected without being fully deleted. In these situations, you should check finalizers and owner references on the target owner and dependent objects to troubleshoot the cause.

Note: In cases where objects are stuck in a deleting state, avoid manually removing finalizers to allow deletion to continue. Finalizers are usually added to resources for a reason, so forcefully removing them can lead to issues in your cluster. This should only be done when the purpose of the finalizer is understood and is accomplished in another way (for example, manually cleaning up some dependent object).

What's next

- Read Using Finalizers to Control Deletion on the Kubernetes blog.

1.8 - Owners and Dependents

In Kubernetes, some objects are owners of other objects. For example, a ReplicaSet is the owner of a set of Pods. These owned objects are dependents of their owner.

Ownership is different from the labels and selectors mechanism that some resources also use. For example, consider a Service that creates EndpointSlice objects. The Service uses labels to allow the control plane to determine which EndpointSlice objects are used for that Service. In addition to the labels, each EndpointSlice that is managed on behalf of a Service has an owner reference. Owner references help different parts of Kubernetes avoid interfering with objects they don't control.

Owner references in object specifications

Dependent objects have a metadata.ownerReferences field that references their owner object.
A valid owner reference consists of the object name and a UID within the same namespace as the dependent object. Kubernetes sets the value of this field automatically for objects that are dependents of other objects like ReplicaSets, DaemonSets, Deployments, Jobs and CronJobs, and ReplicationControllers. You can also configure these relationships manually by changing the value of this field. However, you usually don't need to and can allow Kubernetes to automatically manage the relationships.

Dependent objects also have an ownerReferences.blockOwnerDeletion field that takes a boolean value and controls whether specific dependents can block garbage collection from deleting their owner object. Kubernetes automatically sets this field to true if a controller (for example, the Deployment controller) sets the value of the metadata.ownerReferences field. You can also set the value of the blockOwnerDeletion field manually to control which dependents block garbage collection.

A Kubernetes admission controller controls user access to change this field for dependent resources, based on the delete permissions of the owner. This control prevents unauthorized users from delaying owner object deletion.

Note: Cross-namespace owner references are disallowed by design. Namespaced dependents can specify cluster-scoped or namespaced owners. A namespaced owner must exist in the same namespace as the dependent. If it does not, the owner reference is treated as absent, and the dependent is subject to deletion once all owners are verified absent.

Cluster-scoped dependents can only specify cluster-scoped owners. In v1.20+, if a cluster-scoped dependent specifies a namespaced kind as an owner, it is treated as having an unresolvable owner reference, and is not able to be garbage collected.
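For illustration, here is roughly what the metadata of a Pod owned by a ReplicaSet can look like. The names are hypothetical and the UID is a placeholder; in practice Kubernetes populates this field itself:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend-x7k2q                             # hypothetical Pod name
  ownerReferences:
    - apiVersion: apps/v1
      kind: ReplicaSet
      name: frontend                               # hypothetical owning ReplicaSet
      uid: 00000000-0000-0000-0000-000000000000    # placeholder; the real owner's UID goes here
      controller: true
      blockOwnerDeletion: true
```

With blockOwnerDeletion set to true, foreground cascading deletion of the ReplicaSet waits for this Pod to be removed first.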
In v1.20+, if the garbage collector detects an invalid cross-namespace ownerReference, or a cluster-scoped dependent with an ownerReference referencing a namespaced kind, a warning Event with a reason of OwnerRefInvalidNamespace and an involvedObject of the invalid dependent is reported. You can check for that kind of Event by running:

    kubectl get events -A --field-selector=reason=OwnerRefInvalidNamespace

Ownership and finalizers

When you tell Kubernetes to delete a resource, the API server allows the managing controller to process any finalizer rules for the resource. Finalizers prevent accidental deletion of resources your cluster may still need to function correctly. For example, if you try to delete a PersistentVolume that is still in use by a Pod, the deletion does not happen immediately because the PersistentVolume has the kubernetes.io/pv-protection finalizer on it. Instead, the volume remains in the Terminating status until Kubernetes clears the finalizer, which only happens after the PersistentVolume is no longer bound to a Pod.

Kubernetes also adds finalizers to an owner resource when you use either foreground or orphan cascading deletion. In foreground deletion, it adds the foreground finalizer so that the controller must delete dependent resources that also have ownerReferences.blockOwnerDeletion=true before it deletes the owner. If you specify an orphan deletion policy, Kubernetes adds the orphan finalizer so that the controller ignores dependent resources after it deletes the owner object.

What's next

- Learn more about Kubernetes finalizers.
- Learn about garbage collection.
- Read the API reference for object metadata.

1.9 - Recommended Labels

You can visualize and manage Kubernetes objects with more tools than kubectl and the dashboard. A common set of labels allows tools to work interoperably, describing objects in a common manner that all tools can understand.

In addition to supporting tooling, the recommended labels describe applications in a way that can be queried.
The metadata is organized around the concept of an application. Kubernetes is not a platform as a service (PaaS) and doesn't have or enforce a formal notion of an application. Instead, applications are informal and described with metadata. The definition of what an application contains is loose.

Note: These are recommended labels. They make it easier to manage applications but aren't required for any core tooling.

Shared labels and annotations share a common prefix: app.kubernetes.io. Labels without a prefix are private to users. The shared prefix ensures that shared labels do not interfere with custom user labels.

Labels

In order to take full advantage of using these labels, they should be applied on every resource object.

    Key                            Description                                       Example        Type
    app.kubernetes.io/name         The name of the application                       mysql          string
    app.kubernetes.io/instance     A unique name identifying the instance of         mysql-abcxzy   string
                                   an application
    app.kubernetes.io/version      The current version of the application            5.7.21         string
                                   (e.g., a SemVer 1.0, revision hash, etc.)
    app.kubernetes.io/component    The component within the architecture             database       string
    app.kubernetes.io/part-of      The name of a higher level application            wordpress      string
                                   this one is part of
    app.kubernetes.io/managed-by   The tool being used to manage the operation       helm           string
                                   of an application

To illustrate these labels in action, consider the following StatefulSet object:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      labels:
        app.kubernetes.io/name: mysql
        app.kubernetes.io/instance: mysql-abcxzy
        app.kubernetes.io/version: "5.7.21"
        app.kubernetes.io/component: database
        app.kubernetes.io/part-of: wordpress
        app.kubernetes.io/managed-by: helm

Applications And Instances Of Applications

An application can be installed one or more times into a Kubernetes cluster and, in some cases, the same namespace. For example, WordPress can be installed more than once where different websites are different installations of WordPress.
The name of an application and the instance name are recorded separately. For example, WordPress has an app.kubernetes.io/name of wordpress while it has an instance name, represented as app.kubernetes.io/instance with a value of wordpress-abcxzy. This enables the application and instance of the application to be identifiable. Every instance of an application must have a unique name.

Examples

To illustrate different ways to use these labels, the following examples have varying complexity.

A Simple Stateless Service

Consider the case for a simple stateless service deployed using Deployment and Service objects. The following two snippets represent how the labels could be used in their simplest form.

The Deployment is used to oversee the pods running the application itself.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app.kubernetes.io/name: myservice
        app.kubernetes.io/instance: myservice-abcxzy

The Service is used to expose the application.

    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app.kubernetes.io/name: myservice
        app.kubernetes.io/instance: myservice-abcxzy

Web Application With A Database

Consider a slightly more complicated application: a web application (WordPress) using a database (MySQL), installed using Helm. The following snippets illustrate the start of objects used to deploy this application.

The start to the following Deployment is used for WordPress:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app.kubernetes.io/name: wordpress
        app.kubernetes.io/instance: wordpress-abcxzy
        app.kubernetes.io/version: "4.9.4"
        app.kubernetes.io/managed-by: helm
        app.kubernetes.io/component: server
        app.kubernetes.io/part-of: wordpress

The Service is used to expose WordPress:

    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app.kubernetes.io/name: wordpress
        app.kubernetes.io/instance: wordpress-abcxzy
        app.kubernetes.io/version: "4.9.4"
        app.kubernetes.io/managed-by: helm
        app.kubernetes.io/component: server
        app.kubernetes.io/part-of: wordpress

MySQL is exposed as a StatefulSet with metadata for both it and the larger application it belongs to:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      labels:
        app.kubernetes.io/name: mysql
        app.kubernetes.io/instance: mysql-abcxzy
        app.kubernetes.io/version: "5.7.21"
        app.kubernetes.io/managed-by: helm
        app.kubernetes.io/component: database
        app.kubernetes.io/part-of: wordpress

The Service is used to expose MySQL as part of WordPress:

    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app.kubernetes.io/name: mysql
        app.kubernetes.io/instance: mysql-abcxzy
        app.kubernetes.io/version: "5.7.21"
        app.kubernetes.io/managed-by: helm
        app.kubernetes.io/component: database
        app.kubernetes.io/part-of: wordpress

With the MySQL StatefulSet and Service you'll notice information about both MySQL and WordPress, the broader application, are included.

2 - Kubernetes Components

A Kubernetes cluster consists of the components that are a part of the control plane and a set of machines called nodes.

When you deploy Kubernetes, you get a cluster.

A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node. The worker node(s) host the Pods that are the components of the application workload. The control plane manages the worker nodes and the Pods in the cluster. In production environments, the control plane usually runs across multiple computers and a cluster usually runs multiple nodes, providing fault-tolerance and high availability.

This document outlines the various components you need to have for a complete and working Kubernetes cluster.
The components of a Kubernetes cluster

Control Plane Components

The control plane's components make global decisions about the cluster (for example, scheduling), as well as detecting and responding to cluster events (for example, starting up a new pod when a deployment's replicas field is unsatisfied).

Control plane components can be run on any machine in the cluster. However, for simplicity, set up scripts typically start all control plane components on the same machine, and do not run user containers on this machine. See Creating Highly Available clusters with kubeadm for an example control plane setup that runs across multiple machines.

kube-apiserver

The API server is a component of the Kubernetes control plane that exposes the Kubernetes API. The API server is the front end for the Kubernetes control plane.

The main implementation of a Kubernetes API server is kube-apiserver. kube-apiserver is designed to scale horizontally, that is, it scales by deploying more instances. You can run several instances of kube-apiserver and balance traffic between those instances.

etcd

Consistent and highly-available key value store used as Kubernetes' backing store for all cluster data.

If your Kubernetes cluster uses etcd as its backing store, make sure you have a back up plan for the data. You can find in-depth information about etcd in the official documentation.

kube-scheduler

Control plane component that watches for newly created Pods with no assigned node, and selects a node for them to run on.

Factors taken into account for scheduling decisions include: individual and collective resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and deadlines.

kube-controller-manager

Control plane component that runs controller processes.

Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.
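Among the kube-scheduler factors listed above, resource requirements are the ones most commonly specified directly by users. As a minimal sketch, a Pod declares them per container (the name and values here are arbitrary examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scheduling-demo              # hypothetical name
spec:
  containers:
    - name: app
      image: nginx:1.14.2
      resources:
        requests:                    # the scheduler looks for a node with this much free capacity
          cpu: "250m"
          memory: "64Mi"
        limits:                      # enforced at runtime by the kubelet/runtime, not by the scheduler
          cpu: "500m"
          memory: "128Mi"
```

If no node has enough unreserved capacity to satisfy the requests, the Pod stays Pending until one does.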
There are many different types of controllers. Some examples of them are:

- Node controller: Responsible for noticing and responding when nodes go down.
- Job controller: Watches for Job objects that represent one-off tasks, then creates Pods to run those tasks to completion.
- EndpointSlice controller: Populates EndpointSlice objects (to provide a link between Services and Pods).
- ServiceAccount controller: Creates default ServiceAccounts for new namespaces.

The above is not an exhaustive list.

cloud-controller-manager

A Kubernetes control plane component that embeds cloud-specific control logic. The cloud controller manager lets you link your cluster into your cloud provider's API, and separates out the components that interact with that cloud platform from components that only interact with your cluster.

The cloud-controller-manager only runs controllers that are specific to your cloud provider. If you are running Kubernetes on your own premises, or in a learning environment inside your own PC, the cluster does not have a cloud controller manager.

As with the kube-controller-manager, the cloud-controller-manager combines several logically independent control loops into a single binary that you run as a single process. You can scale horizontally (run more than one copy) to improve performance or to help tolerate failures.

The following controllers can have cloud provider dependencies:

- Node controller: For checking the cloud provider to determine if a node has been deleted in the cloud after it stops responding
- Route controller: For setting up routes in the underlying cloud infrastructure
- Service controller: For creating, updating and deleting cloud provider load balancers

Node Components

Node components run on every node, maintaining running pods and providing the Kubernetes runtime environment.

kubelet

An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet doesn't manage containers which were not created by Kubernetes.

kube-proxy

kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.

kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.

kube-proxy uses the operating system packet filtering layer if there is one and it's available. Otherwise, kube-proxy forwards the traffic itself.

Container runtime

A fundamental component that empowers Kubernetes to run containers effectively. It is responsible for managing the execution and lifecycle of containers within the Kubernetes environment.

Kubernetes supports container runtimes such as containerd, CRI-O, and any other implementation of the Kubernetes CRI (Container Runtime Interface).

Addons

Addons use Kubernetes resources (DaemonSet, Deployment, etc) to implement cluster features. Because these are providing cluster-level features, namespaced resources for addons belong within the kube-system namespace.

Selected addons are described below; for an extended list of available addons, please see Addons.

DNS

While the other addons are not strictly required, all Kubernetes clusters should have cluster DNS, as many examples rely on it.

Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, which serves DNS records for Kubernetes services.

Containers started by Kubernetes automatically include this DNS server in their DNS searches.

Web UI (Dashboard)

Dashboard is a general purpose, web-based UI for Kubernetes clusters. It allows users to manage and troubleshoot applications running in the cluster, as well as the cluster itself.

Container Resource Monitoring

Container Resource Monitoring records generic time-series metrics about containers in a central database, and provides a UI
for browsing that data.

Cluster-level Logging

A cluster-level logging mechanism is responsible for saving container logs to a central log store with a search/browsing interface.

Network Plugins

Network plugins are software components that implement the container network interface (CNI) specification. They are responsible for allocating IP addresses to pods and enabling them to communicate with each other within the cluster.

What's next

Learn more about the following:

- Nodes and their communication with the control plane.
- Kubernetes controllers.
- kube-scheduler, which is the default scheduler for Kubernetes.
- etcd's official documentation.
- Several container runtimes in Kubernetes.
- Integrating with cloud providers using cloud-controller-manager.
- kubectl commands.

3 - The Kubernetes API

The Kubernetes API lets you query and manipulate the state of objects in Kubernetes. The core of Kubernetes' control plane is the API server and the HTTP API that it exposes. Users, the different parts of your cluster, and external components all communicate with one another through the API server.

The core of Kubernetes' control plane is the API server. The API server exposes an HTTP API that lets end users, different parts of your cluster, and external components communicate with one another.

The Kubernetes API lets you query and manipulate the state of API objects in Kubernetes (for example: Pods, Namespaces, ConfigMaps, and Events).

Most operations can be performed through the kubectl command-line interface or other command-line tools, such as kubeadm, which in turn use the API. However, you can also access the API directly using REST calls.

Consider using one of the client libraries if you are writing an application using the Kubernetes API.

OpenAPI specification

Complete API details are documented using OpenAPI.

OpenAPI V2

The Kubernetes API server serves an aggregated OpenAPI v2 spec via the /openapi/v2 endpoint.
You can request the response format using request headers as follows:

Header          | Possible values                                             | Notes
Accept-Encoding | gzip                                                        | not supplying this header is also acceptable
Accept          | application/com.github.proto-openapi.spec.v2@v1.0+protobuf | mainly for intra-cluster use
                | application/json                                            | default
                | *                                                           | serves application/json

Kubernetes implements an alternative Protobuf based serialization format that is primarily intended for intra-cluster communication. For more information about this format, see the Kubernetes Protobuf serialization design proposal and the Interface Definition Language (IDL) files for each schema located in the Go packages that define the API objects.

OpenAPI V3

FEATURE STATE: Kubernetes v1.27 [stable]

Kubernetes supports publishing a description of its APIs as OpenAPI v3.

A discovery endpoint /openapi/v3 is provided to see a list of all group/versions available. This endpoint only returns JSON. These group/versions are provided in the following format:

```json
{
    "paths": {
        "api/v1": {
            "serverRelativeURL": "/openapi/v3/api/v1?hash=..."
        },
        "apis/admissionregistration.k8s.io/v1": {
            "serverRelativeURL": "/openapi/v3/apis/admissionregistration.k8s.io/v1?hash=..."
        }
    }
}
```

The relative URLs point to immutable OpenAPI descriptions, in order to improve client-side caching. The proper HTTP caching headers are also set by the API server for that purpose (Expires to 1 year in the future, and Cache-Control: immutable). When an obsolete URL is used, the API server returns a redirect to the newest URL.

The Kubernetes API server publishes an OpenAPI v3 spec per Kubernetes group version at the /openapi/v3/apis/<group>/<version>
endpoint. Refer to the table below for accepted request headers.

Header          | Possible values                                             | Notes
Accept-Encoding | gzip                                                        | not supplying this header is also acceptable
Accept          | application/com.github.proto-openapi.spec.v3@v1.0+protobuf | mainly for intra-cluster use
                | application/json                                            | default

A golang implementation to fetch the OpenAPI V3 is provided in the package k8s.io/client-go/openapi3.

Persistence

Kubernetes stores the serialized state of objects by writing them into etcd.

API Discovery

A list of all group versions supported by a cluster is published at the /api and /apis endpoints. Each group version also advertises the list of resources supported via /apis/<group>/
/eversion> (for example’ Japis/roac authorization, kés.i0/vialphat ). These endpoints are used by kubect! to fetch thelist of resources supported by a clusterAggregated Discovery FEATURE STATE: coho Kubernetes offers beta support for aggregated discovery, publishing all resources supported by a cluster through two endpoints ( api and. /apis ) compared to one for every group version. Requesting this endpoint drastically reduces the number of requests sent to fetch the discovery for the average Kubernetes cluster. This may be accessed by requesting the respective endpoints with an Accept header indicating the aggregated discovery resource: Accept: ppl ication/ son; v=-vabetasg-apldiscovery. KBs. 10; as-APIGroupDLscaveryL ist ‘The endpoint also supports ETag and protobuf encoding, API groups and versioning To make it easier to eliminate fields or restructure resource representations, Kubernetes supports multiple API versions, each at a different API path, such as. /api/va. or {Japis/rbac.authorszation. ktsS0/vIalpha2 Versioning is done at the API level rather than at the resource or field level to ensure that the API presents a clear, consistent view of system resources and behavior, and to enable controlling access to end-ofife and/or experimental APIs. ‘To make it easier to evolve and to extend its API, Kubernetes implements ‘APL groups that can be enabled or disabled. API resources are distinguished by their API group, resource type, namespace (for namespaced resources), and name, The API server handles the conversion between API versions transparently all the different versions are actually representations of the same persisted data. The API server may serve the same underlying data through multiple API versions. For example, suppose there are two API versions, v1 and vibetaa , for the same resource. 
If you originally created an object using the v1beta1 version of its API, you can later read, update, or delete that object using either the v1beta1 or the v1 API version, until the v1beta1 version is deprecated and removed. At that point you can continue accessing and modifying the object using the v1 API.

API changes

Any system that's successful needs to grow and change as new use cases emerge or existing ones change. Therefore, the Kubernetes API is designed to continuously change and grow. The Kubernetes project aims to not break compatibility with existing clients, and to maintain that compatibility for a length of time so that other projects have an opportunity to adapt.

In general, new API resources and new resource fields can be added often and frequently. Elimination of resources or fields requires following the API deprecation policy.

Kubernetes makes a strong commitment to maintain compatibility for official Kubernetes APIs once they reach general availability (GA), typically at API version v1. Additionally, Kubernetes maintains compatibility with data persisted via beta API versions of official Kubernetes APIs, and ensures that data can be converted and accessed via GA API versions when the feature goes stable.

If you adopt a beta API version, you will need to transition to a subsequent beta or stable API version once the API graduates. The best time to do this is while the beta API is in its deprecation period, since objects are simultaneously accessible via both API versions. Once the beta API completes its deprecation period and is no longer served, the replacement API version must be used.

Note: Although Kubernetes also aims to maintain compatibility for alpha API versions, in some circumstances this is not possible. If you use any alpha API versions, check the release notes for Kubernetes when upgrading your cluster, in case the API did change in incompatible ways that require deleting all existing alpha objects prior to upgrade.
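The path layout described in this section (the legacy core group under /api, named groups under /apis) can be summed up in a small helper. This is an illustrative sketch, not a Kubernetes client library function; the name `api_path` is invented here:

```python
def api_path(version, group=""):
    """Build the REST path prefix for a Kubernetes API group/version.
    The legacy "core" group lives under /api; named groups live under /apis."""
    if group == "":
        return f"/api/{version}"       # core group, e.g. /api/v1
    return f"/apis/{group}/{version}"  # named group, e.g. /apis/apps/v1

print(api_path("v1"))                                     # /api/v1
print(api_path("v1alpha1", "rbac.authorization.k8s.io"))  # /apis/rbac.authorization.k8s.io/v1alpha1
```

Real client libraries build these prefixes the same way before appending the resource type and, for namespaced resources, the namespace.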
Refer to the API versions reference for more details on the API version level definitions.

API Extension

The Kubernetes API can be extended in one of two ways:

1. Custom resources let you declaratively define how the API server should provide your chosen resource API.
2. You can also extend the Kubernetes API by implementing an aggregation layer.

What's next

- Learn how to extend the Kubernetes API by adding your own CustomResourceDefinition.
- Controlling Access To The Kubernetes API describes how the cluster manages authentication and authorization for API access.
- Learn about API endpoints, resource types and samples by reading the API Reference.
- Learn about what constitutes a compatible change, and how to change the API, from API changes.
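As a closing illustration of the /openapi/v3 discovery format shown earlier, a client can map each group/version to the relative URL of its immutable spec with ordinary JSON tooling. The sample document below is abbreviated and the hash values are placeholders, not real server output:

```python
import json

# Abbreviated sample of a /openapi/v3 discovery response (hashes are fake).
discovery = json.loads("""
{
  "paths": {
    "api/v1": {"serverRelativeURL": "/openapi/v3/api/v1?hash=aaa"},
    "apis/apps/v1": {"serverRelativeURL": "/openapi/v3/apis/apps/v1?hash=bbb"}
  }
}
""")

# Map each group/version to the relative URL of its OpenAPI v3 spec.
specs = {gv: entry["serverRelativeURL"]
         for gv, entry in discovery["paths"].items()}
print(specs["api/v1"])  # /openapi/v3/api/v1?hash=aaa
```

Because the hash is part of the URL, a client can cache each spec indefinitely and only refetch when the discovery document advertises a new URL.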