devops_3_Namespaces
# Not in a namespace
kubectl api-resources --namespaced=false
Service
Expose an application running in your cluster behind a single outward-facing
endpoint, even when the workload is split across multiple backends.
In Kubernetes, a Service is a method for exposing a network application that is
running as one or more Pods in your cluster.
A key aim of Services in Kubernetes is that you don't need to modify your
existing application to use an unfamiliar service discovery mechanism. You can
run code in Pods, whether this is code designed for a cloud-native world, or an
older app you've containerized. You use a Service to make that set of Pods
available on the network so that clients can interact with it.
If you use a Deployment to run your app, that Deployment can create and
destroy Pods dynamically. From one moment to the next, you don't know how
many of those Pods are working and healthy; you might not even know what
those healthy Pods are named. Kubernetes Pods are created and destroyed to
match the desired state of your cluster. Pods are ephemeral resources (you
should not expect that an individual Pod is reliable and durable).
Each Pod gets its own IP address (Kubernetes expects network plugins to ensure
this). For a given Deployment in your cluster, the set of Pods running in one
moment in time could be different from the set of Pods running that application
a moment later.
This leads to a problem: if some set of Pods (call them "backends") provides
functionality to other Pods (call them "frontends") inside your cluster, how do
the frontends find out and keep track of which IP address to connect to, so that
the frontend can use the backend part of the workload?
Enter Services.
Services in Kubernetes
The Service API, part of Kubernetes, is an abstraction to help you expose
groups of Pods over a network. Each Service object defines a logical set of
endpoints (usually these endpoints are Pods) along with a policy about how to
make those Pods accessible.
For example, consider a stateless image-processing backend which is running
with 3 replicas. Those replicas are fungible—frontends do not care which
backend they use. While the actual Pods that compose the backend set may
change, the frontend clients should not need to be aware of that, nor should they
need to keep track of the set of backends themselves.
The Service abstraction enables this decoupling.
The set of Pods targeted by a Service is usually determined by a selector that
you define. To learn about other ways to define Service endpoints,
see Services without selectors.
If your workload speaks HTTP, you might choose to use an Ingress to control
how web traffic reaches that workload. Ingress is not a Service type, but it acts
as the entry point for your cluster. An Ingress lets you consolidate your routing
rules into a single resource, so that you can expose multiple components of your
workload, running separately in your cluster, behind a single listener.
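As a sketch, an Ingress that sends web traffic to a Service might look like the
manifest below. The host name and Ingress name are hypothetical, and an Ingress
controller must be running in the cluster for the rules to take effect:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress        # hypothetical name
spec:
  rules:
  - host: www.example.com      # hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service   # the Service that receives the traffic
            port:
              number: 80
```

You could add further path or host rules to the same Ingress to expose other
components of your workload behind the same listener.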
The Gateway API for Kubernetes provides extra capabilities beyond Ingress and
Service. You can add Gateway to your cluster - it is a family of extension APIs,
implemented using CustomResourceDefinitions - and then use these to
configure access to network services that are running in your cluster.
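A minimal sketch of routing with the Gateway API, assuming the Gateway API
extension CRDs are installed in the cluster and that a Gateway named
example-gateway already exists (both names here are hypothetical):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-route              # hypothetical route name
spec:
  parentRefs:
  - name: example-gateway     # hypothetical Gateway to attach this route to
  rules:
  - backendRefs:
    - name: my-service        # matching traffic is forwarded to this Service
      port: 80
```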
Cloud-native service discovery
If you're able to use Kubernetes APIs for service discovery in your application,
you can query the API server for matching EndpointSlices. Kubernetes updates
the EndpointSlices for a Service whenever the set of Pods in a Service changes.
For non-native applications, Kubernetes offers ways to place a network port or
load balancer in between your application and the backend Pods.
Either way, your workload can use these service discovery mechanisms to find
the target it wants to connect to.
Defining a Service
A Service is an object (the same way that a Pod or a ConfigMap is an object).
You can create, view or modify Service definitions using the Kubernetes API.
Usually you use a tool such as kubectl to make those API calls for you.
For example, suppose you have a set of Pods that each listen on TCP port 9376
and are labelled as app.kubernetes.io/name=MyApp. You can define a Service
to publish that TCP listener:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
Applying this manifest creates a new Service named "my-service" with the
default ClusterIP service type. The Service targets TCP port 9376 on any Pod
with the app.kubernetes.io/name: MyApp label.
Kubernetes assigns this Service an IP address (the cluster IP), which is used
by the virtual IP address mechanism. For more details on that mechanism, read
Virtual IPs and Service Proxies.
The controller for that Service continuously scans for Pods that match its
selector, and then makes any necessary updates to the set of EndpointSlices for
the Service.
The name of a Service object must be a valid RFC 1035 label name.
Port definitions
Port definitions in Pods have names, and you can reference these names in
the targetPort attribute of a Service. For example, we can bind the targetPort of
the Service to the Pod port in the following way:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app.kubernetes.io/name: proxy
spec:
  containers:
  - name: nginx
    image: nginx:stable
    ports:
    - containerPort: 80
      name: http-web-svc
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app.kubernetes.io/name: proxy
  ports:
  - name: name-of-service-port
    protocol: TCP
    port: 80
    targetPort: http-web-svc
This works even if there is a mixture of Pods in the Service using a single
configured name, with the same network protocol available via different port
numbers. This offers a lot of flexibility for deploying and evolving your
Services. For example, you can change the port numbers that Pods expose in the
next version of your backend software, without breaking clients.
The default protocol for Services is TCP; you can also use any other supported
protocol.
Because many Services need to expose more than one port, Kubernetes
supports multiple port definitions for a single Service. Each port definition can
have the same protocol, or a different one.
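For instance, a single Service can publish an application port and a metrics
port side by side; the port names and numbers below are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - name: http        # client-facing application traffic
    protocol: TCP
    port: 80
    targetPort: 9376
  - name: metrics     # e.g. a scrape endpoint (illustrative)
    protocol: TCP
    port: 9100
    targetPort: 9100
```

When a Service defines more than one port, each port definition must have a
name so that the ports are unambiguous.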
Services without selectors
Services most commonly abstract access to Kubernetes Pods thanks to the
selector, but when used with a corresponding set of EndpointSlices objects and
without a selector, the Service can abstract other kinds of backends, including
ones that run outside the cluster.
For example:
- You want to have an external database cluster in production, but in your
  test environment you use your own databases.
- You want to point your Service to a Service in a different Namespace or
  on another cluster.
- You are migrating a workload to Kubernetes. While evaluating the
  approach, you run only a portion of your backends in Kubernetes.
In any of these scenarios you can define a Service without specifying a selector
to match Pods. For example:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 9376
Because this Service has no selector, the corresponding EndpointSlice (and
legacy Endpoints) objects are not created automatically. You can map the
Service to the network address and port where it's running, by adding an
EndpointSlice object manually. For example:
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: my-service-1 # by convention, use the name of the Service
                     # as a prefix for the name of the EndpointSlice
  labels:
    # You should set the "kubernetes.io/service-name" label.
    # Set its value to match the name of the Service
    kubernetes.io/service-name: my-service
addressType: IPv4
ports:
- name: http # should match with the name of the service port defined above
  appProtocol: http
  protocol: TCP
  port: 9376
endpoints:
- addresses:
  - "10.4.5.6"
- addresses:
  - "10.1.2.3"
Custom EndpointSlices
When you create an EndpointSlice object for a Service, you can use any name
for the EndpointSlice. Each EndpointSlice in a namespace must have a unique
name. You link an EndpointSlice to a Service by setting
the kubernetes.io/service-name label on that EndpointSlice.
Note:
The endpoint IPs must not be: loopback (127.0.0.0/8 for IPv4, ::1/128 for IPv6),
or link-local (169.254.0.0/16 and 224.0.0.0/24 for IPv4, fe80::/64 for IPv6).
The endpoint IP addresses cannot be the cluster IPs of other Kubernetes
Services, because kube-proxy doesn't support virtual IPs as a destination.
For an EndpointSlice that you create yourself, or in your own code, you should
also pick a value to use for the label endpointslice.kubernetes.io/managed-by. If
you create your own controller code to manage EndpointSlices, consider using a
value similar to "my-domain.example/name-of-controller". If you are using a
third party tool, use the name of the tool in all-lowercase and change spaces and
other punctuation to dashes (-). If people are directly using a tool such
as kubectl to manage EndpointSlices, use a name that describes this manual
management, such as "staff" or "cluster-admins". You should avoid using the
reserved value "controller", which identifies EndpointSlices managed by
Kubernetes' own control plane.
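Putting that naming guidance together, a manually managed EndpointSlice might
carry labels like these; the slice name, endpoint address, and managed-by value
below are illustrative:

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: my-service-2                # hypothetical additional slice for my-service
  labels:
    kubernetes.io/service-name: my-service
    # identifies who manages this slice; avoid the reserved value "controller"
    endpointslice.kubernetes.io/managed-by: cluster-admins
addressType: IPv4
ports:
- name: http
  protocol: TCP
  port: 9376
endpoints:
- addresses:
  - "10.9.8.7"                      # illustrative endpoint address
```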
Accessing a Service without a selector
Accessing a Service without a selector works the same as if it had a selector. In
the example for a Service without a selector, traffic is routed to one of the two
endpoints defined in the EndpointSlice manifest: a TCP connection to 10.1.2.3
or 10.4.5.6, on port 9376.
An ExternalName Service is a special case of Service that does not have
selectors and uses DNS names instead. For more information, see
the ExternalName section.
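For reference, an ExternalName Service simply maps the in-cluster Service name
to a DNS name; a sketch, with a hypothetical external host:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-database              # clients resolve this in-cluster name
spec:
  type: ExternalName
  externalName: db.example.com   # hypothetical external DNS name
```

With this in place, cluster DNS answers lookups for my-database with a CNAME
record pointing at db.example.com, rather than proxying any traffic.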
EndpointSlices are objects that represent a subset (a slice) of the backing
network endpoints for a Service.
Your Kubernetes cluster tracks how many endpoints each EndpointSlice
represents. If there are so many endpoints for a Service that a threshold is
reached, then Kubernetes adds another empty EndpointSlice and stores new
endpoint information there. By default, Kubernetes makes a new EndpointSlice
once the existing EndpointSlices all contain at least 100 endpoints. Kubernetes
does not make the new EndpointSlice until an extra endpoint needs to be added.
See EndpointSlices for more information about this API.
Endpoints
In the Kubernetes API, an Endpoints (the resource kind is plural) defines a list
of network endpoints, typically referenced by a Service to define which Pods
the traffic can be sent to.
The EndpointSlice API is the recommended replacement for Endpoints.
Over-capacity endpoints
Kubernetes limits the number of endpoints that can fit in a single Endpoints
object. When there are over 1000 backing endpoints for a Service, Kubernetes
truncates the data in the Endpoints object. Because a Service can be linked with
more than one EndpointSlice, the 1000 backing endpoint limit only affects the
legacy Endpoints API.
In that case, Kubernetes selects at most 1000 possible backend endpoints to
store into the Endpoints object, and sets an annotation on the
Endpoints: endpoints.kubernetes.io/over-capacity: truncated. The control plane
also removes that annotation if the number of backend Pods drops below 1000.
Traffic is still sent to backends, but any load balancing mechanism that relies on
the legacy Endpoints API only sends traffic to at most 1000 of the available
backing endpoints.
The same API limit means that you cannot manually update an Endpoints to
have more than 1000 endpoints.
Application protocol
FEATURE STATE: Kubernetes v1.20 [stable]
The appProtocol field provides a way to specify an application protocol for each
Service port. This is used as a hint for implementations to offer richer behavior
for protocols that they understand. The value of this field is mirrored by the
corresponding Endpoints and EndpointSlice objects.
This field follows standard Kubernetes label syntax. Valid values are one of:
- IANA standard service names.
- Implementation-defined prefixed names such as
  mycompany.com/my-custom-protocol.
- Kubernetes-defined prefixed names.
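As a sketch, appProtocol is set per Service port; here a hypothetical
TLS-terminating backend advertises https (the port numbers are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
  - name: https
    protocol: TCP       # the transport-layer protocol
    appProtocol: https  # hint about the application-layer protocol
    port: 443
    targetPort: 8443    # hypothetical container port
```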
Persistent data
https://kubernetes.io/docs/concepts/storage/persistent-volumes/