31 Sequential Breakdown of The Process
In this lesson, we will go through the sequential processes kicked off by a Service creation.
The Sequence
The processes that were initiated with the creation of the Service are as
follows:
1. The Kubernetes client (kubectl) sent a request to the API server, asking it to
   create a Service based on the Pods created through the go-demo-2 ReplicaSet.
2. The Endpoint controller, which watches the API server for new Service events,
   detected that there is a new Service object.
3. The Endpoint controller created endpoint objects with the same name as the
   Service, using the Service's selector to identify the endpoints (in this case
   the IPs and ports of the go-demo-2 Pods).
4. kube-proxy, which watches for Service and endpoint objects, detected the new
   Service and the new endpoint object.
5. kube-proxy added iptables rules that capture traffic to the Service port and
   redirect it to the endpoints. For each endpoint object, it added an iptables
   rule that selects a Pod.
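Before moving on, we can confirm what those controllers did. A minimal sketch,
assuming the Service is named go-demo-2-svc (as it is in the output below):

kubectl get ep go-demo-2-svc -o yaml   # the endpoint object created by the Endpoint controller

kubectl describe svc go-demo-2-svc     # the Service itself; its output is shown below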
Name:                     go-demo-2-svc
Namespace:                default
Labels:                   db=mongo
                          language=go
                          service=go-demo-2
                          type=backend
Annotations:              <none>
Selector:                 service=go-demo-2,type=backend
Type:                     NodePort
IP:                       10.0.0.194
Port:                     <unset>  28017/TCP
TargetPort:               28017/TCP
NodePort:                 <unset>  31879/TCP
Endpoints:                172.17.0.4:28017,172.17.0.5:28017
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
Lines 1-2: We can see the name and the namespace. We have not yet explored
namespaces (they're coming up later) and, since we didn't specify one, it is
set to default.
Lines 3-6: Since the Service is associated with the Pods created through the
ReplicaSet, it inherited all their labels. The selector matches the one from
the ReplicaSet. The Service is not directly associated with the ReplicaSet
(or any other controller) but with the Pods, through matching labels.
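A quick way to confirm that the labels, rather than the ReplicaSet itself, tie
the Service to the Pods is to list the Pods with the same selector shown in the
output above; a minimal sketch:

kubectl get pods -l service=go-demo-2,type=backend -o wide

The Pod IPs in that output should match the Endpoints entries above
(172.17.0.4 and 172.17.0.5).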
Lines 9-13: Next is the Type, NodePort, which exposes ports on all the nodes.
Since NodePort automatically creates the ClusterIP type as well, all the Pods
in the cluster can access the TargetPort. The Port is set to 28017. That is
the port other Pods can use to access the Service. Since we did not specify
it explicitly when we executed the command, its value is the same as the
value of the TargetPort, which is the port of the associated Pods that will
receive all the requests. The NodePort was generated automatically since we
did not set it explicitly. It is the port we can use to access the Service,
and therefore the Pods, from outside the cluster. In most cases it should be
randomly generated, so that we avoid any clashes.
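The explanation that follows refers to a short snippet that is not reproduced
in this lesson. A minimal sketch of what it might look like, assuming a
minikube cluster and the go-demo-2-svc Service described above (the jsonpath
expression and the open command are illustrative; on Linux, xdg-open would be
used instead). The line numbers referenced below map onto this four-line
sketch:

PORT=$(kubectl get svc go-demo-2-svc \
    -o jsonpath="{.spec.ports[0].nodePort}")
IP=$(minikube ip)
open "http://$IP:$PORT"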
Lines 1-2: We used the filtered output of the kubectl get command to retrieve
the nodePort and store it in the environment variable PORT.
Line 3: We retrieved the IP of the minikube VM.
Line 4: Finally, we opened the MongoDB UI in a browser through the Service's
node port.
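If a browser is not available, a similar check can be made from the terminal;
a minimal sketch, reusing the IP and PORT variables assumed above:

curl -i "http://$IP:$PORT"

A successful response confirms that traffic sent to the node port reaches one
of the go-demo-2 Pods.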
The same applies to Services. Even though kubectl expose did the work, we
should try to use a documented approach through YAML files. In that spirit,
now that we have destroyed the Service, we will explore creating Services
through declarative syntax in the next lesson.