Debug Services

An issue that comes up rather frequently for new installations of Kubernetes is that a Service is not working properly. You've run your Deployment and created a Service, but you get no response when you try to access it. This document will hopefully help you to figure out what's going wrong.

Conventions
Running commands in a Pod
Setup
Does the Service exist?
Does the Service work by DNS?
Does the Service work by IP?
Is the Service correct?
Does the Service have any Endpoints?
Are the Pods working?
Is the kube-proxy working?
Seek help
What's next

Conventions

Throughout this doc you will see various commands that you can run. Some commands need to be run within a Pod , others on a Kubernetes
Node , and others can run anywhere you have kubectl and credentials for the cluster. To make it clear what is expected, this document will use

the following conventions.

If the command “COMMAND” is expected to run in a Pod and produce “OUTPUT”:

u@pod$ COMMAND
OUTPUT

If the command “COMMAND” is expected to run on a Node and produce “OUTPUT”:

u@node$ COMMAND
OUTPUT

If the command is “kubectl ARGS”:

kubectl ARGS
OUTPUT

Running commands in a Pod



For many steps here you will want to see what a Pod running in the cluster sees. The simplest way to do this is to run an interactive alpine Pod :

kubectl run -it --rm --restart=Never alpine --image=alpine sh


/ #

Note: If you don’t see a command prompt, try pressing enter.

If you already have a running Pod that you prefer to use, you can run a command in it using:

kubectl exec <POD-NAME> -c <CONTAINER-NAME> -- <COMMAND>
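
For example, to get an interactive shell in an already-running Pod (a minimal sketch; substitute your own Pod and container names):

kubectl exec -it <POD-NAME> -c <CONTAINER-NAME> -- sh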

Setup

For the purposes of this walk-through, let’s run some Pods . Since you’re probably debugging your own Service you can substitute your own
details, or you can follow along and get a second data point.

kubectl run hostnames --image=k8s.gcr.io/serve_hostname \
    --labels=app=hostnames \
    --port=9376 \
    --replicas=3

deployment.apps/hostnames created


kubectl commands will print the type and name of the resource created or mutated, which can then be used in subsequent commands.

Note:
This is the same as if you started the Deployment with the following YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
name: hostnames
spec:
selector:
matchLabels:
app: hostnames
replicas: 3
template:
metadata:
labels:
app: hostnames
spec:
containers:
- name: hostnames
image: k8s.gcr.io/serve_hostname
ports:
- containerPort: 9376
protocol: TCP
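
If you would rather keep the manifest in a file, a sketch assuming you save the YAML above as hostnames-deployment.yaml (a hypothetical filename):

kubectl apply -f hostnames-deployment.yaml

deployment.apps/hostnames created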

Confirm your Pods are running:


kubectl get pods -l app=hostnames


NAME READY STATUS RESTARTS AGE
hostnames-632524106-bbpiw 1/1 Running 0 2m
hostnames-632524106-ly40y 1/1 Running 0 2m
hostnames-632524106-tlaok 1/1 Running 0 2m

Does the Service exist?

The astute reader will have noticed that we did not actually create a Service yet - that is intentional. This is a step that sometimes gets forgotten, and is the first thing to check.

So what would happen if I tried to access a non-existent Service? Assuming you have another Pod that consumes this Service by name, you would get something like:

u@pod$ wget -O- hostnames


Resolving hostnames (hostnames)... failed: Name or service not known.
wget: unable to resolve host address 'hostnames'

So the first thing to check is whether that Service actually exists:

kubectl get svc hostnames


No resources found.
Error from server (NotFound): services "hostnames" not found

So we have a culprit; let's create the Service. As before, this is for the walk-through - you can use your own Service's details here.


kubectl expose deployment hostnames --port=80 --target-port=9376


service/hostnames exposed

And read it back, just to be sure:

kubectl get svc hostnames


NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hostnames ClusterIP 10.0.1.175 <none> 80/TCP 5s

As before, this is the same as if you had started the Service with YAML:

apiVersion: v1
kind: Service
metadata:
name: hostnames
spec:
selector:
app: hostnames
ports:
- name: default
protocol: TCP
port: 80
targetPort: 9376

Now you can confirm that the Service exists.

Does the Service work by DNS?


From a Pod in the same Namespace :

u@pod$ nslookup hostnames


Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local

Name: hostnames
Address 1: 10.0.1.175 hostnames.default.svc.cluster.local

If this fails, perhaps your Pod and Service are in different Namespaces; try a namespace-qualified name:

u@pod$ nslookup hostnames.default


Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local

Name: hostnames.default
Address 1: 10.0.1.175 hostnames.default.svc.cluster.local

If this works, you'll need to adjust your app to use a cross-namespace name, or run your app and Service in the same Namespace. If this still fails, try a fully-qualified name:

u@pod$ nslookup hostnames.default.svc.cluster.local


Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local

Name: hostnames.default.svc.cluster.local
Address 1: 10.0.1.175 hostnames.default.svc.cluster.local

Note the suffix here: "default.svc.cluster.local". The "default" is the Namespace we're operating in. The "svc" denotes that this is a Service. The "cluster.local" is your cluster domain, which COULD be different in your own cluster.


You can also try this from a Node in the cluster:

Note: 10.0.0.10 is my DNS Service; yours might be different.

u@node$ nslookup hostnames.default.svc.cluster.local 10.0.0.10


Server: 10.0.0.10
Address: 10.0.0.10#53

Name: hostnames.default.svc.cluster.local
Address: 10.0.1.175

If you are able to do a fully-qualified name lookup but not a relative one, you need to check that your /etc/resolv.conf file is correct.

u@pod$ cat /etc/resolv.conf


nameserver 10.0.0.10
search default.svc.cluster.local svc.cluster.local cluster.local example.com
options ndots:5

The nameserver line must indicate your cluster's DNS Service. This is passed into kubelet with the --cluster-dns flag.

The search line must include an appropriate suffix for you to find the Service name. In this case it is looking for Services in the local Namespace ( default.svc.cluster.local ), Services in all Namespaces ( svc.cluster.local ), and the cluster ( cluster.local ).

Depending on your own install you might have additional records after that (up to 6 total). The cluster suffix is passed into kubelet with the --cluster-domain flag. We assume that is "cluster.local" in this document, but yours might be different, in which case you should change that in all of the commands above.


The options line must set ndots high enough that your DNS client library considers search paths at all. Kubernetes sets this to 5 by default,
which is high enough to cover all of the DNS names it generates.

Does any Service exist in DNS?


If the above still fails - DNS lookups are not working for your Service - we can take a step back and see what else is not working. The
Kubernetes master Service should always work:

u@pod$ nslookup kubernetes.default


Server: 10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local

Name: kubernetes.default
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local

If this fails, you might need to go to the kube-proxy section of this doc, or even go back to the top of this document and start over, but instead of
debugging your own Service , debug DNS.
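
If you do end up debugging DNS, a reasonable first step is to check that the cluster DNS Pods themselves are running (a sketch assuming the conventional k8s-app=kube-dns label, which both kube-dns and CoreDNS deployments typically carry):

kubectl get pods -n kube-system -l k8s-app=kube-dns

You should see the DNS Pods in the Running state; if not, debug those Pods before returning here.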

Does the Service work by IP?

Assuming we can confirm that DNS works, the next thing to test is whether your Service works at all. From a node in your cluster, access the Service's IP (from kubectl get above).


u@node$ curl 10.0.1.175:80


hostnames-0uton

u@node$ curl 10.0.1.175:80


hostnames-yp2kp

u@node$ curl 10.0.1.175:80


hostnames-bvc05

If your Service is working, you should get correct responses. If not, there are a number of things that could be going wrong. Read on.

Is the Service correct?

It might sound silly, but you should really double and triple check that your Service is correct and matches your Pod ’s port. Read back your
Service and verify it:

kubectl get service hostnames -o json


{
"kind": "Service",
"apiVersion": "v1",
"metadata": {
"name": "hostnames",
"namespace": "default",
"uid": "428c8b6c-24bc-11e5-936d-42010af0a9bc",
"resourceVersion": "347189",
"creationTimestamp": "2015-07-07T15:24:29Z",
"labels": {
"app": "hostnames"
}
},
"spec": {
"ports": [
{
"name": "default",
"protocol": "TCP",
"port": 80,
"targetPort": 9376,
"nodePort": 0
}
],
"selector": {
"app": "hostnames"
},
"clusterIP": "10.0.1.175",
"type": "ClusterIP",
"sessionAffinity": "None"
},
"status": {
"loadBalancer": {}
}
}


Is the port you are trying to access in spec.ports[] ?

Is the targetPort correct for your Pods (many Pods choose to use a different port than the Service )?

If you meant it to be a numeric port, is it a number (9376) or a string “9376”?

If you meant it to be a named port, do your Pods expose a port with the same name?

Is the port’s protocol the same as the Pod ’s?
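
One quick way to spot-check these fields is to pull them out with jsonpath and compare them side by side (a sketch; adjust the names for your own Service and Pods):

kubectl get service hostnames -o jsonpath='{.spec.ports[*].targetPort}'
kubectl get pods -l app=hostnames -o jsonpath='{.items[*].spec.containers[*].ports[*].containerPort}'

If the numbers (or port names) do not line up, that mismatch is the likely culprit.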

Does the Service have any Endpoints?

If you got this far, we assume that you have confirmed that your Service exists and is resolved by DNS. Now let's check that the Pods you ran are actually being selected by the Service.

Earlier we saw that the Pods were running. We can re-check that:

kubectl get pods -l app=hostnames


NAME READY STATUS RESTARTS AGE
hostnames-0uton 1/1 Running 0 1h
hostnames-bvc05 1/1 Running 0 1h
hostnames-yp2kp 1/1 Running 0 1h

The “AGE” column says that these Pods are about an hour old, which implies that they are running fine and not crashing.

The -l app=hostnames argument is a label selector - just like our Service has. Inside the Kubernetes system is a control loop which
evaluates the selector of every Service and saves the results into an Endpoints object.


kubectl get endpoints hostnames


NAME ENDPOINTS
hostnames 10.244.0.5:9376,10.244.0.6:9376,10.244.0.7:9376

This confirms that the endpoints controller has found the correct Pods for your Service. If the hostnames row is blank, you should check that the spec.selector field of your Service actually selects for metadata.labels values on your Pods. A common mistake is to have a typo or other error, such as the Service selecting for run=hostnames, but the Deployment specifying app=hostnames.
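
A quick way to compare the two sides is to print the Service's selector next to the Pods' labels (a sketch using standard kubectl output options):

kubectl get service hostnames -o jsonpath='{.spec.selector}'
kubectl get pods -l app=hostnames --show-labels

Every key/value pair in the selector must appear among the labels of the Pods you expect to be selected.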

Are the Pods working?

At this point, we know that your Service exists and has selected your Pods . Let’s check that the Pods are actually working - we can bypass
the Service mechanism and go straight to the Pods .

Note: These commands use the Pod port (9376), rather than the Service port (80).

u@pod$ wget -qO- 10.244.0.5:9376


hostnames-0uton

u@pod$ wget -qO- 10.244.0.6:9376


hostnames-bvc05

u@pod$ wget -qO- 10.244.0.7:9376


hostnames-yp2kp


We expect each Pod in the Endpoints list to return its own hostname. If this is not what happens (or whatever the correct behavior is for your own Pods), you should investigate what's happening there. You might find kubectl logs to be useful, or you can kubectl exec directly into your Pods and check your service from there.
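
For example, you might read one Pod's logs directly, or open a shell in it if the image ships one (a sketch using one of the Pod names above; the serve_hostname image may not include a shell, in which case kubectl logs is your main tool):

kubectl logs hostnames-0uton
kubectl exec -it hostnames-0uton -- sh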

Another thing to check is that your Pods are not crashing or being restarted. Frequent restarts could lead to intermittent connectivity issues.

kubectl get pods -l app=hostnames


NAME READY STATUS RESTARTS AGE
hostnames-632524106-bbpiw 1/1 Running 0 2m
hostnames-632524106-ly40y 1/1 Running 0 2m
hostnames-632524106-tlaok 1/1 Running 0 2m

If the restart count is high, read more about how to debug pods.
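
kubectl describe is usually the fastest way to see why a Pod is restarting - look at the Last State and Events sections - and --previous shows logs from the crashed container instance (a sketch using one of the Pod names above):

kubectl describe pod hostnames-632524106-bbpiw
kubectl logs --previous hostnames-632524106-bbpiw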

Is the kube-proxy working?

If you get here, your Service is running, has Endpoints, and your Pods are actually serving. At this point, the whole Service proxy mechanism is suspect. Let's confirm it, piece by piece.

Is kube-proxy running?
Confirm that kube-proxy is running on your Nodes. You should get something like the below:

u@node$ ps auxw | grep kube-proxy


root 4194 0.4 0.1 101864 17696 ? Sl Jul04 25:43 /usr/local/bin/kube-proxy --master=https://kubernetes-master --


Next, confirm that it is not failing something obvious, like contacting the master. To do this, you'll have to look at the logs. Accessing the logs depends on your Node OS. On some OSes it is a file, such as /var/log/kube-proxy.log, while other OSes use journalctl to access logs. You should see something like:

I1027 22:14:53.995134 5063 server.go:200] Running in resource-only container "/kube-proxy"


I1027 22:14:53.998163 5063 server.go:247] Using iptables Proxier.
I1027 22:14:53.999055 5063 server.go:255] Tearing down userspace rules. Errors here are acceptable.
I1027 22:14:54.038140 5063 proxier.go:352] Setting endpoints for "kube-system/kube-dns:dns-tcp" to [10.244.1.3:53]
I1027 22:14:54.038164 5063 proxier.go:352] Setting endpoints for "kube-system/kube-dns:dns" to [10.244.1.3:53]
I1027 22:14:54.038209 5063 proxier.go:352] Setting endpoints for "default/kubernetes:https" to [10.240.0.2:443]
I1027 22:14:54.038238 5063 proxier.go:429] Not syncing iptables until Services and Endpoints have been received fro
I1027 22:14:54.040048 5063 proxier.go:294] Adding new service "default/kubernetes:https" at 10.0.0.1:443/TCP
I1027 22:14:54.040154 5063 proxier.go:294] Adding new service "kube-system/kube-dns:dns" at 10.0.0.10:53/UDP
I1027 22:14:54.040223 5063 proxier.go:294] Adding new service "kube-system/kube-dns:dns-tcp" at 10.0.0.10:53/TCP
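
How you actually read those logs depends on the setup: on a systemd-based Node you might query journalctl, and if kube-proxy runs as a DaemonSet (as in kubeadm clusters) you can read its Pod logs with kubectl instead (a sketch; the k8s-app=kube-proxy label is the kubeadm convention and may differ in your cluster):

u@node$ journalctl -u kube-proxy

kubectl logs -n kube-system -l k8s-app=kube-proxy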

If you see error messages about not being able to contact the master, you should double-check your Node configuration and installation steps.

One of the possible reasons that kube-proxy cannot run correctly is that the required conntrack binary cannot be found. This may happen on some Linux systems, depending on how you are installing the cluster; for example, if you are installing Kubernetes from scratch. If this is the case, you need to manually install the conntrack package (e.g. sudo apt install conntrack on Ubuntu) and then retry.
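
A quick way to tell whether the binary is present at all (a minimal sketch):

u@node$ which conntrack || echo "conntrack not installed"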

Is kube-proxy writing iptables rules?


One of the main responsibilities of kube-proxy is to write the iptables rules which implement Services . Let’s check that those rules are
getting written.

The kube-proxy can run in “userspace” mode, “iptables” mode or “ipvs” mode. Hopefully you are using the “iptables” mode or “ipvs” mode. You
should see one of the following cases.
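
If you are not sure which mode your kube-proxy is using, it reports the effective mode on its metrics port (a sketch assuming the default bind address of 127.0.0.1:10249):

u@node$ curl localhost:10249/proxyMode

iptables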


Userspace

u@node$ iptables-save | grep hostnames


-A KUBE-PORTALS-CONTAINER -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -
-A KUBE-PORTALS-HOST -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames:default" -m tcp --dport 80 -j DNA

There should be 2 rules for each port on your Service (just one in this example) - a "KUBE-PORTALS-CONTAINER" and a "KUBE-PORTALS-HOST". If you do not see these, try restarting kube-proxy with the -v flag set to 4, and then look at the logs again.

Almost nobody should be using the “userspace” mode any more, so we won’t spend more time on it here.

Iptables

u@node$ iptables-save | grep hostnames


-A KUBE-SEP-57KPRZ3JQVENLNBR -s 10.244.3.6/32 -m comment --comment "default/hostnames:" -j MARK --set-xmark 0x00004000
-A KUBE-SEP-57KPRZ3JQVENLNBR -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.244.3
-A KUBE-SEP-WNBA2IHDGP2BOBGZ -s 10.244.1.7/32 -m comment --comment "default/hostnames:" -j MARK --set-xmark 0x00004000
-A KUBE-SEP-WNBA2IHDGP2BOBGZ -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.244.1
-A KUBE-SEP-X3P2623AGDH6CDF3 -s 10.244.2.3/32 -m comment --comment "default/hostnames:" -j MARK --set-xmark 0x00004000
-A KUBE-SEP-X3P2623AGDH6CDF3 -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.244.2
-A KUBE-SERVICES -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames: cluster IP" -m tcp --dport 80 -j KUB
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -m statistic --mode random --probability 0.3333
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -m statistic --mode random --probability 0.5000
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -j KUBE-SEP-57KPRZ3JQVENLNBR


There should be 1 rule in KUBE-SERVICES, 1 or 2 rules per endpoint in KUBE-SVC-(hash) (depending on SessionAffinity), one KUBE-SEP-(hash) chain per endpoint, and a few rules in each KUBE-SEP-(hash) chain. The exact rules will vary based on your exact config (including node-ports and load-balancers).

IPVS

u@node$ ipvsadm -ln


Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
...
TCP 10.0.1.175:80 rr
-> 10.244.0.5:9376 Masq 1 0 0
-> 10.244.0.6:9376 Masq 1 0 0
-> 10.244.0.7:9376 Masq 1 0 0
...

The IPVS proxy will create a virtual server for each service address (e.g. Cluster IP, External IP, NodePort IP, Load Balancer IP, etc.) and corresponding real servers for the endpoints of the service, if any. In this example, the hostnames service (10.0.1.175:80) has 3 endpoints (10.244.0.5:9376, 10.244.0.6:9376, 10.244.0.7:9376) and you'll get results similar to the above.

Is kube-proxy proxying?
Assuming you do see the above rules, try again to access your Service by IP:

u@node$ curl 10.0.1.175:80


hostnames-0uton


If this fails and you are using the userspace proxy, you can try accessing the proxy directly. If you are using the iptables proxy, skip this section.

Look back at the iptables-save output above, and extract the port number that kube-proxy is using for your Service . In the above
examples it is “48577”. Now connect to that:

u@node$ curl localhost:48577


hostnames-yp2kp

If this still fails, look at the kube-proxy logs for specific lines like:

Setting endpoints for default/hostnames:default to [10.244.0.5:9376 10.244.0.6:9376 10.244.0.7:9376]

If you don't see those, try restarting kube-proxy with the -v flag set to 4, and then look at the logs again.
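
How you raise the verbosity depends on how kube-proxy is deployed. If it runs as a kubeadm-style DaemonSet, one approach is to edit the DaemonSet and add the flag to the container's command (a sketch; the namespace and name reflect kubeadm defaults and may differ in your cluster):

kubectl -n kube-system edit daemonset kube-proxy
# add --v=4 to the kube-proxy container's command, save, and let the Pods roll out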

A Pod cannot reach itself via Service IP


This can happen when the network is not properly configured for "hairpin" traffic, usually when kube-proxy is running in iptables mode and Pods are connected with a bridge network. The Kubelet exposes a hairpin-mode flag that allows endpoints of a Service to load-balance back to themselves if they try to access their own Service VIP. The hairpin-mode flag must either be set to hairpin-veth or promiscuous-bridge.

The common steps to troubleshoot this are as follows:

Confirm hairpin-mode is set to hairpin-veth or promiscuous-bridge. You should see something like the below. hairpin-mode is set to promiscuous-bridge in the following example.


u@node$ ps auxw|grep kubelet

root 3392 1.1 0.8 186804 65208 ? Sl 00:51 11:11 /usr/local/bin/kubelet --enable-debugging-handlers

Confirm the effective hairpin-mode. To do this, you'll have to look at the kubelet log. Accessing the logs depends on your Node OS. On some OSes it is a file, such as /var/log/kubelet.log, while other OSes use journalctl to access logs. Note that the effective hairpin mode may not match the --hairpin-mode flag due to compatibility. Check whether there are any log lines with the keyword hairpin in kubelet.log. There should be log lines indicating the effective hairpin mode, like the one below.

I0629 00:51:43.648698 3252 kubelet.go:380] Hairpin mode set to "promiscuous-bridge"

If the effective hairpin mode is hairpin-veth, ensure the Kubelet has permission to operate in /sys on the node. If everything works properly, you should see something like:

u@node$ for intf in /sys/devices/virtual/net/cbr0/brif/*; do cat $intf/hairpin_mode; done

1
1
1

If the effective hairpin mode is promiscuous-bridge, ensure Kubelet has permission to manipulate the Linux bridge on the node. If the cbr0 bridge is used and configured properly, you should see:


u@node$ ifconfig cbr0 |grep PROMISC

UP BROADCAST RUNNING PROMISC MULTICAST MTU:1460 Metric:1

Seek help if none of the above works out.

Seek help

If you get this far, something very strange is happening. Your Service is running, has Endpoints , and your Pods are actually serving. You

have DNS working, iptables rules installed, and kube-proxy does not seem to be misbehaving. And yet your Service is not working. You
should probably let us know, so we can help investigate!

Contact us on Slack or Forum or GitHub.

What's next

Visit the troubleshooting document for more information.
