Create Kubernetes Cluster
In this article, we are going to learn how to create a Kubernetes cluster using kubeadm on Ubuntu 22.04 LTS and how to join worker nodes to the cluster.
Prerequisites:
Firewall Ports/Inbound Traffic Ports for Kubernetes Cluster

| # | Protocol | Direction | Port Range | Purpose                 | Used By              |
|---|----------|-----------|------------|-------------------------|----------------------|
| 1 | TCP      | Inbound   | 6443*      | Kubernetes API server   | All                  |
| 2 | TCP      | Inbound   | 2379-2380  | etcd server client API  | kube-apiserver, etcd |
| 3 | TCP      | Inbound   | 10250      | Kubelet API             | Self, Control plane  |
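If your nodes run a host firewall such as ufw, the ports above have to be opened on the control-plane node. A minimal sketch assuming ufw is the firewall in use (on cloud instances the same ports are usually opened in the provider's security group instead):

sudo ufw allow 6443/tcp        # Kubernetes API server
sudo ufw allow 2379:2380/tcp   # etcd server client API
sudo ufw allow 10250/tcp       # Kubelet API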
Master Node:
You can clone the kubeadm-scripts repository for reference; the common.sh and master.sh scripts used below are part of it.
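A minimal sketch of fetching the scripts; the repository URL is a placeholder, replace it with the kubeadm-scripts repository you are actually following:

git clone <REPO_URL> kubeadm-scripts   # <REPO_URL> is a placeholder
cd kubeadm-scripts/scripts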
Disable swap on all the nodes:

sudo swapoff -a

Execute the following commands to enable overlayFS & VxLan pod communication.

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

Execute the following commands on all the nodes for IPtables to see bridged traffic.

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

sudo sysctl --system
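To confirm the changes took effect, you can check that the kernel modules are loaded and the sysctl parameters are applied; this quick verification is not part of the original scripts:

lsmod | grep overlay
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

Also note that swapoff -a only disables swap until the next reboot; to keep swap disabled permanently, comment out the swap entry in /etc/fstab.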
You can review or edit the Kubernetes apt repository definition if required:

sudo vi /etc/apt/sources.list.d/kubernetes.list
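The exact contents of this file depend on the Kubernetes version and on how the repository was configured during installation; with the current pkgs.k8s.io packaging it typically looks something like the line below (the version in the URL is only an example):

deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /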
List the scripts in the cloned repository and run common.sh on all the nodes (master and workers):

ls
sudo ./common.sh
In master.sh, set the PUBLIC_IP_ACCESS variable before running the script:

PUBLIC_IP_ACCESS="false"   # advertise the API server on the node's private IP (default)
PUBLIC_IP_ACCESS="true"    # advertise the API server on the node's public IP (used in this example)
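If you prefer not to open an editor, you can flip the flag in place; a small sketch assuming the variable is defined exactly as shown above in master.sh:

sed -i 's/PUBLIC_IP_ACCESS="false"/PUBLIC_IP_ACCESS="true"/' master.sh
grep PUBLIC_IP_ACCESS master.sh   # confirm the change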
After you run sudo ./master.sh, the master / control plane generates a join token as shown below. The same token has to be used by every node that joins the master, so that communication can be established between the control plane and the rest of the cluster.
sudo ./master.sh
+ POD_CIDR=192.168.0.0/16
+ [[ true == \f\a\l\s\e ]]
+ [[ true == \t\r\u\e ]]
++ curl ifconfig.me
++ echo ''
+ MASTER_PUBLIC_IP=43.205.242.73
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] apiserver serving cert is signed for DNS names [ip-1-0-0-73 kubernetes kubernetes.default
kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 1.0.0.73
43.205.242.73]
[certs] etcd/server serving cert is signed for DNS names [ip-1-0-0-73 localhost] and IPs [1.0.0.73
127.0.0.1 ::1]
[certs] etcd/peer serving cert is signed for DNS names [ip-1-0-0-73 localhost] and IPs [1.0.0.73
127.0.0.1 ::1]
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory
"/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.506746 seconds
[mark-control-plane] Marking the node ip-1-0-0-73 as control-plane by adding the labels: [node-
role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node ip-1-0-0-73 as control-plane by adding the taints [node-
role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for
nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve
CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates
in the cluster
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
export KUBECONFIG=/etc/kubernetes/admin.conf
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
--discovery-token-ca-cert-hash
sha256:f42bbb0341f5717ce53dc2a12ee753ec15d2bd02c80462bfa29187baa8394750 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
--discovery-token-ca-cert-hash
sha256:f42bbb0341f5717ce53dc2a12ee753ec15d2bd02c80462bfa29187baa8394750
+ mkdir -p /root/.kube
++ id -u
++ id -g
namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created
+ curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml -O
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created
root@ip-1-0-0-73:~/kubeadm-scripts/scripts#
Use the following commands from the output above to create the kubeconfig on the master node so that you can use kubectl to interact with the cluster API:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
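Once the kubeconfig is in place, you can verify that the control plane and the Calico components are coming up (the calico-system namespace is created by the Tigera operator installed above):

kubectl get nodes
kubectl get pods -n kube-system
kubectl get pods -n tigera-operator
kubectl get pods -n calico-system

The control-plane node may report NotReady until the Calico pods are running.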
Note: do not run the master.sh file on the worker nodes; it is meant only for the master node.
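On each worker node, run common.sh first and then join the cluster with the kubeadm join command printed by master.sh. A sketch of its general form, with placeholders instead of the real token and hash from your own output:

# Run as root on each worker node; replace the placeholders with the
# values printed by master.sh / kubeadm init on the master node.
sudo kubeadm join <MASTER_IP>:6443 --token <TOKEN> \
    --discovery-token-ca-cert-hash sha256:<HASH>

If the token has expired or the original output is no longer available, a fresh join command can be generated on the master node with kubeadm token create --print-join-command. After the join completes, kubectl get nodes on the master should list the worker, moving to Ready once Calico is running on it.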