Shailesh - Main-Kubernetes Installation
frontend fe-apiserver
    bind 0.0.0.0:6443
    mode tcp
    option tcplog
    default_backend be-apiserver

backend be-apiserver
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server master1 172.31.38.15:6443 check   ==> change the IP addresses to match your AWS servers
    server master2 172.31.38.18:6443 check
    server master3 172.31.38.121:6443 check
:wq
nc -v localhost 6443
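The nc test above can be scripted. A minimal sketch using bash's /dev/tcp pseudo-device (works even where nc is not installed); HOST and PORT are assumptions, adjust them to the bind address in your haproxy.cfg:

```shell
#!/usr/bin/env bash
# Reachability check for the HAProxy frontend (HOST/PORT are placeholders).
HOST=localhost
PORT=6443
# Opening /dev/tcp/<host>/<port> attempts a TCP connection, like `nc -v`:
if timeout 2 bash -c "echo > /dev/tcp/${HOST}/${PORT}" 2>/dev/null; then
  echo "frontend on ${HOST}:${PORT} is reachable"
else
  echo "frontend on ${HOST}:${PORT} is NOT reachable"
fi
```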
========================================================================
(VERY IMPORTANT: install and configure the following steps on ALL the master and worker servers) STEPS 1 and 2.
swapoff -a
usermod -aG docker ubuntu => note: the trainer did not run this command
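Note that swapoff -a only disables swap until the next reboot; kubeadm needs swap off permanently, which is usually done by commenting out the swap entry in /etc/fstab. A sketch below, run against a temp copy of fstab (the sample entries are illustrative; on a real node you would run the same sed on /etc/fstab itself):

```shell
# Make a throwaway fstab with one regular mount and one swap entry:
cp_fstab=$(mktemp)
printf '%s\n' \
  'UUID=abcd-1234 / ext4 defaults 0 1' \
  '/swap.img none swap sw 0 0' > "$cp_fstab"
# Comment out any line whose filesystem type is swap:
sed -i '/\sswap\s/s/^/#/' "$cp_fstab"
cat "$cp_fstab"
```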
apt-get update -y && apt-get install -y apt-transport-https curl ca-certificates gnupg-agent software-properties-common
b. Download the apt key for kubeadm
apt-get update -y
ABOVE, we have only installed the required software; none of these machines is a master or worker yet.
NOW that all the required software is installed, we will bootstrap and initialize the cluster.
a. First take the load balancer's port number and private IP address. I don't want the load balancer exposed to the public, therefore take the private IP.
cat /etc/haproxy/haproxy.cfg
=> the load balancer port we are using can be found in the configuration file, as below:
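Pulling the port out of the config can be done with awk. A sketch below against a sample config written to a temp file; on the load balancer itself you would point CFG at /etc/haproxy/haproxy.cfg:

```shell
# Sample frontend stanza standing in for the real haproxy.cfg:
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
frontend fe-apiserver
    bind 0.0.0.0:6443
    mode tcp
EOF
# Split the bind address on ':' and print the port part:
awk '/^ *bind /{split($2, a, ":"); print a[2]}' "$CFG"
```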
b. Now log in to the first master server as ROOT and initialize kubeadm with the command below. Since we have multiple master servers, this command sets up the API server endpoint behind the load balancer.
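The init command is printed here rather than executed, since the exact endpoint depends on your setup; the LB address is a placeholder, so substitute your load balancer's private IP and the port from haproxy.cfg. The --control-plane-endpoint and --upload-certs flags are the standard kubeadm options for an HA control plane:

```shell
# Placeholder endpoint -- use your load balancer's private IP and port.
LB_ENDPOINT="<LB_PRIVATE_IP>:6443"
echo "kubeadm init --control-plane-endpoint ${LB_ENDPOINT} --upload-certs"
```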
c. Initialization produces the output below. Read it carefully, it is VERY IMPORTANT: do NOT clear this screen, because once we clear it we don't get the join commands back.
d. You can join additional masters (control plane nodes) by running the command below on each, as the ROOT user, taken from the output above.
e. EXECUTE this on the other MASTER servers, as below. As we have 3 master servers, RUN it on all of them. (Don't run kubeadm init again; just take the join command from the output above and run it on the other master servers.)
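The control-plane join command has the shape below. All the angle-bracket values are placeholders; the real token, discovery hash, and certificate key come from the kubeadm init output on the first master (which is why that screen must not be cleared):

```shell
# Placeholders only -- substitute the values printed by 'kubeadm init'.
JOIN_CMD="kubeadm join <LB_PRIVATE_IP>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --control-plane --certificate-key <cert-key>"
echo "$JOIN_CMD"
```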
NOW we have to configure the kubectl client on any of the master/worker servers.
kubectl can be configured on any server that has network connectivity to the load balancer.
b. NOW, to configure KUBECONFIG, switch to a NORMAL USER on the load balancer server and create a .kube directory:
Go to any of the master machines and take this file, as highlighted below:
/etc/kubernetes/admin.conf
d. NOW, I will cat the above file, as shown below, on any of the master servers:
# cat /etc/kubernetes/admin.conf
e. COPY the CONFIGURATION from START to END as printed by cat /etc/kubernetes/admin.conf (copy everything below).
f. Create a file named config under the .kube folder in the home directory on the load balancer server, as below, and PASTE the copied contents:
g. Change the permissions:
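Steps f and g can be sketched as below. The snippet uses a temp directory standing in for the user's $HOME, and a placeholder string where you would paste the real admin.conf contents:

```shell
# Temp dir standing in for the normal user's home directory:
HOME_DIR=$(mktemp -d)
mkdir -p "${HOME_DIR}/.kube"
# In practice, paste the full contents of /etc/kubernetes/admin.conf here:
echo "<pasted admin.conf contents>" > "${HOME_DIR}/.kube/config"
# Restrict the file to the owner; kubectl warns if it is group/world-readable:
chmod 600 "${HOME_DIR}/.kube/config"
stat -c '%a' "${HOME_DIR}/.kube/config"
```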
h. As of now I have not installed the KUBECTL software, so if I run this command it will not give anything:
$ kubectl cluster-info
i. You can install it using the command given above as well as below, but ignore the one below, because we have to install it via snap using --classic (snap install kubectl --classic).
j. As we can see below, all the machines are in the NotReady state, because we have not deployed Kubernetes networking yet:
5. KUBERNETES NETWORKING:
# You will notice from the previous command that all the pods are running except one: 'kube-dns'. To resolve this we will install a pod network. To install the Weave pod network, run the following command from the load balancer where kubectl is installed:
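The apply command is printed here rather than executed; the release URL and version below are assumptions (check the Weave Net releases page for the current one, or pick another CNI such as Calico or Flannel):

```shell
# Assumed Weave Net release manifest URL (verify against the project's releases):
WEAVE_URL="https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml"
echo "kubectl apply -f ${WEAVE_URL}"
```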
We see below three API servers and three controller-manager instances. The three etcd servers are nothing but the DATASTORE, and they stay in sync with each other. A request can be processed by any API server, but the load is shared across all the etcd data stores.
========================================================================
1. We can install kubectl in whatever way works; here it is downloaded from the Internet.
b.
c. I don't have the kubeconfig file, therefore I am unable to run kubectl commands; I need that file:
5.
6. cat /etc/haproxy/haproxy.cfg
$ vi mituntechapp.yml
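The contents of mituntechapp.yml are not shown in these notes; the snippet below writes a hypothetical minimal Deployment under that name, just to have something to apply with kubectl once the cluster is networked. Every field here (name, image, replica count) is an illustrative placeholder:

```shell
# Hypothetical manifest -- the trainer's actual mituntechapp.yml is not known.
cat > mituntechapp.yml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mituntechapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mituntechapp
  template:
    metadata:
      labels:
        app: mituntechapp
    spec:
      containers:
      - name: web
        image: nginx:1.25   # placeholder image
        ports:
        - containerPort: 80
EOF
grep -c 'mituntechapp' mituntechapp.yml
```

Apply it with kubectl apply -f mituntechapp.yml from the machine where kubectl is configured.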