EXPERIMENT 9 (Kubernetes Multi Node Cluster)
Kubeadm has been installed on the nodes. Packages are available for Ubuntu
16.04+, CentOS 7 or HypriotOS v1.0.1+.
The first stage of initialising the cluster is to launch the master node. The master is
responsible for running the control plane components, etcd and the API server.
Clients communicate with the API server to schedule workloads and manage the state of
the cluster.
Task
The command below will initialise the cluster with a known token to simplify the
following steps.
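The initialisation command itself is not shown in this text. A typical invocation looks like the following; the token value is a placeholder, not the one used in this scenario:

```shell
# Initialise the control plane on the master node.
# --token is a placeholder example; kubeadm generates one if omitted.
kubeadm init --token=102952.1a39d4f1b8f8c5e7
```

On success, kubeadm prints a `kubeadm join` command containing this token, which is used later when worker nodes join.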
To manage the Kubernetes cluster, the client configuration and certificates are
required. This configuration is created when kubeadm initialises the cluster. The
command copies the configuration to the user's home directory and sets the
KUBECONFIG environment variable for use with the CLI.
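The copy step is not shown here; the standard kubeadm sequence is:

```shell
# Copy the admin kubeconfig generated by kubeadm and point kubectl at it.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
```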
The Container Network Interface (CNI) defines how the different nodes and their
workloads should communicate. Multiple network providers are available, including
Weave Net, Flannel, and Calico.
Task
In this scenario we'll use Weave Net from Weaveworks. The deployment definition can be viewed with:
cat /opt/weave-kube.yaml
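The definition is then applied to the cluster:

```shell
# Deploy the Weave Net CNI from the definition on disk.
kubectl apply -f /opt/weave-kube.yaml
```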
Weave will now deploy as a series of Pods on the cluster. The status of this can be
viewed using the command
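The Weave Pods run in the kube-system namespace, so their status can be checked with:

```shell
# List system pods; the weave-net pod should reach Running.
kubectl get pod -n kube-system
```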
Once the Master and CNI have initialised, additional nodes can join the cluster as long
as they have the correct token. The tokens can be managed via kubeadm token, for
example
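For example, to see the tokens that are currently valid for joining nodes:

```shell
# List join tokens and their expiry.
kubeadm token list
```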
Task
On the second node, run the command to join the cluster providing the IP address of
the Master node.
This is the same command provided after the Master has been initialised.
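A typical join command looks like the following; the token and master IP address are placeholders standing in for the values printed by kubeadm init (newer kubeadm versions also require a discovery flag such as --discovery-token-ca-cert-hash):

```shell
# Run on the worker node; token and IP are placeholder examples.
kubeadm join --token=102952.1a39d4f1b8f8c5e7 172.17.0.10:6443
```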
The cluster has now been initialised. The Master node will manage the cluster, while
our one worker node will run our container workloads.
Task
The Kubernetes CLI, known as kubectl, can now use the configuration to access the
cluster. For example, the command below will return the two nodes in our cluster.
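Listing the nodes:

```shell
# Shows the master and the worker, with their status and roles.
kubectl get nodes
```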
The state of the two nodes in the cluster should now be Ready. This means that our
deployments can be scheduled and launched.
Using kubectl, it's possible to deploy Pods. Commands are always issued to the
Master, with each node only responsible for executing the workloads.
The command below creates a Pod based on the Docker image katacoda/docker-http-server.
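A typical invocation is sketched below; the workload name `http` is illustrative, and the exact behaviour of `kubectl run` (Pod vs. Deployment) depends on the kubectl version:

```shell
# Launch the image as a workload named "http" (name is illustrative).
kubectl run http --image=katacoda/docker-http-server:latest
```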
Once running, you can see the Docker Container running on the node.
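For example, on the worker node the underlying container can be listed with:

```shell
# Run on the worker node where the Pod was scheduled.
docker ps | grep docker-http-server
```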
Task
The dashboard is deployed into the kube-system namespace. View the status of the
deployment with
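Assuming the deployment is named kubernetes-dashboard (the conventional name), its status can be checked with:

```shell
# Deployment name assumed; adjust if the manifest uses a different one.
kubectl get deployment -n kube-system kubernetes-dashboard
```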
Logging in to the Dashboard requires a ServiceAccount. An account bound to the
cluster-admin role can control all aspects of Kubernetes. With ClusterRoleBinding and
RBAC, different levels of permissions can be defined based on security requirements.
More information on creating a user for the Dashboard can be found in
the Dashboard documentation.
Once the ServiceAccount has been created, the token to login can be found with:
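Assuming the ServiceAccount was created as admin-user in the kube-system namespace (the name used in the Dashboard documentation), the token can be read from its secret:

```shell
# ServiceAccount name "admin-user" is an assumption from the Dashboard docs.
kubectl -n kube-system describe secret \
  $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
```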
When the dashboard was deployed, it used externalIPs to bind the service to port
8443. This makes the dashboard available from outside the cluster, viewable
at https://2886795301-8443-simba08.environments.katacoda.com/
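A sketch of the kind of Service definition that produces this binding; the names, labels, and IP below are illustrative, not taken from this scenario:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard   # illustrative name
  namespace: kube-system
spec:
  selector:
    k8s-app: kubernetes-dashboard   # illustrative label
  ports:
  - port: 8443
    targetPort: 8443
  externalIPs:
  - 172.17.0.10   # a node IP; illustrative
```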