Docker and Kubernetes: A Way to Build Scalable and Portable Applications with Cloud

Docker and Kubernetes provide a way to build scalable and portable applications for cloud computing. Docker allows applications to be packaged into containers that can run on any infrastructure. Kubernetes provides tools to manage containers across clusters of hosts and helps ensure containers are running as expected. Together, Docker and Kubernetes help address the challenges of running applications in modern, dynamic cloud environments.


Docker and Kubernetes

A way to build scalable and portable applications with Cloud

Dr Ganesh Neelakanta Iyer


Associate Professor, Dept. of Computer Science and Engineering

Amrita Vishwa Vidyapeetham, Coimbatore


About Me
• Associate Professor, Amrita Vishwa Vidyapeetham
• Masters & PhD from National University of Singapore (NUS)
• Several years in industry/academia
  – Sasken Communications, NXP Semiconductors, Progress Software, IIIT-HYD, NUS (Singapore)
  – Architect, Manager, Technology Evangelist, Visiting Faculty
• Talks/workshops in USA, Europe, Australia, Asia
• Cloud/Edge Computing, IoT, Game Theory, Software QA
• Kathakali Artist, Composer, Speaker, Traveler, Photographer

http://ganeshniyer.com | GANESHNIYER
Outline
• Docker
• Need for Orchestration
• Kubernetes
How many of you have worked on Kubernetes?

Dr Ganesh Neelakanta Iyer 4

How many of you have worked on Docker?

How many of you have heard of Docker?
How many know what Cloud Computing is?
Docker
Flashback: let's go back to pre-1960s cargo transport

Two questions dominated shipping:
• Do I worry about how goods interact (e.g. coffee beans next to spices)?
• Can I transport quickly and smoothly (e.g. from boat to train to truck)?

Cargo Transport Pre-1960: a multiplicity of goods crossed with a multiplicity of methods for transporting/storing them. The result was an M x N matrix of pairings, each an open question needing its own handling.
Solution: Intermodal Shipping Container

A standard container that is loaded with virtually any goods, and stays sealed until it reaches final delivery.

…in between, it can be loaded and unloaded, stacked, transported efficiently over long distances, and transferred from one mode of transport to another.

This answers both questions at once: you no longer worry about how goods interact (coffee beans stay sealed away from spices), and the same box moves quickly and smoothly from boat to train to truck.
This eliminated the M x N problem…
and spawned an intermodal shipping container ecosystem

• 90% of all cargo is now shipped in a standard container
• Order-of-magnitude reduction in cost and time to load and unload ships
• Massive reduction in losses due to theft or damage
• Huge reduction in freight cost as a percent of final goods (from >25% to <3%), enabling massive globalization
• 5000 ships deliver 200M containers per year
The Challenge

Multiplicity of stacks:
• Static website: nginx 1.5 + modsecurity + openssl + bootstrap 2
• Web frontend: Ruby + Rails + sass + Unicorn
• Background workers: Python 3.0 + celery + pyredis + libcurl + ffmpeg + libopencv + nodejs + phantomjs
• API endpoint: Python 2.7 + Flask + pyredis + celery + psycopg + postgresql-client
• User DB: postgresql + pgv8 + v8
• Analytics DB: hadoop + hive + thrift + OpenJDK
• Queue: Redis + redis-sentinel

Do services and apps interact appropriately?

Multiplicity of hardware environments:
• Development VM
• QA server
• Disaster recovery
• Customer data center
• Production servers
• Public cloud
• Production cluster
• Contributor's laptop

Can I migrate smoothly and quickly?
Results in an M x N compatibility nightmare

Every stack (static website, web frontend, background workers, user DB, analytics DB, queue) must be validated against every environment (development VM, QA server, single prod server, onsite cluster, public cloud, contributor's laptop, customer servers): a grid of open questions.
Docker is a shipping container system for code

An engine that enables any payload to be encapsulated as a lightweight, portable, self-sufficient container…

…that can be manipulated using standard operations and run consistently on virtually any hardware platform

The same answers apply: services and apps (static website, web frontend, user DB, queue, analytics DB) no longer interact inappropriately despite the multiplicity of stacks, and containers migrate smoothly and quickly across environments (development VM, QA server, customer data center, public cloud, production cluster, contributor's laptop).
Or…put more simply

• Developer: Build Once, Run Anywhere (Finally)
• Operator: Configure Once, Run Anything

Whatever the stack (static website, user DB, web frontend, queue, analytics DB) and whatever the environment (development VM, QA server, production, data center, public cloud, cluster, contributor's laptop).
Docker solves the M x N problem

Each stack (static website, web frontend, background workers, user DB, analytics DB, queue) is containerized once, and the same container runs unchanged on every environment (development VM, QA server, single prod server, onsite cluster, public cloud, contributor's laptop, customer servers).
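The arithmetic behind these slides can be sketched in a few lines of Python (a hypothetical illustration, not part of the original deck): without a standard container format, every stack/environment pairing needs its own integration work; with containers, each stack is packaged once and each environment learns to run containers once.

```python
stacks = ["static website", "web frontend", "background workers",
          "user DB", "analytics DB", "queue"]
environments = ["development VM", "QA server", "single prod server",
                "onsite cluster", "public cloud", "contributor's laptop",
                "customer servers"]

# Without containers: one bespoke integration per (stack, environment) pair.
pairwise_integrations = len(stacks) * len(environments)

# With containers: package each stack once, and teach each environment
# to run containers once.
containerized_integrations = len(stacks) + len(environments)

print(pairwise_integrations)       # 42
print(containerized_integrations)  # 13
```

The M x N grid of open questions collapses to M + N well-defined pieces of work.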
Docker containers

• Wrap up a piece of software in a complete file system that contains everything it needs to run:
  – Code, runtime, system tools, system libraries
  – Anything you can install on a server
• This guarantees that it will always run the same, regardless of the environment it is running in
Why containers matter

Content agnostic
• Physical containers: the same container can hold almost any type of cargo
• Docker: can encapsulate any payload and its dependencies

Hardware agnostic
• Physical containers: standard shape and interface allow the same container to move from ship to train to semi-truck to warehouse to crane without being modified or opened
• Docker: using operating system primitives (e.g. LXC), can run consistently on virtually any hardware (VMs, bare metal, OpenStack, public IaaS, etc.) without modification

Content isolation and interaction
• Physical containers: no worry about anvils crushing bananas; containers can be stacked and shipped together
• Docker: resource, network, and content isolation; avoids dependency hell

Automation
• Physical containers: standard interfaces make it easy to automate loading, unloading, moving, etc.
• Docker: standard operations to run, start, stop, commit, search, etc.; perfect for devops: CI, CD, autoscaling, hybrid clouds

Highly efficient
• Physical containers: no opening or modification, quick to move between waypoints
• Docker: lightweight, virtually no performance or start-up penalty, quick to move and manipulate

Separation of duties
• Physical containers: shipper worries about the inside of the box, carrier worries about the outside
• Docker: developer worries about code, ops worries about infrastructure
Docker containers

Lightweight
• Containers running on one machine all share the same OS kernel
• They start instantly and make more efficient use of RAM
• Images are constructed from layered file systems, so they can share common files, making disk usage and image downloads much more efficient

Open
• Based on open standards, allowing containers to run on all major Linux distributions and Microsoft OS, with support for every infrastructure

Secure
• Containers isolate applications from each other and the underlying infrastructure while providing an added layer of protection for the application
Docker / Containers vs. Virtual Machine

Containers have similar resource isolation and allocation benefits as VMs, but a different architectural approach allows them to be much more portable and efficient

https://www.docker.com/whatisdocker/
Containers vs Virtual Machines

Virtual Machines: virtual machines run guest operating systems (note the OS layer in each box). This is resource intensive, and the resulting disk image and application state is an entanglement of OS settings, system-installed dependencies, OS security patches, and other easy-to-lose, hard-to-replicate ephemera.

Containers: containers can share a single kernel, and the only information that needs to be in a container image is the executable and its package dependencies, which never need to be installed on the host system. These processes run like native processes, and you can manage them individually.
Why are Docker containers lightweight?

VMs: every app, every copy of an app, and every slight modification of the app requires a new virtual server, each with its own guest OS and bins/libs.

Containers: the original app carries no OS, so it takes up no extra space or resources and requires no restart. A copy of the app shares the bins/libs of the original. For a modified app A', a union file system lets Docker save only the diffs between container A and container A'.
What are the basics of the Docker system?

• Build: the Docker Engine builds a Dockerfile plus your source code repository into a Docker container image
• Push: the image is pushed to a Docker container image registry
• Search / Pull: other hosts search the registry and pull the image
• Run: the Docker Engine on any host (Host 1 on Linux, Host 2 on Windows or Linux) runs the image as containers A, B, C, …
Changes and Updates

• A base container image for app A is pushed to the Docker container image registry, along with containers for each modification (Mod A', Mod A'')
• A host running A that wants to upgrade to A'' requests the update and gets only the diffs (the app Δ), not a whole new image
• The host is then running A''
Easily Share and Collaborate on Applications

Docker creates a common framework for developers and sysadmins to work together on distributed applications

• Distribute and share content
  – Store, distribute and manage your Docker images in your Docker Hub with your team
  – Image updates, changes and history are automatically shared across your organization
• Simply share your application with others
  – Ship your containers to others without worrying about different environment dependencies creating issues with your application
  – Other teams can easily link to or test against your app without having to learn or worry about how it works
Get Started with Docker
• Install Docker
• Run a software image in a container
• Browse for an image on Docker Hub
• Create your own image and run it in a container
• Create a Docker Hub account and an image repository
• Create an image of your own
• Push your image to Docker Hub for others to use

https://www.docker.com/products/docker
https://www.docker.com/products/docker-toolbox
Docker Container as a Service (CaaS)

Deliver an IT-secured and managed application environment for developers to build and deploy applications in a self-service manner

Typical use cases:
• App modernization
• Continuous Integration and Deployment (CI/CD)
• Microservices
  https://mesosphere.com/blog/networking-docker-containers-part-ii-service-discovery-traditional-apps-microservices/
• Hybrid cloud
  https://boxboat.com/2016/10/21/maintaining-docker-portability-multi-cloud-world/
How does this help you build better software?

Accelerate Developer Onboarding
• Stop wasting hours trying to set up developer environments
• Spin up new instances and make copies of production code to run locally
• With Docker, you can easily take copies of your live environment and run them on any new endpoint running Docker

Empower Developer Creativity
• The isolation capabilities of Docker containers free developers from the worries of using "approved" language stacks and tooling
• Developers can use the best language and tools for their application service without worrying about causing conflict issues

Eliminate Environment Inconsistencies
• By packaging up the application with its configs and dependencies together and shipping it as a container, the application will always work as designed locally, on another machine, or in test or production
• No more worries about having to install the same configs into a different environment
First Hand Experience
Setting up
• Before we get started, make sure your system has the latest version of
Docker installed.
• Docker is available in two editions: Community Edition
(CE) and Enterprise Edition (EE).
• Docker Community Edition (CE) is ideal for developers and small teams
looking to get started with Docker and experimenting with container-based
apps. Docker CE has two update channels, stable and edge:
– Stable gives you reliable updates every quarter
– Edge gives you new features every month
• Docker Enterprise Edition (EE) is designed for enterprise development
and IT teams who build, ship, and run business critical applications in
production at scale.
Supported Platforms
https://docs.docker.com/install/
In this session, I use Docker for Windows Desktop
Docker for Windows
If your Windows is not on the latest version…

https://docs.docker.com/docker-for-windows/release-notes/#docker-community-edition-17062-ce-win27-2017-09-06-stable
Docker for Windows

When the whale in the status bar stays steady, Docker is up and running, and accessible from any terminal window.

Hello-world
• Open a command prompt / Windows PowerShell and run
docker run hello-world

• Now would also be a good time to make sure you are using version 1.13 or higher. Run docker --version to check.
Building an app the Docker way
• In the past, if you were to start writing a Python app, your first
order of business was to install a Python runtime onto your
machine
• But, that creates a situation where the environment on your machine
has to be just so in order for your app to run as expected; ditto for
the server that runs your app
• With Docker, you can just grab a portable Python runtime as an
image, no installation necessary
• Then, your build can include the base Python image right alongside
your app code, ensuring that your app, its dependencies, and the
runtime, all travel together
• These portable images are defined by something called a
Dockerfile
Define a container with a Dockerfile
• Dockerfile will define what goes on in the
environment inside your container
• Access to resources like networking interfaces and disk
drives is virtualized inside this environment, which is
isolated from the rest of your system, so you have to map
ports to the outside world, and be specific about what
files you want to “copy in” to that environment
• However, after doing that, you can expect that the build of
your app defined in this Dockerfile will behave
exactly the same wherever it runs
Dockerfile
• Create an empty directory
• Change directories (cd) into the new directory, create a
file called Dockerfile
Dockerfile
• In Windows, open Notepad, copy in the Dockerfile content, click Save as, and save the file as "Dockerfile" (no extension)

This Dockerfile refers to a couple of files we haven't created yet, namely app.py and requirements.txt. Let's create those next.
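The slide showed the Dockerfile itself as an image. As a hedged sketch of the kind of Dockerfile the official get-started tutorial uses (the base-image tag here is an illustrative assumption, not necessarily the deck's exact file):

```dockerfile
# Use an official Python runtime as a parent image (illustrative tag)
FROM python:2.7-slim

# Set the working directory to /app
WORKDIR /app

# Copy the current directory contents into the container at /app
ADD . /app

# Install any needed packages specified in requirements.txt
RUN pip install -r requirements.txt

# Make port 80 available to the world outside this container
EXPOSE 80

# Run app.py when the container launches
CMD ["python", "app.py"]
```

Note the ADD and EXPOSE instructions, which the next slides refer to.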
The app itself
• Create two more files, requirements.txt and app.py, and put them in the same folder with the Dockerfile
• This completes our app, which as you can see is quite simple
• When the above Dockerfile is built into an image, app.py and requirements.txt will be present because of that Dockerfile's ADD command, and the output from app.py will be accessible over HTTP thanks to the EXPOSE command.
The App itself

app.py
requirements.txt

That's it! You don't need Python or anything in requirements.txt on your system, nor will building or running this image install them on your system. It doesn't seem like you've really set up an environment with Python and Flask, but you have.
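The app.py and requirements.txt contents appeared as images on the slide. As an illustrative stand-in (not the tutorial's exact files): requirements.txt would list Flask and Redis, and app.py would serve a greeting plus the container's hostname. The page-building idea can be sketched in plain Python:

```python
import socket

def hello_page(name="World"):
    # Build the kind of page the tutorial app serves over HTTP:
    # a greeting plus the hostname, which inside a container is
    # the container ID.
    return "<h3>Hello {}!</h3><b>Hostname:</b> {}".format(
        name, socket.gethostname())

print(hello_page())
```

In the real tutorial app, a Flask route returns this page, and a Redis counter is added on top.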
Building the app
• We are ready to build the app. Make sure you are still at the top level of your new directory. Here's what ls should show

• Now run the build command. This creates a Docker image, which we're going to tag using -t so it has a friendly name.
Building the app
• docker build -t friendlyhello .
Where are your built images?
• docker images
Run the app
• Run the app, mapping your machine's port 4000 to the container's published port 80 using -p
• docker run -p 4000:80 friendlyhello

• You should see a notice that Python is serving your app at http://0.0.0.0:80. But that message is coming from inside the container, which doesn't know you mapped port 80 of that container to 4000, making the correct URL http://localhost:4000
• Go to that URL in a web browser to see the display content served up on a web page, including "Hello World" text, the container ID, and the Redis error message
End the process

• Hit CTRL+C in your terminal to quit


• Now use docker stop to end the process, using the
CONTAINER ID, like so
• Now let’s run the app in the background, in detached mode:
• docker run -d -p 4000:80 friendlyhello

• You get the long container ID for your app and then are kicked back
to your terminal. Your container is running in the background. You
can also see the abbreviated container ID with docker container ls
(and both work interchangeably when running commands):
• docker container ls
Share image
• To demonstrate the portability of what we just created, let’s
upload our built image and run it somewhere else
• After all, you’ll need to learn how to push to registries when you
want to deploy containers to production
• A registry is a collection of repositories, and a repository is a
collection of images—sort of like a GitHub repository, except the
code is already built. An account on a registry can create many
repositories. The docker CLI uses Docker’s public registry by
default
• If you don’t have a Docker account, sign up for one at
cloud.docker.com. Make note of your username.
Login with your docker id
• Log in to the Docker public registry on your local machine.
• docker login
Tag the image

• The notation for associating a local image with a repository on a


registry is username/repository:tag. The tag is optional, but
recommended, since it is the mechanism that registries use to give
Docker images a version. Give the repository and tag meaningful
names for the context, such as get-started:part1. This will put
the image in the get-started repository and tag it as part1.

• Now, put it all together to tag the image. Run docker tag image
with your username, repository, and tag names so that the image will
upload to your desired destination. The syntax of the command is:
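The naming rule above can be sketched with a hypothetical helper (not part of the Docker CLI; it just composes the image reference string):

```python
def image_reference(username, repository, tag=None):
    # Compose username/repository:tag; Docker assumes :latest
    # when no tag is given.
    return "{}/{}:{}".format(username, repository, tag or "latest")

print(image_reference("alice", "get-started", "part1"))  # alice/get-started:part1
print(image_reference("alice", "get-started"))           # alice/get-started:latest
```

The username here is illustrative; substitute your own Docker ID.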
Publish the image
• Upload your tagged image to the repository
• docker push username/repository:tag

• Once complete, the results of this upload are publicly available. If you log
in to Docker Hub, you will see the new image there, with its pull
command
Pull and run the image from the remote
repository
• From now on, you can use docker run and run your app on any
machine with this command:
• docker run -p 4000:80 username/repository:tag
• If the image isn’t available locally on the machine, Docker will pull it
from the repository.
• If you don’t specify the :tag portion of these commands, the tag of
:latest will be assumed, both when you build and when you run
images. Docker will use the last version of the image that ran without
a tag specified (not necessarily the most recent image).
No matter where docker run executes, it pulls your image, along with Python and all the dependencies from requirements.txt, and runs your code. It all travels together in a neat little package, and the host machine doesn't have to install anything but Docker to run it.
What have you seen so far?
• Basics of Docker
• How to create your first app in the Docker way
• Building the app
• Run the app
• Sharing and Publishing images
• Pull and run images
The Need for Orchestration Systems
• While Docker provided an open standard for packaging and distributing containerized applications, a new problem arose
  – How would all of these containers be coordinated and scheduled?
  – How do all the different containers in your application communicate with each other?
  – How can container instances be scaled?

Solution: Container Orchestration Systems


From Containers to Kubernetes

Container benefits:
• Isolation
• Immutable infrastructure
• Portability
• Faster deployments
• Versioning
• Ease of sharing

Container runtime challenges (container runtime on Host OS / VM, e.g. Docker):
• Networking
• Deployments
• Service discovery
• Auto scaling
• Persisting data
• Logging, monitoring
• Access control

Kubernetes (a container scheduler):
• Orchestration of a cluster of containers across multiple hosts
• Automatic placement, networking, deployments, scaling, roll-out/roll-back, A/B testing
• Declarative, not procedural: declare a target state and Kubernetes reconciles to the desired state; self-healing
• Workload portability: abstracts away cloud-provider specifics; supports multiple container runtimes
Kubernetes
• Kubernetes is an open-source container cluster manager
  – originally developed by Google, donated to the Cloud Native Computing Foundation
  – schedules & deploys containers onto a cluster of machines
    • e.g. ensures that a specified number of instances of an application are running
  – provides service discovery, distribution of configuration & secrets, ...
  – provides access to persistent storage
• Pod
  – smallest deployable unit of compute
  – consists of one or more containers that are always co-located, co-scheduled & run in a shared context
Why Kubernetes?
• It can be run anywhere
  – on-premises
    • bare metal, OpenStack, ...
  – public clouds
    • Google, Azure, AWS, ...
• Aim is to use Kubernetes as an abstraction layer
  – migrate to containerised applications managed by Kubernetes & use only the Kubernetes API
  – can then run out-of-the-box on any Kubernetes cluster
• Avoid vendor lock-in as much as possible by not using any vendor-specific APIs or services
  – except where Kubernetes provides an abstraction
    • e.g. storage, load balancers
Kubernetes Architecture

https://www.slideshare.net/janakiramm/kubernetes-architecture

Kubernetes Master

https://www.slideshare.net/janakiramm/kubernetes-architecture

kube-apiserver
• The apiserver provides a forward-facing REST interface into the Kubernetes control plane and datastore
• All clients, including nodes, users and other applications, interact with Kubernetes strictly through the API Server
• It is the true core of Kubernetes: it acts as the gatekeeper to the cluster, handling authentication and authorization, request validation, mutation, and admission control, in addition to being the front end to the backing datastore
etcd
• etcd acts as the cluster datastore
• It provides a strong, consistent and highly available key-value store used for persisting cluster state

kube-controller-manager
• The controller-manager is the primary daemon that manages all core component control loops
• It monitors the cluster state via the apiserver and steers the cluster towards the desired state
• These controllers include:
  – Node Controller: responsible for noticing and responding when nodes go down
  – Replication Controller: responsible for maintaining the correct number of pods for every replication controller object in the system
  – Endpoints Controller: populates the Endpoints object (that is, joins Services & Pods)
  – Service Account & Token Controllers: create default accounts and API access tokens for new namespaces
cloud-controller-manager
• cloud-controller-manager runs controllers that interact with the underlying cloud providers
• It allows cloud vendors' code and the Kubernetes code to evolve independently of each other


kube-scheduler
• kube-scheduler is a policy-rich engine that evaluates workload requirements and attempts to place each workload on a matching resource
• These requirements can include such things as general hardware requirements, affinity, anti-affinity, and other custom resource requirements


Kubernetes Node

https://www.slideshare.net/janakiramm/kubernetes-architecture

Pod
• A Pod is the basic building block of Kubernetes: the smallest and simplest unit in the Kubernetes object model that you create or deploy
• A Pod represents a running process on your cluster
• A Pod encapsulates an application container (or, in some cases, multiple containers), storage resources, a unique network IP, and options that govern how the container(s) should run
• A Pod represents a unit of deployment: a single instance of an application in Kubernetes, which might consist of either a single container or a small number of containers that are tightly coupled and that share resources
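A minimal Pod manifest, as a hypothetical sketch (the name and image below are illustrative, not from the deck):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod          # illustrative name
spec:
  containers:
    - name: hello          # the single application container in this Pod
      image: nginx:1.25    # illustrative image
      ports:
        - containerPort: 80
```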


kubelet
• An agent that runs on each node in the cluster. It makes
sure that containers are running in a pod.
• The kubelet takes a set of PodSpecs that are provided
through various mechanisms and ensures that the
containers described in those PodSpecs are running and
healthy. The kubelet doesn’t manage containers which
were not created by Kubernetes



kube-proxy
• Enables the Kubernetes service abstraction by
maintaining network rules on the host and performing
connection forwarding



Container Runtime
• The container runtime is the software that is responsible
for running containers
• Kubernetes supports several runtimes
– Docker, rkt, runc and any OCI runtime-spec implementation



Kubernetes Cluster
• Kubernetes coordinates
a highly available cluster
of computers that are
connected to work as a
single unit
• Kubernetes automates
the distribution and
scheduling of application
containers across a
cluster in a more
efficient way



Running Kubernetes Locally via Minikube
• Minikube is a tool that makes it easy to run Kubernetes
locally
• Minikube runs a single-node Kubernetes cluster inside a
VM on your laptop for users looking to try out Kubernetes
or develop with it day-to-day



Hello Minikube
• This tutorial provides a container image built from the following files


Create a minikube cluster
• minikube version

• minikube start

• minikube dashboard



Create a Deployment
• A Kubernetes Pod is a group of one or more Containers,
tied together for the purposes of administration and
networking
• The Pod in this tutorial has only one Container
• A Kubernetes Deployment checks on the health of your
Pod and restarts the Pod’s Container if it terminates
• Deployments are the recommended way to manage the
creation and scaling of Pods



Create a Deployment
• Use the kubectl create command to create a Deployment that manages a Pod
• The Pod runs a Container based on the provided Docker image

kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node


Create a Deployment
View the deployment

kubectl get deployments



Create a Deployment
• View the Pod

kubectl get pods



Create a deployment
• View cluster events

kubectl get events

• View the kubectl configuration

kubectl config view



Create a Service
• By default, the Pod is only accessible by its internal IP
address within the Kubernetes cluster
• To make the hello-node Container accessible from outside
the Kubernetes virtual network, you have to expose the
Pod as a Kubernetes Service
• Expose the Pod to the public internet using the kubectl
expose command
kubectl expose deployment hello-node --type=LoadBalancer --port=8080


Create a Service
• View the Service you just created

kubectl get services



Run a Service
• Run the following command

minikube service hello-node



Bigger Experiment with Kubernetes
Deploying PHP Guestbook application with Redis
• This tutorial shows you how to build and deploy a simple,
multi-tier web application using Kubernetes and Docker
• This example consists of the following components:
– A single-instance Redis master to store guestbook entries
– Multiple replicated Redis instances to serve reads
– Multiple web frontend instances



Objectives
• Start up a Redis master

• Start up Redis slaves

• Start up the guestbook frontend

• Expose and view the Frontend Service



Start up the Redis Master
• The guestbook application uses Redis to store its data
• It writes its data to a Redis master instance and reads
data from multiple Redis slave instances
• Creating the Redis Master Deployment
• Copy the folder here to your system

https://tinyurl.com/anokadockers



*.yaml file
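The manifest itself was shown as an image. As a hedged sketch of what a redis-master Deployment of this shape typically looks like (labels and image tag are illustrative assumptions, not necessarily the tutorial's exact file):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1                     # a single-instance Redis master
  selector:
    matchLabels:
      app: redis
      role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
        - name: master
          image: redis            # illustrative image tag
          ports:
            - containerPort: 6379
```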


Start up the Redis Master
• Launch a terminal window in the directory you
downloaded the manifest files
• Apply the Redis Master Deployment from the redis-master-deployment.yaml file
kubectl apply -f redis-master-deployment.yaml



Start up the Redis Master
• Query the list of Pods to verify that the Redis Master Pod
is running:
kubectl get pods



Run the following command to view the logs from the Redis Master Pod

kubectl logs -f POD-NAME

Replace POD-NAME with the name of your Pod
Creating the Redis Master Service
• The guestbook application needs to communicate to the Redis master to write its data
• You need to apply a Service to proxy the traffic to the Redis master
Pod
• A Service defines a policy to access the Pods
• Launch a terminal window in the directory you downloaded the
manifest files
• Apply the Redis Master Service from the following redis-master-service.yaml file
kubectl apply -f redis-master-service.yaml



Creating the Redis Master Service
• Query the list of Services to verify that the Redis Master
Service is running
• kubectl get service



Start up the Redis Slaves
• Although the Redis master is a single pod, you can make
it highly available to meet traffic demands by adding
replica Redis slaves



Creating the Redis Slave Deployment
• Deployments scale based on the configurations set in the manifest file. In this case, the Deployment object specifies two replicas
• If there are not any replicas running, this Deployment would start the two replicas on your container cluster
• Conversely, if there are more than two replicas running, it would scale down until two replicas are running


Creating the Redis Slave Deployment
• Apply the Redis Slave Deployment from the redis-slave-deployment.yaml file
kubectl apply -f redis-slave-deployment.yaml


Creating the Redis Slave Deployment
• Query the list of Pods to verify that the Redis Slave Pods
are running:
kubectl get pods



Creating the Redis Slave Service
• The guestbook application needs to communicate to
Redis slaves to read data
• To make the Redis slaves discoverable, you need to set
up a Service
• A Service provides transparent load balancing to a set of
Pods



Creating the Redis Slave Service
• Apply the Redis Slave Service from the following redis-slave-service.yaml file
kubectl apply -f redis-slave-service.yaml



Creating the Redis Slave Service
• Query the list of Services to verify that the Redis slave
service is running
kubectl get services



Set up and Expose the Guestbook Frontend
• The guestbook application has a web frontend serving the
HTTP requests written in PHP
• It is configured to connect to the redis-master Service for
write requests and the redis-slave service for Read
requests



Creating the Guestbook Frontend Deployment
• Apply the frontend Deployment from the frontend-deployment.yaml file
kubectl apply -f frontend-deployment.yaml


Creating the Guestbook Frontend Deployment
• Query the list of Pods to verify that the three frontend replicas are running
kubectl get pods -l app=guestbook -l tier=frontend


Creating the frontend service
• The redis-slave and redis-master Services you applied are
only accessible within the container cluster because the
default type for a Service is ClusterIP
• ClusterIP provides a single IP address for the set of Pods the
Service is pointing to
• This IP address is accessible only within the cluster.
• If you want guests to be able to access your guestbook, you
must configure the frontend Service to be externally visible,
so a client can request the Service from outside the container
cluster
• Minikube can only expose Services through NodePort
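A sketch of the kind of frontend Service manifest this implies (field values are illustrative assumptions; the tutorial's actual file may differ):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # Use NodePort on Minikube; change to LoadBalancer on a cloud
  # provider to get an external IP.
  type: NodePort
  ports:
    - port: 80
  selector:
    app: guestbook
    tier: frontend
```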
Creating the frontend service
• Apply the frontend Service from the frontend-service.yaml file
kubectl apply -f frontend-service.yaml



Creating the frontend service
• Query the list of Services to verify that the frontend
Service is running
kubectl get services



Viewing the Frontend Service via NodePort
• If you deployed this application to Minikube or a local
cluster, you need to find the IP address to view your
Guestbook
• Run the following command to get the IP address for the
frontend Service
minikube service frontend --url



Go to a browser and type that URL



Viewing the Frontend Service via
LoadBalancer
• If you deployed the frontend-service.yaml manifest with
type: LoadBalancer you need to find the IP address to
view your Guestbook
• Run the following command to get the IP address for the
frontend Service
kubectl get service frontend



Scale the Web Frontend
• Scaling up or down is easy because your servers are
defined as a Service that uses a Deployment controller
• Run the following command to scale up the number of
frontend Pods:
kubectl scale deployment frontend --replicas=5
• Query the list of Pods to verify the number of frontend
Pods running:
kubectl get pods



Summary
• Kubernetes can help you
– Create clusters
– Deploy applications
– Scale your business



Dr Ganesh Neelakanta Iyer

[email protected]
[email protected]
GANESHNIYER