GCP ACE Notes 6

This document discusses different options for running containers on Google Cloud, including Compute Engine, Cloud Run, Google Kubernetes Engine, and App Engine. It notes that containers allow for better utilization of resources compared to running single applications per VM. The focus of App Engine is on source-code based deployments rather than containerization directly. Docker provides the ability to package applications into lightweight, pre-configured virtual environments called containers that can run on any platform with a container runtime like containerd, which is used on Google Kubernetes Engine nodes.


Proprietary + Confidential

Partner Certification Academy

Associate Cloud Engineer

pls-academy-ace-student-slides-3-2303

The information in this presentation is classified:
Google confidential & proprietary

⚠ This presentation is shared with you under NDA.

● Do not record or take screenshots of this presentation.
● Do not share or otherwise distribute the information in this presentation with anyone inside or outside of your organization.

Thank you!

Session logistics
● When you have a question, please:
○ Click the Raise hand button in Google Meet.
○ Or add your question to the Q&A section of Google Meet.
○ Please note that answers may be deferred until the end of the session.

● These slides are available in the Student Lecture section of your Qwiklabs classroom.

● The session is not recorded.

● Google Meet does not have persistent chat.


○ If you get disconnected, you will lose the chat history.
○ Please copy any important URLs to a local text file as they appear in the chat.

Program issues or concerns?

● Problems with accessing Cloud Skills Boost for Partners
○ [email protected]
● Problems with a lab (locked out, etc.)
○ [email protected]
● Problems with accessing Partner Advantage
○ https://support.google.com/googlecloud/topic/9198654

The Google Cloud Certified Associate Cloud Engineer exam assesses your ability to:

● Set up a cloud solution environment
● Plan and configure a cloud solution
● Deploy and implement a cloud solution
● Ensure successful operation of a cloud solution
● Configure access and security

For more information:

Associate Cloud Engineer
https://cloud.google.com/certification/cloud-engineer

Exam Guide
https://cloud.google.com/certification/guides/cloud-engineer

Sample Questions
https://docs.google.com/forms/d/e/1FAIpQLSfexWKtXT2OSFJ-obA4iT3GmzgiOCGvjrT9OfxilWC1yPtmfQ/viewform

Learning Path - Partner Certification Academy Website

Go to: https://rsvp.withgoogle.com/events/partner-learning/google-cloud-certifications

[Screenshot: click the certification tile on the Academy website (e.g., Professional Cloud Architect); completing the learning path is needed for the exam voucher.]

Associate Cloud Engineer (ACE) Exam Guide

Each module of this course covers Google Cloud services based on the topics in the ACE Exam Guide.

The primary topics are:
● Compute Engine
● VPC Networks
● Google Kubernetes Engine (next discussion)
● Cloud Run, Cloud Functions and App Engine
● Cloud Storage and database options
● Resource Hierarchy/Identity and Access Management (IAM)
● Logging and Monitoring

https://cloud.google.com/certification/guides/cloud-engineer/

Google Kubernetes
Engine, Cloud Run,

02
Cloud Functions, App
Engine
Proprietary + Confidential

Exam Guide Overview - Google Kubernetes Engine

2.2 Planning and configuring compute resources. Considerations include:
2.2.1 Selecting appropriate compute choices for a given workload (e.g., Compute Engine, Google Kubernetes Engine, Cloud Run, Cloud Functions)
2.2.2 Using preemptible VMs and custom machine types as appropriate

3.2 Deploying and implementing Google Kubernetes Engine resources. Tasks include:
3.2.1 Installing and configuring the command line interface (CLI) for Kubernetes (kubectl)
3.2.2 Deploying a Google Kubernetes Engine cluster with different configurations including AutoPilot, regional clusters, private clusters, etc.
3.2.3 Deploying a containerized application to Google Kubernetes Engine
3.2.4 Configuring Google Kubernetes Engine monitoring and logging

4.2 Managing Google Kubernetes Engine resources. Tasks include:
4.2.1 Viewing current running cluster inventory (nodes, pods, services)
4.2.2 Browsing Docker images and viewing their details in the Artifact Registry
4.2.3 Working with node pools (e.g., add, edit, or remove a node pool)
4.2.4 Working with pods (e.g., add, edit, or remove pods)
4.2.5 Working with services (e.g., add, edit, or remove a service)
4.2.6 Working with stateful applications (e.g., persistent volumes, stateful sets)
4.2.7 Managing Horizontal and Vertical autoscaling configurations
4.2.8 Working with management interfaces (e.g., Google Cloud console, Cloud Shell, Cloud SDK, kubectl)
Exam Guide - Kubernetes Engine

4.2 Managing Google Kubernetes Engine resources. Tasks include:


4.2.1 Viewing current running cluster inventory (nodes, pods, services)
4.2.2 Browsing Docker images and viewing their details in the Artifact Registry
4.2.3 Working with node pools (e.g., add, edit, or remove a node pool)
4.2.4 Working with pods (e.g., add, edit, or remove pods)
4.2.5 Working with services (e.g., add, edit, or remove a service)
4.2.6 Working with stateful applications (e.g. persistent volumes, stateful sets)
4.2.7 Managing Horizontal and Vertical autoscaling configurations
4.2.8 Working with management interfaces (e.g., Google Cloud console, Cloud Shell, Cloud SDK, kubectl)

Containers are an approach to app isolation

Containers are typically deployed on VMs:
• Allows better utilization of a VM's resources
• A VM that ran one application before can now run many containers

[Diagram: "From this" (one application per VM) "To this" (many containers per VM)]

Where can you run containers?

● Compute Engine (the first discussion): as an instance created from a container, or in a managed instance group with autoscaling
● Cloud Run
● Google Kubernetes Engine: as a cluster
● App Engine

While App Engine runs in containers (which Google creates for you), the focus of App Engine is on source-code based deployments, not on containerization.

Docker
● Provides the ability to package and run an application in an environment called a container
● Images are very lightweight, pre-configured virtual environments
○ Contain everything needed to run the application, so they do not need to rely on what is currently installed on the host
● Docker images will run on any platform that has a container runtime installed
○ Google Kubernetes Engine uses the containerd runtime on all GKE nodes as of GKE version 1.24

*Docker designed containerd, which is now part of the CNCF, an organization that supports Kubernetes and Docker-based deployments. Docker is still an independent project that uses containerd as its runtime.
https://blog.purestorage.com/purely-informational/containerd-vs-docker-whats-the-difference

Docker website
https://www.docker.com/

Docker is a container format used to run applications; containers can be leveraged in a microservice architecture.

Docker images are lightweight, preconfigured virtual environments used to run our application. The images include the software required to run the containerized application; the application is deployed inside the Docker image.

Docker images are versatile and will run on any platform that has Docker installed.

How do you get an app into a container using Docker?

Create a Dockerfile, which specifies such things as:
● The OS image to use
● Where to copy the code to inside the image
● Other items depending on what is being deployed
○ Libraries to load, etc.

[Slide shows example Dockerfile content]
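The slide's example Dockerfile content is not reproduced in these notes. As a hedged sketch only, a minimal Dockerfile for a small Python app might cover the items above; the base image, paths, and filenames here are illustrative assumptions, not the slide's actual content:

```shell
# Write a hypothetical minimal Dockerfile; every name below is illustrative.
cat > Dockerfile <<'EOF'
# The OS/base image to use
FROM python:3.11-slim
# Where to copy the code to inside the image
WORKDIR /app
COPY . /app
# Libraries to load
RUN pip install -r requirements.txt
# How to start the application
CMD ["python", "main.py"]
EOF
```

With a file like this in the project directory, docker build can produce a runnable image from it.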

Then you build and run the container as an image

$> docker build -t space-invaders .
$> docker run -d space-invaders

● docker build builds a container image and stores it locally as a runnable image.
● docker run starts a container from the image.

When local testing is complete

● Tag the image and upload it to a registry service (like Google Artifact Registry) for sharing.

Tag the image with its location in the Artifact Registry:

$> docker tag space-invaders us-central1-docker.pkg.dev/bt-spaceinvaders-ke/space-invaders/spaceinvaders-image

Manually push the image:

$> docker push us-central1-docker.pkg.dev/bt-spaceinvaders-ke/space-invaders/spaceinvaders-image:latest

● At this point, the image can be deployed to Compute Engine, Cloud Run, Kubernetes Engine, or App Engine.
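The long registry path in these commands follows a fixed pattern: LOCATION-docker.pkg.dev/PROJECT_ID/REPOSITORY/IMAGE:TAG. As a sketch, using the example values from the slide above:

```shell
# Anatomy of an Artifact Registry Docker image path
# (values taken from the slide's example)
LOCATION="us-central1"
PROJECT_ID="bt-spaceinvaders-ke"
REPOSITORY="space-invaders"
IMAGE="spaceinvaders-image"
TAG="latest"
echo "${LOCATION}-docker.pkg.dev/${PROJECT_ID}/${REPOSITORY}/${IMAGE}:${TAG}"
# → us-central1-docker.pkg.dev/bt-spaceinvaders-ke/space-invaders/spaceinvaders-image:latest
```

Before pushing, the local Docker client must also be authorized for the registry host, typically with gcloud auth configure-docker us-central1-docker.pkg.dev.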

Managing deployments

[Diagram, left (manual): Docker packages apps into images from code plus a Dockerfile and deploys them to containers; when local testing is complete, you can manually push to Artifact Registry. Right (enterprise and continuous deployment): code goes to Cloud Source Repositories, Cloud Build builds the container, and the image is stored in Artifact Registry. Docker is not mentioned in the Exam Guide; Artifact Registry and Cloud Build are.]

Developers can test their container locally. When done, they could manually push the
image to the Artifact Registry as shown on the left in this slide.

If you had a hundred developers sharing source files, you would need a system for
managing them, for tracking them, versioning them, and enforcing a check-in, review,
and approval process. They typically push source code to a shared code repository,
such as the Google Cloud Source Repositories, as shown on the right of this slide.
Cloud Source Repositories is a cloud-based solution which integrates with Cloud
Build.

Cloud Build functions similarly to Docker in that it accepts code and configuration and builds containers (among other things). Cloud Build offers many features and services geared toward professional development. It is designed to fit into a continuous integration / continuous deployment (CI/CD) workflow and can scale to handle many application developers working on and continuously updating a live global service.

Artifact Registry

● Provides
○ Built-in vulnerability scanning of container images (optionally scan images for known vulnerabilities)
○ Controlled access with fine-grained control (IAM)
○ Regional and multi-regional repositories
■ Can have multiple per project
● Store multiple artifact formats
○ Docker images
○ OS packages for Linux distributions
○ Language packages for Python, Java, and Node

https://cloud.google.com/artifact-registry

Overview of Artifact Registry: https://cloud.google.com/artifact-registry/docs/overview

Container concepts: https://cloud.google.com/artifact-registry/docs/container-concepts

Artifact Registry has its own IAM roles and can be deployed multi-regionally.
Exam Guide - Kubernetes Engine

2.2 Planning and configuring compute resources. Considerations include:
2.2.1 Selecting appropriate compute choices for a given workload (e.g., Compute Engine, Google Kubernetes Engine, Cloud Run, Cloud Functions) (Compute Engine is discussed in another module)
2.2.2 Using preemptible VMs and custom machine types as appropriate

3.2 Deploying and implementing Google Kubernetes Engine resources. Tasks include:
3.2.1 Installing and configuring the command line interface (CLI) for Kubernetes (kubectl)
3.2.2 Deploying a Google Kubernetes Engine cluster with different configurations including AutoPilot, regional clusters, private clusters, etc.
3.2.3 Deploying a containerized application to Google Kubernetes Engine
3.2.4 Configuring Google Kubernetes Engine monitoring and logging

Where can you run containers? (Next discussion: Google Kubernetes Engine)

● Compute Engine: an instance created from a container, or a managed instance group with autoscaling
● Cloud Run
● Google Kubernetes Engine: a cluster
● App Engine

App Engine is covered later in this module. While App Engine runs in containers (which Google creates for you), the focus of App Engine is on source-code based deployments, not on containerization.

Containers are an approach to app isolation (repeat of an earlier slide)

Containers are typically deployed on VMs:
• Allows better utilization of a VM's resources
• A VM that ran one application before can now run many containers

The word Kubernetes comes from the Greek word for helmsman or pilot

● Just like a cargo ship, a VM (aka "node" in Kubernetes) can host multiple containers
● Each container is independent of the others
○ May have multiples of the same container running depending on the use case
■ Number can be scaled up/down as needed

Photo by Ian Taylor on Unsplash

Kubernetes and Google

Google deploys BILLIONS of containers per week.

01 - The background

Behind the scenes at Google is a technology called Borg, which is used to manage Google's massive infrastructure. As Borg evolved and matured, it became one of Google's go-to technologies, but it wasn't publicly available.

Google is the pioneer when it comes to containerized services. From Gmail to YouTube to Search, everything at Google runs in containers. Containerization allows development teams to move fast, deploy software efficiently, and operate at an unprecedented scale.

Source: Kubernetes: Your Hybrid Cloud Strategy
https://cloud.google.com/files/kubernetes-your-hybrid-cloud-strategy.pdf

Kubernetes and Google

02 - Adapting the technology

Open sourcing Borg wasn't feasible, however, so a group of Google developers decided to create an open source project called Kubernetes.

Google wanted to open source the technology behind Borg. To do so, they created an open source project named Kubernetes in 2014.

Kubernetes and Google

03 - The goal

Their goal was to take the knowledge that Google had learned about managing billions of containers and bring that to the world.

A container story - Google Kubernetes Engine
https://cloud.google.com/blog/topics/developers-practitioners/container-story-google-kubernetes-engine

Kubernetes
Today, Kubernetes is widely supported in the industry:
● Google Kubernetes Engine (GKE)
● Microsoft Azure Kubernetes Service (AKS)
● AWS Elastic Kubernetes Service (EKS)
● It is estimated that more than 54% of Fortune 500 companies have adopted Kubernetes

Kubernetes, or K8s, can run on a variety of supported platforms, including:

● Google Cloud, using Google Kubernetes Engine (GKE)
● Microsoft Azure, using Azure Kubernetes Service (AKS)
● AWS, using Elastic Kubernetes Service (EKS)
● OpenStack
● Mesos and Pivotal Cloud Foundry also support Kubernetes

Exam Guide - Kubernetes Engine


2.2 Planning and configuring compute resources. Considerations include:
2.2.1 Selecting appropriate compute choices for a given workload (e.g., Compute Engine,
Google Kubernetes Engine, Cloud Run, Cloud Functions)
2.2.2 Using preemptible VMs and custom machine types as appropriate

3.2 Deploying and implementing Google Kubernetes Engine resources. Tasks include:
3.2.1 Installing and configuring the command line interface (CLI) for Kubernetes (kubectl)
3.2.2 Deploying a Google Kubernetes Engine cluster with different configurations including AutoPilot,
regional clusters, private clusters, etc.
3.2.3 Deploying a containerized application to Google Kubernetes Engine
3.2.4 Configuring Google Kubernetes Engine monitoring and logging

Installing and configuring the command line interface

● kubectl is the command-line tool used to interact with GKE clusters
● To install:
gcloud components install kubectl

● Authorization plug-ins must be installed to provide authentication tokens to communicate with GKE clusters:
gcloud components install gke-gcloud-auth-plugin

● Update the kubectl configuration to use the plugin:
gcloud container clusters get-credentials CLUSTER_NAME

Install kubectl and configure cluster access:
https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl

The first command line example shows how to deploy a service based on a configuration file; the kubernetes-config.yaml file holds the configuration.

To show running pods, you can use the kubectl get pods command.

The kubectl get deployments command will list all the deployments currently running.

The kubectl describe deployments command is used to display the details of a deployment.

The kubectl command syntax has several parts

kubectl [command] [TYPE] [NAME] [flags]

● [command] - What do you want to do? (get, describe, logs, exec, ...)
● [TYPE] - ...on what type of object? (pods, deployments, nodes, ...)

kubectl syntax:
https://kubernetes.io/docs/reference/kubectl/cheatsheet/

Once the config file in the .kube folder has been configured, the kubectl command
automatically references this file and connects to the default cluster without prompting
you for credentials. Now let’s talk about how to use the kubectl command. Its syntax is
composed of several parts: the command, the type, the name, and optional flags.

‘Command’ specifies the action that you want to perform, such as get, describe, logs,
or exec. Some commands show you information, while others allow you to change the
cluster’s configuration.

‘TYPE’ defines the Kubernetes object that the ‘command’ acts upon. For example,
you could specify Pods, Deployments, nodes, or other objects, including the cluster
itself.

TYPE used in combination with ‘command’ tells kubectl what you want to do and the
type of object you want to perform that action on.

The kubectl command syntax has several parts

kubectl [command] [TYPE] [NAME] [flags]

● [NAME] - What is that object's name?
● [flags] - Any special requests?

kubectl get pods
kubectl get pod my-test-app
kubectl get pod my-test-app -o=yaml
kubectl get pods -o=wide

‘NAME’ specifies the object defined in ‘TYPE.’ The Name field isn’t always needed,
especially when you’re using commands that list or show you information.
For example, if you run the command “kubectl get pods” without specifying a name,
the command returns the list of all Pods. To filter this list you specify a Pod’s name,
such as “kubectl get pod my-test-app ”. kubectl then returns information only on the
Pod named ‘my-test-app’.

Some commands support additional optional flags that you can include at the end of
the command.
Think of this as making a special request, like formatting the output in a certain way.
You could view the state of a Pod by using the command “kubectl get pod
my-test-app -o=yaml”. By the way, telling kubectl to give you output in YAML format is
a really useful tool. You’ll often want to capture the existing state of a Kubernetes
object in a YAML file so that, for example, you can recreate it in a different cluster.

You can also use flags to display more information than you normally see. For
instance, you can run the command “kubectl get pods -o=wide” to display the list of
Pods in “wide” format, which means you see additional columns of data for each of
the Pods in the list. One noteworthy piece of extra information you get in wide format:
which Node each Pod is running on.
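The four-part anatomy described above can be sketched by assembling a kubectl invocation from its pieces. This only composes and prints the command string ("my-test-app" is the slide's example Pod name); actually running it would require kubectl and credentials for a live cluster:

```shell
# Compose a kubectl invocation from its four syntactic parts.
COMMAND="get"        # what do you want to do?
TYPE="pod"           # on what type of object?
NAME="my-test-app"   # which object? (example name from the slide)
FLAGS="-o=yaml"      # any special requests? (YAML-formatted output)
echo "kubectl $COMMAND $TYPE $NAME $FLAGS"
# → kubectl get pod my-test-app -o=yaml
```

Swapping FLAGS for "-o=wide", or dropping NAME entirely, yields the other example invocations shown on the slide.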

Exam Guide - Kubernetes Engine


2.2 Planning and configuring compute resources. Considerations include:
2.2.1 Selecting appropriate compute choices for a given workload (e.g., Compute Engine,
Google Kubernetes Engine, Cloud Run, Cloud Functions)
2.2.2 Using preemptible VMs and custom machine types as appropriate

3.2 Deploying and implementing Google Kubernetes Engine resources. Tasks include:
3.2.1 Installing and configuring the command line interface (CLI) for Kubernetes (kubectl)
3.2.2 Deploying a Google Kubernetes Engine cluster with different configurations including AutoPilot,
regional clusters, private clusters, etc.
3.2.3 Deploying a containerized application to Google Kubernetes Engine
3.2.4 Configuring Google Kubernetes Engine monitoring and logging

A Kubernetes Engine cluster is a collection of nodes (VMs)

● Application containers are deployed to the cluster within "pods"
○ Kubernetes decides which nodes the pods will run on

GKE automates the creation of a virtual machine cluster that provides resources to containerized applications. When deploying applications, you deploy them to a cluster, and the placement of the applications is handled by Kubernetes.

Creating Kubernetes Engine clusters

Autopilot: a pre-configured cluster with an optimized configuration that is ready for production workloads.

Standard: provides advanced configuration flexibility over the cluster's underlying infrastructure.

Types of clusters (Standard vs Autopilot, Regional vs Zonal):
https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-cluster

Standard cluster architecture:
https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture

Creating an Autopilot cluster:
https://cloud.google.com/kubernetes-engine/docs/how-to/creating-an-autopilot-cluster

GKE Autopilot mode - Shared Responsibility model

Autopilot:
https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview

Zonal vs regional clusters

[Diagram: a zonal cluster has its control plane and nodes in a single zone; a regional cluster spreads its control planes and nodes across multiple zones within a region.]

Creating a regional cluster:


https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-regional-cluster

Creating a zonal cluster:


https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-zonal-cluster

By default, a cluster launches in a single Google Cloud compute zone with three
identical nodes, all in one node pool. The number of nodes can be changed during or
after the creation of the cluster. Adding more nodes and deploying multiple replicas of
an application will improve an application’s availability. But only up to a point. What
happens if the entire compute zone goes down?

You can address this concern by using a GKE regional cluster. Regional clusters have
a single API endpoint for the cluster. However, its control planes and nodes are
spread across multiple Compute Engine zones within a region.

Regional clusters ensure that the availability of the application is maintained across
multiple zones in a single region. In addition, the availability of the control plane is
also maintained so that both the application and management functionality can
withstand the loss of one or more, but not all, zones. By default, a regional cluster is
spread across 3 zones, each containing 1 control plane and 3 nodes. These numbers
can be increased or decreased. For example, if you have five nodes in Zone 1, you
will have exactly the same number of nodes in each of the other zones, for a total of
15 nodes. Once you build a zonal cluster, you can’t convert it into a regional cluster,
or vice versa.

A regional or zonal GKE cluster can also be set up as a private cluster

● A private cluster is a type of cluster that only has internal IP addresses.
○ A VPC subnet is created to host the cluster worker nodes
■ Nodes, Pods, and Services receive unique subnet IP address ranges
○ The control plane communicates with the nodes via VPC Peering
■ Automatically set up by Google
● These are isolated from the internet by default
○ Can allow restricted access from certain IPs, e.g., CI/CD services

Creating a private cluster:
https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters

In a private cluster, the entire cluster (that is, the control plane and its nodes) is hidden from the public internet.

Cluster control planes can be accessed by Google Cloud products, such as Cloud Logging or Cloud Monitoring, through an internal IP address.

They can also be accessed by authorized networks through an external IP address. Authorized networks are basically IP address ranges that are trusted to access the control plane. In addition, nodes can have limited outbound access through Private Google Access, which allows them to communicate with other Google Cloud services. For example, nodes can pull container images from Container Registry without needing external IP addresses.

Creating Kubernetes Engine Clusters - CLI

Standard:

gcloud container clusters create "my-cluster" --project "my-project" --region "us-central1"

Autopilot:

gcloud container clusters create-auto "my-cluster" --project "my-project" --region "us-central1"

Private (there is no separate create command; a private cluster is created with the standard create command plus private-cluster flags):

gcloud container clusters create "my-cluster" --project "my-project" --region "us-central1" --enable-ip-alias --enable-private-nodes --master-ipv4-cidr "172.16.0.0/28"

Exam Guide - Kubernetes Engine


2.2 Planning and configuring compute resources. Considerations include:
2.2.1 Selecting appropriate compute choices for a given workload (e.g., Compute Engine,
Google Kubernetes Engine, Cloud Run, Cloud Functions)
2.2.2 Using preemptible VMs and custom machine types as appropriate

3.2 Deploying and implementing Google Kubernetes Engine resources. Tasks include:
3.2.1 Installing and configuring the command line interface (CLI) for Kubernetes (kubectl)
3.2.2 Deploying a Google Kubernetes Engine cluster with different configurations including AutoPilot,
regional clusters, private clusters, etc.
3.2.3 Deploying a containerized application to Google Kubernetes Engine
3.2.4 Configuring Google Kubernetes Engine monitoring and logging

A Kubernetes cluster has nodes, pods, and containers

[Diagram: the cluster contains nodes; each node hosts pods; each container runs in a pod.]

Cluster nodes auto-upgrading:
https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-upgrades

Cluster nodes auto-repair:
https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-repair

A Kubernetes cluster is composed of nodes, which are a unit of hardware resources. Nodes in GKE are implemented as VMs in Compute Engine. Each node has pods. Pods are resource management units. A pod is how Kubernetes controls and manages resources needed by applications and how it executes code. Pods also give the system fine-grained control over scaling.

Each pod hosts, manages, and runs one or more containers. The containers in a pod share networking and storage.

So typically, there is one container per pod, unless the containers hold closely related applications. For example, a second container might contain the logging system for the application in the first container.

A pod can be moved from one node to another without reconfiguring or rebuilding anything.

This design enables advanced controls and operations that give systems built on Kubernetes unique qualities.

Kubernetes deploys containers to nodes

[Diagram: YAML definitions are submitted to the control plane, and the scheduler assigns pods to nodes. A pod YAML can deploy individual pods, but nothing restarts them if they die. A Deployment YAML deploys multiple pods (aka a replica set) and will restart one if it dies.]

Pods: https://cloud.google.com/kubernetes-engine/docs/concepts/pod

Creating pods:
https://cloud.google.com/kubernetes-engine/docs/concepts/pod#creating_pods

Pods can be created from a pod spec yaml file using the "kubectl apply" command:
https://kubernetes.io/docs/concepts/workloads/pods/

Spec: https://kubernetes.io/docs/concepts/workloads/pods/

A pod template (which contains a pod spec) is included in a Deployment yaml file.
This tells GKE how many pods to create and keep running at all times.
Pod template: https://kubernetes.io/docs/concepts/workloads/pods/#pod-templates

Deployments: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/

Each cluster has a control plane node that determines what happens on the cluster.
There are usually at least three of them for availability. And they can be located
across zones. A Kubernetes job makes changes to the cluster.

For example a pod YAML file provides the information to start up and run a pod on a
node. If for some reason a pod stops running or a node is lost, the pod will not
automatically be replaced. The Deployment YAML tells Kubernetes how many pods
you want running. So the Kubernetes deployment is what keeps a number of pods
running. The Deployment YAML also defines a Replica Set, which is how many copies
of a container you want running. The Kubernetes scheduler determines on which
node and in which pod the replica containers are to be run.

Kubernetes Deployment Configuration Example

apiVersion: apps/v1
kind: Deployment
metadata:
  name: devops-deployment
  labels:
    <Some code omitted to save space>
spec:
  replicas: 3
  selector:
    <Some code omitted to save space>
  template:
    <Some code omitted to save space>
    spec:
      containers:
      - name: devops-demo
        image: us-central1-docker.pkg.dev/si/si-image:latest  # image stored in Artifact Registry
        ports:
        - containerPort: 8080
Overview of deploying workloads


https://cloud.google.com/kubernetes-engine/docs/how-to/deploying-workloads-overvie
w

Deploying a stateless Linux application:


https://cloud.google.com/kubernetes-engine/docs/how-to/stateless-apps

The illustration is an example of a deployment configuration.

The top portion of the bolded text sets the kind of the file to Deployment.

The middle bolded text section shows replicas set to 3. So this deployment will always
have 3 pods running.

The bottom portion of the text sets the container image used for the pod.

Deploying to Kubernetes via command line


Create cluster
gcloud container clusters create si-cluster \
--zone us-central1-a --machine-type=e2-micro \
--num-nodes 2
Connect and apply yaml file
gcloud container clusters get-credentials si-cluster --zone us-central1-a

kubectl apply -f devops-deployment.yaml

Show the running pods


kubectl get pods

Show all the deployments


kubectl get deployments

gcloud container syntax


https://cloud.google.com/sdk/gcloud/reference/container

Creating a Load Balancer service via command line

Create a load balancer to route requests to the pods

kubectl expose deployment devops-deployment \
  --port=80 --target-port=8080 --type=LoadBalancer

To get the load balancer public IP address, use the following command:

kubectl get services


(More about services later.)

GKE Services
https://cloud.google.com/kubernetes-engine/docs/concepts/service

We need to expose our application so users can connect to it. We use the kubectl
expose deployment command for this. The command sets the port to 80, the target
port to 8080, and the type of service to LoadBalancer.

Once the configuration is complete, we can run the kubectl get services command to
see the details of the load balancer, including the public IP address.
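The kubectl expose command shown above generates a Service object behind the scenes. As a sketch, a roughly equivalent Service manifest could look like the following — note the selector label is an assumption and must match the labels in your Deployment's pod template:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: devops-deployment
spec:
  type: LoadBalancer     # GKE provisions an external load balancer
  selector:
    app: devops-demo     # assumed label; must match the Deployment's pod labels
  ports:
  - port: 80             # port exposed by the load balancer
    targetPort: 8080     # port the container listens on
```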

Scaling a Deployment (scaling is discussed in more depth in the next section)

● Use scale command to manually change the number of instances

kubectl scale deployment devops-deployment \
  --replicas=10

● To dynamically scale up and down, create an autoscaler


○ Specify min and max number of machines and some metric to
monitor
kubectl autoscale deployment devops-deployment --min=5
--max=10 --cpu-percent=60

A deployment can be scaled using the Console or the command line. In the first
example, the command scales the deployment to 10 replicas. Regardless of resource
usage, there will always be 10.

To configure autoscaling via the command line, use the kubectl autoscale
command. In the example, the deployment will run 5 replicas as a minimum. Based
on CPU percentage, the replicas scale to a maximum of 10.
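The kubectl autoscale command creates a HorizontalPodAutoscaler object. A roughly equivalent manifest sketch for the example above:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: devops-deployment
spec:
  scaleTargetRef:                    # the Deployment to scale
    apiVersion: apps/v1
    kind: Deployment
    name: devops-deployment
  minReplicas: 5
  maxReplicas: 10
  targetCPUUtilizationPercentage: 60
```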

Deleting Deployments and Resources

● Use the delete command to destroy anything previously created


○ Specifying a configuration file will delete everything created from it

kubectl delete -f devops-deployment.yaml

● Can also delete resources individually when created at the command line

kubectl delete services [name here]

Finally, here are a few examples on how to delete resources. Use the delete
command to destroy anything previously created.

In the first example, you can delete resources based on changes made to the
deployment file.

You can also delete resources individually. The second example shows how to do
that.

Summary of Kubernetes Terms

● Pods are the smallest unit of deployment


○ Usually pods represent a single container
● Clusters are collections of machines that will run containers
○ Each machine is a node in the cluster
● Replica sets are used to create multiple instances of a pod
○ Guarantee pods are healthy and the right number exist
● Load balancers route requests to pods
● Autoscalers monitor load and create or delete pods
● Deployments are configurations that define service resources

A few terms to understand when working with Kubernetes.

Pods are the smallest unit of deployment. A pod can be comprised of one or more
containers, but typically it is one container per pod.

Clusters are a collection of instances the containers can run on. Each instance is a
node in a cluster.

Replica sets are used to create multiple instances of a pod. The replica sets can
guarantee the desired number is always running and only healthy pods are used.

Kubernetes can leverage cloud load balancers to distribute traffic to multiple pods.
The load balancer is run as a network service.

To ensure the proper performance of your application, you can configure autoscalers
to add and remove pods.

Deployments are configurations that define service resources.



Exam Guide - Kubernetes Engine


2.2 Planning and configuring compute resources. Considerations include:
2.2.1 Selecting appropriate compute choices for a given workload (e.g., Compute Engine,
Google Kubernetes Engine, Cloud Run, Cloud Functions)
2.2.2 Using preemptible VMs and custom machine types as appropriate

3.2 Deploying and implementing Google Kubernetes Engine resources. Tasks include:
3.2.1 Installing and configuring the command line interface (CLI) for Kubernetes (kubectl)
3.2.2 Deploying a Google Kubernetes Engine cluster with different configurations including AutoPilot,
regional clusters, private clusters, etc.
3.2.3 Deploying a containerized application to Google Kubernetes Engine
3.2.4 Configuring Google Kubernetes Engine monitoring and logging

Cloud Logging and GKE (note the use of “k8s”)

Configuring monitoring and logging support for a new cluster

● Logging and Monitoring are enabled by default in GKE
● Can configure which logs are sent to Cloud Logging
○ Options in both commands are SYSTEM, WORKLOAD, or NONE
○ SYSTEM: audit logs for Admin activity, data access logs, and events logs
○ WORKLOAD: logs produced from applications running in the containers

gcloud container clusters create [CLUSTER_NAME] \
  --zone=[ZONE] \
  --project=[PROJECT_ID] \
  --logging=SYSTEM,WORKLOAD

● Can also configure which metrics are sent to Cloud Monitoring

gcloud container clusters create [CLUSTER_NAME] \
  --zone=[ZONE] \
  --project=[PROJECT_ID] \
  --monitoring=SYSTEM

Configuring Cloud Operations for GKE:


https://cloud.google.com/stackdriver/docs/solutions/gke/installing

Managing GKE logs:


https://cloud.google.com/stackdriver/docs/solutions/gke/managing-logs

Introducing Kubernetes control plane metrics in GKE (Blog Sep 8, 2022):


https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-control-pla
ne-metrics-are-generally-available

Exam Guide - Kubernetes Engine


4.2 Managing Google Kubernetes Engine resources. Tasks include:
4.2.1 Viewing current running cluster inventory (nodes, pods, services)
4.2.2 Browsing Docker images and viewing their details in the Artifact Registry
4.2.3 Working with node pools (e.g., add, edit, or remove a node pool)
4.2.4 Working with pods (e.g., add, edit, or remove pods)
4.2.5 Working with services (e.g., add, edit, or remove a service)
4.2.6 Working with stateful applications (e.g. persistent volumes, stateful sets)
4.2.7 Managing Horizontal and Vertical autoscaling configurations
4.2.8 Working with management interfaces (e.g., Google Cloud console, Cloud Shell, Cloud SDK, kubectl)

Node pools are cluster subsets with identical machine configurations

● Share the same hardware, OS, and GKE version
○ Sizing, scaling, and upgrades operate per pool
○ Can cover a full region – or just select zones
● Additional pools of different sizes and types can be added after cluster creation
○ For example, a pool with local SSDs, or Spot VMs, or a specific image, or a different machine type
● Common configurations:
○ Each stateful application gets its own node pool
○ Batch jobs in a node pool with Preemptible VMs

About node pools


https://cloud.google.com/kubernetes-engine/docs/concepts/node-pools

Running a GKE application on spot nodes with on-demand nodes as fallback:


https://cloud.google.com/blog/topics/developers-practitioners/running-gke-application-
spot-nodes-demand-nodes-fallback

Multiple node pools example

[Diagram: a single control plane (apiserver, scheduler, controller, etcd) manages two
node pools — a small VM node pool running app1 and app2, and a big VM node pool
running app3. A kubelet runs on each node, kubectl communicates with the apiserver,
and the cluster provides networking, services, and data storage.]
Here we have 2 node pools. The blue pool is running two apps and the red pool has
one app deployed. Both pools are controlled by the same control plane.

Cluster administrators configure the cluster by sending requests to apiservers on the


Control Plane using a command-line tool called kubectl. Kubectl can be installed and
run anywhere.

From there, the apiserver communicates with the cluster in two primary ways:
● To the kubelet process that runs on each node
● To any node, pod, or service through the apiserver's proxy functionality (not
shown).

Then pods are started on various nodes. In this example, there are three types of
pods running (shown in yellow, green and teal).

Add a node pool in the Console



Managing node pools with the CLI


● Create a pool
gcloud container node-pools create app3-pool --cluster bt-si-cluster \
  --zone us-central1-a --machine-type e2-medium \
  --disk-type pd-standard --disk-size 100 --num-nodes 3

● Resize a pool
gcloud container clusters resize bt-si-cluster \
--node-pool app3-pool \
--num-nodes 10

● Delete a pool

gcloud container node-pools delete app3-pool \
  --cluster bt-si-cluster

Add and manage node pools:


https://cloud.google.com/kubernetes-engine/docs/how-to/node-pools

Delete a node pool:


https://cloud.google.com/kubernetes-engine/docs/how-to/node-pools#deleting_a_nod
e_pool
Exam Guide - Kubernetes Engine

4.2 Managing Google Kubernetes Engine resources. Tasks include:


4.2.1 Viewing current running cluster inventory (nodes, pods, services)
4.2.2 Browsing Docker images and viewing their details in the Artifact Registry
4.2.3 Working with node pools (e.g., add, edit, or remove a node pool)
4.2.4 Working with pods (e.g., add, edit, or remove pods)
4.2.5 Working with services (e.g., add, edit, or remove a service)
4.2.6 Working with stateful applications (e.g. persistent volumes, stateful sets)
4.2.7 Managing Horizontal and Vertical autoscaling configurations
4.2.8 Working with management interfaces (e.g., Google Cloud console, Cloud Shell, Cloud SDK, kubectl)

Services allow communication to/from pods


● Pods in a deployment are regularly created and destroyed, causing their IP addresses to
change constantly
○ Makes it difficult for frontend applications to identify which pods to connect to
● Services consist of
○ A set of pods
○ A policy to access them
● The most common types of Kubernetes services are
○ ClusterIP
○ NodePort
○ LoadBalancer
○ Ingress (not really a service)

Services
https://cloud.google.com/kubernetes-engine/docs/concepts/service

Another overview:
https://kubernetes.io/docs/concepts/services-networking/service/

Exposing applications using services (includes add, edit and remove):


https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps

Service details
● ClusterIP
○ Default service
○ Internal to the cluster and allows applications within the pods to communicate with
each other
● NodePort
○ Opens a specific port on each node in the cluster
○ Traffic sent to that port is forwarded to the pods
● LoadBalancer
○ Standard way to expose a Kubernetes service externally so it can be accessed over the
internet
○ In GKE this creates an external TCP Network Load Balancer with one IP address
accessible to external users
● Ingress
○ In GKE, the ingress controller creates an HTTP(S) Load Balancer, which can route traffic
to services in the Kubernetes cluster based on path or subdomain

Creating a Load Balancer service via command line (repeated from an earlier slide)

Create a load balancer to route requests to the pods

kubectl expose deployment devops-deployment \
  --port=80 --target-port=8080 --type=LoadBalancer

To get the load balancer public IP address, use the following command:
kubectl get services

To delete the service

kubectl delete services [name here]

We need to expose our application so users can connect to it. We use the kubectl
expose deployment command for this. The command sets the port to 80, the target
port to 8080, and the type of service to LoadBalancer.

Once the configuration is complete, we can run the kubectl get services command to
see the details of the load balancer, including the public IP address.
Exam Guide - Kubernetes Engine

4.2 Managing Google Kubernetes Engine resources. Tasks include:


4.2.1 Viewing current running cluster inventory (nodes, pods, services)
4.2.2 Browsing Docker images and viewing their details in the Artifact Registry
4.2.3 Working with node pools (e.g., add, edit, or remove a node pool)
4.2.4 Working with pods (e.g., add, edit, or remove pods)
4.2.5 Working with services (e.g., add, edit, or remove a service)
4.2.6 Working with stateful applications (e.g. persistent volumes, stateful sets)
4.2.7 Managing Horizontal and Vertical autoscaling configurations
4.2.8 Working with management interfaces (e.g., Google Cloud console, Cloud Shell, Cloud SDK, kubectl)

Kubernetes Engine StatefulSets


● StatefulSets represent a set of Pods with unique, persistent identities and stable
hostnames that GKE maintains regardless of where they are scheduled
● Some examples of use cases include:
○ A Redis pod that needs to maintain access to the same storage volume
■ Even if it is redeployed or restarted
○ A Cassandra implementation with database sharding across Pods
○ A MongoDB database where multiple read replicas are needed to serve
read-only traffic
● Stateful sets can take advantage of GKE health checks
○ E.g., if a pod containing a read-replica becomes non-responsive, a new pod
will be created in its place
■ Its existing storage volume will be re-attached

To run or not to run a database on Kubernetes: What to consider


https://cloud.google.com/blog/products/databases/to-run-or-not-to-run-a-database-on-
kubernetes-what-to-consider

Deploying a stateful application:


https://cloud.google.com/kubernetes-engine/docs/how-to/stateful-apps

StatefulSet:
https://cloud.google.com/kubernetes-engine/docs/concepts/statefulset

Persistent volumes and dynamic provisioning:


https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes

Suggested tutorial:
https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/

Suggested Lab: Running a MongoDB Database in Kubernetes with StatefulSets


https://partner.cloudskillsboost.google/catalog_lab/327

StatefulSet Example - MongoDB database

[Diagram: a MongoDB replica set deployed as a StatefulSet. The primary pod is
read/write; replica pods are read-only. Each pod has its own data volume, and if a
pod gets restarted, its existing volume will be reattached.]

● Synching handled by MongoDB
● Connectivity to the correct pod is handled at the application level
There are several MongoDB implementations in Cloud Marketplace. MongoDB


automatically handles the replication.

StatefulSet Pods are unique


● GKE provides guarantees about the ordering and uniqueness of its Pods
○ Pods are created one after the other, until the max specified is reached
■ Based on an identical container spec (same as a Deployment)
■ A “sticky” identity for each Pod is maintained (different from a
Deployment)
● For example, mypod-0, mypod-1, mypod-2
■ If a Pod fails, it is recreated, given the same identity and matched
with its existing volume
○ State information is maintained in persistent disk storage
■ Each Pod gets its own dedicated volume

PersistentVolume and PersistentVolumeClaim

● PersistentVolume (PV)
○ A piece of storage in the cluster that has been manually provisioned by an
administrator, or dynamically provisioned by Kubernetes using a StorageClass
● PersistentVolumeClaim (PVC)
○ A request for storage by a user that can be fulfilled by a PV
● Both are independent from Pod lifecycles and preserve data through restarting,
rescheduling, and even deleting Pods

[Illustration annotations: the container details include the path within the container
where a storage volume is mounted; the volume is mounted as read-write by one pod
only; GKE will dynamically create 100 GiB of storage for each pod.]

The prior page mentions volumes. This is an example of a manifest that creates a
dynamic volume for a MongoDB container.
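As a sketch of the manifest fragment being described — assuming a volume name of mongo-persistent-storage — a dynamic volume claim template for each pod might look like:

```yaml
volumeClaimTemplates:
- metadata:
    name: mongo-persistent-storage   # assumed volume name
  spec:
    accessModes: [ "ReadWriteOnce" ] # mounted as read-write by one pod only
    resources:
      requests:
        storage: 100Gi               # GKE dynamically creates 100 GiB per pod
```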

Deploying a stateful application

● StatefulSets use a Pod template, which contains a specification for its Pods

kubectl apply -f STATEFULSET_FILENAME

[Illustration annotations: the StatefulSet specifies persistence information; Pods with
the label role: mongo belong to this replica set, and Pods created by the template will
have that label; the container details include the path within the container where a
storage volume is mounted; GKE will dynamically create the storage for each pod,
mounted as read-write by one pod only.]

Tutorial: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/

Suggested Lab: Running a MongoDB Database in


Kubernetes with StatefulSets (If time permits)
● Topics include
○ Deploy a Kubernetes cluster
and a StatefulSet
○ Connect a Kubernetes
cluster to a MongoDB
replica set
○ Scale MongoDB replica set
instances up and down.

https://partner.cloudskillsboost.google/catalog_lab/327

Exam Guide - Kubernetes Engine


4.2 Managing Google Kubernetes Engine resources. Tasks include:
4.2.1 Viewing current running cluster inventory (nodes, pods, services)
4.2.2 Browsing Docker images and viewing their details in the Artifact Registry
4.2.3 Working with node pools (e.g., add, edit, or remove a node pool)
4.2.4 Working with pods (e.g., add, edit, or remove pods)
4.2.5 Working with services (e.g., add, edit, or remove a service)
4.2.6 Working with stateful applications (e.g. persistent volumes, stateful sets)
4.2.7 Managing Horizontal and Vertical autoscaling configurations
4.2.8 Working with management interfaces (e.g., Google Cloud console, Cloud Shell, Cloud SDK, kubectl)

Autoscaling Strategies

Four Scalability Dimensions

Configuring multidimensional Pod autoscaling


https://cloud.google.com/kubernetes-engine/docs/how-to/multidimensional-pod-autos
caling

Resource Management for Pods and Containers

● A Pod spec (optionally) defines how much CPU and memory (RAM) a container needs
○ Used by the scheduler to determine which node to place the Pod on
● Requests
○ Minimum amount of CPU/memory needed
● Limits
○ Maximum allowed
○ The system kernel terminates processes that attempt to allocate memory over the
max allowed, with an out of memory (OOM) error

[Illustration: a Pod with 2 containers. Total Pod requests: CPU 0.5 cores, memory
128 MiB. Total Pod limits: CPU 1.0 core, memory 256 MiB.]

1 CPU unit is 1 physical CPU core, or 1 virtual core, usually measured in millicores
(1,000 millicores = 1 vCPU). Limits and requests for memory are measured in bytes.
Units are case sensitive: m = millibyte, M = megabyte.

Image from:
https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

In Kubernetes, 1 CPU unit is equivalent to 1 physical CPU core, or 1 virtual core,
usually measured in millicores (1,000 millicores = 1 vCPU).

Limits and requests for memory are measured in bytes
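These unit conventions can be illustrated with a small shell sketch. The helper functions below are purely illustrative — they are not part of kubectl or gcloud:

```shell
#!/bin/sh
# Illustrative helpers: convert Pod-spec resource quantities to base units.
# "250m" CPU = 250 millicores; "0.5" CPU = 500 millicores = half a vCPU.
cpu_to_millicores() {
  case "$1" in
    *m) echo "${1%m}" ;;                                     # already in millicores
    *)  awk -v c="$1" 'BEGIN { printf "%d\n", c * 1000 }' ;; # whole/fractional cores
  esac
}
# "128Mi" memory = 128 mebibytes = 128 * 1024 * 1024 bytes.
mem_mi_to_bytes() {
  echo $(( ${1%Mi} * 1024 * 1024 ))
}
cpu_to_millicores 0.5      # prints 500
cpu_to_millicores 250m     # prints 250
mem_mi_to_bytes 128Mi      # prints 134217728
```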

For more details on Requests and Limits, see:


https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

How Pods with resource requests are scheduled


● Kubernetes Scheduler selects a node for the Pod
○ Each node has a maximum capacity for the amount of CPU
and memory it can provide for Pods
○ The Scheduler ensures that the sum of the resource requests
of the scheduled containers is less than the capacity of the
node
○ When total node capacity is reached, additional Pods are
marked as “unschedulable”
■ Solutions:
● Implement Cluster Autoscaler to add more nodes
● Use Node Auto-Provisioning (NAP) to
automatically create node pools on a as-needed
basis
● Manually scale # of nodes/add node pools

Autoscaling nodes
● Cluster autoscaler – add/remove nodes based on resource requests of pods running in
the node pool
gcloud container clusters create example-cluster \
--num-nodes 2 \
--zone us-central1-a \
--node-locations us-central1-a,us-central1-b,us-central1-f \
--enable-autoscaling --min-nodes 1 --max-nodes 10

● Node auto provisioning - add/delete node pools as needed (managed by Google)
○ Feature is automatically enabled in Autopilot clusters
○ To enable it in Standard GKE:

gcloud container clusters update dev-cluster \
  --enable-autoprovisioning \
  --min-cpu 1 \
  --min-memory 1 \
  --max-cpu 10 \
  --max-memory 64

This scales between a total cluster size of 1 CPU / 1 GB memory and a maximum of
10 CPU / 64 GB memory.

Cluster nodes scaling


https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler

Using node auto-provisioning


https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-provisioning

Enabling node auto-provisioning:


https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-provisioning#ena
ble

Youtube - Autoscaling with GKE: Clusters and nodes:


https://www.youtube.com/watch?v=VNAWA6NkoBs&t=106s

Summary: Cluster Autoscaling vs Node Auto-provisioning

Cluster autoscaling
● Automatically resizes node pools based on the workload demands
○ High demand: adds nodes to the node pool
○ Low demand: scales back down to a minimum size
● Requires active monitoring of node resource usage to ensure you haven’t over/under
provisioned node resources
○ Under provisioned: extra overhead of adding additional nodes when needed (+ node cost)
○ Over provisioned: paying too much per node

Node auto-provisioning
● Google optimizes the compute resources in each node pool to match workload requirements
● Not best for sudden spikes in traffic
● Will take longer to create a new node pool vs adding a node to an existing pool

Horizontal Pod Autoscaler


● Automatically add more pods to a Deployment or StatefulSet when
workload increases
○ A control loop runs intermittently (default = 15 seconds) and
compares resource utilization against the metrics specified
● Metrics are based on
○ Actual (or percentage) resource usage: when a given Pod's
CPU or memory usage exceeds a specified threshold.
○ Custom metrics: based on any metric reported by a
Kubernetes object, such as the rate of client requests per
second or I/O writes per second.
■ Useful if your application is prone to network or storage
bottlenecks, rather than CPU or memory.
○ External metrics: based on a metric from an application or
service external to your cluster, e.g., size of a Pub/Sub queue
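The control loop's sizing rule is documented as desiredReplicas = ceil(currentReplicas × currentMetricValue / targetMetricValue). A small shell sketch of that arithmetic — the function itself is illustrative only, not part of any Kubernetes tooling:

```shell
#!/bin/sh
# desiredReplicas = ceil(currentReplicas * currentMetricValue / targetMetricValue)
desired_replicas() {  # args: current_replicas current_metric target_metric
  awk -v r="$1" -v c="$2" -v t="$3" \
    'BEGIN { d = r * c / t; printf "%d\n", (d == int(d) ? d : int(d) + 1) }'
}
desired_replicas 4 90 60   # 4 pods at 90% CPU with a 60% target -> 6
desired_replicas 4 30 60   # underutilized -> scales down to 2
desired_replicas 5 50 60   # 4.17 rounds up -> 5
```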

Horizontal pod scaling


https://cloud.google.com/kubernetes-engine/docs/concepts/horizontalpodautoscaler

Custom and external metrics for Horizontal autoscaling workloads:


https://cloud.google.com/kubernetes-engine/docs/concepts/custom-and-external-metri
cs

Vertical Pod Autoscaler


● Provides recommendations for resource usage over time
○ Use horizontal pod autoscaler for sudden increases
● Vertical Pod autoscaling recommends values for CPU and
memory requests for Pods
○ In lieu of you having to monitor and manually set
CPU/Memory requests and limits for the containers in
your Pods,
○ Recommendations can be used to manually update your
Pods (delete and recreate), or can configure vertical Pod
autoscaling to automatically update the values
● Vertical Pod autoscaling notifies the cluster autoscaler ahead
of the update to provide the resources needed for the resized
workload before recreating it

Vertical pod scaling


https://cloud.google.com/kubernetes-engine/docs/concepts/verticalpodautoscaler

Vertical Pod Autoscaler Modes

● Off: no autoscaling; recommendations only
● Initial: recommendations used to create resized pods; won’t resize them again
● Auto: recommendations used to regularly resize pods by deleting/recreating them as needed

● Google recommends leaving VPA off for at least a week to accumulate recommendations
○ Then switch to Initial or Auto based on needs

Youtube - Autoscaling with GKE


https://www.youtube.com/watch?v=7naCIxIaV1M

Summary: Vertical vs Horizontal Pod Autoscaling

Vertical Pod Autoscaler
● Use when you are unsure of the optimal resource requests the container application needs
● Can use once or on an ongoing basis
● If you set requests manually instead:
○ Over-estimating memory/CPU results in over-provisioning the number of nodes
needed when pods scale up, increasing costs
○ Under-estimating resources means that pods will be killed when the max limit is reached

Horizontal Pod Autoscaler
● Use when you have sudden spikes in traffic
● Can be used in conjunction with the Vertical Pod Autoscaler to ensure pod resource
requests are optimized
Exam Guide - Kubernetes Engine

4.2 Managing Google Kubernetes Engine resources. Tasks include:


4.2.1 Viewing current running cluster inventory (nodes, pods, services)
4.2.2 Browsing Docker images and viewing their details in the Artifact Registry
4.2.3 Working with node pools (e.g., add, edit, or remove a node pool)
4.2.4 Working with pods (e.g., add, edit, or remove pods)
4.2.5 Working with services (e.g., add, edit, or remove a service)
4.2.6 Working with stateful applications (e.g. persistent volumes, stateful sets)
4.2.7 Managing Horizontal and Vertical autoscaling configurations
4.2.8 Working with management interfaces (e.g., Google Cloud console, Cloud Shell, Cloud SDK, kubectl)

No further discussion needed.

Console:
https://cloud.google.com/cloud-console

Cloud Shell:
https://cloud.google.com/shell

Cloud SDK:
https://cloud.google.com/sdk

Exam Guide - Cloud Run and Cloud Functions (and App Engine)

2.2 Planning and configuring compute resources. Considerations include:
2.2.1 Selecting appropriate compute choices for a given workload (e.g.,
Compute Engine, Google Kubernetes Engine, Cloud Run, Cloud Functions)
2.2.2 Using preemptible VMs and custom machine types as appropriate

3.3 Deploying and implementing Cloud Run and Cloud Functions resources.
3.3.1 Deploying an application and updating scaling configuration, versions,
and traffic splitting
3.3.2 Deploying an application that receives Google Cloud events
(e.g., Pub/Sub events, Cloud Storage object change notification events)

4.3 Managing Cloud Run resources. Tasks include:
4.3.1 Adjusting application traffic-splitting parameters
4.3.2 Setting scaling parameters for autoscaling instances
4.3.3 Determining whether to run Cloud Run (fully managed) or
Cloud Run for Anthos

Exam Guide - Cloud Run and Cloud Functions


2.2 Planning and configuring compute resources. Considerations include:
2.2.1 Selecting appropriate compute choices for a given workload (e.g., Compute Engine,
Google Kubernetes Engine, Cloud Run, Cloud Functions)
2.2.2 Using preemptible VMs and custom machine types as appropriate

3.3 Deploying and implementing Cloud Run and Cloud Functions resources. Tasks include, where applicable:
3.3.1 Deploying an application and updating scaling configuration, versions, and traffic splitting
3.3.2 Deploying an application that receives Google Cloud events (e.g., Pub/Sub events,
Cloud Storage object change notification events)

4.3 Managing Cloud Run resources. Tasks include:


4.3.1 Adjusting application traffic-splitting parameters
4.3.2 Setting scaling parameters for autoscaling instances
4.3.3 Determining whether to run Cloud Run (fully managed) or Cloud Run for Anthos

Cloud Run

(Compute options are discussed next.)

Where should I run my stuff? Choosing a Google Cloud compute option



Exam Guide - Cloud Run and Cloud Functions

3.3 Deploying and implementing Cloud Run and Cloud Functions resources. Tasks include, where applicable:
3.3.1 Deploying an application and updating scaling configuration, versions, and traffic splitting
3.3.2 Deploying an application that receives Google Cloud events (e.g., Pub/Sub events,
Cloud Storage object change notification events)

4.3 Managing Cloud Run resources. Tasks include:


4.3.1 Adjusting application traffic-splitting parameters
4.3.2 Setting scaling parameters for autoscaling instances
4.3.3 Determining whether to run Cloud Run (fully managed) or Cloud Run for Anthos

Cloud Run provides Containers-as-a-Service


● Serverless platform that runs individual containers.
● Based on the open-source knative project (https://knative.dev/)
● Two deployment methods
○ Fully managed
■ Use when want Google Cloud to manage autoscaling, connectivity, high availability, etc.
○ Cloud Run for Anthos
■ Knative running on a GKE cluster
● Use when need to:
○ Access VPC network
○ Tune size of compute engine instance, use GPUs, etc.
○ Are running knative containers on-premise or in other clouds and want a single pane of
glass (Anthos) to manage them

Cloud Run
https://cloud.google.com/run

Knative
https://knative.dev/
https://cloud.google.com/knative

Cloud Run: What no one tells you about Serverless (and how it's done)
https://cloud.google.com/blog/topics/developers-practitioners/cloud-run-story-serverle
ss-containers

Choosing between Cloud Run and Cloud Run for Anthos:


https://cloud.google.com/anthos/run/docs/choosing-a-platform

Exam Guide - Cloud Run and Cloud Functions

3.3 Deploying and implementing Cloud Run and Cloud Functions resources. Tasks include, where applicable:
3.3.1 Deploying an application and updating scaling configuration, versions, and traffic splitting
3.3.2 Deploying an application that receives Google Cloud events (e.g., Pub/Sub events,
Cloud Storage object change notification events)

4.3 Managing Cloud Run resources. Tasks include:


4.3.1 Adjusting application traffic-splitting parameters
4.3.2 Setting scaling parameters for autoscaling instances
4.3.3 Determining whether to run Cloud Run (fully managed) or Cloud Run for Anthos

Deploying Cloud Run

● Can be deployed from the Console or the command line
● The illustration shows Artifact Registry, but you can also use Container Registry
and Docker Hub

gcloud run deploy my-great-game --image us-central1-docker.pkg.dev/bt-spaceinva...

Deploying to Cloud Run


https://cloud.google.com/run/docs/deploying

Deploying container images:


https://cloud.google.com/run/docs/deploying#command-line

All incoming requests are handled with automatic scaling

Diagram: clients send requests to a Cloud Run service; an internal load balancer distributes the requests over the available containers.
● Auto scaling: containers are added to handle requests and removed if they are unused
● Auto healing: unhealthy containers are monitored and replaced

About container instance autoscaling:
https://cloud.google.com/run/docs/about-instance-autoscaling

Cloud Run Scaling Parameters

● Specify a minimum and maximum number of instances

gcloud run deploy my-great-game \
    --image us-central1-docker.pkg.dev/bt-spaceinva… \
    --min-instances=0 --max-instances=100

Minimum instances (services):
https://cloud.google.com/run/docs/configuring/min-instances

Maximum number of container instances (services):
https://cloud.google.com/run/docs/configuring/max-instances
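Scaling limits can also be changed on an already-deployed service without redeploying an image. A sketch, reusing the example's service name and assuming a hypothetical region:

```shell
# Adjust scaling limits on an existing Cloud Run service
gcloud run services update my-great-game \
    --region=us-central1 \
    --min-instances=1 --max-instances=50
```

Keeping a warm minimum instance trades a small steady cost for fewer cold starts.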

Cloud Run automatically balances containers across zones in a region

Diagram: requests flow through the Cloud Run internal load balancer to container processes spread across Zones A, B, and C of a region.
● A region is a data center location, for example Council Bluffs (Iowa, North America).
● Every region has three or more zones.
● It's unlikely for a zone to go down, and a multi-zone failure is highly unlikely.

Deploy Cloud Run to multiple regions and use a global load balancer to deliver the lowest end-user latency

Diagram: clients in the USA and Europe connect to a Global HTTPS Load Balancer, which routes each request to the Cloud Run internal load balancer in the nearest region (us-central1 or europe-west1); each region spreads containers across Zones A, B, and C.

Set up a global external HTTP(S) load balancer with Cloud Run, App Engine, or Cloud Functions:
https://cloud.google.com/load-balancing/docs/https/setup-global-ext-https-serverless

The Google Cloud global HTTP(S) load balancer can load-balance to backend services, including Cloud Run.

Exam Guide - Cloud Run and Cloud Functions

3.3 Deploying and implementing Cloud Run and Cloud Functions resources. Tasks include, where applicable:
3.3.1 Deploying an application and updating scaling configuration, versions, and traffic splitting
3.3.2 Deploying an application that receives Google Cloud events (e.g., Pub/Sub events,
Cloud Storage object change notification events)

4.3 Managing Cloud Run resources. Tasks include:


4.3.1 Adjusting application traffic-splitting parameters
4.3.2 Setting scaling parameters for autoscaling instances
4.3.3 Determining whether to run Cloud Run (fully managed) or Cloud Run for Anthos

Application updates are created as new revisions

● When a new revision is deployed, the choices are:
○ Serve 100% of traffic
○ Serve no traffic
○ Split traffic with older revisions

Diagram: client requests to the Cloud Run service are split 90% to the newest revision and 10% to an older revision.

Cloud Run traffic splitting

● Create two or more revisions and split traffic among them, e.g., for canary testing
○ Great for canary deployments
○ Gradual rollouts of revisions
● Easily roll back to a previous revision

# Send 5% of traffic to the latest revision
gcloud run services update-traffic my-great-game --to-revisions LATEST=5

Rollbacks, gradual rollouts, and traffic migration (traffic splitting):
https://cloud.google.com/run/docs/rollouts-rollbacks-traffic-migration
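The same command handles rollbacks; the revision name below is hypothetical:

```shell
# Route all traffic back to a specific older revision
gcloud run services update-traffic my-great-game \
    --to-revisions my-great-game-00007-abc=100

# Or send 100% of traffic to the latest ready revision
gcloud run services update-traffic my-great-game --to-latest
```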

Exam Guide - Cloud Run and Cloud Functions

3.3 Deploying and implementing Cloud Run and Cloud Functions resources. Tasks include, where applicable:
3.3.1 Deploying an application and updating scaling configuration, versions, and traffic splitting
3.3.2 Deploying an application that receives Google Cloud events (e.g., Pub/Sub events,
Cloud Storage object change notification events)

4.3 Managing Cloud Run resources. Tasks include:


4.3.1 Adjusting application traffic-splitting parameters
4.3.2 Setting scaling parameters for autoscaling instances
4.3.3 Determining whether to run Cloud Run (fully managed) or Cloud Run for Anthos

Invoking Cloud Run

● Can be invoked by:
○ HTTPS endpoint
○ gRPC
○ WebSockets
○ Pub/Sub
○ Eventarc triggers (Cloud Storage in this example; many event types can trigger Cloud Run)
■ Cloud Storage
■ BigQuery
■ And more

gcloud eventarc triggers create storage-file-created \
    --event-filters="type=google.cloud.storage.object.v1.finalized" \
    ...  # additional flags shortened for brevity

Eventarc overview:
https://cloud.google.com/eventarc/docs/overview

Triggering from Pub/Sub push:
https://cloud.google.com/run/docs/triggering/pubsub-push

Cloud Storage (+ other types of events):
https://cloud.google.com/eventarc/docs/creating-triggers
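A complete trigger also needs a location, a destination service, and (for storage events) a bucket filter. A sketch of the full command shape, with the service, bucket, and service account names invented for illustration:

```shell
# Fire the trigger when an object is finalized in the bucket,
# and deliver the event to a Cloud Run service
gcloud eventarc triggers create storage-file-created \
    --location=us-central1 \
    --destination-run-service=my-run-service \
    --destination-run-region=us-central1 \
    --event-filters="type=google.cloud.storage.object.v1.finalized" \
    --event-filters="bucket=my-upload-bucket" \
    --service-account=trigger-sa@my-project.iam.gserviceaccount.com
```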

Cloud Run use cases

● Web applications, public APIs, or websites
● Microservices-based applications that communicate using direct or asynchronous messages
● Event processing
● Scheduled tasks shorter than 60 minutes

Other example use cases are found toward the bottom of this page:
https://cloud.google.com/run/

Cloud Run - customer use case

● Les Echos Le Parisien Annonces publishes legal notices on behalf of various organizations
○ Supports a number of sites for different regions within France, as well as French readers in territories across the world
○ Each local site offers a variety of services and content
● Historically these sites were served from multiple, dedicated on-prem infrastructures
○ Over time, growth was constricted by their monolithic architecture
● Today, each site is containerized and deployed as its own Cloud Run service in Google Cloud

Case study: Scaling quickly to new markets with Cloud Run - a web modernization story

Compute Options (Cloud Functions is next)

Where should I run my stuff? Choosing a Google Cloud compute option

Exam Guide - Cloud Run and Cloud Functions

3.3 Deploying and implementing Cloud Run and Cloud Functions resources. Tasks include, where applicable:
3.3.1 Deploying an application and updating scaling configuration, versions, and traffic splitting
3.3.2 Deploying an application that receives Google Cloud events (e.g., Pub/Sub events,
Cloud Storage object change notification events)

4.3 Managing Cloud Run resources. Tasks include:


4.3.1 Adjusting application traffic-splitting parameters
4.3.2 Setting scaling parameters for autoscaling instances
4.3.3 Determining whether to run Cloud Run (fully managed) or Cloud Run for Anthos

Cloud Functions provide single purpose services


● Used to create specialized services for a single purpose, e.g.:
○ Process an image
○ Update a user profile in a database
● Scales to zero to save on costs
● Autoscales up/down based on load
● Various programming languages supported
○ Code can be maintained by different developers
● Great for microservices
○ Each scales independently
○ Updates to one don't affect the others
● Triggered several ways
○ Storage bucket changes
○ Pub/Sub (queue) topic
○ REST API call
○ Other events

Cloud Functions:
https://cloud.google.com/functions

Cloud Functions Overview:
https://cloud.google.com/functions/docs/concepts/overview

Google Cloud Pub/Sub Triggers:
https://cloud.google.com/functions/docs/calling/pubsub

Cloud Pub/Sub Tutorial:
https://cloud.google.com/functions/docs/tutorials/pubsub

Google Cloud Storage Triggers:
https://cloud.google.com/functions/docs/calling/storage
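To make the "single purpose" idea concrete, here is a minimal HTTP-handler sketch. It is not from the slides: the function and helper names are invented, and the entry point follows the Python signature in which the framework passes a Flask-style request object:

```python
def make_greeting(name):
    # Pure helper, so the business logic is testable without a web framework.
    return f"Hello, {name}!"

def hello_http(request):
    # HTTP Cloud Function entry point: reads ?name=... from the query string.
    name = request.args.get("name", "World")
    return make_greeting(name)
```

Such a function could be deployed with gcloud functions deploy hello_http --runtime python310 --trigger-http; it scales to zero when idle and back up under load.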

Useful when you need event-driven, highly scalable microservices

● Can be triggered by changes in a storage bucket, Pub/Sub messages, web requests, and other types of events
● Completely managed, scalable, and inexpensive

Tutorial: https://cloud.google.com/functions/docs/tutorials/ocr

Creating Cloud Functions

● Trigger options: triggers are the events that cause Cloud Functions to execute

Using maximum instances:
https://cloud.google.com/functions/docs/configuring/max-instances#gcloud

Using minimum instances:
https://cloud.google.com/functions/docs/configuring/min-instances
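Both limits can be set from the CLI at deploy time. A sketch, with the function name invented and the runtime assumed:

```shell
# Keep one warm instance to reduce cold starts; cap cost with a maximum
gcloud functions deploy my-function \
    --runtime=nodejs18 --trigger-http \
    --min-instances=1 --max-instances=10
```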

Creating Cloud Functions (continued)

(The console shows supported languages/versions; it is not a complete list.)

Cloud Function CLI

● Deploy a Cloud Function

gcloud functions deploy my-java-function \
    --entry-point com.example.MyFunction \
    --runtime java11 --trigger-http --allow-unauthenticated

Deploying Cloud Functions:
https://cloud.google.com/functions/docs/deploying
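An event-triggered deployment differs mainly in the trigger flag. A sketch, with the topic and entry-point names invented for illustration:

```shell
# Execute the function each time a message is published to the topic
gcloud functions deploy my-pubsub-function \
    --runtime python310 --entry-point handle_message \
    --trigger-topic my-topic
```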

Exam Guide - Cloud Run and Cloud Functions

3.3 Deploying and implementing Cloud Run and Cloud Functions resources. Tasks include, where applicable:
3.3.1 Deploying an application and updating scaling configuration, versions, and traffic splitting
3.3.2 Deploying an application that receives Google Cloud events (e.g., Pub/Sub events,
Cloud Storage object change notification events)

4.3 Managing Cloud Run resources. Tasks include:


4.3.1 Adjusting application traffic-splitting parameters
4.3.2 Setting scaling parameters for autoscaling instances
4.3.3 Determining whether to run Cloud Run (fully managed) or Cloud Run for Anthos

Note: answers depend on which "generation" of Cloud Functions is used.

Cloud Functions 2nd Generation

● Cloud Functions 2nd generation was released in August 2022
○ Features are outlined in the table below
*Exam questions will probably be generic and not specific to Generation 1 or 2

Feature | Cloud Functions (1st gen) | Cloud Functions (2nd gen)
Image registry | Container Registry or Artifact Registry | Artifact Registry only
Request timeout | Up to 9 minutes | Up to 60 minutes for HTTP-triggered functions; up to 9 minutes for event-triggered functions
Instance size | Up to 8GB RAM with 2 vCPU | Up to 16GiB RAM with 4 vCPU
Concurrency | 1 concurrent request per function instance | Up to 1000 concurrent requests per function instance
Traffic splitting | Not supported | Supported
Event types | Direct support for events from 7 sources | Support for any event type supported by Eventarc, including 90+ event sources via Cloud Audit Logs
CloudEvents | Supported only in Ruby, .NET, and PHP runtimes | Supported in all language runtimes

*Personal opinion of the content developer; others may disagree

Cloud Functions version comparison:
https://cloud.google.com/functions/docs/concepts/version-comparison

Cloud Functions vs. Cloud Run: when to use one over the other:
https://cloud.google.com/blog/products/serverless/cloud-run-vs-cloud-functions-for-serverless

Choosing a Google Cloud compute option

(Where's App Engine?)

Source:
https://cloud.google.com/blog/topics/developers-practitioners/where-should-i-run-my-stuff-choosing-google-cloud-compute-option

What is App Engine?

The ultimate App Engine cheat sheet:
https://cloud.google.com/blog/topics/developers-practitioners/ultimate-app-engine-cheat-sheet

App Engine offers fully managed, serverless compute for low-latency, highly scalable applications

● Developers focus on code, Google manages infrastructure
● Supports Node.js, Java, Ruby, C#, Go, Python, or PHP
● Autoscales automatically depending on load
● 2 types:
○ App Engine Standard
■ Limited language support
■ Free tier
○ App Engine Flexible
■ Google Cloud builds a Docker container
■ Runs on Compute Engine - no free tier

Diagram: PROJECT-1 contains SERVICE-1 and SERVICE-2, each with Version-1 and Version-2.

App Engine documentation:
https://cloud.google.com/appengine/docs

App Engine is discussed for completeness, even though it was not explicitly
mentioned in the exam guide

App Engine is a fully managed, serverless application platform supporting the building
and deploying of applications. Applications can be scaled seamlessly from zero
upward without having to worry about managing the underlying infrastructure. App
Engine was designed for microservices. For configuration, each Google Cloud project
can contain one App Engine application, and an application has one or more services.
Each service can have one or more versions, and each version has one or more
instances. App Engine supports traffic splitting so it makes switching between
versions and strategies such as canary testing or A/B testing simple. The diagram on
the right shows the high-level organization of a Google Cloud project with two
services, and each service has two versions. These services are independently
deployable and versioned.

App Engine Standard

● Instances start in milliseconds
○ Can scale to zero instances - no charge when app is not running
○ Free tier of 28 instance hours per day
● Supports certain languages, including Python
● Runs your app in a restrictive sandbox environment
○ Has built-in APIs for tasks, queuing, memory store, etc.

App Engine standard:
https://cloud.google.com/appengine/docs/standard

App Engine Standard environment runs your code in containers provided by Google. Container instances can start in milliseconds. If there is no traffic coming to an application, it will turn all the containers off. When it scales to zero instances, you aren't charged anything for the application. If a request comes in, a container starts and handles the requests. If millions of requests start coming in, it will scale quickly to meet the demand. There is a free tier for App Engine standard that allows smaller apps to run without generating a bill.

App Engine Standard supports many languages including Python, Java, PHP, Go, and JavaScript.
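App Engine Standard apps are described by an app.yaml file. A minimal sketch, assuming a Python runtime; the scaling values are illustrative:

```yaml
# app.yaml - minimal App Engine Standard configuration (illustrative)
runtime: python39

automatic_scaling:
  min_instances: 0   # scale to zero when idle (no charge)
  max_instances: 5
```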

App Engine Flexible

● Supports multiple programming languages*, including:
○ C#, Go, Java, Node.js, PHP, Python, and Ruby
● Runs Docker containers on Compute Engine VM instances using a container-optimized OS
○ Two options:
■ Upload code, which will be containerized internally by Google before deployment
■ Provide a Dockerfile, along with the associated code, which Google will deploy
● One container per VM
○ Google manages the number of VMs
■ Can specify min/max number of VMs to deploy when scaling
● When additional capacity is needed, it may take more than a minute to scale due to VM creation
● Does not scale to zero - no free tier

*Check the documentation for a complete list of supported programming languages

App Engine flexible environment:
https://cloud.google.com/appengine/docs/flexible/

Choosing an App Engine environment

● When should you choose Standard vs. Flexible?
○ https://cloud.google.com/appengine/docs/the-appengine-environments

App Engine Example: Gaming Platform Services

Since game platform services often need to be accessed by many different processes - including client apps, game servers, and websites - using RESTful HTTP endpoints is a very effective pattern. Google App Engine allows you to just write the code for these endpoints without worrying about scaling or downtime.

Game Database: Cloud Firestore in Datastore mode along with a dedicated Memcache can provide a fast, reliable, scalable App Engine native NoSQL database. This database pattern has been proven to scale seamlessly for games that started out serving thousands and ended up serving millions, such as Pokémon GO.

App Engine supports multiple applications/multiple versions

● Each Google Cloud project can contain 1 App Engine application.
● An application has 1 or more services.
● Each service has 1 or more versions.
● Versions have 1 or more instances.
● Automatic traffic splitting for switching versions

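Switching traffic between App Engine versions is one command. A sketch, with hypothetical version IDs:

```shell
# Send 90% of traffic to version v1 and 10% to v2 of the default service
gcloud app services set-traffic default --splits v1=0.9,v2=0.1
```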

Choosing a Google Cloud deployment platform

Flowchart: Start with "You have specific machine and OS requirements?" YES leads to Compute Engine. NO leads to "You're using containers?" If yes, "You want your own Kubernetes cluster?" decides between Google Kubernetes Engine (yes) and Cloud Run (no). If not using containers, "Your service is event-driven?" decides between Cloud Functions (yes) and App Engine (no).

Here is a high-level overview of how you could decide on the most suitable platform
for your application.

First, ask yourself whether you have specific machine and OS requirements. If you
do, then Compute Engine is the platform of choice.

If you have no specific machine or operating system requirements, then the next
question to ask is whether you are using containers. If you are, then you should
consider Google Kubernetes Engine or Cloud Run, depending on whether you want
to configure your own Kubernetes cluster.

If you are not using containers, then you want to consider Cloud Functions if your
service is event-driven and App Engine if it’s not.

Exam Guide - Install CLI, Calculator, Marketplace, IaC

1.3 Installing and configuring the command line interface (CLI), specifically the Cloud SDK (e.g., setting the default project)

2.1 Planning and estimating Google Cloud product using the Pricing Calculator

3.6 Deploying a solution using Cloud Marketplace. Tasks include:
3.6.1 Browsing the Cloud Marketplace catalog and viewing solution details
3.6.2 Deploying a Cloud Marketplace solution

3.7 Implementing resources via infrastructure as code. Tasks include:
3.7.1 Building infrastructure via Cloud Foundation Toolkit templates and implementing best practices
3.7.2 Installing and configuring Config Connector in Google Kubernetes Engine to create, update, delete, and secure resources


Google Cloud SDK

See: https://cloud.google.com/sdk/

Google Cloud SDK:
https://cloud.google.com/sdk/

SDK installation and quick start:
https://cloud.google.com/sdk/docs/install-sdk

To get started using the SDK you need to install it. Go to cloud.google.com/sdk to find information and instructions for installing it on your preferred operating system.

● Includes command-line tools for Google Cloud products and services.
○ gcloud, gsutil (Cloud Storage), bq (BigQuery)
● Access via the Cloud Shell button in the Cloud Console
● Can also be installed on local machines.
● Is also available as a Docker image.

Initializing the gcloud CLI

See: https://cloud.google.com/sdk/docs/initializing

Initializing the gcloud CLI:
https://cloud.google.com/sdk/docs/initializing

gcloud command to set configuration:
https://cloud.google.com/sdk/gcloud/reference/config/set

gcloud tool guide:
https://cloud.google.com/sdk/gcloud/

After the CLI is installed, you need to do an initial setup. That includes things such as setting a default region and zone.
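That initial setup boils down to a handful of commands. A sketch, with a placeholder project ID:

```shell
gcloud init                                  # interactive: choose account, project, defaults
gcloud config set project my-project-id      # or set values individually
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a
gcloud config list                           # verify the active configuration
```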

Exam Guide - Install CLI, Calculator, Marketplace, IaC


1.3 Installing and configuring the command line interface (CLI), specifically the Cloud SDK
(e.g., setting the default project)

2.1 Planning and estimating Google Cloud product using the Pricing Calculator

3.6 Deploying a solution using Cloud Marketplace. Tasks include:


3.6.1 Browsing the Cloud Marketplace catalog and viewing solution details
3.6.2 Deploying a Cloud Marketplace solution

3.7 Implementing resources via infrastructure as code. Tasks include:


3.7.1 Building infrastructure via Cloud Foundation Toolkit templates and implementing best practices
3.7.2 Installing and configuring Config Connector in Google Kubernetes Engine to create,
update, delete, and secure resources

Use the Google Cloud Pricing Calculator to estimate costs

● Create cost estimates based on forecasting and capacity planning.
● The parameters entered will vary according to the service, e.g.:
○ Compute Engine - machine type, operating system, usage/day, disk size, etc.
○ Cloud Storage - location, storage class, storage amount, ingress and egress estimates
● Can save and email estimates for later use, e.g., presentations

Pricing calculator:
https://cloud.google.com/products/calculator/

The pricing calculator is the go-to resource for gaining cost estimates. Remember that the costs are just an estimate, and actual cost may be higher or lower. The estimates by default use the timeframe of one month. If any inputs vary from this, they will state this. For example, Firestore document operations read, write, and delete are asked for on a per-day basis.

Exam Guide - Install CLI, Calculator, Marketplace, IaC


1.3 Installing and configuring the command line interface (CLI), specifically the Cloud SDK
(e.g., setting the default project)

2.1 Planning and estimating Google Cloud product using the Pricing Calculator

3.6 Deploying a solution using Cloud Marketplace. Tasks include:


3.6.1 Browsing the Cloud Marketplace catalog and viewing solution details
3.6.2 Deploying a Cloud Marketplace solution

3.7 Implementing resources via infrastructure as code. Tasks include:


3.7.1 Building infrastructure via Cloud Foundation Toolkit templates and implementing best practices
3.7.2 Installing and configuring Config Connector in Google Kubernetes Engine to create,
update, delete, and secure resources

Deploying a Cloud Marketplace solution

● Provides access to approved deployments for common applications
● Some deployments use Deployment Manager
○ Others use Kubernetes
● Some are open source
○ Others require a license

GKE: Deploying an application from Cloud Marketplace:
https://cloud.google.com/kubernetes-engine/docs/how-to/deploying-marketplace-app

Suggested lab - Provision Services with Google Cloud Marketplace:
https://partner.cloudskillsboost.google/catalog_lab/339

Exam Guide - Install CLI, Calculator, Marketplace, IaC


1.3 Installing and configuring the command line interface (CLI), specifically the Cloud SDK
(e.g., setting the default project)

2.1 Planning and estimating Google Cloud product using the Pricing Calculator

3.6 Deploying a solution using Cloud Marketplace. Tasks include:


3.6.1 Browsing the Cloud Marketplace catalog and viewing solution details
3.6.2 Deploying a Cloud Marketplace solution

3.7 Implementing resources via infrastructure as code. Tasks include:


3.7.1 Building infrastructure via Cloud Foundation Toolkit templates and implementing best practices
3.7.2 Installing and configuring Config Connector in Google Kubernetes Engine to create,
update, delete, and secure resources

Moving to the cloud requires a mindset change

On-Premises | Cloud
Buy machines. | Rent machines.
Keep machines running for years. | Turn machines off as soon as possible.
Prefer fewer big machines. | Prefer lots of small machines.
Machines are capital expenditures. | Machines are monthly expenses.

The on-demand, pay-per-use model of cloud computing is a different model from traditional on-premises infrastructure provisioning. Resources can be allocated to best meet demand in a timely manner, and the cloud supports experimentation and innovation by providing immediate access to an ever-increasing range of services.

Treat infrastructure as disposable in the cloud

● Don't fix broken machines.
● Don't install patches.
● Don't upgrade machines.
● If you need to fix a machine, delete it and re-create a new one.

● Many tools exist for automating infrastructure creation:
○ Terraform
○ Deployment Manager on Google Cloud
○ CloudFormation on AWS
○ Resource Manager on Azure

The key term is infrastructure as code (IaC). The provisioning, configuration, and
deployment activities should all be automated.

Having the process automated minimizes risks, eliminates manual mistakes, and supports repeatable deployments with scale and speed. Deploying one or one hundred machines is the same effort.

Costs can be reduced by provisioning ephemeral environments, such as test environments that replicate the production environment.

Infrastructure as code (IaC) allows quick provisioning and removal of infrastructure

● Build an infrastructure when needed.


● Destroy the infrastructure when not in use.
● Create identical infrastructures for dev, test, and prod
○ Use templates to create different types of resources per group
● Can be part of a CI/CD pipeline.
● Templates are the building blocks for disaster recovery procedures.

In essence, infrastructure as code allows for the quick provisioning and removal of infrastructure.

The on-demand provisioning of a deployment is extremely powerful. This can be integrated into a continuous integration pipeline that smooths the path to continuous deployment.

Automated infrastructure provisioning means that the infrastructure can be provisioned on demand, and the deployment complexity is managed in code. This provides the flexibility to change infrastructure as requirements change. And all the changes are in one place. Infrastructure for environments such as development and test can now easily replicate production and can be deleted immediately when not in use. All because of infrastructure as code.

Several tools can be used for IaC. Google Cloud supports Terraform, where
deployments are described in a file known as a configuration. This details all the
resources that should be provisioned. Configurations can be modularized using
templates, which allows the abstraction of resources into reusable components
across deployments.

In addition to Terraform, Google Cloud also provides support for other IaC tools,
including:
● Deployment Manager
● Chef
● Puppet
● Ansible
● Packer

Deployment Manager YAML Templates

resources:
# Configure a VM (1: create a virtual machine)
- name: devops-vm
  type: compute.v1.instance
  # 2: VM properties
  properties:
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/f1-micro
    # 3: create a boot disk from an image
    disks:
    - deviceName: boot
      type: PERSISTENT
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-8
    # 4: add the VM to the default network and give it an external IP
    networkInterfaces:
    - network: global/networks/default
      accessConfigs:
      - name: External NAT
        type: ONE_TO_ONE_NAT

Deployment Manager is Google Cloud's IaC tool:
https://cloud.google.com/deployment-manager/docs

Shown here is a simple Deployment Manager template written in YAML. The root
element at the top is resources.

Resources is a collection. Notice the minus sign before the name element. In YAML,
the minus sign denotes one item in a collection. In this case, we are creating a VM
named devops-vm.

The VM has a collection of properties, one of which is a collection of disks. Lastly, at the bottom we are assigning the machine to a network.

Managing deployments with gcloud


● Once the template is built, use gcloud to deploy it
○ Simple creation of infrastructure
○ Easy changes with update command
○ Delete will remove everything created in the reverse order
● Examples:
○ gcloud deployment-manager deployments create devops-deployment --config deployment-manager-config.yaml
○ gcloud deployment-manager deployments list
○ gcloud deployment-manager deployments update example-deployment --config deployment-manager-config.yaml --preview
○ gcloud deployment-manager deployments delete devops-deployment

The gcloud commands used with Deployment Manager are pretty straightforward.
Like all gcloud commands you specify the service, collection verb, name, and
parameters.

So all the commands begin with gcloud deployment-manager deployments. The verbs
are create, list, update, and delete. Lastly, specify the deployment name and the
parameters.

When running an update, only resources that have changed in the template will be
updated. When running a delete command, it is smart enough to delete resources in
the reverse order in which they were created. Thus, you don’t get an error when
deleting a resource that is used by another resource. As an example, an Instance
Group cannot be deleted if a Load Balancer back end is using it. Deployment
Manager handles those types of issues for you.

Terraform supports multi-cloud environments

● Pre-installed in Cloud Shell
● Supports a native syntax named HCL (shown here) plus a JSON-compatible syntax

# 1: create a virtual machine
resource "google_compute_instance" "default" {
  name         = "flask-vm"
  machine_type = "f1-micro"
  zone         = "us-west1-a"
  tags         = ["ssh"]

  # 2: create a boot disk from an image
  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }

  # 3: run a script at startup to install Flask
  metadata_startup_script = "sudo apt-get update; sudo apt-get install -yq build-essential python-pip rsync; pip install flask"

  # 4: network interface on the default VPC with an ephemeral external IP
  network_interface {
    network = "default"
    access_config {
      // Ephemeral public IP
    }
  }
}

Terraform:
https://www.terraform.io/

Managing Terraform deployments

● Once the template is built, use terraform commands to make changes
○ terraform init : initializes the working directory and downloads any needed provider plugins
○ terraform plan : previews the changes the configuration will make
○ terraform apply : applies the configuration and creates the resources
○ terraform destroy : deletes the resources

Cloud Foundation Toolkit

● Ready-made IaC templates which reflect best practices, in both
○ Deployment Manager
○ Terraform
● Can be used off-the-shelf to quickly build a repeatable enterprise-ready
foundation in Google Cloud
○ Can easily update the foundation as needs change

(Slide shows some of the example templates.)

Cloud Foundation Toolkit:
https://cloud.google.com/foundation-toolkit

Example templates from the Cloud Foundation Toolkit:
https://cloud.google.com/deployment-manager/docs/reference/cloud-foundation-toolkit

Rapid cloud foundation buildout and workload deployment using Terraform:
https://cloud.google.com/blog/products/devops-sre/using-the-cloud-foundation-toolkit-with-terraform

Terraform with Google Cloud: https://cloud.google.com/docs/terraform



Exam Guide - Install CLI, Calculator, Marketplace, IaC


1.3 Installing and configuring the command line interface (CLI), specifically the Cloud SDK
(e.g., setting the default project)

2.1 Planning and estimating Google Cloud product using the Pricing Calculator

3.6 Deploying a solution using Cloud Marketplace. Tasks include:


3.6.1 Browsing the Cloud Marketplace catalog and viewing solution details
3.6.2 Deploying a Cloud Marketplace solution

3.7 Implementing resources via infrastructure as code. Tasks include:


3.7.1 Building infrastructure via Cloud Foundation Toolkit templates and implementing best practices
3.7.2 Installing and configuring Config Connector in Google Kubernetes Engine to create,
update, delete, and secure resources

Config Connector

● Part of the Anthos toolset
● Lets you manage more than 120 Google Cloud resources the same way you
manage other Kubernetes resources
○ Use YAML files

(Slide shows YAML creating a Pub/Sub topic.)

To deploy and create the topic:

kubectl apply -f pubsub-topic.yaml
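As a sketch of that flow, the snippet below writes a Config Connector manifest for a Pub/Sub topic and notes the apply step. The topic name example-topic is hypothetical; the kubectl command is a comment because it needs a cluster with Config Connector installed.

```shell
# Declare a Pub/Sub topic as a Kubernetes resource (Config Connector CRD).
cat > pubsub-topic.yaml <<'EOF'
apiVersion: pubsub.cnrm.cloud.google.com/v1beta1
kind: PubSubTopic
metadata:
  name: example-topic   # becomes the Pub/Sub topic name
EOF

# Against a cluster with Config Connector installed:
#   kubectl apply -f pubsub-topic.yaml
echo "wrote pubsub-topic.yaml"
```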

Config Connector overview:
https://cloud.google.com/config-connector/docs/overview

YouTube video:
https://www.youtube.com/watch?v=3lAOr2XdAh4

Choosing an installation type:
https://cloud.google.com/config-connector/docs/concepts/installation-types

Getting started with Config Connector:
https://cloud.google.com/config-connector/docs/how-to/getting-started

