GCP ACE Notes 6
Session logistics
● When you have a question, please:
○ Click the Raise hand button in Google Meet.
○ Or add your question to the Q&A section of Google Meet.
○ Please note that answers may be deferred until the end of the session.
● These slides are available in the Student Lecture section of your Qwiklabs classroom.
Exam Guide
https://cloud.google.com/certification/guides/cloud-engineer
Sample Questions
https://docs.google.com/forms/d/e/1FAIpQLSfexWKtXT2OSFJ-obA4iT3GmzgiOCGvjrT9OfxilWC1yPtmfQ/viewform
Professional Cloud Architect (see the certification page for details)
Needed for exam voucher
https://cloud.google.com/certification/guides/cloud-engineer/
02: Google Kubernetes Engine, Cloud Run, Cloud Functions, App Engine
Exam Guide - Kubernetes Engine
3.2 Deploying and implementing Google Kubernetes Engine resources. Tasks include:
3.2.1 Installing and configuring the command line interface (CLI) for Kubernetes (kubectl)
3.2.2 Deploying a Google Kubernetes Engine cluster with different configurations including AutoPilot, regional clusters, private clusters, etc.
3.2.3 Deploying a containerized application to Google Kubernetes Engine
3.2.4 Configuring Google Kubernetes Engine monitoring and logging
4.2 Managing Google Kubernetes Engine resources. Tasks include:
4.2.1 Viewing current running cluster inventory (nodes, pods, services)
4.2.2 Browsing Docker images and viewing their details in the Artifact Registry
4.2.3 Working with node pools (e.g., add, edit, or remove a node pool)
4.2.4 Working with pods (e.g., add, edit, or remove pods)
4.2.5 Working with services (e.g., add, edit, or remove a service)
4.2.6 Working with stateful applications (e.g., persistent volumes, stateful sets)
4.2.7 Managing Horizontal and Vertical autoscaling configurations
4.2.8 Working with management interfaces (e.g., Google Cloud console, Cloud Shell, Cloud SDK, kubectl)
Diagram: a Docker image can be deployed to Compute Engine, Cloud Run, Google Kubernetes Engine, or App Engine.
Docker
● Provides the ability to package and run an application in an environment
called a container
● Images are very lightweight, preconfigured virtual environments
○ Contain everything needed to run the application, so do not need to
rely on what is currently installed on the host
● Docker images will run on any platform that has a container runtime installed
○ Google Kubernetes Engine uses the containerd runtime on all GKE nodes as of GKE version 1.24
*Docker designed containerd, which is now a part of the CNCF, an organization that supports
Kubernetes and Docker-based deployments. Docker is still an independent project that uses containerd
as its runtime.
https://blog.purestorage.com/purely-informational/containerd-vs-docker-whats-the-difference
Docker website
https://www.docker.com/
Docker images are lightweight, preconfigured virtual environments used to run our
application. The images include the software required to run the containerized
application. The applications are deployed inside the Docker image.
Docker images are versatile and will run on any platform that has Docker installed.
● At this point, the image can be deployed to Compute Engine, Cloud Run, Kubernetes Engine or
App Engine
Managing deployments
● Left workflow: Docker packages apps into images and deploys them to containers (Code and Dockerfile → Docker → Container → Artifact Registry). Docker and Artifact Registry are mentioned in the exam guide.
● Right workflow: enterprise and continuous deployment (Code → Cloud Source Repositories → Cloud Build → Artifact Registry). Cloud Build and Cloud Source Repositories are not mentioned in the exam guide.
Developers can test their container locally. When done, they could manually push the
image to the Artifact Registry as shown on the left in this slide.
If you had a hundred developers sharing source files, you would need a system for
managing them, for tracking them, versioning them, and enforcing a check-in, review,
and approval process. They typically push source code to a shared code repository,
such as the Google Cloud Source Repositories, as shown on the right of this slide.
Cloud Source Repositories is a cloud-based solution which integrates with Cloud
Build.
Cloud Build functions similarly to Docker in that it accepts code and configuration and
builds containers (among other things). Cloud Build offers many features and services
that are geared towards professional development. It is designed to fit into a continuous integration / continuous deployment (CI/CD) workflow and can scale to handle many application developers working on and continuously updating a live global service.
Artifact Registry
● Provides:
○ Built-in vulnerability scanning of container images (optionally scan images for known vulnerabilities)
○ Controlled access with fine-grained control (IAM)
○ Regional and multi-regional repositories
■ Can have multiple per project
● Store multiple artifact formats:
○ Docker images
○ OS packages for Linux distributions
○ Language packages for Python, Java, and Node
https://cloud.google.com/artifact-registry
Artifact Registry has its own IAM roles and can be deployed multi-regionally.
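As a hedged sketch of how this fits together (the repository name, region, project ID, and image path below are illustrative, not from the slides), a Docker repository can be created and an image pushed to it with:

gcloud artifacts repositories create my-docker-repo \
    --repository-format=docker \
    --location=us-central1 \
    --description="Example Docker repository"

# Let the local Docker client authenticate to this regional registry host
gcloud auth configure-docker us-central1-docker.pkg.dev

# Push a locally built image (path format: LOCATION-docker.pkg.dev/PROJECT_ID/REPO/IMAGE:TAG)
docker push us-central1-docker.pkg.dev/PROJECT_ID/my-docker-repo/my-image:latest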
Exam Guide - Kubernetes Engine
Diagram: a Docker image can be deployed to Compute Engine, Cloud Run, Google Kubernetes Engine, or App Engine.
App Engine is covered later in this module. While App Engine runs in containers
(which Google creates for you), the focus of App Engine is on source-code based
deployments, not on containerization.
Kubernetes and Google
01 The background: Behind the scenes at Google is a technology called Borg, which is used to manage Google's massive infrastructure. As Borg evolved and matured, it became one of Google's go-to technologies, but it wasn't publicly available.
02 Adapting the technology: Google wanted to open source the technology behind Borg. Open sourcing Borg itself wasn't feasible, however, so a group of Google developers created an open source project named Kubernetes in 2014.
03 The goal: Their goal was to take the knowledge that Google had learned about managing billions of containers and bring that to the world.
Kubernetes
Today, Kubernetes is widely supported in the industry:
● Google Kubernetes Engine (GKE)
● Microsoft Azure Kubernetes Service (AKS)
● Amazon Elastic Kubernetes Service (EKS)
● Estimated that > 54% of Fortune 500 companies have adopted
Kubernetes
3.2 Deploying and implementing Google Kubernetes Engine resources. Tasks include:
3.2.1 Installing and configuring the command line interface (CLI) for Kubernetes (kubectl)
3.2.2 Deploying a Google Kubernetes Engine cluster with different configurations including AutoPilot,
regional clusters, private clusters, etc.
3.2.3 Deploying a containerized application to Google Kubernetes Engine
3.2.4 Configuring Google Kubernetes Engine monitoring and logging
The first command line example shows how to deploy a service based on a
configuration file. The kubernetes-config.yaml file will hold the configuration.
To show running pods, you can use the kubectl get pods command.
The kubectl get deployments command will list all the deployments currently running.
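The slide commands themselves are not reproduced in these notes; a typical form of the three commands described above would be (kubernetes-config.yaml is the file named in the notes):

kubectl apply -f kubernetes-config.yaml    # deploy the objects defined in the configuration file
kubectl get pods                           # show running pods
kubectl get deployments                    # list the deployments currently running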
kubectl syntax:
https://kubernetes.io/docs/reference/kubectl/cheatsheet/
Once the config file in the .kube folder has been configured, the kubectl command
automatically references this file and connects to the default cluster without prompting
you for credentials. Now let’s talk about how to use the kubectl command. Its syntax is
composed of several parts: the command, the type, the name, and optional flags.
‘Command’ specifies the action that you want to perform, such as get, describe, logs,
or exec. Some commands show you information, while others allow you to change the
cluster’s configuration.
‘TYPE’ defines the Kubernetes object that the ‘command’ acts upon. For example,
you could specify Pods, Deployments, nodes, or other objects, including the cluster
itself.
TYPE used in combination with ‘command’ tells kubectl what you want to do and the
type of object you want to perform that action on.
‘NAME’ specifies the object defined in ‘TYPE.’ The Name field isn’t always needed,
especially when you’re using commands that list or show you information.
For example, if you run the command “kubectl get pods” without specifying a name,
the command returns the list of all Pods. To filter this list you specify a Pod’s name,
such as “kubectl get pod my-test-app ”. kubectl then returns information only on the
Pod named ‘my-test-app’.
Some commands support additional optional flags that you can include at the end of
the command.
Think of this as making a special request, like formatting the output in a certain way.
You could view the state of a Pod by using the command “kubectl get pod
my-test-app -o=yaml”. By the way, telling kubectl to give you output in YAML format is
a really useful tool. You’ll often want to capture the existing state of a Kubernetes
object in a YAML file so that, for example, you can recreate it in a different cluster.
You can also use flags to display more information than you normally see. For
instance, you can run the command “kubectl get pods -o=wide” to display the list of
Pods in “wide” format, which means you see additional columns of data for each of
the Pods in the list. One noteworthy piece of extra information you get in wide format:
which Node each Pod is running on.
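Pulling the syntax discussion together in one place (all of the commands below are taken from the examples above):

# kubectl [COMMAND] [TYPE] [NAME] [flags]
kubectl get pods                         # list all Pods
kubectl get pod my-test-app              # only the Pod named my-test-app
kubectl get pod my-test-app -o=yaml      # capture the Pod's state in YAML format
kubectl get pods -o=wide                 # extra columns, including the Node each Pod runs on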
3.2 Deploying and implementing Google Kubernetes Engine resources. Tasks include:
3.2.1 Installing and configuring the command line interface (CLI) for Kubernetes (kubectl)
3.2.2 Deploying a Google Kubernetes Engine cluster with different configurations including AutoPilot,
regional clusters, private clusters, etc.
3.2.3 Deploying a containerized application to Google Kubernetes Engine
3.2.4 Configuring Google Kubernetes Engine monitoring and logging
GKE automates the creation of a virtual machine cluster that provides resources to
containerized applications. When deploying applications, you deploy them to a cluster
and the location of the applications will be handled by Kubernetes.
AutoPilot
https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview
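As a minimal, hedged sketch (the cluster name and region are illustrative, not from the slides), an Autopilot cluster can be created with:

gcloud container clusters create-auto example-autopilot-cluster \
    --region=us-central1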
Diagram: a zonal cluster has a single control plane; a regional cluster has a control plane and nodes in each of several zones.
By default, a cluster launches in a single Google Cloud compute zone with three
identical nodes, all in one node pool. The number of nodes can be changed during or
after the creation of the cluster. Adding more nodes and deploying multiple replicas of
an application will improve an application’s availability. But only up to a point. What
happens if the entire compute zone goes down?
You can address this concern by using a GKE regional cluster. Regional clusters have
a single API endpoint for the cluster. However, its control planes and nodes are
spread across multiple Compute Engine zones within a region.
Regional clusters ensure that the availability of the application is maintained across
multiple zones in a single region. In addition, the availability of the control plane is
also maintained so that both the application and management functionality can
withstand the loss of one or more, but not all, zones. By default, a regional cluster is
spread across 3 zones, each containing 1 control plane and 3 nodes. These numbers
can be increased or decreased. For example, if you have five nodes in Zone 1, you
will have exactly the same number of nodes in each of the other zones, for a total of
15 nodes. Once you build a zonal cluster, you can’t convert it into a regional cluster,
or vice versa.
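A minimal sketch of creating a regional cluster (the cluster name and region are illustrative). Note that --num-nodes is per zone, so three zones with three nodes each gives nine nodes in total:

gcloud container clusters create example-regional-cluster \
    --region=us-central1 \
    --num-nodes=3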
In a private cluster, the entire cluster (that is, the control plane and its nodes) are
hidden from the public internet.
Cluster control planes can be accessed by Google Cloud products, such as Cloud
Logging or Cloud Monitoring, through an internal IP address.
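A hedged sketch of creating a private cluster (the cluster name, zone, and CIDR range are illustrative, and additional networking flags may be needed in practice):

gcloud container clusters create example-private-cluster \
    --zone=us-central1-a \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr=172.16.0.0/28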
3.2 Deploying and implementing Google Kubernetes Engine resources. Tasks include:
3.2.1 Installing and configuring the command line interface (CLI) for Kubernetes (kubectl)
3.2.2 Deploying a Google Kubernetes Engine cluster with different configurations including AutoPilot,
regional clusters, private clusters, etc.
3.2.3 Deploying a containerized application to Google Kubernetes Engine
3.2.4 Configuring Google Kubernetes Engine monitoring and logging
Diagram: a container runs in a pod; pods run on nodes; nodes make up the cluster.
Each pod hosts, manages, and runs one or more containers. The containers in a pod
share networking and storage.
So typically, there is one container per pod, unless the containers hold closely related
applications. For example, a second container might contain the logging system for
the application in the first container.
A pod can be moved from one node to another without reconfiguring or rebuilding
anything.
This design enables advanced controls and operations that give systems built on Kubernetes unique qualities.
Pods: https://cloud.google.com/kubernetes-engine/docs/concepts/pod
Creating pods: https://cloud.google.com/kubernetes-engine/docs/concepts/pod#creating_pods
Pods can be created with a pod spec YAML file and the "kubectl apply" command:
https://kubernetes.io/docs/concepts/workloads/pods/
A pod template (which contains a pod spec) is included in a Deployment YAML file.
This tells GKE how many pods to create and keep running at all times.
Pod template: https://kubernetes.io/docs/concepts/workloads/pods/#pod-templates
Deployments: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
Each cluster has a control plane node that determines what happens on the cluster.
There are usually at least three of them for availability. And they can be located
across zones. A Kubernetes job makes changes to the cluster.
For example a pod YAML file provides the information to start up and run a pod on a
node. If for some reason a pod stops running or a node is lost, the pod will not
automatically be replaced. The Deployment YAML tells Kubernetes how many pods
you want running. So the Kubernetes deployment is what keeps a number of pods
running. The Deployment YAML also defines a Replica Set, which is how many copies
of a container you want running. The Kubernetes scheduler determines on which
node and in which pod the replica containers are to be run.
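As a minimal sketch of a bare pod created from a pod spec YAML file (the Pod name reuses my-test-app from the kubectl examples above, and the image path reuses the Artifact Registry image from the deployment example that follows; both are illustrative here):

cat <<EOF > pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-test-app
spec:
  containers:
  - name: my-test-app
    image: us-central1-docker.pkg.dev/si/si-image:latest
    ports:
    - containerPort: 8080
EOF

kubectl apply -f pod.yaml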
Deployment configuration example

kind: Deployment
metadata:
  name: devops-deployment
  labels:
<Some code omitted to save space>
spec:
  replicas: 3
  selector:
<Some code omitted to save space>
  template:
<Some code omitted to save space>
    spec:
      containers:
      - name: devops-demo
        image: us-central1-docker.pkg.dev/si/si-image:latest   # image pulled from Artifact Registry
        ports:
        - containerPort: 8080
The top portion of the bolded text defines this file as a Deployment.
The middle bolded text section shows replicas set to 3. So this deployment will always
have 3 pods running.
The bottom portion of the text sets the container image used for the pod.
GKE Services
https://cloud.google.com/kubernetes-engine/docs/concepts/service
We need to expose our application so users can connect to it. We use the kubectl
expose deployment command for this. The command sets the port to 80, the target
port to 8080, and the type of service to LoadBalancer.
Once the configuration is complete, we can run the kubectl get services command to
see the details of the load balancer, including the public IP address.
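A typical form of the commands described above, using the devops-deployment from the configuration example earlier:

kubectl expose deployment devops-deployment \
    --port=80 --target-port=8080 --type=LoadBalancer
kubectl get services    # shows the service details, including the external IP of the load balancer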
A deployment can be scaled using the Console or the command line. In the first
example, the command scales the deployment to 10 replicas. Regardless of resource
usage, there will always be 10.
To configure autoscaling via the command line, use the kubectl autoscale
command. In the example, the deployment will run 5 replicas as a minimum. Based
on CPU percentage, the replicas scale to a maximum of 10.
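A sketch of the two commands described above (the CPU threshold of 70% is illustrative; the notes only say the scaling is based on CPU percentage):

kubectl scale deployment devops-deployment --replicas=10
kubectl autoscale deployment devops-deployment --min=5 --max=10 --cpu-percent=70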
● Can also delete resources individually when created at the command line
Finally, here are a few examples on how to delete resources. Use the delete
command to destroy anything previously created.
In the first example, you can delete resources based on changes made to the
deployment file.
You can also delete resources individually. The second example shows how to do
that.
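For example (the individual service name is illustrative):

kubectl delete -f kubernetes-config.yaml        # delete the resources defined in the deployment file
kubectl delete deployment devops-deployment     # delete an individual deployment
kubectl delete service devops-service           # delete an individual service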
Pods are the smallest unit of deployment. A pod can be composed of one or more containers, but typically there is one container per pod.
Clusters are a collection of instances the containers can run on. Each instance is a
node in a cluster.
Replica sets are used to create multiple instances of a pod. The replica sets can
guarantee the desired number is always running and only healthy pods are used.
Kubernetes can leverage cloud load balancers to distribute traffic to multiple pods.
The load balancer is run as a network service.
To ensure the proper performance of your application, you can configure autoscalers
to add and remove pods.
3.2 Deploying and implementing Google Kubernetes Engine resources. Tasks include:
3.2.1 Installing and configuring the command line interface (CLI) for Kubernetes (kubectl)
3.2.2 Deploying a Google Kubernetes Engine cluster with different configurations including AutoPilot,
regional clusters, private clusters, etc.
3.2.3 Deploying a containerized application to Google Kubernetes Engine
3.2.4 Configuring Google Kubernetes Engine monitoring and logging
Here we have 2 node pools. The blue pool is running two apps and the red pool has
one app deployed. Both pools are controlled by the same control plane.
From there, the apiserver communicates with the cluster in two primary ways:
● To the kubelet process that runs on each node
● To any node, pod, or service through the apiserver's proxy functionality (not
shown).
Then pods are started on various nodes. In this example, there are three types of
pods running (shown in yellow, green and teal).
● Resize a pool
gcloud container clusters resize bt-si-cluster \
--node-pool app3-pool \
--num-nodes 10
● Delete a pool
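For completeness, adding and deleting a node pool follow the same pattern. A hedged sketch using the cluster and pool names from the resize example above (a zone or region flag may also be required if no default is configured):

gcloud container node-pools create app3-pool \
    --cluster bt-si-cluster \
    --num-nodes 3

gcloud container node-pools delete app3-pool \
    --cluster bt-si-cluster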
Services
https://cloud.google.com/kubernetes-engine/docs/concepts/service
Another overview:
https://kubernetes.io/docs/concepts/services-networking/service/
Service details
● ClusterIP
○ Default service
○ Internal to the cluster and allows applications within the pods to communicate with
each other
● NodePort
○ Opens a specific port on each node in the cluster
○ Traffic sent to that port is forwarded to the pods
● LoadBalancer
○ Standard way to expose a Kubernetes service externally so it can be accessed over the
internet
○ In GKE this creates an external TCP Network Load Balancer with one IP address
accessible to external users
● Ingress
○ In GKE, the ingress controller creates an HTTP(S) Load Balancer, which can route traffic
to services in the Kubernetes cluster based on path or subdomain
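As a hedged sketch of creating the first three service types from an existing deployment (the service names are illustrative; Ingress objects are defined in a separate manifest rather than with kubectl expose):

kubectl expose deployment devops-deployment --name=devops-clusterip --port=80 --target-port=8080                      # ClusterIP (default)
kubectl expose deployment devops-deployment --name=devops-nodeport  --port=80 --target-port=8080 --type=NodePort      # NodePort
kubectl expose deployment devops-deployment --name=devops-lb        --port=80 --target-port=8080 --type=LoadBalancer  # LoadBalancer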
To get the load balancer public IP address, use the following command:
kubectl get services
Exam Guide - Kubernetes Engine
StatefulSet:
https://cloud.google.com/kubernetes-engine/docs/concepts/statefulset
Suggested tutorial:
https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/
● Each pod has its own data volume
● If a pod gets restarted, its existing volume will be reattached
● Synching handled by MongoDB
● Connectivity to the correct pod is handled at the application level
The prior page mentions volumes. This is an example of a manifest that creates a
dynamic volume for a MongoDB container
kubectl apply -f STATEFULSET_FILENAME
● Pod template: pods created by this template will have the label role: mongo
● Container details are specified in the template
Tutorial: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/
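A hedged, minimal StatefulSet sketch in the spirit of the tutorial above. The image, port, storage size, and the headless Service are illustrative; only the role: mongo label comes from the slide:

cat <<EOF > statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  clusterIP: None            # headless service used by the StatefulSet
  selector:
    role: mongo
  ports:
  - port: 27017
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo
  replicas: 3
  selector:
    matchLabels:
      role: mongo
  template:
    metadata:
      labels:
        role: mongo          # pods created by this template carry the label role: mongo
    spec:
      containers:
      - name: mongo
        image: mongo:4.4
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-data
          mountPath: /data/db
  volumeClaimTemplates:      # each pod gets its own PersistentVolumeClaim, reattached on restart
  - metadata:
      name: mongo-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
EOF

kubectl apply -f statefulset.yaml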
https://partner.cloudskillsboost.google/catalog_lab/327
Autoscaling Strategies
Image from:
https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
Autoscaling nodes
● Cluster autoscaler: adds/removes nodes based on resource requests of pods running in the node pool
gcloud container clusters create example-cluster \
--num-nodes 2 \
--zone us-central1-a \
--node-locations us-central1-a,us-central1-b,us-central1-f \
--enable-autoscaling --min-nodes 1 --max-nodes 10
● Google recommends running VPA in "Off" (recommendation-only) mode for at least a week to accumulate recommendations
○ Then switch to Initial or Auto mode based on needs
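A hedged sketch of a VerticalPodAutoscaler running in recommendation-only mode (the VPA name is illustrative; vertical Pod autoscaling must also be enabled on the cluster, shown here against the bt-si-cluster example used earlier):

gcloud container clusters update bt-si-cluster --enable-vertical-pod-autoscaling

cat <<EOF > vpa.yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: devops-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: devops-deployment
  updatePolicy:
    updateMode: "Off"        # collect recommendations only; switch to Initial or Auto later
EOF

kubectl apply -f vpa.yaml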
Management interfaces (Google Cloud console, Cloud Shell, Cloud SDK): no further discussion needed.
Console: https://cloud.google.com/cloud-console
Cloud Shell: https://cloud.google.com/shell
Cloud SDK: https://cloud.google.com/sdk
3.3 Deploying and implementing Cloud Run and Cloud Functions resources. Tasks include, where applicable:
3.3.1 Deploying an application and updating scaling configuration, versions, and traffic splitting
3.3.2 Deploying an application that receives Google Cloud events (e.g., Pub/Sub events,
Cloud Storage object change notification events)
Compute Options: Cloud Run is next
3.3 Deploying and implementing Cloud Run and Cloud Functions resources. Tasks include, where applicable:
3.3.1 Deploying an application and updating scaling configuration, versions, and traffic splitting
3.3.2 Deploying an application that receives Google Cloud events (e.g., Pub/Sub events,
Cloud Storage object change notification events)
Cloud Run
https://cloud.google.com/run
Knative
https://knative.dev/
https://cloud.google.com/knative
Cloud Run: What no one tells you about Serverless (and how it's done)
https://cloud.google.com/blog/topics/developers-practitioners/cloud-run-story-serverless-containers
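A hedged sketch of deploying a container image to Cloud Run with a scaling limit (the service name, image path, and flag values are illustrative, not from the slides):

gcloud run deploy devops-service \
    --image=us-central1-docker.pkg.dev/PROJECT_ID/my-docker-repo/my-image:latest \
    --region=us-central1 \
    --max-instances=10 \
    --allow-unauthenticated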
3.3 Deploying and implementing Cloud Run and Cloud Functions resources. Tasks include, where applicable:
3.3.1 Deploying an application and updating scaling configuration, versions, and traffic splitting
3.3.2 Deploying an application that receives Google Cloud events (e.g., Pub/Sub events,
Cloud Storage object change notification events)
Cloud Run service (diagram)
● Showing Artifact Registry, but can also use Container Registry or Docker Hub
● Auto scaling: Cloud Run adds containers to handle requests and removes them if they are unused
● The Cloud Run internal load balancer distributes requests over the available containers across zones
● Every region has three or more zones; it is unlikely for a zone to go down, and a multi-zone failure is highly unlikely
Diagram: a global external HTTPS load balancer routes a client in the USA and a client in Europe to Cloud Run containers in the nearest region (for example, europe-west1); within each region, the Cloud Run internal load balancer distributes requests to containers across zones.
Set up a global external HTTP(S) load balancer with Cloud Run, App Engine, or
Cloud Functions
https://cloud.google.com/load-balancing/docs/https/setup-global-ext-https-serverless
The Google Cloud global external HTTP(S) load balancer can load balance to serverless backends, including Cloud Run.
3.3 Deploying and implementing Cloud Run and Cloud Functions resources. Tasks include, where applicable:
3.3.1 Deploying an application and updating scaling configuration, versions, and traffic splitting
3.3.2 Deploying an application that receives Google Cloud events (e.g., Pub/Sub events,
Cloud Storage object change notification events)
Cloud Run traffic splitting
● Create two or more revisions and split traffic among them for canary testing, etc. (diagram: clients are split, with 10% of traffic going to one revision and the rest to the older revision)
● Split traffic across two or more revisions
○ Great for canary deployments
○ Gradual rollouts for revisions
● Easily roll back to a previous revision
Traffic splitting:
https://cloud.google.com/run/docs/rollouts-rollbacks-traffic-migration
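A hedged sketch of splitting traffic between revisions and rolling back (the service and revision names are illustrative):

gcloud run services update-traffic devops-service \
    --region=us-central1 \
    --to-revisions=devops-service-00002-new=10,devops-service-00001-old=90

# Roll back by sending all traffic to the previous revision
gcloud run services update-traffic devops-service \
    --region=us-central1 \
    --to-revisions=devops-service-00001-old=100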
3.3 Deploying and implementing Cloud Run and Cloud Functions resources. Tasks include, where applicable:
3.3.1 Deploying an application and updating scaling configuration, versions, and traffic splitting
3.3.2 Deploying an application that receives Google Cloud events (e.g., Pub/Sub events,
Cloud Storage object change notification events)
Eventarc overview
https://cloud.google.com/eventarc/docs/overview
● Microservices-based applications
that communicate using direct or
asynchronous messages
● Event processing
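A hedged sketch of routing Pub/Sub events to a Cloud Run service with an Eventarc trigger (the trigger name, service name, and service account value are illustrative):

gcloud eventarc triggers create pubsub-trigger \
    --location=us-central1 \
    --destination-run-service=devops-service \
    --destination-run-region=us-central1 \
    --event-filters="type=google.cloud.pubsub.topic.v1.messagePublished" \
    --service-account=PROJECT_NUMBER-compute@developer.gserviceaccount.com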
Compute Options: Cloud Functions is next
3.3 Deploying and implementing Cloud Run and Cloud Functions resources. Tasks include, where applicable:
3.3.1 Deploying an application and updating scaling configuration, versions, and traffic splitting
3.3.2 Deploying an application that receives Google Cloud events (e.g., Pub/Sub events,
Cloud Storage object change notification events)
Cloud Functions
https://cloud.google.com/functions
Tutorial: https://cloud.google.com/functions/docs/tutorials/ocr
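A hedged sketch of deploying a function that receives Pub/Sub events, per objective 3.3.2 (the function name, runtime, topic, and entry point are illustrative):

gcloud functions deploy process-message \
    --gen2 \
    --runtime=python311 \
    --region=us-central1 \
    --source=. \
    --entry-point=process_message \
    --trigger-topic=my-topic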
3.3 Deploying and implementing Cloud Run and Cloud Functions resources. Tasks include, where applicable:
3.3.1 Deploying an application and updating scaling configuration, versions, and traffic splitting
3.3.2 Deploying an application that receives Google Cloud events (e.g., Pub/Sub events,
Cloud Storage object change notification events)
● Cloud Functions 2nd generation was released in August 2022
○ Features are outlined in the table below
*Exam questions will probably be generic and not specific to Generation 1 or 2
Cloud Functions vs. Cloud Run: when to use one over the other
https://cloud.google.com/blog/products/serverless/cloud-run-vs-cloud-functions-for-serverless
Where's App Engine?
Source: https://cloud.google.com/blog/topics/developers-practitioners/where-should-i-run-my-stuff-choosing-google-cloud-compute-option
○ App Engine Standard
■ Free tier
○ App Engine Flexible
■ Google Cloud builds a Docker container
■ Runs on Compute Engine - no free tier
App Engine is discussed for completeness, even though it was not explicitly
mentioned in the exam guide
App Engine is a fully managed, serverless application platform supporting the building
and deploying of applications. Applications can be scaled seamlessly from zero
upward without having to worry about managing the underlying infrastructure. App
Engine was designed for microservices. For configuration, each Google Cloud project
can contain one App Engine application, and an application has one or more services.
Each service can have one or more versions, and each version has one or more
instances. App Engine supports traffic splitting so it makes switching between
versions and strategies such as canary testing or A/B testing simple. The diagram on
the right shows the high-level organization of a Google Cloud project with two
services, and each service has two versions. These services are independently
deployable and versioned.
https://cloud.google.com/appengine/docs/standard
App Engine Standard environment runs your code in containers provided by Google.
App Engine Standard supports many languages including Python, Java, PHP, Go,
and JavaScript.
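A minimal sketch of an App Engine Standard deployment (the runtime and service values in app.yaml are illustrative):

cat <<EOF > app.yaml
runtime: python311    # App Engine Standard runtime
service: default
EOF

gcloud app deploy app.yaml
gcloud app browse      # open the deployed application in a browser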
Since game platform services often need to be accessed by many different processes
- including client apps, game servers, and websites - using RESTful HTTP endpoints
is a very effective pattern. Google App Engine allows you to just write the code for
these endpoints without worrying about scaling or downtime.
Decision flowchart (explained below): choose among Compute Engine, GKE, Cloud Run, Cloud Functions, and App Engine based on machine/OS requirements, whether you're using containers, and whether the service is event-driven.
Here is a high-level overview of how you could decide on the most suitable platform
for your application.
First, ask yourself whether you have specific machine and OS requirements. If you
do, then Compute Engine is the platform of choice.
If you have no specific machine or operating system requirements, then the next
question to ask is whether you are using containers. If you are, then you should
consider Google Kubernetes Engine or Cloud Run, depending on whether you want
to configure your own Kubernetes cluster.
If you are not using containers, then you want to consider Cloud Functions if your
service is event-driven and App Engine if it’s not.
Cloud SDK
2.1 Planning and estimating Google Cloud product using the Pricing Calculator
Google Cloud SDK
See: https://cloud.google.com/sdk/
To get started using the SDK you need to install it. Go to cloud.google.com/sdk to find
information and instructions for installing it on your preferred operating system.
Initializing the gcloud CLI
See: https://cloud.google.com/sdk/docs/initializing
After the CLI is installed, you need to do an initial setup. That includes things such as
setting a default region and zone.
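For example (the region and zone values are illustrative):

gcloud init                                      # authenticate and choose a default project
gcloud config set compute/region us-central1     # set a default region
gcloud config set compute/zone us-central1-a     # set a default zone
gcloud config list                               # review the active configuration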
2.1 Planning and estimating Google Cloud product using the Pricing Calculator
Pricing calculator:
https://cloud.google.com/products/calculator/
The pricing calculator is the go-to resource for gaining cost estimates. Remember that
the costs are just an estimate, and actual cost may be higher or lower. The estimates
by default use the timeframe of one month. If any inputs vary from this, they will state
this. For example, Firestore document operations read, write, and delete are asked
for on a per day basis.
2.1 Planning and estimating Google Cloud product using the Pricing Calculator
● On-premises: keep machines running for years.
● Cloud: turn machines off as soon as possible.
The key term is infrastructure as code (IaC). The provisioning, configuration, and
deployment activities should all be automated.
Having the process automated minimizes risk, eliminates manual mistakes, and supports repeatable deployments with scale and speed. Deploying one machine or one hundred is the same effort.
In essence, infrastructure as code allows for the quick provisioning and removing of
infrastructures.
Several tools can be used for IaC. Google Cloud supports Terraform, where
deployments are described in a file known as a configuration. This details all the
resources that should be provisioned. Configurations can be modularized using
templates, which allows the abstraction of resources into reusable components
across deployments.
In addition to Terraform, Google Cloud also provides support for other IaC tools,
including:
● Deployment Manager
● Chef
● Puppet
● Ansible
● Packer
Shown here is a simple Deployment Manager template written in YAML. The root
element at the top is resources.
Resources is a collection. Notice the minus sign before the name element. In YAML,
the minus sign denotes one item in a collection. In this case, we are creating a VM
named devops-vm.
The gcloud commands used with Deployment Manager are pretty straightforward.
Like all gcloud commands you specify the service, collection verb, name, and
parameters.
So all the commands begin with gcloud deployment-manager deployments. The verbs
are create, list, update, and delete. Lastly, specify the deployment name and the
parameters.
When running an update, only resources that have changed in the template will be
updated. When running a delete command, it is smart enough to delete resources in
the reverse order in which they were created. Thus, you don’t get an error when
deleting a resource that is used by another resource. As an example, an Instance
Group cannot be deleted if a Load Balancer back end is using it. Deployment
Manager handles those types of issues for you.
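A hedged sketch of the devops-vm deployment described above (the machine type, zone, image, project placeholder, and deployment name are illustrative):

cat <<EOF > devops-config.yaml
resources:
- name: devops-vm
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-central1-a/machineTypes/e2-medium
    disks:
    - deviceName: boot
      type: PERSISTENT
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/family/debian-11
    networkInterfaces:
    - network: https://www.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks/default
EOF

gcloud deployment-manager deployments create devops-dm-deployment --config devops-config.yaml
gcloud deployment-manager deployments list
gcloud deployment-manager deployments update devops-dm-deployment --config devops-config.yaml
gcloud deployment-manager deployments delete devops-dm-deployment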
Terraform configuration example (callout: create a boot disk from an image):

boot_disk {
  initialize_params {
    image = "debian-cloud/debian-9"
  }
}

network_interface {
  network = "default"
}

● Supports both a native
Terraform
https://www.terraform.io/
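As a brief sketch, the typical Terraform workflow for applying such a configuration is:

terraform init      # download providers and set up the working directory
terraform plan      # preview the resources that would be created
terraform apply     # provision the resources described in the configuration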
2.1 Planning and estimating Google Cloud product using the Pricing Calculator
Config Connector
● Part of the Anthos toolset
● Example shown on the slide: YAML creating a Pub/Sub topic, and the command to deploy and create the topic
YouTube video: https://www.youtube.com/watch?v=3lAOr2XdAh4
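A hedged sketch of the Pub/Sub topic example (the topic name is illustrative; Config Connector must already be installed and configured on the cluster):

cat <<EOF > pubsub-topic.yaml
apiVersion: pubsub.cnrm.cloud.google.com/v1beta1
kind: PubSubTopic
metadata:
  name: example-topic
EOF

kubectl apply -f pubsub-topic.yaml    # deploy and create the topic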
2.1 Planning and estimating Google Cloud product using the Pricing Calculator