Professional Cloud Developer
Question #: 1
Answer: A
Question #: 2
Q: You migrated your applications to Google Cloud Platform and kept your existing
monitoring platform. You now find that your notification system is too slow for time-critical
problems.
What should you do?
Answer: C
Question #: 3
Q: You are planning to migrate a MySQL database to the managed Cloud SQL database for
Google Cloud. You have Compute Engine virtual machine instances that will connect with
this Cloud SQL instance. You do not want to whitelist IPs for the Compute Engine instances
to be able to access Cloud SQL.
What should you do?
Answer: A
Question #: 4
Q: You have deployed an HTTP(S) Load Balancer with the gcloud commands shown below.
Health checks to port 80 on the Compute Engine virtual machine instance are failing and no
traffic is sent to your instances. You want to resolve the problem.
Which commands should you run?
Answer: C
Question #: 5
Q: Your website is deployed on Compute Engine. Your marketing team wants to test
conversion rates between 3 different website designs.
Which approach should you use?
Answer: A
Question #: 6
Q: You need to copy directory local-scripts and all of its contents from your local
workstation to a Compute Engine virtual machine instance.
Which command should you use?
Answer: C
Question #: 7
Q: You are deploying your application to a Compute Engine virtual machine instance with
the Stackdriver Monitoring Agent installed. Your application is a Unix process on the
instance. You want to be alerted if the Unix process has not run for at least 5 minutes. You
are not able to change the application to generate metrics or logs.
Which alert condition should you configure?
A. Uptime check
B. Process health
C. Metric absence
D. Metric threshold
Answer: C
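Study note: a metric-absence condition can also be created programmatically with the Cloud Monitoring API. The sketch below is a minimal illustration only; the project ID and the agent process-count metric filter are assumptions, not part of the question.

    import datetime
    from google.cloud import monitoring_v3

    client = monitoring_v3.AlertPolicyServiceClient()

    # Alert when the Monitoring Agent's process metric stops reporting for 5 minutes.
    policy = monitoring_v3.AlertPolicy(
        display_name="Unix process absent",
        combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.OR,
        conditions=[
            monitoring_v3.AlertPolicy.Condition(
                display_name="process metric absent for 5 minutes",
                condition_absent=monitoring_v3.AlertPolicy.Condition.MetricAbsence(
                    filter='metric.type="agent.googleapis.com/processes/count_by_state"'
                           ' AND resource.type="gce_instance"',
                    duration=datetime.timedelta(minutes=5),
                ),
            )
        ],
    )
    client.create_alert_policy(name="projects/my-project", alert_policy=policy)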
Question #: 8
Q: You have two tables in an ANSI-SQL compliant database with identical columns that you
need to quickly combine into a single table, removing duplicate rows from the result set.
What should you do?
Answer: C
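Study note: in ANSI SQL, UNION DISTINCT combines two result sets and removes duplicate rows in a single statement. A minimal sketch using the BigQuery Python client follows; the dataset and table names are hypothetical.

    from google.cloud import bigquery

    client = bigquery.Client()
    # UNION DISTINCT merges both tables and drops duplicate rows.
    sql = """
        SELECT * FROM `my_dataset.table_a`
        UNION DISTINCT
        SELECT * FROM `my_dataset.table_b`
    """
    for row in client.query(sql).result():
        print(row)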
Question #: 9
Q: You have an application deployed in production. When a new version is deployed, some
issues don't arise until the application receives traffic from users in production. You want to
reduce both the impact and the number of users affected.
Which deployment strategy should you use?
A. Blue/green deployment
B. Canary deployment
C. Rolling deployment
D. Recreate deployment
Answer: B
Question #: 10
Q: Your company wants to expand the user base of their popular application outside the
United States. The company wants to ensure 99.999% availability of the database for their
application and also wants to minimize the read latency for their users across the globe.
Which two actions should they take? (Choose two.)
Answer: A C
Question #: 11
Q: You need to migrate an internal file upload API with an enforced 500-MB file size limit to
App Engine.
What should you do?
Answer: C
Question #: 12
Q: You are planning to deploy your application in a Google Kubernetes Engine (GKE) cluster.
The application exposes an HTTP-based health check at /healthz. You want to use this
health check endpoint to determine whether traffic should be routed to the pod by the load
balancer.
Which code snippet should you include in your Pod configuration?
(Options A-D are code snippet exhibits that are not reproduced in this text.)
Answer: B
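Study note: the configuration being asked for is a readiness probe pointing at /healthz; the load balancer routes traffic to a Pod only while its readiness probe passes. Since the option exhibits are not reproduced here, this sketch expresses the idea with the Kubernetes Python client; the port, timing values, and image name are assumptions.

    from kubernetes import client

    # Readiness probe: traffic reaches the Pod only while GET /healthz succeeds.
    readiness_probe = client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
        initial_delay_seconds=5,
        period_seconds=10,
    )
    container = client.V1Container(
        name="app",
        image="gcr.io/my-project/app:latest",  # hypothetical image
        readiness_probe=readiness_probe,
    )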
Question #: 13
Q: Your teammate has asked you to review the code below. Its purpose is to efficiently add a
large number of small rows to a BigQuery table.
Answer: A
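Study note: the code under review is not reproduced here, but the usual efficiency fix for this scenario is to batch many rows into each streaming-insert request rather than sending one row per API call. A hedged sketch with the BigQuery Python client; the table ID and row shape are assumptions.

    from google.cloud import bigquery

    client = bigquery.Client()
    table_id = "my-project.my_dataset.events"  # hypothetical table

    # Batch many small rows into a single insert request
    # instead of issuing one API call per row.
    rows = [{"user": f"user-{i}", "value": i} for i in range(500)]
    errors = client.insert_rows_json(table_id, rows)
    if errors:
        print("Insert errors:", errors)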
Question #: 14
Q: You are developing a JPEG image-resizing API hosted on Google Kubernetes Engine
(GKE). Callers of the service will exist within the same GKE cluster. You want clients to be
able to get the IP address of the service.
What should you do?
A. Define a GKE Service. Clients should use the name of the A record in Cloud DNS to find
the service's cluster IP address.
B. Define a GKE Service. Clients should use the service name in the URL to connect
to the service.
C. Define a GKE Endpoint. Clients should get the endpoint name from the appropriate
environment variable in the client container.
D. Define a GKE Endpoint. Clients should get the endpoint name from Cloud DNS.
Answer: B
Question #: 15
Q: You are using Cloud Build to build and test application source code stored in Cloud
Source Repositories. The build process requires a build tool not available in the Cloud Build
environment.
What should you do?
A. Download the binary from the internet during the build process.
B. Build a custom cloud builder image and reference the image in your build steps.
C. Include the binary in your Cloud Source Repositories repository and reference it in
your build scripts.
D. Ask to have the binary added to the Cloud Build environment by filing a feature
request against the Cloud Build public Issue Tracker.
Answer: B
Question #: 16
Q: You are deploying your application to a Compute Engine virtual machine instance. Your
application is configured to write its log files to disk. You want to view the logs in
Stackdriver Logging without changing the application code.
What should you do?
A. Install the Stackdriver Logging Agent and configure it to send the application
logs.
B. Use a Stackdriver Logging Library to log directly from the application to Stackdriver
Logging.
C. Provide the log file folder path in the metadata of the instance to configure it to send
the application logs.
D. Change the application to log to /var/log so that its logs are automatically sent to
Stackdriver Logging.
Answer: A
Question #: 17
Q: Your service adds text to images that it reads from Cloud Storage. During busy times of
the year, requests to Cloud Storage fail with an HTTP 429 "Too Many
Requests" status code.
How should you handle this error?
Answer: C
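Study note: HTTP 429 is a retryable rate-limiting response, and the standard client-side handling is retry with truncated exponential backoff plus jitter. A minimal, library-agnostic sketch; the request function and its status_code attribute are assumptions.

    import random
    import time

    def fetch_with_backoff(request_fn, max_retries=5):
        """Retry a Cloud Storage call on 429s with exponential backoff and jitter."""
        for attempt in range(max_retries):
            response = request_fn()
            if response.status_code != 429:
                return response
            # Wait 2^attempt seconds plus random jitter before retrying.
            time.sleep((2 ** attempt) + random.random())
        raise RuntimeError("Rate-limited after %d retries" % max_retries)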
Question #: 18
Q: You are building an API that will be used by Android and iOS apps. The API must:
* Support HTTPS
* Minimize bandwidth cost
* Integrate easily with mobile apps
Which API architecture should you use?
A. RESTful APIs
B. MQTT for APIs
C. gRPC-based APIs
D. SOAP-based APIs
Answer: C
Question #: 19
Q: Your application takes an input from a user and publishes it to the user's contacts. This
input is stored in a table in Cloud Spanner. Your application is more sensitive to latency and
less sensitive to consistency.
How should you perform reads from Cloud Spanner for this application?
Answer: B
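Study note: latency-sensitive, consistency-tolerant workloads can use stale reads in Cloud Spanner, which may be served by the nearest replica without the round trip a strong read requires. A minimal sketch with the Spanner Python client; the instance, database, table, and staleness window are assumptions.

    import datetime
    from google.cloud import spanner

    client = spanner.Client()
    database = client.instance("my-instance").database("my-db")  # hypothetical names

    # Stale read: results may lag by up to 15 seconds but avoid
    # the latency cost of a strong read.
    with database.snapshot(exact_staleness=datetime.timedelta(seconds=15)) as snapshot:
        rows = list(snapshot.execute_sql("SELECT ContactName FROM Contacts"))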
Question #: 20
Q: Your application is deployed in a Google Kubernetes Engine (GKE) cluster. When a new
version of your application is released, your CI/CD tool updates the
spec.template.spec.containers[0].image value to reference the Docker image of your new
application version. When the Deployment object applies the change, you want to deploy at
least 1 replica of the new version and maintain the previous replicas until the new replica is
healthy.
Which change should you make to the GKE Deployment object shown below?
Answer: B
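Study note: the behavior described (bring up at least one new replica while keeping all old replicas until it is healthy) maps to a RollingUpdate strategy with maxSurge of 1 and maxUnavailable of 0. Since the Deployment exhibit is not reproduced, here is the equivalent expressed with the Kubernetes Python client.

    from kubernetes import client

    # maxSurge=1 adds one extra replica running the new version;
    # maxUnavailable=0 keeps every old replica serving until the new one is ready.
    # This object would be assigned to deployment.spec.strategy.
    strategy = client.V1DeploymentStrategy(
        type="RollingUpdate",
        rolling_update=client.V1RollingUpdateDeployment(max_surge=1, max_unavailable=0),
    )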
Question #: 21
Q: You plan to make a simple HTML application available on the internet. This site keeps
information about FAQs for your application. The application is static and contains images,
HTML, CSS, and JavaScript. You want to make this application available on the internet with
as few steps as possible.
What should you do?
Answer: A
Question #: 22
Q: Your company has deployed a new API to App Engine Standard environment. During
testing, the API is not behaving as expected. You want to monitor the application over time
to diagnose the problem within the application code without redeploying the application.
Which tool should you use?
A. Stackdriver Trace
B. Stackdriver Monitoring
C. Stackdriver Debug Snapshots
D. Stackdriver Debug Logpoints
Answer: D
Question #: 23
Q: You want to use the Stackdriver Logging Agent to send an application's log file to
Stackdriver from a Compute Engine virtual machine instance.
After installing the Stackdriver Logging Agent, what should you do first?
Answer: C
Question #: 24
Q: Your company has a BigQuery data mart that provides analytics information to hundreds
of employees. One user wants to run jobs without interrupting important workloads. This
user isn't concerned about the time it takes to run these jobs. You want to fulfill this request
while minimizing cost to the company and the effort required on your part.
What should you do?
Answer: A
Question #: 25
Q: You want to notify on-call engineers about a service degradation in production while
minimizing development time.
What should you do?
Answer: D
Question #: 26
Q: You are writing a single-page web application with a user-interface that communicates
with a third-party API for content using XMLHttpRequest. The API data displayed in the UI
is less critical than other data shown on the same web page, so it is acceptable for some
requests to not have the API data displayed. However, calls
made to the API should not delay rendering of other parts of the user interface. You want
your application to perform well when the API response is an error or a timeout.
What should you do?
A. Set the asynchronous option for your requests to the API to false and omit the widget
displaying the API results when a timeout or error is encountered.
B. Set the asynchronous option for your request to the API to true and omit the
widget displaying the API results when a timeout or error is encountered.
C. Catch timeout or error exceptions from the API call and keep trying with exponential
backoff until the API response is successful.
D. Catch timeout or error exceptions from the API call and display the error response in
the UI widget.
Answer: B
Question #: 27
Q: You are creating a web application that runs in a Compute Engine instance and writes a
file to any user's Google Drive. You need to configure the application to authenticate to the
Google Drive API. What should you do?
Question #: 28
Q: You are creating a Google Kubernetes Engine (GKE) cluster and run this command:
Answer: B
Question #: 29
Q: You are parsing a log file that contains three columns: a timestamp, an account number (a
string), and a transaction amount (a number). You want to calculate the sum of all
transaction amounts for each unique account number efficiently.
Which data structure should you use?
A. A linked list
B. A hash table
C. A two-dimensional array
D. A comma-delimited string
Answer: B
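Study note: a hash table gives amortized O(1) lookup per log line, so the per-account sums can be accumulated in a single pass. A small Python illustration; the comma-separated log format is an assumption.

    from collections import defaultdict

    def sum_by_account(lines):
        """One pass over the log; O(1) average lookup per line via a hash table."""
        totals = defaultdict(float)
        for line in lines:
            timestamp, account, amount = line.split(",")
            totals[account] += float(amount)
        return totals

    print(sum_by_account(["2024-01-01T00:00:00,acct-1,10.50",
                          "2024-01-01T00:01:00,acct-1,4.25"]))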
Question #: 30
Q: Your company has a BigQuery dataset named "Master" that keeps information about
employee travel and expenses. This information is organized by employee department. That
means employees should only be able to view information for their department. You want
to apply a security framework to enforce this requirement with the minimum number of
steps.
What should you do?
A. Create a separate dataset for each department. Create a view with an appropriate
WHERE clause to select records from a particular dataset for the specific department.
Authorize this view to access records from your Master dataset. Give employees the
permission to this department-specific dataset.
B. Create a separate dataset for each department. Create a data pipeline for each
department to copy appropriate information from the Master dataset to the specific
dataset for the department. Give employees the permission to this department-specific
dataset.
C. Create a dataset named Master dataset. Create a separate view for each
department in the Master dataset. Give employees access to the specific view for
their department.
D. Create a dataset named Master dataset. Create a separate table for each department
in the Master dataset. Give employees access to the specific table for their department.
Answer: C
Question #: 31
Q: You have an application in production. It is deployed on Compute Engine virtual machine
instances controlled by a managed instance group. Traffic is routed to the instances via an
HTTP(S) load balancer. Your users are unable to access your application. You want to
implement a monitoring technique to alert you when the application is unavailable.
Which technique should you choose?
A. Smoke tests
B. Stackdriver uptime checks
C. Cloud Load Balancing - health checks
D. Managed instance group - health checks
Answer: B
Question #: 32
Q: You are load testing your server application. During the first 30 seconds, you observe
that a previously inactive Cloud Storage bucket is now servicing 2000 write requests per
second and 7500 read requests per second. Your application is now receiving intermittent
5xx and 429 HTTP responses from the Cloud Storage
JSON API as the demand escalates. You want to decrease the failed responses from the
Cloud Storage API.
What should you do?
Answer: D
Question #: 33
Q: Your application is controlled by a managed instance group. You want to share a large
read-only data set between all the instances in the managed instance group. You want to
ensure that each instance can start quickly and can access the data set via its filesystem with
very low latency. You also want to minimize the total cost of the solution.
What should you do?
A. Move the data to a Cloud Storage bucket, and mount the bucket on the filesystem
using Cloud Storage FUSE.
B. Move the data to a Cloud Storage bucket, and copy the data to the boot disk of the
instance via a startup script.
C. Move the data to a Compute Engine persistent disk, and attach the disk in read-
only mode to multiple Compute Engine virtual machine instances.
D. Move the data to a Compute Engine persistent disk, take a snapshot, create multiple
disks from the snapshot, and attach each disk to its own instance.
Answer: C
Question #: 34
Q: You are developing an HTTP API hosted on a Compute Engine virtual machine instance
that needs to be invoked by multiple clients within the same Virtual
Private Cloud (VPC). You want clients to be able to get the IP address of the service.
What should you do?
Answer: C
Question #: 35
Q: Your application is logging to Stackdriver. You want to get the count of all requests on
all /api/alpha/* endpoints.
What should you do?
Answer: B
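Study note: counting requests across a path prefix is typically done with a logs-based counter metric over an advanced filter. A sketch with the Cloud Logging Python client; the metric name and the gae_app resource type are assumptions.

    from google.cloud import logging

    client = logging.Client()
    # Counter metric over every request whose URL contains /api/alpha/.
    metric = client.metric(
        "alpha_endpoint_requests",
        filter_='resource.type="gae_app" AND httpRequest.requestUrl:"/api/alpha/"',
    )
    metric.create()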
Question #: 36
Q: You want to re-architect a monolithic application so that it follows a microservices
model. You want to accomplish this efficiently while minimizing the impact of this change to
the business.
Which approach should you take?
Answer: B
Question #: 37
Q: Your existing application keeps user state information in a single MySQL database. This
state information is very user-specific and depends heavily on how long a user has been
using an application. The MySQL database is causing challenges to maintain and enhance
the schema for various users.
Which storage option should you choose?
A. Cloud SQL
B. Cloud Storage
C. Cloud Spanner
D. Cloud Datastore/Firestore
Answer: D
Question #: 38
Q: You are building a new API. You want to minimize the cost of storing images and reduce
the latency of serving them.
Which architecture should you use?
Answer: D
Question #: 39
Q: Your company's development teams want to use Cloud Build in their projects to build
and push Docker images to Container Registry. The operations team requires all Docker
images to be published to a centralized, securely managed Docker registry that the
operations team manages.
What should you do?
Answer: B
Question #: 40
Q: You are planning to deploy your application in a Google Kubernetes Engine (GKE) cluster.
Your application can scale horizontally, and each instance of your application needs to have
a stable network identity and its own persistent disk.
Which GKE object should you use?
A. Deployment
B. StatefulSet
C. ReplicaSet
D. ReplicaController
Answer: B
Question #: 41
Q: You are using Cloud Build to build a Docker image. You need to modify the build to
run unit and integration tests. When there is a failure, you want the build history to
clearly display the stage at which the build failed.
What should you do?
A. Add RUN commands in the Dockerfile to execute unit and integration tests.
B. Create a Cloud Build build config file with a single build step to compile unit and
integration tests.
C. Create a Cloud Build build config file that will spawn a separate cloud build pipeline
for unit and integration tests.
D. Create a Cloud Build build config file with separate cloud builder steps to
compile and execute unit and integration tests.
Answer: D
Question #: 42
Q: Your code is running on Cloud Functions in project A. It is supposed to write an object in
a Cloud Storage bucket owned by project B. However, the write call is failing with the error
"403 Forbidden".
What should you do to correct the problem?
A. Grant your user account the roles/storage.objectCreator role for the Cloud Storage
bucket.
B. Grant your user account the roles/iam.serviceAccountUser role for the
service-PROJECTA@gcf-admin-robot.iam.gserviceaccount.com service account.
C. Grant the service-PROJECTA@gcf-admin-robot.iam.gserviceaccount.com
service account the roles/storage.objectCreator role for the Cloud Storage bucket.
D. Enable the Cloud Storage API in project B.
Answer: C
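Study note: granting the function's service account objectCreator on the bucket can be done with a gcloud/gsutil one-liner or through the Storage client. A hedged sketch in Python; the bucket name is hypothetical and the service account email is a placeholder matching the option text.

    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("project-b-bucket")  # hypothetical bucket in project B

    # Append an objectCreator binding for the function's service account.
    policy = bucket.get_iam_policy(requested_policy_version=3)
    policy.bindings.append({
        "role": "roles/storage.objectCreator",
        "members": {"serviceAccount:service-PROJECTA@gcf-admin-robot.iam.gserviceaccount.com"},
    })
    bucket.set_iam_policy(policy)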
Question #: 43
Q: Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time
as you would like to complete each case. However, there may be additional case studies and
sections on this exam. You must manage your time to ensure that you are able to complete
all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information
that is provided in the case study. Case studies might contain exhibits and other resources
that provide more information about the scenario that is described in the case study. Each
question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review
your answers and to make changes before you move to the next section of the exam. After
you begin a new section, you cannot return to this section.
Executive Statement -
We are the number one local community app; it's time to take our local community services
global. Our venture capital investors want to see rapid growth and the same great
experience for new local and virtual communities that come online, whether their members
are 10 or 10000 miles away from each other.
Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions
to better serve their global customers. They want to hire and train a new team to support
these regions in their time zones. They will need to ensure that the application scales
smoothly and provides clear uptime data.
Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand
they are seeing. Their requirements are:
* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.
Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.
HipLocal's .NET-based auth service fails under intermittent load.
What should they do?
Answer: A
Question #: 44
Q: Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time
as you would like to complete each case. However, there may be additional case studies and
sections on this exam. You must manage your time to ensure that you are able to complete
all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information
that is provided in the case study. Case studies might contain exhibits and other resources
that provide more information about the scenario that is described in the case study. Each
question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review
your answers and to make changes before you move to the next section of the exam. After
you begin a new section, you cannot return to this section.
Company Overview -
HipLocal is a community application designed to facilitate communication between people
in close proximity. It is used for event planning and organizing sporting events, and for
businesses to connect with their local communities. HipLocal launched recently in a few
neighborhoods in Dallas and is rapidly growing into a global phenomenon. Its unique style
of hyper-local community communication and business outreach is in demand around the
world.
Executive Statement -
We are the number one local community app; it's time to take our local community services
global. Our venture capital investors want to see rapid growth and the same great
experience for new local and virtual communities that come online, whether their members
are 10 or 10000 miles away from each other.
Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions
to better serve their global customers. They want to hire and train a new team to support
these regions in their time zones. They will need to ensure that the application scales
smoothly and provides clear uptime data.
Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand
they are seeing. Their requirements are:
* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.
Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.
HipLocal's APIs are having occasional application failures. They want to collect application
information specifically to troubleshoot the issue. What should they do?
Answer: B
Question #: 45
Q: Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time
as you would like to complete each case. However, there may be additional case studies and
sections on this exam. You must manage your time to ensure that you are able to complete
all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information
that is provided in the case study. Case studies might contain exhibits and other resources
that provide more information about the scenario that is described in the case study. Each
question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review
your answers and to make changes before you move to the next section of the exam. After
you begin a new section, you cannot return to this section.
Company Overview -
HipLocal is a community application designed to facilitate communication between people
in close proximity. It is used for event planning and organizing sporting events, and for
businesses to connect with their local communities. HipLocal launched recently in a few
neighborhoods in Dallas and is rapidly growing into a global phenomenon. Its unique style
of hyper-local community communication and business outreach is in demand around the
world.
Executive Statement -
We are the number one local community app; it's time to take our local community services
global. Our venture capital investors want to see rapid growth and the same great
experience for new local and virtual communities that come online, whether their members
are 10 or 10000 miles away from each other.
Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions
to better serve their global customers. They want to hire and train a new team to support
these regions in their time zones. They will need to ensure that the application scales
smoothly and provides clear uptime data.
Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand
they are seeing. Their requirements are:
* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.
Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.
HipLocal has connected their Hadoop infrastructure to GCP using Cloud Interconnect in
order to query data stored on persistent disks.
Which IP strategy should they use?
Answer: A
Question #: 46
Q: Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time
as you would like to complete each case. However, there may be additional case studies and
sections on this exam. You must manage your time to ensure that you are able to complete
all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information
that is provided in the case study. Case studies might contain exhibits and other resources
that provide more information about the scenario that is described in the case study. Each
question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review
your answers and to make changes before you move to the next section of the exam. After
you begin a new section, you cannot return to this section.
Company Overview -
HipLocal is a community application designed to facilitate communication between people
in close proximity. It is used for event planning and organizing sporting events, and for
businesses to connect with their local communities. HipLocal launched recently in a few
neighborhoods in Dallas and is rapidly growing into a global phenomenon. Its unique style
of hyper-local community communication and business outreach is in demand around the
world.
Executive Statement -
We are the number one local community app; it's time to take our local community services
global. Our venture capital investors want to see rapid growth and the same great
experience for new local and virtual communities that come online, whether their members
are 10 or 10000 miles away from each other.
Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions
to better serve their global customers. They want to hire and train a new team to support
these regions in their time zones. They will need to ensure that the application scales
smoothly and provides clear uptime data.
Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand
they are seeing. Their requirements are:
* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.
Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.
Which service should HipLocal use to enable access to internal apps?
A. Cloud VPN
B. Cloud Armor
C. Virtual Private Cloud
D. Cloud Identity-Aware Proxy
Answer: D
Question #: 47
Q: Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time
as you would like to complete each case. However, there may be additional case studies and
sections on this exam. You must manage your time to ensure that you are able to complete
all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information
that is provided in the case study. Case studies might contain exhibits and other resources
that provide more information about the scenario that is described in the case study. Each
question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review
your answers and to make changes before you move to the next section of the exam. After
you begin a new section, you cannot return to this section.
Company Overview -
HipLocal is a community application designed to facilitate communication between people
in close proximity. It is used for event planning and organizing sporting events, and for
businesses to connect with their local communities. HipLocal launched recently in a few
neighborhoods in Dallas and is rapidly growing into a global phenomenon. Its unique style
of hyper-local community communication and business outreach is in demand around the
world.
Executive Statement -
We are the number one local community app; it's time to take our local community services
global. Our venture capital investors want to see rapid growth and the same great
experience for new local and virtual communities that come online, whether their members
are 10 or 10000 miles away from each other.
Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions
to better serve their global customers. They want to hire and train a new team to support
these regions in their time zones. They will need to ensure that the application scales
smoothly and provides clear uptime data.
Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand
they are seeing. Their requirements are:
* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.
Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.
HipLocal wants to reduce the number of on-call engineers and eliminate manual scaling.
Which two services should they choose? (Choose two.)
Answer: C D
Question #: 48
Q: Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time
as you would like to complete each case. However, there may be additional case studies and
sections on this exam. You must manage your time to ensure that you are able to complete
all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information
that is provided in the case study. Case studies might contain exhibits and other resources
that provide more information about the scenario that is described in the case study. Each
question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review
your answers and to make changes before you move to the next section of the exam. After
you begin a new section, you cannot return to this section.
Company Overview -
HipLocal is a community application designed to facilitate communication between people
in close proximity. It is used for event planning and organizing sporting events, and for
businesses to connect with their local communities. HipLocal launched recently in a few
neighborhoods in Dallas and is rapidly growing into a global phenomenon. Its unique style
of hyper-local community communication and business outreach is in demand around the
world.
Executive Statement -
We are the number one local community app; it's time to take our local community services
global. Our venture capital investors want to see rapid growth and the same great
experience for new local and virtual communities that come online, whether their members
are 10 or 10000 miles away from each other.
Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions
to better serve their global customers. They want to hire and train a new team to support
these regions in their time zones. They will need to ensure that the application scales
smoothly and provides clear uptime data.
Existing Technical Environment -
HipLocal's environment is a mix of on-premises hardware and infrastructure running in
Google Cloud Platform. The HipLocal team understands their application well, but has
limited experience in global scale applications. Their existing technical environment is as
follows:
* Existing APIs run on Compute Engine virtual machine instances hosted in GCP.
* State is stored in a single instance MySQL database in GCP.
* Data is exported to an on-premises Teradata/Vertica data warehouse.
* Data analytics is performed in an on-premises Hadoop environment.
* The application has no logging.
* There are basic indicators of uptime; alerts are frequently fired when the APIs are
unresponsive.
Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand
they are seeing. Their requirements are:
* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.
Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.
In order to meet their business requirements, how should HipLocal store their application
state?
Answer: C
Question #: 49
Q: Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time
as you would like to complete each case. However, there may be additional case studies and
sections on this exam. You must manage your time to ensure that you are able to complete
all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information
that is provided in the case study. Case studies might contain exhibits and other resources
that provide more information about the scenario that is described in the case study. Each
question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review
your answers and to make changes before you move to the next section of the exam. After
you begin a new section, you cannot return to this section.
Company Overview -
HipLocal is a community application designed to facilitate communication between people
in close proximity. It is used for event planning and organizing sporting events, and for
businesses to connect with their local communities. HipLocal launched recently in a few
neighborhoods in Dallas and is rapidly growing into a global phenomenon. Its unique style
of hyper-local community communication and business outreach is in demand around the
world.
Executive Statement -
We are the number one local community app; it's time to take our local community services
global. Our venture capital investors want to see rapid growth and the same great
experience for new local and virtual communities that come online, whether their members
are 10 or 10000 miles away from each other.
Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions
to better serve their global customers. They want to hire and train a new team to support
these regions in their time zones. They will need to ensure that the application scales
smoothly and provides clear uptime data.
Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand
they are seeing. Their requirements are:
* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.
Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.
Which service should HipLocal use for their public APIs?
A. Cloud Armor
B. Cloud Functions
C. Cloud Endpoints
D. Shielded Virtual Machines
Answer: C
Question #: 50
Q: Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time
as you would like to complete each case. However, there may be additional case studies and
sections on this exam. You must manage your time to ensure that you are able to complete
all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information
that is provided in the case study. Case studies might contain exhibits and other resources
that provide more information about the scenario that is described in the case study. Each
question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review
your answers and to make changes before you move to the next section of the exam. After
you begin a new section, you cannot return to this section.
Company Overview -
HipLocal is a community application designed to facilitate communication between people
in close proximity. It is used for event planning and organizing sporting events, and for
businesses to connect with their local communities. HipLocal launched recently in a few
neighborhoods in Dallas and is rapidly growing into a global phenomenon. Its unique style
of hyper-local community communication and business outreach is in demand around the
world.
Executive Statement -
We are the number one local community app; it's time to take our local community services
global. Our venture capital investors want to see rapid growth and the same great
experience for new local and virtual communities that come online, whether their members
are 10 or 10000 miles away from each other.
Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions
to better serve their global customers. They want to hire and train a new team to support
these regions in their time zones. They will need to ensure that the application scales
smoothly and provides clear uptime data.
Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand
they are seeing. Their requirements are:
* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.
Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.
HipLocal wants to improve the resilience of their MySQL deployment, while also meeting
their business and technical requirements.
Which configuration should they choose?
A. Use the current single-instance MySQL on Compute Engine and several read-only
MySQL servers on Compute Engine.
B. Use the current single-instance MySQL on Compute Engine, and replicate the data to
Cloud SQL in an external master configuration.
C. Replace the current single-instance MySQL with Cloud SQL, and configure high
availability.
D. Replace the current single-instance MySQL with Cloud SQL; Google provides
redundancy without further configuration.
Answer: C
Question #: 51
Q: Your application is running in multiple Google Kubernetes Engine clusters. It is managed
by a Deployment in each cluster. The Deployment has created multiple replicas of your Pod
in each cluster. You want to view the logs sent to stdout for all of the replicas in your
Deployment in all clusters.
Which command should you use?
Answer:
Question #: 52
Q: You are using Cloud Build to create a new Docker image on each source code commit to a
Cloud Source Repositories repository. Your application is built on every commit to the
master branch. You want to release specific commits made to the master branch in an
automated method.
What should you do?
Answer: B
Question #: 53
Q: You are designing a schema for a table that will be moved from MySQL to Cloud Bigtable.
The MySQL table is as follows:
How should you design a row key for Cloud Bigtable for this table?
Answer: B
Question #: 54
Q: You want to view the memory usage of your application deployed on Compute Engine.
What should you do?
Answer: B
Question #: 55
Q: You have an analytics application that runs hundreds of queries on BigQuery every few
minutes using BigQuery API. You want to find out how much time these queries take to
execute.
What should you do?
Answer:
Question #: 56
Q: You are designing a schema for a Cloud Spanner customer database. You want to store a
phone number array field in a customer table. You also want to allow users to search
customers by phone number.
How should you design this schema?
A. Create a table named Customers. Add an Array field in a table that will hold phone
numbers for the customer.
B. Create a table named Customers. Create a table named Phones. Add a CustomerId
field in the Phones table to find the CustomerId from a phone number.
C. Create a table named Customers. Add an Array field in a table that will hold phone
numbers for the customer. Create a secondary index on the Array field.
D. Create a table named Customers as a parent table. Create a table named
Phones, and interleave this table into the Customer table. Create an index on the
phone number field in the Phones table.
Answer: D
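Study note: the interleaved design in answer D stores each customer's phone rows physically with the parent customer row, and the secondary index supports lookup by number. A DDL sketch submitted through the Spanner Python client; all names are illustrative.

    from google.cloud import spanner

    client = spanner.Client()
    database = client.instance("my-instance").database("my-db")  # hypothetical

    operation = database.update_ddl([
        """CREATE TABLE Customers (
               CustomerId INT64 NOT NULL,
               Name STRING(MAX)
           ) PRIMARY KEY (CustomerId)""",
        # Phones is interleaved in Customers, so a customer's numbers
        # live with the parent row.
        """CREATE TABLE Phones (
               CustomerId INT64 NOT NULL,
               PhoneNumber STRING(32) NOT NULL
           ) PRIMARY KEY (CustomerId, PhoneNumber),
           INTERLEAVE IN PARENT Customers ON DELETE CASCADE""",
        # Secondary index for search-by-phone-number.
        "CREATE INDEX PhonesByNumber ON Phones(PhoneNumber)",
    ])
    operation.result()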
Question #: 57
Q: You are deploying a single website on App Engine that needs to be accessible via the URL
http://www.altostrat.com/.
What should you do?
A. Verify domain ownership with Webmaster Central. Create a DNS CNAME record
to point to the App Engine canonical name ghs.googlehosted.com.
B. Verify domain ownership with Webmaster Central. Define an A record pointing to the
single global App Engine IP address.
C. Define a mapping in dispatch.yaml to point the domain www.altostrat.com to your
App Engine service. Create a DNS CNAME record to point to the App Engine canonical
name ghs.googlehosted.com.
D. Define a mapping in dispatch.yaml to point the domain www.altostrat.com to your
App Engine service. Define an A record pointing to the single global App Engine IP
address.
Answer: A
Question #: 58
Q: You are running an application on App Engine that you inherited. You want to find out
whether the application is using insecure binaries or is vulnerable to XSS attacks.
Which service should you use?
A. Cloud Armor
B. Stackdriver Debugger
C. Cloud Security Scanner
D. Stackdriver Error Reporting
Answer: C
Question #: 59
Q: You are working on a social media application. You plan to add a feature that allows users
to upload images. These images will be 2 MB to 1 GB in size. You want to minimize the
infrastructure operations overhead for this feature.
What should you do?
A. Change the application to accept images directly and store them in the database that
stores other user information.
B. Change the application to create signed URLs for Cloud Storage. Transfer these
signed URLs to the client application to upload images to Cloud Storage.
C. Set up a web server on GCP to accept user images and create a file store to keep
uploaded files. Change the application to retrieve images from the file store.
D. Create a separate bucket for each user in Cloud Storage. Assign a separate service
account to allow write access on each bucket. Transfer service account credentials to
the client application based on user information. The application uses this service
account to upload images to Cloud Storage.
Answer: B
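Study note: with answer B, the server only mints a short-lived signed URL and the client uploads directly to Cloud Storage, so no upload traffic transits your own infrastructure. A minimal sketch; the bucket and object names are hypothetical.

    import datetime
    from google.cloud import storage

    client = storage.Client()
    blob = client.bucket("user-images").blob("uploads/image-123.jpg")  # hypothetical

    # Short-lived URL that lets the mobile client PUT the image directly.
    url = blob.generate_signed_url(
        version="v4",
        expiration=datetime.timedelta(minutes=15),
        method="PUT",
        content_type="image/jpeg",
    )
    print(url)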
Question #: 60
Q: Your application is built as a custom machine image. You have multiple unique
deployments of the machine image. Each deployment is a separate managed instance group
with its own template. Each deployment requires a unique set of configuration values. You
want to provide these unique values to each deployment but use the same custom machine
image in all deployments. You want to use out-of-the-box features of Compute Engine.
What should you do?
Answer: D
Question #: 61
Q: Your application performs well when tested locally, but it runs significantly slower after
you deploy it to a Compute Engine instance. You need to diagnose the problem.
What should you do?
A. File a ticket with Cloud Support indicating that the application performs faster locally.
B. Use Cloud Debugger snapshots to look at a point-in-time execution of the application.
C. Use Cloud Profiler to determine which functions within the application take the
longest amount of time.
D. Add logging commands to the application and use Cloud Logging to check where the
latency problem occurs.
Answer: C
Question #: 62
Q: You have an application running in App Engine. Your application is instrumented with
Stackdriver Trace. The /product-details request reports details about four known unique
products at /sku-details as shown below. You want to reduce the time it takes for the
request to complete.
What should you do?
Answer: C
Question #: 63
Q: Your company has a data warehouse that keeps your application information in
BigQuery. The BigQuery data warehouse holds 2 PB of user data. Recently, your company
expanded your user base to include EU users and needs to comply with these requirements:
* Your company must be able to delete all user account information upon user request.
* All EU user data must be stored in a single region specifically for EU users.
Which two actions should you take? (Choose two.)
Answer: B E
Question #: 64
Q: Your App Engine standard configuration is as follows:
service: production
instance_class: B1
You want to limit the application to 5 instances.
Which code snippet should you include in your configuration?
Answer: D
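Study note: B-class instances use basic scaling, where the instance cap is expressed as basic_scaling.max_instances; answer D is assumed to be that snippet. The sketch below validates the assumed configuration by parsing it as YAML (PyYAML assumed installed).

    import yaml

    # The B1 instance class uses basic scaling, so the cap of 5 instances
    # is expressed as basic_scaling.max_instances.
    config = yaml.safe_load("""
    service: production
    instance_class: B1
    basic_scaling:
      max_instances: 5
    """)
    assert config["basic_scaling"]["max_instances"] == 5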
Question #: 65
Q: Your analytics system executes queries against a BigQuery dataset. The SQL query is
executed in batch and passes the contents of a SQL file to the BigQuery
CLI. Then it redirects the BigQuery CLI output to another process. However, you are getting
a permission error from the BigQuery CLI when the queries are executed.
You want to resolve the issue. What should you do?
A. Grant the service account BigQuery Data Viewer and BigQuery Job User roles.
B. Grant the service account BigQuery Data Editor and BigQuery Data Viewer roles.
C. Create a view in BigQuery from the SQL query and SELECT* from the view in the CLI.
D. Create a new dataset in BigQuery, and copy the source table to the new dataset. Query
the new dataset and table from the CLI.
Answer: A
Question #: 66
Q: Your application is running on Compute Engine and is showing sustained failures for a
small number of requests. You have narrowed the cause down to a single
Compute Engine instance, but the instance is unresponsive to SSH.
What should you do next?
Answer: B
Question #: 67
Q: You configured your Compute Engine instance group to scale automatically according to
overall CPU usage. However, your application's response latency increases sharply before
the instance group has finished adding instances. You want to provide a more consistent latency
experience for your end users by changing the configuration of the instance group
autoscaler.
Which two configuration changes should you make? (Choose two.)
Answer: B D
Question #: 68
Q: You have an application controlled by a managed instance group. When you deploy a new
version of the application, costs should be minimized and the number of instances should
not increase. You want to ensure that, when each new instance is created, the deployment
only continues if the new instance is healthy.
What should you do?
Answer: B
Question #: 69
Q: Your application requires service accounts to be authenticated to GCP products via
credentials stored on its host Compute Engine virtual machine instances. You want to
distribute these credentials to the host instances as securely as possible.
What should you do?
A. Use HTTP signed URLs to securely provide access to the required resources.
B. Use the instance's service account Application Default Credentials to
authenticate to the required resources.
C. Generate a P12 file from the GCP Console after the instance is deployed, and copy the
credentials to the host instance before starting the application.
D. Commit the credential JSON file into your application's source repository, and have
your CI/CD process package it with the software that is deployed to the instance.
Answer: B
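Study note: Application Default Credentials resolve to the VM's attached service account via the metadata server, so no key file ever needs to be copied onto the instance. A minimal sketch follows.

    import google.auth
    from google.cloud import storage

    # On Compute Engine, default() returns credentials for the instance's
    # attached service account; no key file is distributed.
    credentials, project_id = google.auth.default()
    client = storage.Client(credentials=credentials, project=project_id)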
Question #: 70
Q: Your application is deployed in a Google Kubernetes Engine (GKE) cluster. You want to
expose this application publicly behind a Cloud Load Balancing HTTP(S) load balancer.
What should you do?
Answer: A
Question #: 71
Q: Your company is planning to migrate their on-premises Hadoop environment to the
cloud. Increasing storage cost and maintenance of data stored in HDFS is a major concern
for your company. You also want to make minimal changes to existing data analytics jobs
and existing architecture.
How should you proceed with the migration?
A. Migrate your data stored in Hadoop to BigQuery. Change your jobs to source their
information from BigQuery instead of the on-premises Hadoop environment.
B. Create Compute Engine instances with HDD instead of SSD to save costs. Then
perform a full migration of your existing environment into the new one in Compute
Engine instances.
C. Create a Cloud Dataproc cluster on Google Cloud Platform, and then migrate your
Hadoop environment to the new Cloud Dataproc cluster. Move your HDFS data into
larger HDD disks to save on storage costs.
D. Create a Cloud Dataproc cluster on Google Cloud Platform, and then migrate
your Hadoop code objects to the new cluster. Move your data to Cloud Storage and
leverage the Cloud Dataproc connector to run jobs on that data.
Answer: D
Question #: 72
Q: Your data is stored in Cloud Storage buckets. Fellow developers have reported that data
downloaded from Cloud Storage is resulting in slow API performance.
You want to research the issue to provide details to the GCP support team.
Which command should you run?
A. gsutil test -o output.json gs://my-bucket
B. gsutil perfdiag -o output.json gs://my-bucket
C. gcloud compute scp example-instance:~/test-data -o output.json gs://my-bucket
D. gcloud services test -o output.json gs://my-bucket
Answer: B
Question #: 73
Q: You are using Cloud Build to promote a Docker image to Development, Test, and
Production environments. You need to ensure that the same Docker image is deployed to
each of these environments.
How should you identify the Docker image in your build?
Answer: C
Question #: 74
Q: Your company has created an application that uploads a report to a Cloud Storage bucket.
When the report is uploaded to the bucket, you want to publish a message to a Cloud
Pub/Sub topic. You want to implement a solution that will take a small amount of effort to
implement.
What should you do?
Answer: A
Question #: 75
Q: Your teammate has asked you to review the code below, which is adding a credit to an
account balance in Cloud Datastore.
Which improvement should you suggest your teammate make?
Answer: B
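Study note: the code under review is not shown, but the improvement usually suggested for a read-modify-write on an account balance is wrapping it in a transaction so concurrent credits are not lost. A hedged sketch with the Datastore Python client; the kind and field names are assumptions, and the entity is assumed to exist.

    from google.cloud import datastore

    client = datastore.Client()

    def add_credit(account_id, amount):
        # The transaction makes the read-modify-write atomic, so two
        # concurrent credits cannot overwrite each other.
        with client.transaction():
            key = client.key("Account", account_id)
            account = client.get(key)
            account["balance"] += amount
            client.put(account)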
Question #: 76
Q: Your company stores their source code in a Cloud Source Repositories repository. Your
company wants to build and test their code on each source code commit to the repository
and requires a solution that is managed and has minimal operations overhead.
Which method should they use?
A. Use Cloud Build with a trigger configured for each source code commit.
B. Use Jenkins deployed via the Google Cloud Platform Marketplace, configured to watch
for source code commits.
C. Use a Compute Engine virtual machine instance with an open source continuous
integration tool, configured to watch for source code commits.
D. Use a source code commit trigger to push a message to a Cloud Pub/Sub topic that
triggers an App Engine service to build the source code.
Answer: A
Question #: 77
Q: You are writing a Compute Engine hosted application in project A that needs to securely
authenticate to a Cloud Pub/Sub topic in project B.
What should you do?
A. Configure the instances with a service account owned by project B. Add the service
account as a Cloud Pub/Sub publisher to project A.
B. Configure the instances with a service account owned by project A. Add the
service account as a publisher on the topic.
C. Configure Application Default Credentials to use the private key of a service account
owned by project B. Add the service account as a Cloud Pub/Sub publisher to project A.
D. Configure Application Default Credentials to use the private key of a service account
owned by project A. Add the service account as a publisher on the topic
Answer: B
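Study note: once project A's service account is a publisher on the topic, code in project A simply addresses the topic by its project-B path; the instance's attached service account authenticates the call. A minimal sketch; the project and topic names are hypothetical.

    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient()  # uses the instance's service account
    # The topic lives in project B; the IAM binding on the topic authorizes
    # project A's service account to publish.
    topic_path = publisher.topic_path("project-b", "shared-topic")
    future = publisher.publish(topic_path, b"hello from project A")
    print(future.result())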
Question #: 78
Q: You are developing a corporate tool on Compute Engine for the finance department,
which needs to authenticate users and verify that they are in the finance department. All
company employees use G Suite.
What should you do?
A. Enable Cloud Identity-Aware Proxy on the HTTP(s) load balancer and restrict
access to a Google Group containing users in the finance department. Verify the
provided JSON Web Token within the application.
B. Enable Cloud Identity-Aware Proxy on the HTTP(s) load balancer and restrict access
to a Google Group containing users in the finance department. Issue client-side
certificates to everybody in the finance team and verify the certificates in the
application.
C. Configure Cloud Armor Security Policies to restrict access to only corporate IP
address ranges. Verify the provided JSON Web Token within the application.
D. Configure Cloud Armor Security Policies to restrict access to only corporate IP
address ranges. Issue client side certificates to everybody in the finance team and verify
the certificates in the application.
Answer: A
Question #: 79
Q: Your API backend is running on multiple cloud providers. You want to generate reports
for the network latency of your API.
Which two steps should you take? (Choose two.)
Answer: A C
Question #: 80
Q: Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time
as you would like to complete each case. However, there may be additional case studies and
sections on this exam. You must manage your time to ensure that you are able to complete
all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information
that is provided in the case study. Case studies might contain exhibits and other resources
that provide more information about the scenario that is described in the case study. Each
question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review
your answers and to make changes before you move to the next section of the exam. After
you begin a new section, you cannot return to this section.
Company Overview -
HipLocal is a community application designed to facilitate communication between people
in close proximity. It is used for event planning and organizing sporting events, and for
businesses to connect with their local communities. HipLocal launched recently in a few
neighborhoods in Dallas and is rapidly growing into a global phenomenon. Its unique style
of hyper-local community communication and business outreach is in demand around the
world.
Executive Statement -
We are the number one local community app; it's time to take our local community services
global. Our venture capital investors want to see rapid growth and the same great
experience for new local and virtual communities that come online, whether their members
are 10 or 10,000 miles away from each other.
Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions
to better serve their global customers. They want to hire and train a new team to support
these regions in their time zones. They will need to ensure that the application scales
smoothly and provides clear uptime data.
Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand
they are seeing. Their requirements are:
* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.
Technical Requirements -
* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.
Which database should HipLocal use for storing user activity?
A. BigQuery
B. Cloud SQL
C. Cloud Spanner
D. Cloud Datastore
Answer: A
Question #: 81
Q: Case study -
HipLocal (identical to the case study introduced in Question #80).
HipLocal is configuring their access controls.
Which firewall configuration should they implement?
Answer: C
Question #: 82
Q: Case study -
HipLocal (identical to the case study introduced in Question #80).
HipLocal's data science team wants to analyze user reviews.
How should they prepare the data?
A. Use the Cloud Data Loss Prevention API for redaction of the review dataset.
B. Use the Cloud Data Loss Prevention API for de-identification of the review
dataset.
C. Use the Cloud Natural Language Processing API for redaction of the review dataset.
D. Use the Cloud Natural Language Processing API for de-identification of the review
dataset.
Answer: B
Question #: 83
Q: Case study -
HipLocal (identical to the case study introduced in Question #80).
In order for HipLocal to store application state and meet their stated business
requirements, which database service should they migrate to?
A. Cloud Spanner
B. Cloud Datastore
C. Cloud Memorystore as a cache
D. Separate Cloud SQL clusters for each region
Answer: A
Question #: 84
Q: You have an application deployed in production. When a new version is deployed, you
want to ensure that all production traffic is routed to the new version of your application.
You also want to keep the previous version deployed so that you can revert to it if there is
an issue with the new version.
Which deployment strategy should you use?
A. Blue/green deployment
B. Canary deployment
C. Rolling deployment
D. Recreate deployment
Answer: A
Question #: 85
Q: You are porting an existing Apache/MySQL/PHP application stack from a single machine
to Google
Kubernetes Engine. You need to determine how to containerize the application. Your
approach should follow Google-recommended best practices for availability.
What should you do?
Answer: A
Question #: 86
Q: You are developing an application that will be launched on Compute Engine instances
into multiple distinct projects, each corresponding to the environments in your software
development process (development, QA, staging, and production). The instances in each
project have the same application code but a different configuration. During deployment,
each instance should receive the application's configuration based on the environment it
serves. You want to minimize the number of steps to configure this flow. What should you
do?
A. When creating your instances, configure a startup script using the gcloud command
to determine the project name that indicates the correct environment.
B. In each project, configure a metadata key "environment" whose value is the
environment it serves. Use your deployment tool to query the instance metadata
and configure the application based on the "environment" value.
C. Deploy your chosen deployment tool on an instance in each project. Use a deployment
job to retrieve the appropriate configuration file from your version control system, and
apply the configuration when deploying the application on each instance.
D. During each instance launch, configure an instance custom-metadata key named
"environment" whose value is the environment the instance serves. Use your
deployment tool to query the instance metadata, and configure the application based on
the "environment" value.
Answer: B
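A sketch of option B with a hypothetical value: set the key once per project, then read it
from inside any instance via the metadata server.
gcloud compute project-info add-metadata --metadata environment=staging   # "staging" is a placeholder
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/project/attributes/environment"
The same curl works unchanged in every project, so no per-environment branching is needed
during deployment.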
Question #: 87
Q: You are developing an ecommerce application that stores customer, order, and inventory
data as relational tables inside Cloud Spanner. During a recent load test, you discover that
Spanner performance is not scaling linearly as expected. Which of the following is the
cause?
Answer: C
Question #: 88
Q: You are developing an application that reads credit card data from a Pub/Sub
subscription. You have written code and completed unit testing. You need to test the
Pub/Sub integration before deploying to Google Cloud. What should you do?
A. Create a service to publish messages, and deploy the Pub/Sub emulator. Generate
random content in the publishing service, and publish to the emulator.
B. Create a service to publish messages to your application. Collect the messages from
Pub/Sub in production, and replay them through the publishing service.
C. Create a service to publish messages, and deploy the Pub/Sub emulator. Collect the
messages from Pub/Sub in production, and publish them to the emulator.
D. Create a service to publish messages, and deploy the Pub/Sub emulator.
Publish a standard set of testing messages from the publishing service to the
emulator.
Answer: D
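A sketch of option D, assuming a hypothetical project ID: run the emulator locally and
point the client libraries at it.
gcloud components install pubsub-emulator
gcloud beta emulators pubsub start --project=test-project &   # test-project is a placeholder
$(gcloud beta emulators pubsub env-init)                      # exports PUBSUB_EMULATOR_HOST
With PUBSUB_EMULATOR_HOST set, the standard Pub/Sub client libraries talk to the emulator,
so a fixed set of synthetic test messages can be published and verified before anything
touches Google Cloud.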
Question #: 89
Q: You are designing an application that will subscribe to and receive messages from a
single Pub/Sub topic and insert corresponding rows into a database. Your application runs
on Linux and leverages preemptible virtual machines to reduce costs. You need to create a
shutdown script that will initiate a graceful shutdown.
What should you do?
A. Write a shutdown script that uses inter-process signals to notify the application
process to disconnect from the database.
B. Write a shutdown script that broadcasts a message to all signed-in users that the
Compute Engine instance is going down and instructs them to save current work and
sign out.
C. Write a shutdown script that writes a file in a location that is being polled by the
application once every five minutes. After the file is read, the application disconnects
from the database.
D. Write a shutdown script that publishes a message to the Pub/Sub topic announcing
that a shutdown is in progress. After the application reads the message, it disconnects
from the database.
Answer: A
Question #: 90
Q: You work for a web development team at a small startup. Your team is developing a
Node.js application using Google Cloud services, including Cloud Storage and Cloud Build.
The team uses a Git repository for version control. Your manager calls you over the
weekend and instructs you to make an emergency update to one of the company's websites,
and you're the only developer available. You need to access Google Cloud to make the
update, but you don't have your work laptop. You are not allowed to store source code
locally on a non-corporate computer. How should you set up your developer environment?
A. Use a text editor and the Git command line to send your source code updates as pull
requests from a public computer.
B. Use a text editor and the Git command line to send your source code updates as pull
requests from a virtual machine running on a public computer.
C. Use Cloud Shell and the built-in code editor for development. Send your source
code updates as pull requests.
D. Use a Cloud Storage bucket to store the source code that you need to edit. Mount the
bucket to a public computer as a drive, and use a code editor to update the code. Turn
on versioning for the bucket, and point it to the team's Git repository.
Answer: C
Question #: 91
Q: Your team develops services that run on Google Kubernetes Engine. You need to
standardize their log data using Google-recommended practices and make the data more
useful in the fewest number of steps. What should you do? (Choose two.)
Answer: A C
Question #: 92
Q: You are designing a deployment technique for your new applications on Google Cloud. As
part of your deployment planning, you want to use live traffic to gather performance
metrics for both new and existing applications. You need to test against the full production
load prior to launch. What should you do?
Answer: D
Question #: 93
Q: You support an application that uses the Cloud Storage API. You review the logs and
discover multiple HTTP 503 Service Unavailable error responses from the
API. Your application logs the error and does not take any further action. You want to
implement Google-recommended retry logic to improve success rates.
Which approach should you take?
Question #: 94
Q: You need to redesign the ingestion of audit events from your authentication service to
allow it to handle a large increase in traffic. Currently, the audit service and the
authentication system run in the same Compute Engine virtual machine. You plan to use the
following Google Cloud tools in the new architecture:
* Multiple Compute Engine machines, each running an instance of the authentication service
* Multiple Compute Engine machines, each running an instance of the audit service
* Pub/Sub to send the events from the authentication services.
How should you set up the topics and subscriptions to ensure that the system can handle a
large volume of messages and can scale efficiently?
A. Create one Pub/Sub topic. Create one pull subscription to allow the audit
services to share the messages.
B. Create one Pub/Sub topic. Create one pull subscription per audit service instance to
allow the services to share the messages.
C. Create one Pub/Sub topic. Create one push subscription with the endpoint pointing to
a load balancer in front of the audit services.
D. Create one Pub/Sub topic per authentication service. Create one pull subscription per
topic to be used by one audit service.
E. Create one Pub/Sub topic per authentication service. Create one push subscription
per topic, with the endpoint pointing to one audit service.
Answer: A
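A sketch of option A with hypothetical names: one topic for all authentication instances
and one pull subscription shared by all audit instances.
gcloud pubsub topics create auth-events            # names are placeholders
gcloud pubsub subscriptions create audit-sub --topic=auth-events
Every audit instance pulls from audit-sub, and Pub/Sub load-balances messages across the
pullers, so adding instances raises throughput without topology changes.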
Question #: 95
Q: You are developing a marquee stateless web application that will run on Google Cloud.
The rate of the incoming user traffic is expected to be unpredictable, with no traffic on some
days and large spikes on other days. You need the application to automatically scale up and
down, and you need to minimize the cost associated with running the application. What
should you do?
A. Build the application in Python with Firestore as the database. Deploy the
application to Cloud Run.
B. Build the application in C# with Firestore as the database. Deploy the application to
App Engine flexible environment.
C. Build the application in Python with CloudSQL as the database. Deploy the application
to App Engine standard environment.
D. Build the application in Python with Firestore as the database. Deploy the application
to a Compute Engine managed instance group with autoscaling.
Answer: A
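A sketch of the deployment in option A; the service name and image path are hypothetical:
gcloud run deploy web-app \
  --image=us-docker.pkg.dev/my-project/apps/web-app \
  --region=us-central1   # all identifiers are placeholders
Cloud Run scales to zero on idle days and adds instances automatically during spikes, so
cost tracks actual traffic.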
Question #: 96
Q: You have written a Cloud Function that accesses other Google Cloud resources. You want
to secure the environment using the principle of least privilege. What should you do?
A. Create a new service account that has Editor authority to access the resources. The
deployer is given permission to get the access token.
B. Create a new service account that has a custom IAM role to access the resources. The
deployer is given permission to get the access token.
C. Create a new service account that has Editor authority to access the resources. The
deployer is given permission to act as the new service account.
D. Create a new service account that has a custom IAM role to access the
resources. The deployer is given permission to act as the new service account.
Answer: D
Question #: 97
Q: You are a SaaS provider deploying dedicated blogging software to customers in your
Google Kubernetes Engine (GKE) cluster. You want to configure a secure multi-tenant
platform to ensure that each customer has access to only their own blog and can't affect the
workloads of other customers. What should you do?
Answer: B
Question #: 98
Q: You have decided to migrate your Compute Engine application to Google Kubernetes
Engine. You need to build a container image and push it to Artifact Registry using Cloud
Build. What should you do? (Choose two.)
A. Run gcloud builds submit in the directory that contains the application source
code.
B. Run gcloud run deploy app-name --image gcr.io/$PROJECT_ID/app-name in the
directory that contains the application source code.
C. Run gcloud container images add-tag gcr.io/$PROJECT_ID/app-name
gcr.io/$PROJECT_ID/app-name:latest in the directory that contains the application
source code.
D. In the application source directory, create a file named cloudbuild.yaml that
contains the following contents:
E. In the application source directory, create a file named cloudbuild.yaml that contains
the following contents:
Answer: A D
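The cloudbuild.yaml contents for options D and E were shown as exhibits and are not
reproduced here; a minimal file of the kind option D presumably contains, with a
hypothetical Artifact Registry path, might be:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'us-docker.pkg.dev/$PROJECT_ID/my-repo/app-name', '.']   # repo path is a placeholder
images:
- 'us-docker.pkg.dev/$PROJECT_ID/my-repo/app-name'
Running gcloud builds submit in the source directory (option A) picks up this file and
pushes the listed image on success.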
Question #: 99
Q: You are developing an internal application that will allow employees to organize
community events within your company. You deployed your application on a single
Compute Engine instance. Your company uses Google Workspace (formerly G Suite), and
you need to ensure that the company employees can authenticate to the application from
anywhere. What should you do?
A. Add a public IP address to your instance, and restrict access to the instance using
firewall rules. Allow your company's proxy as the only source IP address.
B. Add an HTTP(S) load balancer in front of the instance, and set up Identity-
Aware Proxy (IAP). Configure the IAP settings to allow your company domain to
access the website.
C. Set up a VPN tunnel between your company network and your instance's VPC location
on Google Cloud. Configure the required firewall rules and routing information to both
the on-premises and Google Cloud networks.
D. Add a public IP address to your instance, and allow traffic from the internet. Generate
a random hash, and create a subdomain that includes this hash and points to your
instance. Distribute this DNS address to your company's employees.
Answer: B
Question #: 100
Q: Your development team is using Cloud Build to promote a Node.js application built on
App Engine from your staging environment to production. The application relies on several
directories of photos stored in a Cloud Storage bucket named webphotos-staging in the
staging environment. After the promotion, these photos must be available in a Cloud
Storage bucket named webphotos-prod in the production environment. You want to
automate the process where possible. What should you do?
Answer:
Question #: 101
Q: You are developing a web application that will be accessible over both HTTP and HTTPS
and will run on Compute Engine instances. On occasion, you will need to SSH from your
remote laptop into one of the Compute Engine instances to conduct maintenance on the
app. How should you configure the instances while following Google-recommended best
practices?
A. Set up a backend with Compute Engine web server instances with a private IP
address behind a TCP proxy load balancer.
B. Configure the firewall rules to allow all ingress traffic to connect to the Compute
Engine web servers, with each server having a unique external IP address.
C. Configure Cloud Identity-Aware Proxy API for SSH access. Then configure the
Compute Engine servers with private IP addresses behind an HTTP(s) load
balancer for the application web traffic.
D. Set up a backend with Compute Engine web server instances with a private IP
address behind an HTTP(S) load balancer. Set up a bastion host with a public IP address
and open firewall ports. Connect to the web instances using the bastion host.
Answer: C
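A sketch of the SSH half of option C; the instance name and zone are hypothetical:
gcloud compute ssh web-1 --zone=us-central1-a --tunnel-through-iap   # placeholders
IAP tunnels the SSH session to the instance's private IP address, so no external address
or bastion host is required.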
Question #: 102
Q: You have a mixture of packaged and internally developed applications hosted on a
Compute Engine instance that is running Linux. These applications write log records as text
in local files. You want the logs to be written to Cloud Logging. What should you do?
Answer: B
Question #: 103
Q: You want to create "fully baked" or "golden" Compute Engine images for your application.
You need to bootstrap your application to connect to the appropriate database according to
the environment the application is running on (test, staging, production). What should you
do?
A. Embed the appropriate database connection string in the image. Create a different
image for each environment.
B. When creating the Compute Engine instance, add a tag with the name of the database
to be connected. In your application, query the Compute Engine API to pull the tags for
the current instance, and use the tag to construct the appropriate database connection
string.
C. When creating the Compute Engine instance, create a metadata item with a key of
"DATABASE" and a value for the appropriate database connection string. In your
application, read the "DATABASE" environment variable, and use the value to
connect to the appropriate database.
D. When creating the Compute Engine instance, create a metadata item with a key
of "DATABASE" and a value for the appropriate database connection string. In
your application, query the metadata server for the "DATABASE" value, and
use the value to connect to the appropriate database.
Answer: D
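A sketch of option D with a hypothetical connection string: attach the metadata at
creation time, then read it at application startup.
gcloud compute instances create app-vm --zone=us-central1-a \
  --metadata DATABASE="postgres://10.1.2.3:5432/appdb"   # value and zone are placeholders
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/attributes/DATABASE"
Because the value lives in instance metadata rather than in the image, one golden image
serves test, staging, and production unchanged.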
Question #: 104
Q: You are developing a microservice-based application that will be deployed on a Google
Kubernetes Engine cluster. The application needs to read and write to a
Spanner database. You want to follow security best practices while minimizing code
changes. How should you configure your application to retrieve Spanner credentials?
A. Configure the appropriate service accounts, and use Workload Identity to run
the pods.
B. Store the application credentials as Kubernetes Secrets, and expose them as
environment variables.
C. Configure the appropriate routing rules, and use a VPC-native cluster to directly
connect to the database.
D. Store the application credentials using Cloud Key Management Service, and retrieve
them whenever a database connection is made.
Answer: A
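A sketch of the Workload Identity setup in option A, with hypothetical cluster, namespace,
and account names:
gcloud container clusters update my-cluster --region=us-central1 \
  --workload-pool=my-project.svc.id.goog            # all names are placeholders
kubectl create serviceaccount app-ksa
gcloud iam service-accounts add-iam-policy-binding \
  app-gsa@my-project.iam.gserviceaccount.com \
  --role=roles/iam.workloadIdentityUser \
  --member="serviceAccount:my-project.svc.id.goog[default/app-ksa]"
kubectl annotate serviceaccount app-ksa \
  iam.gke.io/gcp-service-account=app-gsa@my-project.iam.gserviceaccount.com
Pods running as app-ksa then obtain short-lived credentials for app-gsa from the metadata
server, so the Spanner client code needs no key files.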
Question #: 105
Q: You are deploying your application on a Compute Engine instance that communicates
with Cloud SQL. You will use Cloud SQL Proxy to allow your application to communicate to
the database using the service account associated with the application's instance. You want
to follow the Google-recommended best practice of providing minimum access for the role
assigned to the service account. What should you do?
Answer: C
Question #: 106
Q: Your team develops stateless services that run on Google Kubernetes Engine (GKE). You
need to deploy a new service that will only be accessed by other services running in the GKE
cluster. The service will need to scale as quickly as possible to respond to changing load.
What should you do?
A. Use a Vertical Pod Autoscaler to scale the containers, and expose them via a ClusterIP
Service.
B. Use a Vertical Pod Autoscaler to scale the containers, and expose them via a NodePort
Service.
C. Use a Horizontal Pod Autoscaler to scale the containers, and expose them via a
ClusterIP Service.
D. Use a Horizontal Pod Autoscaler to scale the containers, and expose them via a
NodePort Service.
Answer: C
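A sketch of option C with hypothetical names and thresholds:
kubectl autoscale deployment my-service --cpu-percent=70 --min=2 --max=20   # placeholders
kubectl expose deployment my-service --type=ClusterIP --port=80 --target-port=8080
A ClusterIP Service is reachable only inside the cluster, and the Horizontal Pod
Autoscaler adds or removes replicas as load changes, which is how a stateless service
scales quickly.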
Question #: 107
Q: You recently migrated a monolithic application to Google Cloud by breaking it down into
microservices. One of the microservices is deployed using Cloud
Functions. As you modernize the application, you make a change to the API of the service
that is backward-incompatible. You need to support both existing callers who use the
original API and new callers who use the new API. What should you do?
A. Leave the original Cloud Function as-is and deploy a second Cloud Function with the
new API. Use a load balancer to distribute calls between the versions.
B. Leave the original Cloud Function as-is and deploy a second Cloud Function that
includes only the changed API. Calls are automatically routed to the correct function.
C. Leave the original Cloud Function as-is and deploy a second Cloud Function
with the new API. Use Cloud Endpoints to provide an API gateway that exposes a
versioned API.
D. Re-deploy the Cloud Function after making code changes to support the new API.
Requests for both versions of the API are fulfilled based on a version identifier included
in the call.
Answer: C
Question #: 108
Q: You are developing an application that will allow users to read and post comments on
news articles. You want to configure your application to store and display user-submitted
comments using Firestore. How should you design the schema to support an unknown
number of comments and articles?
Answer: A
Question #: 109
Q: You recently developed an application. You need to call the Cloud Storage API from a
Compute
Engine instance that doesn't have a public IP address. What should you do?
Answer: D
Question #: 110
Q: You are a developer working with the CI/CD team to troubleshoot a new feature that
your team introduced. The CI/CD team used HashiCorp Packer to create a new Compute
Engine image from your development branch. The image was successfully built, but is not
booting up. You need to investigate the issue with the CI/CD team. What should you do?
A. Create a new feature branch, and ask the build team to rebuild the image.
B. Shut down the deployed virtual machine, export the disk, and then mount the disk
locally to access the boot logs.
C. Install Packer locally, build the Compute Engine image locally, and then run it in your
personal Google Cloud project.
D. Check Compute Engine OS logs using the serial port, and check the Cloud
Logging logs to confirm access to the serial port.
Answer: D
Question #: 111
Q: You manage an application that runs in a Compute Engine instance. You also have
multiple backend services executing in stand-alone Docker containers running in Compute
Engine instances. The Compute Engine instances supporting the backend services are scaled
by managed instance groups in multiple regions. You want your calling application to be
loosely coupled. You need to be able to invoke distinct service implementations that are
chosen based on the value of an HTTP header found in the request. Which Google Cloud
feature should you use to invoke the backend services?
A. Traffic Director
B. Service Directory
C. Anthos Service Mesh
D. Internal HTTP(S) Load Balancing
Answer: A
Question #: 112
Q: Your team is developing an ecommerce platform for your company. Users will log in to
the website and add items to their shopping cart. Users will be automatically logged out
after 30 minutes of inactivity. When users log back in, their shopping cart should be saved.
How should you store users' session and shopping cart information while following Google-
recommended best practices?
A. Store the session information in Pub/Sub, and store the shopping cart information in
Cloud SQL.
B. Store the shopping cart information in a file on Cloud Storage where the filename is
the SESSION ID.
C. Store the session and shopping cart information in a MySQL database running on
multiple Compute Engine instances.
D. Store the session information in Memorystore for Redis or Memorystore for
Memcached, and store the shopping cart information in Firestore.
Answer: D
Question #: 113
Q: You are designing a resource-sharing policy for applications used by different teams in a
Google Kubernetes Engine cluster. You need to ensure that all applications can access the
resources needed to run. What should you do? (Choose two.)
Answer: B C
Question #: 114
Q: You are developing a new application that has the following design requirements:
* Creation and changes to the application infrastructure are versioned and auditable.
* The application and deployment infrastructure uses Google-managed services as much as
possible.
* The application runs on a serverless compute platform.
How should you design the application's architecture?
Answer: A
Question #: 115
Q: You are creating and running containers across different projects in Google Cloud. The
application you are developing needs to access Google Cloud services from within Google
Kubernetes Engine (GKE). What should you do?
Answer:
Question #: 116
Q: You have containerized a legacy application that stores its configuration on an NFS share.
You need to deploy this application to Google Kubernetes Engine
(GKE) and do not want the application serving traffic until after the configuration has been
retrieved. What should you do?
A. Use the gsutil utility to copy files from within the Docker container at startup, and
start the service using an ENTRYPOINT script.
B. Create a PersistentVolumeClaim on the GKE cluster. Access the configuration
files from the volume, and start the service using an ENTRYPOINT script.
C. Use the COPY statement in the Dockerfile to load the configuration into the container
image. Verify that the configuration is available, and start the service using an
ENTRYPOINT script.
D. Add a startup script to the GKE instance group to mount the NFS share at node
startup. Copy the configuration files into the container, and start the service using an
ENTRYPOINT script.
Answer: B
Question #: 117
Q: Your team is developing a new application using a PostgreSQL database and Cloud Run.
You are responsible for ensuring that all traffic is kept private on Google
Cloud. You want to use managed services and follow Google-recommended best practices.
What should you do?
A. 1. Enable Cloud SQL and Cloud Run in the same project. 2. Configure a private IP
address for Cloud SQL. Enable private services access. 3. Create a Serverless VPC
Access connector. 4. Configure Cloud Run to use the connector to connect to Cloud
SQL.
B. 1. Install PostgreSQL on a Compute Engine virtual machine (VM), and enable Cloud
Run in the same project. 2. Configure a private IP address for the VM. Enable private
services access. 3. Create a Serverless VPC Access connector. 4. Configure Cloud Run to
use the connector to connect to the VM hosting PostgreSQL.
C. 1. Use Cloud SQL and Cloud Run in different projects. 2. Configure a private IP address
for Cloud SQL. Enable private services access. 3. Create a Serverless VPC Access
connector. 4. Set up a VPN connection between the two projects. Configure Cloud Run to
use the connector to connect to Cloud SQL.
D. 1. Install PostgreSQL on a Compute Engine VM, and enable Cloud Run in different
projects. 2. Configure a private IP address for the VM. Enable private services access. 3.
Create a Serverless VPC Access connector. 4. Set up a VPN connection between the two
projects. Configure Cloud Run to use the connector to access the VM hosting PostgreSQL.
Answer: A
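A sketch of the connector steps in option A, with hypothetical names and an example /28
range:
gcloud compute networks vpc-access connectors create my-connector \
  --region=us-central1 --network=default --range=10.8.0.0/28   # placeholders throughout
gcloud run deploy my-service \
  --image=us-docker.pkg.dev/my-project/apps/my-service \
  --region=us-central1 --vpc-connector=my-connector
With the Cloud SQL instance on a private IP and private services access enabled, the Cloud
Run service reaches the database over internal addresses only.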
Question #: 118
Q: You are developing an application that will allow clients to download a file from your
website for a specific period of time. How should you design the application to complete this
task while following Google-recommended best practices?
A. Configure the application to send the file to the client as an email attachment.
B. Generate and assign a Cloud Storage-signed URL for the file. Make the URL
available for the client to download.
C. Create a temporary Cloud Storage bucket with time expiration specified, and give
download permissions to the bucket. Copy the file, and send it to the client.
D. Generate the HTTP cookies with time expiration specified. If the time is valid, copy
the file from the Cloud Storage bucket, and make the file available for the client to
download.
Answer: B
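A sketch of option B; the key file, bucket, and object names are hypothetical:
gsutil signurl -d 1h sa-key.json gs://my-bucket/report.pdf   # URL valid for one hour
Anyone holding the returned URL can download the object until the hour elapses; afterwards
the signature is rejected, and no bucket permissions ever change.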
Question #: 119
Q: Your development team has been asked to refactor an existing monolithic application
into a set of composable microservices. Which design aspects should you implement for the
new application? (Choose two.)
A. Develop the microservice code in the same programming language used by the
microservice caller.
B. Create an API contract agreement between the microservice implementation
and microservice caller.
C. Require asynchronous communications between all microservice implementations
and microservice callers.
D. Ensure that sufficient instances of the microservice are running to accommodate the
performance requirements.
E. Implement a versioning scheme to permit future changes that could be incompatible
with the current interface.
Answer: B
Question #: 120
Q: You deployed a new application to Google Kubernetes Engine and are experiencing some
performance degradation. Your logs are being written to Cloud
Logging, and you are using a Prometheus sidecar model for capturing metrics. You need to
correlate the metrics and data from the logs to troubleshoot the performance issue and
send real-time alerts while minimizing costs. What should you do?
A. Create custom metrics from the Cloud Logging logs, and use Prometheus to import
the results using the Cloud Monitoring REST API.
B. Export the Cloud Logging logs and the Prometheus metrics to Cloud Bigtable. Run a
query to join the results, and analyze in Google Data Studio.
C. Export the Cloud Logging logs and stream the Prometheus metrics to BigQuery. Run a
recurring query to join the results, and send notifications using Cloud Tasks.
D. Export the Prometheus metrics and use Cloud Monitoring to view them as
external metrics. Configure Cloud Monitoring to create log-based metrics from the
logs, and correlate them with the Prometheus data.
Answer: D
Question #: 121
Q: You have been tasked with planning the migration of your company's application from
on-premises to Google Cloud. Your company's monolithic application is an ecommerce
website. The application will be migrated to microservices deployed on Google Cloud in
stages. The majority of your company's revenue is generated through online sales, so it is
important to minimize risk during the migration. You need to prioritize features and select
the first functionality to migrate. What should you do?
A. Migrate the Product catalog, which has integrations to the frontend and
product database.
B. Migrate Payment processing, which has integrations to the frontend, order database,
and third-party payment vendor.
C. Migrate Order fulfillment, which has integrations to the order database, inventory
system, and third-party shipping vendor.
D. Migrate the Shopping cart, which has integrations to the frontend, cart database,
inventory system, and payment processing system.
Answer: A
Question #: 122
Q: Your team develops services that run on Google Kubernetes Engine. Your team's code is
stored in Cloud Source Repositories. You need to quickly identify bugs in the code before it
is deployed to production. You want to invest in automation to improve developer feedback
and make the process as efficient as possible.
What should you do?
A. Use Spinnaker to automate building container images from code based on Git tags.
B. Use Cloud Build to automate building container images from code based on Git
tags.
C. Use Spinnaker to automate deploying container images to the production
environment.
D. Use Cloud Build to automate building container images from code based on forked
versions.
Answer: B
Question #: 123
Q: Your team is developing an application in Google Cloud that executes with user identities
maintained by Cloud Identity. Each of your application's users will have an associated
Pub/Sub topic to which messages are published, and a Pub/Sub subscription where the
same user will retrieve published messages. You need to ensure that only authorized users
can publish and subscribe to their own specific Pub/Sub topic and subscription. What
should you do?
A. Bind the user identity to the pubsub.publisher and pubsub.subscriber roles at
the resource level.
B. Grant the user identity the pubsub.publisher and pubsub.subscriber roles at the
project level.
C. Grant the user identity a custom role that contains the pubsub.topics.create and
pubsub.subscriptions.create permissions.
D. Configure the application to run as a service account that has the pubsub.publisher
and pubsub.subscriber roles.
Answer: A
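A sketch of option A with hypothetical resource and user names; the bindings are applied
per topic and per subscription rather than at the project level:
gcloud pubsub topics add-iam-policy-binding alice-topic \
  --member="user:[email protected]" --role="roles/pubsub.publisher"   # placeholders
gcloud pubsub subscriptions add-iam-policy-binding alice-sub \
  --member="user:[email protected]" --role="roles/pubsub.subscriber"
Because the roles are granted on the individual resources, each user can reach only their
own topic and subscription.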
Question #: 124
Q: You are evaluating developer tools to help drive Google Kubernetes Engine adoption and
integration with your development environment, which includes VS Code and IntelliJ. What
should you do?
Answer: A
Question #: 125
Q: You are developing an ecommerce web application that uses App Engine standard
environment and Memorystore for Redis. When a user logs into the app, the application
caches the user's information (e.g., session, name, address, preferences), which is stored for
quick retrieval during checkout.
While testing your application in a browser, you get a 502 Bad Gateway error. You have
determined that the application is not connecting to Memorystore. What is the reason for
this error?
A. Your Memorystore for Redis instance was deployed without a public IP address.
B. You configured your Serverless VPC Access connector in a different region than
your App Engine instance.
C. The firewall rule allowing a connection between App Engine and Memorystore was
removed during an infrastructure update by the DevOps team.
D. You configured your application to use a Serverless VPC Access connector on a
different subnet in a different availability zone than your App Engine instance.
Answer: B
Question #: 126
Q: Your team develops services that run on Google Cloud. You need to build a data
processing service and will use Cloud Functions. The data to be processed by the function is
sensitive. You need to ensure that invocations can only happen from authorized services
and follow Google-recommended best practices for securing functions. What should you do?
A. Enable Identity-Aware Proxy in your project. Secure function access using its
permissions.
B. Create a service account with the Cloud Functions Viewer role. Use that service
account to invoke the function.
C. Create a service account with the Cloud Functions Invoker role. Use that service
account to invoke the function.
D. Create an OAuth 2.0 client ID for your calling service in the same project as the
function you want to secure. Use those credentials to invoke the function.
Answer: C
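A sketch of option C with hypothetical names: grant the Invoker role on the specific
function to the caller's service account.
gcloud functions add-iam-policy-binding process-data \
  --region=us-central1 \
  --member="serviceAccount:caller-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/cloudfunctions.invoker"   # function and account names are placeholders
The calling service then authenticates with an identity token for its service account, and
only identities holding the Invoker role can trigger the function.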
Question #: 127
Q: You are deploying your applications on Compute Engine. One of your Compute Engine
instances failed to launch. What should you do? (Choose two.)
Answer: A D
Question #: 128
Q: Your web application is deployed to the corporate intranet. You need to migrate the web
application to Google Cloud. The web application must be available only to company
employees and accessible to employees as they travel. You need to ensure the security and
accessibility of the web application while minimizing application changes. What should you
do?
Answer:
Question #: 129
Q: You have an application that uses an HTTP Cloud Function to process user activity from
both desktop browser and mobile application clients. This function will serve as the
endpoint for all metric submissions using HTTP POST.
Due to legacy restrictions, the function must be mapped to a domain that is separate from
the domain requested by users on web or mobile sessions. The domain for the Cloud
Function is https://fn.example.com. Desktop and mobile clients use the domain
https://www.example.com. You need to add a header to the function's
HTTP response so that only those browser and mobile sessions can submit metrics to the
Cloud Function. Which response header should you add?
A. Access-Control-Allow-Origin: *
B. Access-Control-Allow-Origin: https://*.example.com
C. Access-Control-Allow-Origin: https://fn.example.com
D. Access-Control-Allow-Origin: https://www.example.com
Answer: D
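A quick way to verify the header from option D, assuming a hypothetical /metrics path on
the function:
curl -s -D - -o /dev/null -X POST "https://fn.example.com/metrics" \
  -H "Origin: https://www.example.com" -d '{"event":"pageview"}'
# expected in the response headers:
# Access-Control-Allow-Origin: https://www.example.com
Echoing exactly https://www.example.com rather than * limits cross-origin submissions to
the intended client sessions.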
Question #: 130
Q: You have an HTTP Cloud Function that is called via POST. Each submission's request
body has a flat, unnested JSON structure containing numeric and text data. After the Cloud
Function completes, the collected data should be immediately available for ongoing and
complex analytics by many users in parallel. How should you persist the submissions?
Answer: B
Question #: 131
Q: Your security team is auditing all deployed applications running in Google Kubernetes
Engine. After completing the audit, your team discovers that some of the applications send
traffic within the cluster in clear text. You need to ensure that all application traffic is
encrypted as quickly as possible while minimizing changes to your applications and
maintaining support from Google. What should you do?
A. Use Network Policies to block traffic between applications.
B. Install Istio, enable proxy injection on your application namespace, and then
enable mTLS.
C. Define Trusted Network ranges within the application, and configure the applications
to allow traffic only from those networks.
D. Use an automated process to request SSL Certificates for your applications from Let's
Encrypt and add them to your applications.
Answer: B
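A sketch of option B, assuming the applications run in a hypothetical namespace named
apps:
istioctl install --set profile=default
kubectl label namespace apps istio-injection=enabled   # sidecars injected on next pod restart
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: apps
spec:
  mtls:
    mode: STRICT
EOF
Once the proxies are injected, STRICT mTLS encrypts all pod-to-pod traffic without any
application code changes.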
Question #: 132
Q: You migrated some of your applications to Google Cloud. You are using a legacy
monitoring platform deployed on-premises for both on-premises and cloud- deployed
applications. You discover that your notification system is responding slowly to time-critical
problems in the cloud applications. What should you do?
Answer:
Question #: 133
Q: You recently deployed your application in Google Kubernetes Engine, and now need to
release a new version of your application. You need the ability to instantly roll back to the
previous version in case there are issues with the new version. Which deployment model
should you use?
A. Perform a rolling deployment, and test your new application after the deployment is
complete.
B. Perform A/B testing, and test your application periodically after the new tests are
implemented.
C. Perform a blue/green deployment, and test your new application after the
deployment is complete.
D. Perform a canary deployment, and test your new application periodically after the
new version is deployed.
Answer: C
Question #: 134
Q: You developed a JavaScript web application that needs to access Google Drive's API and
obtain permission from users to store files in their Google Drives. You need to select an
authorization approach for your application. What should you do?
Answer: D
Question #: 135
Q: You manage an ecommerce application that processes purchases from customers who
can subsequently cancel or change those purchases. You discover that order volumes are
highly variable and the backend order-processing system can only process one request at a
time. You want to ensure seamless performance for customers regardless of usage volume.
It is crucial that customers' order update requests are performed in the sequence in which
they were generated. What should you do?
A. Send the purchase and change requests over WebSockets to the backend.
B. Send the purchase and change requests as REST requests to the backend.
C. Use a Pub/Sub subscriber in pull mode and use a data store to manage ordering.
D. Use a Pub/Sub subscriber in push mode and use a data store to manage ordering.
Answer:
Question #: 136
Q: Your company needs a database solution that stores customer purchase history and
meets the following requirements:
* Customers can query their purchase immediately after submission.
* Purchases can be sorted on a variety of fields.
* Distinct record formats can be stored at the same time.
Which storage option satisfies these requirements?
Answer: A
Question #: 137
Q: You recently developed a new service on Cloud Run. The new service authenticates using
a custom service and then writes transactional information to a Cloud
Spanner database. You need to verify that your application can support up to 5,000 read
and 1,000 write transactions per second while identifying any bottlenecks that occur. Your
test infrastructure must be able to autoscale. What should you do?
A. Build a test harness to generate requests and deploy it to Cloud Run. Analyze the VPC
Flow Logs using Cloud Logging.
B. Create a Google Kubernetes Engine cluster running the Locust or JMeter images
to dynamically generate load tests. Analyze the results using Cloud Trace.
C. Create a Cloud Task to generate a test load. Use Cloud Scheduler to run 60,000 Cloud
Task transactions per minute for 10 minutes. Analyze the results using Cloud
Monitoring.
D. Create a Compute Engine instance that uses a LAMP stack image from the
Marketplace, and use Apache Bench to generate load tests against the service. Analyze
the results using Cloud Trace.
Answer: B
Question #: 138
Q: You are using Cloud Build for your CI/CD pipeline to complete several tasks, including
copying certain files to Compute Engine virtual machines. Your pipeline requires a flat file
that is generated in one builder in the pipeline to be accessible by subsequent builders in
the same pipeline. How should you store the file so that all the builders in the pipeline can
access it?
A. Store and retrieve the file contents using Compute Engine instance metadata.
B. Output the file contents to a file in /workspace. Read from the same
/workspace file in the subsequent build step.
C. Use gsutil to output the file contents to a Cloud Storage object. Read from the same
object in the subsequent build step.
D. Add a build argument that runs an HTTP POST via curl to a separate web server to
persist the value in one builder. Use an HTTP GET via curl from the subsequent build
step to read the value.
Answer: B
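A sketch of option B: every build step mounts the same /workspace volume, so a file
written in one step is readable in the next. The filename is a placeholder.
steps:
- name: 'ubuntu'
  args: ['bash', '-c', 'echo "build-metadata" > /workspace/build-info.txt']
- name: 'ubuntu'
  args: ['bash', '-c', 'cat /workspace/build-info.txt']   # sees the file from the prior step
No external storage or network hop is involved, which keeps the pipeline simple and fast.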
Question #: 139
Q: Your company’s development teams want to use various open source operating systems
in their Docker builds. When images are created and published in your company's
environment, you need to scan them for Common Vulnerabilities and Exposures (CVEs).
The scanning process must not impact software development agility. You want to use
managed services where possible. What should you do?
Answer: A
Question #: 140
Q: You are configuring a continuous integration pipeline using Cloud Build to automate the
deployment of new container images to Google Kubernetes Engine (GKE). The pipeline
builds the application from its source code, runs unit and integration tests in separate steps,
and pushes the container to Container Registry. The application runs on a Python web
server.
FROM python:3.7-alpine
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
CMD [ "gunicorn", "-w 4", "main:app" ]
You notice that Cloud Build runs are taking longer than expected to complete. You want to
decrease the build time. What should you do? (Choose two.)
A. Select a virtual machine (VM) size with higher CPU for Cloud Build runs.
B. Deploy a Container Registry on a Compute Engine VM in a VPC, and use it to store the
final images.
C. Cache the Docker image for subsequent builds using the --cache-from
argument in your build config file.
D. Change the base image in the Dockerfile to ubuntu:latest, and install Python 3.7 using
a package manager utility.
E. Store application source code on Cloud Storage, and configure the pipeline to use
gsutil to download the source code.
Answer: A C
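A sketch combining options A and C, with a hypothetical Container Registry path: request a
larger build machine and seed the Docker layer cache from the last published image.
options:
  machineType: 'E2_HIGHCPU_8'
steps:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: ['-c', 'docker pull gcr.io/$PROJECT_ID/app:latest || exit 0']   # tolerate a missing image on the first run
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/app:latest',
         '--cache-from', 'gcr.io/$PROJECT_ID/app:latest', '.']
images: ['gcr.io/$PROJECT_ID/app:latest']
Unchanged layers (for example, the pip install step) are then reused on subsequent builds
instead of being rebuilt.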
Question #: 141
Q: You are building a CI/CD pipeline that consists of a version control system, Cloud Build,
and Container Registry. Each time a new tag is pushed to the repository, a Cloud Build job is
triggered, which runs unit tests on the new code, builds a new Docker container image, and
pushes it into Container Registry. The last step of your pipeline should deploy the new
container to your production Google Kubernetes Engine (GKE) cluster. You need to select a
tool and deployment strategy that meets the following requirements:
* Zero downtime is incurred
* Testing is fully automated
* Allows for testing before being rolled out to users
* Can quickly roll back if needed
A. Trigger a Spinnaker pipeline configured as an A/B test of your new code and, if it is
successful, deploy the container to production.
B. Trigger a Spinnaker pipeline configured as a canary test of your new code and, if it is
successful, deploy the container to production.
C. Trigger another Cloud Build job that uses the Kubernetes CLI tools to deploy your
new container to your GKE cluster, where you can perform a canary test.
D. Trigger another Cloud Build job that uses the Kubernetes CLI tools to deploy
your new container to your GKE cluster, where you can perform a shadow test.
Answer: D
Question #: 142
Q: Your operations team has asked you to create a script that lists the Cloud Bigtable,
Memorystore, and Cloud SQL databases running within a project. The script should allow
users to submit a filter expression to limit the results presented. How should you retrieve
the data?
A. Use the HBase API, Redis API, and MySQL connection to retrieve database lists.
Combine the results, and then apply the filter to display the results
B. Use the HBase API, Redis API, and MySQL connection to retrieve database lists. Filter
the results individually, and then combine them to display the results
C. Run gcloud bigtable instances list, gcloud redis instances list, and gcloud sql
databases list. Use a filter within the application, and then display the results
D. Run gcloud bigtable instances list, gcloud redis instances list, and gcloud sql
databases list. Use --filter flag with each command, and then display the results
Answer: D
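The commands in option D support server-side filtering through gcloud's global --filter flag; note that some of them also need scoping flags. Filter expressions and resource names below are illustrative:

gcloud bigtable instances list --filter="displayName:prod*"
gcloud redis instances list --region=us-central1 --filter="name:prod*"
gcloud sql databases list --instance=prod-instance --filter="name:app*"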
Question #: 143
Q: You need to deploy a new European version of a website hosted on Google Kubernetes
Engine. The current and new websites must be accessed via the same HTTP(S) load
balancer's external IP address, but have different domain names. What should you do?
A. Define a new Ingress resource with a host rule matching the new domain
B. Modify the existing Ingress resource with a host rule matching the new domain
C. Create a new Service of type LoadBalancer specifying the existing IP address as the
loadBalancerIP
D. Generate a new Ingress resource and specify the existing IP address as the
kubernetes.io/ingress.global-static-ip-name annotation value
Answer: B
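Option B works because a GKE Ingress maps to a single external HTTP(S) load balancer IP, and host rules fan traffic out by domain. A sketch with assumed domain and Service names:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: website
spec:
  rules:
  - host: example.com          # existing site
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: website-us
            port:
              number: 80
  - host: eu.example.com       # new European site, same load balancer IP
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: website-eu
            port:
              number: 80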
Question #: 144
Q: You are developing a single-player mobile game backend that has unpredictable traffic
patterns as users interact with the game throughout the day and night. You want to
optimize costs by ensuring that you have enough resources to handle requests, but
minimize over-provisioning. You also want the system to handle traffic spikes efficiently.
Which compute platform should you use?
A. Cloud Run
B. Compute Engine with managed instance groups
C. Compute Engine with unmanaged instance groups
D. Google Kubernetes Engine using cluster autoscaling
Answer: A
Question #: 145
Q: The development teams in your company want to manage resources from their local
environments. You have been asked to enable developer access to each team’s Google Cloud
projects. You want to maximize efficiency while following Google-recommended best
practices. What should you do?
A. Add the users to their projects, assign the relevant roles to the users, and then
provide the users with each relevant Project ID.
B. Add the users to their projects, assign the relevant roles to the users, and then
provide the users with each relevant Project Number.
C. Create groups, add the users to their groups, assign the relevant roles to the
groups, and then provide the users with each relevant Project ID.
D. Create groups, add the users to their groups, assign the relevant roles to the groups,
and then provide the users with each relevant Project Number.
Answer: C
Question #: 146
Q: Your company’s product team has a new requirement based on customer demand to
autoscale your stateless and distributed service running in a Google Kubernetes Engine
(GKE) cluster. You want to find a solution that minimizes changes because this feature will
go live in two weeks. What should you do?
A. Deploy a Vertical Pod Autoscaler, and scale based on the CPU load.
B. Deploy a Vertical Pod Autoscaler, and scale based on a custom metric.
C. Deploy a Horizontal Pod Autoscaler, and scale based on the CPU load.
D. Deploy a Horizontal Pod Autoscaler, and scale based on a custom metric.
Answer: C
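Option C needs only one new object, since CPU-based scaling works out of the box for a stateless Deployment. A sketch; the Deployment name, replica bounds, and threshold are illustrative:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-service
  minReplicas: 3
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70

The one-liner kubectl autoscale deployment my-service --cpu-percent=70 --min=3 --max=20 produces an equivalent object.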
Question #: 147
Q: Your application is composed of a set of loosely coupled services orchestrated by code
executed on Compute Engine. You want your application to easily bring up new Compute
Engine instances that find and use a specific version of a service. How should this be
configured?
Answer: A
Question #: 148
Q: You are developing a microservice-based application that will run on Google Kubernetes
Engine (GKE). Some of the services need to access different Google Cloud APIs. How should
you set up authentication of these services in the cluster following Google-recommended
best practices? (Choose two.)
Answer: B E
Question #: 149
Q: Your development team has been tasked with maintaining a .NET legacy application. The
application incurs occasional changes and was recently updated. Your goal is to ensure that
the application provides consistent results while moving through the CI/CD pipeline from
environment to environment. You want to minimize the cost of deployment while making
sure that external factors and dependencies between hosting environments are not
problematic. Containers are not yet approved in your organization. What should you do?
A. Rewrite the application using .NET Core, and deploy to Cloud Run. Use revisions to
separate the environments.
B. Use Cloud Build to deploy the application as a new Compute Engine image for
each build. Use this image in each environment.
C. Deploy the application using MS Web Deploy, and make sure to always use the latest,
patched MS Windows Server base image in Compute Engine.
D. Use Cloud Build to package the application, and deploy to a Google Kubernetes
Engine cluster. Use namespaces to separate the environments.
Answer: B
Question #: 150
Q: The new version of your containerized application has been tested and is ready to deploy
to production on Google Kubernetes Engine. You were not able to fully load-test the new
version in pre-production environments, and you need to make sure that it does not have
performance problems once deployed. Your deployment must be automated. What should
you do?
A. Use Cloud Load Balancing to slowly ramp up traffic between versions. Use Cloud
Monitoring to look for performance issues.
B. Deploy the application via a continuous delivery pipeline using canary
deployments. Use Cloud Monitoring to look for performance issues, and ramp up
traffic as the metrics support it.
C. Deploy the application via a continuous delivery pipeline using blue/green
deployments. Use Cloud Monitoring to look for performance issues, and launch fully
when the metrics support it.
D. Deploy the application using kubectl and set the spec.updateStrategy.type to
RollingUpdate. Use Cloud Monitoring to look for performance issues, and run the
kubectl rollback command if there are any issues.
Answer: B
Question #: 151
Q: Users are complaining that your Cloud Run-hosted website responds too slowly during
traffic spikes. You want to provide a better user experience during traffic peaks. What
should you do?
A. Read application configuration and static data from the database on application
startup.
B. Package application configuration and static data into the application image
during build time.
C. Perform as much work as possible in the background after the response has been
returned to the user.
D. Ensure that timeout exceptions and errors cause the Cloud Run instance to exit
quickly so a replacement instance can be started.
Answer: B
Question #: 152
Q: You are a developer working on an internal application for payroll processing. You are
building a component of the application that allows an employee to submit a timesheet,
which then initiates several steps:
• An email is sent to the employee and manager, notifying them that the timesheet was
submitted.
• A timesheet is sent to the payroll processing vendor's API.
• A timesheet is sent to the data warehouse for headcount planning.
These steps are not dependent on each other and can be completed in any order. New steps
are being considered and will be implemented by different development teams. Each
development team will implement the error handling specific to their step. What should you
do?
A. Deploy a Cloud Function for each step that calls the corresponding downstream
system to complete the required action.
B. Create a Pub/Sub topic for each step. Create a subscription for each downstream
development team to subscribe to their step's topic.
C. Create a Pub/Sub topic for timesheet submissions. Create a subscription for
each downstream development team to subscribe to the topic.
D. Create a timesheet microservice deployed to Google Kubernetes Engine. The
microservice calls each downstream step and waits for a successful response before
calling the next step.
Answer: C
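Option C decouples the steps: one topic, one subscription per team, so each consumer processes and retries independently, and a new step just adds a subscription. Illustrative setup (topic and subscription names are assumptions):

gcloud pubsub topics create timesheet-submissions
gcloud pubsub subscriptions create email-notifications --topic=timesheet-submissions
gcloud pubsub subscriptions create payroll-export --topic=timesheet-submissions
gcloud pubsub subscriptions create warehouse-load --topic=timesheet-submissions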
Question #: 153
Q: You are designing an application that uses a microservices architecture. You are planning
to deploy the application in the cloud and on-premises. You want to make sure the
application can scale up on demand and also use managed services as much as possible.
What should you do?
Answer: B
Question #: 154
Q: You want to migrate an on-premises container running in Knative to Google Cloud. You
need to make sure that the migration doesn't affect your application's deployment strategy,
and you want to use a fully managed service. Which Google Cloud service should you use to
deploy your container?
A. Cloud Run
B. Compute Engine
C. Google Kubernetes Engine
D. App Engine flexible environment
Answer: A
Question #: 155
Q: This architectural diagram depicts a system that streams data from thousands of devices.
You want to ingest data into a pipeline, store the data, and analyze the data using SQL
statements. Which Google Cloud services should you use for steps 1, 2, 3, and 4?
A. 1. App Engine
2. Pub/Sub
3. BigQuery
4. Firestore
B. 1. Dataflow
2. Pub/Sub
3. Firestore
4. BigQuery
C. 1. Pub/Sub
2. Dataflow
3. BigQuery
4. Firestore
D. 1. Pub/Sub
2. Dataflow
3. Firestore
4. BigQuery
Answer: D
Question #: 156
Q: Your company just experienced a Google Kubernetes Engine (GKE) API outage due to a
zone failure. You want to deploy a highly available GKE architecture that minimizes service
interruption to users in the event of a future zone failure. What should you do?
Answer: B
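The answer choices are not reproduced in this dump, but the standard defense against a zonal failure is a regional cluster, whose control plane and nodes are replicated across three zones. An illustrative command (cluster name and region are assumptions):

gcloud container clusters create ha-cluster \
    --region=us-central1 \
    --num-nodes=1   # one node per zone; a regional cluster spans three zones by default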
Question #: 157
Q: Your team develops services that run on Google Cloud. You want to process messages
sent to a Pub/Sub topic, and then store them. Each message must be processed exactly once
to avoid duplication of data and any data conflicts. You need to use the cheapest and most
simple solution. What should you do?
A. Process the messages with a Dataproc job, and write the output to storage.
B. Process the messages with a Dataflow streaming pipeline using Apache Beam's
PubSubIO package, and write the output to storage.
C. Process the messages with a Cloud Function, and write the results to a BigQuery
location where you can run a job to deduplicate the data.
D. Retrieve the messages with a Dataflow streaming pipeline, store them in Cloud
Bigtable, and use another Dataflow streaming pipeline to deduplicate messages.
Answer: B
Question #: 158
Q: You are running a containerized application on Google Kubernetes Engine. Your
container images are stored in Container Registry. Your team uses CI/CD practices. You
need to prevent the deployment of containers with known critical vulnerabilities. What
should you do?
Answer: D
Question #: 159
Q: You have an on-premises application that authenticates to the Cloud Storage API using a
user-managed service account with a user-managed key. The application connects to Cloud
Storage using Private Google Access over a Dedicated Interconnect link. You discover that
requests from the application to access objects in the Cloud Storage bucket are failing with a
403 Permission Denied error code. What is the likely cause of this issue?
A. The folder structure inside the bucket and object paths have changed.
B. The permissions of the service account’s predefined role have changed.
C. The service account key has been rotated but not updated on the application
server.
D. The Interconnect link from the on-premises data center to Google Cloud is
experiencing a temporary outage.
Answer: C
Question #: 160
Q: You are using the Cloud Client Library to upload an image in your application to Cloud
Storage. Users of the application report that occasionally the upload does not complete and
the client library reports an HTTP 504 Gateway Timeout error. You want to make the
application more resilient to errors. What changes to the application should you make?
Answer: A
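The option text is not reproduced in this dump, but the standard remedy for transient errors such as a 504 is to retry the upload with truncated exponential backoff. A minimal sketch with the Python client library; the bucket and object names are illustrative:

from google.cloud import storage
from google.cloud.storage.retry import DEFAULT_RETRY

client = storage.Client()
bucket = client.bucket("user-images")

# Cap total retry time at 5 minutes. if_generation_match=0 makes the upload
# conditional (the object must not exist yet), which keeps retries idempotent.
blob = bucket.blob("uploads/photo.jpg")
blob.upload_from_filename(
    "photo.jpg",
    retry=DEFAULT_RETRY.with_deadline(300.0),
    if_generation_match=0,
)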
Question #: 161
Q: You are building a mobile application that will store hierarchical data structures in a
database. The application will enable users working offline to sync changes when they are
back online. A backend service will enrich the data in the database using a service account.
The application is expected to be very popular and needs to scale seamlessly and securely.
Which database and IAM role should you use?
A. Use Cloud SQL, and assign the roles/cloudsql.editor role to the service account.
B. Use Bigtable, and assign the roles/bigtable.viewer role to the service account.
C. Use Firestore in Native mode and assign the roles/datastore.user role to the
service account.
D. Use Firestore in Datastore mode and assign the roles/datastore.viewer role to the
service account.
Answer: C
Question #: 162
Q: Your application is deployed on hundreds of Compute Engine instances in a managed
instance group (MIG) in multiple zones. You need to deploy a new instance template to fix a
critical vulnerability immediately, but you must avoid impacting your service. Which
setting should you apply to the MIG after updating the instance template?
Answer: D
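The answer choices are not reproduced in this dump. The usual zero-impact rollout is a proactive rolling update that keeps every instance serving while replacements are created; a sketch, with MIG, template, and region names as assumptions:

gcloud compute instance-groups managed rolling-action start-update my-mig \
    --version=template=patched-template \
    --max-unavailable=0 \
    --max-surge=3 \
    --region=us-central1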
Question #: 163
Q: You made a typo in a low-level Linux configuration file that prevents your Compute
Engine instance from booting to a normal run level. You just created the Compute Engine
instance today and have done no other maintenance on it, other than tweaking files. How
should you correct this error?
A. Download the file using scp, change the file, and then upload the modified version
B. Configure and log in to the Compute Engine instance through SSH, and change the file
C. Configure and log in to the Compute Engine instance through the serial port,
and change the file
D. Configure and log in to the Compute Engine instance using a remote desktop client,
and change the file
Answer: C
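Option C works even when the guest OS cannot finish booting, because the serial console does not depend on networking or SSH inside the guest. Illustrative commands (instance name and zone are assumptions):

gcloud compute instances add-metadata my-instance \
    --zone=us-central1-a --metadata=serial-port-enable=TRUE
gcloud compute connect-to-serial-port my-instance --zone=us-central1-a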
Question #: 164
Q: You are developing an application that needs to store files belonging to users in Cloud
Storage. You want each user to have their own subdirectory in Cloud Storage. When a new
user is created, the corresponding empty subdirectory should also be created. What should
you do?
A. Create an object with the name of the subdirectory ending with a trailing slash
('/') that is zero bytes in length.
B. Create an object with the name of the subdirectory, and then immediately delete the
object within that subdirectory.
C. Create an object with the name of the subdirectory that is zero bytes in length and has
WRITER access control list permission.
D. Create an object with the name of the subdirectory that is zero bytes in length. Set the
Content-Type metadata to CLOUDSTORAGE_FOLDER.
Answer: A
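Cloud Storage has a flat namespace, so a "folder" is just a naming convention; option A creates the zero-byte placeholder object the Cloud Console renders as an empty folder. A Python sketch with an illustrative bucket name:

from google.cloud import storage

client = storage.Client()
bucket = client.bucket("user-files")

def create_user_folder(user_id: str) -> None:
    # A zero-byte object whose name ends in '/' appears as an empty folder.
    bucket.blob(f"users/{user_id}/").upload_from_string("")

create_user_folder("alice")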
Question #: 165
Q: Your company’s corporate policy states that there must be a copyright comment at the
very beginning of all source files. You want to write a custom step in Cloud Build that is
triggered by each source commit. You need the step to validate that the source contains a
copyright comment and, if it is missing, to add one before subsequent steps run. What
should you do?
A. Build a new Docker container that examines the files in /workspace and then
checks and adds a copyright for each source file. Changed files are explicitly
committed back to the source repository.
B. Build a new Docker container that examines the files in /workspace and then checks
and adds a copyright for each source file. Changed files do not need to be committed
back to the source repository.
C. Build a new Docker container that examines the files in a Cloud Storage bucket and
then checks and adds a copyright for each source file. Changed files are written back to
the Cloud Storage bucket.
D. Build a new Docker container that examines the files in a Cloud Storage bucket and
then checks and adds a copyright for each source file. Changed files are explicitly
committed back to the source repository.
Answer: A
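For option A, the custom step is simply a container image that Cloud Build runs; every step shares the /workspace volume, and committing the fixes back keeps the repository itself compliant. A sketch in which the copyright-checker builder image and its arguments are hypothetical:

steps:
# Hypothetical custom builder: scans /workspace, prepends the copyright
# header where missing, and commits the changed files back to the repository.
- name: 'gcr.io/$PROJECT_ID/copyright-checker'
  args: ['--fix', '--commit']
# Later steps see the corrected sources in the shared /workspace volume.
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/app', '.']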
Question #: 166
Q: One of your deployed applications in Google Kubernetes Engine (GKE) is having
intermittent performance issues. Your team uses a third-party logging solution. You want to
install this solution on each node in your GKE cluster so you can view the logs. What should
you do?
Answer: A
Question #: 167
Q: Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time
as you would like to complete each case. However, there may be additional case studies and
sections on this exam. You must manage your time to ensure that you are able to complete
all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information
that is provided in the case study. Case studies might contain exhibits and other resources
that provide more information about the scenario that is described in the case study. Each
question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review
your answers and to make changes before you move to the next section of the exam. After
you begin a new section, you cannot return to this section.
Company Overview -
HipLocal is a community application designed to facilitate communication between people
in close proximity. It is used for event planning and organizing sporting events, and for
businesses to connect with their local communities. HipLocal launched recently in a few
neighborhoods in Dallas and is rapidly growing into a global phenomenon. Its unique style
of hyper-local community communication and business outreach is in demand around the
world.
Executive Statement -
We are the number one local community app; it's time to take our local community services
global. Our venture capital investors want to see rapid growth and the same great
experience for new local and virtual communities that come online, whether their members
are 10 or 10000 miles away from each other.
Solution Concept -
HipLocal wants to expand their existing service, with updated functionality, in new regions
to better serve their global customers. They want to hire and train a new team to support
these regions in their time zones. They will need to ensure that the application scales
smoothly and provides clear uptime data, and that they analyze and respond to any issues
that occur.
Business Requirements -
HipLocal's investors want to expand their footprint and support the increase in demand
they are seeing. Their requirements are:
• Expand availability of the application to new regions.
• Support 10x as many concurrent users.
• Ensure a consistent experience for users when they travel to different regions.
• Obtain user activity metrics to better understand how to monetize their product.
• Ensure compliance with regulations in the new regions (for example, GDPR).
• Reduce infrastructure management time and cost.
• Adopt the Google-recommended practices for cloud computing.
○ Develop standardized workflows and processes around application lifecycle
management.
○ Define service level indicators (SLIs) and service level objectives (SLOs).
Technical Requirements -
• Provide secure communications between the on-premises data center and cloud-hosted
applications and infrastructure.
• The application must provide usage metrics and monitoring.
• APIs require authentication and authorization.
• Implement faster and more accurate validation of new features.
• Logging and performance metrics must provide actionable information to be able to
provide debugging information and alerts.
• Must scale to meet user demand.
For this question, refer to the HipLocal case study.
How should HipLocal redesign their architecture to ensure that the application scales to
support a large increase in users?
A. Use Google Kubernetes Engine (GKE) to run the application as a microservice. Run
the MySQL database on a dedicated GKE node.
B. Use multiple Compute Engine instances to run MySQL to store state information. Use
a Google Cloud-managed load balancer to distribute the load between instances. Use
managed instance groups for scaling.
C. Use Memorystore to store session information and Cloud SQL to store state
information. Use a Google Cloud-managed load balancer to distribute the load
between instances. Use managed instance groups for scaling.
D. Use a Cloud Storage bucket to serve the application as a static website, and use
another Cloud Storage bucket to store user state information.
Answer: C
Question #: 168
Q: Case study -
(Refer to the HipLocal case study introduced in Question #167.)
How should HipLocal increase their API development speed while continuing to provide the
QA team with a stable testing environment that meets feature requirements?
A. Include unit tests in their code, and prevent deployments to QA until all tests
have a passing status.
B. Include performance tests in their code, and prevent deployments to QA until all tests
have a passing status.
C. Create health checks for the QA environment, and redeploy the APIs at a later time if
the environment is unhealthy.
D. Redeploy the APIs to App Engine using Traffic Splitting. Do not move QA traffic to the
new versions if errors are found.
Answer: A
Question #: 169
Q: Case study -
(Refer to the HipLocal case study introduced in Question #167.)
HipLocal's application uses Cloud Client Libraries to interact with Google Cloud. HipLocal
needs to configure authentication and authorization in the Cloud Client Libraries to
implement least privileged access for the application. What should they do?
A. Create an API key. Use the API key to interact with Google Cloud.
B. Use the default compute service account to interact with Google Cloud.
C. Create a service account for the application. Export and deploy the private key
for the application. Use the service account to interact with Google Cloud.
D. Create a service account for the application and for each Google Cloud API used by the
application. Export and deploy the private keys used by the application. Use the service
account with one Google Cloud API to interact with Google Cloud.
Answer: C
Question #: 170
Q: You are in the final stage of migrating an on-premises data center to Google Cloud. You
are quickly approaching your deadline, and discover that a web API is running on a server
slated for decommissioning. You need to recommend a solution to modernize this API while
migrating to Google Cloud. The modernized web API must meet the following requirements:
You want to minimize cost, effort, and operational overhead of this migration. What should
you do?
Answer: B
Question #: 171
Q: You are developing an application that consists of several microservices running in a
Google Kubernetes Engine cluster. One microservice needs to connect to a third-party
database running on-premises. You need to store credentials to the database and ensure
that these credentials can be rotated while following security best practices. What should
you do?
A. Store the credentials in a sidecar container proxy, and use it to connect to the third-
party database.
B. Configure a service mesh to allow or restrict traffic from the Pods in your
microservice to the database.
C. Store the credentials in an encrypted volume mount, and associate a Persistent
Volume Claim with the client Pod.
D. Store the credentials as a Kubernetes Secret, and use the Cloud Key
Management Service plugin to handle encryption and decryption.
Answer: D
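Option D combines an ordinary Kubernetes Secret with GKE application-layer secrets encryption, which envelope-encrypts Secrets in etcd with a Cloud KMS key; rotation then happens in KMS rather than in the Pods. Illustrative commands, with cluster, key, and credential names as assumptions:

# Encrypt Kubernetes Secrets at the application layer with a Cloud KMS key.
gcloud container clusters update my-cluster --region=us-central1 \
    --database-encryption-key=projects/my-project/locations/us-central1/keyRings/gke-ring/cryptoKeys/secrets-key
# Store the third-party database credentials as a regular Secret.
kubectl create secret generic db-credentials \
    --from-literal=username=dbuser --from-literal=password=changeme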
Question #: 172
Q: You manage your company's ecommerce platform's payment system, which runs on
Google Cloud. Your company must retain user logs for 1 year for internal auditing purposes
and for 3 years to meet compliance requirements. You need to store new user logs on
Google Cloud to minimize on-premises storage usage and ensure that they are easily
searchable. You want to minimize effort while ensuring that the logs are stored correctly.
What should you do?
A. Store the logs in a Cloud Storage bucket with bucket lock turned on.
B. Store the logs in a Cloud Storage bucket with a 3-year retention period.
C. Store the logs in Cloud Logging as custom logs with a custom retention period.
D. Store the logs in a Cloud Storage bucket with a 1-year retention period. After 1 year,
move the logs to another bucket with a 2-year retention period.
Answer: C
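Option C works because Cloud Logging buckets accept a configurable retention period, so a single setting covers the 3-year compliance window while the logs stay searchable in place. Illustrative command:

gcloud logging buckets update _Default --location=global --retention-days=1095   # 3 years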
Question #: 173
Q: Your company has a new security initiative that requires all data stored in Google Cloud
to be encrypted by customer-managed encryption keys. You plan to use Cloud Key
Management Service (KMS) to configure access to the keys. You need to follow the
"separation of duties" principle and Google-recommended best practices. What should you
do? (Choose two.)
Answer: A B
Question #: 174
Q: You need to migrate a standalone Java application running in an on-premises Linux
virtual machine (VM) to Google Cloud in a cost-effective manner. You decide not to take the
lift-and-shift approach, and instead you plan to modernize the application by converting it
to a container. How should you accomplish this task?
A. Use Migrate for Anthos to migrate the VM to your Google Kubernetes Engine (GKE)
cluster as a container.
B. Export the VM as a raw disk and import it as an image. Create a Compute Engine
instance from the imported image.
C. Use Migrate for Compute Engine to migrate the VM to a Compute Engine instance, and
use Cloud Build to convert it to a container.
D. Use Jib to build a Docker image from your source code, and upload it to Artifact
Registry. Deploy the application in a GKE cluster, and test the application.
Answer: D
Question #: 175
Q: Your organization has recently begun an initiative to replatform their legacy applications
onto Google Kubernetes Engine. You need to decompose a monolithic application into
microservices. Multiple instances have read and write access to a configuration file, which is
stored on a shared file system. You want to minimize the effort required to manage this
transition, and you want to avoid rewriting the application code. What should you do?
A. Create a new Cloud Storage bucket, and mount it via FUSE in the container.
B. Create a new persistent disk, and mount the volume as a shared PersistentVolume.
C. Create a new Filestore instance, and mount the volume as an NFS
PersistentVolume.
D. Create a new ConfigMap and volumeMount to store the contents of the configuration
file.
Answer: C
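Option C preserves the shared-file-system semantics the monolith already relies on: Filestore speaks NFS, and an NFS PersistentVolume can be mounted ReadWriteMany by every microservice. A sketch in which the server IP and share name are illustrative:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: config-share
spec:
  capacity:
    storage: 1Ti
  accessModes:
  - ReadWriteMany
  nfs:
    server: 10.0.0.2        # Filestore instance IP address
    path: /config_share     # Filestore share name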
Question #: 176
Q: Your development team has built several Cloud Functions using Java along with
corresponding integration and service tests. You are building and deploying the functions
and launching the tests using Cloud Build. Your Cloud Build job is reporting deployment
failures immediately after successfully validating the code. What should you do?
A. Check the maximum number of Cloud Function instances.
B. Verify that your Cloud Build trigger has the correct build parameters.
C. Retry the tests using the truncated exponential backoff polling strategy.
D. Verify that the Cloud Build service account is assigned the Cloud Functions
Developer role.
Answer: D
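For option D, deployments can fail even though the build itself succeeds when the Cloud Build service account lacks deploy permissions. Granting the role, with placeholder project values:

gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:123456789@cloudbuild.gserviceaccount.com" \
    --role="roles/cloudfunctions.developer"

Deploying a function can additionally require roles/iam.serviceAccountUser on the function's runtime service account.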
Question #: 177
Q: You manage a microservices application on Google Kubernetes Engine (GKE) using Istio.
You secure the communication channels between your microservices by implementing an
Istio AuthorizationPolicy, a Kubernetes NetworkPolicy, and mTLS on your GKE cluster. You
discover that HTTP requests between two Pods to specific URLs fail, while other requests to
other URLs succeed. What is the cause of the connection issue?
Answer: C
Question #: 178
Q: You recently migrated an on-premises monolithic application to a microservices
application on Google Kubernetes Engine (GKE). The application has dependencies on
backend services on-premises, including a CRM system and a MySQL database that contains
personally identifiable information (PII). The backend services must remain on-premises to
meet regulatory requirements.
You established a Cloud VPN connection between your on-premises data center and Google
Cloud. You notice that some requests from your microservices application on GKE to the
backend services are failing due to latency issues caused by fluctuating bandwidth, which is
causing the application to crash. How should you address the latency issues?
A. Use Memorystore to cache frequently accessed PII data from the on-premises MySQL
database
B. Use Istio to create a service mesh that includes the microservices on GKE and the on-
premises services
C. Increase the number of Cloud VPN tunnels for the connection between Google
Cloud and the on-premises services
D. Decrease the network layer packet size by decreasing the Maximum Transmission
Unit (MTU) value from its default value on Cloud VPN
Answer: C
Question #: 179
Q: Your company has deployed a new API to a Compute Engine instance. During testing, the
API is not behaving as expected. You want to monitor the application over 12 hours to
diagnose the problem within the application code without redeploying the application.
Which tool should you use?
A. Cloud Trace
B. Cloud Monitoring
C. Cloud Debugger logpoints
D. Cloud Debugger snapshots
Answer: C
Question #: 180
Q: You are designing an application that consists of several microservices. Each
microservice has its own RESTful API and will be deployed as a separate Kubernetes
Service. You want to ensure that the consumers of these APIs aren't impacted when there is
a change to your API, and also ensure that third-party systems aren't interrupted when new
versions of the API are released. How should you configure the connection to the
application following Google-recommended best practices?
A. Use an Ingress that uses the API's URL to route requests to the appropriate
backend.
B. Leverage a Service Discovery system, and connect to the backend specified by the
request.
C. Use multiple clusters, and use DNS entries to route requests to separate versioned
backends.
D. Combine multiple versions in the same service, and then specify the API version in
the POST request.
Answer: A
Question #: 181
Q: Your team is building an application for a financial institution. The application's frontend
runs on Compute Engine, and the data resides in Cloud SQL and one Cloud Storage bucket.
The application will collect data containing PII, which will be stored in the Cloud SQL
database and the Cloud Storage bucket. You need to secure the PII data. What should you
do?
A. 1. Create the relevant firewall rules to allow only the frontend to communicate with
the Cloud SQL database
2. Using IAM, allow only the frontend service account to access the Cloud Storage bucket
B. 1. Create the relevant firewall rules to allow only the frontend to communicate with
the Cloud SQL database
2. Enable private access to allow the frontend to access the Cloud Storage bucket
privately
C. 1. Configure a private IP address for Cloud SQL
2. Use VPC-SC to create a service perimeter
3. Add the Cloud SQL database and the Cloud Storage bucket to the same service
perimeter
D. 1. Configure a private IP address for Cloud SQL
2. Use VPC-SC to create a service perimeter
3. Add the Cloud SQL database and the Cloud Storage bucket to different service
perimeters
Answer: C
Question #: 182
Q: You are designing a chat room application that will host multiple rooms and retain the
message history for each room. You have selected Firestore as your database. How should
you represent the data in Firestore?
A. Create a collection for the rooms. For each room, create a document that lists the
contents of the messages
B. Create a collection for the rooms. For each room, create a collection that contains a
document for each message
C. Create a collection for the rooms. For each room, create a document that
contains a collection of documents, each of which contains a message.
D. Create a collection for the rooms, and create a document for each room. Create a
separate collection for messages, with one document per message. Each room’s
document contains a list of references to the messages.
Answer: C
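Option C follows the Firestore pattern for chat rooms: a rooms collection whose documents each own a messages subcollection, so a room's history can grow without hitting the per-document size limit. A Python sketch with illustrative IDs and fields:

from google.cloud import firestore

db = firestore.Client()

# Hierarchy: rooms/{room_id}/messages/{message_id}
room_ref = db.collection("rooms").document("general")
room_ref.set({"topic": "General chat"})
room_ref.collection("messages").add(
    {"author": "alice", "text": "Hello!", "sent": firestore.SERVER_TIMESTAMP}
)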
Question #: 183
Q: You are developing an application that will handle requests from end users. You need to
secure a Cloud Function called by the application to allow authorized end users to
authenticate to the function via the application while restricting access to unauthorized
users. You will integrate Google Sign-In as part of the solution and want to follow Google-
recommended best practices. What should you do?
Answer: B
Question #: 184
Q: You are running a web application on Google Kubernetes Engine that you inherited. You
want to determine whether the application is using libraries with known vulnerabilities or
is vulnerable to XSS attacks. Which service should you use?
Answer: C
Question #: 185
Q: You are building a highly available and globally accessible application that will serve
static content to users. You need to configure the storage and serving components. You
want to minimize management overhead and latency while maximizing reliability for users.
What should you do?
A. 1. Create a managed instance group. Replicate the static content across the virtual
machines (VMs)
2. Create an external HTTP(S) load balancer.
3. Enable Cloud CDN, and send traffic to the managed instance group.
B. 1. Create an unmanaged instance group. Replicate the static content across the VMs.
2. Create an external HTTP(S) load balancer
3. Enable Cloud CDN, and send traffic to the unmanaged instance group.
C. 1. Create a Standard storage class, regional Cloud Storage bucket. Put the static
content in the bucket
2. Reserve an external IP address, and create an external HTTP(S) load balancer
3. Enable Cloud CDN, and send traffic to your backend bucket
D. 1. Create a Standard storage class, multi-regional Cloud Storage bucket. Put the
static content in the bucket.
2. Reserve an external IP address, and create an external HTTP(S) load balancer.
3. Enable Cloud CDN, and send traffic to your backend bucket.
Answer: D
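Option D avoids managing any VMs: a multi-regional bucket maximizes availability, and a backend bucket with Cloud CDN serves the content from Google's edge. Illustrative commands with placeholder names:

gsutil mb -c standard -l us gs://my-static-site   # "us" is a multi-region location
gcloud compute backend-buckets create static-backend \
    --gcs-bucket-name=my-static-site --enable-cdn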
Question #: 186
Q: Case study -
(Refer to the HipLocal case study introduced in Question #167.)
HipLocal wants to reduce the latency of their services for users in global locations. They
have created read replicas of their database in locations where their users reside and
configured their service to read traffic using those replicas. How should they further reduce
latency for all database interactions with the least amount of effort?
A. Migrate the database to Bigtable and use it to serve all global user traffic.
B. Migrate the database to Cloud Spanner and use it to serve all global user traffic.
C. Migrate the database to Firestore in Datastore mode and use it to serve all global user
traffic.
D. Migrate the services to Google Kubernetes Engine and use a load balancer service to
better scale the application.
Answer: B
Question #: 187
Q: Case study -
(Refer to the HipLocal case study introduced in Question #167.)
Which Google Cloud product addresses HipLocal’s business requirements for service level
indicators and objectives?
A. Cloud Profiler
B. Cloud Monitoring
C. Cloud Trace
D. Cloud Logging
Answer: B
Question #: 188
Q: Case study -
(Refer to the HipLocal case study introduced in Question #167. The section below adds details specific to this question.)
Existing Technical Environment -
HipLocal's environment is a mix of on-premises hardware and infrastructure running in
Google Cloud Platform. The HipLocal team understands their application well, but has
limited experience in global scale applications. Their existing technical environment is as
follows:
• Existing APIs run on Compute Engine virtual machine instances hosted in GCP.
• State is stored in a single instance MySQL database in GCP.
• Release cycles include development freezes to allow for QA testing.
• The application has no logging.
• Applications are manually deployed by infrastructure engineers during periods of slow
traffic on weekday evenings.
• There are basic indicators of uptime; alerts are frequently fired when the APIs are
unresponsive.
For this question, refer to the HipLocal case study.
A recent security audit discovers that HipLocal’s database credentials for their Compute
Engine-hosted MySQL databases are stored in plain text on persistent disks. HipLocal needs
to reduce the risk of these credentials being stolen. What should they do?
A. Create a service account and download its key. Use the key to authenticate to Cloud
Key Management Service (KMS) to obtain the database credentials.
B. Create a service account and download its key. Use the key to authenticate to Cloud
Key Management Service (KMS) to obtain a key used to decrypt the database
credentials.
C. Create a service account and grant it the roles/iam.serviceAccountUser role.
Impersonate this account and authenticate using the Cloud SQL Proxy.
D. Grant the roles/secretmanager.secretAccessor role to the Compute Engine
service account. Store and access the database credentials with the Secret
Manager API.
Answer: D
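Option D moves the credentials off disk entirely; the VM's service account reads them from Secret Manager at runtime. Illustrative setup, where the secret name and value are placeholders:

gcloud secrets create db-credentials --replication-policy=automatic
printf 's3cr3t' | gcloud secrets versions add db-credentials --data-file=-
gcloud secrets add-iam-policy-binding db-credentials \
    --member="serviceAccount:SA_EMAIL" \
    --role="roles/secretmanager.secretAccessor"
# The application then reads the current version instead of a plain-text file:
gcloud secrets versions access latest --secret=db-credentials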
Question #: 189
Q: Case study -
(Refer to the HipLocal case study introduced in Question #167.)
HipLocal is expanding into new locations. They must capture additional data each time the
application is launched in a new European country. This is causing delays in the
development process due to constant schema changes and a lack of environments for
conducting testing on the application changes. How should they resolve the issue while
meeting the business requirements?
A. Create new Cloud SQL instances in Europe and North America for testing and
deployment. Provide developers with local MySQL instances to conduct testing on the
application changes.
B. Migrate data to Bigtable. Instruct the development teams to use the Cloud SDK to
emulate a local Bigtable development environment.
C. Move from Cloud SQL to MySQL hosted on Compute Engine. Replicate hosts across
regions in the Americas and Europe. Provide developers with local MySQL instances to
conduct testing on the application changes.
D. Migrate data to Firestore in Native mode and set up instances in Europe and
North America. Instruct the development teams to use the Cloud SDK to emulate a
local Firestore in Native mode development environment.
Answer: D
Question #: 190
Q: You are writing from a Go application to a Cloud Spanner database. You want to optimize
your application’s performance using Google-recommended best practices. What should
you do?
Answer: A
Question #: 191
Q: You have an application deployed in Google Kubernetes Engine (GKE). You need to
update the application to make authorized requests to Google Cloud managed services. You
want this to be a one-time setup, and you need to follow security best practices of auto-
rotating your security keys and storing them in an encrypted store. You already created a
service account with appropriate access to the Google Cloud service. What should you do
next?
A. Assign the Google Cloud service account to your GKE Pod using Workload
Identity.
B. Export the Google Cloud service account, and share it with the Pod as a Kubernetes
Secret.
C. Export the Google Cloud service account, and embed it in the source code of the
application.
D. Export the Google Cloud service account, and upload it to HashiCorp Vault to generate
a dynamic service account for your application.
Answer: A
Question #: 192
Q: You are planning to deploy hundreds of microservices in your Google Kubernetes Engine
(GKE) cluster. How should you secure communication between the microservices on GKE
using a managed service?
A. Use global HTTP(S) Load Balancing with managed SSL certificates to protect your
services
B. Deploy open source Istio in your GKE cluster, and enable mTLS in your Service Mesh
C. Install cert-manager on GKE to automatically renew the SSL certificates.
D. Install Anthos Service Mesh, and enable mTLS in your Service Mesh.
Answer: D
Question #: 193
Q: You are developing an application that will store and access sensitive unstructured data
objects in a Cloud Storage bucket. To comply with regulatory requirements, you need to
ensure that all data objects are available for at least 7 years after their initial creation.
Objects created more than 3 years ago are accessed very infrequently (less than once a
year). You need to configure object storage while ensuring that storage cost is optimized.
What should you do? (Choose two.)
Answer: A D
Question #: 194
Q: You are developing an application using different microservices that must remain
internal to the cluster. You want the ability to configure each microservice with a specific
number of replicas. You also want the ability to address a specific microservice from any
other microservice in a uniform way, regardless of the number of replicas the microservice
scales to. You plan to implement this solution on Google Kubernetes Engine. What should
you do?
Answer: A
Question #: 195
Q: You are building an application that uses a distributed microservices architecture. You
want to measure the performance and system resource utilization in one of the
microservices written in Java. What should you do?
A. Instrument the service with Cloud Profiler to measure CPU utilization and
method-level execution times in the service.
B. Instrument the service with Debugger to investigate service errors.
C. Instrument the service with Cloud Trace to measure request latency.
D. Instrument the service with OpenCensus to measure service latency, and write
custom metrics to Cloud Monitoring.
Answer: A
Question #: 196
Q: Your team is responsible for maintaining an application that aggregates news articles
from many different sources. Your monitoring dashboard contains publicly accessible real-
time reports and runs on a Compute Engine instance as a web application. External
stakeholders and analysts need to access these reports via a secure channel without
authentication. How should you configure this secure channel?
A. Add a public IP address to the instance. Use the service account key of the instance to
encrypt the traffic.
B. Use Cloud Scheduler to trigger Cloud Build every hour to create an export from the
reports. Store the reports in a public Cloud Storage bucket.
C. Add an HTTP(S) load balancer in front of the monitoring dashboard. Configure
Identity-Aware Proxy to secure the communication channel.
D. Add an HTTP(S) load balancer in front of the monitoring dashboard. Set up a
Google-managed SSL certificate on the load balancer for traffic encryption.
Answer: D
Question #: 197
Q: You are planning to add unit tests to your application. You need to be able to assert that
published Pub/Sub messages are processed by your subscriber in order. You want the unit
tests to be cost-effective and reliable. What should you do?
Answer: D
Question #: 198
Q: You have an application deployed in Google Kubernetes Engine (GKE) that reads and
processes Pub/Sub messages. Each Pod handles a fixed number of messages per minute.
The rate at which messages are published to the Pub/Sub topic varies considerably
throughout the day and week, including occasional large batches of messages published at a
single moment.
You want to scale your GKE Deployment to be able to process messages in a timely manner.
What GKE feature should you use to automatically adapt your workload?
Answer: C
Question #: 199
Q: You are using Cloud Run to host a web application. You need to securely obtain the
application project ID and region where the application is running and display this
information to users. You want to use the most performant approach. What should you do?
Answer: A
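For reference, the project ID and region are exposed by the Cloud Run instance metadata server, which is the fast, credential-free way to obtain them. A minimal Go sketch (the handler path and output format are illustrative, not from the question):

    package main

    import (
        "fmt"
        "net/http"

        "cloud.google.com/go/compute/metadata"
    )

    func handler(w http.ResponseWriter, r *http.Request) {
        // Both values come from the instance metadata server; no network
        // egress or credentials are required.
        projectID, err := metadata.ProjectID()
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        // On Cloud Run this returns "projects/PROJECT_NUMBER/regions/REGION".
        region, err := metadata.Get("instance/region")
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        fmt.Fprintf(w, "project: %s, region: %s\n", projectID, region)
    }

    func main() {
        http.HandleFunc("/", handler)
        http.ListenAndServe(":8080", nil)
    }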
Question #: 200
Q: You need to deploy resources from your laptop to Google Cloud using Terraform.
Resources in your Google Cloud environment must be created using a service account. Your
Cloud Identity has the roles/iam.serviceAccountTokenCreator Identity and Access
Management (IAM) role and the necessary permissions to deploy the resources using
Terraform. You want to set up your development environment to deploy the desired
resources following Google-recommended best practices. What should you do?
A. 1. Download the service account’s key file in JSON format, and store it locally on your
laptop.
2. Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of
your downloaded key file.
B. 1. Run the following command from a command line: gcloud config set
auth/impersonate_service_account service-account-
[email protected].
2. Set the GOOGLE_OAUTH_ACCESS_TOKEN environment variable to the value that
is returned by the gcloud auth print-access-token command.
C. 1. Run the following command from a command line: gcloud auth application-default
login.
2. In the browser window that opens, authenticate using your personal credentials.
D. 1. Store the service account's key file in JSON format in Hashicorp Vault.
2. Integrate Terraform with Vault to retrieve the key file dynamically, and authenticate
to Vault using a short-lived access token.
Answer: B
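The same impersonation pattern shown in option B is also available programmatically. A hedged Go sketch using the google.golang.org/api/impersonate package; the service account email is a placeholder:

    package main

    import (
        "context"
        "fmt"
        "log"

        "google.golang.org/api/impersonate"
    )

    func main() {
        ctx := context.Background()
        // Mint a short-lived access token as the service account instead of
        // downloading a long-lived key file.
        ts, err := impersonate.CredentialsTokenSource(ctx, impersonate.CredentialsConfig{
            TargetPrincipal: "deployer@my-project.iam.gserviceaccount.com", // placeholder
            Scopes:          []string{"https://www.googleapis.com/auth/cloud-platform"},
        })
        if err != nil {
            log.Fatal(err)
        }
        tok, err := ts.Token()
        if err != nil {
            log.Fatal(err)
        }
        // The token can be exported, e.g. as GOOGLE_OAUTH_ACCESS_TOKEN for Terraform.
        fmt.Println("token expires:", tok.Expiry)
    }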
Question #: 201
Q: Your company uses Cloud Logging to manage large volumes of log data. You need to build
a real-time log analysis architecture that pushes logs to a third-party application for
processing. What should you do?
Answer: A
Question #: 202
Q: You are developing a new public-facing application that needs to retrieve specific
properties in the metadata of users’ objects in their respective Cloud Storage buckets. Due
to privacy and data residency requirements, you must retrieve only the metadata and not
the object data. You want to maximize the performance of the retrieval process. How should
you retrieve the metadata?
Answer: D
Question #: 203
Q: You are deploying a microservices application to Google Kubernetes Engine (GKE) that
will broadcast livestreams. You expect unpredictable traffic patterns and large variations in
the number of concurrent users. Your application must meet the following requirements:
Answer: A C
Question #: 204
Q: You work at a rapidly growing financial technology startup. You manage the payment
processing application written in Go and hosted on Cloud Run in the Singapore region (asia-
southeast1). The payment processing application processes data stored in a Cloud Storage
bucket that is also located in the Singapore region.
The startup plans to expand further into the Asia Pacific region. You plan to deploy the
Payment Gateway in Jakarta, Hong Kong, and Taiwan over the next six months. Each
location has data residency requirements that require customer data to reside in the
country where the transaction was made. You want to minimize the cost of these
deployments. What should you do?
A. Create a Cloud Storage bucket in each region, and create a Cloud Run service of
the payment processing application in each region.
B. Create a Cloud Storage bucket in each region, and create three Cloud Run services of
the payment processing application in the Singapore region.
C. Create three Cloud Storage buckets in the Asia multi-region, and create three Cloud
Run services of the payment processing application in the Singapore region.
D. Create three Cloud Storage buckets in the Asia multi-region, and create three Cloud
Run revisions of the payment processing application in the Singapore region.
Answer: A
Question #: 205
Q: You recently joined a new team that has a Cloud Spanner database instance running in
production. Your manager has asked you to optimize the Spanner instance to reduce cost
while maintaining high reliability and availability of the database. What should you do?
A. Use Cloud Logging to check for error logs, and reduce Spanner processing units by
small increments until you find the minimum capacity required.
B. Use Cloud Trace to monitor the requests per sec of incoming requests to Spanner, and
reduce Spanner processing units by small increments until you find the minimum
capacity required.
C. Use Cloud Monitoring to monitor the CPU utilization, and reduce Spanner processing
units by small increments until you find the minimum capacity required.
D. Use Snapshot Debugger to check for application errors, and reduce Spanner
processing units by small increments until you find the minimum capacity required.
Answer: C
Question #: 206
Q: You recently deployed a Go application on Google Kubernetes Engine (GKE). The
operations team has noticed that the application's CPU usage is high even when there is low
production traffic. The operations team has asked you to optimize your application's CPU
resource consumption. You want to determine which Go functions consume the largest
amount of CPU. What should you do?
A. Deploy a Fluent Bit daemonset on the GKE cluster to log data in Cloud Logging.
Analyze the logs to get insights into your application code’s performance.
B. Create a custom dashboard in Cloud Monitoring to evaluate the CPU performance
metrics of your application.
C. Connect to your GKE nodes using SSH. Run the top command on the shell to extract
the CPU utilization of your application.
D. Modify your Go application to capture profiling data. Analyze the CPU metrics
of your application in flame graphs in Profiler.
Answer: D
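Answer D corresponds to the documented Cloud Profiler setup for Go: start the agent once at process startup, then inspect CPU flame graphs in the Profiler UI. A minimal sketch (service name and version are placeholders):

    package main

    import (
        "log"
        "net/http"

        "cloud.google.com/go/profiler"
    )

    func main() {
        // Start the Profiler agent once; CPU and heap profiles are then
        // collected continuously with low overhead.
        if err := profiler.Start(profiler.Config{
            Service:        "payments-api", // placeholder
            ServiceVersion: "1.0.0",
        }); err != nil {
            log.Fatalf("failed to start profiler: %v", err)
        }
        http.ListenAndServe(":8080", nil)
    }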
Question #: 207
Q: Your team manages a Google Kubernetes Engine (GKE) cluster where an application is
running. A different team is planning to integrate with this application. Before they start the
integration, you need to ensure that the other team cannot make changes to your
application, but they can deploy the integration on GKE. What should you do?
A. Using Identity and Access Management (IAM), grant the Viewer IAM role on the
cluster project to the other team.
B. Create a new GKE cluster. Using Identity and Access Management (IAM), grant the
Editor role on the cluster project to the other team.
C. Create a new namespace in the existing cluster. Using Identity and Access
Management (IAM), grant the Editor role on the cluster project to the other team.
D. Create a new namespace in the existing cluster. Using Kubernetes role-based access
control (RBAC), grant the Admin role on the new namespace to the other team.
Answer: D
Question #: 208
Q: You have recently instrumented a new application with OpenTelemetry, and you want to
check the latency of your application requests in Trace. You want to ensure that a specific
request is always traced. What should you do?
A. Wait 10 minutes, then verify that Trace captures those types of requests
automatically.
B. Write a custom script that sends this type of request repeatedly from your dev
project.
C. Use the Trace API to apply custom attributes to the trace.
D. Add the X-Cloud-Trace-Context header to the request with the appropriate
parameters.
Answer: D
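The X-Cloud-Trace-Context header has the documented form TRACE_ID/SPAN_ID;o=1, where o=1 forces the request to be traced regardless of sampling. A hedged Go sketch of a client sending such a request (the URL and IDs are placeholders):

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        req, err := http.NewRequest("GET", "https://my-service.example.com/checkout", nil)
        if err != nil {
            log.Fatal(err)
        }
        // TRACE_ID is a 32-hex-character value, SPAN_ID a decimal span ID,
        // and o=1 forces tracing for this request.
        req.Header.Set("X-Cloud-Trace-Context",
            "105445aa7843bc8bf206b12000100000/1;o=1")
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        log.Println("status:", resp.Status)
    }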
Question #: 209
Q: You are trying to connect to your Google Kubernetes Engine (GKE) cluster using kubectl
from Cloud Shell. You have deployed your GKE cluster with a public endpoint. From Cloud
Shell, you run the following command:
You notice that the kubectl commands time out without returning an error message. What is
the most likely cause of this issue?
A. Your user account does not have privileges to interact with the cluster using kubectl.
B. Your Cloud Shell external IP address is not part of the authorized networks of
the cluster.
C. The Cloud Shell is not part of the same VPC as the GKE cluster.
D. A VPC firewall is blocking access to the cluster’s endpoint.
Answer: B
Question #: 210
Q: You are developing a web application that contains private images and videos stored in a
Cloud Storage bucket. Your users are anonymous and do not have Google Accounts. You
want to use your application-specific logic to control access to the images and videos. How
should you configure access?
A. Cache each web application user's IP address to create a named IP table using Google
Cloud Armor. Create a Google Cloud Armor security policy that allows users to access
the backend bucket.
B. Grant the Storage Object Viewer IAM role to allUsers. Allow users to access the bucket
after authenticating through your web application.
C. Configure Identity-Aware Proxy (IAP) to authenticate users into the web application.
Allow users to access the bucket after authenticating through IAP.
D. Generate a signed URL that grants read access to the bucket. Allow users to
access the URL after authenticating through your web application.
Answer: D
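A minimal Go sketch of issuing the signed URL after the application's own logic has authorized the user. Bucket and object names are placeholders, and the service account running the code must be able to sign (a key file or the iam.serviceAccounts.signBlob permission):

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "cloud.google.com/go/storage"
    )

    func main() {
        ctx := context.Background()
        client, err := storage.NewClient(ctx)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Issue a short-lived, read-only URL for the requested object.
        url, err := client.Bucket("private-media").SignedURL("videos/intro.mp4",
            &storage.SignedURLOptions{
                Method:  "GET",
                Expires: time.Now().Add(15 * time.Minute),
                Scheme:  storage.SigningSchemeV4,
            })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(url)
    }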
Question #: 211
Q: You need to configure a Deployment on Google Kubernetes Engine (GKE). You want to
include a check that verifies that the containers can connect to the database. If the Pod is
failing to connect, you want a script on the container to run to complete a graceful
shutdown. How should you configure the Deployment?
A. Create two jobs: one that checks whether the container can connect to the database,
and another that runs the shutdown script if the Pod is failing.
B. Create the Deployment with a livenessProbe for the container that will fail if
the container can't connect to the database. Configure a PreStop lifecycle handler
that runs the shutdown script if the container is failing.
C. Create the Deployment with a PostStart lifecycle handler that checks the service
availability. Configure a PreStop lifecycle handler that runs the shutdown script if the
container is failing.
D. Create the Deployment with an initContainer that checks the service availability.
Configure a PreStop lifecycle handler that runs the shutdown script if the Pod is failing.
Answer: B
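The livenessProbe needs an HTTP endpoint to hit. Below is a hypothetical Go /healthz handler that fails when the database is unreachable; the probe and PreStop hook themselves are configured on the container spec. The driver and DSN are placeholders:

    package main

    import (
        "context"
        "database/sql"
        "log"
        "net/http"
        "time"

        _ "github.com/lib/pq" // placeholder driver; use your database's driver
    )

    var db *sql.DB

    // healthzHandler backs the livenessProbe. When it fails, kubelet
    // terminates the container, which triggers the PreStop shutdown script.
    func healthzHandler(w http.ResponseWriter, r *http.Request) {
        ctx, cancel := context.WithTimeout(r.Context(), 2*time.Second)
        defer cancel()
        if err := db.PingContext(ctx); err != nil {
            http.Error(w, "database unreachable", http.StatusServiceUnavailable)
            return
        }
        w.WriteHeader(http.StatusOK)
    }

    func main() {
        var err error
        db, err = sql.Open("postgres", "postgres://placeholder-dsn")
        if err != nil {
            log.Fatal(err)
        }
        http.HandleFunc("/healthz", healthzHandler)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }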
Question #: 212
Q: You are responsible for deploying a new API. That API will have three different URL
paths:
• https://yourcompany.com/students
• https://yourcompany.com/teachers
• https://yourcompany.com/classes
You need to configure each API URL path to invoke a different function in your code. What
should you do?
A. Create one Cloud Function as a backend service exposed using an HTTPS load
balancer.
B. Create three Cloud Functions exposed directly.
C. Create one Cloud Function exposed directly.
D. Create three Cloud Functions as three backend services exposed using an
HTTPS load balancer.
Answer: D
Question #: 213
Q: You are deploying a microservices application to Google Kubernetes Engine (GKE). The
application will receive daily updates. You expect to deploy a large number of distinct
containers that will run on the Linux operating system (OS). You want to be alerted to any
known OS vulnerabilities in the new containers. You want to follow Google-recommended
best practices. What should you do?
A. Use the gcloud CLI to call Container Analysis to scan new container images. Review
the vulnerability results before each deployment.
B. Enable Container Analysis, and upload new container images to Artifact
Registry. Review the vulnerability results before each deployment.
C. Enable Container Analysis, and upload new container images to Artifact Registry.
Review the critical vulnerability results before each deployment.
D. Use the Container Analysis REST API to call Container Analysis to scan new container
images. Review the vulnerability results before each deployment.
Answer: B
Question #: 214
Q: You are a developer at a large organization. You have an application written in Go
running in a production Google Kubernetes Engine (GKE) cluster. You need to add a new
feature that requires access to BigQuery. You want to grant BigQuery access to your GKE
cluster following Google-recommended best practices. What should you do?
A. Create a Google service account with BigQuery access. Add the JSON key to Secret
Manager, and use the Go client library to access the JSON key.
B. Create a Google service account with BigQuery access. Add the Google service account
JSON key as a Kubernetes secret, and configure the application to use this secret.
C. Create a Google service account with BigQuery access. Add the Google service account
JSON key to Secret Manager, and use an init container to access the secret for the
application to use.
D. Create a Google service account and a Kubernetes service account. Configure
Workload Identity on the GKE cluster, and reference the Kubernetes service
account on the application Deployment.
Answer: D
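Once Workload Identity is configured, the client library picks up credentials through Application Default Credentials with no key file in the container. A minimal Go sketch (project ID and query are placeholders):

    package main

    import (
        "context"
        "fmt"
        "log"

        "cloud.google.com/go/bigquery"
        "google.golang.org/api/iterator"
    )

    func main() {
        ctx := context.Background()
        // ADC resolves to the Google service account bound to the Pod's
        // Kubernetes service account; no key is mounted or referenced.
        client, err := bigquery.NewClient(ctx, "my-project") // placeholder
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        it, err := client.Query("SELECT 1 AS ok").Read(ctx)
        if err != nil {
            log.Fatal(err)
        }
        var row []bigquery.Value
        for {
            if err := it.Next(&row); err == iterator.Done {
                break
            } else if err != nil {
                log.Fatal(err)
            }
            fmt.Println(row)
        }
    }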
Question #: 215
Q: You have an application written in Python running in production on Cloud Run. Your
application needs to read/write data stored in a Cloud Storage bucket in the same project.
You want to grant access to your application following the principle of least privilege. What
should you do?
Answer: A
Question #: 216
Q: Your team is developing unit tests for Cloud Function code. The code is stored in a Cloud
Source Repositories repository. You are responsible for implementing the tests. Only a
specific service account has the necessary permissions to deploy the code to Cloud
Functions. You want to ensure that the code cannot be deployed without first passing the
tests. How should you configure the unit testing process?
A. Configure Cloud Build to deploy the Cloud Function. If the code passes the tests, a
deployment approval is sent to you.
B. Configure Cloud Build to deploy the Cloud Function, using the specific service account
as the build agent. Run the unit tests after successful deployment.
C. Configure Cloud Build to run the unit tests. If the code passes the tests, the developer
deploys the Cloud Function.
D. Configure Cloud Build to run the unit tests, using the specific service account as
the build agent. If the code passes the tests, Cloud Build deploys the Cloud
Function.
Answer: D
Question #: 217
Q: Your team detected a spike of errors in an application running on Cloud Run in your
production project. The application is configured to read messages from Pub/Sub topic A,
process the messages, and write the messages to topic B. You want to conduct tests to
identify the cause of the errors. You can use a set of mock messages for testing. What should
you do?
A. Deploy the Pub/Sub and Cloud Run emulators on your local machine. Deploy
the application locally, and change the logging level in the application to DEBUG
or INFO. Write mock messages to topic A, and then analyze the logs.
B. Use the gcloud CLI to write mock messages to topic A. Change the logging level in the
application to DEBUG or INFO, and then analyze the logs.
C. Deploy the Pub/Sub emulator on your local machine. Point the production application
to your local Pub/Sub topics. Write mock messages to topic A, and then analyze the logs.
D. Use the Google Cloud console to write mock messages to topic A. Change the logging
level in the application to DEBUG or INFO, and then analyze the logs.
Answer: A
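The option text is not reproduced here, but a common cost-effective, reliable setup for such unit tests is the Pub/Sub emulator; the Go client honors PUBSUB_EMULATOR_HOST automatically. A sketch assuming the emulator runs locally on port 8085 (project and topic IDs are arbitrary placeholders):

    package main

    import (
        "context"
        "log"
        "os"

        "cloud.google.com/go/pubsub"
    )

    func main() {
        // Must be set before the client is created; any project ID works
        // against the emulator.
        os.Setenv("PUBSUB_EMULATOR_HOST", "localhost:8085")

        ctx := context.Background()
        client, err := pubsub.NewClient(ctx, "test-project")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        topic, err := client.CreateTopic(ctx, "topic-a")
        if err != nil {
            log.Fatal(err)
        }
        res := topic.Publish(ctx, &pubsub.Message{Data: []byte("mock message")})
        if _, err := res.Get(ctx); err != nil {
            log.Fatal(err)
        }
        log.Println("published mock message to the emulator")
    }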
Question #: 218
Q: You are developing a Java Web Server that needs to interact with Google Cloud services
via the Google Cloud API on the user's behalf. Users should be able to authenticate to the
Google Cloud API using their Google Cloud identities. Which workflow should you
implement in your web application?
A. 1. When a user arrives at your application, prompt them for their Google username
and password.
2. Store an SHA password hash in your application's database along with the user's
username.
3. The application authenticates to the Google Cloud API using HTTPs requests with the
user's username and password hash in the Authorization request header.
B. 1. When a user arrives at your application, prompt them for their Google username
and password.
2. Forward the user's username and password in an HTTPS request to the Google Cloud
authorization server, and request an access token.
3. The Google server validates the user's credentials and returns an access token to the
application.
4. The application uses the access token to call the Google Cloud API.
C. 1. When a user arrives at your application, route them to a Google Cloud consent
screen with a list of requested permissions that prompts the user to sign in with SSO to
their Google Account.
2. After the user signs in and provides consent, your application receives an
authorization code from a Google server.
3. The Google server returns the authorization code to the user, which is stored in the
browser's cookies.
4. The user authenticates to the Google Cloud API using the authorization code in the
cookie.
D. 1. When a user arrives at your application, route them to a Google Cloud
consent screen with a list of requested permissions that prompts the user to sign
in with SSO to their Google Account.
2. After the user signs in and provides consent, your application receives an
authorization code from a Google server.
3. The application requests a Google Server to exchange the authorization code
with an access token.
4. The Google server responds with the access token that is used by the
application to call the Google Cloud API.
Answer: D
Question #: 219
Q: You recently developed a new application. You want to deploy the application on Cloud
Run without a Dockerfile. Your organization requires that all container images are pushed
to a centrally managed container repository. How should you build your container using
Google Cloud services? (Choose two.)
Answer: C D
Question #: 220
Q: You work for an organization that manages an online ecommerce website. Your company
plans to expand across the world; however, the estore currently serves one specific region.
You need to select a SQL database and configure a schema that will scale as your
organization grows. You want to create a table that stores all customer transactions and
ensure that the customer (CustomerId) and the transaction (TransactionId) are unique.
What should you do?
A. Create a Cloud SQL table that has TransactionId and CustomerId configured as
primary keys. Use an incremental number for the TransactionId.
B. Create a Cloud SQL table that has TransactionId and CustomerId configured as
primary keys. Use a random string (UUID) for the TransactionId.
C. Create a Cloud Spanner table that has TransactionId and CustomerId
configured as primary keys. Use a random string (UUID) for the TransactionId.
D. Create a Cloud Spanner table that has TransactionId and CustomerId configured as
primary keys. Use an incremental number for the TransactionId.
Answer: C
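A random UUID avoids the write hotspots that monotonically increasing keys create on Spanner's key-range-based splits. A minimal Go sketch of such an insert; the database path, table, and columns are placeholders:

    package main

    import (
        "context"
        "log"

        "cloud.google.com/go/spanner"
        "github.com/google/uuid"
    )

    func main() {
        ctx := context.Background()
        client, err := spanner.NewClient(ctx,
            "projects/my-project/instances/my-instance/databases/estore") // placeholder
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // The UUID spreads writes evenly across the keyspace.
        m := spanner.InsertOrUpdate("Transactions",
            []string{"CustomerId", "TransactionId", "Amount"},
            []interface{}{"cust-42", uuid.NewString(), 19.99})
        if _, err := client.Apply(ctx, []*spanner.Mutation{m}); err != nil {
            log.Fatal(err)
        }
    }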
Question #: 221
Q: You are monitoring a web application that is written in Go and deployed in Google
Kubernetes Engine. You notice an increase in CPU and memory utilization. You need to
determine which source code is consuming the most CPU and memory resources. What
should you do?
A. Download, install, and start the Snapshot Debugger agent in your VM. Take debug
snapshots of the functions that take the longest time. Review the call stack frame, and
identify the local variables at that level in the stack.
B. Import the Cloud Profiler package into your application, and initialize the
Profiler agent. Review the generated flame graph in the Google Cloud console to
identify time-intensive functions.
C. Import OpenTelemetry and Trace export packages into your application, and create
the trace provider.
Review the latency data for your application on the Trace overview page, and identify
where bottlenecks are occurring.
D. Create a Cloud Logging query that gathers the web application's logs. Write a Python
script that calculates the difference between the timestamps from the beginning and the
end of the application's longest functions to identify time-intensive functions.
Answer: B
Question #: 222
Q: You have a container deployed on Google Kubernetes Engine. The container can
sometimes be slow to launch, so you have implemented a liveness probe. You notice that the
liveness probe occasionally fails on launch. What should you do?
Answer: A
Question #: 223
Q: You work for an organization that manages an ecommerce site. Your application is
deployed behind a global HTTP(S) load balancer. You need to test a new product
recommendation algorithm. You plan to use A/B testing to determine the new algorithm’s
effect on sales in a randomized way. How should you test this feature?
Answer: A
Question #: 224
Q: You plan to deploy a new application revision with a Deployment resource to Google
Kubernetes Engine (GKE) in production. The container might not work correctly. You want
to minimize risk in case there are issues after deploying the revision. You want to follow
Google-recommended best practices. What should you do?
Answer: A
Question #: 225
Q: Before promoting your new application code to production, you want to conduct testing
across a variety of different users. Although this plan is risky, you want to test the new
version of the application with production users and you want to control which users are
forwarded to the new version of the application based on their operating system. If bugs are
discovered in the new version, you want to roll back the newly deployed version of the
application as quickly as possible. What should you do?
A. Deploy your application on Cloud Run. Use traffic splitting to direct a subset of user
traffic to the new version based on the revision tag.
B. Deploy your application on Google Kubernetes Engine with Anthos Service
Mesh. Use traffic splitting to direct a subset of user traffic to the new version
based on the user-agent header.
C. Deploy your application on App Engine. Use traffic splitting to direct a subset of user
traffic to the new version based on the IP address.
D. Deploy your application on Compute Engine. Use Traffic Director to direct a subset of
user traffic to the new version based on predefined weights.
Answer: B
Question #: 226
Q: Your team is writing a backend application to implement the business logic for an
interactive voice response (IVR) system that will support a payroll application. The IVR
system has the following technical characteristics:
You want to minimize cost, effort, and operational overhead. Where should you deploy the
backend application?
A. Compute Engine
B. Google Kubernetes Engine cluster in Standard mode
C. Cloud Functions
D. Cloud Run
Answer: D
Question #: 227
Q: You are developing an application hosted on Google Cloud that uses a MySQL relational
database schema. The application will have a large volume of reads and writes to the
database and will require backups and ongoing capacity planning. Your team does not have
time to fully manage the database but can take on small administrative tasks. How should
you host the database?
A. Configure Cloud SQL to host the database, and import the schema into Cloud
SQL.
B. Deploy MySQL from the Google Cloud Marketplace to the database using a client, and
import the schema.
C. Configure Bigtable to host the database, and import the data into Bigtable.
D. Configure Cloud Spanner to host the database, and import the schema into Cloud
Spanner.
E. Configure Firestore to host the database, and import the data into Firestore.
Answer: A
Question #: 228
Q: You are developing a new web application using Cloud Run and committing code to Cloud
Source Repositories. You want to deploy new code in the most efficient way possible. You
have already created a Cloud Build YAML file that builds a container and runs the following
command: gcloud run deploy. What should you do next?
A. Create a Pub/Sub topic to be notified when code is pushed to the repository. Create a
Pub/Sub trigger that runs the build file when an event is published to the topic.
B. Create a build trigger that runs the build file in response to a repository code
being pushed to the development branch.
C. Create a webhook build trigger that runs the build file in response to HTTP POST calls
to the webhook URL.
D. Create a Cron job that runs the following command every 24 hours: gcloud builds
submit.
Answer: B
Question #: 229
Q: You are a developer at a large organization. You are deploying a web application to
Google Kubernetes Engine (GKE). The DevOps team has built a CI/CD pipeline that uses
Cloud Deploy to deploy the application to Dev, Test, and Prod clusters in GKE. After Cloud
Deploy successfully deploys the application to the Dev cluster, you want to automatically
promote it to the Test cluster. How should you configure this process following Google-
recommended best practices?
A. 1. Create a Cloud Build trigger that listens for SUCCEEDED Pub/Sub messages from
the clouddeploy-operations topic.
2. Configure Cloud Build to include a step that promotes the application to the Test
cluster.
B. 1. Create a Cloud Function that calls the Google Cloud Deploy API to promote the
application to the Test cluster.
2. Configure this function to be triggered by SUCCEEDED Pub/Sub messages from the
cloud-builds topic.
C. 1. Create a Cloud Function that calls the Google Cloud Deploy API to promote
the application to the Test cluster.
2. Configure this function to be triggered by SUCCEEDED Pub/Sub messages from
the clouddeploy-operations topic.
D. 1. Create a Cloud Build pipeline that uses the gke-deploy builder.
2. Create a Cloud Build trigger that listens for SUCCEEDED Pub/Sub messages from the
cloud-builds topic.
3. Configure this pipeline to run a deployment step to the Test cluster.
Answer: C
Question #: 230
Q: Your application is running as a container in a Google Kubernetes Engine cluster. You
need to add a secret to your application using a secure approach. What should you do?
A. Create a Kubernetes Secret, and pass the Secret as an environment variable to the
container.
B. Enable Application-layer Secret Encryption on the cluster using a Cloud Key
Management Service (KMS) key.
C. Store the credential in Cloud KMS. Create a Google service account (GSA) to read the
credential from Cloud KMS. Export the GSA as a .json file, and pass the .json file to the
container as a volume which can read the credential from Cloud KMS.
D. Store the credential in Secret Manager. Create a Google service account (GSA) to
read the credential from Secret Manager. Create a Kubernetes service account
(KSA) to run the container. Use Workload Identity to configure your KSA to act as
a GSA.
Answer: D
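With the Workload Identity binding from answer D in place, the container reads the secret at runtime with no key material present. A minimal Go sketch (the secret resource name is a placeholder):

    package main

    import (
        "context"
        "fmt"
        "log"

        secretmanager "cloud.google.com/go/secretmanager/apiv1"
        "cloud.google.com/go/secretmanager/apiv1/secretmanagerpb"
    )

    func main() {
        ctx := context.Background()
        // The client authenticates as the GSA bound to the Pod's KSA.
        client, err := secretmanager.NewClient(ctx)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        resp, err := client.AccessSecretVersion(ctx, &secretmanagerpb.AccessSecretVersionRequest{
            Name: "projects/my-project/secrets/db-password/versions/latest", // placeholder
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("secret is %d bytes\n", len(resp.Payload.Data))
    }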
Question #: 231
Q: You are a developer at a financial institution. You use Cloud Shell to interact with Google
Cloud services. User data is currently stored on an ephemeral disk; however, a recently
passed regulation mandates that you can no longer store sensitive information on an
ephemeral disk. You need to implement a new storage solution for your user data. You want
to minimize code changes. Where should you store your user data?
A. Store user data on a Cloud Shell home disk, and log in at least every 120 days to
prevent its deletion.
B. Store user data on a persistent disk in a Compute Engine instance.
C. Store user data in a Cloud Storage bucket.
D. Store user data in BigQuery tables.
Answer: B
Question #: 232
Q: You recently developed a web application to transfer log data to a Cloud Storage bucket
daily. Authenticated users will regularly review logs from the prior two weeks for critical
events. After that, logs will be reviewed once annually by an external auditor. Data must be
stored for a period of no less than 7 years. You want to propose a storage solution that
meets these requirements and minimizes costs. What should you do? (Choose two.)
A. Use the Bucket Lock feature to set the retention policy on the data.
B. Run a scheduled job to set the storage class to Coldline for objects older than 14 days.
C. Create a JSON Web Token (JWT) for users needing access to the Coldline storage
buckets.
D. Create a lifecycle management policy to set the storage class to Coldline for
objects older than 14 days.
E. Create a lifecycle management policy to set the storage class to Nearline for objects
older than 14 days.
Answer: A D
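Both answers can be applied in one bucket update; the sketch below uses the Go client for illustration, though the same policy can be set in the console or with gcloud. The bucket name is a placeholder, and note that Bucket Lock is irreversible:

    package main

    import (
        "context"
        "log"
        "time"

        "cloud.google.com/go/storage"
    )

    func main() {
        ctx := context.Background()
        client, err := storage.NewClient(ctx)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        bucket := client.Bucket("audit-logs") // placeholder bucket name

        // One update sets both pieces: a lifecycle rule that moves objects
        // to Coldline after 14 days, and a (roughly) 7-year retention policy.
        attrs, err := bucket.Update(ctx, storage.BucketAttrsToUpdate{
            Lifecycle: &storage.Lifecycle{Rules: []storage.LifecycleRule{{
                Action:    storage.LifecycleAction{Type: "SetStorageClass", StorageClass: "COLDLINE"},
                Condition: storage.LifecycleCondition{AgeInDays: 14},
            }}},
            RetentionPolicy: &storage.RetentionPolicy{
                RetentionPeriod: 7 * 365 * 24 * time.Hour,
            },
        })
        if err != nil {
            log.Fatal(err)
        }

        // Bucket Lock makes the retention policy permanent; this cannot be undone.
        err = bucket.If(storage.BucketConditions{MetagenerationMatch: attrs.MetaGeneration}).
            LockRetentionPolicy(ctx)
        if err != nil {
            log.Fatal(err)
        }
    }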
Question #: 233
Q: Your team is developing a Cloud Function triggered by Cloud Storage events. You want to
accelerate testing and development of your Cloud Function while following Google-
recommended best practices. What should you do?
A. Create a new Cloud Function that is triggered when Cloud Audit Logs detects the
cloudfunctions.functions.sourceCodeSet operation in the original Cloud Function. Send
mock requests to the new function to evaluate the functionality.
B. Make a copy of the Cloud Function, and rewrite the code to be HTTP-triggered. Edit
and test the new version by triggering the HTTP endpoint. Send mock requests to the
new function to evaluate the functionality.
C. Install the Functions Frameworks library, and configure the Cloud Function on
localhost. Make a copy of the function, and make edits to the new version. Test the new
version using curl.
D. Make a copy of the Cloud Function in the Google Cloud console. Use the Cloud
console's in-line editor to make source code changes to the new function. Modify your
web application to call the new function, and test the new version in production.
Answer: C
Question #: 234
Q: Your team is setting up a build pipeline for an application that will run in Google
Kubernetes Engine (GKE). For security reasons, you only want images produced by the
pipeline to be deployed to your GKE cluster. Which combination of Google Cloud services
should you use?
Answer: D
Question #: 235
Q: You are supporting a business-critical application in production deployed on Cloud Run.
The application is reporting HTTP 500 errors that are affecting the usability of the
application. You want to be alerted when the number of errors exceeds 15% of the requests
within a specific time window. What should you do?
A. Create a Cloud Function that consumes the Cloud Monitoring API. Use Cloud
Scheduler to trigger the Cloud Function daily and alert you if the number of errors is
above the defined threshold.
B. Navigate to the Cloud Run page in the Google Cloud console, and select the service
from the services list. Use the Metrics tab to visualize the number of errors for that
revision, and refresh the page daily.
C. Create an alerting policy in Cloud Monitoring that alerts you if the number of
errors is above the defined threshold.
D. Create a Cloud Function that consumes the Cloud Monitoring API. Use Cloud
Composer to trigger the Cloud Function daily and alert you if the number of errors is
above the defined threshold.
Answer: C
Question #: 236
Q: You need to build a public API that authenticates, enforces quotas, and reports metrics
for API callers. Which tool should you use to complete this architecture?
A. App Engine
B. Cloud Endpoints
C. Identity-Aware Proxy
D. GKE Ingress for HTTP(S) Load Balancing
Answer: B
Question #: 237
Q: You noticed that your application was forcefully shut down during a Deployment update
in Google Kubernetes Engine. Your application didn’t close the database connection before
it was terminated. You want to update your application to make sure that it completes a
graceful shutdown. What should you do?
Answer: A
Question #: 238
Q: You are a lead developer working on a new retail system that runs on Cloud Run and
Firestore in Datastore mode. A web UI requirement is for the system to display a list of
available products when users access the system and for the user to be able to browse
through all products. You have implemented this requirement in the minimum viable
product (MVP) phase by returning a list of all available products stored in Firestore.
A few months after go-live, you notice that Cloud Run instances are terminated with HTTP
500: Container instances are exceeding memory limits errors during busy times. This error
coincides with spikes in the number of Datastore entity reads. You need to prevent Cloud
Run from crashing and decrease the number of Datastore entity reads. You want to use a
solution that optimizes system performance. What should you do?
A. Modify the query that returns the product list using integer offsets.
B. Modify the query that returns the product list using limits.
C. Modify the Cloud Run configuration to increase the memory limits.
D. Modify the query that returns the product list using cursors.
Answer: D
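Unlike integer offsets, a cursor resumes exactly where the previous page ended, so Datastore does not re-read (and bill for) all of the preceding entities. A hedged Go sketch of cursor-based pagination; the kind, struct fields, and project ID are placeholders:

    package main

    import (
        "context"
        "log"

        "cloud.google.com/go/datastore"
        "google.golang.org/api/iterator"
    )

    // productPage returns one page of products plus an opaque cursor the
    // web UI sends back to fetch the next page.
    func productPage(ctx context.Context, client *datastore.Client, start string) ([]string, string, error) {
        q := datastore.NewQuery("Product").Limit(20)
        if start != "" {
            cursor, err := datastore.DecodeCursor(start)
            if err != nil {
                return nil, "", err
            }
            q = q.Start(cursor)
        }

        var names []string
        it := client.Run(ctx, q)
        for {
            var p struct{ Name string } // placeholder entity shape
            _, err := it.Next(&p)
            if err == iterator.Done {
                break
            }
            if err != nil {
                return nil, "", err
            }
            names = append(names, p.Name)
        }
        next, err := it.Cursor()
        if err != nil {
            return nil, "", err
        }
        return names, next.String(), nil
    }

    func main() {
        ctx := context.Background()
        client, err := datastore.NewClient(ctx, "my-project") // placeholder
        if err != nil {
            log.Fatal(err)
        }
        page, next, err := productPage(ctx, client, "")
        if err != nil {
            log.Fatal(err)
        }
        log.Println(page, next)
    }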
Question #: 239
Q: You need to deploy an internet-facing microservices application to Google Kubernetes
Engine (GKE). You want to validate new features using the A/B testing method. You have
the following requirements for deploying new container image releases:
• There is no downtime when new container images are deployed.
• New production releases are tested and verified using a subset of production users.
What should you do?
A. 1. Configure your CI/CD pipeline to update the Deployment manifest file by replacing
the container version with the latest version.
2. Recreate the Pods in your cluster by applying the Deployment manifest file.
3. Validate the application's performance by comparing its functionality with the
previous release version, and roll back if an issue arises.
B. 1. Create a second namespace on GKE for the new release version.
2. Create a Deployment configuration for the second namespace with the desired
number of Pods.
3. Deploy new container versions in the second namespace.
4. Update the Ingress configuration to route traffic to the namespace with the new
container versions.
C. 1. Install the Anthos Service Mesh on your GKE cluster.
2. Create two Deployments on the GKE cluster, and label them with different
version names.
3. Implement an Istio routing rule to send a small percentage of traffic to the
Deployment that references the new version of the application.
D. 1. Implement a rolling update pattern by replacing the Pods gradually with the new
release version.
2. Validate the application's performance for the new subset of users during the rollout,
and roll back if an issue arises.
Answer: C
Question #: 240
Q: Your team manages a large Google Kubernetes Engine (GKE) cluster. Several application
teams currently use the same namespace to develop microservices for the cluster. Your
organization plans to onboard additional teams to create microservices. You need to
configure multiple environments while ensuring the security and optimal performance of
each team’s work. You want to minimize cost and follow Google-recommended best
practices. What should you do?
A. Create new role-based access controls (RBAC) for each team in the existing cluster,
and define resource quotas.
B. Create a new namespace for each environment in the existing cluster, and define
resource quotas.
C. Create a new GKE cluster for each team.
D. Create a new namespace for each team in the existing cluster, and define
resource quotas.
Answer: D
Question #: 241
Q: You have deployed a Java application to Cloud Run. Your application requires access to a
database hosted on Cloud SQL. Due to regulatory requirements, your connection to the
Cloud SQL instance must use its internal IP address. How should you configure the
connectivity while following Google-recommended best practices?
Answer: B
Question #: 242
Q: Your application stores customers’ content in a Cloud Storage bucket, with each object
being encrypted with the customer's encryption key. The key for each object in Cloud
Storage is entered into your application by the customer. You discover that your application
is receiving an HTTP 4xx error when reading the object from Cloud Storage. What is a
possible cause of this error?
A. You attempted the read operation on the object with the customer's base64-encoded
key.
B. You attempted the read operation without the base64-encoded SHA256 hash of
the encryption key.
C. You entered the same encryption algorithm specified by the customer when
attempting the read operation.
D. You attempted the read operation on the object with the base64-encoded SHA256
hash of the customer's key.
Answer: B
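For customer-supplied encryption keys (CSEK), read requests must carry the base64-encoded key and the base64-encoded SHA256 hash of the key in the x-goog-encryption-* headers. A small Go sketch of computing both values (the zeroed key is a placeholder for the customer's real 32-byte AES-256 key):

    package main

    import (
        "crypto/sha256"
        "encoding/base64"
        "fmt"
    )

    func main() {
        // Required headers on a CSEK read:
        //   x-goog-encryption-algorithm:  AES256
        //   x-goog-encryption-key:        <base64 key>
        //   x-goog-encryption-key-sha256: <base64 SHA256 of key>
        key := make([]byte, 32) // placeholder; use the customer's key
        sum := sha256.Sum256(key)
        fmt.Println("x-goog-encryption-key:", base64.StdEncoding.EncodeToString(key))
        fmt.Println("x-goog-encryption-key-sha256:", base64.StdEncoding.EncodeToString(sum[:]))
    }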
Question #: 243
Q: You have two Google Cloud projects, named Project A and Project B. You need to create a
Cloud Function in Project A that saves the output in a Cloud Storage bucket in Project B. You
want to follow the principle of least privilege. What should you do?
Answer: B
Question #: 244
Q: A governmental regulation was recently passed that affects your application. For
compliance purposes, you are now required to send a duplicate of specific application logs
from your application’s project to a project that is restricted to the security team. What
should you do?
Answer: A
Question #: 245
Q: You plan to deploy a new Go application to Cloud Run. The source code is stored in Cloud
Source Repositories. You need to configure a fully managed, automated, continuous
deployment pipeline that runs when a source code commit is made. You want to use the
simplest deployment solution. What should you do?
A. Configure a cron job on your workstations to periodically run gcloud run deploy --
source in the working directory.
B. Configure a Jenkins trigger to run the container build and deploy process for each
source code commit to Cloud Source Repositories.
C. Configure continuous deployment of new revisions from a source repository for
Cloud Run using buildpacks.
D. Use Cloud Build with a trigger configured to run the container build and deploy
process for each source code commit to Cloud Source Repositories.
Answer: D
Question #: 246
Q: Your team has created an application that is hosted on a Google Kubernetes Engine (GKE)
cluster. You need to connect the application to a legacy REST service that is deployed in two
GKE clusters in two different regions. You want to connect your application to the target
service in a way that is resilient. You also want to be able to run health checks on the legacy
service on a separate port. How should you set up the connection? (Choose two.)
A. Use Traffic Director with a sidecar proxy to connect the application to the service.
B. Use a proxyless Traffic Director configuration to connect the application to the
service.
C. Configure the legacy service's firewall to allow health checks originating from the
proxy.
D. Configure the legacy service's firewall to allow health checks originating from the
application.
E. Configure the legacy service's firewall to allow health checks originating from the
Traffic Director control plane.
Answer:
Question #: 247
Q: You have an application running in a production Google Kubernetes Engine (GKE)
cluster. You use Cloud Deploy to automatically deploy your application to your production
GKE cluster. As part of your development process, you are planning to make frequent
changes to the application’s source code and need to select the tools to test the changes
before pushing them to your remote source code repository. Your toolset must meet the
following requirements:
• Test frequent local changes automatically.
• Local deployment emulates production deployment.
Which tools should you use to test building and running a container on your laptop using
minimal resources?
Answer: C
Question #: 248
Q: You are deploying a Python application to Cloud Run using Cloud Source Repositories
and Cloud Build. The Cloud Build pipeline is shown below:
You want to optimize deployment times and avoid unnecessary steps. What should you do?
Answer: D
Question #: 249
Q: You are developing an event-driven application. You have created a topic to receive
messages sent to Pub/Sub. You want those messages to be processed in real time. You need
the application to be independent from any other system and only incur costs when new
messages arrive. How should you configure the architecture?
Answer: D
Question #: 250
Q: You have an application running on Google Kubernetes Engine (GKE). The application is
currently using a logging library and is outputting to standard output. You need to export
the logs to Cloud Logging, and you need the logs to include metadata about each request.
You want to use the simplest method to accomplish this. What should you do?
A. Change your application’s logging library to the Cloud Logging library, and
configure your application to export logs to Cloud Logging.
B. Update your application to output logs in JSON format, and add the necessary
metadata to the JSON.
C. Update your application to output logs in CSV format, and add the necessary metadata
to the CSV.
D. Install the Fluent Bit agent on each of your GKE nodes, and have the agent export all
logs from /var/log.
Answer: A
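A minimal sketch of switching to the Cloud Logging client library in Go; the project ID, log name, and payload fields are placeholders:

    package main

    import (
        "context"
        "log"

        "cloud.google.com/go/logging"
    )

    func main() {
        ctx := context.Background()
        client, err := logging.NewClient(ctx, "my-project") // placeholder
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close() // Close flushes buffered entries

        logger := client.Logger("app-log")
        // Structured entries carry severity and metadata that plain stdout
        // lines would lose.
        logger.Log(logging.Entry{
            Severity: logging.Info,
            Payload:  map[string]string{"event": "order_created", "orderId": "1234"},
            Labels:   map[string]string{"handler": "checkout"},
        })
    }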
Question #: 251
Q: You are working on a new application that is deployed on Cloud Run and uses Cloud
Functions. Each time new features are added, new Cloud Functions and Cloud Run services
are deployed. You use ENV variables to keep track of the services and enable interservice
communication, but the maintenance of the ENV variables has become difficult. You want to
implement dynamic discovery in a scalable way. What should you do?
A. Configure your microservices to use the Cloud Run Admin and Cloud Functions APIs
to query for deployed Cloud Run services and Cloud Functions in the Google Cloud
project.
B. Create a Service Directory namespace. Use API calls to register the services
during deployment, and query during runtime.
C. Rename the Cloud Functions and Cloud Run service endpoints using a well-
documented naming convention.
D. Deploy Hashicorp Consul on a single Compute Engine instance. Register the services
with Consul during deployment, and query during runtime.
Answer: B
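A hedged sketch of the runtime lookup side in Go, using the Service Directory Lookup API to resolve a registered service's endpoints instead of tracking URLs in ENV variables. The resource name (project, region, namespace, service) is a placeholder:

    package main

    import (
        "context"
        "fmt"
        "log"

        servicedirectory "cloud.google.com/go/servicedirectory/apiv1"
        "cloud.google.com/go/servicedirectory/apiv1/servicedirectorypb"
    )

    func main() {
        ctx := context.Background()
        client, err := servicedirectory.NewLookupClient(ctx)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Resolve the current endpoints of a registered service at runtime.
        resp, err := client.ResolveService(ctx, &servicedirectorypb.ResolveServiceRequest{
            Name: "projects/my-project/locations/us-central1/namespaces/prod/services/orders", // placeholder
        })
        if err != nil {
            log.Fatal(err)
        }
        for _, ep := range resp.Service.Endpoints {
            fmt.Println(ep.Name, ep.Address, ep.Port)
        }
    }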
Question #: 252
Q: You work for a financial services company that has a container-first approach. Your team
develops microservices applications. A Cloud Build pipeline creates the container image,
runs regression tests, and publishes the image to Artifact Registry. You need to ensure that
only containers that have passed the regression tests are deployed to Google Kubernetes
Engine (GKE) clusters. You have already enabled Binary Authorization on the GKE clusters.
What should you do next?
A. Create an attestor and a policy. After a container image has successfully passed
the regression tests, use Cloud Build to run Kritis Signer to create an attestation
for the container image.
B. Deploy Voucher Server and Voucher Client components. After a container image has
successfully passed the regression tests, run Voucher Client as a step in the Cloud Build
pipeline.
C. Set the Pod Security Standard level to Restricted for the relevant namespaces. Use
Cloud Build to digitally sign the container images that have passed the regression tests.
D. Create an attestor and a policy. Create an attestation for the container images that
have passed the regression tests as a step in the Cloud Build pipeline.
Answer: A
Question #: 253
Q: You are reviewing and updating your Cloud Build steps to adhere to best practices.
Currently, your build steps include:
You need to add a step to perform a vulnerability scan of the built container image, and you
want the results of the scan to be available to your deployment pipeline running in Google
Cloud. You want to minimize changes that could disrupt other teams’ processes. What
should you do?
Answer: C
Question #: 254
Q: You are developing an online gaming platform as a microservices application on Google
Kubernetes Engine (GKE). Users on social media are complaining about long loading times
for certain URL requests to the application. You need to investigate performance
bottlenecks in the application and identify which HTTP requests have a significantly high
latency span in user requests. What should you do?
A. Configure GKE workload metrics using kubectl. Select all Pods to send their metrics to
Cloud Monitoring. Create a custom dashboard of application metrics in Cloud
Monitoring to determine performance bottlenecks of your GKE cluster.
B. Update your microservices to log HTTP request methods and URL paths to STDOUT.
Use the logs router to send container logs to Cloud Logging. Create filters in Cloud
Logging to evaluate the latency of user requests across different methods and URL
paths.
C. Instrument your microservices by installing the OpenTelemetry tracing
package. Update your application code to send traces to Trace for inspection and
analysis. Create an analysis report on Trace to analyze user requests.
D. Install tcpdump on your GKE nodes. Run tcpdump to capture network traffic over an
extended period of time to collect data. Analyze the data files using Wireshark to
determine the cause of high latency.
Answer: C
Question #: 255
Q: You need to load-test a set of REST API endpoints that are deployed to Cloud Run. The
API responds to HTTP POST requests. Your load tests must meet the following
requirements:
• Load is initiated from multiple parallel threads.
• User traffic to the API originates from multiple source IP addresses.
• Load can be scaled up using additional test instances.
You want to follow Google-recommended best practices. How should you configure the load
testing?
A. Create an image that has cURL installed, and configure cURL to run a test plan. Deploy
the image in a managed instance group, and run one instance of the image for each VM.
B. Create an image that has cURL installed, and configure cURL to run a test plan. Deploy
the image in an unmanaged instance group, and run one instance of the image for each
VM.
C. Deploy a distributed load testing framework on a private Google Kubernetes
Engine cluster. Deploy additional Pods as needed to initiate more traffic and
support the number of concurrent users.
D. Download the container image of a distributed load testing framework on Cloud Shell.
Sequentially start several instances of the container on Cloud Shell to increase the load
on the API.
Answer: C
Question #: 256
Q: Your team is creating a serverless web application on Cloud Run. The application needs
to access images stored in a private Cloud Storage bucket. You want to give the application
Identity and Access Management (IAM) permission to access the images in the bucket, while
also securing the services using Google-recommended best practices. What should you do?
A. Enforce signed URLs for the desired bucket. Grant the Storage Object Viewer IAM role
on the bucket to the Compute Engine default service account.
B. Enforce public access prevention for the desired bucket. Grant the Storage Object
Viewer IAM role on the bucket to the Compute Engine default service account.
C. Enforce signed URLs for the desired bucket. Create and update the Cloud Run service
to use a user-managed service account. Grant the Storage Object Viewer IAM role on the
bucket to the service account.
D. Enforce public access prevention for the desired bucket. Create and update the
Cloud Run service to use a user-managed service account. Grant the Storage
Object Viewer IAM role on the bucket to the service account.
Answer: D
Question #: 257
Q: You are using Cloud Run to host a global ecommerce web application. Your company’s
design team is creating a new color scheme for the web app. You have been tasked with
determining whether the new color scheme will increase sales. You want to conduct testing
on live production traffic. How should you design the study?
Answer: A
Question #: 258
Q: You are a developer at a large corporation. You manage three Google Kubernetes Engine
clusters on Google Cloud. Your team’s developers need to switch from one cluster to
another regularly without losing access to their preferred development tools. You want to
configure access to these multiple clusters while following Google-recommended best
practices. What should you do?
A. Ask the developers to use Cloud Shell and run gcloud container clusters get-
credentials to switch to another cluster.
B. In a configuration file, define the clusters, users, and contexts. Share the file
with the developers and ask them to use kubectl config to add cluster, user, and
context details.
C. Ask the developers to install the gcloud CLI on their workstation and run gcloud
container clusters get-credentials to switch to another cluster.
D. Ask the developers to open three terminals on their workstation and use kubectl
config to configure access to each cluster.
Answer: B
Question #: 259
Q: You are a lead developer working on a new retail system that runs on Cloud Run and
Firestore. A web UI requirement is for the user to be able to browse through all products. A
few months after go-live, you notice that Cloud Run instances are terminated with HTTP
500: Container instances are exceeding memory limits errors during busy times. This error
coincides with spikes in the number of Firestore queries.
You need to prevent Cloud Run from crashing and decrease the number of Firestore
queries. You want to use a solution that optimizes system performance. What should you
do?
A. Modify the query that returns the product list using cursors with limits.
B. Create a custom index over the products.
C. Modify the query that returns the product list using integer offsets.
D. Modify the Cloud Run configuration to increase the memory limits.
Answer: A
Question #: 260
Q: You are a developer at a large organization. Your team uses Git for source code
management (SCM). You want to ensure that your team follows Google-recommended best
practices to manage code to drive higher rates of software delivery. Which SCM process
should your team use?
A. Each developer commits their code to the main branch before each product release,
conducts testing, and rolls back if integration issues are detected.
B. Each group of developers copies the repository, commits their changes to their
repository, and merges their code into the main repository before each product release.
C. Each developer creates a branch for their own work, commits their changes to their
branch, and merges their code into the main branch daily.
D. Each group of developers creates a feature branch from the main branch for
their work, commits their changes to their branch, and merges their code into the
main branch after the change advisory board approves it.
Answer: D
Question #: 261
Q: You have a web application that publishes messages to Pub/Sub. You plan to build new
versions of the application locally and want to quickly test Pub/Sub integration for each
new build. How should you configure local testing?
Answer: B
Question #: 262
Q: Your ecommerce application receives external requests and forwards them to third-party
API services for credit card processing, shipping, and inventory management as shown in
the diagram.
Your customers are reporting that your application is running slowly at unpredictable
times. The application doesn’t report any metrics. You need to determine the cause of the
inconsistent performance. What should you do?
A. Install the OpenTelemetry library for your respective language, and instrument
your application.
B. Install the Ops Agent inside your container and configure it to gather application
metrics.
C. Modify your application to read and forward the X-Cloud-Trace-Context header when
it calls the downstream services.
D. Enable Managed Service for Prometheus on the Google Kubernetes Engine cluster to
gather application metrics.
Answer: A
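A hedged Go sketch of the OpenTelemetry instrumentation this answer implies: wrapping the outbound transport produces a client span (with latency) for every third-party API call, and wrapping the inbound handler produces a server span per user request. Exporter setup (e.g., to Cloud Trace) is omitted, and the downstream URL is a placeholder:

    package main

    import (
        "io"
        "log"
        "net/http"

        "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
    )

    func main() {
        client := &http.Client{Transport: otelhttp.NewTransport(http.DefaultTransport)}

        handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            // Propagate the request context so the client span is parented
            // to the server span.
            req, _ := http.NewRequestWithContext(r.Context(), http.MethodGet,
                "https://api.example-processor.com/charge", nil) // placeholder
            resp, err := client.Do(req)
            if err != nil {
                http.Error(w, err.Error(), http.StatusBadGateway)
                return
            }
            defer resp.Body.Close()
            io.Copy(w, resp.Body)
        })
        log.Fatal(http.ListenAndServe(":8080", otelhttp.NewHandler(handler, "frontend")))
    }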
Question #: 263
Q: You are developing a new application. You want the application to be triggered only
when a given file is updated in your Cloud Storage bucket. Your trigger might change, so
your process must support different types of triggers. You want the configuration to be
simple so that multiple team members can update the triggers in the future. What should
you do?
A. Configure Cloud Storage events to be sent to Pub/Sub, and use Pub/Sub events to
trigger a Cloud Build job that executes your application.
B. Create an Eventarc trigger that monitors your Cloud Storage bucket for a
specific filename, and set the target as Cloud Run.
C. Configure a Cloud Function that executes your application and is triggered when an
object is updated in Cloud Storage.
D. Configure a Firebase function that executes your application and is triggered when an
object is updated in Cloud Storage.
Answer: B
Question #: 264
Q: You are defining your system tests for an application running in Cloud Run in a Google
Cloud project. You need to create a testing environment that is isolated from the production
environment. You want to fully automate the creation of the testing environment with the
least amount of effort and execute automated tests. What should you do?
A. Using Cloud Build, execute Terraform scripts to create a new Google Cloud
project and a Cloud Run instance of your application in the Google Cloud project.
B. Using Cloud Build, execute a Terraform script to deploy a new Cloud Run revision in
the existing Google Cloud project. Use traffic splitting to send traffic to your test
environment.
C. Using Cloud Build, execute gcloud commands to create a new Google Cloud project
and a Cloud Run instance of your application in the Google Cloud project.
D. Using Cloud Build, execute gcloud commands to deploy a new Cloud Run revision in
the existing Google Cloud project. Use traffic splitting to send traffic to your test
environment.
Answer: A
Question #: 265
Q: You are a cluster administrator for Google Kubernetes Engine (GKE). Your organization’s
clusters are enrolled in a release channel. You need to be informed of relevant events that
affect your GKE clusters, such as available upgrades and security bulletins. What should you
do?
Answer:
Question #: 266
Q: You are tasked with using C++ to build and deploy a microservice for an application
hosted on Google Cloud. The code needs to be containerized and use several custom
software libraries that your team has built. You do not want to maintain the underlying
infrastructure of the application. How should you deploy the microservice?
Answer: B
Question #: 267
Q: You need to containerize a web application that will be hosted on Google Cloud behind a
global load balancer with SSL certificates. You don’t have the time to develop authentication
at the application level, and you want to offload SSL encryption and management from your
application. You want to configure the architecture using managed services where possible.
What should you do?
A. Host the application on Google Kubernetes Engine, and deploy an NGINX Ingress
Controller to handle authentication.
B. Host the application on Google Kubernetes Engine, and deploy cert-manager to
manage SSL certificates.
C. Host the application on Compute Engine, and configure Cloud Endpoints for your
application.
D. Host the application on Google Kubernetes Engine, and use Identity-Aware
Proxy (IAP) with Cloud Load Balancing and Google-managed certificates.
Answer: D
Question #: 268
Q: You manage a system that runs on stateless Compute Engine VMs and Cloud Run
instances. Cloud Run is connected to a VPC, and the ingress setting is set to Internal. You
want to schedule tasks on Cloud Run. You create a service account and grant it the
roles/run.invoker Identity and Access Management (IAM) role. When you create a schedule
and test it, a 403 Permission Denied error is returned in Cloud Logging. What should you
do?
A. Grant the service account the roles/run.developer IAM role.
B. Configure a cron job on the Compute Engine VMs to trigger Cloud Run on schedule.
C. Change the Cloud Run ingress setting to 'Internal and Cloud Load Balancing.'
D. Use Cloud Scheduler with Pub/Sub to invoke Cloud Run.
Answer: D
Question #: 269
Q: You work on an application that relies on Cloud Spanner as its main datastore. New
application features have occasionally caused performance regressions. You want to
prevent performance issues by running an automated performance test with Cloud Build for
each commit made. If multiple commits are made at the same time, the tests might run
concurrently. What should you do?
A. Create a new project with a random name for every build. Load the required data.
Delete the project after the test is run.
B. Create a new Cloud Spanner instance for every build. Load the required data.
Delete the Cloud Spanner instance after the test is run.
C. Create a project with a Cloud Spanner instance and the required data. Adjust the
Cloud Build build file to automatically restore the data to its previous state after the test
is run.
D. Start the Cloud Spanner emulator locally. Load the required data. Shut down the
emulator after the test is run.
Answer: B
Question #: 270
Q: Your company's security team uses Identity and Access Management (IAM) to track
which users have access to which resources. You need to create a version control system
that can integrate with your security team's processes. You want your solution to support
fast release cycles and frequent merges to your main branch to minimize merge conflicts.
What should you do?
Answer: A
Question #: 271
Q: You recently developed an application that monitors a large number of stock prices. You
need to configure Pub/Sub to receive messages and update the current stock price in an in-
memory database. A downstream service needs the most up-to-date prices in the in-
memory database to perform stock trading transactions. Each message contains three
pieces of information:
• Stock symbol
• Stock price
• Timestamp for the update
Answer: C
Question #: 272
Q: You are a developer at a social media company. The company runs their social media
website on-premises and uses MySQL as a backend to store user profiles and user posts.
Your company plans to migrate to Google Cloud, and your team will migrate user profile
information to Firestore. You are tasked with designing the Firestore collections. What
should you do?
A. Create one root collection for user profiles, and create one root collection for user
posts.
B. Create one root collection for user profiles, and create one subcollection for
each user's posts.
C. Create one root collection for user profiles, and store each user's post as a nested list
in the user profile document.
D. Create one root collection for user posts, and create one subcollection for each user's
profile.
Answer: B
Question #: 273
Q: Your team recently deployed an application on Google Kubernetes Engine (GKE). You are
monitoring your application and want to be alerted when the average memory consumption
of your containers is under 20% or above 80%. How should you configure the alerts?
A. Create a Cloud Function that consumes the Monitoring API. Create a schedule to
trigger the Cloud Function hourly and alert you if the average memory consumption is
outside the defined range.
B. In Cloud Monitoring, create an alerting policy to notify you if the average
memory consumption is outside the defined range.
C. Create a Cloud Function that runs on a schedule, executes kubectl top on all the
workloads on the cluster, and sends an email alert if the average memory consumption
is outside the defined range.
D. Write a script that pulls the memory consumption of the instance at the OS level and
sends an email alert if the average memory consumption is outside the defined range.
Answer: B
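For reference, a sketch of answer B created programmatically (Python, google-cloud-monitoring; the metric filter and the five-minute duration are assumptions, and the same policy can be built in the Cloud Monitoring UI):

  from google.cloud import monitoring_v3

  def condition(name, comparison, threshold):
      return monitoring_v3.AlertPolicy.Condition(
          display_name=name,
          condition_threshold=monitoring_v3.AlertPolicy.Condition.MetricThreshold(
              # Assumed metric: per-container memory utilization (0.0-1.0).
              filter=(
                  'resource.type = "k8s_container" AND '
                  'metric.type = "kubernetes.io/container/memory/limit_utilization"'
              ),
              comparison=comparison,
              threshold_value=threshold,
              duration={"seconds": 300},
          ),
      )

  client = monitoring_v3.AlertPolicyServiceClient()
  policy = monitoring_v3.AlertPolicy(
      display_name="Container memory outside 20-80% band",
      combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.OR,  # either bound fires
      conditions=[
          condition("memory above 80%", monitoring_v3.ComparisonType.COMPARISON_GT, 0.8),
          condition("memory below 20%", monitoring_v3.ComparisonType.COMPARISON_LT, 0.2),
      ],
  )
  client.create_alert_policy(name="projects/my-project", alert_policy=policy)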
Question #: 274
Q: You manage a microservice-based ecommerce platform on Google Cloud that sends
confirmation emails to a third-party email service provider using a Cloud Function. Your
company just launched a marketing campaign, and some customers are reporting that they
have not received order confirmation emails. You discover that the services triggering the
Cloud Function are receiving HTTP 500 errors. You need to change the way emails are
handled to minimize email loss. What should you do?
Answer: B
Question #: 275
Q: You have a web application that publishes messages to Pub/Sub. You plan to build new
versions of the application locally and need to quickly test Pub/Sub integration for each
new build. How should you configure local testing?
A. In the Google Cloud console, navigate to the API Library, and enable the Pub/Sub API.
When developing locally configure your application to call pubsub.googleapis.com.
B. Install the Pub/Sub emulator using gcloud, and start the emulator with a valid
Google Project ID. When developing locally, configure your application to use the
local emulator by exporting the PUBSUB_EMULATOR_HOST variable.
C. Run the gcloud config set api_endpoint_overrides/pubsub
https://pubsubemulator.googleapis.com/ command to change the Pub/Sub
endpoint prior to starting the application.
D. Install Cloud Code on the integrated development environment (IDE). Navigate to
Cloud APIs, and enable Pub/Sub against a valid Google Project ID. When developing locally,
configure your application to call pubsub.googleapis.com.
Answer: B
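For reference, a minimal sketch of answer B (Python). After starting the emulator with gcloud beta emulators pubsub start --project=my-project, exporting PUBSUB_EMULATOR_HOST makes the client library target the emulator instead of the live API:

  import os

  from google.cloud import pubsub_v1

  # With this variable set (e.g. via `gcloud beta emulators pubsub env-init`),
  # the client connects to the local emulator instead of pubsub.googleapis.com.
  # It must be set before the client is created.
  os.environ["PUBSUB_EMULATOR_HOST"] = "localhost:8085"

  publisher = pubsub_v1.PublisherClient()
  topic_path = publisher.topic_path("my-project", "my-topic")
  publisher.create_topic(request={"name": topic_path})
  future = publisher.publish(topic_path, b"test message")
  print(future.result())  # message ID assigned by the emulator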
Question #: 276
Q: You recently developed an application that monitors a large number of stock prices. You
need to configure Pub/Sub to receive a high volume of messages and update the current stock
price in a single large in-memory database. A downstream service needs the most up-to-
date prices in the in-memory database to perform stock trading transactions. Each message
contains three pieces of information:
• Stock symbol
• Stock price
• Timestamp for the update
Answer:
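The answer options for this question are not reproduced above, but a common fit for keeping the latest price per symbol is Pub/Sub ordering keys, which deliver messages that share a key in publish order. A sketch (Python; the project and topic IDs are placeholders), with the caveat that the subscription must also be created with message ordering enabled:

  import json

  from google.cloud import pubsub_v1

  publisher = pubsub_v1.PublisherClient(
      # Ordering must be enabled explicitly on the publisher.
      publisher_options=pubsub_v1.types.PublisherOptions(enable_message_ordering=True)
  )
  topic_path = publisher.topic_path("my-project", "stock-prices")

  def publish_price(symbol, price, timestamp):
      data = json.dumps(
          {"symbol": symbol, "price": price, "timestamp": timestamp}
      ).encode("utf-8")
      # All updates for one symbol share an ordering key, so the subscriber
      # applies them to the in-memory database in publish order.
      publisher.publish(topic_path, data, ordering_key=symbol)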
Question #: 277
Q: Your team has created an application that is hosted on a Google Kubernetes Engine (GKE)
cluster. You need to connect the application to a legacy REST service that is deployed in two
GKE clusters in two different regions. You want to connect your application to the legacy
service in a way that is resilient and requires the fewest number of steps. You also want to
be able to run probe-based health checks on the legacy service on a separate port. How
should you set up the connection? (Choose two.)
A. Use Traffic Director with a sidecar proxy to connect the application to the
service.
B. Set up a proxyless Traffic Director configuration for the application.
C. Configure the legacy service's firewall to allow health checks originating from
the sidecar proxy.
D. Configure the legacy service's firewall to allow health checks originating from the
application.
E. Configure the legacy service's firewall to allow health checks originating from the
Traffic Director control plane.
Answer: A C
Question #: 278
Q: You are monitoring a web application that is written in Go and deployed in Google
Kubernetes Engine. You notice an increase in CPU and memory utilization. You need to
determine which function is consuming the most CPU and memory resources. What should
you do?
A. Add print commands to the application source code to log when each function is
called, and redeploy the application.
B. Create a Cloud Logging query that gathers the web application's logs. Write a Python
script that calculates the difference between the timestamps from the beginning and the
end of the application's longest functions to identify time-intensive functions.
C. Import OpenTelemetry and Trace export packages into your application, and create
the trace provider. Review the latency data for your application on the Trace overview
page, and identify which functions cause the most latency.
D. Import the Cloud Profiler package into your application, and initialize the
Profiler agent. Review the generated flame graph in the Google Cloud console to
identify time-intensive functions.
Answer: D
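The service in this question is written in Go, but the Profiler agent setup is similar across supported languages; for comparison, a minimal sketch of the Python agent (google-cloud-profiler package), with an assumed service name:

  import googlecloudprofiler

  # Start the Profiler agent once at application startup. CPU and heap
  # profiles are then collected continuously and rendered as a flame
  # graph in the Google Cloud console.
  googlecloudprofiler.start(
      service="web-frontend",   # hypothetical service name
      service_version="1.0.0",
      # project_id is only needed when running outside Google Cloud.
  )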
Question #: 279
Q: You are developing a flower ordering application. Currently you have three
microservices:
• Order Service (receives the orders)
• Order Fulfillment Service (processes the orders)
• Notification Service (notifies the customer when the order is filled)
You need to determine how the services will communicate with each other. You want
incoming orders to be processed quickly and you need to collect order information for
fulfillment. You also want to make sure orders are not lost between your services and are
able to communicate asynchronously. How should the requests be processed?
A.
B.
C.
D.
Answer: D
Question #: 280
Q: You recently deployed an application to GKE where Pods are writing files to a Compute
Engine persistent disk. You have created a PersistentVolumeClaim (PVC) and a
PersistentVolume (PV) object on Kubernetes for the disk, and you reference the PVC in the
deployment manifest file.
You recently expanded the size of the persistent disk because the application has used up
almost all of the disk space. You have logged on to one of the Pods, and you notice that the
disk expansion is not visible in the container file system. What should you do?
A. Set the spec.capacity.storage value of the PV object to match the size of the persistent
disk. Apply the updated configuration by using kubectl.
B. Recreate the application Pods by running the kubectl delete deployment
DEPLOYMENT_NAME && kubectl apply -f deployment.yaml command, where the
DEPLOYMENT_NAME parameter is the name of your deployment and deployment.yaml
is its manifest file.
C. Set the spec.resources.requests.storage value of the PVC object to match the
size of the persistent disk. Apply the updated configuration by using kubectl.
D. In the Pod, resize the disk partition to the maximum value by using the fdisk or
parted utility.
Answer: C
Question #: 281
Q: You work for an ecommerce company. You are designing a new Orders API that will be
exposed through Apigee. In your Apigee organization, you created two new environments
named orders-test and orders-prod. You plan to use unique URLs named
test.lnk-42.com/api/v1/orders and lnk-42.com/api/v1/orders for each environment. You
need to ensure that each environment only uses the assigned URL. What should you do?
Answer: D
Question #: 282
Q: You are developing an application that uses microservices architecture that includes
Cloud Run, Bigtable, and Pub/Sub. You want to conduct the testing and debugging process
as quickly as possible to create a minimally viable product with minimal cost. What should
you do?
A. Use Cloud Shell Editor and Cloud Shell to deploy the application, and test the
functionality by using the Google Cloud console in the project.
B. Use emulators to test the functionality of cloud resources locally, and deploy
the code to your Google Cloud project.
C. Use Cloud Build to create a pipeline, and add the unit testing stage and the manual
approval stage. Deploy the code to your Google Cloud project.
D. Use Cloud Code to develop, deploy, and test microservices resources. Use Cloud
Logging to review the resource logs.
Answer: B
Question #: 283
Q: You are a lead developer at an organization that recently integrated several Google Cloud
services. These services are located within Virtual Private Cloud (VPC) environments that
are secured with VPC Service Controls and Private Service Connect endpoints. Developers
across your organization use different operating systems, development frameworks, and
integrated development environments (IDEs). You need to recommend a developer
environment that will ensure consistency in the developer process and improve the overall
developer experience. You want this solution to:
A. Use Cloud Workstations, and allow developers to create their own custom images.
B. Use Cloud Workstations with preconfigured base images. For custom tools and
utilities, use custom images that are rebuilt weekly.
C. Use the Cloud Code extension with the IDEs that are used across the organization.
Configure Cloud VPN to enable VPC access.
D. Use the Cloud Code extension with the IDEs that are used across the organization. Use
Identity-Aware Proxy to enable access to the services in the VPC.
Answer: B
Question #: 284
Q: You are preparing to conduct a load test on your Cloud Run service by using JMeter. You
need to orchestrate the steps and services to use for an effective load test and analysis. You
want to follow Google-recommended practices. What should you do?
A. Install JMeter on your local machine, create a log sink to BigQuery, and use Looker to
analyze the results.
B. Set up a Compute Engine instance, install JMeter on the instance, create a log sink to a
Cloud Storage bucket, and use Looker Studio to analyze the results.
C. Set up a Compute Engine instance, install JMeter on the instance, create a log sink to a
Cloud Storage bucket, and use Looker to analyze the results.
D. Set up a Compute Engine instance, install JMeter on the instance, create a log
sink to BigQuery, and use Looker Studio to analyze the results.
Answer: D
Question #: 285
Q: You are designing a Node.js-based mobile news feed application that stores data on
Google Cloud. You need to select the application's database. You want the database to have
zonal resiliency out of the box, low latency responses, ACID compliance, an optional middle
tier, semi-structured data storage, and network-partition-tolerant and offline-mode client
libraries. What should you do?
A. Configure Firestore and use the Firestore client library in the app.
B. Configure Bigtable and use the Bigtable client in the app.
C. Configure Cloud SQL and use the Google Client Library for Cloud SQL in the app.
D. Configure BigQuery and use the BigQuery REST API in the app.
Answer: A
Question #: 286
Q: You are developing an application component to capture user behavior data and stream
the data to BigQuery. You plan to use the BigQuery Storage Write API. You need to ensure
that the data that arrives in BigQuery does not have any duplicates. You want to use the
simplest operational method to achieve this. What should you do?
Answer: B
Question #: 287
Q: You maintain a popular mobile game deployed on Google Cloud services that include
Firebase, Firestore, and Cloud Functions. Recently, the game experienced a surge in usage,
and the application encountered HTTP 429 RESOURCE_EXHAUSTED errors when accessing
the Firestore API. The application has now stabilized. You want to quickly fix this issue
because your company has a marketing campaign next week and you expect another surge
in usage. What should you do?
A. Request a quota increase, and modify the application code to retry the Firestore API
call with fixed backoff.
B. Request a quota increase, and modify the application code to retry the Firestore API
call with exponential backoff.
C. Optimize database queries to reduce read/write operations, and modify the
application code to retry the Firestore API call with fixed backoff.
D. Optimize database queries to reduce read/write operations, and modify the
application code to retry the Firestore API call with exponential backoff.
Answer: D
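For reference, a minimal sketch of the retry behavior in answer D (Python; the backoff loop is hand-rolled here for clarity, and the collection name is illustrative):

  import random
  import time

  from google.api_core import exceptions
  from google.cloud import firestore

  db = firestore.Client()

  def get_profile_with_backoff(user_id, max_attempts=5):
      delay = 1.0
      for attempt in range(max_attempts):
          try:
              return db.collection("profiles").document(user_id).get()
          except exceptions.ResourceExhausted:  # HTTP 429
              if attempt == max_attempts - 1:
                  raise
              # Exponential backoff with jitter spreads out retries so
              # clients do not retry in lockstep after a quota spike.
              time.sleep(delay + random.uniform(0, 0.5))
              delay *= 2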
Question #: 288
Q: You are developing a mobile application that allows users to create and manage to-do
lists. Your application has the following requirements:
You need to implement a database solution while minimizing operational effort. Which
approach should you use?
A. Create a Cloud SQL for MySQL instance. Implement a data model to store to-do list
information. Create indexes for the most heavily and frequently used queries.
B. Create a Bigtable instance. Design a database schema to avoid hotspots when writing
data. Use a Bigtable change stream to capture data changes.
C. Use Firestore as the database. Configure Firestore offline persistence to cache a copy
of the Firestore data. Listen to document changes to update applications whenever
there are document changes.
D. Implement a SQLite database on each user's device. Use a scheduled job to
synchronize each device database with a copy stored in Cloud Storage.
Answer:
Question #: 289
Q: You manage an application deployed on GKE clusters across multiple environments. You
are using Cloud Build to run user acceptance testing (UAT) tests. You have integrated Cloud
Build with Artifact Analysis, and enabled the Binary Authorization API in all Google Cloud
projects hosting your environments. You want only container images that have passed
certain automated UAT tests to be deployed to the production environment. You have
already created an attestor. What should you do next?
A. After the UAT phase, sign the attestation with a key stored as a Kubernetes secret.
Add a GKE cluster-specific rule in Binary Authorization for the UAT Google Cloud
project.
B. After the UAT phase, sign the attestation with a key stored as a Kubernetes secret.
Add a GKE cluster-specific rule in Binary Authorization for the production Google Cloud
project policy.
C. After the UAT phase, sign the attestation with a key stored in Cloud Key Management
Service (KMS). Add a default rule in Binary Authorization for the UAT Google Cloud
project.
D. After the UAT phase, sign the attestation with a key stored in Cloud Key
Management Service (KMS). Add a GKE cluster-specific rule in Binary
Authorization for the production Google Cloud project policy.
Answer: D
Question #: 290
Q: You work for a company that operates an ecommerce website. You are developing a new
integration that will manage all order fulfillment steps after orders are placed. You have
created multiple Cloud Functions to process each order. You need to orchestrate the
execution of the functions, using the output of each function to determine the flow. You
want to minimize the latency of this process. What should you do?
A. Use Workflows to call the functions, and use callbacks to handle the execution logic.
B. Use Workflows to call the functions, and use conditional jumps to handle the
execution logic.
C. Use Cloud Composer to call the functions, and use an Apache Airflow HTTP operator
to handle the execution logic.
D. Use Cloud Composer to call the functions, and use an Apache Airflow operator to
handle the execution logic.
Answer: B
Question #: 291
Q: You are currently pushing container images to Artifact Registry and deploying a
containerized microservices application to GKE. After deploying the application, you notice
that the services do not behave as expected. You use the kubectl get pods command to
inspect the state of the application Pods, and discover that one of the Pods has a state of
CrashLoopBackOff. How should you troubleshoot the Pod?
A. Connect to the problematic Pod by running the kubectl exec -it POD_NAME --
/bin/bash command where the POD_NAME parameter is the name of the problematic
Pod. Inspect the logs in the /var/log/messages folder to determine the root cause.
B. Execute the gcloud projects get-iam-policy PROJECT_ID command where the
PROJECT_ID parameter is the name of the project where your Artifact Registry resides.
Inspect the IAM bindings of the node pool's service account. Validate if the service
account has the roles/artifactregistry.reader role.
C. Run the kubectl logs POD_NAME command where the POD_NAME parameter is
the name of the problematic Pod. Analyze the logs of the Pod from previous runs
to determine the root cause of failed start attempts of the Pod.
D. In the Google Cloud console, navigate to Cloud Logging in the project of the cluster’s
VPC. Enter a filter to show denied egress traffic to the Private Google Access CIDR range.
Validate if egress traffic is denied from your GKE cluster to the Private Google Access
CIDR range.
Answer: C
Question #: 292
Q: You use Cloud Build to build and test container images prior to deploying them to Cloud
Run. Your images are stored in Artifact Registry. You need to ensure that only container
images that have passed testing are deployed. You want to minimize operational overhead.
What should you do?
A. Deploy a new revision to a Cloud Run service. Assign a tag that allows access to the
revision at a specific URL without serving traffic. Test that revision again. Migrate the
traffic to the Cloud Run service after you confirm that the new revision is performing as
expected.
B. Enable Binary Authorization on your Cloud Run service. Create an attestation if
the container image has passed all tests. Configure Binary Authorization to allow
only images with appropriate attestation to be deployed to the Cloud Run service.
C. Create a GKE cluster. Verify that all tests have passed, and then deploy the image to
the GKE cluster.
D. Configure build provenance on your Cloud Build pipeline. Verify that all the tests have
passed, and then deploy the image to a Cloud Run service.
Answer: B
Question #: 293
Q: You are developing a scalable web application for internal users. Your organization uses
Google Workspace. You need to set up authentication to the application for the users, and
then deploy the application on Google Cloud. You plan to use cloud-native features, and you
want to minimize infrastructure management effort. What should you do? (Choose two.)
A. Create a Compute Engine VM, configure a web server, and deploy the application in a
VPC.
B. Containerize the application, and deploy it as a Cloud Run service.
C. Configure Cloud SQL database with a table containing the users and password hashes.
Add an authentication screen to ensure that only internal users can access the
application.
D. Configure Identity Aware Proxy, and grant the
roles/iap.httpsResourceAccessor IAM role to the users that need to access the
application.
E. Configure Identity Aware Proxy, and grant the roles/iap.tunnelResourceAccessor IAM
role to the users that need to access the application.
Answer: B D
Question #: 294
Q: You work for an ecommerce company, and you are responsible for deploying and
managing multiple APIs. The operations team wants to review the traffic patterns in the
orders-prod and users-prod environments. These are the only environments in the store-
prod environment group. You want to follow Google-recommended practices. What should
you do?
A. Assign the Apigee Analytics Viewer IAM role to the operations team for both
environments. Use Cloud Monitoring to review traffic patterns.
B. Assign the Apigee Analytics Viewer IAM role to the operations team for both
environments. Use Apigee API Analytics to review traffic patterns.
C. Assign the Apigee API Reader IAM role to each user of the operations team for both
environments. Use Cloud Monitoring to review traffic patterns.
D. Assign the Apigee API Reader IAM role to each user of the operations team for both
environments. Use Apigee API Analytics to review traffic patterns.
Answer: B
Question #: 295
Q: You are migrating a containerized application to Cloud Run. You plan to use Cloud Build
to build your container image and push it to Artifact Registry, and you plan to use Cloud
Deploy to deploy the image to production. You need to ensure that only secure images are
deployed to production. What should you do?
A. Use Cloud Armor in front of Cloud Run to protect the container image from threats.
B. Use Artifact Analysis to scan the image for vulnerabilities. Use Cloud Key Management
Service to encrypt the image to be deployed to production.
C. Use Secret Manager to store the encrypted image. Deploy this image to production.
D. Use Binary Authorization to enforce a policy that only allows images that have been
signed with a trusted key to be deployed to production.
Answer:
Question #: 296
Q: Your team uses Cloud Storage for a video and image application that was recently
migrated to Google Cloud. Following a viral surge, users are reporting application
instability, coinciding with a 10x increase in HTTP 429 error codes from Cloud Storage APIs.
You need to resolve the errors and establish a long-term solution. You want to ensure that
the application remains stable if the load increases again in the future. What should you do?
A. Optimize the application code to reduce unnecessary calls to Cloud Storage APIs to
prevent HTTP 429 errors.
B. Compress the video and images files to reduce their size, and minimize storage costs
and bandwidth usage. Implement a custom throttling mechanism in the application that
limits the number of concurrent API calls.
C. Migrate all image and video data to Firestore. Replace the Cloud Storage APIs in the
application code with the new Firestore database.
D. Implement a retry strategy with exponential backoff for requests that
encounter HTTP 429 errors.
Answer: D
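For reference, a sketch of answer D using the retry helper that the Python client libraries already accept (the bucket and object names are placeholders). HTTP 429 surfaces as google.api_core.exceptions.TooManyRequests:

  from google.api_core import exceptions, retry
  from google.cloud import storage

  # Retry 429s with exponential backoff: 1s, 2s, 4s, ... capped at 60s,
  # for at most 5 minutes overall.
  retry_429 = retry.Retry(
      predicate=retry.if_exception_type(exceptions.TooManyRequests),
      initial=1.0,
      maximum=60.0,
      multiplier=2.0,
      timeout=300.0,
  )

  client = storage.Client()
  blob = client.bucket("media-bucket").blob("videos/clip.mp4")
  data = blob.download_as_bytes(retry=retry_429)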
Question #: 297
Q: You are developing a container build pipeline for an application hosted on GKE. You have
the following requirements:
• Only images that are created using your build pipeline should be deployed on your GKE
cluster.
• All code and build artifacts should remain within your environment and protected from
data exfiltration.
A. 1. Create a build pipeline by using Cloud Build with the default worker pool.
2. Deploy container images to a private container registry in your VPC.
3. Create a VPC firewall policy in your project that denies all egress and ingress traffic to
public networks.
B. 1. Create a build pipeline by using Cloud Build with a private worker pool.
2. Use VPC Service Controls to place all components and services in your CI/CD
pipeline inside a security perimeter.
3. Configure your GKE cluster to only allow container images signed by Binary
Authorization.
C. 1. Create a build pipeline by using Cloud Build with a private worker pool.
2. Configure the CI/CD pipeline to build container images and store them in Artifact
Registry.
3. Configure Artifact Registry to encrypt container images by using customer-managed
encryption keys (CMEK).
D. 1. Create a build pipeline by using Cloud Build with the default worker pool.
2. Configure the CI/CD pipeline to build container images and store them in Artifact
Registry.
3. Configure your GKE cluster to only allow container images signed by Binary
Authorization.
Answer: B
Question #: 298
Q: You are a developer at a company that operates an ecommerce website. The website
stores the customer order data in a Cloud SQL for PostgreSQL database. Data scientists on
the marketing team access this data to run their reports. Every time they run these reports,
the website's performance is negatively affected. You want to provide access to up-to-date
customer order datasets without affecting your website. What should you do?
A. Configure Cloud Scheduler to run an hourly Cloud Function that exports the data from
the Cloud SQL database into CSV format and sends the data to a Cloud Storage bucket.
B. Set up a Bigtable table for the data science team. Configure the application to perform
dual writes to both Cloud SQL and Bigtable simultaneously.
C. Set up a BigQuery dataset for the data science team. Configure Datastream to
replicate the relevant Cloud SQL tables in BigQuery.
D. Create a clone of the PostgreSQL database instance for the data science team.
Schedule a job to create a new clone every 15 minutes.
Answer: C
Question #: 299
Q: You are developing a web application by using Cloud Run and Cloud Storage. You are
notified of a production issue that you need to troubleshoot immediately. You need to
implement a workaround that requires you to execute a script on a Git repository. Your
corporate laptop is unavailable but you have your personal computer. You can use your
corporate credentials to access the required Git repository and Google Cloud resources. You
want to fix the issue as quickly and efficiently as possible while minimizing additional cost.
What should you do?
Answer: C
Question #: 300
Q: You are using App Engine and Cloud SQL for PostgreSQL to develop an application. You
want to test your application code locally before deploying new application versions to the
development environment that is shared with other developers. You need to set up your
App Engine local development environment to test your application while keeping all traffic
to Cloud SQL instances encrypted and authenticated to Cloud IAM and PostgreSQL. What
should you do before starting the local development server?
Answer:
Question #: 301
Q: You are developing a public web application on Cloud Run. You expose the Cloud Run
service directly with its public IP address. You are now running a load test to ensure that
your application is resilient against high traffic loads. You notice that your application
performs as expected when you initiate light traffic. However, when you generate high
loads, your web server runs slowly and returns error messages. How should you
troubleshoot this issue?
A. Check the network traffic to Cloud Run in Cloud Monitoring to validate whether a
traffic spike occurred. If necessary, enable traffic splitting on the Cloud Run instance to
route some of the traffic to a previous instance revision.
B. Check the min-instances value for your Cloud Run service. If necessary, increase the
min-instances value to match the maximum number of virtual users in your load test.
C. Check whether Cloud Armor is detecting distributed denial of service (DDoS) attacks
and is blocking traffic before the traffic is routed to your Cloud Run service. If necessary,
disable any Cloud Armor policies in your project.
D. Check whether the Cloud Run service has scaled to a number of instances that
equals the max-instances value. If necessary, increase the max-instances value.
Answer: D
Question #: 302
Q: You are developing a new image processing application that needs to handle various
tasks, such as resizing, cropping, and watermarking images. You also need to monitor the
workflow and ensure that it scales efficiently when there are large volumes of images. You
want to automate the image processing tasks and workflow monitoring with the least effort.
What should you do?
A. Employ Cloud Composer to manage the image processing workflows. Use Dataproc
for workflow monitoring and analytics.
B. Use Cloud Run to deploy the image processing functions. Use Apigee to expose the
API. Use Cloud Logging for workflow monitoring.
C. Implement Workflows to orchestrate the image processing tasks. Use Cloud
Logging for workflow monitoring.
D. Use Cloud Build to trigger Cloud Functions for the image processing tasks. Use Cloud
Monitoring for workflow monitoring.
Answer: C
Question #: 303
Q: You are developing a web application that will be deployed to production on Cloud Run.
The application consists of multiple microservices, some of which will be publicly accessible
and others that will only be accessible after authentication by Google identities. You need to
ensure that only authenticated users can access the restricted services, while allowing
unrestricted access to the public services of the application. You want to use the most
secure approach while minimizing management overhead and complexity. How should you
configure access?
A. Enable Identity-Aware Proxy (IAP) for all microservices. Develop a new microservice
that checks the authentication requirements for each application and controls access to
the respective services.
B. Enable Identity-Aware Proxy (IAP) for all microservices. Manage access control lists
(ACLs) for the restricted services, and configure allAuthenticatedUsers access to the
public services.
C. Use Cloud Endpoints with Firebase Authentication for all microservices. Configure
Firebase rules to manage access control lists (ACLs) for each service, allowing access to
the public services.
D. Configure separate Cloud Run services for the public and restricted
microservices. Enable Identity-Aware Proxy (IAP) only for the restricted services,
and configure the Cloud Run ingress settings to ‘Internal and Cloud Load
Balancing’.
Answer: D
Question #: 304
Q: You are the lead developer for a company that provides a financial risk calculation API.
The API is built on Cloud Run and has a gRPC interface. You frequently develop
optimizations to the risk calculators. You want to enable these optimizations for select
customers who registered to try out the optimizations prior to rolling out the optimization
to all customers. Your CI/CD pipeline has built a new image and stored it in the Artifact
Registry.
A. Migrate the traffic to the new service by setting Cloud Run’s traffic split based on the
percentage of registered customers.
B. Migrate the traffic to the new service by using a blue/green deployment approach.
C. Migrate the traffic to the new service by using a feature flag for registered
customers.
D. Migrate the traffic to the new service and enable session affinity for Cloud Run.
Answer: C
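For reference, a minimal sketch of the feature flag in answer C (Python; the opt-in list and the two calculator functions are stand-ins for your flag store and real implementations):

  REGISTERED_CUSTOMERS = {"cust-123", "cust-456"}  # hypothetical opt-in list

  def stable_risk_calculator(payload):
      return {"risk": 0.42, "engine": "stable"}

  def optimized_risk_calculator(payload):
      return {"risk": 0.42, "engine": "optimized"}

  def calculate_risk(customer_id, payload):
      # The flag is evaluated per request, so the optimization can be
      # turned on for opted-in customers without a separate deployment
      # or a Cloud Run traffic split.
      if customer_id in REGISTERED_CUSTOMERS:
          return optimized_risk_calculator(payload)
      return stable_risk_calculator(payload)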
Question #: 305
Q: Your ecommerce application has a rapidly growing user base, and it is experiencing
performance issues due to excessive requests to your backend API. Your team develops and
manages this API. The Cloud SQL backend database is struggling to handle the high demand,
leading to latency and timeouts. You need to implement a solution that optimizes API
performance and improves user experience. What should you do?
A. Use Apigee to expose your API. Use Memorystore for Redis to cache frequently
accessed data. Implement exponential backoff in the application to retry failed
requests.
B. Use Apigee to expose your API. Implement rate limiting and access control policies in
Apigee to control API traffic. Use Pub/Sub to queue requests to prevent database
overload.
C. Use Cloud Load Balancing to expose your API. Use Cloud CDN in front of the load
balancer to cache responses. Implement exponential backoff to retry failed requests.
D. Use Cloud Load Balancing to expose your API. Increase the memory for the database
instances to handle more concurrent requests. Implement a custom rate-limiting
mechanism in your application code to control API requests.
Answer: A
Question #: 306
Q: You need to deploy a new feature into production on Cloud Run. Your company’s SRE
team mandates gradual deployments to avoid large downtimes caused by code change
errors. You want to configure this deployment with minimal effort. What should you do?
A. Configure the application’s frontend load balancer to toggle between the new and old
revisions.
B. Configure the application code to send a small percentage of users to the newly
deployed revision.
C. Deploy the feature with “Serve this revision immediately” unchecked, and
configure the new revision to serve a small percentage of traffic. Check for errors,
and increase traffic to the revision as appropriate.
D. Deploy the feature with “Serve this revision immediately” checked. Check for errors,
roll back to the previous revision, and repeat the process until you have verified that the
deployment is bug-free.
Answer: C
Question #: 307
Q: You are developing an external-facing application on GKE that provides a streaming API
to users. You want to offer two subscription tiers, "basic" and "premium", based on
the number of API requests that each client application is allowed to make each day. You
want to design the application architecture to provide subscription tiers to users while
following Google-recommended practices. What should you do?
Answer: A
Question #: 308
Q: Your organization has users and groups configured in an external identity provider (IdP).
You want to leverage the same external IdP to allow Google Cloud console access to all
employees. You also want to personalize the sign-in experience by displaying the user's
name and photo when users access the Google Cloud console. What should you do?
A. Configure workforce identity federation with the external IdP, and set up
attribute mapping.
B. Configure a service account for each individual by using the user name and photo, and
grant permissions for each user to impersonate their respective service accounts.
C. Configure workload identity federation to get the external IdP tokens, and use these
tokens to sign in to the Google Cloud console.
D. Create a Google group that includes organization email IDs for all users. Ask users to
use the same name, work email ID, and password to register and sign in.
Answer: A
Question #: 309
Q: You are developing a new API that creates requests on an asynchronous message service.
Requests will be consumed by different services. You need to expose the API by using a
gRPC interface while minimizing infrastructure management overhead. How should you
deploy the API?
A. Deploy your API to App Engine. Create a Pub/Sub topic, and configure your API to
push messages to the topic.
B. Deploy your API as a Cloud Run service. Create a Pub/Sub topic, and configure
your API to push messages to the topic.
C. Deploy your API to a GKE cluster. Create a Kafka cluster, and configure your API to
write messages to the cluster.
D. Deploy your API on a Compute Engine instance. Create a Kafka cluster, and configure
your API to write messages to the cluster.
Answer: B
Question #: 310
Q: You are about to deploy an application hosted on a Compute Engine instance with
Windows OS and Cloud SQL. You plan to use the Cloud SQL Auth Proxy for connectivity to
the Cloud SQL instance. You plan to follow Google-recommended practices and the principle
of least privilege. You have already created a custom service account. What should you do
next?
Answer: A
Question #: 311
Q: You are developing a secure document sharing platform. The platform allows users to
share documents with other users who may be external to their organization. Access to
these documents should be revoked after a configurable time period. The documents are
stored in Cloud Storage. How should you configure Cloud Storage to support this
functionality?
Answer: C
Question #: 312
Q: You work for an environmental agency in a large city. You are developing a new
monitoring platform that will capture air quality readings from thousands of locations in
the city. You want the air quality reading devices to send and receive their data payload to
the newly created RESTful backend systems every minute by using a curl command. The
backend systems are running in a single cloud region and are using Premium Tier
networking. You need to connect the devices to the backend while minimizing the daily
average latency, measured by using Time to First Byte (TTFB). How should you build this
service?
Answer: D
Question #: 313
Q: Your infrastructure team is responsible for creating and managing Compute Engine VMs.
Your team uses the Google Cloud console and gcloud CLI to provision resources for the
development environment. You need to ensure that all Compute Engine VMs are labeled
correctly for compliance reasons. In case of missing labels, you need to implement
corrective actions so the labels are configured accordingly without changing the current
deployment process. You want to use the most scalable approach. What should you do?
A. Use a Cloud Audit Logs trigger to invoke a Cloud Function when a Compute
Engine VM is created. Check for missing labels and assign them if necessary.
B. Deploy resources with Terraform. Use the gcloud terraform vet command with a
policy to ensure that every Compute Engine VM that is provisioned by Terraform has
labels set.
C. Write a script to check all Compute Engine VMs for missing labels regularly by using
Cloud Scheduler. Use the script to assign the labels.
D. Check all Compute Engine VMs for missing labels regularly. Use the console to assign
the labels.
Answer: A
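For reference, a sketch of the remediation step in answer A (Python, google-cloud-compute; the required label and its default value are assumptions, and parsing the triggering Audit Log event into project, zone, and instance name is omitted):

  from google.cloud import compute_v1

  REQUIRED_LABELS = {"cost-center": "unknown"}  # hypothetical defaults

  def enforce_labels(project, zone, instance_name):
      client = compute_v1.InstancesClient()
      instance = client.get(project=project, zone=zone, instance=instance_name)
      missing = {k: v for k, v in REQUIRED_LABELS.items()
                 if k not in dict(instance.labels)}
      if not missing:
          return
      request = compute_v1.InstancesSetLabelsRequest(
          label_fingerprint=instance.label_fingerprint,  # required for updates
          labels={**dict(instance.labels), **missing},
      )
      client.set_labels(project=project, zone=zone, instance=instance_name,
                        instances_set_labels_request_resource=request)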
Question #: 314
Q: You are developing a discussion portal that is built on Cloud Run. Incoming external
requests are routed through a set of microservices before a response is sent. Some of these
microservices connect to databases. You need to run a load test to identify any bottlenecks
in the application when it is under load. You want to follow Google-recommended practices.
What should you do?
A. Modify the response to include a time series that shows elapsed time per service. Use
Log Analytics in Cloud Logging to create a heatmap that exposes any service that could
be a bottleneck.
B. Configure Cloud Trace to capture the requests from the load testing clients.
Review the timings in Cloud Trace.
C. Expose the latency metrics per service for each request. Configure Google Cloud
Managed Service for Prometheus, and use it to scrape and analyze the metrics.
D. Add log statements that capture elapsed time. Analyze the logs and metrics by using
BigQuery.
Answer: B
Question #: 315
Q: Your team currently uses Bigtable as their database backend. In your application's app
profile, you notice that the connection to the Bigtable cluster is specified as single-cluster
routing, and the cluster’s connection logic is configured to conduct manual failover when
the cluster is unavailable. You want to optimize the application code to have more efficient
and highly available Bigtable connectivity. What should you do?
A. Set up Memcached so that queries hit the cache layer first and automatically get data
from Bigtable in the event of a cache miss.
B. Increase the Bigtable client’s connection pool size.
C. Configure a Dataflow template, and use a Beam connector to stream data changes.
D. Configure the app profile to use multi-cluster routing.
Answer: D
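For reference, a sketch of answer D with the Python Bigtable admin client (the project, instance, and app profile IDs are placeholders; the gcloud CLI offers an equivalent app-profiles update command):

  from google.cloud import bigtable
  from google.cloud.bigtable import enums

  client = bigtable.Client(project="my-project", admin=True)
  instance = client.instance("my-instance")

  # RoutingPolicyType.ANY is multi-cluster routing: Bigtable routes each
  # request to the nearest available cluster and fails over automatically.
  app_profile = instance.app_profile(
      "my-app-profile",
      routing_policy_type=enums.RoutingPolicyType.ANY,
  )
  app_profile.update(ignore_warnings=True)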
Question #: 316
Q: You work for an ecommerce company. Your company is migrating multiple applications
to Google Cloud, and you are assisting with the migration of one of the applications. The
application is currently deployed on a VM without any OS dependencies. You have created a
Dockerfile and used it to upload a new image to Artifact Registry. You want to minimize the
infrastructure and operational complexity. What should you do?
Answer: A
Question #: 317
Q: You recently deployed an Apigee API proxy to your organization across two regions. Both
regions are configured with a separate backend that is hosting the API. You need to
configure Apigee to route traffic to the appropriate local region backend. What should you
do?
A. Create a TargetEndpoint with a weighted load balancing algorithm. Configure the API
proxy to use the same weights for each region's backend.
B. Configure a regional internal Application Load Balancer in each region, and use health
checks to verify that each backend is active. Create a DNS A record that contains the IP
addresses of both regions' load balancers. Configure a Targetserver for each region that
uses this DNS name.
C. Configure a global external Application Load Balancer and configure each region’s
backend with a different regional backend service. Each region communicates to this
single global external Application Load Balancer as its TargetServer.
D. Configure a TargetServer for each region's backend host names. Configure the
API proxy to choose the TargetServer based on the system.region.name flow
variable.
Answer: D
Question #: 318
Q: You are a developer that works for a local concert venue. Customers use your company’s
website to purchase tickets for events. You need to provide customers with immediate
confirmation when a selected seat has been reserved. How should you design the ticket
ordering process?
A. Add the seat reservation to a Cloud Tasks queue, which triggers Workflows to process
the seat reservation.
B. Publish the seat reservation to a Pub/Sub topic. Configure the backend service to use
Eventarc to process the seat reservation on GKE.
C. Upload the seat reservation to a Cloud Storage bucket, which triggers an event to a
Cloud Run service that processes the orders.
D. Submit the seat reservation in an HTTP POST request to an Application Load
Balancer. Configure the Application Load Balancer to distribute the request to a
Compute Engine managed instance group that processes the reservation.
Answer: D
Question #: 319
Q: You work for a financial services company that has a container-first approach. Your team
develops microservices applications. You have a Cloud Build pipeline that creates a
container image, runs regression tests, and publishes the image to Artifact Registry. You
need to ensure that only containers that have passed the regression tests are deployed to
GKE clusters. You have already enabled Binary Authorization on the GKE clusters. What
should you do next?
A. Deploy Voucher Server and Voucher Client components. After a container image has
passed the regression tests, run Voucher Client as a step in the Cloud Build pipeline.
B. Create an attestor and a policy. Run a vulnerability scan to create an attestation for
the container image as a step in the Cloud Build pipeline.
C. Create an attestor and a policy. Create an attestation for the container images
that have passed the regression tests as a step in the Cloud Build pipeline.
D. Set the Pod Security Standard level to Restricted for the relevant namespaces.
Digitally sign the container images that have passed the regression tests as a step in the
Cloud Build pipeline.
Answer: C
Question #: 320
Q: You have an application running in production on Cloud Run. Your team needs to change
one of the application’s services to return a new field. You want to test the new revision on
10% of your clients using the least amount of effort. You also need to keep your service
backward compatible.
Answer: C
Question #: 321
Q: Your team plans to use AlloyDB as their database backend for an upcoming application
release. Your application is currently hosted in a different project and network than the
AlloyDB instances. You need to securely connect your application to the AlloyDB instance
while keeping the projects isolated. You want to minimize additional operations and follow
Google-recommended practices. How should you configure the network for database
connectivity?
A. Provision a Shared VPC project where both the application project and the AlloyDB
project are service projects.
B. Use AlloyDB Auth Proxy and configure the application project’s firewall to allow
connections to port 5433.
C. Provision a service account from the AlloyDB project. Use this service account’s JSON
key file as the --credentials-file to connect to the AlloyDB instance.
D. Ask the database team to provision AlloyDB databases in the same project and
network as the application.
Answer:
Question #: 322
Q: You have an on-premises containerized service written in the current stable version of
Python 3 that is available only to users in the United States. The service has high traffic
during the day and no traffic at night. You need to migrate this application to Google Cloud
and track error logs after the migration in Error Reporting. You want to minimize the cost
and effort of these tasks. What should you do?
A. Deploy the code on Cloud Run. Configure your code to write errors to standard
error.
B. Deploy the code on Cloud Run. Configure your code to stream errors to a Cloud
Storage bucket.
C. Deploy the code on a GKE Autopilot cluster. Configure your code to write error logs to
standard error.
D. Deploy the code on a GKE Autopilot cluster. Configure your code to write error logs to
a Cloud Storage bucket.
Answer: A
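For reference, a minimal sketch of the logging side of answer A (Python). On Cloud Run, anything written to standard error is ingested by Cloud Logging, and entries containing a stack trace are picked up by Error Reporting automatically:

  import logging
  import sys

  # Route application logs to stderr; Cloud Run forwards this stream to
  # Cloud Logging with no agent to install or manage.
  logging.basicConfig(stream=sys.stderr, level=logging.INFO)

  def process(payload):
      return payload  # placeholder business logic

  def handle_request(payload):
      try:
          return process(payload)
      except Exception:
          # logging.exception() includes the stack trace, which is what
          # Error Reporting uses to group and surface errors.
          logging.exception("failed to process request")
          raise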
Question #: 324
Q: You have an application running on a GKE cluster. Your application has a stateless web
frontend, and has a high-availability requirement. Your cluster is set to automatically
upgrade, and some of your nodes need to be drained. You need to ensure that the
application has a serving capacity of 10% of the Pods prior to the drain. What should you
do?
A. Configure a Vertical Pod Autoscaler (VPA) to increase the memory and CPU by 10%
and set the updateMode to Auto.
B. Configure the Pod replica count to be 10% more than the current replica count.
C. Configure a Pod Disruption Budget (PDB) value to have a minAvailable value of 10%.
D. Configure the Horizontal Pod Autoscaler (HPA) maxReplicas value to 10% more than
the current replica count.
Answer: B
Question #: 327
Q: Your infrastructure team uses Terraform Cloud and manages Google Cloud resources by
using Terraform configuration files. You want to configure an infrastructure as code
pipeline that authenticates to Google Cloud APIs. You want to use the most secure approach
and minimize changes to the configuration. How should you configure the authentication?
A. Use Terraform on GKE. Create a Kubernetes service account to execute the Terraform
code. Use workload identity federation to authenticate as the Google service account.
B. Install Terraform on a Compute Engine VM. Configure the VM by using a service
account that has the required permissions to manage the Google Cloud resources.
C. Configure Terraform Cloud to use workload identity federation to authenticate
to the Google Cloud APIs.
D. Create a service account that has the required permissions to manage the Google
Cloud resources, and import the service account key to Terraform Cloud. Use this
service account to authenticate to the Google Cloud APIs.
Answer: C
Question #: 328
Q: Your team has created an application that is hosted on a GKE cluster. You need to connect
the application to a REST service that is deployed in two GKE clusters in two different
regions. How should you set up the connection and health checks? (Choose two.)
A. Use Cloud Service Mesh with sidecar proxies to connect the application to the
REST service.
B. Use Cloud Service Mesh with proxyless gRPC to connect the application to the REST
service.
C. Configure the REST service's firewall to allow health checks originating from the GKE
service’s IP ranges.
D. Configure the REST service's firewall to allow health checks originating from the GKE
control plane’s IP ranges.
E. Configure the REST service's firewall to allow health checks originating from
the GKE check probe’s IP ranges.
Answer: A E
Question #: 329
Q: You are using the latest stable version of Python 3 to develop an API that stores data in a
Cloud SQL database. You need to perform CRUD operations on the production database
securely and reliably with minimal effort. What should you do?
A. 1. Use Cloud Composer to manage the connection to the Cloud SQL database from
your Python application.
2. Grant an IAM role to the service account that includes the composer.worker
permission.
B. 1. Use the Cloud SQL API to connect to the Cloud SQL database from your Python
application.
2. Grant an IAM role to the service account that includes the cloudsql.instances.login
permission.
C. 1. Use the Cloud SQL connector library for Python to connect to the Cloud SQL
database through a Cloud SQL Auth Proxy.
2. Grant an IAM role to the service account that includes the
cloudsql.instances.connect permission.
D. 1. Use the Cloud SQL emulator to connect to the Cloud SQL database from Cloud Shell
2. Grant an IAM role to the user that includes the cloudsql.instances.login permission.
Answer: C
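For reference, a sketch of answer C (Python, using the cloud-sql-python-connector package with SQLAlchemy; the instance connection name, IAM database user, and database name are placeholders):

  import sqlalchemy
  from google.cloud.sql.connector import Connector

  connector = Connector()

  def getconn():
      # The connector dials the instance through the Cloud SQL Auth Proxy
      # mechanism, so traffic is encrypted and authorized with IAM.
      return connector.connect(
          "my-project:us-central1:my-instance",  # instance connection name
          "pg8000",
          user="api-service@my-project.iam",     # IAM database user
          db="orders",
          enable_iam_auth=True,
      )

  pool = sqlalchemy.create_engine("postgresql+pg8000://", creator=getconn)

  with pool.connect() as conn:
      rows = conn.execute(sqlalchemy.text("SELECT 1")).fetchall()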
Question #: 330
Q: Your company manages an application that captures stock data in an internal database.
You need to create an API that provides real-time stock data to users. You want to return
stock data to users as quickly as possible, and you want your solution to be highly scalable.
What should you do?
A. Create a BigQuery dataset and table to act as the internal database. Query the table
when user requests are received.
B. Create a Memorystore for Redis instance to store all stock market data. Query this
database when user requests are received.
C. Create a Bigtable instance. Query the table when user requests are received.
Configure a Pub/Sub topic to queue user requests that your API will respond to.
D. Create a Memorystore for Redis instance, and use this database to store the
most accessed stock data. Query this instance first when user requests are
received, and fall back to the internal database.
Answer: D
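For reference, a minimal cache-aside sketch of answer D (Python with the redis client; the Memorystore endpoint and the internal database lookup are stand-ins):

  import json

  import redis

  cache = redis.Redis(host="10.0.0.3", port=6379)  # Memorystore endpoint

  def query_internal_database(symbol):
      return {"symbol": symbol, "price": 0.0}  # placeholder lookup

  def get_stock(symbol):
      cached = cache.get(symbol)
      if cached is not None:
          return json.loads(cached)            # fast path: served from memory
      data = query_internal_database(symbol)   # fall back to the system of record
      # A short TTL keeps real-time data from going stale in the cache.
      cache.set(symbol, json.dumps(data), ex=5)
      return data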
Question #: 331
Q: You are designing a microservices architecture for a new application that will be
deployed on Cloud Run. The application requires high-throughput communication between
the internal microservices. You want to use the most effective, lowest latency
communication protocol for this application. What should you do?
A. Configure the Cloud Run service to use HTTP/2. Implement gRPC for
communication between the microservices. Use streaming gRPCs when a large
amount of data has to be sent.
B. Implement the microservices with the REST API communication protocol. Use Apigee
with rate-limiting to provide the best QoS for high-priority services.
C. Use SOAP to build the microservices API, and use XML as the data format for
communication across the microservices. Define SOAP data contracts for each
microservice.
D. Use HTTP REST to communicate across the microservices. Implement pagination and
add indexing to your database.
Answer: A
Question #: 332
Q: Your company recently modernized their monolith ecommerce site to a microservices
application in GKE. Your team uses Google Cloud's operations suite for monitoring and
logging. You want to improve the logging indexing and searchability in Cloud Logging across
your microservices with the least amount of effort. What should you do?
A. Ask the SRE team to enable Managed Service for Prometheus on your GKE cluster.
B. Reconfigure your applications to write logs to an emptyDir volume. Configure a
sidecar agent to read the logs and send them to the Cloud Logging API.
C. Update your microservices code to emit logs in JSON format.
D. Instrument your microservices code with OpenTelemetry libraries.
Answer: C
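For reference, a minimal sketch of answer C (Python). When a line written to stdout or stderr is valid JSON, the GKE logging agent parses it into structured fields, and recognized keys such as severity map onto the LogEntry, which is what makes the entries indexable and searchable:

  import json
  import sys

  def log(severity, message, **fields):
      # One JSON object per line; the logging agent turns recognized keys
      # (like "severity") into LogEntry metadata and keeps the rest as
      # queryable jsonPayload fields.
      entry = {"severity": severity, "message": message, **fields}
      print(json.dumps(entry), file=sys.stdout, flush=True)

  log("INFO", "order created", order_id="o-123", service="checkout")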
Question #: 333
Q: You recently developed an application that will be hosted on Cloud Run. You need to
conduct a load test. You want to analyze the load test logs second by second to understand
your Cloud Run service's response to rapid traffic spikes. You want to minimize effort. How
should you analyze the logs?
Answer: B
Question #: 337
Q: You are deploying a microservices application to GKE. One microservice needs to
download files from a Cloud Storage bucket. You have an IAM service account with the
Storage Object Viewer role on the project with the bucket. You need to configure your
application to access the Cloud Storage bucket while following Google-recommended
practices. What should you do?
A. Assign the IAM service account to the cluster’s node pool. Configure the application to
authenticate to the bucket by using Application Default Credentials.
B. Assign the IAM service account to the cluster’s node pool. Encrypt the IAM service
account key file by using a symmetric block cipher, and store the encrypted file on a
persistent volume. Store the encryption key in Secret Manager.
C. Create a Kubernetes service account. Create a Kubernetes secret with a base64-
encoded IAM service account key file. Annotate the Kubernetes secret with the
Kubernetes service account. Assign the Kubernetes ServiceAccount to the Pods that
need to access the bucket.
D. Create a Kubernetes service account. Use an IAM policy to bind the IAM service
account to a Kubernetes service account. Annotate the Kubernetes
ServiceAccount object with the name of the bound IAM service account. Assign the
Kubernetes ServiceAccount to the Pods that need to access the bucket.
Answer: D
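A sketch of the Workload Identity setup described in answer D, with placeholder project,
namespace, and account names throughout:

kubectl create serviceaccount gcs-reader --namespace default

gcloud iam service-accounts add-iam-policy-binding \
    bucket-reader@my-project.iam.gserviceaccount.com \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:my-project.svc.id.goog[default/gcs-reader]"

kubectl annotate serviceaccount gcs-reader --namespace default \
    iam.gke.io/gcp-service-account=bucket-reader@my-project.iam.gserviceaccount.com

Pods that set serviceAccountName: gcs-reader then obtain the IAM service account's
credentials automatically, so no key file is ever created or stored.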
Question #: 339
Q: You are developing a new ecommerce website for your company. You want customers to
receive a customized email notification when they place an order. You need to configure this
email service while minimizing deployment effort. What should you do?
Answer: A
Question #: 340
Q: You are developing an online chat application where users can upload profile pictures.
Uploaded profile pictures must comply with content policies. You need to detect
inappropriate images and label those images automatically when they are uploaded. In the
future, this process will need to be expanded to include additional processing tasks such as
watermarking and image compression.
You want to simplify orchestration and minimize operational overhead of the image
scanning and labeling steps while also ensuring that additional steps can be added and
removed easily later on. What should you do?
Answer: D
Question #: 348
Q: You are compiling a compliance report on vulnerability metadata for a specific set of
images identified by Artifact Analysis. Metadata from images scanned more than 30 days
ago is missing from the compliance report. You need to access the vulnerability metadata
for these older images. What should you do?
Answer: C
Question #: 349
Q: Your team runs a Python job that reads millions of customer record files stored in a Cloud
Storage bucket. To comply with regulatory requirements, you need to ensure that customer
data is immediately deleted once the job is completed. You want to minimize the time
required to complete this task. What should you do?
A. Add a final step in the job that deletes all the objects in the bucket in bulk by
using batch requests to the Cloud Storage API.
B. Configure Object Lifecycle Management on the Cloud Storage bucket that deletes all
the objects in the bucket at the end of the job execution.
C. Remove the bucket from the Google Cloud console when the job is completed.
D. Use the gcloud CLI to execute the gcloud storage rm --recursive gs://BUCKET_NAME/
command.
Answer: A
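A minimal sketch of answer A using the google-cloud-storage client; the bucket name is a
placeholder, and the chunk size reflects the JSON API's limit of 100 calls per batch:

from google.cloud import storage

client = storage.Client()
blobs = list(client.list_blobs("customer-records-bucket"))  # placeholder name

for i in range(0, len(blobs), 100):
    with client.batch():          # deferred calls are sent as one batch request
        for blob in blobs[i:i + 100]:
            blob.delete()

Batching collapses thousands of per-object round trips into a few HTTP requests, which is
why it finishes faster than lifecycle rules (which apply asynchronously, possibly a day
later) or a recursive CLI delete.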
Question #: 351
Q: You have a Cloud Run service that needs to connect to a Cloud SQL instance in a different
project. You provisioned the Cloud Run service account with the Cloud SQL Client IAM role
on the project that is hosting Cloud SQL. However, when you test the connection, the
connection fails. You want to fix the connection failure while following Google-
recommended practices. What should you do?
A. Add the cloudsql.instances.connect IAM permission to the Cloud Run service account.
B. Request additional API quota for Cloud SQL Auth Proxy.
C. Enable the Cloud SQL Admin API in both projects.
D. Migrate the Cloud SQL instance into the same project as the Cloud Run service.
Answer: C
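A sketch of answer C: the built-in Cloud SQL connection from Cloud Run is brokered through
the Cloud SQL Admin API, so the API must be enabled on both sides. The project IDs below
are placeholders:

gcloud services enable sqladmin.googleapis.com --project=run-app-project
gcloud services enable sqladmin.googleapis.com --project=sql-host-project

The Cloud SQL Client role alone is not enough: the role grants permission to call the API,
but the API itself still has to be enabled in each project.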
Question #: 353
Q: You are a developer at a large organization. Your team uses Git for source code
management (SCM). You want to ensure that your team follows Google-recommended best
practices for managing code in order to drive higher rates of software delivery. Which SCM
process should your team use?
A. Each developer commits their code to the main branch before each product release,
conducts testing, and rolls back if integration issues are detected.
B. Each group of developers copies the repository, commits their changes to their
repository, and merges their code into the main repository before each product release.
C. Each developer creates a branch for their own work, commits their changes to
their branch, and merges their code into the main branch daily.
D. Each group of developers creates a feature branch from the main branch for their
work, commits their changes to their branch, and merges their code into the main
branch before each major release.
Answer: C
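A sketch of the daily flow in answer C (trunk-based development); the branch and remote
names are illustrative:

git switch -c fix-cart-total          # short-lived branch per developer
git commit -am "Fix rounding in cart total"
git switch main
git pull --ff-only                    # stay current with the trunk
git merge fix-cart-total              # merge back daily, often via a small PR
git push origin main

Small, daily merges keep integration conflicts trivial, which is the delivery-rate
advantage over the long-lived release branches in the other options.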
Question #: 361
Q: You are responsible for developing a new ecommerce application that is running on
Cloud Run. You need to connect your application to a Cloud SQL database that is in a
separate project. That project uses an isolated network dedicated to multiple databases,
and the instance has no public IP. What should you do?
A. Create a Private Service Connect endpoint on your network. Create a Serverless VPC
Access connector on your project. Use Cloud SQL Language Connectors to create an
internal connection.
B. Configure VPC Network Peering between both networks. In Cloud Run, create a Cloud
SQL connection that uses the internal IP. Use Cloud SQL Language Connectors to
interact with the database.
C. Configure private services access on your project. In Cloud Run, create a Cloud SQL
connection. Use Cloud SQL Language Connectors to interact with the database.
D. Create a subnet on your VPC. Create a Serverless VPC Access connector on your
project using the new subnet. In Cloud Run, create a Cloud SQL connection. Use
Cloud SQL Language Connectors to interact with the database.
Answer: D
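A sketch of answer D with placeholder names; the connector subnet must be a dedicated /28:

gcloud compute networks subnets create connector-subnet \
    --network=my-vpc --region=us-central1 --range=10.8.0.0/28

gcloud compute networks vpc-access connectors create db-connector \
    --region=us-central1 --subnet=connector-subnet

gcloud run deploy shop-api --image=IMAGE_URL \
    --region=us-central1 \
    --vpc-connector=db-connector \
    --add-cloudsql-instances=db-project:us-central1:orders-db

The connector gives the Cloud Run service a route into the VPC, so the Cloud SQL Language
Connectors can reach the instance's private IP without any public exposure.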