Associate Cloud Engineer Exam Dumps
2.You are developing an internet of things (IoT) application that captures sensor data from multiple
devices that have already been set up. You need to identify the global data storage product your
company should use to store this data. You must ensure that the storage solution you choose meets
your requirements of sub-millisecond latency.
What should you do?
A. Store the IoT data in Spanner. Use caches to speed up the process and avoid latencies.
B. Store the IoT data in Bigtable.
C. Capture IoT data in BigQuery datasets.
D. Store the IoT data in Cloud Storage. Implement caching by using Cloud CDN.
Answer: B
Explanation:
Let's evaluate each option based on the requirement of sub-millisecond latency for globally stored IoT
data:
A. Spanner with Caching: While Spanner offers strong consistency and global scalability, the base
latency might not consistently be sub-millisecond for all read/write operations globally. Introducing
caching adds complexity and doesn't guarantee sub-millisecond latency for all initial reads or cache
misses.
B. Bigtable: Bigtable is a highly scalable NoSQL database service designed for low-latency, high-
throughput workloads. It excels at storing and retrieving large volumes of time-series data, which is
typical for IoT sensor data. Its architecture is optimized for single-key lookups and scans, providing
consistent sub-millisecond latency, making it a strong candidate for this use case.
C. BigQuery: BigQuery is a fully managed, serverless data warehouse designed for analytical queries
on large datasets. While it's excellent for analyzing IoT data in batch, it's not optimized for the low-
latency, high-throughput ingestion and retrieval required for real-time IoT applications with sub-
millisecond latency needs.
D. Cloud Storage with Cloud CDN: Cloud Storage is object storage and is not designed for low-
latency transactional workloads. Cloud CDN is a content delivery network that caches content closer
to users for faster delivery, but it's not suitable for the primary storage of rapidly incoming IoT sensor
data requiring sub-millisecond write latency.
Google Cloud Documentation
Reference: Cloud Bigtable Overview: https://cloud.google.com/bigtable/docs/overview - This
document highlights Bigtable's suitability for low-latency and high-throughput applications, including
IoT. It mentions its ability to handle massive amounts of data with consistent performance.
Spanner Overview: https://cloud.google.com/spanner/docs/overview - While Spanner offers low
latency, Bigtable is generally preferred for extremely high-throughput, low-latency use cases like raw
sensor data ingestion due to its optimized architecture for such workloads.
BigQuery Overview: https://cloud.google.com/bigquery/docs/introduction - This emphasizes
BigQuery's analytical capabilities rather than low-latency operational workloads.
Cloud Storage Overview: https://cloud.google.com/storage/docs/overview - This describes Cloud
Storage as object storage, not ideal for sub-millisecond latency reads and writes required for real-time
IoT data.
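For reference, a Bigtable instance for this kind of workload can be provisioned from the CLI. A minimal sketch, assuming hypothetical instance, cluster, and zone names:
gcloud bigtable instances create iot-sensors \
    --display-name="IoT sensor data" \
    --cluster-config=id=iot-cluster,zone=us-central1-b,nodes=3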
3.You need to manage multiple Google Cloud Platform (GCP) projects in the fewest steps possible.
You want to configure the Google Cloud SDK command line interface (CLI) so that you can easily
manage
multiple GCP projects.
What should you do?
A. 1. Create a configuration for each project you need to manage.
4.You created several resources in multiple Google Cloud projects. All projects are linked to different
billing accounts. To better estimate future charges, you want to have a single visual representation of
all costs incurred. You want to include new cost data as soon as possible.
What should you do?
A. Configure Billing Data Export to BigQuery and visualize the data in Data Studio.
B. Visit the Cost Table page to get a CSV export and visualize it using Data Studio.
C. Fill all resources in the Pricing Calculator to get an estimate of the monthly cost.
D. Use the Reports view in the Cloud Billing Console to view the desired cost information.
Answer: A
Explanation:
https://cloud.google.com/billing/docs/how-to/export-data-bigquery "Cloud Billing export to BigQuery
enables you to export detailed Google Cloud billing data (such as usage, cost estimates, and pricing
data) automatically throughout the day to a BigQuery dataset that you specify."
Reference: https://cloud.google.com/billing/docs/how-to/visualize-data
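Once the export is enabled, the combined data can also be queried directly from the CLI; a minimal sketch, where the project, dataset, and export table names are placeholders:
bq query --use_legacy_sql=false '
SELECT project.id AS project, SUM(cost) AS total_cost
FROM `my-project.billing_export.gcloud_billing_export_v1_XXXXXX`
GROUP BY project
ORDER BY total_cost DESC'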
5.You are running multiple microservices in a Kubernetes Engine cluster. One microservice is
rendering images. The microservice responsible for the image rendering requires a large amount of
CPU time compared to the memory it requires. The other microservices are workloads that are
optimized for n1-standard machine types. You need to optimize your cluster so that all workloads are
using resources as efficiently as possible.
What should you do?
A. Assign the pods of the image rendering microservice a higher pod priority than the other
microservices.
B. Create a node pool with compute-optimized machine type nodes for the image rendering
microservice. Use the node pool with general-purpose machine type nodes for the other microservices.
C. Use the node pool with general-purpose machine type nodes for the image rendering microservice.
Create a node pool with compute-optimized machine type nodes for the other microservices.
D. Configure the required amount of CPU and memory in the resource requests specification of the
image rendering microservice deployment. Keep the resource requests for the other microservices at
the default.
Answer: B
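A compute-optimized node pool can be added without touching the existing nodes; a minimal sketch, with hypothetical cluster, pool, zone, and machine type names:
gcloud container node-pools create render-pool \
    --cluster=my-cluster \
    --zone=us-central1-a \
    --machine-type=c2-standard-8 \
    --num-nodes=3
# In the image rendering Deployment, target the new pool with:
#   nodeSelector:
#     cloud.google.com/gke-nodepool: render-pool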
6. Locate the project in the GCP console, click Shut down and then enter the project ID.
B. 1. Verify that you are assigned the Project Owners IAM role for this project.
7.You created an instance of SQL Server 2017 on Compute Engine to test features in the new
version. You want to connect to this instance using the fewest number of steps.
What should you do?
A. Install a RDP client on your desktop. Verify that a firewall rule for port 3389 exists.
B. Install a RDP client in your desktop. Set a Windows username and password in the GCP Console.
Use the credentials to log in to the instance.
C. Set a Windows password in the GCP Console. Verify that a firewall rule for port 22 exists. Click the
RDP button in the GCP Console and supply the credentials to log in.
D. Set a Windows username and password in the GCP Console. Verify that a firewall rule for port
3389 exists. Click the RDP button in the GCP Console, and supply the credentials to log in.
Answer: D
Explanation:
https://cloud.google.com/compute/docs/instances/connecting-to-windows#remote-desktop-connection-
app
https://cloud.google.com/compute/docs/instances/windows/generating-credentials
https://cloud.google.com/compute/docs/instances/connecting-to-windows#before-you-begin
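The same prerequisites can be completed from the CLI; a sketch with placeholder instance, zone, and user names:
gcloud compute reset-windows-password sql-server-vm \
    --zone=us-central1-a \
    --user=admin_user
gcloud compute firewall-rules create allow-rdp \
    --network=default \
    --allow=tcp:3389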
8. Locate the project in the GCP console, enter the project ID and then click Shut down.
D. 1. Verify that you are assigned the Organizational Administrators IAM role for this project.
11.You are building an application that processes data files uploaded from thousands of suppliers.
Your primary goals for the application are data security and the expiration of aged data.
You need to design the application to:
• Restrict access so that suppliers can access only their own data.
• Give suppliers write access to data only for 30 minutes.
• Delete data that is over 45 days old.
You have a very short development cycle, and you need to make sure that the application requires
minimal maintenance.
Which two strategies should you use? (Choose two.)
A. Build a lifecycle policy to delete Cloud Storage objects after 45 days.
B. Use signed URLs to allow suppliers limited time access to store their objects.
C. Set up an SFTP server for your application, and create a separate user for each supplier.
D. Build a Cloud function that triggers a timer of 45 days to delete objects that have expired.
E. Develop a script that loops through all Cloud Storage buckets and deletes any buckets that are
older than 45 days.
Answer: AB
Explanation:
(A) Object Lifecycle Management Delete
The Delete action deletes an object when the object meets all conditions specified in the lifecycle rule.
Exception: In buckets with Object Versioning enabled, deleting the live version of an object causes it
to become a noncurrent version, while deleting a noncurrent version deletes that version
permanently.
https://cloud.google.com/storage/docs/lifecycle#delete
(B) Signed URLs
This page provides an overview of signed URLs, which you use to give time-limited resource access
to anyone in possession of the URL, regardless of whether they have a Google account
https://cloud.google.com/storage/docs/access-control/signed-urls
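A time-limited upload URL can be generated with gsutil; a sketch assuming a hypothetical service account key file, bucket, and object path:
gsutil signurl -m PUT -d 30m sa-key.json gs://supplier-bucket/supplier-123/upload.csv
# -d 30m limits the URL's validity to 30 minutes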
12.You have a project for your App Engine application that serves a development environment. The
required testing has succeeded and you want to create a new project to serve as your production
environment.
What should you do?
A. Use gcloud to create the new project, and then deploy your application to the new project.
B. Use gcloud to create the new project and to copy the deployed application to the new project.
C. Create a Deployment Manager configuration file that copies the current App Engine deployment
into a new project.
D. Deploy your application again using gcloud and specify the project parameter with the new project
name to create the new project.
Answer: A
Explanation:
You can deploy to a different project by using the --project flag.
By default, the service is deployed to the current project configured via:
$ gcloud config set core/project PROJECT
To override this value for a single deployment, use the --project flag:
$ gcloud app deploy ~/my_app/app.yaml --project=PROJECT
Ref: https://cloud.google.com/sdk/gcloud/reference/app/deploy
13.You are in charge of provisioning access for all Google Cloud users in your organization. Your
company recently acquired a startup company that has their own Google Cloud organization. You
need to ensure that your Site Reliability Engineers (SREs) have the same project permissions in the
startup company's organization as in your own organization.
What should you do?
A. In the Google Cloud console for your organization, select Create role from selection, and choose
destination as the startup company's organization
B. In the Google Cloud console for the startup company, select Create role from selection and choose
source as the startup company's Google Cloud organization.
C. Use the gcloud iam roles copy command, and provide the Organization ID of the startup
company's
Google Cloud Organization as the destination.
D. Use the gcloud iam roles copy command, and provide the project IDs of all projects in the startup
company s organization as the destination.
Answer: D
Explanation:
The gcloud iam roles copy command copies custom role definitions from one parent resource to
another. The destination can be either an organization (--dest-organization) or an individual project
(--dest-project), so the custom roles used by your SREs can be recreated in the startup company's
projects and then granted there.
Ref: https://cloud.google.com/sdk/gcloud/reference/iam/roles/copy
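A minimal sketch of the command, with placeholder role and project names:
gcloud iam roles copy \
    --source="projects/source-project/roles/sreCustomRole" \
    --destination="sreCustomRole" \
    --dest-project="startup-project-1"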
14. Make sure that billing is enabled for your Google Cloud project. Learn how to confirm billing is
enabled for your project.
To use the gcloud command-line tool for this quickstart, you must first install and initialize the Cloud
SDK:
15.You are using multiple configurations for gcloud. You want to review the configured Kubernetes
Engine cluster of an inactive configuration using the fewest possible steps.
What should you do?
A. Use gcloud config configurations describe to review the output.
B. Use gcloud config configurations activate and gcloud config list to review the output.
C. Use kubectl config get-contexts to review the output.
D. Use kubectl config use-context and kubectl config view to review the output.
Answer: D
Explanation:
Reference: https://medium.com/google-cloud/kubernetes-engine-kubectl-config-b6270d2b656c
kubectl config view -o jsonpath='{.users[].name}' # display the first user
kubectl config view -o jsonpath='{.users[*].name}' # get a list of users
kubectl config get-contexts # display list of contexts
kubectl config current-context # display the current-context
kubectl config use-context my-cluster-name # set the default context to my-cluster-name
https://kubernetes.io/docs/reference/kubectl/cheatsheet/
16.You need to set a budget alert for use of Compute Engine services on one of the three Google
Cloud Platform projects that you manage. All three projects are linked to a single billing account.
What should you do?
A. Verify that you are the project billing administrator. Select the associated billing account and create
a budget and alert for the appropriate project.
B. Verify that you are the project billing administrator. Select the associated billing account and create
a budget and a custom alert.
C. Verify that you are the project administrator. Select the associated billing account and create a
budget for the appropriate project.
D. Verify that you are project administrator. Select the associated billing account and create a budget
and a custom alert.
Answer: A
Explanation:
https://cloud.google.com/iam/docs/understanding-roles#billing-roles
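Budgets can also be created from the CLI; a sketch where the billing account ID, project number, and amounts are placeholders:
gcloud billing budgets create \
    --billing-account=0X0X0X-0X0X0X-0X0X0X \
    --display-name="Compute Engine budget" \
    --budget-amount=1000USD \
    --filter-projects=projects/123456789012 \
    --threshold-rule=percent=0.9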
17.Your organization has a dedicated person who creates and manages all service accounts for
Google Cloud projects. You need to assign this person the minimum role for projects.
What should you do?
A. Add the user to roles/iam.roleAdmin role.
B. Add the user to roles/iam.securityAdmin role.
C. Add the user to roles/iam.serviceAccountUser role.
D. Add the user to roles/iam.serviceAccountAdmin role.
Answer: D
Explanation:
Reference: https://cloud.google.com/iam/docs/creating-managing-service-accounts
Service Account User (roles/iam.serviceAccountUser): Includes permissions to list service accounts,
get details about a service account, and impersonate a service account. Service Account Admin
(roles/iam.serviceAccountAdmin): Includes permissions to list service accounts and get details about
a service account. Also includes permissions to create, update, and delete service accounts, and to
view or change the IAM policy on a service account.
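Granting the role on a project is a single binding; a sketch with a placeholder project and user:
gcloud projects add-iam-policy-binding my-project \
    --member="user:[email protected]" \
    --role="roles/iam.serviceAccountAdmin"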
18.You have two Google Cloud projects: project-a with VPC vpc-a (10.0.0.0/16) and project-b with
VPC vpc-b (10.8.0.0/16). Your frontend application resides in vpc-a and the backend API services are
deployed in vpc-b. You need to efficiently and cost-effectively enable communication between these
Google Cloud projects. You also want to follow Google-recommended practices.
What should you do?
A. Configure a Cloud Router in vpc-a and another Cloud Router in vpc-b.
B. Configure a Cloud Interconnect connection between vpc-a and vpc-b.
C. Create VPC Network Peering between vpc-a and vpc-b.
D. Create an OpenVPN connection between vpc-a and vpc-b.
Answer: C
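VPC Network Peering must be configured from both sides; a sketch using the project and network names from the question:
gcloud compute networks peerings create peer-a-to-b \
    --network=vpc-a --peer-project=project-b --peer-network=vpc-b --project=project-a
gcloud compute networks peerings create peer-b-to-a \
    --network=vpc-b --peer-project=project-a --peer-network=vpc-a --project=project-b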
19. Create an ingress firewall rule with the following settings:
• Targets: all instances with tier #2 service account
• Source filter: all instances with tier #1 service account
• Protocols: allow TCP:8080
2. Create an ingress firewall rule with the following settings:
• Targets: all instances with tier #3 service account
• Source filter: all instances with tier #2 service account
• Protocols: allow TCP:8080
23.You have deployed an application on a Compute Engine instance. An external consultant needs to
access the Linux-based instance. The consultant is connected to your corporate network through a
VPN connection, but the consultant has no Google account.
What should you do?
A. Instruct the external consultant to use the gcloud compute ssh command line tool by using Identity-
Aware Proxy to access the instance.
B. Instruct the external consultant to use the gcloud compute ssh command line tool by using the
public IP address of the instance to access it.
C. Instruct the external consultant to generate an SSH key pair, and request the public key from the
consultant. Add the public key to the instance yourself, and have the consultant access the instance
through SSH with their private key.
D. Instruct the external consultant to generate an SSH key pair, and request the private key from the
consultant. Add the private key to the instance yourself, and have the consultant access the instance
through SSH with their public key.
Answer: C
Explanation:
The best option is to instruct the external consultant to generate an SSH key pair, and request the
public key from the consultant. Then, add the public key to the instance yourself, and have the
consultant access the instance through SSH with their private key. This way, you can grant the
consultant access to the instance without requiring a Google account or exposing the instance’s
public IP address. This option also follows the best practice of using user-managed SSH keys instead
of service account keys for SSH access1.
Option A is not feasible because the external consultant does not have a Google account, and
therefore cannot use Identity-Aware Proxy (IAP) to access the instance. IAP requires the user to
authenticate with a Google account and have the appropriate IAM permissions to access the
instance2. Option B is not secure because it exposes the instance’s public IP address, which can
increase the risk of unauthorized access or attacks. Option D is not correct because it reverses the
roles of the public and private keys. The public key should be added to the instance, and the private
key should be kept by the consultant. Sharing the private key with anyone else can compromise the
security of the SSH connection3.
Reference:
1: https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys
2: https://cloud.google.com/iap/docs/using-tcp-forwarding
3: https://cloud.google.com/compute/docs/instances/connecting-advanced#sshbetweeninstances
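Adding the consultant's public key to the instance can be done through instance metadata; a sketch with placeholder names, where consultant_key.txt contains a line in the form USERNAME:PUBLIC_KEY:
gcloud compute instances add-metadata app-vm \
    --zone=us-central1-a \
    --metadata-from-file ssh-keys=consultant_key.txt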
24.You are developing a new web application that will be deployed on Google Cloud Platform. As part
of your release cycle, you want to test updates to your application on a small portion of real user
traffic. The majority of the users should still be directed towards a stable version of your application.
What should you do?
A. Deploy the application on App Engine. For each update, create a new version of the same service.
Configure traffic splitting to send a small percentage of traffic to the new version.
B. Deploy the application on App Engine. For each update, create a new service. Configure traffic
splitting to send a small percentage of traffic to the new service.
C. Deploy the application on Kubernetes Engine. For a new release, update the deployment to use the
new version.
D. Deploy the application on Kubernetes Engine. For a new release, create a new deployment for the
new version. Update the service to use the new deployment.
Answer: A
Explanation:
App Engine supports traffic splitting between versions of the same service, so a small percentage of
live traffic can be routed to the new version while the stable version continues to serve the majority of
users.
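A sketch of the traffic split, assuming hypothetical version IDs on the default service:
gcloud app services set-traffic default \
    --splits=stable-version=0.95,canary-version=0.05 \
    --split-by=random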
25.You have deployed an application on a single Compute Engine instance. The application writes
logs to disk. Users start reporting errors with the application. You want to diagnose the problem.
What should you do?
A. Navigate to Cloud Logging and view the application logs.
B. Connect to the instance’s serial console and read the application logs.
C. Configure a Health Check on the instance and set a Low Healthy Threshold value.
D. Install and configure the Cloud Logging Agent and view the logs from Cloud Logging.
Answer: D
Explanation:
Reference: https://cloud.google.com/error-reporting/docs/setup/compute-engine
Cloud Logging knows nothing about applications installed on the system without an agent collecting
logs. Using the serial console is not a best practice and is impractical at a large scale.
The VM images for Compute Engine and Amazon Elastic Compute Cloud (EC2) don't include the
Logging agent, so you must complete these steps to install it on those instances. The agent runs
under both Linux and Windows. Source:
https://cloud.google.com/logging/docs/agent/logging/installation
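Installation on a Linux VM follows the documented script-based flow; a sketch based on the legacy Logging agent installation docs (verify the script name against current documentation):
curl -sSO https://dl.google.com/cloudagents/add-logging-agent-repo.sh
sudo bash add-logging-agent-repo.sh --also-install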
26.You are operating a Google Kubernetes Engine (GKE) cluster for your company where different
teams can run non-production workloads. Your Machine Learning (ML) team needs access to Nvidia
Tesla P100 GPUs to train their models. You want to minimize effort and cost.
What should you do?
A. Ask your ML team to add the “accelerator: gpu” annotation to their pod specification.
B. Recreate all the nodes of the GKE cluster to enable GPUs on all of them.
C. Create your own Kubernetes cluster on top of Compute Engine with nodes that have GPUs.
Dedicate this cluster to your ML team.
D. Add a new, GPU-enabled node pool to the GKE cluster. Ask your ML team to add the
cloud.google.com/gke-accelerator: nvidia-tesla-p100 nodeSelector to their pod specification.
Answer: D
Explanation:
This is the most optimal solution. Rather than recreating all nodes, you create a new node pool with
GPUs enabled. You then modify the Pod specification to target particular GPU types by adding a node
selector to the workload's Pod specification. You still have a single cluster, so you pay the Kubernetes
cluster management fee for just one cluster, thus minimizing the cost.
Ref: https://cloud.google.com/kubernetes-engine/docs/how-to/gpus
Ref: https://cloud.google.com/kubernetes-engine/pricing
Example:
apiVersion: v1
kind: Pod
metadata:
  name: my-gpu-pod
spec:
  containers:
  - name: my-gpu-container
    image: nvidia/cuda:10.0-runtime-ubuntu18.04
    command: ["/bin/bash"]
    resources:
      limits:
        nvidia.com/gpu: 2
  nodeSelector:
    cloud.google.com/gke-accelerator: nvidia-tesla-k80 # or nvidia-tesla-p100, nvidia-tesla-p4, nvidia-tesla-v100, nvidia-tesla-t4
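The GPU-enabled node pool itself could be created along these lines (cluster, zone, pool, and machine type names are placeholders):
gcloud container node-pools create gpu-pool \
    --cluster=my-cluster \
    --zone=us-central1-a \
    --machine-type=n1-standard-4 \
    --accelerator=type=nvidia-tesla-p100,count=1 \
    --num-nodes=1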
27.You are given a project with a single virtual private cloud (VPC) and a single subnetwork in the us-
central1 region. There is a Compute Engine instance hosting an application in this subnetwork. You
need to deploy a new instance in the same project in the europe-west1 region. This new instance
needs access to the application. You want to follow Google-recommended practices.
What should you do?
A. 1. Create a subnetwork in the same VPC, in europe-west1.
28.The storage costs for your application logs have far exceeded the project budget. The logs are
currently being retained indefinitely in the Cloud Storage bucket myapp-gcp-ace-logs. You have been
asked to remove logs older than 90 days from your Cloud Storage bucket. You want to optimize
ongoing Cloud Storage spend.
What should you do?
A. Write a script that runs gsutil ls -l gs://myapp-gcp-ace-logs/** to find and remove items older than
90 days. Schedule the script with cron.
B. Write a lifecycle management rule in JSON and push it to the bucket with gsutil lifecycle set config-
json-file.
C. Write a lifecycle management rule in XML and push it to the bucket with gsutil lifecycle set config-
xml-file.
D. Write a script that runs gsutil ls -lr gs://myapp-gcp-ace-logs/** to find and remove items older than
90 days. Repeat this process every morning.
Answer: B
Explanation:
You write a lifecycle management rule in XML and push it to the bucket with gsutil lifecycle set config-
xml-file. is not right.
gsutil lifecycle set enables you to set the lifecycle configuration on one or more buckets based on the
configuration file provided. However, XML is not a valid supported type for the configuration file.
Ref: https://cloud.google.com/storage/docs/gsutil/commands/lifecycle
Write a script that runs gsutil ls -lr gs://myapp-gcp-ace-logs/** to find and remove items older than 90
days. Repeat this process every morning. is not right.
This manual approach is error-prone, time-consuming and expensive. GCP Cloud Storage provides
lifecycle management rules that let you achieve this with minimal effort.
Write a script that runs gsutil ls -l gs://myapp-gcp-ace-logs/** to find and remove items older than 90
days. Schedule the script with cron. is not right.
This manual approach is error-prone, time-consuming and expensive. GCP Cloud Storage provides
lifecycle management rules that let you achieve this with minimal effort.
Write a lifecycle management rule in JSON and push it to the bucket with gsutil lifecycle set config-
json-file. is the right answer.
You can assign a lifecycle management configuration to a bucket. The configuration contains a set of
rules which apply to current and future objects in the bucket. When an object meets the criteria of one
of the rules, Cloud Storage automatically performs a specified action on the object. One of the
supported actions is to Delete objects. You can set up a lifecycle management to delete objects older
than 90 days. gsutil lifecycle set enables you to set the lifecycle configuration on the bucket based on
the configuration file. JSON is the only supported type for the configuration file. The config-json-file
specified on the command line should be a path to a local file containing the lifecycle configuration
JSON document.
Ref: https://cloud.google.com/storage/docs/gsutil/commands/lifecycle
Ref: https://cloud.google.com/storage/docs/lifecycle
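A minimal sketch of the JSON rule and the command that applies it to the bucket from the question (the local filename is arbitrary):
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {"action": {"type": "Delete"}, "condition": {"age": 90}}
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://myapp-gcp-ace-logs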
29.You need to configure optimal data storage for files stored in Cloud Storage for minimal cost. The
files are used in a mission-critical analytics pipeline that is used continually. The users are in Boston,
MA (United States).
What should you do?
A. Configure regional storage for the region closest to the users Configure a Nearline storage class
B. Configure regional storage for the region closest to the users Configure a Standard storage class
C. Configure dual-regional storage for the dual region closest to the users Configure a Nearline
storage class
D. Configure dual-regional storage for the dual region closest to the users Configure a Standard
storage class
Answer: B
Explanation:
Keywords: "used continually" rules out Nearline, which has retrieval fees and a minimum storage
duration, so Standard is the correct class. "Minimal cost" with users concentrated in Boston points to
regional storage in the closest region rather than dual-region storage.
30.Your company is using Google Workspace to manage employee accounts. Anticipated growth will
increase the number of personnel from 100 employees to 1,000 employees within 2 years. Most
employees will need access to your company's Google Cloud account. The systems and processes
will need to support 10x growth without performance degradation, unnecessary complexity, or
security issues.
What should you do?
A. Migrate the users to Active Directory. Connect the Human Resources system to Active Directory.
Turn on Google Cloud Directory Sync (GCDS) for Cloud Identity. Turn on Identity Federation from
Cloud Identity to Active Directory.
B. Organize the users in Cloud Identity into groups. Enforce multi-factor authentication in Cloud
Identity.
C. Turn on identity federation between Cloud Identity and Google Workspace. Enforce multi-factor
authentication for domain wide delegation.
D. Use a third-party identity provider service through federation. Synchronize the users from Google
Workspace to the third-party provider in real time.
Answer: B
31. Collect metrics and publish them to the Cloud Monitoring API.
32.You are analyzing Google Cloud Platform service costs from three separate projects. You want to
use this information to create service cost estimates by service type, daily and monthly, for the next
six months using standard query syntax.
What should you do?
A. Export your bill to a Cloud Storage bucket, and then import into Cloud Bigtable for analysis.
B. Export your bill to a Cloud Storage bucket, and then import into Google Sheets for analysis.
C. Export your transactions to a local file, and perform analysis with a desktop tool.
D. Export your bill to a BigQuery dataset, and then write time window-based SQL queries for analysis.
Answer: D
Explanation:
"...we recommend that you enable Cloud Billing data export to BigQuery at the same time that you
create a Cloud Billing account. " https://cloud.google.com/billing/docs/how-to/export-data-bigquery
https://medium.com/google-cloud/analyzing-google-cloud-billing-data-with-big-query-30bae1c2aae4
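A sketch of a daily cost-by-service query against the export table (project, dataset, and table names are placeholders):
bq query --use_legacy_sql=false '
SELECT service.description AS service,
       DATE(usage_start_time) AS usage_day,
       SUM(cost) AS daily_cost
FROM `my-project.billing_export.gcloud_billing_export_v1_XXXXXX`
GROUP BY service, usage_day
ORDER BY usage_day, service'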
33.You have a web application deployed as a managed instance group. You have a new version of
the application to gradually deploy. Your web application is currently receiving live web traffic. You
want to ensure that the available capacity does not decrease during the deployment.
What should you do?
A. Perform a rolling-action start-update with maxSurge set to 0 and maxUnavailable set to 1.
B. Perform a rolling-action start-update with maxSurge set to 1 and maxUnavailable set to 0.
C. Create a new managed instance group with an updated instance template. Add the group to the
backend service for the load balancer. When all instances in the new managed instance group are
healthy, delete the old managed instance group.
D. Create a new instance template with the new application version. Update the existing managed
instance group with the new instance template. Delete the instances in the managed instance group
to allow the managed instance group to recreate the instance using the new instance template.
Answer: B
Explanation:
https://cloud.google.com/compute/docs/instance-groups/rolling-out-updates-to-managed-instance-
groups#max_unavailable
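A sketch of the rolling update, with placeholder group, template, and zone names:
gcloud compute instance-groups managed rolling-action start-update my-mig \
    --version=template=my-new-template \
    --max-surge=1 \
    --max-unavailable=0 \
    --zone=us-central1-a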
34.You have created a new project in Google Cloud through the gcloud command line interface (CLI)
and linked a billing account. You need to create a new Compute
Engine instance using the CLI. You need to perform the prerequisite steps.
What should you do?
A. Create a Cloud Monitoring Workspace.
B. Create a VPC network in the project.
C. Enable the compute.googleapis.com API.
D. Grant yourself the IAM role of Compute Admin.
Answer: C
Explanation:
A newly created project does not have the Compute Engine API enabled, so compute.googleapis.com
must be enabled before an instance can be created. The project creator is already Project Owner,
which includes the permissions of the Compute Admin role.
35.You are planning to migrate the following on-premises data management solutions to Google
Cloud:
• One MySQL cluster for your main database
• Apache Kafka for your event streaming platform
• One Cloud SQL for PostgreSQL database for your analytical and reporting needs
You want to implement Google-recommended solutions for the migration. You need to ensure that the
new solutions provide global scalability and require minimal operational and infrastructure
management.
What should you do?
A. Migrate from MySQL to Cloud SQL, from Kafka to Memorystore, and from Cloud SQL for
PostgreSQL to Cloud SQL.
B. Migrate from MySQL to Cloud Spanner, from Kafka to Memorystore, and from Cloud SQL for
PostgreSQL to Cloud SQL.
C. Migrate from MySQL to Cloud SQL, from Kafka to Pub/Sub, and from Cloud SQL for PostgreSQL
to BigQuery.
D. Migrate from MySQL to Cloud Spanner, from Kafka to Pub/Sub, and from Cloud SQL for
PostgreSQL to BigQuery.
Answer: D
36.You are managing several Google Cloud Platform (GCP) projects and need access to all logs for
the past 60 days. You want to be able to explore and quickly analyze the log contents. You want to
follow Google- recommended practices to obtain the combined logs for all projects.
What should you do?
A. Navigate to Stackdriver Logging and select resource.labels.project_id="*"
B. Create a Stackdriver Logging Export with a Sink destination to a BigQuery dataset. Configure the
table expiration to 60 days.
C. Create a Stackdriver Logging Export with a Sink destination to Cloud Storage. Create a lifecycle
rule
to delete objects after 60 days.
D. Configure a Cloud Scheduler job to read from Stackdriver and store the logs in BigQuery.
Configure the table expiration to 60 days.
Answer: B
Explanation:
Navigate to Stackdriver Logging and select resource.labels.project_id=*. is not right.
Log entries are held in Stackdriver Logging for a limited time known as the retention period which is
30 days (default configuration). After that, the entries are deleted. To keep log entries longer, you
need to export them outside of Stackdriver Logging by configuring log sinks.
Ref: https://cloud.google.com/blog/products/gcp/best-practices-for-working-with-google-cloud-audit-
logging
Configure a Cloud Scheduler job to read from Stackdriver and store the logs in BigQuery. Configure
the table expiration to 60 days. is not right.
While this works, it makes no sense to use Cloud Scheduler job to read from Stackdriver and store
the logs in BigQuery when Google provides a feature (export sinks) that does exactly the same thing
and works out of the box.
Ref: https://cloud.google.com/logging/docs/export/configure_export_v2
Create a Stackdriver Logging Export with a Sink destination to Cloud Storage. Create a lifecycle rule
to delete objects after 60 days. is not right.
You can export logs by creating one or more sinks that include a logs query and an export
destination. Supported destinations for exported log entries are Cloud Storage, BigQuery, and
Pub/Sub.
Ref: https://cloud.google.com/logging/docs/export/configure_export_v2
Sinks are limited to exporting log entries from the exact resource in which the sink was created: a
Google Cloud project, organization, folder, or billing account. To export from all the
projects of an organization, you can create an aggregated sink that can export log entries from all the
projects, folders, and billing accounts of a Google Cloud organization.
Ref: https://cloud.google.com/logging/docs/export/aggregated_sinks
Either way, we now have the data in Cloud Storage, but querying log information from Cloud Storage
is harder than querying information from a BigQuery dataset. For this reason, we should prefer
BigQuery over Cloud Storage.
Create a Stackdriver Logging Export with a Sink destination to a BigQuery dataset. Configure the
table expiration to 60 days. is the right answer.
You can export logs by creating one or more sinks that include a logs query and an export
destination. Supported destinations for exported log entries are Cloud Storage, BigQuery, and
Pub/Sub.
Ref: https://cloud.google.com/logging/docs/export/configure_export_v2
Sinks are limited to exporting log entries from the exact resource in which the sink was created: a
Google Cloud project, organization, folder, or billing account. To export from all the
projects of an organization, you can create an aggregated sink that can export log entries from all the
projects, folders, and billing accounts of a Google Cloud organization.
Ref: https://cloud.google.com/logging/docs/export/aggregated_sinks
Either way, we now have the data in a BigQuery dataset. Querying information from a BigQuery
dataset is easier and quicker than analyzing contents in a Cloud Storage bucket. As our requirement is
to quickly analyze the log contents, we should prefer BigQuery over Cloud Storage.
Also, You can control storage costs and optimize storage usage by setting the default table expiration
for newly created tables in a dataset. If you set the property when the dataset is created, any table
created in the dataset is deleted after the expiration period. If you set the property after the dataset is
created, only new tables are deleted after the expiration period.
For example, if you set the default table expiration to 7 days, older data is automatically deleted after
1 week.
Ref: https://cloud.google.com/bigquery/docs/best-practices-storage
Reference: https://cloud.google.com/blog/products/gcp/best-practices-for-working-with-google-cloud-
audit- logging
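A sketch of an aggregated sink to BigQuery, with placeholder organization, project, and dataset names:
gcloud logging sinks create all-projects-logs \
    bigquery.googleapis.com/projects/my-project/datasets/all_logs \
    --organization=123456789012 \
    --include-children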
37.You are deploying an application on Google Cloud that requires a relational database for storage.
To satisfy your company's security policies, your application must connect to your database through
an encrypted and authenticated connection that requires minimal management and integrates with
Identity and Access Management (IAM).
What should you do?
A. Deploy a Cloud SQL database with the SSL mode set to encrypted only, configure SSL/TLS client
certificates, and configure a database user and password.
B. Deploy a Cloud SQL database and configure IAM database authentication. Access the database
through the Cloud SQL Auth Proxy.
C. Deploy a Cloud SQL database with the SSL mode set to encrypted only, configure SSL/TLS client
certificates, and configure IAM database authentication.
D. Deploy a Cloud SQL database and configure a database user and password. Access the database
through the Cloud SQL Auth Proxy.
Answer: B
Explanation:
Cloud SQL Auth Proxy: This proxy ensures secure connections to your Cloud SQL database by
automatically handling encryption (SSL/TLS) and IAM-based authentication. It simplifies the
management of secure connections without needing to manage SSL/TLS certificates manually. IAM
Database Authentication: This allows you to use IAM credentials to authenticate to the database,
providing a unified and secure authentication mechanism that integrates seamlessly with Google
Cloud IAM.
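A sketch of starting the proxy with IAM authentication, assuming the v2 Cloud SQL Auth Proxy binary and a placeholder instance connection name:
./cloud-sql-proxy --auto-iam-authn my-project:us-central1:my-instance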
38. Update your instances’ metadata to add the following value: logs-destination: bq://platform-logs.
B. 1. In Stackdriver Logging, create a logs export with a Cloud Pub/Sub topic called logs as a sink.
39. Configure the Compute Engine instance to use the address of proxy in gce-network as endpoint.
C. 1. In GKE, create a Service of type LoadBalancer that uses the application's Pods as backend.
40.You are working in a team that has developed a new application that needs to be deployed on
Kubernetes. The production application is business critical and should be optimized for reliability. You
need to provision a Kubernetes cluster and want to follow Google-recommended practices.
What should you do?
A. Create a GKE Autopilot cluster. Enroll the cluster in the rapid release channel.
B. Create a GKE Autopilot cluster. Enroll the cluster in the stable release channel.
C. Create a zonal GKE standard cluster. Enroll the cluster in the stable release channel.
D. Create a regional GKE standard cluster. Enroll the cluster in the rapid release channel.
Answer: B
Explanation:
Autopilot reduces operational overhead and is Google's recommended mode for reliability, and the
stable release channel gives new GKE versions more time to be validated before they reach the
cluster, which suits a business-critical workload.
41.Your company is running a three-tier web application on virtual machines that use a MySQL
database. You need to create an estimated total cost of cloud infrastructure to run this application on
Google Cloud instances and Cloud SQL.
What should you do?
A. Use the Google Cloud Pricing Calculator to determine the cost of every Google Cloud resource
you expect to use. Use similar size instances for the web server, and use your current on-premises
machines as a comparison for Cloud SQL.
B. Implement a similar architecture on Google Cloud, and run a reasonable load test on a smaller
scale. Check the billing information, and calculate the estimated costs based on the real load your
system usually handles.
C. Use the Google Cloud Pricing Calculator and select the Cloud Operations template to define your
web application with as much detail as possible.
D. Create a Google spreadsheet with multiple Google Cloud resource combinations. On a separate
sheet, import the current Google Cloud prices and use these prices for the calculations within
formulas.
Answer: A
Explanation:
The Google Cloud Pricing Calculator is the recommended way to build an estimate: size the Compute
Engine instances similarly to your current web servers and use your current database machines as a
baseline for the Cloud SQL configuration. The Pricing Calculator has no "Cloud Operations template"
for defining a web application.
43.You are running out of primary internal IP addresses in a subnet for a custom mode VPC. The
subnet has the IP range 10.0.0.0/20, and the IP addresses are primarily used by virtual machines in
the project. You need to provide more IP addresses for the virtual machines.
What should you do?
A. Change the subnet IP range from 10.0.0.0/20 to 10.0.0.0/22.
B. Change the subnet IP range from 10.0.0.0/20 to 10.0.0.0/18.
C. Add a secondary IP range 10.1.0.0/20 to the subnet.
D. Convert the subnet IP range from IPv4 to IPv6.
Answer: B
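A sketch of the expansion, with placeholder subnet and region names:
gcloud compute networks subnets expand-ip-range my-subnet \
    --region=us-central1 \
    --prefix-length=18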
44.You are building a product on top of Google Kubernetes Engine (GKE). You have a single GKE
cluster. For each of your customers, a Pod is running in that cluster, and your customers can run
arbitrary code inside their Pod. You want to maximize the isolation between your customers’ Pods.
What should you do?
A. Use Binary Authorization and whitelist only the container images used by your customers’ Pods.
B. Use the Container Analysis API to detect vulnerabilities in the containers used by your customers’
Pods.
C. Create a GKE node pool with a sandbox type configured to gvisor. Add the parameter
runtimeClassName: gvisor to the specification of your customers’ Pods.
D. Use the cos_containerd image for your GKE nodes. Add a nodeSelector with the value
cloud.google.com/gke-os-distribution: cos_containerd to the specification of your customers’ Pods.
Answer: C
Explanation:
Reference: https://cloud.google.com/kubernetes-engine/sandbox/
GKE Sandbox provides an extra layer of security to prevent untrusted code from affecting the host
kernel on your cluster nodes when containers in the Pod execute unknown or untrusted code. Multi-
tenant clusters and clusters whose containers run untrusted workloads are more exposed to security
vulnerabilities than other clusters. Examples include SaaS providers, web-hosting providers, or other
organizations that allow their users to upload and run code. When you enable GKE Sandbox on a
node pool, a sandbox is created for each Pod running on a node in that node pool. In addition, nodes
running sandboxed Pods are prevented from accessing other Google Cloud services or cluster
metadata. Each sandbox uses its own userspace kernel. With this in mind, you can make decisions
about how to group your containers into Pods, based on the level of isolation you require and the
characteristics of your applications.
Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/sandbox-pods
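A sketch of a sandboxed node pool (cluster, zone, pool, and machine type names are placeholders):
gcloud container node-pools create sandbox-pool \
    --cluster=my-cluster \
    --zone=us-central1-a \
    --image-type=cos_containerd \
    --machine-type=e2-standard-4 \
    --sandbox=type=gvisor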
45.You want to set up a Google Kubernetes Engine cluster. Verifiable node identity and integrity are
required for the cluster, and nodes cannot be accessed from the internet. You want to reduce the
operational cost of managing your cluster, and you want to follow Google-recommended practices.
What should you do?
A. Deploy a private autopilot cluster
B. Deploy a public autopilot cluster.
C. Deploy a standard public cluster and enable shielded nodes.
D. Deploy a standard private cluster and enable shielded nodes.
Answer: A
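A sketch of a private Autopilot cluster (cluster name and region are placeholders):
gcloud container clusters create-auto my-private-cluster \
    --region=us-central1 \
    --enable-private-nodes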
46.You have developed a containerized web application that will serve internal colleagues during
business hours. You want to ensure that no costs are incurred outside of the hours the application is
used. You have just created a new Google Cloud project and want to deploy the application.
What should you do?
A. Deploy the container on Cloud Run for Anthos, and set the minimum number of instances to zero.
B. Deploy the container on Cloud Run (fully managed), and set the minimum number of instances to
zero.
C. Deploy the container on App Engine flexible environment with autoscaling, and set the value
min_instances to zero in the app.yaml.
D. Deploy the container on App Engine flexible environment with manual scaling, and set the value
instances to zero in the app.yaml.
Answer: B
Explanation:
https://cloud.google.com/kuberun/docs/architecture-overview#components_in_the_default_installation
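A sketch of the deployment (service name, image path, and region are placeholders; the minimum instance count defaults to zero on Cloud Run):
gcloud run deploy internal-app \
    --image=gcr.io/my-project/internal-app \
    --region=us-central1 \
    --min-instances=0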
47. Every employee of your company has a Google account. Your operational team needs to
manage a large number of instances on Compute Engine. Each member of this team needs only
administrative access to the servers. Your security team wants to ensure that the deployment of
credentials is operationally efficient and must be able to determine who accessed a given instance.
What should you do?
A. Generate a new SSH key pair. Give the private key to each member of your team. Configure the
public key in the metadata of each instance.
B. Ask each member of the team to generate a new SSH key pair and to send you their public key.
Use a configuration management tool to deploy those keys on each instance.
C. Ask each member of the team to generate a new SSH key pair and to add the public key to their
Google account. Grant the “compute.osAdminLogin” role to the Google group corresponding to this
team.
D. Generate a new SSH key pair. Give the private key to each member of your team. Configure the
public key as a project-wide public SSH key in your Cloud Platform project and allow project-wide
public SSH keys on each instance.
Answer: C
Explanation:
https://cloud.google.com/compute/docs/instances/managing-instance-access
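A sketch of the role grant for the team's group (project and group names are placeholders):
gcloud projects add-iam-policy-binding my-project \
    --member="group:[email protected]" \
    --role="roles/compute.osAdminLogin"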
48.You need to select and configure compute resources for a set of batch processing jobs. These
jobs
take around 2 hours to complete and are run nightly. You want to minimize service costs.
What should you do?
A. Select Google Kubernetes Engine. Use a single-node cluster with a small instance type.
B. Select Google Kubernetes Engine. Use a three-node cluster with micro instance types.
C. Select Compute Engine. Use preemptible VM instances of the appropriate standard machine type.
D. Select Compute Engine. Use VM instance types that support micro bursting.
Answer: C
Explanation:
If your apps are fault-tolerant and can withstand possible instance preemptions, then preemptible
instances can reduce your Compute Engine costs significantly. For example, batch processing jobs
can
run on preemptible instances. If some of those instances stop during processing, the job slows but
does not completely stop. Preemptible instances complete your batch processing tasks without
placing additional workload on your existing instances and without requiring you to pay full price for
additional normal instances.
https://cloud.google.com/compute/docs/instances/preemptible
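A sketch of creating a preemptible worker (instance name, zone, and machine type are placeholders):
gcloud compute instances create batch-worker-1 \
    --zone=us-central1-a \
    --machine-type=n1-standard-8 \
    --preemptible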
49.You have a large 5-TB AVRO file stored in a Cloud Storage bucket. Your analysts are proficient
only in SQL and need access to the data stored in this file. You want to find a cost-effective way to
complete their request as soon as possible.
What should you do?
A. Load data in Cloud Datastore and run a SQL query against it.
B. Create a BigQuery table and load data in BigQuery. Run a SQL query on this table and drop this
table after you complete your request.
C. Create external tables in BigQuery that point to Cloud Storage buckets and run a SQL query on
these external tables to complete your request.
D. Create a Hadoop cluster and copy the AVRO file to HDFS by compressing it. Load the file in a Hive
table and provide access to your analysts so that they can run SQL queries.
Answer: C
Explanation:
https://cloud.google.com/bigquery/external-data-sources
An external data source is a data source that you can query directly from BigQuery, even though the
data is not stored in BigQuery storage.
BigQuery supports the following external data sources:
Amazon S3
Azure Storage
Cloud Bigtable
Cloud Spanner
Cloud SQL
Cloud Storage
Drive
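A sketch of defining the external table over the Avro file (bucket, dataset, and table names are placeholders):
bq mkdef --source_format=AVRO "gs://my-bucket/large-file.avro" > table_def.json
bq mk --external_table_definition=table_def.json my_dataset.avro_external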
50.You need to set up permissions for a set of Compute Engine instances to enable them to write
data into a particular Cloud Storage bucket. You want to follow Google-recommended practices.
What should you do?
A. Create a service account with an access scope. Use the access scope
‘https://www.googleapis.com/auth/devstorage.write_only’.
B. Create a service account with an access scope. Use the access scope
‘https://www.googleapis.com/auth/cloud-platform’.
C. Create a service account and add it to the IAM role ‘storage.objectCreator’ for that bucket.
D. Create a service account and add it to the IAM role ‘storage.objectAdmin’ for that bucket.
Answer: C
Explanation:
https://cloud.google.com/iam/docs/understanding-service-
accounts#using_service_accounts_with_compute_engine
https://cloud.google.com/storage/docs/access-control/iam-roles
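A sketch of granting the role on the bucket to the instances' service account (service account and bucket names are placeholders):
gsutil iam ch \
    serviceAccount:[email protected]:roles/storage.objectCreator \
    gs://my-upload-bucket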
51.Your auditor wants to view your organization's use of data in Google Cloud. The auditor is most
interested in auditing who accessed data in Cloud Storage buckets. You need to help the auditor
access the data they need.
What should you do?
A. Assign the appropriate permissions, and then use Cloud Monitoring to review metrics
B. Use the export logs API to provide the Admin Activity Audit Logs in the format they want
C. Turn on Data Access Logs for the buckets they want to audit, and Then build a query in the log
viewer that filters on Cloud Storage
D. Assign the appropriate permissions, and then create a Data Studio report on Admin Activity Audit
Logs
Answer: C
Explanation:
Types of audit logs: Cloud Audit Logs provides the following audit logs for each Cloud project, folder,
and organization:
• Admin Activity audit logs
• Data Access audit logs
• System Event audit logs
• Policy Denied audit logs
Data Access audit logs contain API calls that read the configuration or metadata of resources, as well
as user-driven API calls that create, modify, or read user-provided resource data.
https://cloud.google.com/logging/docs/audit#types
https://cloud.google.com/logging/docs/audit#data-access
Cloud Storage: When Cloud Storage usage logs are enabled, Cloud Storage writes usage data to the
Cloud Storage bucket, which generates Data Access audit logs for the bucket. The generated Data
Access audit log has its caller identity redacted.
52.Your company uses a large number of Google Cloud services centralized in a single project. All
teams have specific projects for testing and development. The DevOps team needs access to all of
the production services in order to perform their job. You want to prevent Google Cloud product
changes from broadening their permissions in the future. You want to follow Google-recommended
practices.
What should you do?
A. Grant all members of the DevOps team the role of Project Editor on the organization level.
B. Grant all members of the DevOps team the role of Project Editor on the production project.
C. Create a custom role that combines the required permissions. Grant the DevOps team the custom
role on the production project.
D. Create a custom role that combines the required permissions. Grant the DevOps team the custom
role on the organization level.
Answer: C
Explanation:
Understanding IAM custom roles
Key Point: Custom roles enable you to enforce the principle of least privilege, ensuring that the user
and service accounts in your organization have only the permissions essential to performing their
intended functions.
Basic concepts
Custom roles are user-defined, and allow you to bundle one or more supported permissions to meet
your specific needs. Custom roles are not maintained by Google; when new permissions, features, or
services are added to Google Cloud, your custom roles will not be updated automatically.
When you create a custom role, you must choose an organization or project to create it in. You can
then grant the custom role on the organization or project, as well as any resources within that
organization or project.
https://cloud.google.com/iam/docs/understanding-custom-roles#basic_concepts
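A sketch of creating and granting the custom role (role ID, project, permissions, and group are placeholders):
gcloud iam roles create devOpsProdAccess \
    --project=prod-project \
    --title="DevOps Production Access" \
    --permissions=compute.instances.get,compute.instances.list
gcloud projects add-iam-policy-binding prod-project \
    --member="group:[email protected]" \
    --role="projects/prod-project/roles/devOpsProdAccess"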
53. Peer the two VPCs together.4. Configure the Compute Engine instance to use the address of the
load balancer that has been created.
D. 1. In GKE, create a Service of type LoadBalancer that uses the application's Pods as backend.
54.All development (dev) teams in your organization are located in the United States. Each dev team
has its own Google Cloud project. You want to restrict access so that each dev team can only create
cloud resources in the United States (US).
What should you do?
A. Create a folder to contain all the dev projects. Create an organization policy to limit resources to US
locations.
B. Create an organization to contain all the dev projects. Create an Identity and Access Management
(IAM) policy to limit the resources in US regions.
C. Create an Identity and Access Management (IAM) policy to restrict the resource locations to the
US. Apply the policy to all dev projects.
D. Create an Identity and Access Management (IAM) policy to restrict the resource locations in all
dev projects. Apply the policy to all dev roles.
Answer: A
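A sketch of applying the resource locations constraint to the folder (the folder ID is a placeholder):
gcloud resource-manager org-policies allow gcp.resourceLocations \
    in:us-locations \
    --folder=123456789012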
55.An employee was terminated, but their access to Google Cloud Platform (GCP) was not removed
until 2 weeks later. You need to find out whether this employee accessed any sensitive customer
after their termination.
What should you do?
A. View System Event Logs in Stackdriver. Search for the user’s email as the principal.
B. View System Event Logs in Stackdriver. Search for the service account associated with the user.
C. View Data Access audit logs in Stackdriver. Search for the user’s email as the principal.
D. View the Admin Activity log in Stackdriver. Search for the service account associated with the user.
Answer: C
Explanation:
https://cloud.google.com/logging/docs/audit
Data Access audit logs Data Access audit logs contain API calls that read the configuration or
metadata of resources, as well as user-driven API calls that create, modify, or read user-provided
resource data.
https://cloud.google.com/logging/docs/audit#data-access
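A sketch of querying the Data Access audit logs for the user (project, user email, and lookback window are placeholders):
gcloud logging read \
    'logName="projects/my-project/logs/cloudaudit.googleapis.com%2Fdata_access" AND protoPayload.authenticationInfo.principalEmail="[email protected]"' \
    --project=my-project \
    --freshness=30d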
56.You are the project owner of a GCP project and want to delegate control to colleagues to manage
buckets and files in Cloud Storage. You want to follow Google-recommended practices.
Which IAM roles should you grant your colleagues?
A. Project Editor
B. Storage Admin
C. Storage Object Admin
D. Storage Object Creator
Answer: B
Explanation:
Storage Admin (roles/storage.admin) Grants full control of buckets and objects.
When applied to an individual bucket, control applies only to the specified bucket and objects within
the bucket.
firebase.projects.get
resourcemanager.projects.get
resourcemanager.projects.list
storage.buckets.*
storage.objects.*
https://cloud.google.com/storage/docs/access-control/iam-roles
Ref: https://cloud.google.com/iam/docs/understanding-roles#storage-roles
57. Create an Alerting Policy in Stackdriver that uses the threshold as a trigger condition.
58.You have a Linux VM that must connect to Cloud SQL. You created a service account with the
appropriate access rights. You want to make sure that the VM uses this service account instead of
the default Compute Engine service account.
What should you do?
A. When creating the VM via the web console, specify the service account under the ‘Identity and
API Access’ section.
B. Download a JSON Private Key for the service account. On the Project Metadata, add that JSON as
the value for the key compute-engine-service-account.
C. Download a JSON Private Key for the service account. On the Custom Metadata of the VM, add
that JSON as the value for the key compute-engine-service-account.
D. Download a JSON Private Key for the service account. After creating the VM, ssh into the VM and
save the JSON under ~/.gcloud/compute-engine-service-account.json.
Answer: A
Explanation:
Reference:
https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances
Changing the service account and access scopes for an instance If you want to run the VM as a
different identity, or you determine that the instance needs a different set of scopes to call the
required APIs, you can change the service account and the access scopes of an existing instance.
For example, you can change access scopes to grant access to a new API, or change an instance so
that it runs as a service account that you created, instead of the Compute Engine default service
account. However, Google recommends that you use the fine-grained IAM policies instead of relying
on access scopes to control resource access for the service account. To change an instance's service
account and access scopes, the instance must be temporarily stopped. To stop your instance, read
the documentation for Stopping an instance. After changing the service account or access scopes,
remember to restart the instance. Use one of the following methods to the change service account or
access scopes of the stopped instance.
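If the VM already exists, the service account can also be changed from the CLI after stopping the instance; a sketch with placeholder instance, zone, and service account names:
gcloud compute instances stop my-vm --zone=us-central1-a
gcloud compute instances set-service-account my-vm \
    --zone=us-central1-a \
    --service-account=sql-client@my-project.iam.gserviceaccount.com \
    --scopes=cloud-platform
gcloud compute instances start my-vm --zone=us-central1-a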
59.Your company uses Cloud Storage to store application backup files for disaster recovery
purposes. You want to follow Google’s recommended practices.
Which storage option should you use?
A. Multi-Regional Storage
B. Regional Storage
C. Nearline Storage
D. Coldline Storage
Answer: D
Explanation:
Reference: https://cloud.google.com/storage/docs/storage-classes#coldline
https://cloud.google.com/blog/products/gcp/introducing-coldline-and-a-unified-platform-for-data-storage
Cloud Storage Coldline: a low-latency storage class for long-term archiving. Coldline is a Cloud Storage class designed for long-term archival and disaster recovery. It suits the archival needs of big data or multimedia content, allowing businesses to archive years of data, while still providing fast, millisecond access to that data when it is needed. Because the backup files in this scenario are kept only for disaster recovery and are rarely accessed, Coldline is the most cost-effective choice.
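As an illustrative sketch (the bucket name and location are hypothetical), a bucket for disaster recovery backups can be created with Coldline as its default storage class:
# Create a bucket whose default storage class is Coldline
gsutil mb -c coldline -l us-central1 gs://example-dr-backups
# Equivalent with the newer gcloud storage surface
gcloud storage buckets create gs://example-dr-backups \
    --default-storage-class=COLDLINE --location=us-central1
Objects written without an explicit class then inherit Coldline pricing, which is appropriate for backups that are rarely read.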
60.Your company is moving its continuous integration and delivery (CI/CD) pipeline to Compute
Engine instances. The pipeline will manage the entire cloud infrastructure through code.
How can you ensure that the pipeline has appropriate permissions while your system is following
security best practices?
A. • Add a step for human approval to the CI/CD pipeline before the execution of the infrastructure
provisioning.
• Use the human approvals IAM account for the provisioning.
B. • Attach a single service account to the compute instances.
• Add minimal rights to the service account.
• Allow the service account to impersonate a Cloud Identity user with elevated permissions to create,
update, or delete resources.
C. • Attach a single service account to the compute instances.
• Add all required Identity and Access Management (IAM) permissions to this service account to
create, update, or delete resources
D. • Create multiple service accounts, one for each pipeline with the appropriate minimal Identity and
Access Management (IAM) permissions.
• Use a secret manager service to store the key files of the service accounts.
• Allow the CI/CD pipeline to request the appropriate secrets during the execution of the pipeline.
Answer: B
Explanation:
The best option is to attach a single service account to the compute instances and add minimal rights
to the service account. Then, allow the service account to impersonate a Cloud Identity user with
elevated permissions to create, update, or delete resources. This way, the service account can use
short-lived access tokens to authenticate to Google Cloud APIs without needing to manage service
account keys. This option follows the principle of least privilege and reduces the risk of credential
leakage and misuse.
Option A is not recommended because it requires human intervention for every run, which slows down the CI/CD pipeline and introduces the possibility of human error. Option C is less secure because it grants all required IAM permissions directly to the single attached service account, increasing the blast radius if the instance or its credentials are compromised. Option D is not cost-effective because it requires creating, distributing, and rotating multiple service account key files and operating a secret manager service; exported service account keys are long-lived credentials that Google recommends avoiding where possible.
Reference:
1: https://cloud.google.com/iam/docs/impersonating-service-accounts
2: https://cloud.google.com/iam/docs/best-practices-for-managing-service-account-keys
3: https://cloud.google.com/iam/docs/understanding-service-accounts
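A minimal sketch of the short-lived-credential pattern described above, implemented as service-account-to-service-account impersonation (project and account names are hypothetical):
# Allow the low-privilege pipeline identity to mint short-lived tokens for a
# more privileged deployer service account
gcloud iam service-accounts add-iam-policy-binding \
    deployer@my-project.iam.gserviceaccount.com \
    --member="serviceAccount:pipeline@my-project.iam.gserviceaccount.com" \
    --role="roles/iam.serviceAccountTokenCreator"
# Pipeline steps then run with impersonation instead of exported key files
gcloud compute instances list \
    --impersonate-service-account=deployer@my-project.iam.gserviceaccount.com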
61.Your team maintains the infrastructure for your organization. The current infrastructure requires
changes. You need to share your proposed changes with the rest of the team. You want to follow
Google’s recommended best practices.
What should you do?
A. Use Deployment Manager templates to describe the proposed changes and store them in a Cloud
Storage bucket.
B. Use Deployment Manager templates to describe the proposed changes and store them in Cloud
Source Repositories.
C. Apply the change in a development environment, run gcloud compute instances list, and then save
the output in a shared Storage bucket.
D. Apply the change in a development environment, run gcloud compute instances list, and then save
the output in Cloud Source Repositories.
Answer: B
Explanation:
Deployment Manager templates let you describe the proposed infrastructure changes as code. Storing those templates in Cloud Source Repositories lets the team review and track the changes before they are applied. Cloud Source Repositories are fully featured, scalable, private Git repositories you can use to store, manage, and track changes to your code.
https://cloud.google.com/source-repositories/docs/features
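For example (repository, file, and deployment names are hypothetical), the proposed change can be reviewed as code in Cloud Source Repositories and then applied with Deployment Manager:
# Version the templates in a private Git repository for team review
gcloud source repos create infra-templates
gcloud source repos clone infra-templates && cd infra-templates
git add config.yaml vm-template.jinja
git commit -m "Propose instance type change" && git push origin master
# After review, apply the reviewed configuration
gcloud deployment-manager deployments create proposed-change --config=config.yaml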
62.Your digital media company stores a large number of video files on-premises. Each video file
ranges from 100 MB to 100 GB. You are currently storing 150 TB of video data in your on-premises
network, with no room for expansion. You need to migrate all infrequently accessed video files older
than one year to Cloud Storage to ensure that on-premises storage remains available for new files.
You must also minimize costs and control bandwidth usage.
What should you do?
A. Create a Cloud Storage bucket. Establish an Identity and Access Management (IAM) role with
write permissions to the bucket. Use the gsutil tool to directly copy files over the network to Cloud
Storage.
B. Set up a Cloud Interconnect connection between the on-premises network and Google Cloud.
Establish a private endpoint for Filestore access. Transfer the data from the existing Network File
System (NFS) to Filestore.
C. Use Transfer Appliance to request an appliance. Load the data locally, and ship the appliance
back to Google for ingestion into Cloud Storage.
D. Use Storage Transfer Service to move the data from the selected on-premises file storage systems
to a Cloud Storage bucket.
Answer: D
Explanation:
Let's analyze each option:
A. Using gsutil: While gsutil can transfer data to Cloud Storage, for 150 TB of infrequently accessed
data, direct transfer over the network might be slow and consume significant bandwidth, potentially
impacting other network operations. It also lacks built-in mechanisms for filtering files based on age.
B. Using Cloud Interconnect and Filestore: Cloud Interconnect provides a dedicated connection, but Filestore is a fully managed NFS service primarily designed for high-performance file sharing for applications running in Google Cloud. Migrating 150 TB of infrequently accessed data to Filestore would be cost-inefficient compared to Cloud Storage and does not directly address the requirement of moving only files older than one year.
C. Using Transfer Appliance: Transfer Appliance is suitable for very large datasets (petabytes) or when network connectivity is poor or unreliable. While it addresses bandwidth concerns, it involves shipping a physical appliance and is likely overkill for 150 TB of data if network connectivity is reasonable.
D. Using Storage Transfer Service: Storage Transfer Service is specifically designed for moving large
amounts of data between online storage systems, including on-premises file systems and Cloud
Storage. It offers features like filtering by file age, scheduling transfers, and bandwidth control, directly
addressing all the requirements of the question: migrating infrequently accessed files older than one
year to Cloud Storage, minimizing costs (by using appropriate Cloud Storage classes for infrequent
access), and controlling bandwidth usage.
Google Cloud Documentation Reference:
Storage Transfer Service Overview: https://cloud.google.com/storage-transfer-service/docs/overview - This page details the capabilities and use cases of Storage Transfer Service,
including transferring from on-premises.
Storage Transfer Service for on-premises data: https://cloud.google.com/storage-transfer-service/docs/on-prem-overview - This specifically covers transferring data from on-premises file
systems.
Cloud Storage Classes: https://cloud.google.com/storage/docs/storage-classes - Understanding the
different storage classes (Standard, Nearline, Coldline, Archive) is crucial for cost optimization of
infrequently accessed data. Storage Transfer Service can be configured to move data to a cost-
effective class like Nearline or
Coldline.
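A rough sketch of such a transfer job is shown below (the agent pool, paths, bucket, and cutoff date are hypothetical, and the age-filtering flag name is an assumption to verify against the gcloud transfer reference): agents installed on-premises read the POSIX file system, and the job copies only files last modified before the cutoff.
# Create an agent pool for the on-premises agents; bandwidth can be capped at the
# pool level (check the current agent-pools flags for the exact limit option).
gcloud transfer agent-pools create on-prem-pool
# Create the transfer job from the on-premises file system to Cloud Storage.
# NOTE: --include-modified-before-absolute is assumed here; confirm the current
# gcloud transfer jobs create flag for age-based filtering.
gcloud transfer jobs create posix:///mnt/video-archive gs://example-video-archive \
    --source-agent-pool=on-prem-pool \
    --include-modified-before-absolute=2024-01-01T00:00:00Z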
63.You need to enable traffic between multiple groups of Compute Engine instances that are currently running in two different GCP projects. Each group of Compute Engine instances is running in its own VPC.
What should you do?
A. Verify that both projects are in a GCP Organization. Create a new VPC and add all instances.
B. Verify that both projects are in a GCP Organization. Share the VPC from one project and request
that the Compute Engine instances in the other project use this shared VPC.
C. Verify that you are the Project Administrator of both projects. Create two new VPCs and add all
instances.
D. Verify that you are the Project Administrator of both projects. Create a new VPC and add all
instances.
Answer: B
Explanation:
Shared VPC allows an organization to connect resources from multiple projects to a common Virtual
Private Cloud (VPC) network, so that they can communicate with each other securely and efficiently
using internal IPs from that network. When you use Shared VPC, you designate a project as a host
project and attach one or more other service projects to it. The VPC networks in the host project are
called Shared VPC networks. Eligible resources from service projects can use subnets in the Shared
VPC network.
https://cloud.google.com/vpc/docs/shared-vpc
"For example, an existing instance in a service project cannot be reconfigured to use a Shared VPC
network, but a new instance can be created to use available subnets in a Shared VPC network."
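The corresponding setup is only a few commands (project IDs, zone, and subnet names are hypothetical): enable Shared VPC on the host project, attach the service project, then create new instances in the service project on a shared subnet.
# Designate the host project and attach the service project (requires the Shared VPC Admin role)
gcloud compute shared-vpc enable host-project-id
gcloud compute shared-vpc associated-projects add service-project-id \
    --host-project=host-project-id
# New instances in the service project can then use a subnet shared from the host project
gcloud compute instances create app-vm --project=service-project-id --zone=us-central1-a \
    --subnet=projects/host-project-id/regions/us-central1/subnetworks/shared-subnet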
64.You are creating an application that will run on Google Kubernetes Engine. You have identified
MongoDB as the most suitable database system for your application and want to deploy a managed
MongoDB environment that provides a support SLA.
What should you do?
A. Create a Cloud Bigtable cluster and use the HBase API
B. Deploy MongoDB Atlas from the Google Cloud Marketplace
C. Download a MongoDB installation package and run it on Compute Engine instances
D. Download a MongoDB installation package, and run it on a Managed Instance Group
Answer: B
Explanation:
MongoDB Atlas is MongoDB's fully managed database-as-a-service offering. Deploying it from the Google Cloud Marketplace gives you a managed MongoDB environment backed by a commercial support SLA, which the other options (self-managed installations on Compute Engine, or Bigtable with the HBase API) do not provide.
https://console.cloud.google.com/marketplace/details/gc-launcher-for-mongodb-atlas/mongodb-atlas
65.You are migrating your company’s on-premises compute resources to Google Cloud. You need to
deploy batch processing jobs that run every night. The jobs require significant CPU and memory for
several hours but can tolerate interruptions. You must ensure that the deployment is cost-effective.
What should you do?
A. Containerize the batch processing jobs and deploy them on Compute Engine.
B. Use custom machine types on Compute Engine.
C. Use the M1 machine series on Compute Engine.
D. Use Spot VMs on Compute Engine.
Answer: D
Explanation:
Spot VMs (formerly known as preemptible VMs) are Compute Engine virtual machine instances that
are available at a much lower price than standard Compute Engine instances. However, Compute
Engine might preempt (stop) these instances if it needs to reclaim those resources for other tasks.
This makes Spot VMs ideal for batch processing jobs that are fault-tolerant and can handle
interruptions, as they can be restarted when resources become available again. This directly
addresses the requirement for a cost-effective solution for interruptible workloads.
Option A: While containerization offers portability and consistency, it doesn't inherently provide cost
savings for compute resources. You would still need to choose a cost-effective underlying compute
option.
Option B: Custom machine types allow you to tailor the CPU and memory configuration of your VMs,
which can optimize costs to some extent by avoiding over-provisioning. However, they don't offer the
significant cost reduction that Spot VMs provide.
Option C: The M1 machine series is a specific family of Compute Engine instances optimized for
memory-intensive workloads. While potentially suitable for the job's requirements, it doesn't inherently
address the cost-effectiveness requirement as directly as Spot VMs, which are priced lower
regardless of the machine series.
Reference to Google Cloud Certified - Associate Cloud Engineer Documents:
The concept and use cases for Spot VMs are explicitly covered in the Compute Engine section of the
Google Cloud documentation, which is a key area for the Associate Cloud Engineer certification. The
cost savings and suitability for fault-tolerant workloads are highlighted as primary benefits.
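For illustration (the instance name, zone, and machine type are hypothetical), a Spot VM for the nightly batch workload can be requested with the provisioning model flag; the termination action controls what happens when Compute Engine preempts the instance:
# Create an interruptible, high-memory worker for the nightly batch jobs
gcloud compute instances create batch-worker-1 --zone=us-central1-a \
    --machine-type=n2-highmem-16 \
    --provisioning-model=SPOT \
    --instance-termination-action=STOP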