Acloud Guru - GCP Associate Engineer - Test 2

This document presents the results of a practice test containing questions about Google Cloud services and concepts. It provides explanations for the answers to each multiple choice question, covering topics such as project access management, Kubernetes cluster configuration, Cloud Storage classes, and enabling APIs.

Practice Test 2 - Results

Attempt 1
All questions
Question 1: Skipped
When should you add new users to your projects?

On the weekends, to minimize the effects of downtime

When the new user is available to enter their credentials on your computer

At the end of the billing cycle of the linked billing account

On weekdays so that Google Support personnel can respond to your queries

Whenever the new user should begin having access to the project

(Correct)

Explanation
Changing project authorization does not involve any downtime. Changing project
authorization does not require any Google Support involvement. Changing project
authorization does not have anything to do with a billing cycle. You can grant and revoke
access for a user as long as you know their email address; they don't need to log in.
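As a sketch of how this works in practice (the project ID and user email here are hypothetical), granting access is a single IAM binding:
`gcloud projects add-iam-policy-binding my-project --member=user:newuser@example.com --role=roles/viewer`
Revoking is the mirror-image `remove-iam-policy-binding` command, so access can be granted or removed at any moment, with no downtime.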
Question 2: Skipped
You have a GKE cluster that has fluctuating load over the course of each day and you
would like to reduce costs. What should you do?

Run `gcloud container clusters resize mycluster --size=auto`.


In the GCE console, add the nodes to an unmanaged instance group.

In the GKE console, edit the cluster and enable cluster autoscaling.

(Correct)

In the GCE console, add the nodes to a managed instance group.

Write a script to recreate the cluster as demand changes.


Explanation
Clusters are editable, not immutable, and should not be recreated because of changes in
demand. You cannot manage GKE nodes with your own instance groups--and you can’t
migrate nodes into a managed instance group, anyway. You cannot enable cluster autoscaling
with the `resize` command, but you can turn that option on in the console or using the
command `gcloud container clusters update CLUSTER_NAME --enable-autoscaling`.
https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture
https://cloud.google.com/kubernetes-engine/docs/how-to/resizing-a-cluster
https://cloud.google.com/sdk/gcloud/reference/container/clusters/resize
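As a minimal sketch of that update command with its usual companion flags (cluster name, node pool, zone, and node counts are all assumptions here):
`gcloud container clusters update mycluster --enable-autoscaling --min-nodes=1 --max-nodes=5 --node-pool=default-pool --zone=us-central1-a`
GKE then adds or removes nodes within that range as the scheduled pods' resource requests fluctuate, which is what reduces costs under a fluctuating load.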
Question 3: Skipped
You need to allow writing objects to a particular GCS bucket. Which of the following
would be the best way to grant these permissions?

roles/editor

roles/storage.objectAdmin

roles/storage.legacyBucketWriter
(Correct)

None of the other options will work

roles/source.writer
Explanation
The source.writer role is related to Source Repository (i.e. git hosting). The
storage.objectAdmin and project editor roles are too powerful, when we only need to allow
write access. (Plus, the access granted by the project editor role is revocable; see the linked
documentation.) It’s worth reading the documentation on how the “Legacy” roles interact
with GCS ACLs. https://cloud.google.com/iam/docs/understanding-roles
https://cloud.google.com/storage/docs/access-control/iam
https://cloud.google.com/storage/docs/access-control/iam-roles
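As an illustrative sketch (the user email and bucket name are made up), the legacy role can be granted on a single bucket with gsutil:
`gsutil iam ch user:contractor@example.com:legacyBucketWriter gs://mybucket`
This grants write access at the bucket level only, rather than project-wide as roles/editor would.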
Question 4: Skipped
When should new projects be created?

On the weekends, to minimize the effects of downtime.

On weekdays so that Google Support personnel can respond to your queries.

Whenever the new project is needed.

(Correct)

When the new project owner is available to enter their credentials on your computer.

At the end of the billing cycle of the linked billing account.


Explanation
Project creation does not involve any downtime. Google Support does not need to be
involved in creating new projects. You can create a project and transfer ownership of it, if
you need. Billing accounts can be linked and unlinked from projects and do not have to sync
up with project lifetimes.
Question 5: Skipped
You need to store some structured data and query and continually update it with SQL
from your web app backend. The data volume and query load are reasonably
consistent and you would like to reduce ongoing maintenance and management costs.
Which option would best serve these requirements?

MySQL on GCE

BigQuery

Cloud Bigtable

Cloud Storage

None of the other options is appropriate

Cloud SQL

(Correct)

Explanation
Cloud Storage is for unstructured data and does not support SQL. BigQuery is made for
mostly-static analytics situations--not continually updated data as indicated in the scenario--
and a web app backend may need lower latency than BigQuery offers. Bigtable is made for
low-latency analytics situations. Managing your own MySQL installation on GCE would be a
lot more work than using Cloud SQL. Cloud SQL is a good fit for the described situation.
https://medium.com/google-cloud/a-gcp-flowchart-a-day-2d57cc109401
Question 6: Skipped
You currently have 850TB of Closed-Circuit Television (CCTV) capture data and are
adding new data at a rate of 80TB/month. The rate of data captured and needing to be
stored is expected to grow to 200TB/month within one year because new locations are
being added, each with 4-10 cameras. Archival data must be stored indefinitely, and as
inexpensively as possible. The users of your system currently need to access 250TB of
current-month footage and 100GB of archival footage, and access rates are expected to
grow linearly with data volume. Which of the following storage options best suits this
purpose?

Store new data as Multi-Regional and then use Lifecycle Management to transition it to Regional
after 30 days.

Store new data as Multi-Regional and then use Lifecycle Management to transition it to Nearline
after 30 days.

Always keep all data stored as Multi-Regional, because access volume is high.

Store new data as Regional and then use Lifecycle Management to transition it to Coldline after
30 days.

(Correct)

Immediately store all data as Coldline, because the access volume is low.
Explanation
Data cannot be transitioned from Multi-Regional to Regional through Lifecycle Management;
that would change the location. The access rate for new data is 250/80--so quite high--but
archival data access is very low (100/850000). Because of this, we need to start with
Regional or Multi-Regional and should transition to Coldline to meet the “as inexpensively as
possible” requirement. https://cloud.google.com/storage/pricing
https://cloud.google.com/storage/docs/storage-classes
https://cloud.google.com/storage/docs/lifecycle
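A minimal sketch of the matching lifecycle configuration (the bucket name is assumed) is a one-rule JSON file, say lifecycle.json:
`{"rule": [{"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"}, "condition": {"age": 30}}]}`
applied with `gsutil lifecycle set lifecycle.json gs://cctv-footage`. After 30 days, each object transitions to Coldline automatically, with no further management needed.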
Question 7: Skipped
You are working together with a contractor from the Acme company and you need to
allow GCE instances running in one of Acme’s GCP projects to write to a Cloud
Pub/Sub topic you own. Which of the following pieces of information are enough to let
you enable that access?

The email address of the Acme contractor

The Acme GCP project’s name

The Acme GCP project’s project ID

The email address of the Acme project service account

(Correct)

The Acme GCP project’s project number

(Correct)

Explanation
You need to grant access to the service account being used by Acme’s GCE instances, not the
contractor, so you don’t care about the contractor’s email address. If you are given the service
account email address, you’re done; that’s enough. If you need to use the pattern to construct
the email address, you’ll need to know the Project Number (not its ID, unlike for App
Engine!) to construct the email address used by the default GCE service account:
`PROJECT_NUMBER-compute@developer.gserviceaccount.com`. If Acme wants to use a
different service account than the default one, they’d need to give you more than is listed in
the response options--both the Project ID (not its number, this time!) and also the name they
gave to the service account:
SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com
https://cloud.google.com/iam/docs/service-accounts
https://cloud.google.com/iam/docs/understanding-service-accounts
https://cloud.google.com/iam/docs/granting-roles-to-service-accounts
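To make this concrete, a hedged sketch of the grant (the topic name and project number are placeholders) would be:
`gcloud pubsub topics add-iam-policy-binding mytopic --member=serviceAccount:123456789-compute@developer.gserviceaccount.com --role=roles/pubsub.publisher`
This binds the publisher role on your topic directly to Acme’s default GCE service account.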
Question 8: Skipped
Google has just released a new XYZ service and you would like to try it out in your pre-
existing skunkworks project. How can you enable the XYZ API in the fewest number
of steps?

Open Cloud Shell, run `gcloud services enable xyz`.

Since you have Gold-level support on this project, phone support to enable XYZ.

Open Cloud Shell, run `gcloud services enable xyz.googleapis.com`.

(Correct)

Do nothing. It is enabled by default.

Open Cloud Shell, configure authentication, select the “defaults” project, run `gcloud enable xyz
service`.

Open Cloud Shell, configure authentication, run `gcloud services enable xyz.googleapis.com`.

Since you have Silver-level support on your linked billing account, email support to enable XYZ.
Explanation
Google does not generally enable new services by default for existing projects. Cloud Shell
does not require you to configure authentication. GCP Support does not get involved with
things like enabling APIs for you; that's something you simply do for yourself. The API URL
in the gcloud command to enable it includes `googleapis.com`.
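You can confirm the result afterwards; for example (with `xyz` standing in for the real service name):
`gcloud services list --enabled --filter=xyz`
lists the enabled services in the current project that match the filter.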
Question 9: Skipped
You are planning to use GPUs for your system on GCP. Which of the following
statements is true about using the pricing calculator for this situation?


GPUs are always entered on the GPU tab.

GPUs are always entered on the GCE tab.

None of the other options is correct.

GPUs can be entered on any of the GCE, GKE, and GAE tabs.

GPUs can be entered on both the GCE and GKE tabs.

(Correct)

Explanation
The pricing calculator does not have a GPU tab. App Engine doesn’t support GPUs. GPUs
can be entered on the GKE tab. GPUs can be entered on the GCE tab.
https://cloud.google.com/products/calculator/
Question 10: Skipped
You need to host a legacy accounting process on SUSE Linux Enterprise Server 12 SP3.
Which of the following is the best option for this?

BQ

CF

GCE

(Correct)

GKE

GAE
Explanation
If you aren’t familiar with all the common service abbreviations, you could get tripped up.
You cannot choose the OS on Cloud Functions, App Engine, or Kubernetes Engine.
BigQuery is not a compute service. Note that the linked flowchart is a little out of date in that
GKE does now support GPUs. https://medium.com/google-cloud/a-gcp-flowchart-a-day-2d57cc109401
https://cloud.google.com/blog/products/gcp/accelerate-highly-parallelized-compute-tasks-with-gpus-in-kubernetes-engine
Question 11: Skipped
What is the easiest way to clone a project?

Navigate to the project creation screen in the console and in the Clone From Project dropdown,
select any project linked to the same billing account as the new project.

Open a support request to clone it and wait 2-5 days for it to be completed.

Run `gcloud projects clone --fromproject oldprojid --toproject newprojid`

There is no general way to automatically clone a project. You must handle each resource
separately.

(Correct)

Navigate to the project creation screen in the console and in the Clone From Project dropdown,
select any project for which you are a project administrator.
Explanation
There’s no automatic functionality for this, and Google Support cannot and will not get
involved in such an undertaking.
Question 12: Skipped
When comparing `n1-standard-8`, `n1-highcpu-8`, and `n1-highmem-16`, which of the
following statements are true?

The `n1-standard-8` has the least RAM

The `n1-highcpu-8` and `n1-highmem-16` cost about the same amount.

The `n1-highmem-16` has the most CPUs

(Correct)

The `n1-highmem-16` has the most RAM

(Correct)

The `n1-highmem-16` has the least CPUs


Explanation
The number at the end of the machine type indicates how many CPUs it has, and the type
tells you where in the range of allowable RAM that machine falls--from minimum (highcpu)
to balanced (standard) to maximum (highmem). The cost of each machine type is determined
by how much CPU and RAM it uses. Understanding that is enough to correctly answer this
question. https://cloud.google.com/compute/docs/machine-types
https://cloud.google.com/compute/pricing#pricing
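If you want to verify rather than memorize, the CPU and RAM of any predefined type can be inspected directly (the zone here is chosen arbitrarily):
`gcloud compute machine-types describe n1-highmem-16 --zone=us-central1-a --format="value(guestCpus,memoryMb)"`
which reports 16 CPUs and the highmem RAM allotment for that CPU count.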
Question 13: Skipped
You are planning a log analysis system to be deployed on GCP. Which of the following
would be the best way to ingest the logs?

BigTable

Cloud Pub/Sub

Cloud Storage

Activity Log

Stackdriver Logging

(Correct)

Explanation
Stackdriver Logging is perfect for accepting many logs, and is a better choice than Cloud
Pub/Sub for the initial ingestion. It can then send logs to Cloud Storage for archiving and/or
send them to Cloud Pub/Sub for streaming to something like Cloud Dataflow.
https://cloud.google.com/logging/ http://gcp.solutions/diagram/Log%20Processing
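As a sketch of the archiving step described above (the sink and bucket names are made up), a Stackdriver Logging export is just a sink:
`gcloud logging sinks create my-archive-sink storage.googleapis.com/my-log-archive-bucket --log-filter='resource.type="gce_instance"'`
A similar sink pointing at a Pub/Sub topic would feed the streaming path instead.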
Question 14: Skipped
When comparing `n1-standard-8`, `n1-highcpu-8`, and `n1-highmem-16`, which of the
following statements are true?

The `n1-highmem-16` has the most RAM

(Correct)

The `n1-highmem-16` has the least CPUs

The `n1-highcpu-8` costs less than the `n1-highmem-16`.

(Correct)

The `n1-standard-8` has the least RAM

The `n1-highcpu-8` and `n1-highmem-16` cost about the same amount.

The `n1-highcpu-8` costs more than the `n1-highmem-16`.

The `n1-highmem-16` has the most CPUs

(Correct)

Explanation
The number at the end of the machine type indicates how many CPUs it has, and the type
tells you where in the range of allowable RAM that machine falls--from minimum (highcpu)
to balanced (standard) to maximum (highmem). The cost of each machine type is determined
by how much CPU and RAM it uses. Understanding that is enough to correctly answer this
question. https://cloud.google.com/compute/docs/machine-types
https://cloud.google.com/compute/pricing#pricing
Question 15: Skipped
You need to visualize costs associated with a system you’ve been running on GCP.
Which of the following is the best tool for this?

Data Studio

(Correct)

Cloud Billing API

Google Sheets

GCP Pricing Calculator

Cloud Pricing API


Explanation
The Billing API is about managing billing accounts, not about seeing or predicting costs. The
Pricing API (actually, it’s the Catalog API) can give you information about what things
would cost, but it would be a lot of work to turn that info into a system-level analysis. The
pricing calculator is a critical tool to be familiar with, so make sure you study it before your
exam, but it is not about historical costs. Data Studio is a great tool for visualizing historical
costs. https://cloud.google.com/billing/docs/how-to/visualize-data
https://cloud.google.com/products/calculator/ https://cloud.google.com/billing/reference/rest/
https://cloud.google.com/billing/v1/how-tos/catalog-api
Question 16: Skipped
You already installed and configured `gcloud` for use on your work computer (not
Cloud Shell). What do you need to do so you can also use `gsutil` and `bq`?

Run `gcloud config export gsutil` and `gcloud config export bq`.

Run `gcloud config export storage` and `gcloud config export query`.

Configure those tools independently.

Run `gsutil config import gcloud` and `bq config import gcloud`.

Nothing

(Correct)
Explanation
These tools all share their configuration, which is managed by gcloud.
https://cloud.google.com/storage/docs/gsutil/commands/config
https://cloud.google.com/sdk/docs/initializing
https://cloud.google.com/bigquery/docs/bq-command-line-tool
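You can see the shared configuration that all three tools read; for instance:
`gcloud config list`
shows the active account and project, and those same values are what `gsutil` and `bq` pick up automatically.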
Question 17: Skipped
How many projects can you create?

There are no limits on creating new projects

As many as allowed by your quota

(Correct)

A maximum of five per month

As many as Google Support will make for you

A maximum of one per five minutes

A maximum of five per second

It doesn't matter, as you should really only need one


Explanation
You do have a quota for the total number of projects you can have at once.
Question 18: Skipped
What will happen if a running GKE node encounters a fatal error?

GKE will automatically restart the node in an available deployment.

GKE will automatically restart that node on an available pod.

You can tell GKE to restart the node in an available deployment.

GKE will automatically restart that node on an available GCE host.

(Correct)

GKE nodes are immutable and cannot encounter fatal errors.


Explanation
Nodes are GCE instances managed by the GKE system. If one of the nodes dies, GKE will
bring another node up to replace it and will ensure that any affected pods are restarted.
https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture
Question 19: Skipped
Which of the following statements is true?

None of the other statements is true.

(Correct)

You must specify a Service Account when creating an instance or none will be attached.

Every instance must have a Service Account attached to it.


Service Accounts should be used by GKE nodes and pods but not by GCE instances.
Explanation
If you don't do (or specify) anything, the default service account will be attached by default to
each new GCE instance. However, you can stop that from happening by either deleting the
default service account or opting out of attaching it when you are creating a new GCE
instance. https://cloud.google.com/iam/docs/understanding-service-accounts
Question 20: Skipped
Which of the following are true about a newly-created project?

None of the other statements is true

(Correct)

Since BigQuery is enabled by default, charges will immediately accrue until you shut it off

The free tier lasts 30 days

It cannot be used until the organization owner has completed the approval form

The free tier lasts one year


Explanation
Projects are separate from billing accounts and their free tier. BigQuery is enabled by default,
but it starts empty and has no charge until you use it. There is no approval form for new
projects.
Question 21: Skipped
You need to make sure a GCE instance can access other services in GCP. Which of the
following are Google-recommended practices for enabling this?


Use Account Cross Access to authorize requests that originate from the instance.

Generate an SSH key for the instance using gcloud or keygen.

Access the token via the metadata service.

(Correct)

Hash and salt all passwords transferred to the instance.

Securely log onto the account to enter the required credentials.

Grant a service account access to the required resources.

(Correct)

Explanation
The Google-recommended approach is to grant a service account access to the required resources and have programs on the instance obtain that account's token from the metadata service. SSH keys, manually entered credentials, and hashed passwords are not how instances authenticate to other GCP services, and “Account Cross Access” is not a real GCP feature.
https://cloud.google.com/iam/docs/understanding-service-accounts
https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations
https://cloud.google.com/iam/docs/granting-roles-to-service-accounts
Question 22: Skipped
You are thinking through all the things that happen when a Compute Engine instance
starts up with a startup script that installs the Stackdriver agent and runs gsutil to
retrieve a large amount of data from Cloud Storage. Of the following steps, which is the
last one to happen?

Space is reserved on a host machine.


The gcloud command to start the instance completes.

The instance goes into the Running state.

(Correct)

The service account is created.


Explanation
Whether or not it is the default service account, the service account must exist before it can
be attached to the instance. After a request to create a new instance has been accepted and
while space is being found on some host machine, that instance starts in the Provisioning
state. After space has been found and reserved on a host machine, the instance state goes to
Staging while the host prepares to run it and sorts out things like the network adapter that will
be used. Immediately when the VM is powered on and the OS starts booting up, the instance
is considered to be Running. That's when gcloud completes, if it was run without `--async`.
https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances
https://cloud.google.com/compute/docs/instances/checking-instance-status
https://cloud.google.com/sdk/gcloud/reference/compute/instances/create
Question 23: Skipped
In Cloud Shell, you run the command `gcloud compute instances list`, and the response
that you see is `HTTPError 403: Access Not Configured.`. What is a likely explanation
for this error message?

This Cloud Shell instance does not have read access to any of the currently running instances.

The GCE API has not yet been enabled for this Cloud Shell instance.

Your user account does not have read access to any of the currently running instances.

The startup script for this Cloud Shell instance has not yet finished running.

The GCE API has not yet been enabled for this account.

The GCE API has not yet been enabled for this project.

(Correct)

Explanation
APIs must be enabled at the project level, and 403 can indicate that that has not yet been
done.
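Assuming that diagnosis is right, the fix is one command (the Compute Engine API's service name is compute.googleapis.com):
`gcloud services enable compute.googleapis.com`
after which the `instances list` call should succeed, permissions permitting.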
Question 24: Skipped
You need to store a large amount of unstructured data, including video, audio, image,
and text files. The data volume is expected to double every 18 months and data access is
sporadic and often clustered on a small portion of the overall data. You would like to
reduce ongoing maintenance and management costs. Which option would best serve
these requirements?

Cloud Storage

(Correct)

None of the other options is appropriate

Cloud SQL

BigQuery

Cloud Bigtable

MySQL on GCE
Explanation
Cloud Storage is perfect for unstructured data like this. BigQuery is made for analytics of
structured data. Bigtable is made for low-latency analytics of structured data. Cloud SQL is
not a good tool to store unstructured data like this and managing your own MySQL
installation on GCE would be even worse. https://medium.com/google-cloud/a-gcp-flowchart-a-day-2d57cc109401
Question 25: Skipped
You have a GKE cluster that currently has six nodes but has lots of idle capacity. What
should you do?

In the GCE console, delete one of the nodes.

Clusters are immutable so simply create a new cluster for the smaller workload.

Run `gcloud container clusters resize mycluster --size=5`.

(Correct)

Nothing. GKE is always fully managed and will scale down by default.

In the GCE console, terminate one of the nodes.


Explanation
Clusters are editable, not immutable, and should not be recreated because of changes in
demand. Cluster autoscaling is an optional setting. You do not manage nodes via GCE directly--you always manage them through GKE, even though you can see them via GCE.
https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture
https://cloud.google.com/kubernetes-engine/docs/how-to/resizing-a-cluster
https://cloud.google.com/sdk/gcloud/reference/container/clusters/resize
Question 26: Skipped
You currently have 850TB of Closed-Circuit Television (CCTV) capture data and are
adding new data at a rate of 80TB/month. The rate of data captured and needing to be
stored is expected to grow to 200TB/month within one year because new locations are
being added, each with 4-10 cameras. Archival data must be stored indefinitely, and as
inexpensively as possible. The users of your system currently need to access 60GB of
current-month footage and 50GB of archival footage, and access rates are expected to
grow linearly with data volume. Which of the following storage options best suits this
purpose?

Immediately store all data as Coldline, because the access volume is low.

(Correct)

Always keep all data stored as Multi-Regional, because access volume is high.

Store new data as Regional and then use Lifecycle Management to transition it to Coldline after
30 days.

Store new data as Multi-Regional and then use Lifecycle Management to transition it to Regional
after 30 days.

Store new data as Multi-Regional and then use Lifecycle Management to transition it to Nearline
after 30 days.
Explanation
Data cannot be transitioned from Multi-Regional to Regional through Lifecycle Management;
that would change the location. The access rate for new data is 60/80000--so very low--and
archival data access is even lower (50/850000). Because of this, the most cost-effective
option is also the simplest one: just use Coldline for everything.
https://cloud.google.com/storage/pricing https://cloud.google.com/storage/docs/storage-classes
https://cloud.google.com/storage/docs/lifecycle
Question 27: Skipped
You are currently creating instances with `gcloud compute instances create myvm --machine-type=n1-highmem-8`. This is good but you would just like a bit more RAM. Which of the following replacements would be the most cost effective?

`gcloud compute instances create myvm --custom-cpu=2 --custom-memory=10`

`gcloud compute instances create myvm --custom-cpu=10 --custom-memory=60`

(Correct)

`gcloud compute instances create myvm --custom-cpu=1 --custom-memory=10`

`gcloud compute instances create myvm --machine-type=n1-highmem-16`

`gcloud compute instances create myvm --machine-type=n1-highmem-10`

`gcloud compute instances create myvm --custom-cpu=8 --custom-memory=60`

`gcloud compute instances create myvm --machine-type=n1-highcpu-16`


Explanation
For reference, the n1-highmem-8 has 8 CPUs and 52 GB of memory, but you do NOT need
to remember this. Just remember that predefined machine types are named by their CPU
counts and those are always powers of two--so `n1-highmem-10` is invalid. Custom machine
types let you tweak the predefined types, but you can’t add more RAM per CPU than you get
with the `highmem` machine types unless you use “Extended Memory”. But since the 8-CPU
custom type option does not include `--custom-extensions`, it doesn’t get Extended Memory
and the command won’t work. Since you’ll need to add more CPUs, you could go to `n1-highmem-16`--but a custom machine type with only 10 CPUs will be less expensive than that.
https://cloud.google.com/compute/docs/instances/creating-instance-with-custom-machine-type
https://cloud.google.com/compute/docs/instances/creating-instance-with-custom-machine-type#extendedmemory
https://cloud.google.com/compute/docs/machine-types
https://cloud.google.com/compute/pricing#pricing
Question 28: Skipped
You are thinking through all the things that happen when a Compute Engine instance
starts up with a startup script that installs the Stackdriver agent and runs gsutil to
retrieve a large amount of data from Cloud Storage. Of the following steps, which is the
last one to happen?

Data retrieval from GCS completes.

(Correct)

The metadata service returns information about this instance to the first requestor.

The instance startup script begins.

Stackdriver Logging shows the first log lines from the startup script
Explanation
Immediately when the VM is powered on and the OS starts booting up, the instance is
considered to be Running. That's when gcloud completes, if it was run without `--async`.
Then the metadata service will provide the startup script to the OS boot process. The gsutil
command will also need to get metadata--like the service account token--but since it is
synchronous by default and will take some time to transfer the volume of data to the instance,
the Stackdriver agent should have a chance to push logs and show the startup script progress.
When the transfer is done, the startup script will complete and more logs will eventually be
pushed to Stackdriver Logging. https://cloud.google.com/compute/docs/instances/checking-instance-status
https://cloud.google.com/sdk/gcloud/reference/compute/instances/create
https://cloud.google.com/compute/docs/storing-retrieving-metadata
https://cloud.google.com/compute/docs/startupscript
Question 29: Skipped
What will happen if a running GKE Deployment encounters a fatal error?

GKE will automatically restart that deployment on an available GCE host.


None of the other options is correct.

You can tell GKE to restart the deployment in an available pod.

GKE will automatically restart that deployment on an available node.

GKE Deployments are configuration information and do not directly encounter fatal errors.

(Correct)

Explanation
GKE Deployments are a declaration of what you want. Functionally, a Deployment uses
ReplicaSets to make sure that the right configuration and number of pods are deployed to the
cluster. https://cloud.google.com/kubernetes-engine/docs/concepts/deployment
Question 30: Skipped
You need to store thousands of 2TB objects for one year and it is very unlikely that you
will need to retrieve any of them. Which of the following options would be the most
cost-effective?

Multi-Regional Cloud Storage bucket

Bigtable

Nearline Cloud Storage bucket

Regional Cloud Storage bucket


Coldline Cloud Storage bucket

(Correct)

Explanation
Bigtable is not made for storing large objects. Since Coldline’s minimum storage duration of 90 days is easily met by the one-year retention, Coldline is less expensive than Nearline, Regional, and Multi-Regional.
https://cloud.google.com/storage/docs/storage-classes
https://cloud.google.com/storage/pricing https://cloud.google.com/bigtable/
Question 31: Skipped
You need to process batch data in GCP and reuse your existing Hadoop-based
processing code. Which of the following is a managed service that would best handle
this situation?

Kubernetes Engine

Compute Engine

Cloud Dataflow

Cloud Storage Processing

Cloud Dataproc

(Correct)

Explanation
Google does not have a service called “Cloud Storage Processing”. Cloud Dataflow is for
newly-built processing that can take advantage of Apache Beam. Compute Engine and
Kubernetes Engine could both be used to run the processing, but they are not managed
services to serve the described situation. Cloud Dataproc is made for running Hadoop/Spark
work. https://cloud.google.com/dataflow/ https://cloud.google.com/dataproc/
Question 32: Skipped
You are planning a log analysis system to be deployed on GCP. Which of the following
would be the best service for processing streamed logs?

Cloud Dataflow

(Correct)

Cloud Pub/Sub

Bigtable

Cloud Dataproc

Stackdriver Logging
Explanation
Cloud Dataflow uses the Apache Beam framework and can process streamed data. Cloud
Dataproc is for Spark/Hadoop and doesn’t handle streamed data. Stackdriver Logging doesn’t
do custom log processing for a system like this. Cloud Pub/Sub can accept and deliver large
volumes of data, but it’s not a processing service. BigTable can handle lots of data, but it’s
for storage, not processing. https://cloud.google.com/logging/
http://gcp.solutions/diagram/Log%20Processing https://cloud.google.com/dataflow/
https://cloud.google.com/dataproc/
Question 33: Skipped
You go to the Activity Log to look at the “Create VM” event for a GCE instance you
just created. You set the Resource Type to “GCE VM Instance”. Which of the
following will display the “Create VM” event you wish to see?

Set the “Activity Types” dropdown to “Monitoring”

Set the “Activity Types” dropdown to “Configuration”


(Correct)

Set the “Activity Types” dropdown to “Data Access”

Set the “Activity Types” dropdown to “Development”


Explanation
You must become very familiar with the Activity Log. In this case, “Create VM” is
considered to be a “Configuration” activity. https://console.cloud.google.com/home/activity
https://cloud.google.com/logging/docs/audit/ https://cloud.google.com/compute/docs/audit-logging
Question 34: Skipped
You need to very quickly set up Nginx Plus on GCP. Which of the following is the
fastest option to get up and running?

Cloud Functions

App Engine Standard

Kubernetes Engine

Cloud Launcher

(Correct)

Explanation
Nginx cannot run on Cloud Functions, nor on App Engine Standard. Setting it up on
Kubernetes Engine would take rather more time/effort than using the marketplace. The Cloud
Launcher was renamed to be the GCP Marketplace--so these refer to the same thing--and this
is a quick way to deploy all sorts of different systems, including Nginx Plus.
https://console.cloud.google.com/marketplace/details/nginx-public/nginx-plus
https://www.nginx.com/partners/google-cloud-platform/
https://techcrunch.com/2018/07/18/googles-cloud-launcher-is-now-the-gcp-marketplace-adds-container-based-applications/
https://cloud.google.com/marketplace/ https://cloud.google.com/marketplace/docs/
Question 35: Skipped
You need to estimate costs associated with a new system you plan to build on GCP.
Which of the following is the best tool for this?

Google Sheets

Cloud Billing API

GCP Pricing Calculator

(Correct)

Data Studio

Cloud Pricing API


Explanation
The Billing API is about managing billing accounts, not about seeing or predicting costs.
Data Studio is great for visualizing historical costs, but not for estimating the costs for a new
system. The Pricing API (actually, it’s the Catalog API) can give you information about what
things would cost, but it would be a lot of work to turn that info into a system-level estimate.
The pricing calculator is a critical tool to be familiar with, so make sure you study it before
your exam. https://cloud.google.com/products/calculator/
https://cloud.google.com/billing/docs/how-to/visualize-data
https://cloud.google.com/billing/reference/rest/
Question 36: Skipped
You are planning to use BigQuery for a system you will manage. Which of the
following statements best represents how you will use the pricing calculator?


You will enter some sample data and queries into the BQ Data Analyzer and have it transfer its
amounts directly to the main GCP pricing calculator.

None of the other options is correct.

You will enter some sample data and queries directly in the main GCP pricing calculator.

You will enter some sample data to be stored directly in the main GCP pricing calculator and
estimate your query data volume separately.

You will separately estimate the data to be stored, streamed, and queried by your system and
enter your estimated amounts into the GCP pricing calculator.

(Correct)

Explanation
There is not any such tool as the “BQ Data Analyzer” that does estimates and connects with
the GCP Pricing Calculator. The GCP Pricing Calculator does not accept any sample data or
queries; those need to be estimated separately. To estimate how much data a BQ Query will
consider, use BQ’s “Dry Run” functionality.
https://cloud.google.com/bigquery/docs/estimate-costs
https://cloud.google.com/products/calculator/
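For the query-volume estimate, a sketch of that dry-run step (the dataset and table are hypothetical) looks like:
`bq query --use_legacy_sql=false --dry_run 'SELECT COUNT(*) FROM mydataset.mytable'`
bq then reports how many bytes the query would process, without running it or incurring charges, and that number is what you feed into the pricing calculator.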
Question 37: Skipped
You need to store thousands of 2TB objects for one week and it is very unlikely that you
will need to retrieve any of them. Which of the following options would be the most
cost-effective?

Bigtable

Multi-Regional Cloud Storage bucket


Regional Cloud Storage bucket

(Correct)

Nearline Cloud Storage bucket

Coldline Cloud Storage bucket


Explanation
Bigtable is not made for storing large objects. Nearline and Coldline have minimum storage
durations that make them more expensive for short-term storage. Multi-Regional is more
expensive than Regional. https://cloud.google.com/storage/docs/storage-classes
https://cloud.google.com/storage/pricing https://cloud.google.com/bigtable/
Question 38: Skipped
Which of the following are Google-recommended practices for creating new projects?

New projects should only be created when your organization can handle at least one hour of
downtime.

Create a project for each environment for your system--such as Dev, QA, and Prod.

(Correct)

Create separate projects for systems owned by different departments in your organization.

(Correct)

Use projects to limit blast radius.

(Correct)

Create a new project each time you deploy your system.

Create a project for each user of your system.

Add more systems into a project until you hit a quota limit, then make a new one.

Because quotas are shared across all projects, it doesn't matter how many you make.
Explanation
Creating new projects does not involve any downtime. Projects can be shared between all
persons working with them; they do not have to be individual, and usually aren't. The
system(s) in one project normally get deployed multiple times and serve many users. It's a
good idea to use projects to separate different systems and environments from each other,
partly for organization and partly to prevent them from interacting badly with each other.
Question 39: Skipped
You need to list objects in a newly-created GCS bucket. Which of the following
would allow you to do this?

roles/owner

(Correct)

roles/iam.roleViewer

roles/storage.legacyBucketReader

(Correct)

roles/resourcemanager.folderViewer

roles/compute.storageAdmin
Explanation
The iam.roleViewer role “Provides read access to all custom roles in the project.” The
compute.storageAdmin role grants “Permissions to create, modify, and delete disks, images,
and snapshots.” The resourcemanager.folderViewer role is related to project organization, not
GCS. The legacyBucketReader and project owner roles interact a bit differently than you
might expect, so it could be a good idea to read through the linked documentation pages, even
if you answered this question correctly. https://cloud.google.com/iam/docs/understanding-roles
Question 40: Skipped
Can you generate access keys for service accounts?

No. Only Google can generate keys for service accounts.

Yes. You may generate as many keys as you want for different purposes.

Yes. You may generate a small number of keys per service account to facilitate key rotation.

(Correct)

Yes. You may generate one key per service account.


Explanation
It is best when you let Google manage all service account keys, but it is possible to generate
some so you can use them outside of the situations that GCP handles--such as from AWS or
your local machine. You don’t need to remember how many keys you can generate (10), only
that the reason you can create more than one is so that you can put a new one in place before
disabling an old one (i.e. key rotation). https://cloud.google.com/iam/docs/service-accounts
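A sketch of the rotation workflow (the service account address is assumed) uses the keys subcommands:
`gcloud iam service-accounts keys create new-key.json --iam-account=my-sa@my-project.iam.gserviceaccount.com`
`gcloud iam service-accounts keys list --iam-account=my-sa@my-project.iam.gserviceaccount.com`
`gcloud iam service-accounts keys delete KEY_ID --iam-account=my-sa@my-project.iam.gserviceaccount.com`
Create the new key, switch your external systems over to it, then delete the old one.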
Question 41: Skipped
You have just installed the Google Cloud SDK. Which of the following is the best way
to initialize the command line tools?

`gcloud config set account`

`gcloud config set project`

`gcloud auth login`

`gcloud config configurations create default`

`gcloud config export gsutil bq`

`gcloud init`

(Correct)

Only one of the other options is required.

(Correct)

Explanation
It’s really quite straightforward to initialize gcloud: simply run `gcloud init` and follow the prompts. And this also configures `gsutil` and `bq`.
https://cloud.google.com/sdk/docs/initializing
https://cloud.google.com/sdk/gcloud/reference/auth/login
Question 42: Skipped
You have a GCE instance using the default service account and access scopes allowing
full access to storage, compute, and billing. What will happen if an attacker
compromises this instance and runs their own program on it?

They will be unable to access any credentials because of the “Metadata-Flavor: Google”
protection.

If they send the credentials and use them outside of GCP, they will be able to access everything
allowed by the service account.

None of the other options is correct.

(Correct)

If they send the credentials and use them outside of GCP, they will be able to access everything
allowed by the access scopes.

If they send the credentials and use them outside of GCP, they will not be able to access any GCP
services.

If they send the credentials and use them outside of GCP, they will have the same access as the
GCE instance only if they spoof that machine’s MAC address.
Explanation
Requiring the “Metadata-Flavor: Google” header protects against a different type of attack
than the one described in this question, so it will not help in this case. The access token will
be available to the attacker’s program and it will work the same way from outside of GCP as
it does from within it, regardless of the MAC address. In particular, the token will only allow
the attacker (as any user) to perform whatever is allowed by *both* the service account
*and* the access scopes. Since both the service account and the access scopes are missing
some capabilities from the other, the actual access possible by using the token will be less
than either of them, independently. https://cloud.google.com/iam/docs/understanding-service-accounts
https://cloud.google.com/compute/docs/storing-retrieving-metadata
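For reference, this is the token the attacker's program would read--the same call any legitimate workload makes (the header shown is required, but it only blocks certain cross-site request tricks, not code already running on the VM):
`curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"`
The returned OAuth access token works from anywhere until it expires, which is why both the service account's roles and the instance's access scopes matter.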
Question 43: Skipped
You are planning to run a system with four custom-sized VMs, in Belgium (europe-
west1). Which of the following statements is true about using the pricing calculator?

You will need to convert prices from us-east1, which the calculator uses.

You will need to account for the sustained use discount after converting the daily estimate to
monthly.

You will need to enter predefined machine types closest to the custom machine types you want to
use and manually estimate the small differences.

You will need to convert prices from us-central1, which the calculator uses.

None of the other options is correct.

(Correct)

You will need to convert displayed estimates from USD into Euros.
Explanation
You need to be very familiar with the pricing calculator. Prices correspond to whatever
region you select for each resource. Custom Machine Types are fully supported. You can
choose whichever currency you’d like for the estimate. Sustained Use Discounts are fully supported. https://cloud.google.com/products/calculator/
Question 44: Skipped
You are monitoring a GKE cluster and see that a pod is being terminated. What will
happen?


The ports used in the StatefulSet will be opened.

The domains used in the deployment will be reduced.

The processes used in the PersistentSet will remain locked.

The memory used by the containers will be freed.

(Correct)

Explanation
There are such things as deployments and StatefulSets, but they don’t have domains or ports,
respectively. GKE doesn’t have such a thing as a PersistentSet, but it does have DaemonSets.
https://cloud.google.com/kubernetes-engine/docs/concepts/pod
https://cloud.google.com/kubernetes-engine/docs/concepts/deployment
https://cloud.google.com/kubernetes-engine/docs/concepts/statefulset
https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset
Question 45: Skipped
You are planning out your organization’s usage of GCP. Which of the following is a
Google-recommended practice?

The project owner should generally be a Service Account.

Auditor access should be granted through a Service Account.

GCS ACLs should always be set by a Service Account.

GCS ACLs should always be set to a Service Account.


None of the other options is correct.

(Correct)

Explanation
Service accounts are meant to be used by programs and they are one--but not the only!--way
to manage access to resources. https://cloud.google.com/iam/docs/understanding-service-accounts
Question 46: Skipped
You need to view both request and application logs for your Python-based App Engine
app. Which of the following options would be best?

Use the built-in support to get both request and app logs to Stackdriver.

(Correct)

None of the other options is appropriate.

Use the built-in support to view request logs in the App Engine console and install the
Stackdriver agent to get app logs to Stackdriver.

Install the Stackdriver agent to get request logs to Stackdriver; use the Stackdriver Logging API
to send app logs directly to Stackdriver.
Explanation
Google App Engine natively connects to Stackdriver and sends both request logs and any
application logs you give it (via the GAE SDK).
https://cloud.google.com/appengine/articles/logging
https://cloud.google.com/appengine/docs/standard/python/logs/
Question 47: Skipped
A co-worker tried to access the `myfile` file that you have stored in the `mybucket` GCS
bucket, but they were denied access. Which of the following represents the best way to
allow them to view it?

In the GCP console, go to the “IAM & Admin” section, switch to the “Roles” tab, and add the co-
worker under “Editor”.

In Cloud Shell, type `gsutil acl ch -u [email protected]:r gs://mybucket/myfile`

(Correct)

In the GCP console, go to the Activity screen, find the “File Access Denied” line, and press the
“Add Exception” button.

SSH to a GCE instance and type `gcloud storage allow-access [email protected] gs://mybucket/myfile`
Explanation
There is no “Add Exception” button on the Activity screen. Neither will `gcloud storage
allow-access` work. You could add the co-worker as a project editor, but that is way more
privilege than they need to view one file.
Question 48: Skipped
You are thinking through all the things that happen when a Compute Engine instance
starts up with a startup script that installs the Stackdriver agent and runs gsutil to
retrieve a large amount of data from Cloud Storage. Of the following steps, which is the
first one to happen?

Space is reserved on a host machine

(Correct)

The gcloud command to start the instance completes


The instance startup script completes

The instance goes into the Running state


Explanation
After a request to create a new instance has been accepted and while space is being found on
some host machine, that instance starts in the Provisioning state. After space has been found
and reserved on a host machine, the instance state goes to Staging while the host prepares to
run it and sorts out things like the network adapter that will be used. Immediately when the
VM is powered on and the OS starts booting up, the instance is considered to be Running.
That's when gcloud completes, if it was run without `--async`.
https://cloud.google.com/compute/docs/instances/checking-instance-status
https://cloud.google.com/sdk/gcloud/reference/compute/instances/create
Question 49: Skipped
You are thinking through all the things that happen when a Compute Engine instance
starts up with a startup script that installs the Stackdriver agent and runs gsutil to
retrieve a large amount of data from Cloud Storage. Of the following steps, which is the
first one to happen?

The gcloud command to start the instance completes.

Space is reserved on a host machine.

The instance goes into the Running state.

The service account is created.

(Correct)

Explanation
Whether or not it is the default service account, the service account must exist before it can
be attached to the instance. After a request to create a new instance has been accepted and
while space is being found on some host machine, that instance starts in the Provisioning
state. After space has been found and reserved on a host machine, the instance state goes to
Staging while the host prepares to run it and sorts out things like the network adapter that will
be used. Immediately when the VM is powered on and the OS starts booting up, the instance
is considered to be Running. That's when gcloud completes, if it was run without `--async`.
https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances
https://cloud.google.com/compute/docs/instances/checking-instance-status
https://cloud.google.com/sdk/gcloud/reference/compute/instances/create
Question 50: Skipped
You are planning to use Persistent Disks in your system. In the context of what other
GCP service(s) will you be using these Persistent Disks?

Cloud Storage

Kubernetes Engine

(Correct)

BigTable

Compute Engine

(Correct)

You can only use Persistent Disks with one of the other listed options
Explanation
Persistent Disks attach to GCE instances, but they can also be used through GKE. Cloud
Storage and BigTable are completely separate types of storage.
https://cloud.google.com/persistent-disk/