A Cloud Guru - GCP Associate Engineer - Test 2
Attempt 1
All questions
Question 1: Skipped
When should you add new users to your projects?
When the new user is available, enter their credentials on your computer
Whenever the new user should begin having access to the project
(Correct)
Explanation
Changing project authorization does not involve any downtime. Changing project
authorization does not require any Google Support involvement. Changing project
authorization does not have anything to do with a billing cycle. You can grant and revoke
access for a user as long as you know their email address; they don't need to log in.
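As the explanation notes, all you need is the user's email address. A minimal sketch of granting and revoking project access (project ID, email address, and role below are hypothetical):

```shell
# Grant a role to a user; they do not need to log in first
gcloud projects add-iam-policy-binding my-project \
    --member="user:[email protected]" --role="roles/viewer"

# Revoking it later is just as simple, with no downtime
gcloud projects remove-iam-policy-binding my-project \
    --member="user:[email protected]" --role="roles/viewer"
```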
Question 2: Skipped
You have a GKE cluster that has fluctuating load over the course of each day and you
would like to reduce costs. What should you do?
In the GKE console, edit the cluster and enable cluster autoscaling.
(Correct)
roles/editor
roles/storage.objectAdmin
roles/storage.legacyBucketWriter
(Correct)
roles/source.writer
Explanation
The source.writer role is related to Source Repository (i.e. git hosting). The
storage.objectAdmin and project editor roles are too powerful, when we only need to allow
write access. (Plus, the access granted by the project editor role is revocable; see the linked
documentation.) It’s worth reading the documentation on how the “Legacy” roles interact
with GCS ACLs. https://cloud.google.com/iam/docs/understanding-roles
https://cloud.google.com/storage/docs/access-control/iam
https://cloud.google.com/storage/docs/access-control/iam-roles
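Granting the legacyBucketWriter role at the bucket level can be done with `gsutil iam ch`; the bucket and service account names below are hypothetical:

```shell
# Grant write access on one bucket only, without the broader
# powers of storage.objectAdmin or the project editor role
gsutil iam ch \
    serviceAccount:[email protected]:roles/storage.legacyBucketWriter \
    gs://my-bucket
```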
Question 4: Skipped
When should new projects be created?
(Correct)
When the new project owner is available, enter their credentials on your computer.
MySQL on GCE
BigQuery
Cloud Bigtable
Cloud Storage
Cloud SQL
(Correct)
Explanation
Cloud Storage is for unstructured data and does not support SQL. BigQuery is made for
mostly-static analytics situations--not continually updated data as indicated in the scenario--
and a web app backend may need lower latency than BigQuery offers. Bigtable is made for
low-latency analytics situations. Managing your own MySQL installation on GCE would be a
lot more work than using Cloud SQL. Cloud SQL is a good fit for the described situation.
https://medium.com/google-cloud/a-gcp-flowchart-a-day-2d57cc109401
Question 6: Skipped
You currently have 850TB of Closed-Circuit Television (CCTV) capture data and are
adding new data at a rate of 80TB/month. The rate of data captured and needing to be
stored is expected to grow to 200TB/month within one year because new locations are
being added, each with 4-10 cameras. Archival data must be stored indefinitely, and as
inexpensively as possible. The users of your system currently need to access 250TB of
current-month footage and 100GB of archival footage, and access rates are expected to
grow linearly with data volume. Which of the following storage options best suits this
purpose?
Store new data as Multi-Regional and then use Lifecycle Management to transition it to Regional
after 30 days.
Store new data as Multi-Regional and then use Lifecycle Management to transition it to Nearline
after 30 days.
Always keep all data stored as Multi-Regional, because access volume is high.
Store new data as Regional and then use Lifecycle Management to transition it to Coldline after
30 days.
(Correct)
Immediately store all data as Coldline, because the access volume is low.
Explanation
Data cannot be transitioned from Multi-Regional to Regional through Lifecycle Management;
that would change the location. The access rate for new data is 250/80--so quite high--but
archival data access is very low (100/850000). Because of this, we need to start with
Regional or Multi-Regional and should transition to Coldline to meet the “as inexpensively as
possible” requirement. https://cloud.google.com/storage/pricing
https://cloud.google.com/storage/docs/storage-classes
https://cloud.google.com/storage/docs/lifecycle
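The winning option maps directly onto a lifecycle configuration. A sketch of the JSON that `gsutil lifecycle set` accepts (bucket name hypothetical):

```shell
# lifecycle.json: transition objects to Coldline at 30 days old
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
      "condition": {"age": 30}
    }
  ]
}
EOF

# Apply the policy to the bucket holding the CCTV footage
gsutil lifecycle set lifecycle.json gs://cctv-footage
```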
Question 7: Skipped
You are working together with a contractor from the Acme company and you need to
allow GCE instances running in one of Acme’s GCP projects to write to a Cloud
Pub/Sub topic you own. Which of the following pieces of information are enough to let
you enable that access?
(Correct)
(Correct)
Explanation
You need to grant access to the service account being used by Acme’s GCE instances, not the
contractor, so you don’t care about the contractor’s email address. If you are given the service
account email address, you’re done; that’s enough. If you need to use the pattern to construct
the email address, you’ll need to know the Project Number (not its ID, unlike for App
Engine!) to construct the email address used by the default GCE service account:
`[email protected]` . If Acme wants to use a
different service account than the default one, they’d need to give you more than is listed in
the response options--both the Project ID (not its number, this time!) and also the name they
gave to the service account:
SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com
https://cloud.google.com/iam/docs/service-accounts
https://cloud.google.com/iam/docs/understanding-service-accounts
https://cloud.google.com/iam/docs/granting-roles-to-service-accounts
Question 8: Skipped
Google has just released a new XYZ service and you would like to try it out in your pre-
existing skunkworks project. How can you enable the XYZ API in the fewest number
of steps?
Since you have Gold-level support on this project, phone support to enable XYZ.
(Correct)
Open Cloud Shell, configure authentication, select the “defaults” project, run `gcloud enable xyz
service`.
Open Cloud Shell, configure authentication, run `gcloud services enable xyz.googleapis.com`.
Since you have Silver-level support on your linked billing account, email support to enable XYZ.
Explanation
Google does not generally enable new services by default for existing projects. Cloud Shell
does not require you to configure authentication. GCP Support does not get involved with
things like enabling APIs for you; that's something you simply do for yourself. The API URL
in the gcloud command to enable it includes `googleapis.com`.
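The correct option boils down to one command (the `xyz` service name here is the question's hypothetical):

```shell
# Confirm the exact service name first
gcloud services list --available --filter="name~xyz"

# Then enable it yourself; no support ticket required
gcloud services enable xyz.googleapis.com
```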
Question 9: Skipped
You are planning to use GPUs for your system on GCP. Which of the following
statements is true about using the pricing calculator for this situation?
GPUs are always entered on the GPU tab.
GPUs can be entered on any of the GCE, GKE, and GAE tabs.
(Correct)
Explanation
The pricing calculator does not have a GPU tab. App Engine doesn’t support GPUs. GPUs
can be entered on the GKE tab. GPUs can be entered on the GCE tab.
https://cloud.google.com/products/calculator/
Question 10: Skipped
You need to host a legacy accounting process on SUSE Linux Enterprise Server 12 SP3.
Which of the following is the best option for this?
BQ
CF
GCE
(Correct)
GKE
GAE
Explanation
If you aren’t familiar with all the common service abbreviations, you could get tripped up.
You cannot choose the OS on Cloud Functions, App Engine, or Kubernetes Engine.
BigQuery is not a compute service. Note that the linked flowchart is a little out of date in that
GKE does now support GPUs. https://medium.com/google-cloud/a-gcp-flowchart-a-day-
2d57cc109401 https://cloud.google.com/blog/products/gcp/accelerate-highly-parallelized-
compute-tasks-with-gpus-in-kubernetes-engine
Question 11: Skipped
What is the easiest way to clone a project?
Navigate to the project creation screen in the console and in the Clone From Project dropdown,
select any project linked to the same billing account as the new project.
Open a support request to clone it and wait 2-5 days for it to be completed.
There is no general way to automatically clone a project. You must handle each resource
separately.
(Correct)
Navigate to the project creation screen in the console and in the Clone From Project dropdown,
select any project for which you are a project administrator.
Explanation
There’s no automatic functionality for this, and Google Support cannot and will not get
involved in such an undertaking.
Question 12: Skipped
When comparing `n1-standard-8`, `n1-highcpu-8`, and `n1-highmem-16`, which of the
following statements are true?
(Correct)
(Correct)
BigTable
Cloud Pub/Sub
Cloud Storage
Activity Log
Stackdriver Logging
(Correct)
Explanation
Stackdriver Logging is perfect for accepting many logs, and is a better choice than Cloud
Pub/Sub for the initial ingestion. It can then send logs to Cloud Storage for archiving and/or
send them to Cloud Pub/Sub for streaming to something like Cloud Dataflow.
https://cloud.google.com/logging/ http://gcp.solutions/diagram/Log%20Processing
Question 14: Skipped
When comparing `n1-standard-8`, `n1-highcpu-8`, and `n1-highmem-16`, which of the
following statements are true?
(Correct)
(Correct)
(Correct)
Explanation
The number at the end of the machine type indicates how many CPUs it has, and the type
tells you where in the range of allowable RAM that machine falls--from minimum (highcpu)
to balanced (standard) to maximum (highmem). The cost of each machine type is determined
by how much CPU and RAM it uses. Understanding that is enough to correctly answer this
question. https://cloud.google.com/compute/docs/machine-types
https://cloud.google.com/compute/pricing#pricing
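You can confirm the CPU-count and RAM pairing of any machine type directly from gcloud (zone below is hypothetical):

```shell
# highcpu-8: 8 vCPUs at the minimum RAM for that CPU count
gcloud compute machine-types describe n1-highcpu-8 --zone=us-central1-a

# highmem-16: 16 vCPUs at the maximum RAM for that CPU count
gcloud compute machine-types describe n1-highmem-16 --zone=us-central1-a
```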
Question 15: Skipped
You need to visualize costs associated with a system you’ve been running on GCP.
Which of the following is the best tool for this?
Data Studio
(Correct)
Google Sheets
Run `gcloud config export gsutil` and `gcloud config export bq`.
Run `gcloud config export storage` and `gcloud config export query`.
Run `gsutil config import gcloud` and `bq config import gcloud`.
Nothing
(Correct)
Explanation
These tools all share their configuration, which is managed by gcloud.
https://cloud.google.com/storage/docs/gsutil/commands/config
https://cloud.google.com/sdk/docs/initializing https://cloud.google.com/bigquery/docs/bq-
command-line-tool
Question 17: Skipped
How many projects can you create?
(Correct)
(Correct)
(Correct)
You must specify a Service Account when creating an instance or none will be attached.
Service Accounts should be used by GKE nodes and pods but not by GCE instances.
Explanation
If you don't do (or specify) anything, the default service account will be attached by default to
each new GCE instance. However, you can stop that from happening by either deleting the
default service account or opting out of attaching it when you are creating a new GCE
instance. https://cloud.google.com/iam/docs/understanding-service-accounts
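Opting out of attaching the default service account at creation time looks like this (instance name hypothetical); note that `--no-service-account` must be paired with `--no-scopes`:

```shell
# Create an instance with no service account attached at all
gcloud compute instances create myvm \
    --no-service-account --no-scopes
```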
Question 20: Skipped
Which of the following are true about a newly-created project?
(Correct)
Since BigQuery is enabled by default, charges will immediately accrue until you shut it off
It cannot be used until the organization owner has completed the approval form
Use Account Cross Access to authorize requests that originate from the instance.
(Correct)
(Correct)
Explanation
Hash and salt all passwords transferred to the instance.
https://cloud.google.com/iam/docs/understanding-service-accounts
https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations
https://cloud.google.com/iam/docs/granting-roles-to-service-accounts
Question 22: Skipped
You are thinking through all the things that happen when a Compute Engine instance
starts up with a startup script that installs the Stackdriver agent and runs gsutil to
retrieve a large amount of data from Cloud Storage. Of the following steps, which is the
last one to happen?
The gcloud command to start the instance completes.
(Correct)
This Cloud shell instance does not have read access to any of the currently running instances.
The GCE API has not yet been enabled for this Cloud Shell instance.
Your user account does not have read access to any of the currently running instances.
The startup script for this Cloud Shell instance has not yet finished running.
The GCE API has not yet been enabled for this account.
The GCE API has not yet been enabled for this project.
(Correct)
Explanation
APIs must be enabled at the project level, and 403 can indicate that that has not yet been
done.
Question 24: Skipped
You need to store a large amount of unstructured data, including video, audio, image,
and text files. The data volume is expected to double every 18 months and data access is
sporadic and often clustered on a small portion of the overall data. You would like to
reduce ongoing maintenance and management costs. Which option would best serve
these requirements?
Cloud Storage
(Correct)
Cloud SQL
BigQuery
Cloud Bigtable
MySQL on GCE
Explanation
Cloud Storage is perfect for unstructured data like this. BigQuery is made for analytics of
structured data. Bigtable is made for low-latency analytics of structured data. Cloud SQL is
not a good tool to store unstructured data like this and managing your own MySQL
installation on GCE would be even worse. https://medium.com/google-cloud/a-gcp-
flowchart-a-day-2d57cc109401
Question 25: Skipped
You have a GKE cluster that currently has six nodes but has lots of idle capacity. What
should you do?
Clusters are immutable so simply create a new cluster for the smaller workload.
(Correct)
Nothing. GKE is always fully managed and will scale down by default.
Immediately store all data as Coldline, because the access volume is low.
(Correct)
Always keep all data stored as Multi-Regional, because access volume is high.
Store new data as Regional and then use Lifecycle Management to transition it to Coldline after
30 days.
Store new data as Multi-Regional and then use Lifecycle Management to transition it to Regional
after 30 days.
Store new data as Multi-Regional and then use Lifecycle Management to transition it to Nearline
after 30 days.
Explanation
Data cannot be transitioned from Multi-Regional to Regional through Lifecycle Management;
that would change the location. The access rate for new data is 60/80000--so very low--and
archival data access is even lower (50/850000). Because of this, the most cost-effective
option is also the simplest one: just use Coldline for everything.
https://cloud.google.com/storage/pricing https://cloud.google.com/storage/docs/storage-
classes https://cloud.google.com/storage/docs/lifecycle
Question 27: Skipped
You are currently creating instances with `gcloud compute instances create myvm --
machine-type=n1-highmem-8`. This is good but you would just like a bit more RAM.
Which of the following replacements would be the most cost effective?
(Correct)
(Correct)
The metadata service returns information about this instance to the first requestor.
Stackdriver Logging shows the first log lines from the startup script
Explanation
Immediately when the VM is powered on and the OS starts booting up, the instance is
considered to be Running. That's when gcloud completes, if it was run without `--async`.
Then the metadata service will provide the startup script to the OS boot process. The gsutil
command will also need to get metadata--like the service account token--but since it is
synchronous by default and will take some time to transfer the volume of data to the instance,
the Stackdriver agent should have a chance to push logs and show the startup script progress.
When the transfer is done, the startup script will complete and more logs will eventually be
pushed to Stackdriver Logging. https://cloud.google.com/compute/docs/instances/checking-
instance-status https://cloud.google.com/sdk/gcloud/reference/compute/instances/create
https://cloud.google.com/compute/docs/storing-retrieving-metadata
https://cloud.google.com/compute/docs/startupscript
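A startup script like the one described is supplied via instance metadata; the file and instance names below are hypothetical:

```shell
# startup.sh would install the Stackdriver agent and run gsutil;
# --metadata-from-file wires it up as the instance's startup script
gcloud compute instances create myvm \
    --metadata-from-file=startup-script=startup.sh
# Without --async, this command returns as soon as the instance
# is Running--well before the startup script finishes
```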
Question 29: Skipped
What will happen if a running GKE Deployment encounters a fatal error?
None of the other options is correct.
GKE Deployments are configuration information and do not directly encounter fatal errors.
(Correct)
Explanation
GKE Deployments are a declaration of what you want. Functionally, a Deployment uses
ReplicaSets to make sure that the right configuration and number of pods are deployed to the
cluster. https://cloud.google.com/kubernetes-engine/docs/concepts/deployment
Question 30: Skipped
You need to store thousands of 2TB objects for one year and it is very unlikely that you
will need to retrieve any of them. Which of the following options would be the most
cost-effective?
Bigtable
Coldline Cloud Storage bucket
(Correct)
Explanation
Bigtable is not made for storing large objects. Since Coldline’s minimum storage duration of
90 days is easily met, it is less expensive than Nearline, Regional, and Multi-Regional.
https://cloud.google.com/storage/docs/storage-classes
https://cloud.google.com/storage/pricing https://cloud.google.com/bigtable/
Question 31: Skipped
You need to process batch data in GCP and reuse your existing Hadoop-based
processing code. Which of the following is a managed service that would best handle
this situation?
Kubernetes Engine
Compute Engine
Cloud Dataflow
Cloud Dataproc
(Correct)
Explanation
Google does not have a service called “Cloud Storage Processing”. Cloud Dataflow is for
newly-built processing that can take advantage of Apache Beam. Compute Engine and
Kubernetes Engine could both be used to run the processing, but they are not managed
services to serve the described situation. Cloud Dataproc is made for running Hadoop/Spark
work. https://cloud.google.com/dataflow/ https://cloud.google.com/dataproc/
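Reusing the existing Hadoop code on Dataproc is a two-step sketch (cluster name, region, and jar path are hypothetical):

```shell
# Create a managed Hadoop/Spark cluster
gcloud dataproc clusters create my-cluster --region=us-central1

# Submit the pre-existing Hadoop job jar unchanged
gcloud dataproc jobs submit hadoop \
    --cluster=my-cluster --region=us-central1 \
    --jar=gs://my-bucket/my-existing-job.jar
```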
Question 32: Skipped
You are planning a log analysis system to be deployed on GCP. Which of the following
would be the best service for processing streamed logs?
Cloud Dataflow
(Correct)
Cloud Pub/Sub
Big Table
Cloud Dataproc
Stackdriver Logging
Explanation
Cloud Dataflow uses the Apache Beam framework and can process streamed data. Cloud
Dataproc is for Spark/Hadoop and doesn’t handle streamed data. Stackdriver Logging doesn’t
do custom log processing for a system like this. Cloud Pub/Sub can accept and deliver large
volumes of data, but it’s not a processing service. BigTable can handle lots of data, but it’s
for storage, not processing. https://cloud.google.com/logging/
http://gcp.solutions/diagram/Log%20Processing https://cloud.google.com/dataflow/
https://cloud.google.com/dataproc/
Question 33: Skipped
You go to the Activity Log to look at the “Create VM” event for a GCE instance you
just created. You set the Resource Type to “GCE VM Instance”. Which of the
following will display the “Create VM” event you wish to see?
Cloud Functions
Kubernetes Engine
Cloud Launcher
(Correct)
Explanation
Nginx cannot run on Cloud Functions, nor on App Engine Standard. Setting it up on
Kubernetes Engine would take rather more time/effort than using the marketplace. The Cloud
Launcher was renamed to the GCP Marketplace--so these refer to the same thing--and this
is a quick way to deploy all sorts of different systems, including Nginx Plus.
https://console.cloud.google.com/marketplace/details/nginx-public/nginx-plus
https://www.nginx.com/partners/google-cloud-platform/
https://techcrunch.com/2018/07/18/googles-cloud-launcher-is-now-the-gcp-marketplace-
adds-container-based-applications/ https://cloud.google.com/marketplace/
https://cloud.google.com/marketplace/docs/
Question 35: Skipped
You need to estimate costs associated with a new system you plan to build on GCP.
Which of the following is the best tool for this?
Google Sheets
(Correct)
Data Studio
You will enter some sample data and queries into the BQ Data Analyzer and have it transfer its
amounts directly to the main GCP pricing calculator.
You will enter some sample data and queries directly in the main GCP pricing calculator.
You will enter some sample data to be stored directly in the main GCP pricing calculator and
estimate your query data volume separately.
You will separately estimate the data to be stored, streamed, and queried by your system and
enter your estimated amounts into the GCP pricing calculator.
(Correct)
Explanation
There is not any such tool as the “BQ Data Analyzer” that does estimates and connects with
the GCP Pricing Calculator. The GCP Pricing Calculator does not accept any sample data or
queries; those need to be estimated separately. To estimate how much data a BQ Query will
consider, use BQ’s “Dry Run” functionality.
https://cloud.google.com/bigquery/docs/estimate-costs
https://cloud.google.com/products/calculator/
Question 37: Skipped
You need to store thousands of 2TB objects for one week and it is very unlikely that you
will need to retrieve any of them. Which of the following options would be the most
cost-effective?
Bigtable
Regional Cloud Storage bucket
(Correct)
New projects should only be created when your organization can handle at least one hour of
downtime.
Create a project for each environment for your system--such as Dev, QA, and Prod.
(Correct)
Create separate projects for systems owned by different departments in your organization.
(Correct)
(Correct)
Add more systems into a project until you hit a quota limit, then make a new one.
Because quotas are shared across all projects, it doesn't matter how many you make.
Explanation
Creating new projects does not involve any downtime. Projects can be shared between all
persons working with them; they do not have to be individual, and usually aren't. The
system(s) in one project normally get deployed multiple times and serve many users. It's a
good idea to use projects to separate different systems and environments from each other,
partly for organization and partly to prevent them from interacting badly with each other.
Question 39: Skipped
You need to list objects in a newly-created GCS bucket. Which of the following
would allow you to do this?
roles/owner
(Correct)
roles/iam.roleViewer
roles/storage.legacyBucketReader
(Correct)
roles/resourcemanager.folderViewer
roles/compute.storageAdmin
Explanation
The iam.roleViewer role “Provides read access to all custom roles in the project.” The
compute.storageAdmin role grants “Permissions to create, modify, and delete disks, images,
and snapshots.” The resourcemanager.folderViewer role is related to project organization, not
GCS. The legacyBucketReader and project owner roles interact a bit differently than you
might expect, so it could be a good idea to read through the linked documentation pages, even
if you answered this question correctly. https://cloud.google.com/iam/docs/understanding-
roles https://cloud.google.com/iam/docs/understanding-roles
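With legacyBucketReader (or project owner) granted, listing is a one-liner; the bucket name below is hypothetical:

```shell
# List objects in the bucket
gsutil ls gs://my-new-bucket

# Inspect exactly which permissions a role carries
gcloud iam roles describe roles/storage.legacyBucketReader
```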
Question 40: Skipped
Can you generate access keys for service accounts?
Yes. You may generate as many keys as you want for different purposes.
Yes. You may generate a small number of keys per service account to facilitate key rotation.
(Correct)
`gcloud init`
(Correct)
(Correct)
Explanation
It’s really quite straightforward to initialize gcloud: simply `gcloud init` and follow the
prompts. And this also configures `gsutil` and `bq`.
https://cloud.google.com/sdk/docs/initializing
https://cloud.google.com/sdk/gcloud/reference/auth/login
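The shared setup the explanation describes:

```shell
# One interactive command configures gcloud, gsutil, and bq together
gcloud init

# Verify the shared configuration afterwards
gcloud config list
```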
Question 42: Skipped
You have a GCE instance using the default service account and access scopes allowing
full access to storage, compute, and billing. What will happen if an attacker
compromises this instance and runs their own program on it?
They will be unable to access any credentials because of the “Metadata-Flavor: Google”
protection.
If they send the credentials and use them outside of GCP, they will be able to access everything
allowed by the service account.
(Correct)
If they send the credentials and use them outside of GCP, they will be able to access everything
allowed by the access scopes.
If they send the credentials and use them outside of GCP, they will not be able to access any GCP
services.
If they send the credentials and use them outside of GCP, they will have the same access as the
GCE instance only if they spoof that machine’s MAC address.
Explanation
Requiring the “Metadata-Flavor: Google” header protects against a different type of attack
than the one described in this question, so it will not help in this case. The access token will
be available to the attacker’s program and it will work the same way from outside of GCP as
it does from within it, regardless of the MAC address. In particular, the token will only allow
the attacker (as any user) to perform whatever is allowed by *both* the service account
*and* the access scopes. Since both the service account and the access scopes are missing
some capabilities from the other, the actual access possible by using the token will be less
than either of them, independently. https://cloud.google.com/iam/docs/understanding-service-
accounts https://cloud.google.com/compute/docs/storing-retrieving-metadata
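The reason the header offers no protection here: an attacker already running code on the instance can simply send it. A sketch of the metadata request their program could make:

```shell
# From inside the instance, the access token is one header away
curl -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"
# The returned token works from anywhere, limited only by the
# intersection of the service account's roles and the access scopes
```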
Question 43: Skipped
You are planning to run a system with four custom-sized VMs, in Belgium (europe-
west1). Which of the following statements is true about using the pricing calculator?
You will need to convert prices from us-east1, which the calculator uses.
You will need to account for the sustained use discount after converting the daily estimate to
monthly.
You will need to enter predefined machine types closest to the custom machine types you want to
use and manually estimate the small differences.
You will need to convert prices from us-central1, which the calculator uses.
(Correct)
You will need to convert displayed estimates from USD into Euros.
Explanation
You need to be very familiar with the pricing calculator. Prices correspond to whatever
region you select for each resource. Custom Machine Types are fully supported. You can
choose whichever currency you’d like for the estimate. Sustained Use Discounts are fully
supported. https://cloud.google.com/products/calculator/
Question 44: Skipped
You are monitoring a GKE cluster and see that a pod is being terminated. What will
happen?
The ports used in the StatefulSet will be opened.
(Correct)
Explanation
There are such things as deployments and StatefulSets, but they don’t have domains or ports,
respectively. GKE doesn’t have such a thing as a PersistentSet, but it does have DaemonSets.
https://cloud.google.com/kubernetes-engine/docs/concepts/pod
https://cloud.google.com/kubernetes-engine/docs/concepts/deployment
https://cloud.google.com/kubernetes-engine/docs/concepts/statefulset
https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset
Question 45: Skipped
You are planning out your organization’s usage of GCP. Which of the following is a
Google-recommended practice?
(Correct)
Explanation
Service accounts are meant to be used by programs and they are one--but not the only!--way
to manage access to resources. https://cloud.google.com/iam/docs/understanding-service-
accounts
Question 46: Skipped
You need to view both request and application logs for your Python-based App Engine
app. Which of the following options would be best?
Use the built-in support to get both request and app logs to Stackdriver.
(Correct)
Use the built-in support to view request logs in the App Engine console and install the
Stackdriver agent to get app logs to Stackdriver.
Install the Stackdriver agent to get request logs to Stackdriver; use the Stackdriver Logging API
to send app logs directly to Stackdriver.
Explanation
Google App Engine natively connects to Stackdriver and sends both request logs and any
application logs you give it (via the GAE SDK).
https://cloud.google.com/appengine/articles/logging
https://cloud.google.com/appengine/docs/standard/python/logs/
Question 47: Skipped
A co-worker tried to access the `myfile` file that you have stored in the `mybucket` GCS
bucket, but they were denied access. Which of the following represents the best way to
allow them to view it?
In the GCP console, go to the “IAM & Admin” section, switch to the “Roles” tab, and add the co-
worker under “Editor”.
(Correct)
In the GCP console, go to the Activity screen, find the “File Access Denied” line, and press the
“Add Exception” button.
(Correct)
The instance startup script completes
(Correct)
Explanation
Whether or not it is the default service account, the service account must exist before it can
be attached to the instance. After a request to create a new instance has been accepted and
while space is being found on some host machine, that instance starts in the Provisioning
state. After space has been found and reserved on a host machine, the instance state goes to
Staging while the host prepares to run it and sorts out things like the network adapter that will
be used. Immediately when the VM is powered on and the OS starts booting up, the instance
is considered to be Running. That's when gcloud completes, if it was run without `--async`.
https://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances
https://cloud.google.com/compute/docs/instances/checking-instance-status
https://cloud.google.com/sdk/gcloud/reference/compute/instances/create
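The state progression described (Provisioning, then Staging, then Running) can be watched from gcloud; instance name and zone below are hypothetical:

```shell
# Poll the instance's current lifecycle state
gcloud compute instances describe myvm \
    --zone=us-central1-a --format="value(status)"
```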
Question 50: Skipped
You are planning to use Persistent Disks in your system. In the context of what other
GCP service(s) will you be using these Persistent Disks?
Cloud Storage
Kubernetes Engine
(Correct)
BigTable
Compute Engine
(Correct)
You can only use Persistent Disks with one of the other listed options
Explanation
Persistent Disks attach to GCE instances, but they can also be used through GKE. Cloud
Storage and BigTable are completely separate types of storage.
https://cloud.google.com/persistent-disk/
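Attaching a Persistent Disk to a GCE instance is a two-step sketch (names and zone hypothetical):

```shell
# Create a standalone persistent disk
gcloud compute disks create mydisk --size=100GB --zone=us-central1-a

# Attach it to a running GCE instance
gcloud compute instances attach-disk myvm \
    --disk=mydisk --zone=us-central1-a
```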