Google
Exam Questions Associate-Cloud-Engineer
Google Cloud Certified - Associate Cloud Engineer
NEW QUESTION 1
You need to monitor resources that are distributed over different projects in Google Cloud Platform. You want to consolidate reporting under the same Stackdriver
Monitoring dashboard. What should you do?
A. Use Shared VPC to connect all projects, and link Stackdriver to one of the projects.
B. For each project, create a Stackdriver account. In each project, create a service account for that project and grant it the role of Stackdriver Account Editor in all other projects.
C. Configure a single Stackdriver account, and link all projects to the same account.
D. Configure a single Stackdriver account for one of the projects. In Stackdriver, create a Group and add the other project names as criteria for that Group.
Answer: C
Explanation:
When you initially click on Monitoring (Stackdriver Monitoring), it creates a workspace (a Stackdriver account) linked to the ACTIVE (CURRENT) project from which it was clicked.
If you then change the project and click on Monitoring again, it creates another workspace (a Stackdriver account) linked to the new ACTIVE (CURRENT) project. We don't want this, as it would not consolidate our results into a single dashboard (workspace/Stackdriver account).
If you have accidentally created two different workspaces, merge them under Monitoring > Settings > Merge Workspaces > MERGE.
If we have only one workspace and two projects, we can simply add the other GCP project under Monitoring > Settings > GCP Projects > Add GCP Projects.
https://cloud.google.com/monitoring/settings/multiple-projects
Nothing about groups: https://cloud.google.com/monitoring/settings?hl=en
NEW QUESTION 2
You have a developer laptop with the Cloud SDK installed on Ubuntu. The Cloud SDK was installed from the Google Cloud Ubuntu package repository. You want
to test your application locally on your laptop with Cloud Datastore. What should you do?
Answer: D
Explanation:
The Datastore emulator provides local emulation of the production Datastore environment. You can use the emulator to develop and test your application
locally. Ref: https://cloud.google.com/datastore/docs/tools/datastore-emulator
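For reference, a minimal local workflow might look like the following sketch (the package name matches the Ubuntu apt repository the SDK was installed from; run env-init in the shell your application starts from):
# Install the emulator component via apt, since the SDK came from the Ubuntu repo
sudo apt-get install google-cloud-sdk-datastore-emulator
# Start the emulator locally
gcloud beta emulators datastore start
# In a second terminal, export DATASTORE_EMULATOR_HOST and related variables
$(gcloud beta emulators datastore env-init)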
NEW QUESTION 3
Your company has a large quantity of unstructured data in different file formats. You want to perform ETL transformations on the data. You need to make the data
accessible on Google Cloud so it can be processed by a Dataflow job. What should you do?
Answer: B
Explanation:
"large quantity" : Cloud Storage or BigQuery "files" a file is nothing but an Object
NEW QUESTION 4
You are developing a new application and are looking for a Jenkins installation to build and deploy your source code. You want to automate the installation as
quickly and easily as possible. What should you do?
Answer: A
Explanation:
Installing Jenkins
In this section, you use Cloud Marketplace to provision a Jenkins instance. You customize this instance to use the agent image you created in the previous
section.
Go to the Cloud Marketplace solution for Jenkins. Click Launch on Compute Engine.
Change the Machine Type field to 4 vCPUs 15 GB Memory, n1-standard-4.
Machine type selection for Jenkins deployment.
Click Deploy and wait for your Jenkins instance to finish being provisioned. When it is finished, you will see: Jenkins has been deployed.
https://cloud.google.com/solutions/using-jenkins-for-distributed-builds-on-compute-engine#installing_jenkins
NEW QUESTION 5
Your company has developed a new application that consists of multiple microservices. You want to deploy the application to Google Kubernetes Engine (GKE),
and you want to ensure that the cluster can scale as more applications are deployed in the future. You want to avoid manual intervention when each new
application is deployed. What should you do?
Answer: C
Explanation:
https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-autoscaler#adding_a_node_pool_with_autoscal
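A minimal sketch of such an autoscaling node pool, assuming a placeholder cluster named my-cluster:
# Add a node pool that scales out as new applications are deployed
gcloud container node-pools create scalable-pool --cluster=my-cluster \
    --zone=us-central1-a --enable-autoscaling --min-nodes=1 --max-nodes=10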
NEW QUESTION 6
Your company is moving its entire workload to Compute Engine. Some servers should be accessible through the Internet, and other servers should only be
accessible over the internal network. All servers need to be able to talk to each other over specific ports and protocols. The current on-premises network relies on
a demilitarized zone (DMZ) for the public servers and a Local Area Network (LAN) for the private servers. You need to design the networking infrastructure on
Google Cloud to match these requirements. What should you do?
A. 1. Create a single VPC with a subnet for the DMZ and a subnet for the LAN. 2. Set up firewall rules to open up relevant traffic between the DMZ and the LAN subnets, and another firewall rule to allow public ingress traffic for the DMZ.
B. 1. Create a single VPC with a subnet for the DMZ and a subnet for the LAN. 2. Set up firewall rules to open up relevant traffic between the DMZ and the LAN subnets, and another firewall rule to allow public egress traffic for the DMZ.
C. 1. Create a VPC with a subnet for the DMZ and another VPC with a subnet for the LAN. 2. Set up firewall rules to open up relevant traffic between the DMZ and the LAN subnets, and another firewall rule to allow public ingress traffic for the DMZ.
D. 1. Create a VPC with a subnet for the DMZ and another VPC with a subnet for the LAN. 2. Set up firewall rules to open up relevant traffic between the DMZ and the LAN subnets, and another firewall rule to allow public egress traffic for the DMZ.
Answer: C
Explanation:
https://cloud.google.com/vpc/docs/vpc-peering
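A hedged sketch of the DMZ side with gcloud (all names and ranges are placeholders):
# One VPC per zone of trust, each with its own subnet
gcloud compute networks create dmz-vpc --subnet-mode=custom
gcloud compute networks subnets create dmz-subnet --network=dmz-vpc \
    --region=us-central1 --range=10.0.1.0/24
# Allow public ingress traffic to the DMZ only
gcloud compute firewall-rules create dmz-allow-ingress --network=dmz-vpc \
    --direction=INGRESS --allow=tcp:80,tcp:443 --source-ranges=0.0.0.0/0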
NEW QUESTION 7
You want to send and consume Cloud Pub/Sub messages from your App Engine application. The Cloud Pub/Sub API is currently disabled. You will use a service
account to authenticate your application to the API. You want to make sure your application can use Cloud Pub/Sub. What should you do?
A. Enable the Cloud Pub/Sub API in the API Library on the GCP Console.
B. Rely on the automatic enablement of the Cloud Pub/Sub API when the Service Account accesses it.
C. Use Deployment Manager to deploy your application. Rely on the automatic enablement of all APIs used by the application being deployed.
D. Grant the App Engine Default service account the role of Cloud Pub/Sub Admin. Have your application enable the API on the first connection to Cloud Pub/Sub.
Answer: A
Explanation:
Quickstart: using the Google Cloud Console
This page shows you how to perform basic tasks in Pub/Sub using the Google Cloud Console. Note: If you are new to Pub/Sub, we recommend that you start with the interactive tutorial.
Before you begin, set up a Cloud Console project:
* 1. Create or select a project.
* 2. Enable the Pub/Sub API for that project.
You can view and manage these resources at any time in the Cloud Console. Install and initialize the Cloud SDK.
Note: You can run the gcloud tool in the Cloud Console without installing the Cloud SDK. To run the gcloud tool in the Cloud Console, use Cloud Shell.
https://cloud.google.com/pubsub/docs/quickstart-console
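The console step also has a one-line CLI equivalent, assuming the SDK is authenticated against the right project:
gcloud services enable pubsub.googleapis.com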
NEW QUESTION 8
Your finance team wants to view the billing report for your projects. You want to make sure that the finance team does not get additional permissions to the project.
What should you do?
A. Add the group for the finance team to the roles/billing.user role.
B. Add the group for the finance team to the roles/billing.admin role.
C. Add the group for the finance team to the roles/billing.viewer role.
D. Add the group for the finance team to the roles/billing.projectManager role.
Answer: C
Explanation:
"Billing Account Viewer access would usually be granted to finance teams, it provides access to spend information, but does not confer the right to link or unlink
projects or otherwise manage the properties of the billing account." https://cloud.google.com/billing/docs/how-to/billing-access
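A hedged CLI equivalent (the billing account ID and group address are placeholders; the billing IAM command sits in the beta track at the time of writing):
gcloud beta billing accounts add-iam-policy-binding 0X0X0X-0X0X0X-0X0X0X \
    --member=group:finance@example.com --role=roles/billing.viewer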
NEW QUESTION 9
You deployed an LDAP server on Compute Engine that is reachable via TLS through port 636 using UDP. You want to make sure it is reachable by clients over that port. What should you do?
A. Add the network tag allow-udp-636 to the VM instance running the LDAP server.
B. Create a route called allow-udp-636 and set the next hop to be the VM instance running the LDAP server.
C. Add a network tag of your choice to the instance. Create a firewall rule to allow ingress on UDP port 636 for that network tag.
D. Add a network tag of your choice to the instance running the LDAP server. Create a firewall rule to allow egress on UDP port 636 for that network tag.
Answer: C
Explanation:
A tag is simply a character string added to a tags field in a resource, such as Compute Engine virtual machine (VM) instances or instance templates. A tag is not a
separate resource, so you cannot create it separately. All resources with that string are considered to have that tag. Tags enable you to make firewall rules and
routes applicable to specific VM instances.
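A minimal sketch with placeholder names:
# Tag the instance, then open UDP 636 ingress for that tag
gcloud compute instances add-tags ldap-server --tags=ldap --zone=us-central1-a
gcloud compute firewall-rules create allow-udp-636 --network=default \
    --direction=INGRESS --allow=udp:636 --target-tags=ldap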
NEW QUESTION 10
Your company is using Google Workspace to manage employee accounts. Anticipated growth will increase the number of personnel from 100 employees to 1,000
employees within 2 years. Most employees will need access to your company's Google Cloud account. The systems and processes will need to support 10x
growth without performance degradation, unnecessary complexity, or security issues. What should you do?
Answer: B
NEW QUESTION 10
You are hosting an application from Compute Engine virtual machines (VMs) in us-central1-a. You want to adjust your design to support the failure of a single
Compute Engine zone, eliminate downtime, and minimize cost. What should you do?
A. – Create Compute Engine resources in us-central1-b. – Balance the load across both us-central1-a and us-central1-b.
B. – Create a Managed Instance Group and specify us-central1-a as the zone. – Configure the Health Check with a short Health Interval.
C. – Create an HTTP(S) Load Balancer. – Create one or more global forwarding rules to direct traffic to your VMs.
D. – Perform regular backups of your application. – Create a Cloud Monitoring Alert and be notified if your application becomes unavailable. – Restore from backups when notified.
Answer: A
Explanation:
Choosing a region and zone: you choose which region or zone hosts your resources, which controls where your data is stored and used. Choosing a region and zone is important for several reasons:
Handling failures. Distribute your resources across multiple zones and regions to tolerate outages. Google designs zones to be independent from each other: a zone usually has power, cooling, networking, and control planes that are isolated from other zones, and most single failure events will affect only a single zone. Thus, if a zone becomes unavailable, you can transfer traffic to another zone in the same region to keep your services running. Similarly, if a region experiences any disturbances, you should have backup services running in a different region. For more information about distributing your resources and designing a robust system, see Designing Robust Systems.
Decreased network latency. To decrease network latency, you might want to choose a region or zone that is close to your point of service.
https://cloud.google.com/compute/docs/regions-zones#choosing_a_region_and_zone
NEW QUESTION 15
You have a Compute Engine instance hosting a production application. You want to receive an email if the instance consumes more than 90% of its CPU
resources for more than 15 minutes. You want to use Google services. What should you do?
A. * 1. Create a consumer Gmail account. * 2. Write a script that monitors the CPU usage. * 3. When the CPU usage exceeds the threshold, have that script send an email using the Gmail account and smtp.gmail.com on port 25 as SMTP server.
B. * 1. Create a Stackdriver Workspace, and associate your Google Cloud Platform (GCP) project with it. * 2. Create an Alerting Policy in Stackdriver that uses the threshold as a trigger condition. * 3. Configure your email address in the notification channel.
C. * 1. Create a Stackdriver Workspace, and associate your GCP project with it. * 2. Write a script that monitors the CPU usage and sends it as a custom metric to Stackdriver. * 3. Create an uptime check for the instance in Stackdriver.
D. * 1. In Stackdriver Logging, create a logs-based metric to extract the CPU usage by using this regular expression: CPU Usage: ([0-9]{1,3})% * 2. In Stackdriver Monitoring, create an Alerting Policy based on this metric. * 3. Configure your email address in the notification channel.
Answer: B
Explanation:
Specifying conditions for alerting policies: the conditions for an alerting policy define what is monitored and when to trigger an alert. For example, suppose you want to define an alerting policy that emails you if the CPU utilization of a Compute Engine VM instance is above 80% for more than 3 minutes. You use the conditions dialog to specify that you want to monitor the CPU utilization of a Compute Engine VM instance, and that you want an alerting policy to trigger when that utilization is above 80% for 3 minutes.
https://cloud.google.com/monitoring/alerts/ui-conditions-ga https://cloud.google.com/monitoring/alerts/using-alerting-ui https://cloud.google.com/monitoring/support/notification-options
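For reference, such a policy can also be created from the CLI; a hedged sketch, assuming a policy definition file built per the alerting documentation (the command currently lives in the alpha track):
gcloud alpha monitoring policies create --policy-from-file=cpu-alert-policy.json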
NEW QUESTION 20
You have a virtual machine that is currently configured with 2 vCPUs and 4 GB of memory. It is running out of memory. You want to upgrade the virtual machine to
have 8 GB of memory. What should you do?
A. Rely on live migration to move the workload to a machine with more memory.
B. Use gcloud to add metadata to the VM. Set the key to required-memory-size and the value to 8 GB.
C. Stop the VM, change the machine type to n1-standard-8, and start the VM.
D. Stop the VM, increase the memory to 8 GB, and start the VM.
Answer: D
Explanation:
In Google Compute Engine, if predefined machine types don't meet your needs, you can create an instance with custom virtualized hardware settings. Specifically, you can create an instance with a custom number of vCPUs and custom memory, effectively using a custom machine type. Custom machine types are ideal for the following scenarios: 1. Workloads that aren't a good fit for the predefined machine types that are available to you. 2. Workloads that require more processing power or more memory but don't need all of the upgrades that are provided by the next machine type level. In our scenario, we only need a memory upgrade; moving to a bigger predefined instance would also bump up the CPU, which we don't need, so we have to use a custom machine type. It is not possible to change memory while the instance is running, so you need to first stop the instance, change the memory, and then start it again. The Cloud Console lets you customize CPU and memory for an instance once it has been stopped. Ref: https://cloud.google.com/compute/docs/instances/creating-instance-with-custom-machine-type
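A minimal sketch of the stop/resize/start cycle with a custom machine type (VM name and zone are placeholders):
gcloud compute instances stop my-vm --zone=us-central1-a
# Keep 2 vCPUs but raise memory to 8 GB via a custom machine type
gcloud compute instances set-machine-type my-vm --zone=us-central1-a \
    --custom-cpu=2 --custom-memory=8GB
gcloud compute instances start my-vm --zone=us-central1-a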
NEW QUESTION 25
You have files in a Cloud Storage bucket that you need to share with your suppliers. You want to restrict the time that the files are available to your suppliers to 1
hour. You want to follow Google recommended practices. What should you do?
A. Create a service account with just the permissions to access files in the bucket. Create a JSON key for the service account. Execute the command gsutil signurl -m 1h gs:///*.
B. Create a service account with just the permissions to access files in the bucket. Create a JSON key for the service account. Execute the command gsutil signurl -d 1h gs:///**.
C. Create a service account with just the permissions to access files in the bucket. Create a JSON key for the service account. Execute the command gsutil signurl -p 60m gs:///.
D. Create a JSON key for the Default Compute Engine Service Account. Execute the command gsutil signurl -t 60m gs:///***
Answer: B
Explanation:
This command correctly specifies the duration that the signed url should be valid for by using the -d flag. The default is 1 hour so omitting the -d flag would have
also resulted in the same outcome. Times may be specified with no suffix (default hours), or with s = seconds, m = minutes, h = hours, d = days. The max duration
allowed is 7d. Ref: https://cloud.google.com/storage/docs/gsutil/commands/signurl
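Spelled out with placeholder key file and bucket names, the command might look like:
gsutil signurl -d 1h service-account-key.json gs://my-bucket/**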
NEW QUESTION 26
A team of data scientists infrequently needs to use a Google Kubernetes Engine (GKE) cluster that you manage. They require GPUs for some long-running, non-
restartable jobs. You want to minimize cost. What should you do?
Answer: A
Explanation:
Node auto-provisioning attaches and deletes node pools in the cluster based on the workloads' requirements, so creating a GPU node pool with autoscaling enabled is the better option.
https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-provisioning
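A hedged sketch of enabling auto-provisioning with GPU limits (cluster name, resource limits, and accelerator type are placeholders):
gcloud container clusters update my-cluster --zone=us-central1-a \
    --enable-autoprovisioning --min-cpu=1 --max-cpu=32 \
    --min-memory=1 --max-memory=128 \
    --max-accelerator=type=nvidia-tesla-t4,count=4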
NEW QUESTION 29
You built an application on Google Cloud Platform that uses Cloud Spanner. Your support team needs to monitor the environment but should not have access to
table data. You need a streamlined solution to grant the correct permissions to your support team, and you want to follow Google-recommended practices. What
should you do?
Answer: A
Explanation:
roles/monitoring.viewer provides read-only access to get and list information about all monitoring data and configurations. This role provides monitoring access and fits our requirements, so roles/monitoring.viewer is the right answer.
Ref: https://cloud.google.com/iam/docs/understanding-roles#cloud-spanner-roles
NEW QUESTION 33
Your company has workloads running on Compute Engine and on-premises. The Google Cloud Virtual Private Cloud (VPC) is connected to your WAN over a
Virtual Private Network (VPN). You need to deploy a new Compute Engine instance and ensure that no public Internet traffic can be routed to it. What should you
do?
Answer: A
Explanation:
VMs cannot communicate over the internet without a public IP address. Private Google Access permits access to Google APIs and services in Google's production
infrastructure.
https://cloud.google.com/vpc/docs/private-google-access
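A minimal sketch (instance and subnet names are placeholders):
# Create the VM without an external IP address
gcloud compute instances create internal-vm --zone=us-central1-a \
    --subnet=my-subnet --no-address
# Let the subnet reach Google APIs without public IPs
gcloud compute networks subnets update my-subnet --region=us-central1 \
    --enable-private-ip-google-access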
NEW QUESTION 37
Your projects incurred more costs than you expected last month. Your research reveals that a development
GKE container emitted a huge number of logs, which resulted in higher costs. You want to disable the logs quickly using the minimum number of steps. What
should you do?
A. 1. Go to the Logs ingestion window in Stackdriver Logging, and disable the log source for the GKE container resource.
B. 1. Go to the Logs ingestion window in Stackdriver Logging, and disable the log source for the GKE Cluster Operations resource.
C. 1. Go to the GKE console, and delete existing clusters.2. Recreate a new cluster.3. Clear the option to enable legacy Stackdriver Logging.
D. 1. Go to the GKE console, and delete existing clusters.2. Recreate a new cluster.3. Clear the option to enable legacy Stackdriver Monitoring.
Answer: A
Explanation:
https://cloud.google.com/logging/docs/api/v2/resource-list GKE Containers have more logs than GKE Cluster Operations:
GKE Container:
cluster_name: An immutable name for the cluster the container is running in.
namespace_id: Immutable ID of the cluster namespace the container is running in.
instance_id: Immutable ID of the GCE instance the container is running in.
pod_id: Immutable ID of the pod the container is running in.
container_name: Immutable name of the container.
zone: The GCE zone in which the instance is running.
vs. GKE Cluster Operations:
project_id: The identifier of the GCP project associated with this resource, such as "my-project".
cluster_name: The name of the GKE Cluster.
location: The location in which the GKE Cluster is running.
NEW QUESTION 40
You need to manage a Cloud Spanner Instance for best query performance. Your instance in production runs in a single Google Cloud region. You need to
improve performance in the shortest amount of time. You want to follow Google best practices for service configuration. What should you do?
A. Create an alert in Cloud Monitoring to alert when the percentage of high-priority CPU utilization reaches 45%. If you exceed this threshold, add nodes to your instance.
B. Create an alert in Cloud Monitoring to alert when the percentage of high-priority CPU utilization reaches 45%. Use database query statistics to identify queries that result in high CPU usage, and then rewrite those queries to optimize their resource usage.
C. Create an alert in Cloud Monitoring to alert when the percentage of high-priority CPU utilization reaches 65%. If you exceed this threshold, add nodes to your instance.
D. Create an alert in Cloud Monitoring to alert when the percentage of high-priority CPU utilization reaches 65%. Use database query statistics to identify queries that result in high CPU usage, and then rewrite those queries to optimize their resource usage.
Answer: B
Explanation:
https://cloud.google.com/spanner/docs/cpu-utilization#recommended-max
NEW QUESTION 44
You are using Google Kubernetes Engine with autoscaling enabled to host a new application. You want to expose this new application to the public, using HTTPS
on a public IP address. What should you do?
A. Create a Kubernetes Service of type NodePort for your application, and a Kubernetes Ingress to expose this Service via a Cloud Load Balancer.
B. Create a Kubernetes Service of type ClusterIP for your application. Configure the public DNS name of your application using the IP of this Service.
C. Create a Kubernetes Service of type NodePort to expose the application on port 443 of each node of the Kubernetes cluster. Configure the public DNS name of your application with the IP of every node of the cluster to achieve load-balancing.
D. Create a HAProxy pod in the cluster to load-balance the traffic to all the pods of the application. Forward the public traffic to HAProxy with an iptables rule. Configure the DNS name of your application using the public IP of the node HAProxy is running on.
Answer: A
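A minimal manifest sketch of the NodePort Service plus Ingress pattern (names and ports are placeholders; serving HTTPS additionally requires a certificate attached to the Ingress):
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 443
    targetPort: 8443
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  defaultBackend:
    service:
      name: my-app
      port:
        number: 443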
NEW QUESTION 49
You need to create a Compute Engine instance in a new project that doesn’t exist yet. What should you do?
A. Using the Cloud SDK, create a new project, enable the Compute Engine API in that project, and then create the instance specifying your new project.
B. Enable the Compute Engine API in the Cloud Console, use the Cloud SDK to create the instance, and then use the --project flag to specify a new project.
C. Using the Cloud SDK, create the new instance, and use the --project flag to specify the new project. Answer yes when prompted by Cloud SDK to enable the Compute Engine API.
Answer: A
Explanation:
https://cloud.google.com/sdk/gcloud/reference/projects/create Quickstart: Creating a New Instance Using the Command Line Before you begin
* 1. In the Cloud Console, on the project selector page, select or create a Cloud project.
* 2. Make sure that billing is enabled for your Google Cloud project. Learn how to confirm billing is enabled for your project.
To use the gcloud command-line tool for this quickstart, you must first install and initialize the Cloud SDK:
* 1. Download and install the Cloud SDK using the instructions given on Installing Google Cloud SDK.
* 2. Initialize the SDK using the instructions given on Initializing Cloud SDK.
To use gcloud in Cloud Shell for this quickstart, first activate Cloud Shell using the instructions given on Starting Cloud Shell.
https://cloud.google.com/ai-platform/deep-learning-vm/docs/quickstart-cli#before-you-begin
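Condensed into commands, the winning flow might look like this sketch (project and instance names are placeholders; billing must already be linked to the new project):
gcloud projects create my-new-project
gcloud services enable compute.googleapis.com --project=my-new-project
gcloud compute instances create my-vm --project=my-new-project --zone=us-central1-a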
NEW QUESTION 51
You are creating a Google Kubernetes Engine (GKE) cluster with a cluster autoscaler feature enabled. You need to make sure that each node of the cluster will
run a monitoring pod that sends container metrics to a third-party monitoring solution. What should you do?
Answer: B
Explanation:
https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset#usage_patterns
In GKE, DaemonSets manage groups of replicated Pods and attempt to adhere to a one-Pod-per-node model, either across the entire cluster or a subset of nodes. As you add nodes to a node pool, DaemonSets automatically add Pods to the new nodes as needed, so this is a perfect fit for our monitoring pod.
Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset
DaemonSets are useful for deploying ongoing background tasks that you need to run on all or certain nodes, and which do not require user intervention. Examples
of such tasks include storage daemons like ceph, log collection daemons like fluentd, and node monitoring daemons like collectd. For example, you could have
DaemonSets for each type of daemon run on all of your nodes. Alternatively, you could run multiple DaemonSets for a single type of daemon, but have them use
different configurations for different hardware types and resource needs.
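A minimal DaemonSet sketch; the agent image is a hypothetical placeholder for the third-party monitoring agent:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-agent
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      containers:
      - name: agent
        image: example.com/monitoring-agent:latest  # placeholder image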
NEW QUESTION 53
You need to create a copy of a custom Compute Engine virtual machine (VM) to facilitate an expected increase in application traffic due to a business acquisition.
What should you do?
Answer: D
Explanation:
A custom image belongs only to your project. To create an instance with a custom image, you must first create the custom image (for example, from the existing VM's boot disk) and then create the new instance from it.
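A minimal sketch with placeholder names:
# Build a custom image from the existing VM's disk, then clone from it
gcloud compute images create my-app-image --source-disk=my-vm \
    --source-disk-zone=us-central1-a
gcloud compute instances create my-vm-copy --zone=us-central1-a \
    --image=my-app-image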
NEW QUESTION 57
You are building a pipeline to process time-series data. Which Google Cloud Platform services should you put in boxes 1,2,3, and 4?
Answer: D
NEW QUESTION 59
You create a new Google Kubernetes Engine (GKE) cluster and want to make sure that it always runs a supported and stable version of Kubernetes. What should
you do?
Answer: B
Explanation:
Creating or upgrading a cluster by specifying the version as latest does not provide automatic upgrades. Enable node auto-upgrades to ensure that the nodes in
your cluster are up-to-date with the latest stable version.
https://cloud.google.com/kubernetes-engine/versioning-and-upgrades
Node auto-upgrades help you keep the nodes in your cluster up to date with the cluster master version when your master is updated on your behalf. When you
create a new cluster or node pool with Google Cloud Console or the gcloud command, node auto-upgrade is enabled by default.
Ref: https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-upgrades
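A minimal sketch (cluster name and zone are placeholders; auto-upgrade is on by default for new clusters created via the console or gcloud):
gcloud container clusters create my-cluster --zone=us-central1-a --enable-autoupgrade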
NEW QUESTION 60
You are setting up a Windows VM on Compute Engine and want to make sure you can log in to the VM via RDP. What should you do?
A. After the VM has been created, use your Google Account credentials to log in into the VM.
B. After the VM has been created, use gcloud compute reset-windows-password to retrieve the login credentials for the VM.
C. When creating the VM, add metadata to the instance using ‘windows-password’ as the key and a password as the value.
D. After the VM has been created, download the JSON private key for the default Compute Engine service accoun
E. Use the credentials in the JSON file to log in to the VM.
Answer: B
Explanation:
You can generate Windows passwords using either the Google Cloud Console or the gcloud command-line tool. This option uses the right syntax to reset the
windows password.
gcloud compute reset-windows-password windows-instance
Ref: https://cloud.google.com/compute/docs/instances/windows/creating-passwords-for-windows-instances#gc
NEW QUESTION 64
Your managed instance group raised an alert stating that new instance creation has failed to create new instances. You need to maintain the number of running
instances specified by the template to be able to process expected application traffic. What should you do?
A. Create an instance template that contains valid syntax which will be used by the instance group. Delete any persistent disks with the same name as instance names.
B. Create an instance template that contains valid syntax that will be used by the instance group. Verify that the instance name and persistent disk name values are not the same in the template.
C. Verify that the instance template being used by the instance group contains valid syntax. Delete any persistent disks with the same name as instance names. Set the disks.autoDelete property to true in the instance template.
D. Delete the current instance template and replace it with a new instance template. Verify that the instance name and persistent disk name values are not the same in the template. Set the disks.autoDelete property to true in the instance template.
Answer: A
Explanation:
https://cloud.google.com/compute/docs/troubleshooting/troubleshooting-migs https://cloud.google.com/compute/docs/instance-templates#how_to_update_instance_templates
NEW QUESTION 65
Your application development team has created Docker images for an application that will be deployed on Google Cloud. Your team does not want to manage the
infrastructure associated with this application. You need to ensure that the application can scale automatically as it gains popularity. What should you do?
A. Create an Instance template with the container image, and deploy a Managed Instance Group with Autoscaling.
B. Upload Docker images to Artifact Registry, and deploy the application on Google Kubernetes Engine using Standard mode.
C. Upload Docker images to the Cloud Storage, and deploy the application on Google Kubernetes Engine using Standard mode.
D. Upload Docker images to Artifact Registry, and deploy the application on Cloud Run.
Answer: D
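A minimal deploy sketch, assuming the image was pushed to Artifact Registry (all names are placeholders):
gcloud run deploy my-app --region=us-central1 --allow-unauthenticated \
    --image=us-docker.pkg.dev/my-project/my-repo/my-app:latest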
NEW QUESTION 69
You want to deploy an application on Cloud Run that processes messages from a Cloud Pub/Sub topic. You want to follow Google-recommended practices. What
should you do?
A. 1. Create a Cloud Function that uses a Cloud Pub/Sub trigger on that topic.2. Call your application on Cloud Run from the Cloud Function for every message.
B. 1. Grant the Pub/Sub Subscriber role to the service account used by Cloud Run.2. Create a Cloud Pub/Sub subscription for that topic.3. Make your application
pull messages from that subscription.
C. 1. Create a service account.2. Give the Cloud Run Invoker role to that service account for your Cloud Run application.3. Create a Cloud Pub/Sub subscription
that uses that service account and uses your Cloud Run application as the push endpoint.
D. 1. Deploy your application on Cloud Run on GKE with the connectivity set to Internal.2. Create a Cloud Pub/Sub subscription for that topic.3. In the same
Google Kubernetes Engine cluster as your application, deploy a container that takes the messages and sends them to your application.
Answer: C
Explanation:
https://cloud.google.com/run/docs/tutorials/pubsub#integrating-pubsub
* 1. Create a service account. 2. Give the Cloud Run Invoker role to that service account for your Cloud Run application. 3. Create a Cloud Pub/Sub subscription
that uses that service account and uses your Cloud Run application as the push endpoint.
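The tutorial's flow condenses to roughly these commands (service, topic, endpoint URL, and project names are placeholders):
gcloud iam service-accounts create cloud-run-pubsub-invoker
gcloud run services add-iam-policy-binding my-app --region=us-central1 \
    --member=serviceAccount:cloud-run-pubsub-invoker@my-project.iam.gserviceaccount.com \
    --role=roles/run.invoker
gcloud pubsub subscriptions create my-subscription --topic=my-topic \
    --push-endpoint=https://my-app-xyz-uc.a.run.app/ \
    --push-auth-service-account=cloud-run-pubsub-invoker@my-project.iam.gserviceaccount.com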
NEW QUESTION 72
Your company completed the acquisition of a startup and is now merging the IT systems of both companies. The startup had a production Google Cloud project in
their organization. You need to move this project into your organization and ensure that the project is billed to your organization. You want to accomplish this task
with minimal effort. What should you do?
Answer: A
NEW QUESTION 75
You have a Compute Engine instance hosting an application used between 9 AM and 6 PM on weekdays. You want to back up this instance daily for disaster
recovery purposes. You want to keep the backups for 30 days. You want the Google-recommended solution with the least management overhead and the least
number of services. What should you do?
A. * 1. Update your instances' metadata to add the following value: snapshot-schedule: 0 1 * * * * 2. Update your instances' metadata to add the following value: snapshot-retention: 30
B. * 1. In the Cloud Console, go to the Compute Engine Disks page and select your instance's disk. * 2. In the Snapshot Schedule section, select Create Schedule and configure the following parameters: - Schedule frequency: Daily - Start time: 1:00 AM - 2:00 AM - Autodelete snapshots after 30 days
C. * 1. Create a Cloud Function that creates a snapshot of your instance's disk. * 2. Create a Cloud Function that deletes snapshots that are older than 30 days. * 3. Use Cloud Scheduler to trigger both Cloud Functions daily at 1:00 AM.
D. * 1. Create a bash script in the instance that copies the content of the disk to Cloud Storage. * 2. Create a bash script in the instance that deletes data older than 30 days in the backup Cloud Storage bucket. * 3. Configure the instance's crontab to execute these scripts daily at 1:00 AM.
Answer: B
Explanation:
Creating scheduled snapshots for persistent disk This document describes how to create a snapshot schedule to regularly and automatically back up your zonal
and regional persistent disks. Use snapshot schedules as a best practice to back up your Compute Engine workloads. After creating a snapshot schedule, you can
apply it to one or more persistent disks. https://cloud.google.com/compute/docs/disks/scheduled-snapshots
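The same schedule can be built from the CLI; a hedged sketch with placeholder names:
gcloud compute resource-policies create snapshot-schedule daily-backup \
    --region=us-central1 --daily-schedule --start-time=01:00 \
    --max-retention-days=30
gcloud compute disks add-resource-policies my-disk --zone=us-central1-a \
    --resource-policies=daily-backup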
NEW QUESTION 79
You manage three Google Cloud projects with the Cloud Monitoring API enabled. You want to follow Google-recommended practices to visualize CPU and
network metrics for all three projects together. What should you do?
A. * 1. Create a Cloud Monitoring Dashboard. * 2. Collect metrics and publish them into the Pub/Sub topics. * 3. Add CPU and network charts for each of the three projects.
B. * 1. Create a Cloud Monitoring Dashboard. * 2. Select the CPU and Network metrics from the three projects. * 3. Add CPU and network charts for each of the three projects.
C. * 1. Create a Service Account and apply roles/viewer on the three projects. * 2. Collect metrics and publish them to the Cloud Monitoring API. * 3. Add CPU and network charts for each of the three projects.
D. * 1. Create a fourth Google Cloud project. * 2. Create a Cloud Workspace from the fourth project and add the other three projects.
Answer: B
NEW QUESTION 83
You are building an archival solution for your data warehouse and have selected Cloud Storage to archive your data. Your users need to be able to access this
archived data once a quarter for some regulatory requirements. You want to select a cost-efficient option. Which storage option should you use?
A. Coldline Storage
B. Nearline Storage
C. Regional Storage
D. Multi-Regional Storage
Answer: A
Explanation:
Coldline Storage is a very-low-cost, highly durable storage service for storing infrequently accessed data. Coldline Storage is ideal for data you plan to read or
modify at most once a quarter. Since we have a requirement to access data once a quarter and want to go with the most cost-efficient option, we should select
Coldline Storage.
Ref: https://cloud.google.com/storage/docs/storage-classes#coldline
NEW QUESTION 84
Your company runs one batch process in an on-premises server that takes around 30 hours to complete. The task runs monthly, can be performed offline, and
must be restarted if interrupted. You want to migrate this workload to the cloud while minimizing cost. What should you do?
Answer: D
Explanation:
Install the workload on a Compute Engine VM and start and stop the instance as needed. Per the question, the job runs for about 30 hours, can be performed offline, and must be restarted if interrupted. Preemptible VMs are cheaper, but they are never available beyond 24 hours, and if the preemptible VM is reclaimed the batch process has to restart from the beginning.
NEW QUESTION 88
You created a Kubernetes deployment by running kubectl run nginx --image=nginx --replicas=1. After a few days, you decided you no longer want this deployment. You identified the pod and deleted it by running kubectl delete pod. You noticed the pod got recreated.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-84748895c4-nqqmt 1/1 Running 0 9m41s
$ kubectl delete pod nginx-84748895c4-nqqmt
pod nginx-84748895c4-nqqmt deleted
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-84748895c4-k6bzl 1/1 Running 0 25s
What should you do to delete the deployment and avoid pod getting recreated?
Answer: A
Explanation:
This command correctly deletes the deployment. Pods are managed by kubernetes workloads (deployments). When a pod is deleted, the deployment detects the
pod is unavailable and brings up another pod to maintain the replica count. The only way to delete the workload is by deleting the deployment itself using the
kubectl delete deployment command.
$ kubectl delete deployment nginx
deployment.apps "nginx" deleted
Ref: https://kubernetes.io/docs/reference/kubectl/cheatsheet/#deleting-resources
NEW QUESTION 93
You need to deploy an application, which is packaged in a container image, in a new project. The application exposes an HTTP endpoint and receives very few
requests per day. You want to minimize costs. What should you do?
Answer: A
Explanation:
Cloud Run takes any container images and pairs great with the container ecosystem: Cloud Build, Artifact Registry, Docker. ... No infrastructure to manage: once
deployed, Cloud Run manages your services so you can sleep well. Fast autoscaling. Cloud Run automatically scales up or down from zero to N depending on
traffic.
https://cloud.google.com/run
NEW QUESTION 97
You need to reduce GCP service costs for a division of your company using the fewest possible steps. You need to turn off all configured services in an existing
GCP project. What should you do?
A. * 1. Verify that you are assigned the Project Owners IAM role for this project.* 2. Locate the project in the GCP console, click Shut down and then enter the
project ID.
B. * 1. Verify that you are assigned the Project Owners IAM role for this project.* 2. Switch to the project in the GCP console, locate the resources and delete them.
C. * 1. Verify that you are assigned the Organizational Administrator IAM role for this project.* 2. Locate the project in the GCP console, enter the project ID and
then click Shut down.
D. * 1. Verify that you are assigned the Organizational Administrators IAM role for this project.* 2. Switch to the project in the GCP console, locate the resources
and delete them.
Answer: A
Explanation:
https://cloud.google.com/run/docs/tutorials/gcloud https://cloud.google.com/resource-manager/docs/creating-managing-projects
https://cloud.google.com/iam/docs/understanding-roles#primitive_roles
You can shut down projects using the Cloud Console. When you shut down a project, this immediately happens: All billing and traffic serving stops, You lose
access to the project, The owners of the project will be notified and can stop the deletion within 30 days, The project will be scheduled to be deleted after 30 days.
However, some resources may be deleted much earlier.
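The CLI equivalent of the console's Shut down button (project ID is a placeholder):
gcloud projects delete my-project-id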
NEW QUESTION 99
You are managing a project for the Business Intelligence (BI) department in your company. A data pipeline ingests data into BigQuery via streaming. You want the
users in the BI department to be able to run the custom SQL queries against the latest data in BigQuery. What should you do?
A. Create a Data Studio dashboard that uses the related BigQuery tables as a source and give the BI team view access to the Data Studio dashboard.
B. Create a Service Account for the BI team and distribute a new private key to each member of the BI team.
C. Use Cloud Scheduler to schedule a batch Dataflow job to copy the data from BigQuery to the BI team's internal data warehouse.
D. Assign the IAM role of BigQuery User to a Google Group that contains the members of the BI team.
Answer: D
Explanation:
When applied to a dataset, this role provides the ability to read the dataset's metadata and list tables in the dataset. When applied to a project, this role also
provides the ability to run jobs, including queries, within the project. A member with this role can enumerate their own jobs, cancel their own jobs, and enumerate
datasets within a project. Additionally, allows the creation of new datasets within the project; the creator is granted the BigQuery Data Owner role
(roles/bigquery.dataOwner) on these new datasets.
https://cloud.google.com/bigquery/docs/access-control
A. Set up a low-priority (65534) rule that blocks all egress and a high-priority rule (1000) that allows only the appropriate ports.
B. Set up a high-priority (1000) rule that pairs both ingress and egress ports.
C. Set up a high-priority (1000) rule that blocks all egress and a low-priority (65534) rule that allows only the appropriate ports.
D. Set up a high-priority (1000) rule to allow the appropriate ports.
Answer: A
Explanation:
Implied rules: every VPC network has two implied firewall rules. These rules exist, but are not shown in the Cloud Console. Implied allow egress rule: an egress rule whose action is allow, destination is 0.0.0.0/0, and priority is the lowest possible (65535) lets any instance send traffic to any destination, except for traffic blocked by Google Cloud. A higher-priority firewall rule may restrict outbound access. Internet access is allowed if no other firewall rules deny outbound traffic and if the instance has an external IP address or uses a Cloud NAT instance. For more information, see Internet access requirements. Implied deny ingress rule: an ingress rule whose action is deny, source is 0.0.0.0/0, and priority is the lowest possible (65535) protects all instances by blocking incoming connections to them. A higher-priority rule might allow incoming access. The default network includes some additional rules that override this one, allowing certain types of incoming connections. https://cloud.google.com/vpc/docs/firewalls#default_firewall_rules
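A hedged sketch of the low-priority deny plus high-priority allow pair (rule names and ports are placeholders):
gcloud compute firewall-rules create deny-all-egress --network=default \
    --direction=EGRESS --action=DENY --rules=all --priority=65534 \
    --destination-ranges=0.0.0.0/0
gcloud compute firewall-rules create allow-web-egress --network=default \
    --direction=EGRESS --action=ALLOW --rules=tcp:80,tcp:443 --priority=1000 \
    --destination-ranges=0.0.0.0/0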
Answer: A
Explanation:
Create an instance template for the instances so the VMs have the same specs. Set 'Automatic Restart' to on so a VM automatically restarts upon crash. Set 'On-host maintenance' to Migrate VM instance; this takes care of the VM during a maintenance window by live-migrating the instance, keeping it highly available. Add the instance template to an instance group so the instances can be managed together.
• onHostMaintenance: Determines the behavior when a maintenance event occurs that might cause your instance to reboot.
• [Default] MIGRATE, which causes Compute Engine to live migrate an instance when there is a maintenance event.
• TERMINATE, which stops an instance instead of migrating it.
• automaticRestart: Determines the behavior when an instance crashes or is stopped by the system.
• [Default] true, so Compute Engine restarts an instance if the instance crashes or is stopped.
• false, so Compute Engine does not restart an instance if the instance crashes or is stopped.
Enabling automatic restart ensures that compute engine instances are automatically restarted when they crash. And Enabling Migrate VM Instance enables live
migrates i.e. compute instances are migrated during system maintenance and remain running during the migration.
Automatic Restart: if your instance is set to terminate when there is a maintenance event, or if your instance crashes because of an underlying hardware issue, you can set up Compute Engine to automatically restart the instance by setting the automaticRestart field to true. This setting does not apply if the instance is taken offline through a user action, such as calling sudo shutdown, or during a zone outage. Ref: https://cloud.google.com/compute/docs/instances/setting-instance-scheduling-options#autorestart
Enabling the Migrate VM Instance option migrates your instance away from an infrastructure maintenance event, and your instance remains running during the migration. Your instance might experience a short period of decreased performance, although generally, most instances should not notice any difference. This is ideal for instances that require constant uptime and can tolerate a short period of decreased performance. Ref: https://cloud.google.com/compute/docs/instances/setting-instance-scheduling-options#live_
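Both settings can be applied to an existing VM in one command (VM name and zone are placeholders):
gcloud compute instances set-scheduling my-vm --zone=us-central1-a \
    --maintenance-policy=MIGRATE --restart-on-failure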
A. Meet with the cloud enablement team to discuss load balancer options.
B. Redesign the application to use a distributed user session service that does not rely on WebSockets and HTTP sessions.
C. Review the encryption requirements for WebSocket connections with the security team.
D. Convert the WebSocket code to use HTTP streaming.
Answer: A
Explanation:
Google HTTP(S) Load Balancing has native support for the WebSocket protocol when you use HTTP or HTTPS, not HTTP/2, as the protocol to the backend.
Ref: https://cloud.google.com/load-balancing/docs/https#websocket_proxy_support
We don't need to convert the WebSocket code to use HTTP streaming or redesign the application, as WebSocket support is offered by Google HTTP(S) Load Balancing. Reviewing the encryption requirements is a good idea, but it has nothing to do with WebSockets.
A. Create a new project, enable the Compute Engine and Cloud SQL APIs in that project, and replicate the setup you have created in the development
environment.
B. Create a new production subnet in the existing VPC and a new production Cloud SQL instance in your existing project, and deploy your application using those
resources.
C. Create a new project, modify your existing VPC to be a Shared VPC, share that VPC with your new project, and replicate the setup you have in the
development environment in that new project, in the Shared VPC.
D. Ask the security team to grant you the Project Editor role in an existing production project used by another division of your company. Once they grant you that role, replicate the setup you have in the development environment in that project.
Answer: A
Explanation:
This aligns with Google's recommended practices. By creating a new project, we achieve complete isolation between the development and production environments, and we isolate this production application from the production applications of other departments.
Ref: https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations#define-hierarchy
A. Create a cron job that runs on a scheduled basis to review Stackdriver monitoring metrics, and then resize the Spanner instance accordingly.
B. Create a Stackdriver alerting policy to send an alert to on-call SRE emails when Cloud Spanner CPU exceeds the threshold. SREs would scale resources up or down accordingly.
C. Create a Stackdriver alerting policy to send an alert to Google Cloud Support email when Cloud Spanner CPU exceeds your threshold. Google support would scale resources up or down accordingly.
D. Create a Stackdriver alerting policy to send an alert to a webhook when Cloud Spanner CPU is over or under your threshold. Create a Cloud Function that listens to HTTP and resizes Spanner resources accordingly.
Answer: D
Explanation:
CPU utilization is a recommended proxy for traffic when it comes to Cloud Spanner. See: Alerts for high CPU utilization. The documentation specifies recommended maximum CPU usage for both single-region and multi-region instances. These numbers are to ensure that your instance has enough compute capacity to continue to serve your traffic in the event of the loss of an entire zone (for single-region instances) or an entire region (for multi-region instances). https://cloud.google.com/spanner/docs/cpu-utilization
You need to verify that a Google Cloud Platform service account was created at a particular time. What should you do?
Answer: A
Explanation:
https://developers.google.com/cloud-search/docs/guides/audit-logging-manual
A. Use Deployment Manager templates to describe the proposed changes and store them in a Cloud Storage bucket.
B. Use Deployment Manager templates to describe the proposed changes and store them in Cloud Source Repositories.
C. Apply the change in a development environment, run gcloud compute instances list, and then save the output in a shared Storage bucket.
D. Apply the change in a development environment, run gcloud compute instances list, and then save the output in Cloud Source Repositories.
Answer: B
Explanation:
Showing Deployment Manager templates to your team will allow you to define the changes you want to implement in your cloud infrastructure. You can use Cloud
Source Repositories to store Deployment Manager templates and collaborate with your team. Cloud Source Repositories are fully-featured, scalable, and private
Git repositories you can use to store, manage and track changes to your code.
https://cloud.google.com/source-repositories/docs/features
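A minimal sketch of publishing the templates to a repository (the repo name is a placeholder):
gcloud source repos create dm-templates
gcloud source repos clone dm-templates
# Commit and push your Deployment Manager templates as with any Git repo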
A. Use Google Cloud Directory Sync (GCDS) to synchronize users into Cloud Identity.
B. Use the Cloud Identity APIs and write a script to synchronize users to Cloud Identity.
C. Export users from Active Directory as a CSV and import them to Cloud Identity via the Admin Console.
D. Ask each employee to create a Google account using self signup. Require that each employee use their company email address and password.
Answer: A
A. Deployment Manager
B. Cloud Composer
C. Managed Instance Group
D. Unmanaged Instance Group
Answer: A
Explanation:
https://cloud.google.com/deployment-manager/docs/configuration/create-basic-configuration
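A minimal configuration sketch with placeholder values, deployed with gcloud deployment-manager deployments create my-deployment --config=config.yaml:
# config.yaml - declares a single Compute Engine VM
resources:
- name: my-vm
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/n1-standard-1
    disks:
    - boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-11
    networkInterfaces:
    - network: global/networks/default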
Answer: D
Explanation:
As part of your workload, there might be certain VM instances that are critical to running your application or services, such as an instance running a SQL server, a server used as a license manager, and so on. These VM instances might need to stay running indefinitely, so you need a way to protect these VMs from being deleted. By setting the deletionProtection flag, a VM instance can be protected from accidental deletion. If a user attempts to delete a VM instance for which you have set the deletionProtection flag, the request fails. Only a user that has been granted a role with compute.instances.create permission can reset the flag to allow the resource to be deleted. Ref: https://cloud.google.com/compute/docs/instances/preventing-accidental-vm-deletion
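The flag in practice (VM name and zone are placeholders):
# Protect at creation time; only an update can clear the flag
gcloud compute instances create my-vm --zone=us-central1-a --deletion-protection
gcloud compute instances update my-vm --zone=us-central1-a --no-deletion-protection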
Answer: D
Explanation:
M1 machine series: medium in-memory databases such as SAP HANA, and tasks that require intensive use of memory with higher memory-to-vCPU ratios than the general-purpose high-memory machine types. Typical workloads: in-memory databases and in-memory analytics, business warehousing (BW) workloads, genomics analysis, SQL analysis services, and Microsoft SQL Server and similar databases.
https://cloud.google.com/compute/docs/machine-types
https://cloud.google.com/compute/docs/machine-types#:~:text=databases%20such%20as-,SAP%20HANA,-In%
https://www.sap.com/india/products/hana.html#:~:text=is%20SAP%20HANA-,in%2Dmemory,-database%3F
Answer: C
Answer: A
Explanation:
Creating and starting a preemptible VM instance This page explains how to create and use a preemptible virtual machine (VM) instance. A preemptible instance is
an instance you can create and run at a much lower price than normal instances. However, Compute Engine might terminate (preempt) these instances if it
requires access to those resources for other tasks. Preemptible instances will always terminate after 24 hours. To learn more about preemptible instances, read
the preemptible instances documentation. Preemptible instances are recommended only for fault-tolerant applications that can withstand instance preemptions.
Make sure your application can handle preemptions before you decide to create a preemptible instance. To understand the risks and value of preemptible
instances, read the preemptible instances documentation. https://cloud.google.com/compute/docs/instances/create-start-preemptible-instance
A. Change the default region property setting in the existing GCP project to asia-northeast1.
B. Change the region property setting in the existing App Engine application from us-central to asia-northeast1.
C. Create a second App Engine application in the existing GCP project and specify asia-northeast1 as the region to serve your application.
D. Create a new GCP project and create an App Engine application inside this new projec
E. Specify asia-northeast1 as the region to serve your application.
Answer: D
Explanation:
https://cloud.google.com/appengine/docs/flexible/managing-projects-apps-billing#:~:text=Each%20Cloud%20p
Two App Engine apps can't run in the same project; you can check this easy diagram for more info: https://cloud.google.com/appengine/docs/standard/an-overview-of-app-engine#components_of_an_application
And you can't change the location after setting it for your App Engine app: https://cloud.google.com/appengine/docs/standard/locations
App Engine is regional and you cannot change an app's region after you set it. Therefore, the only way to have an app run in another region is by creating a new project and targeting the App Engine app to run in the required region (asia-northeast1 in our case).
Ref: https://cloud.google.com/appengine/docs/locations
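Condensed into commands (the project ID is a placeholder):
gcloud projects create my-tokyo-project
gcloud app create --project=my-tokyo-project --region=asia-northeast1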
A. Use Binary Authorization and whitelist only the container images used by your customers' Pods.
B. Use the Container Analysis API to detect vulnerabilities in the containers used by your customers' Pods.
C. Create a GKE node pool with a sandbox type configured to gvisor. Add the parameter runtimeClassName: gvisor to the specification of your customers' Pods.
D. Use the cos_containerd image for your GKE nodes. Add a nodeSelector with the value cloud.google.com/gke-os-distribution: cos_containerd to the specification of your customers' Pods.
Answer: C
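A hedged sketch of the gVisor node pool plus the Pod-side setting (names are placeholders):
gcloud container node-pools create sandbox-pool --cluster=my-cluster \
    --zone=us-central1-a --sandbox=type=gvisor
# Then, in each customer Pod spec:
#   spec:
#     runtimeClassName: gvisor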
A. Configure Billing Data Export to BigQuery and visualize the data in Data Studio.
B. Visit the Cost Table page to get a CSV export and visualize it using Data Studio.
C. Fill all resources in the Pricing Calculator to get an estimate of the monthly cost.
D. Use the Reports view in the Cloud Billing Console to view the desired cost information.
Answer: A
Explanation:
https://cloud.google.com/billing/docs/how-to/export-data-bigquery "Cloud Billing export to BigQuery enables you to export detailed Google Cloud billing data (such
as usage, cost estimates, and pricing data) automatically throughout the day to a BigQuery dataset that you specify."
Answer: A
Explanation:
https://cloud.google.com/spanner/docs/getting-started/set-up
A. • Create a custom role with Compute Engine, Cloud Functions, and Cloud SQL permissions in one project within the Google Cloud organization.• Copy the role
across all projects created within the organization with the gcloud iam roles copy command.• Assign the role to developers in those projects.
B. • Add all developers to a Google group in Google Groups for Workspace.• Assign the predefined role of Compute Admin to the Google group at the Google
Cloud organization level.
C. • Add all developers to a Google group in Cloud Identity.• Assign predefined roles for Compute Engine, Cloud Functions, and Cloud SQL permissions to the
Google group for each project in the Google Cloud organization.
D. • Add all developers to a Google group in Cloud Identity.• Create a custom role with Compute Engine, Cloud Functions, and Cloud SQL permissions at the
Google Cloud organization level.• Assign the custom role to the Google group.
Answer: D
Explanation:
https://www.cloudskillsboost.google/focuses/1035?parent=catalog#:~:text=custom%20role%20at%20the%20or
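A hedged sketch of the org-level custom role and binding (the organization ID, group address, and permission list are illustrative placeholders):
gcloud iam roles create devWorkflow --organization=123456789012 \
    --title="Developer Workflow" \
    --permissions=compute.instances.create,cloudfunctions.functions.create,cloudsql.instances.create
gcloud organizations add-iam-policy-binding 123456789012 \
    --member=group:developers@example.com \
    --role=organizations/123456789012/roles/devWorkflow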
Answer: BE
Explanation:
https://cloud.google.com/bigquery/docs/custom-quotas https://cloud.google.com/bigquery/pricing#flat_rate_pricing
with Cloud Pub/Sub. You want the ability to scale down to zero when there is no traffic in order to minimize costs. You want to follow Google recommended
practices. What should you suggest?
Answer: D
Explanation:
Cloud Functions is Google Cloud’s event-driven serverless compute platform that lets you run your code in the cloud without having to provision servers. Cloud Functions scales up or down automatically, so you pay only for the compute resources you use. Cloud Functions integrates well with Cloud Pub/Sub, scales down to zero when there is no traffic, and is the serverless platform Google recommends when your workload depends on Cloud Pub/Sub: "If you’re building a simple API (a small set of functions to be accessed via HTTP or Cloud Pub/Sub), we recommend using Cloud Functions." Ref: https://cloud.google.com/serverless-options
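For example, a sketch of deploying a Pub/Sub-triggered function that scales to zero when idle (the function name, runtime, topic, and entry point are assumptions for illustration):
# Deploy a function that runs only when messages arrive on the topic.
gcloud functions deploy process-events \
  --runtime=python310 \
  --trigger-topic=my-topic \
  --entry-point=handle_message \
  --region=us-central1
# With no traffic, no instances run and no compute charges accrue.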
Answer: C
Explanation:
Logging on to the console and following the steps displays all the assigned users and roles.
A. Use Cloud Logging filters to create log-based metrics for firewall and instance actions. Monitor the changes and set up reasonable alerts.
B. Install Kibana on a compute instance. Create a log sink to forward Cloud Audit Logs filtered for firewalls and compute instances to Pub/Sub. Target the Pub/Sub topic to push messages to the Kibana instance. Analyze the logs on Kibana in real time.
C. Turn on Google Cloud firewall rules logging, and set up alerts for any insert, update, or delete events.
D. Create a log sink to forward Cloud Audit Logs filtered for firewalls and compute instances to Cloud Storage. Use BigQuery to periodically analyze log events in the storage bucket.
Answer: A
Explanation:
This answer is the simplest and most effective way to monitor unexpected firewall changes and instance creation in Google Cloud. Cloud Logging filters allow you
to specify the criteria for the log entries that you want to view or export. You can use the Logging query language to write filters based on the LogEntry fields, such
as resource.type, severity, or protoPayload.methodName. For example, you can filter for firewall-related events by using the following query:
resource.type="gce_subnetwork" logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Ffirewall"
You can filter for instance-related events by using the following query:
resource.type="gce_instance" logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Factivity_log"
You can create log-based metrics from these filters to measure the rate or count of log entries that match the filter. Log-based metrics can be used to create charts
and dashboards in Cloud Monitoring, or to set up alerts based on the metric values. For example, you can create an alert policy that triggers when the log-based
metric for firewall changes exceeds a certain threshold in a given time interval. This way, you can get notified of any unexpected or malicious changes to your
firewall rules.
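For example, a hedged sketch of creating such a log-based metric with gcloud (the metric name and filter are illustrative; the exact audit-log fields may differ in your environment):
# Count admin-activity audit log entries that change firewall rules.
gcloud logging metrics create firewall-changes \
  --description="Firewall rule inserts, patches, and deletes" \
  --log-filter='resource.type="gce_firewall_rule" AND protoPayload.methodName:("firewalls.insert" OR "firewalls.patch" OR "firewalls.delete")'
# An alerting policy in Cloud Monitoring can then trigger on this metric.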
Option B is incorrect because it is unnecessarily complex and costly. Installing Kibana on a compute instance requires additional configuration and maintenance.
Creating a log sink to forward Cloud Audit Logs to Pub/Sub also incurs additional charges for the Pub/Sub service. Analyzing the logs on Kibana in real time may
not be feasible or efficient, as it requires constant monitoring and manual intervention.
Option C is incorrect because Google Cloud firewall rules logging is a different feature from Cloud Audit Logs. Firewall rules logging allows you to audit, verify, and
analyze the effects of your firewall rules by creating connection records for each rule that applies to traffic. However, firewall rules logging does not log the insert,
update, or delete events for the firewall rules themselves. Those events are logged by Cloud Audit Logs, which record the administrative activities in your Google
Cloud project.
Option D is incorrect because it is not a real-time solution. Creating a log sink to forward Cloud Audit Logs to Cloud Storage requires additional storage space and
charges. Using BigQuery to periodically analyze log events in the storage bucket also incurs additional costs for the BigQuery service. Moreover, this option does
not provide any alerting mechanism to notify you of any unexpected or malicious changes to your firewall rules or instances.
Answer: B
Explanation:
https://console.cloud.google.com/marketplace/details/gc-launcher-for-mongodb-atlas/mongodb-atlas
A. Create a Billing account, associate a payment method with it, and provide all project creators with permission to associate that billing account with their projects.
B. Grant all engineers permission to create their own billing accounts for each new project.
C. Apply for monthly invoiced billing, and have a single invoice for the project paid by the finance team.
D. Create a billing account, associate it with a monthly purchase order (PO), and send the PO to Google Cloud.
Answer: A
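A brief sketch of the mechanics of option A; the project ID and billing account ID shown are placeholders:
# A project creator with the Billing Account User role links a new project
# to the central billing account.
gcloud billing projects link my-new-project \
  --billing-account=0X0X0X-0X0X0X-0X0X0X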
A. 1. Create a configuration for each project you need to manage. 2. Activate the appropriate configuration when you work with each of your assigned GCP projects.
B. 1. Create a configuration for each project you need to manage. 2. Use gcloud init to update the configuration values when you need to work with a non-default project.
C. 1. Use the default configuration for one project you need to manage. 2. Activate the appropriate configuration when you work with each of your assigned GCP projects.
D. 1. Use the default configuration for one project you need to manage. 2. Use gcloud init to update the configuration values when you need to work with a non-default project.
Answer: A
Explanation:
https://cloud.google.com/sdk/gcloud https://cloud.google.com/sdk/docs/configurations#multiple_configurations
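A minimal sketch of option A; the configuration and project names are illustrative:
# Create and populate one configuration per assigned project.
gcloud config configurations create dev-config
gcloud config set project dev-project
gcloud config configurations create prod-config
gcloud config set project prod-project
# Switch between them as you move between projects.
gcloud config configurations activate dev-config
gcloud config configurations list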
A. Run gcloud compute instances list to get the IP address of the instance, then use the ssh command.
B. Use the gcloud compute ssh command.
C. Create a key with the ssh-keygen command. Then use the gcloud compute ssh command.
D. Create a key with the ssh-keygen command. Upload the key to the instance. Run gcloud compute instances list to get the IP address of the instance, then use the ssh command.
Answer: B
Explanation:
gcloud compute ssh ensures that the user’s public SSH key is present in the project’s metadata. If the user does not have a public SSH key, one is generated using ssh-keygen and added to the project’s metadata. This is similar to the other option, where we copy the key explicitly to the project’s metadata, but here it is done automatically for us. There are also security benefits to this approach. When we use gcloud compute ssh to connect to Linux instances, we add a layer of security by storing host keys as guest attributes. Storing SSH host keys as guest attributes improves the security of your connections by helping to protect against vulnerabilities such as man-in-the-middle (MITM) attacks. On the initial boot of a VM instance, if guest attributes are enabled, Compute Engine stores your generated host keys as guest attributes. Compute Engine then uses these host keys that were stored during the initial boot to verify all subsequent connections to the VM instance.
Ref: https://cloud.google.com/compute/docs/instances/connecting-to-instance Ref: https://cloud.google.com/s
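For example (the instance name and zone are placeholders):
# Generates and propagates an SSH key automatically if one is missing,
# then opens the session.
gcloud compute ssh my-instance --zone=us-central1-a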
Answer: D
A. Create an HTTP load balancer with a backend configuration that references an existing instance group.Set the health check to healthy (HTTP).
B. Create an HTTP load balancer with a backend configuration that references an existing instance group.Define a balancing mode and set the maximum RPS to
10.
C. Create a managed instance group.
Answer: C
Explanation:
https://cloud.google.com/compute/docs/instance-groups
https://cloud.google.com/load-balancing/docs/network/transition-to-backend-services#console
In order to enable auto-healing, you need to group the instances into a managed instance group.
Managed instance groups (MIGs) maintain the high availability of your applications by proactively keeping your virtual machine (VM) instances available. An auto-
healing policy on the MIG relies on an application-based health check to verify that an application is responding as expected. If the auto-healer determines that an application isn't responding, the managed instance group automatically recreates that instance.
It is important to use separate health checks for load balancing and for auto-healing. Health checks for load balancing can and should be more aggressive
because these health checks determine whether an instance receives user traffic. You want to catch non-responsive instances quickly, so you can redirect traffic if
necessary. In contrast, health checking for auto-healing causes Compute Engine to proactively replace failing instances, so this health check should be more
conservative than a load balancing health check.
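A hedged sketch of configuring a conservative auto-healing health check on a MIG; the names, request path, zone, and thresholds are illustrative assumptions:
# Create an HTTP health check for auto-healing (deliberately more
# conservative than a load-balancing health check).
gcloud compute health-checks create http autohealing-hc \
  --port=80 --request-path=/healthz \
  --check-interval=30s --timeout=10s \
  --unhealthy-threshold=3 --healthy-threshold=2
# Attach it to the managed instance group with an initial delay so new
# instances have time to boot before being health-checked.
gcloud compute instance-groups managed update my-mig \
  --zone=us-central1-a \
  --health-check=autohealing-hc \
  --initial-delay=300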