PCA Set2
Professional Cloud Architect on Google Cloud Platform v1.0 (Professional Cloud Architect)
Question 101 (Question Set 1)
One of the developers on your team deployed their application in Google Container Engine with the Dockerfile below. They report that their application deployments are taking too long.
You want to optimize this Dockerfile for faster deployment times without adversely affecting the app's functionality.
Which two actions should you take? (Choose two.)
Answer : CE
Explanation:
Deployment speed can be improved by limiting the size of the uploaded app, limiting the complexity of any build steps in the Dockerfile, and ensuring a fast and reliable internet connection.
Note: Alpine Linux is built around musl libc and busybox. This makes it smaller and more resource efficient than traditional GNU/Linux distributions. A container requires no more than 8 MB and a minimal
installation to disk requires around 130 MB of storage. Not only do you get a fully-fledged Linux environment but a large selection of packages from the repository.
Reference:
https://groups.google.com/forum/#!topic/google-appengine/hZMEkmmObDU
https://www.alpinelinux.org/about/
Answer : D
A. Set timeouts on your application so that you can fail requests faster
B. Send custom metrics for each of your requests to Stackdriver Monitoring
C. Use Stackdriver Monitoring to look for insights that show when your API latencies are high
D. Instrument your application with Stackdriver Trace in order to break down the request latencies at each microservice
Answer : D
Reference:
https://cloud.google.com/trace/docs/quickstart#find_a_trace
Answer : B
Answer : B
Explanation:
Stackdriver Logging provides you with the ability to filter, search, and view logs from your cloud and open source application services. It allows you to define metrics based on log contents that are incorporated into dashboards and alerts, and it enables you to export logs to BigQuery, Google Cloud Storage, and Pub/Sub.
Reference:
https://cloud.google.com/stackdriver/
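For example, a log sink to BigQuery can be created with a single command; a minimal sketch, with the project, dataset, and filter as placeholders:

# Create a sink that exports Compute Engine log entries to an existing
# BigQuery dataset (the sink's writer identity then needs write access
# to that dataset).
gcloud logging sinks create compute-logs-sink \
  bigquery.googleapis.com/projects/my-project/datasets/compute_logs \
  --log-filter='resource.type="gce_instance"'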
Answer : A
Explanation:
Google Cloud Dedicated Interconnect provides direct physical connections and RFC 1918 communication between your on-premises network and Google's network. Dedicated Interconnect enables you to transfer large amounts of data between networks, which can be more cost-effective than purchasing additional bandwidth over the public Internet or using VPN tunnels.
Benefits:
• Traffic between your on-premises network and your VPC network doesn't traverse the public Internet. Traffic traverses a dedicated connection with fewer hops, meaning there are fewer points of failure where traffic might get dropped or disrupted.
• Your VPC network's internal (RFC 1918) IP addresses are directly accessible from your on-premises network. You don't need to use a NAT device or VPN tunnel to reach internal IP addresses. Currently, you can only reach internal IP addresses over a dedicated connection. To reach Google external IP addresses, you must use a separate connection.
• You can scale your connection to Google based on your needs. Connection capacity is delivered over one or more 10 Gbps Ethernet connections, with a maximum of eight connections (80 Gbps total per interconnect).
• The cost of egress traffic from your VPC network to your on-premises network is reduced. A dedicated connection is generally the least expensive method if you have a high volume of traffic to and from Google's network.
Reference:
https://cloud.google.com/interconnect/docs/details/dedicated
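Once the physical link is provisioned, a VLAN attachment ties the interconnect to a Cloud Router in your VPC. A minimal sketch, with the interconnect, router, ASN, and region as placeholders:

# Cloud Router that will exchange BGP routes with the on-premises peer.
gcloud compute routers create onprem-router \
  --network=my-vpc --region=us-central1 --asn=65001
# VLAN attachment on an existing Dedicated Interconnect.
gcloud compute interconnects attachments dedicated create onprem-attachment \
  --interconnect=my-interconnect --router=onprem-router --region=us-central1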
A. Create custom Google Stackdriver alerts and send them to the auditor
B. Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with the auditor
C. Use cloud functions to transfer log entries to Google Cloud SQL and use ACLs and views to limit an auditor's view
D. Enable Google Cloud Storage (GCS) log export of audit logs into a GCS bucket and delegate access to the bucket
Answer : D
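A sketch of how answer D could be implemented, assuming a placeholder bucket and auditor account:

# Export audit log entries to a Cloud Storage bucket.
gcloud logging sinks create audit-logs-sink \
  storage.googleapis.com/audit-logs-bucket \
  --log-filter='logName:"cloudaudit.googleapis.com"'
# Delegate read-only access on the bucket to the auditor.
gsutil iam ch user:auditor@example.com:roles/storage.objectViewer \
  gs://audit-logs-bucket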
Answer : C
Reference:
https://cloud.google.com/kms/docs/secret-management
Answer : BF
Answer : D
Explanation:
Jenkins is an open-source automation server that lets you flexibly orchestrate your build, test, and deployment pipelines. Kubernetes Engine is a hosted version of
Kubernetes, a powerful cluster manager and orchestration system for containers.
When you need to set up a continuous delivery (CD) pipeline, deploying Jenkins on Kubernetes Engine provides important benefits over a standard VM-based deployment.
Incorrect Answers:
A: Helm is a tool for managing Kubernetes charts. Charts are packages of pre-configured Kubernetes resources.
Use Helm to:
Find and use popular software packaged as Kubernetes charts
Answer : C
Explanation:
A startup script, or a shutdown script, is specified through the metadata server, using startup script metadata keys.
Reference:
https://cloud.google.com/compute/docs/startupscript
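For example, a local script can be attached through the startup-script metadata key at instance creation time; instance name, zone, and script path below are placeholders:

# Attach a local startup script via the metadata server.
gcloud compute instances create my-instance \
  --zone=us-central1-a \
  --metadata-from-file=startup-script=./startup.sh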
Answer : D
Explanation:
Google Cloud Platform (GCP) enforces firewall rules through rules and tags. GCP rules and tags can be defined once and used across all regions.
Reference:
https://cloud.google.com/docs/compare/openstack/
https://aws.amazon.com/it/blogs/aws/building-three-tier-architectures-with-security-groups/
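For example, a rule defined once against a network tag applies to every instance that carries the tag, in any region of the VPC; the names below are placeholders:

# Allow HTTP/HTTPS to any instance tagged "web".
gcloud compute firewall-rules create allow-web \
  --network=my-vpc --direction=INGRESS --action=ALLOW \
  --rules=tcp:80,tcp:443 --target-tags=web
# New instances pick up the rule simply by carrying the tag.
gcloud compute instances create web-1 --zone=us-central1-a --tags=web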
Answer : ACE
Answer : AE
Answer : B
Answer : C
Explanation:
Folders are nodes in the Cloud Platform Resource Hierarchy. A folder can contain projects, other folders, or a combination of both. You can use folders to group projects under an organization in a hierarchy. For
example, your organization might contain multiple departments, each with its own set of GCP resources. Folders allow you to group these resources on a per-department basis. Folders are used to group resources that
share common IAM policies. While a folder can contain multiple folders or resources, a given folder or resource can have exactly one parent.
Reference:
https://cloud.google.com/resource-manager/docs/creating-managing-folders
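A per-department folder can be created under the organization node and projects moved into it; a sketch with placeholder IDs (the project move is a beta command):

# Create a folder for the department under the organization.
gcloud resource-manager folders create \
  --display-name="Engineering" --organization=123456789
# Move an existing project into the new folder.
gcloud beta projects move my-project --folder=456789123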
Answer : B
A. Tag messages client side with the originating user identifier and the destination user.
B. Encrypt the message client side using block-based encryption with a shared key.
C. Use public key infrastructure (PKI) to encrypt the message client side using the originating user's private key.
D. Use a trusted certificate authority to enable SSL connectivity between the client application and the server.
Answer : C
Answer : B
Answer : C
Reference:
https://cloud.google.com/solutions/pci-dss-compliance-in-gcp#using_data_loss_prevention_api_to_sanitize_data
A. ~/bin
B. Cloud Storage
C. /google/scripts
D. /usr/local/bin
Answer : A
A. Create a VPC and connect it to your on-premises data center using Dedicated Interconnect.
B. Create a VPC and connect it to your on-premises data center using a single Cloud VPN.
C. Create a Cloud Content Delivery Network (Cloud CDN) and connect it to your on-premises data center using Dedicated Interconnect.
D. Create a Cloud Content Delivery Network (Cloud CDN) and connect it to your on-premises datacenter using a single Cloud VPN.
Answer : A
A. Utilize free tier and sustained use discounts. Provision a staff position for service cost management.
B. Utilize free tier and sustained use discounts. Provide training to the team about service cost management.
C. Utilize free tier and committed use discounts. Provision a staff position for service cost management.
D. Utilize free tier and committed use discounts. Provide training to the team about service cost management.
Answer : B
A. Use Spinnaker to deploy builds to production using the red/black deployment strategy so that changes can easily be rolled back.
B. Use Spinnaker to deploy builds to production and run tests on production deployments.
C. Use Jenkins to build the staging branches and the master branch. Build and deploy changes to production for 10% of users before doing a complete rollout.
D. Use Jenkins to monitor tags in the repository. Deploy staging tags to a staging environment for testing. After testing, tag the repository for production and deploy that to the production environment.
Answer : C
Reference:
https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes/blob/master/README.md
Answer : C
A. App Engine is the only compute platform on GCP that is certified for PCI DSS hosting.
B. GKE cannot be used under PCI DSS because it is considered shared hosting.
C. GKE and GCP provide the tools you need to build a PCI DSS-compliant environment.
D. All Google Cloud services are usable because Google Cloud Platform is certified PCI-compliant.
Answer : C
A. Upload your files into Cloud Storage. Use Cloud Datalab to explore and clean your data.
B. Upload your files into Cloud Storage. Use Cloud Dataprep to explore and clean your data.
C. Connect Cloud Datalab to your on-premises systems. Use Cloud Datalab to explore and clean your data.
D. Connect Cloud Dataprep to your on-premises systems. Use Cloud Dataprep to explore and clean your data.
Answer : B
A. The effective policy is determined only by the policy set at the node
B. The effective policy is the policy set at the node and restricted by the policies of its ancestors
C. The effective policy is the union of the policy set at the node and policies inherited from its ancestors
D. The effective policy is the intersection of the policy set at the node and policies inherited from its ancestors
Answer : C
Reference:
https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy
Answer : D
Answer : A
A. Deploy the application on two Compute Engine instances in the same project but in a different region. Use the first instance to serve traffic, and use the HTTP load balancing service to fail over to the standby
instance in case of a disaster.
B. Deploy the application on a Compute Engine instance. Use the instance to serve traffic, and use the HTTP load balancing service to fail over to an instance on your premises in case of a disaster.
C. Deploy the application on two Compute Engine instance groups, each in the same project but in a different region. Use the first instance group to serve traffic, and use the HTTP load balancing service to fail over to
the standby instance group in case of a disaster.
D. Deploy the application on two Compute Engine instance groups, each in a separate project and a different region. Use the first instance group to serve traffic, and use the HTTP load balancing service to fail over to
the standby instance group in case of a disaster.
Answer : C
A. Deploy your application on App Engine standard environment and use App Engine firewall rules to limit access to the open on-premises database.
B. Deploy your application on App Engine standard environment and use Cloud VPN to limit access to the on-premises database.
C. Deploy your application on App Engine flexible environment and use App Engine firewall rules to limit access to the on-premises database.
D. Deploy your application on App Engine flexible environment and use Cloud VPN to limit access to the on-premises database.
Answer : D
A. Upload the required installation files to Cloud Storage. Configure the VM on a subnet with a Private Google Access subnet. Assign only an internal IP address to the VM. Download the installation files to the VM
using gsutil.
B. Upload the required installation files to Cloud Storage and use firewall rules to block all traffic except the IP address range for Cloud Storage. Download the files to the VM using gsutil.
C. Upload the required installation files to Cloud Source Repositories. Configure the VM on a subnet with a Private Google Access subnet. Assign only an internal IP address to the VM. Download the installation files
to the VM using gcloud.
D. Upload the required installation files to Cloud Source Repositories and use firewall rules to block all traffic except the IP address range for Cloud Source Repositories. Download the files to the VM using gsutil.
Answer : B
A. Move your data onto a Transfer Appliance. Use a Transfer Appliance Rehydrator to decrypt the data into Cloud Storage.
B. Move your data onto a Transfer Appliance. Use Cloud Dataprep to decrypt the data into Cloud Storage.
C. Install gsutil on each server that contains data. Use resumable transfers to upload the data into Cloud Storage.
D. Install gsutil on each server containing data. Use streaming transfers to upload the data into Cloud Storage.
Answer : A
Answer : A
A. Add all users to a group. Grant the group the role of BigQuery user on the billing project and BigQuery dataViewer on the projects that contain the data.
B. Add all users to a group. Grant the group the roles of BigQuery dataViewer on the billing project and BigQuery user on the projects that contain the data.
C. Add all users to a group. Grant the group the roles of BigQuery jobUser on the billing project and BigQuery dataViewer on the projects that contain the data.
D. Add all users to a group. Grant the group the roles of BigQuery dataViewer on the billing project and BigQuery jobUser on the projects that contain the data.
Answer : C
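A sketch of answer C, assuming a placeholder group and project IDs:

# Analysts may run query jobs billed to the central project...
gcloud projects add-iam-policy-binding billing-project \
  --member=group:analysts@example.com --role=roles/bigquery.jobUser
# ...but may only read the data in the project that holds the datasets.
gcloud projects add-iam-policy-binding data-project \
  --member=group:analysts@example.com --role=roles/bigquery.dataViewer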
A. Have users upload the images to Cloud Storage. Protect the bucket with a password that expires after 24 hours.
B. Have users upload the images to Cloud Storage using a signed URL that expires after 24 hours.
C. Create an App Engine web application where users can upload images. Configure App Engine to disable the application after 24 hours. Authenticate users via Cloud Identity.
D. Create an App Engine web application where users can upload images for the next 24 hours. Authenticate users via Cloud Identity.
Answer : B
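Signed URLs can be generated with gsutil and a service account key; a minimal sketch, with the key file, bucket, and object as placeholders:

# URL that lets the holder upload (PUT) one object, valid for 24 hours.
gsutil signurl -m PUT -d 24h service-account-key.json \
  gs://event-images/photo.jpg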
A. Ensure that your web application only uses native features and services of Google Cloud Platform, because Google already has various certifications and provides "pass-on" compliance when you use native features.
B. Enable the relevant GDPR compliance setting within the GCP Console for each of the services in use within your application.
C. Ensure that Cloud Security Scanner is part of your test planning strategy in order to pick up any compliance gaps.
D. Define a design for the security of data in your web application that meets GDPR requirements.
Answer : D
Reference:
https://www.mobiloud.com/blog/gdpr-compliant-mobile-app/
Answer : D
A. Use gcloud to create a Kubernetes cluster. Use Deployment Manager to create the deployment.
B. Use gcloud to create a Kubernetes cluster. Use kubectl to create the deployment.
C. Use kubectl to create a Kubernetes cluster. Use Deployment Manager to create the deployment.
D. Use kubectl to create a Kubernetes cluster. Use kubectl to create the deployment.
Answer : B
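Answer B in concrete form, with cluster name, zone, and image as placeholders:

# Create the cluster with gcloud...
gcloud container clusters create my-cluster --zone=us-central1-a --num-nodes=3
gcloud container clusters get-credentials my-cluster --zone=us-central1-a
# ...then create the Deployment with kubectl.
kubectl create deployment my-app --image=gcr.io/my-project/my-app:v1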
A. Allocate budget for team training. Set a deadline for the new GCP project.
B. Allocate budget for team training. Create a roadmap for your team to achieve Google Cloud certification based on job role.
C. Allocate budget to hire skilled external consultants. Set a deadline for the new GCP project.
D. Allocate budget to hire skilled external consultants. Create a roadmap for your team to achieve Google Cloud certification based on job role.
Answer : A
A. Cloud Functions
B. Compute Engine
C. Google Kubernetes Engine
D. App Engine flexible environment
Answer : A
A. Create the Key object for each Entity and run a batch get operation
B. Create the Key object for each Entity and run multiple get operations, one operation for each entity
C. Use the identifiers to create a query filter and run a batch query operation
D. Use the identifiers to create a query filter and run multiple query operations, one operation for each entity
Answer : A
A. Supply the encryption key in a .boto configuration file. Use gsutil to upload the files.
B. Supply the encryption key using gcloud config. Use gsutil to upload the files to that bucket.
C. Use gsutil to upload the files, and use the flag --encryption-key to supply the encryption key.
D. Use gsutil to create a bucket, and use the flag --encryption-key to supply the encryption key. Use gsutil to upload the files to that bucket.
Answer : A
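The key can be set under the [GSUtil] section of the .boto file or passed inline with -o, which overrides the same setting; a sketch with a placeholder key:

# Equivalent to setting encryption_key under [GSUtil] in ~/.boto;
# the value is a base64-encoded AES-256 key.
gsutil -o "GSUtil:encryption_key=BASE64_ENCODED_AES256_KEY" \
  cp sensitive-file.csv gs://secure-bucket/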
A. Store time-series data from the game servers in Google Bigtable, and view it using Google Data Studio.
B. Output custom metrics to Stackdriver from the game servers, and create a Dashboard in Stackdriver Monitoring Console to view them.
C. Schedule BigQuery load jobs to ingest analytics files uploaded to Cloud Storage every ten minutes, and visualize the results in Google Data Studio.
D. Insert the KPIs into Cloud Datastore entities, and run ad hoc analysis and visualizations of them in Cloud Datalab.
Answer : A
Reference:
https://cloud.google.com/solutions/data-lifecycle-cloud-platform
A. Perform the following: 1. Create a managed instance group with f1-micro type machines. 2. Use a startup script to clone the repository, check out the production branch, install the dependencies, and start the
Python app. 3. Restart the instances to automatically deploy new production releases.
B. Perform the following: 1. Create a managed instance group with n1-standard-1 type machines. 2. Build a Compute Engine image from the production branch that contains all of the dependencies and automatically
starts the Python app. 3. Rebuild the Compute Engine image, and update the instance template to deploy new production releases.
C. Perform the following: 1. Create a Google Kubernetes Engine (GKE) cluster with n1-standard-1 type machines. 2. Build a Docker image from the production branch with all of the dependencies, and tag it with the
version number. 3. Create a Kubernetes Deployment with the imagePullPolicy set to 'IfNotPresent' in the staging namespace, and then promote it to the production namespace after testing.
D. Perform the following: 1. Create a GKE cluster with n1-standard-4 type machines. 2. Build a Docker image from the master branch with all of the dependencies, and tag it with 'latest'. 3. Create a Kubernetes
Deployment in the default namespace with the imagePullPolicy set to 'Always'. Restart the pods to automatically deploy new production releases.
Answer : B
A. Use the Admin Directory API to authenticate against the Active Directory domain controller.
B. Use Google Cloud Directory Sync to synchronize Active Directory usernames with cloud identities and configure SAML SSO.
C. Use Cloud Identity-Aware Proxy configured to use the on-premises Active Directory domain controller as an identity provider.
D. Use Compute Engine to create an Active Directory (AD) domain controller that is a replica of the on-premises AD domain controller using Google Cloud Directory Sync.
Answer : B
A. Review the Stackdriver logs for each Compute Engine instance that is serving as a node in the cluster.
B. Review the Stackdriver logs for the specific GKE container that is serving the unresponsive part of the application.
C. Connect to the cluster using gcloud credentials and connect to a container in one of the pods to read the logs.
D. Review the Serial Port logs for each Compute Engine instance that is serving as a node in the cluster.
Answer : B
Answer : D
A. Create a snapshot of the existing disk. Create an instance template from the snapshot. Create an autoscaled managed instance group from the instance template.
B. Create a snapshot of the existing disk. Create a custom image from the snapshot. Create an autoscaled managed instance group from the custom image.
C. Create a custom image from the existing disk. Create an instance template from the custom image. Create an autoscaled managed instance group from the instance template.
D. Create an instance template from the existing disk. Create a custom image from the instance template. Create an autoscaled managed instance group from the custom image.
Answer : C
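The sequence from answer C, with placeholder names and sizing:

# 1. Custom image directly from the existing disk.
gcloud compute images create app-image \
  --source-disk=app-disk --source-disk-zone=us-central1-a
# 2. Instance template from the custom image.
gcloud compute instance-templates create app-template --image=app-image
# 3. Managed instance group from the template, with autoscaling enabled.
gcloud compute instance-groups managed create app-mig \
  --template=app-template --size=2 --zone=us-central1-a
gcloud compute instance-groups managed set-autoscaling app-mig \
  --zone=us-central1-a --max-num-replicas=10 --target-cpu-utilization=0.6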
Answer : B
A. 1. Enable automatic storage increase for the instance. 2. Create a Stackdriver alert when CPU usage exceeds 75%, and change the instance type to reduce CPU usage. 3. Create a Stackdriver alert for replication lag,
and shard the database to reduce replication time.
B. 1. Enable automatic storage increase for the instance. 2. Change the instance type to a 32-core machine type to keep CPU usage below 75%. 3. Create a Stackdriver alert for replication lag, and deploy memcache to
reduce load on the master.
C. 1. Create a Stackdriver alert when storage exceeds 75%, and increase the available storage on the instance to create more space. 2. Deploy memcached to reduce CPU load. 3. Change the instance type to a 32-core
machine type to reduce replication lag.
D. 1. Create a Stackdriver alert when storage exceeds 75%, and increase the available storage on the instance to create more space. 2. Deploy memcached to reduce CPU load. 3. Create a Stackdriver alert for replication
lag, and change the instance type to a 32-core machine type to reduce replication lag.
Answer : A
Answer : D
Reference:
https://cloud.google.com/files/BigQueryTechnicalWP.pdf
Answer : C
A. Ensure that VM service accounts are granted the appropriate Cloud Pub/Sub IAM roles.
B. Ensure that VM service accounts do not have access to Cloud Pub/Sub, and use VM access scopes to grant the appropriate Cloud Pub/Sub IAM roles.
C. Generate an OAuth2 access token for accessing Cloud Pub/Sub, encrypt it, and store it in Cloud Storage for access from each VM.
D. Create a gateway to Cloud Pub/Sub using a Cloud Function, and grant the Cloud Function service account the appropriate Cloud Pub/Sub IAM roles.
Answer : A
A. Use VPC Network Peering between the VPC and the on-premises network.
B. Expose the VPC to the on-premises network using IAM and VPC Sharing.
C. Create a global Cloud VPN Gateway with VPN tunnels from each region to the on-premises peer gateway.
D. Deploy Cloud VPN Gateway in each region. Ensure that each region has at least one VPN tunnel to the on-premises peer gateway.
Answer : D
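With HA VPN this means one gateway per region that hosts workloads, each with at least one tunnel to the on-premises peer; a partial sketch with placeholder names:

# One VPN gateway in each region with workloads.
gcloud compute vpn-gateways create vpn-gw-us --network=my-vpc --region=us-central1
gcloud compute vpn-gateways create vpn-gw-eu --network=my-vpc --region=europe-west1
# Each gateway then gets one or more tunnels to the on-premises peer gateway.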
Answer : B
A. Configure a HorizontalPodAutoscaler with a target CPU usage. Enable the Cluster Autoscaler from the GCP Console.
B. Configure a HorizontalPodAutoscaler with a target CPU usage. Enable autoscaling on the managed instance group for the cluster using the gcloud command.
C. Create a deployment and set the maxUnavailable and maxSurge properties. Enable the Cluster Autoscaler using the gcloud command.
D. Create a deployment and set the maxUnavailable and maxSurge properties. Enable autoscaling on the cluster managed instance group from the GCP Console.
Answer : A
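Answer A pairs pod-level autoscaling with node-level autoscaling; the same setup from the command line, with placeholder names and limits:

# HorizontalPodAutoscaler targeting CPU usage...
kubectl autoscale deployment my-app --cpu-percent=60 --min=3 --max=20
# ...plus the Cluster Autoscaler on the node pool.
gcloud container clusters update my-cluster --zone=us-central1-a \
  --enable-autoscaling --min-nodes=3 --max-nodes=10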
A. Verify that Dedicated Interconnect can replicate files to GCP. Verify that direct peering can establish a secure connection between your networks if Dedicated Interconnect fails.
B. Verify that Dedicated Interconnect can replicate files to GCP. Verify that Cloud VPN can establish a secure connection between your networks if Dedicated Interconnect fails.
C. Verify that the Transfer Appliance can replicate files to GCP. Verify that direct peering can establish a secure connection between your networks if the Transfer Appliance fails.
D. Verify that the Transfer Appliance can replicate files to GCP. Verify that Cloud VPN can establish a secure connection between your networks if the Transfer Appliance fails.
Answer : B
A. Provision preemptible VMs to reduce cost. Discontinue use of all GCP services and APIs that are not HIPAA-compliant.
B. Provision preemptible VMs to reduce cost. Disable and then discontinue use of all GCP services and APIs that are not HIPAA-compliant.
C. Provision standard VMs in the same region to reduce cost. Discontinue use of all GCP services and APIs that are not HIPAA-compliant.
D. Provision standard VMs in the same region to reduce cost. Disable and then discontinue use of all GCP services and APIs that are not HIPAA-compliant.
Answer : B
A. Engage with a security company to run web scrapers that look for your users' authentication data on malicious websites and notify you if any is found.
B. Deploy intrusion detection software to your virtual machines to detect and log unauthorized access.
C. Schedule a disaster simulation exercise during which you can shut off all VMs in a zone to see how your application behaves.
D. Configure a read replica for your Cloud SQL instance in a different zone than the master, and then manually trigger a failover while monitoring KPIs for our REST API.
Answer : C
A. Connect Google Data Studio to BigQuery. Create a dimension for the users and a metric for the amount of queries per user.
B. In the BigQuery interface, execute a query on the JOBS table to get the required information.
C. Use 'bq show' to list all jobs. Per job, use 'bq ls' to list job information and get the required information.
D. Use Cloud Audit Logging to view Cloud Audit Logs, and create a filter on the query operation to get the required information.
Answer : C
A. Use Terraform to create the managed instance group and a startup script to install the OS package dependencies.
B. Create a custom VM image with all OS package dependencies. Use Deployment Manager to create the managed instance group with the VM image.
C. Use Puppet to create the managed instance group and install the OS package dependencies.
D. Use Deployment Manager to create the managed instance group and Ansible to install the OS package dependencies.
Answer : B
A. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery jobUser. Share the appropriate dataset with view access with each respective analyst country-group.
B. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery jobUser. Share the appropriate tables with view access with each respective analyst country-group.
C. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery dataViewer. Share the appropriate dataset with view access with each respective analyst country-group.
D. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery dataViewer. Share the appropriate table with view access with each respective analyst country-group.
Answer : A
A. Local SSD for customer session state data. Lifecycle-managed Cloud Storage for log archives, thumbnails, and VM boot/data volumes.
B. Memcache backed by Cloud Datastore for the customer session state data. Lifecycle-managed Cloud Storage for log archives, thumbnails, and VM boot/data volumes.
C. Memcache backed by Cloud SQL for customer session state data. Assorted local SSD-backed instances for VM boot/data volumes. Cloud Storage for log archives and thumbnails.
D. Memcache backed by Persistent Disk SSD storage for customer session state data. Assorted local SSD-backed instances for VM boot/data volumes. Cloud Storage for log archives and thumbnails.
Answer : D
A. StatefulSets
B. Role-based access control
C. Container environment variables
D. Persistent Volumes
Answer : A
A. Customize the cache keys to omit the protocol from the key.
B. Shorten the expiration time of the cached objects.
C. Make sure the HTTP(S) header "Cache-Region" points to the closest region of your users.
D. Replicate the static content in a Cloud Storage bucket. Point CloudCDN toward a load balancer on that bucket.
Answer : A
Reference:
https://cloud.google.com/cdn/docs/best-practices#using_custom_cache_keys_to_improve_cache_hit_ratio
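Omitting the protocol from the cache key is a backend service setting; a minimal sketch with a placeholder backend name:

# Treat http:// and https:// requests for the same URL as one cache entry.
gcloud compute backend-services update my-backend --global \
  --no-cache-key-include-protocol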
Answer : B
You have an App Engine application that needs to be updated. You want to test the update with production traffic before replacing the current application version.
What should you do?
A. Deploy the update using the Instance Group Updater to create a partial rollout, which allows for canary testing.
B. Deploy the update as a new version in the App Engine application, and split traffic between the new and current versions.
C. Deploy the update in a new VPC, and use Google's global HTTP load balancing to split traffic between the update and current applications.
D. Deploy the update as a new App Engine application, and use Google's global HTTP load balancing to split traffic between the new and current applications.
Answer : B
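Answer B maps to a deploy without promotion followed by a traffic split; version names and percentages below are placeholders:

# Deploy the update as a new version without routing traffic to it.
gcloud app deploy --version=v2 --no-promote
# Send 10% of production traffic to the new version.
gcloud app services set-traffic default --splits=v1=0.9,v2=0.1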
A. Create an egress rule with priority 1000 to deny all traffic for all instances. Create another egress rule with priority 100 to allow the Active Directory traffic for all instances.
B. Create an egress rule with priority 100 to deny all traffic for all instances. Create another egress rule with priority 1000 to allow the Active Directory traffic for all instances.
C. Create an egress rule with priority 1000 to allow the Active Directory traffic. Rely on the implied deny egress rule with priority 100 to block all traffic for all instances.
D. Create an egress rule with priority 100 to allow the Active Directory traffic. Rely on the implied deny egress rule with priority 1000 to block all traffic for all instances.
Answer : A
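Answer A in gcloud form. Lower numbers mean higher priority, so the allow rule at 100 is evaluated before the deny rule at 1000; network, ports, and destination range are placeholders:

# Deny all egress at low priority...
gcloud compute firewall-rules create deny-all-egress \
  --network=my-vpc --direction=EGRESS --action=DENY \
  --rules=all --destination-ranges=0.0.0.0/0 --priority=1000
# ...and allow only Active Directory traffic at higher priority.
gcloud compute firewall-rules create allow-ad-egress \
  --network=my-vpc --direction=EGRESS --action=ALLOW \
  --rules=tcp:88,tcp:389,tcp:636 --destination-ranges=10.10.0.0/24 \
  --priority=100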
A. Export Cloud Machine Learning Engine performance metrics from Stackdriver to BigQuery, to be used to analyze the efficiency of the model.
B. Build a roadmap to move the machine learning model training from Cloud GPUs to Cloud TPUs, which offer better results.
C. Monitor Compute Engine announcements for availability of newer CPU architectures, and deploy the model to them as soon as they are available for additional performance.
D. Save a history of recommendations and results of the recommendations in BigQuery, to be used as training data.
Answer : D
A. Use the Horizontal Pod Autoscaler and enable cluster autoscaling. Use an Ingress resource to load-balance the HTTPS traffic.
B. Use the Horizontal Pod Autoscaler and enable cluster autoscaling on the Kubernetes cluster. Use a Service resource of type LoadBalancer to load-balance the HTTPS traffic.
C. Enable autoscaling on the Compute Engine instance group. Use an Ingress resource to load-balance the HTTPS traffic.
D. Enable autoscaling on the Compute Engine instance group. Use a Service resource of type LoadBalancer to load-balance the HTTPS traffic.
Answer : B
Reference:
https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-autoscaler
Answer : B
Reference:
https://cloud.google.com/load-balancing/docs/https/url-map
D. Monitor https://status.cloud.google.com/feed.atom and only make requests if Cloud Storage is not reporting an incident.
Answer : B
Reference:
https://cloud.google.com/storage/docs/json_api/v1/status-codes
A. Use Deployment Manager to automate service provisioning. Use Activity Logs to monitor and debug your tests.
B. Use Deployment Manager to automate service provisioning. Use Stackdriver to monitor and debug your tests.
C. Use gcloud scripts to automate service provisioning. Use Activity Logs to monitor and debug your tests.
D. Use gcloud scripts to automate service provisioning. Use Stackdriver to monitor and debug your tests.
Answer : B
Answer : A
A. Store the data in Google Drive and manually delete records as they expire.
B. Anonymize the data using the Cloud Data Loss Prevention API and store it indefinitely.
C. Store the data in Cloud Storage and use lifecycle management to delete files when they expire.
D. Store the data in Cloud Storage and run a nightly batch script that deletes all expired data.
Answer : C
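A lifecycle rule that deletes objects after a fixed retention period might look like this; the 60-day age and bucket name are placeholders for whatever the regulation requires:

# lifecycle.json: delete objects 60 days after creation.
cat > lifecycle.json <<'EOF'
{"rule": [{"action": {"type": "Delete"}, "condition": {"age": 60}}]}
EOF
gsutil lifecycle set lifecycle.json gs://regulated-data-bucket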
A. Set the memcache service level to dedicated. Create a key from the hash of the query, and return database values from memcache before issuing a query to Cloud SQL.
B. Set the memcache service level to dedicated. Create a cron task that runs every minute to populate the cache with keys containing query results.
C. Set the memcache service level to shared. Create a cron task that runs every minute to save all expected queries to a key called "cached_queries".
D. Set the memcache service level to shared. Create a key called "cached_queries", and return database values from the key before using a query to Cloud SQL.
Answer : A
A. Using the Cron service provided by App Engine, publish messages directly to a message-processing utility service running on Compute Engine instances.
B. Using the Cron service provided by App Engine, publish messages to a Cloud Pub/Sub topic. Subscribe to that topic using a message-processing utility service running on Compute Engine instances.
C. Using the Cron service provided by Google Kubernetes Engine (GKE), publish messages directly to a message-processing utility service running on Compute Engine instances.
D. Using the Cron service provided by GKE, publish messages to a Cloud Pub/Sub topic. Subscribe to that topic using a message-processing utility service running on Compute Engine instances.
Answer : B
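The Pub/Sub side of answer B is two commands; the cron job itself is declared in the App Engine cron.yaml. Topic and subscription names are placeholders:

# Topic the cron-triggered handler publishes to, and the pull
# subscription the Compute Engine workers consume.
gcloud pubsub topics create nightly-jobs
gcloud pubsub subscriptions create nightly-jobs-workers --topic=nightly-jobs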
A. Compress and upload both archived files and files uploaded daily using the gsutil -m option.
B. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to Cloud Storage. Establish a connection with Google using a Dedicated Interconnect or Direct Peering
connection and use it to upload files daily.
C. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to Cloud Storage. Establish one Cloud VPN Tunnel to VPC networks over the public internet, and compress and upload files daily using the gsutil -m option.
D. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to Cloud Storage. Establish a Cloud VPN Tunnel to VPC networks over the public internet, and compress and
upload files daily.
Answer : B
Answer : D
Reference:
https://cloud.google.com/pubsub/docs/ordering
A. 1. Define a migration plan based on the list of the applications and their dependencies. 2. Migrate all virtual machines into Compute Engine individually with Migrate for Compute Engine.
B. 1. Perform an assessment of virtual machines running in the current VMware environment. 2. Create images of all disks. Import disks on Compute Engine. 3. Create standard virtual machines where the boot disks
are the ones you have imported.
C. 1. Perform an assessment of virtual machines running in the current VMware environment. 2. Define a migration plan, prepare a Migrate for Compute Engine migration RunBook, and execute the migration.
D. 1. Perform an assessment of virtual machines running in the current VMware environment. 2. Install a third-party agent on all selected virtual machines. 3. Migrate all virtual machines into Compute Engine.
Answer : C
Explanation:
The Migrate for Compute Engine migration framework has four phases:
• Assess. In this phase, you assess your source environment, assess the workloads that you want to migrate to Google Cloud, and assess which VMs support each workload.
• Plan. In this phase, you create the basic infrastructure for Migrate for Compute Engine, such as provisioning the resource hierarchy and setting up network access.
• Deploy. In this phase, you migrate the VMs from the source environment to Compute Engine.
• Optimize. In this phase, you begin to take advantage of the cloud technologies and capabilities.
Reference:
https://cloud.google.com/architecture/migrating-vms-migrate-for-compute-engine-getting-started
A. Use a managed instance group with instances in multiple zones, use Cloud Filestore, and use an HTTP load balancer in front of the instances.
B. Use a managed instance group with instances in multiple zones, use Cloud Filestore, and use a network load balancer in front of the instances.
C. Use an unmanaged instance group with an active and standby instance in different zones, use a regional persistent disk, and use an HTTP load balancer in front of the instances.
D. Use an unmanaged instance group with an active and standby instance in different zones, use a regional persistent disk, and use a network load balancer in front of the instances.
Answer : D
Reference:
https://cloud.google.com/compute/docs/instance-groups
A. Use OpenVPN to configure a VPN tunnel between the on-premises environment and Google Cloud.
B. Configure a direct peering connection between the on-premises environment and Google Cloud.
C. Use Cloud VPN to configure a VPN tunnel between the on-premises environment and Google Cloud.
D. Configure a Cloud Dedicated Interconnect connection between the on-premises environment and Google Cloud.
Answer : C
Reference:
https://cloud.google.com/architecture/setting-up-private-access-to-cloud-apis-through-vpn-tunnels
A. Deploy a new revision to Cloud Run with the new version. Configure traffic percentage between revisions.
B. Deploy a new service to Cloud Run with the new version. Add a Cloud Load Balancing instance in front of both services.
C. In the Google Cloud Console page for Cloud Run, set up continuous deployment using Cloud Build for the development branch. As part of the Cloud Build trigger, configure the substitution variable
TRAFFIC_PERCENTAGE with the percentage of traffic you want directed to a new version.
D. In the Google Cloud Console, configure Traffic Director with a new Service that points to the new version of the application on Cloud Run. Configure Traffic Director to send a small percentage of traffic to the new
version of the application.
Answer : C
A. Navigate the predefined dashboards in the Cloud Monitoring workspace, and then add metrics and create alert policies.
B. Navigate the predefined dashboards in the Cloud Monitoring workspace, create custom metrics, and install alerting software on a Compute Engine instance.
C. Write a shell script that gathers metrics from GKE nodes, publish these metrics to a Pub/Sub topic, export the data to BigQuery, and make a Data Studio dashboard.
D. Create a custom dashboard in the Cloud Monitoring workspace for each incident, and then add metrics and create alert policies.
Answer : D
Reference:
https://cloud.google.com/monitoring/charts/dashboards
A. Sharding
B. Read replicas
C. Binary logging
D. Automated backups
E. Semisynchronous replication
Answer : CD
Explanation:
Backups help you restore lost data to your Cloud SQL instance. Additionally, if an instance is having a problem, you can restore it to a previous state by using the backup to overwrite it. Enable automated backups for
any instance that contains necessary data. Backups protect your data from loss or damage.
Enabling automated backups, along with binary logging, is also required for some operations, such as clone and replica creation.
Reference:
https://cloud.google.com/sql/docs/mysql/backup-recovery/backups
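Both settings can be enabled on an existing instance; the instance name and backup window are placeholders:

# Turn on automated backups and binary logging (the latter is also
# what clone and replica creation require).
gcloud sql instances patch my-instance \
  --backup-start-time=02:00 --enable-bin-log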
A. Use a unique identifier for each individual. Upon a deletion request, delete all rows from BigQuery with this identifier.
B. When ingesting new data in BigQuery, run the data through the Data Loss Prevention (DLP) API to identify any personal information. As part of the DLP scan, save the result to Data Catalog. Upon a deletion
request, query Data Catalog to find the column with personal information.
C. Create a BigQuery view over the table that contains all data. Upon a deletion request, exclude the rows that affect the subject's data from this view. Use this view instead of the source table for all analysis tasks.
D. Use a unique identifier for each individual. Upon a deletion request, overwrite the column with the unique identifier with a salted SHA256 of its value.
Answer : B
Your company has announced that they will be outsourcing operations functions. You want to allow developers to easily stage new versions of a cloud-based application in the production environment and allow the
outsourced operations team to autonomously promote staged versions to production. You want to minimize the operational overhead of the solution. Which Google Cloud product should you migrate to?
A. App Engine
B. GKE On-Prem
C. Compute Engine
D. Google Kubernetes Engine
Answer : D
Reference:
https://cloud.google.com/security/compliance/eba-outsourcing-mapping-gcp
A. Create a shell script that uses the gcloud command to change the machine type of the development and acceptance instances to a smaller machine type outside of office hours. Schedule the shell script on one of the
production instances to automate the task.
B. Use Cloud Scheduler to trigger a Cloud Function that will stop the development and acceptance environments after office hours and start them just before office hours.
C. Deploy the development and acceptance applications on a managed instance group and enable autoscaling.
D. Use regular Compute Engine instances for the production environment, and use preemptible VMs for the acceptance and development environments.
Answer : D
Reference:
https://cloud.google.com/blog/products/it-ops/best-practices-for-optimizing-your-cloud-costs
A. 1. Set up Cloud VPN to provide private network connectivity between the Compute Engine application and the on-premises MySQL server. 2. Stop the on-premises application. 3. Create a mysqldump of the on-
premises MySQL server. 4. Upload the dump to a Cloud Storage bucket. 5. Import the dump into Cloud SQL. 6. Modify the source code of the application to write queries to both databases and read from its local
database. 7. Start the Compute Engine application. 8. Stop the on-premises application.
B. 1. Set up Cloud SQL proxy and MySQL proxy. 2. Create a mysqldump of the on-premises MySQL server. 3. Upload the dump to a Cloud Storage bucket. 4. Import the dump into Cloud SQL. 5. Stop the on-premises
application. 6. Start the Compute Engine application.
C. 1. Set up Cloud VPN to provide private network connectivity between the Compute Engine application and the on-premises MySQL server. 2. Stop the on-premises application. 3. Start the Compute Engine
application, configured to read and write to the on-premises MySQL server. 4. Create the replication configuration in Cloud SQL. 5. Configure the source database server to accept connections from the Cloud SQL
replica. 6. Finalize the Cloud SQL replica configuration. 7. When replication has been completed, stop the Compute Engine application. 8. Promote the Cloud SQL replica to a standalone instance. 9. Restart the
Compute Engine application, configured to read and write to the Cloud SQL standalone instance.
D. 1. Stop the on-premises application. 2. Create a mysqldump of the on-premises MySQL server. 3. Upload the dump to a Cloud Storage bucket. 4. Import the dump into Cloud SQL. 5. Start the application on
Compute Engine.
Answer : A
A. Remove the default route on all VPCs. Move all approved instances into a new subnet that has a default route to an internet gateway.
B. Create a new VPC in custom mode. Create a new subnet for the approved instances, and set a default route to the internet gateway on this new subnet.
C. Implement a Cloud NAT solution to remove the need for external IP addresses entirely.
D. Set an Organization Policy with a constraint on constraints/compute.vmExternalIpAccess. List the approved instances in the allowedValues list.
Answer : D
Reference:
https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address
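A sketch of the constraint from answer D, set from a policy file; project, zone, and instance are placeholders:

# policy.yaml: allow external IPs only on the approved instance.
cat > policy.yaml <<'EOF'
constraint: constraints/compute.vmExternalIpAccess
listPolicy:
  allowedValues:
    - projects/my-project/zones/us-central1-a/instances/approved-instance
EOF
gcloud resource-manager org-policies set-policy policy.yaml --project=my-project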
Answer : B
Answer : B
Reference:
https://cloud.google.com/network-intelligence-center/docs/firewall-insights/how-to/using-firewall-insights
A. 1. Create a VPC Service Controls perimeter that includes the projects with the buckets. 2. Create an access level with the CIDR of the office network.
B. 1. Create a firewall rule for all instances in the Virtual Private Cloud (VPC) network for source range. 2. Use the Classless Inter-domain Routing (CIDR) of the office network.
C. 1. Create a Cloud Function to remove IAM permissions from the buckets, and another Cloud Function to add IAM permissions to the buckets. 2. Schedule the Cloud Functions with Cloud Scheduler to add
permissions at the start of business and remove permissions at the end of business.
D. 1. Create a Cloud VPN to the office network. 2. Configure Private Google Access for on-premises hosts.
Answer : C
Answer : C
A. Create a snapshot schedule for the disk containing the application data. Whenever a zonal outage occurs, use the latest snapshot to restore the disk in the same zone.
B. Configure the Compute Engine instances with an instance template for the application, and use a regional persistent disk for the application data. Whenever a zonal outage occurs, use the instance template to spin
up the application in another zone in the same region. Use the regional persistent disk for the application data.
C. Create a snapshot schedule for the disk containing the application data. Whenever a zonal outage occurs, use the latest snapshot to restore the disk in another zone within the same region.
D. Configure the Compute Engine instances with an instance template for the application, and use a regional persistent disk for the application data. Whenever a zonal outage occurs, use the instance template to spin
up the application in another region. Use the regional persistent disk for the application data.
Answer : D
A. Create a Cloud VPN connection from the new VPC to the data center, create a Cloud Router, and apply new IP addresses so there is no overlapping IP space.
B. Create a Cloud VPN connection from the new VPC to the data center, and create a Cloud NAT instance to perform NAT on the overlapping IP space.
C. Create a Cloud VPN connection from the new VPC to the data center, create a Cloud Router, and apply a custom route advertisement to block the overlapping IP space.
D. Create a Cloud VPN connection from the new VPC to the data center, and apply a firewall rule that blocks the overlapping IP space.
Answer : A
Answer : A
Reference:
https://cloud.google.com/architecture/hadoop/hadoop-gcp-migration-jobs
Instance #1 is an exception and must communicate directly with both Instance #2 and Instance #3 via internal IPs. How should you accomplish this?
Answer : B
A. Create a Compute Engine instance template using the most recent Debian image. Create an instance from this template, and install and configure the application as part of the startup script. Repeat this process
whenever a new Google-managed Debian image becomes available.
B. Create a Debian-based Compute Engine instance, install and configure the application, and use OS patch management to install available updates.
C. Create an instance with the latest available Debian image. Connect to the instance via SSH, and install and configure the application on the instance. Repeat this process whenever a new Google-managed Debian
image becomes available.
D. Create a Docker container with Debian as the base image. Install and configure the application as part of the Docker image creation process. Host the container on Google Kubernetes Engine and restart the
container whenever a new update is available.
Answer : B
Reference:
https://cloud.google.com/compute/docs/os-patch-management
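With the OS Config agent installed on the instance, updates can then be applied on demand or on a schedule; a minimal on-demand sketch:

# Run a patch job against every eligible VM in the project
# (requires the OS Config agent and API).
gcloud compute os-config patch-jobs execute --instance-filter-all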