Database
QUESTION: 1
You are developing a new application on a VM that is on your corporate network. The
application will use Java Database Connectivity (JDBC) to connect to Cloud SQL for
PostgreSQL. Your Cloud SQL instance is configured with IP address 192.168.3.48, and
SSL is disabled. You want to ensure that your application can access your database
instance without requiring configuration changes to your database.
What should you do?
A. Define a connection string using your Google username and password to point to
the external (public) IP address of your Cloud SQL instance.
C. Define a connection string using Cloud SQL Auth proxy configured with a service
account to point to the internal (private) IP address of your Cloud SQL instance.
D. Define a connection string using Cloud SQL Auth proxy configured with a service
account to point to the external (public) IP address of your Cloud SQL instance.
Answer(s): C
Explanation:
The Cloud SQL connectors are libraries that provide encryption and IAM-based
authorization when connecting to a Cloud SQL instance. They can't provide a network
path to a Cloud SQL instance if one is not already present. Other ways to connect to a
Cloud SQL instance include using a database client or the Cloud SQL Auth proxy.
https://cloud.google.com/sql/docs/postgres/connect-connectors
https://github.com/GoogleCloudPlatform/cloud-sql-jdbc-socket-factory/blob/main/docs/jdbc-postgres.md
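For illustration only, here is a minimal Python sketch of the same connector-based pattern (the JDBC socket factory referenced above plays the equivalent role for Java); the project, instance, and credential names are placeholders:

```python
# Sketch: connect over the instance's private IP with IAM-authorized,
# encrypted transport, without changing the database instance itself.
# Requires: pip install "cloud-sql-python-connector[pg8000]" sqlalchemy
import sqlalchemy
from google.cloud.sql.connector import Connector, IPTypes

connector = Connector()  # uses the service account's Application Default Credentials

def getconn():
    return connector.connect(
        "my-project:my-region:my-instance",  # placeholder connection name
        "pg8000",
        user="app-user",          # placeholder credentials
        password="app-password",
        db="app-db",
        ip_type=IPTypes.PRIVATE,  # point at the internal (private) IP, as in answer C
    )

pool = sqlalchemy.create_engine("postgresql+pg8000://", creator=getconn)
with pool.connect() as conn:
    print(conn.execute(sqlalchemy.text("SELECT 1")).scalar())
```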
QUESTION: 2
Your digital-native business runs its database workloads on Cloud SQL. Your website
must be globally accessible 24/7. You need to prepare your Cloud SQL instance for
high availability (HA). You want to follow Google-recommended practices.
What should you do? (Choose two.)
Answer(s): D,E
Explanation:
D. Enable point-in-time recovery: this feature allows you to restore your database to a specific point in time. It helps protect against data loss and can be used in the event of data corruption or accidental data deletion.
E. Schedule automated backups: automated backups allow you to take regular backups of your database without manual intervention. You can use these backups to restore your database in the event of data loss or corruption.
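As a sketch of how both recommendations can be applied through the Cloud SQL Admin API (field names follow the public API; project and instance names are placeholders):

```python
# Sketch: enable scheduled automated backups and point-in-time recovery.
# Requires: pip install google-api-python-client
from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1beta4")
body = {
    "settings": {
        "backupConfiguration": {
            "enabled": True,                     # E: schedule automated backups
            "startTime": "03:00",                # UTC backup window
            "pointInTimeRecoveryEnabled": True,  # D: PITR (PostgreSQL/SQL Server;
                                                 # MySQL uses binaryLogEnabled instead)
        }
    }
}
sqladmin.instances().patch(
    project="my-project", instance="my-instance", body=body
).execute()
```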
QUESTION: 3
Your company wants to move to Google Cloud. Your current data center is closing in six
months. You are running a large, highly transactional Oracle application footprint on
VMWare. You need to design a solution with minimal disruption to the current
architecture and provide ease of migration to Google Cloud.
What should you do?
Answer(s): A
Explanation:
https://cloud.google.com/blog/products/databases/migrate-databases-to-google-cloud-vmware-engine-gcve
QUESTION: 4
Your customer has a global chat application that uses a multi-regional Cloud Spanner
instance. The application has recently experienced degraded performance after a new
version of the application was launched. Your customer asked you for assistance.
During initial troubleshooting, you observed high read latency.
What should you do?
Answer(s): C
Explanation:
To troubleshoot high read latency, you can use SQL statements to analyze the
SPANNER_SYS.READ_STATS* tables. These tables contain statistics about read
operations in Cloud Spanner, including the number of reads, read latency, and the
number of read errors. By analyzing these tables, you can identify the cause of the high
read latency and take appropriate action to resolve the issue. Other options, such as
using query parameters to speed up frequently executed queries or changing the Cloud
Spanner configuration from multi-region to single region, may not be directly related to
the issue of high read latency. Similarly, analyzing the
SPANNER_SYS.QUERY_STATS* tables, which contain statistics about query
operations, may not be relevant to the issue of high read latency.
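For example, a short Python sketch that queries one of the read statistics tables (the instance and database names are placeholders, and the selected columns are only a subset; check the documentation for the full column list):

```python
# Sketch: find the reads that consumed the most CPU in recent 10-minute windows.
from google.cloud import spanner

client = spanner.Client()
database = client.instance("chat-instance").database("chat-db")  # placeholders

with database.snapshot() as snapshot:
    rows = snapshot.execute_sql(
        """
        SELECT interval_end, fprint, execution_count, avg_cpu_seconds
        FROM spanner_sys.read_stats_top_10minute
        ORDER BY avg_cpu_seconds DESC
        LIMIT 10
        """
    )
    for row in rows:
        print(row)
```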
QUESTION: 5
Your company has PostgreSQL databases on-premises and on Amazon Web Services
(AWS). You are planning multiple database migrations to Cloud SQL in an effort to
reduce costs and downtime. You want to follow Google-recommended practices and
use Google native data migration tools. You also want to closely monitor the migrations
as part of the cutover strategy.
What should you do?
B. Use Database Migration Service for one-time migrations, and use third-party or
partner tools for change data capture (CDC) style migrations.
Answer(s): A
Explanation:
https://cloud.google.com/blog/products/databases/tips-for-migrating-across-compatible-database-engines
QUESTION: 6
You are setting up a Bare Metal Solution environment. You need to update the
operating system to the latest version. You need to connect the Bare Metal Solution
environment to the internet so you can receive software updates.
What should you do?
Answer(s): C
Explanation:
https://cloud.google.com/bare-metal/docs/bms-setup?hl=en#bms-access-internet-vm-nat
The documentation specifically says that "Setting up a NAT gateway on a Compute Engine VM" is the way to give BMS internet access.
QUESTION: 7
Your organization is running a MySQL workload in Cloud SQL. Suddenly you see a
degradation in database performance. You need to identify the root cause of the
performance degradation.
What should you do?
B. Use Cloud Monitoring to monitor CPU, memory, and storage utilization metrics.
Answer(s): B
Explanation:
https://cloud.google.com/sql/docs/mysql/diagnose-issues: "If your instance stops responding to connections or performance is degraded, make sure it conforms to the Operational Guidelines."
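A minimal sketch of pulling one such metric programmatically (the project name is a placeholder):

```python
# Sketch: read the last hour of CPU utilization for all Cloud SQL databases.
import time
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"start_time": {"seconds": now - 3600}, "end_time": {"seconds": now}}
)
series = client.list_time_series(
    request={
        "name": "projects/my-project",  # placeholder project
        "filter": 'metric.type = "cloudsql.googleapis.com/database/cpu/utilization"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)
for ts in series:
    print(ts.resource.labels["database_id"], ts.points[0].value.double_value)
```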
QUESTION: 8
You work for a large retail and ecommerce company that is starting to extend their
business globally. Your company plans to migrate to Google Cloud. You want to use
platforms that will scale easily, handle transactions with the least amount of latency, and
provide a reliable customer experience. You need a storage layer for sales transactions
and current inventory levels. You want to retain the same relational schema that your
existing platform uses.
What should you do?
A. Store your data in Firestore in a multi-region location, and place your compute
resources in one of the constituent regions.
B. Deploy Cloud Spanner using a multi-region instance, and place your compute
resources close to the default leader region.
D. Deploy a Bigtable instance with a cluster in one region and a replica cluster in
another geographic region.
Answer(s): B
QUESTION: 9
You host an application in Google Cloud. The application is located in a single region
and uses Cloud SQL for transactional data. Most of your users are located in the same
time zone and expect the application to be available 7 days a week, from 6 AM to 10
PM. You want to ensure regular maintenance updates to your Cloud SQL instance
without creating downtime for your users.
Answer(s): A
Explanation:
Configure a maintenance window during a period when no users will be on the system.
Control the order of update by setting non-production instances to earlier and
production instances to later.
QUESTION: 10
A. Identify and optimize slow running queries, or set parallel replication flags.
C. Edit the primary instance to upgrade to a larger disk, and increase vCPU count.
Answer(s): A
Explanation:
https://cloud.google.com/sql/docs/mysql/replication/replication-
lag#optimize_queries_and_schema
QUESTION: 11
Answer(s): A
Explanation:
https://cloud.google.com/spanner/docs/iam#spanner.backupWriter
QUESTION: 12
You are designing an augmented reality game for iOS and Android devices. You plan to
use Cloud Spanner as the primary backend database for game state storage and player
authentication. You want to track in-game rewards that players unlock at every stage of
the game. During the testing phase, you discovered that costs are much higher than
anticipated, but the query response times are within the SLA. You want to follow Google-
recommended practices. You need the database to be performant and highly available
while you keep costs low.
What should you do?
A. Manually scale down the number of nodes after the peak period has passed.
C. Use the Cloud Spanner query optimizer to determine the most efficient way to
execute the SQL query.
Answer(s): D
Explanation:
Granular instance sizing is available in Public Preview. With this feature, you can run workloads on Spanner at as low as 1/10th the cost of regular instances.
https://cloud.google.com/blog/products/databases/get-more-out-of-spanner-with-granular-instance-sizing
QUESTION: 13
You recently launched a new product to the US market. You currently have two Bigtable
clusters in one US region to serve all the traffic. Your marketing team is planning an
immediate expansion to APAC. You need to roll out the regional expansion while
implementing high availability according to Google-recommended practices.
What should you do?
Answer(s): D
Explanation:
https://cloud.google.com/bigtable/docs/replication-settings#regional-failover
QUESTION: 14
Your ecommerce website captures user clickstream data to analyze customer traffic
patterns in real time and support personalization features on your website. You plan to
analyze this data using big data tools. You need a low-latency solution that can store 8
TB of data and can scale to millions of read and write requests per second.
What should you do?
A. Write your data into Bigtable and use Dataproc and the Apache HBase libraries
for analysis.
B. Deploy a Cloud SQL environment with read replicas for improved performance.
Use Datastream to export data to Cloud Storage and analyze with Dataproc and
the Cloud Storage connector.
D. Stream your data into BigQuery and use Dataproc and the BigQuery Storage API
to analyze large volumes of data.
Answer(s): A
Explanation:
Bigtable can store the 8 TB dataset and scales horizontally to millions of read and write requests per second at low latency, and the HBase libraries on Dataproc can be used for analysis. Memorystore is not a fit here: Memorystore for Memcached supports clusters only as large as 5 TB, which is smaller than this workload's 8 TB of data.
QUESTION: 15
Your company uses Cloud Spanner for a mission-critical inventory management system
that is globally available. You recently loaded stock keeping unit (SKU) and product
catalog data from a company acquisition and observed hot-spots in the Cloud Spanner
database. You want to follow Google-recommended schema design practices to avoid
performance degradation.
What should you do? (Choose two.)
Answer(s): D,E
Explanation:
https://cloud.google.com/spanner/docs/schema-design
D: high cardinality means more unique values in the column, which is a good thing for a hotspotting issue. E: Spanner specifically has this feature to reduce hotspotting; essentially, it generates unique values.
https://cloud.google.com/spanner/docs/schema-design#bit_reverse_primary_key
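To illustrate the idea behind bit-reversal (Spanner offers bit-reversed values natively per the link above; this is only an application-side sketch of the concept):

```python
# Sketch: reversing the low bits of a sequential ID spreads consecutive writes
# across the keyspace instead of concentrating them at the end of the table.
def bit_reversed_id(seq: int, bits: int = 63) -> int:
    result = 0
    for i in range(bits):
        if seq & (1 << i):
            result |= 1 << (bits - 1 - i)
    return result

for seq in (1, 2, 3):
    print(seq, bit_reversed_id(seq))
# 1 -> 4611686018427387904, 2 -> 2305843009213693952, 3 -> 6917529027641081856
```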
QUESTION: 16
You are managing multiple applications connecting to a database on Cloud SQL for
PostgreSQL. You need to be able to monitor database performance to easily identify
applications with long-running and resource-intensive queries.
What should you do?
C. Use the Cloud Monitoring dashboard with available metrics from Cloud SQL.
Answer(s): B
Explanation:
https://cloud.google.com/sql/docs/mysql/using-query-insights#introduction
QUESTION: 17
You are building an application that allows users to customize their website and mobile
experiences. The application will capture user information and preferences. User
profiles have a dynamic schema, and users can add or delete information from their
profile. You need to ensure that user changes automatically trigger updates to your
downstream BigQuery data warehouse.
What should you do?
A. Store your data in Bigtable, and use the user identifier as the key. Use one
column family to store user profile data, and use another column family to store
user preferences.
B. Use Cloud SQL, and create different tables for user profile data and user
preferences from your recommendations model. Use SQL to join the user profile
data and preferences.
C. Use Firestore in Native mode, and store user profile data as a document. Update
the user profile with preferences specific to that user and use the user identifier
to query.
D. Use Firestore in Datastore mode, and store user profile data as a document.
Update the user profile with preferences specific to that user and use the user
identifier to query.
Answer(s): C
Explanation:
Use Firestore in Datastore mode for new server projects. Firestore in Datastore mode
allows you to use established Datastore server architectures while removing
fundamental Datastore limitations. Datastore mode can automatically scale to millions of
writes per second. Use Firestore in Native mode for new mobile and web apps.
Firestore offers mobile and web client libraries with real-time and offline features. Native
mode can automatically scale to millions of concurrent clients.
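A short sketch of the document model in Firestore Native mode (collection, document, and field names are hypothetical):

```python
# Sketch: user profiles with a dynamic schema; fields can be added or deleted
# without migrations, and document changes can be streamed onward to BigQuery.
from google.cloud import firestore

db = firestore.Client()
profile = db.collection("users").document("user-123")

profile.set({"name": "Ada", "preferences": {"theme": "dark"}})

# Add a nested preference and delete a field in a single update.
profile.update(
    {"preferences.language": "en", "name": firestore.DELETE_FIELD}
)
```

For the downstream warehouse updates, something like the Firebase "Stream Firestore to BigQuery" extension can propagate such document changes to a BigQuery table.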
QUESTION: 18
Your application uses Cloud SQL for MySQL. Your users run reports on data that relies
on near-real time; however, the additional analytics caused excessive load on the
primary database. You created a read replica for the analytics workloads, but now your
users are complaining about the lag in data changes and that their reports are still slow.
You need to improve the report performance and shorten the lag in data replication
without making changes to the current reports.
Which two approaches should you implement? (Choose two.)
B. Create additional read replicas, and partition your analytics users to use different
read replicas.
C. Disable replication on the read replica, and set the flag for parallel replication on
the read replica.
Re-enable replication and optimize performance by setting flags on the primary
instance.
D. Disable replication on the primary instance, and set the flag for parallel
replication on the primary instance. Re-enable replication and optimize
performance by setting flags on the read replica.
Answer(s): B,C
Explanation:
Two problems are presented: replication lag and slow report performance. E is eliminated because using BigQuery
would mean changes to the current reports. Report slowness could be the result of poor
indexing or just too much read load (or both!). Since excessive load is mentioned in the
question, creating additional read replicas and spreading the analytics workload around
makes B correct and eliminates A as a way to speed up reporting. That leaves the
replication problem. Cloud SQL enables single threaded replication by default, so it
stands to reason enabling parallel replication would help the lag. To do that you disable
replication on the replica (not the primary), set flags on the replica and optionally set
flags on the primary instance to optimize performance for parallel replication. That
makes C correct and D incorrect.
https://cloud.google.com/sql/docs/mysql/replication/manage-replicas#configuring-parallel-replication
QUESTION: 19
You are evaluating Cloud SQL for PostgreSQL as a possible destination for your on-
premises PostgreSQL instances. Geography is becoming increasingly relevant to
customer privacy worldwide. Your solution must support data residency requirements
and include a strategy to:
configure where data is stored
control where the encryption keys are stored
govern the access to data
What should you do?
A. Replicate Cloud SQL databases across different zones.
B. Create a Cloud SQL for PostgreSQL instance on Google Cloud for the data that
does not need to adhere to data residency requirements. Keep the data that
must adhere to data residency requirements on-premises. Make application
changes to support both databases.
C. Allow application access to data only if the users are in the same region as the
Google Cloud region for the Cloud SQL for PostgreSQL database.
Answer(s): D
Explanation:
https://cloud.google.com/blog/products/identity-security/meet-data-residency-requirements-with-google-cloud
QUESTION: 20
Your customer is running a MySQL database on-premises with read replicas. The
nightly incremental backups are expensive and add maintenance overhead. You want
to follow Google-recommended practices to migrate the database to Google Cloud, and
you need to ensure minimal downtime.
What should you do?
A. Create a Google Kubernetes Engine (GKE) cluster, install MySQL on the cluster,
and then import the dump file.
B. Use the mysqldump utility to take a backup of the existing on-premises database,
and then import it into Cloud SQL.
C. Create a Compute Engine VM, install MySQL on the VM, and then import the
dump file.
D. Create an external replica, and use Cloud SQL to synchronize the data to the
replica.
Answer(s): D
Explanation:
https://cloud.google.com/sql/docs/mysql/replication/configure-replication-from-external
QUESTION: 21
Your team uses thousands of connected IoT devices to collect device maintenance data
for your oil and gas customers in real time. You want to design inspection routines,
device repair, and replacement schedules based on insights gathered from the data
produced by these devices. You need a managed solution that is highly scalable,
supports a multi-cloud strategy, and offers low latency for these IoT devices.
What should you do?
Answer(s): C
Explanation:
This scenario has Bigtable written all over it: large amounts of data from many devices to be analyzed in real time. I would even argue it could qualify as a multi-cloud solution, given the links to HBase. BUT it does not support SQL queries and is therefore not compatible (on its own) with Looker. Firestore + Looker has the same problem. Spanner + Data Studio is at least a compatible pairing, but I agree with others that it doesn't fit this use case, not least because it's Google-native. By contrast, MongoDB Atlas is a managed solution (just not by Google) which is compatible with the proposed reporting tool (Mongo's own Charts); it's specifically designed for this type of solution, and of course it can run on any cloud.
QUESTION: 22
Your application follows a microservices architecture and uses a single large Cloud SQL
instance, which is starting to have performance issues as your application grows. In the Cloud Monitoring dashboard, the CPU utilization looks normal. You want to follow
Google-recommended practices to resolve and prevent these performance issues while
avoiding any major refactoring.
What should you do?
Answer(s): D
Explanation:
https://cloud.google.com/sql/docs/mysql/best-practices#data-arch
QUESTION: 23
You need to perform a one-time migration of data from a running Cloud SQL for MySQL
instance in the us-central1 region to a new Cloud SQL for MySQL instance in the us-
east1 region. You want to follow Google-recommended practices to minimize
performance impact on the currently running instance.
What should you do?
A. Create and run a Dataflow job that uses JdbcIO to copy data from one Cloud
SQL instance to another.
B. Create two Datastream connection profiles, and use them to create a stream
from one Cloud SQL instance to another.
C. Create a SQL dump file in Cloud Storage using a temporary instance, and then
use that file to import into a new instance.
D. Create a CSV file by running the SQL statement SELECT...INTO OUTFILE, copy
the file to a Cloud Storage bucket, and import it into a new instance.
Answer(s): C
Explanation:
https://cloud.google.com/sql/docs/mysql/import-export#serverless
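A sketch of triggering such a serverless export through the Admin API (the offload flag requests a temporary instance so the source instance is not loaded; names are placeholders):

```python
# Sketch: serverless (offloaded) SQL dump export to Cloud Storage.
# Requires: pip install google-api-python-client
from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1beta4")
body = {
    "exportContext": {
        "fileType": "SQL",
        "uri": "gs://my-bucket/dump.sql.gz",  # placeholder bucket
        "databases": ["mydb"],
        "offload": True,  # run the export on a temporary instance
    }
}
sqladmin.instances().export(
    project="my-project", instance="source-instance", body=body
).execute()
```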
QUESTION: 24
You are running a mission-critical application on a Cloud SQL for PostgreSQL database
with a multi-zonal setup. The primary and read replica instances are in the same region
but in different zones. You need to ensure that you split the application load between
both instances.
What should you do?
A. Use Cloud Load Balancing for load balancing between the Cloud SQL primary
and read replica instances.
B. Use PgBouncer to set up database connection pooling between the Cloud SQL
primary and read replica instances.
C. Use HTTP(S) Load Balancing for database connection pooling between the
Cloud SQL primary and read replica instances.
D. Use the Cloud SQL Auth proxy for database connection pooling between the
Cloud SQL primary and read replica instances.
Answer(s): B
Explanation:
https://severalnines.com/blog/how-achieve-postgresql-high-availability-pgbouncer/
https://cloud.google.com/blog/products/databases/using-haproxy-to-scale-read-only-workloads-on-cloud-sql-for-postgresql
This answer is correct because PgBouncer is a lightweight connection pooler for PostgreSQL that can help you distribute read requests between the Cloud SQL primary and read replica instances. PgBouncer can also improve performance and scalability by reducing the overhead of creating new connections and reusing existing ones. You can install PgBouncer on a Compute Engine instance and configure it to connect to the Cloud SQL instances using private IP addresses or the Cloud SQL Auth proxy.
QUESTION: 25
Your organization deployed a new version of a critical application that uses Cloud SQL
for MySQL with high availability (HA) and binary logging enabled to store transactional
information. The latest release of the application had an error that caused massive data
corruption in your Cloud SQL for MySQL database. You need to minimize data loss.
What should you do?
A. Open the Google Cloud Console, navigate to SQL > Backups, and select the last
version of the automated backup before the corruption.
B. Reload the Cloud SQL for MySQL database using the LOAD DATA command to
load data from CSV files that were used to initialize the instance.
D. Fail over to the Cloud SQL for MySQL HA instance. Use that instance to recover
the transactions that occurred before the corruption.
Answer(s): C
Explanation:
Binary logging is enabled, so you can identify the point in time when the data was still good and recover to that point. https://cloud.google.com/sql/docs/mysql/backup-recovery/pitr#perform_the_point-in-time_recovery_using_binary_log_positions
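A sketch of a point-in-time recovery clone via the Admin API (the timestamp and names are placeholders; the same operation is available as gcloud sql instances clone with a point-in-time flag):

```python
# Sketch: clone the instance to its state just before the corruption occurred.
from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1beta4")
body = {
    "cloneContext": {
        "destinationInstanceName": "restored-instance",
        "pointInTime": "2024-01-15T09:30:00.000Z",  # placeholder timestamp
    }
}
sqladmin.instances().clone(
    project="my-project", instance="prod-instance", body=body
).execute()
```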
QUESTION: 26
You plan to use Database Migration Service to migrate data from a PostgreSQL on-
premises instance to Cloud SQL. You need to identify the prerequisites for creating and
automating the task.
What should you do? (Choose two.)
D. Shut down the database before the Data Migration Service task is started.
Answer(s): C,E
Explanation:
https://cloud.google.com/database-migration/docs/postgres/faq
QUESTION: 27
You are using Compute Engine on Google Cloud and your data center to manage a set
of MySQL databases in a hybrid configuration. You need to create replicas to scale
reads and to offload part of the management operation.
What should you do?
QUESTION: 28
Your company is shutting down their data center and migrating several MySQL and
PostgreSQL databases to Google Cloud. Your database operations team is severely
constrained by ongoing production releases and the lack of capacity for additional on-
premises backups. You want to ensure that the scheduled migrations happen with
minimal downtime and that the Google Cloud databases stay in sync with the on-
premises data changes until the applications can cut over.
E. Use replication from an external server to migrate the databases to Cloud SQL.
Answer(s): C,E
QUESTION: 29
A. Create a Cloud SQL for MySQL instance for your databases, and configure
Datastream to stream your database changes to Cloud SQL.
Select the Backfill historical data check box on your stream configuration to
initiate Datastream to backfill any data that is out of sync between the source and
destination.
Delete your stream when all changes are moved to Cloud SQL for MySQL, and
update your application to use the new instance.
D. Use the mysqldump utility to manually initiate a backup of MySQL during the
application maintenance window.
Move the files to Cloud Storage, and import each database into your Cloud SQL
instance.
Continue to dump each database until all the databases are migrated.
Update your application connections to the new instance.
Answer(s): B
Explanation:
https://cloud.google.com/datastream/docs/overview
QUESTION: 30
Your company uses the Cloud SQL out-of-disk recommender to analyze the storage
utilization trends of production databases over the last 30 days. Your database
operations team uses these recommendations to proactively monitor storage utilization
and implement corrective actions. You receive a recommendation that the instance is
likely to run out of disk space.
What should you do to address this storage alert?
Answer(s): C
Explanation:
https://cloud.google.com/sql/docs/mysql/instance-settings#storage-capacity-2ndgen
QUESTION: 31
You are managing a mission-critical Cloud SQL for PostgreSQL instance. Your
application team is running important transactions on the database when another DBA
starts an on-demand backup. You want to verify the status of the backup.
What should you do?
Answer(s): B
Explanation:
https://cloud.google.com/sql/docs/postgres/backup-recovery/backups#troubleshooting-backups
Under Troubleshooting, Issue: "You can't see the current operation's status."
The Google Cloud console reports only success or failure when the operation is done. It
isn't designed to show warnings or other updates. Run the gcloud sql operations list
command to list all operations for the given Cloud SQL instance
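The same operation listing is available programmatically; a sketch (project and instance names are placeholders):

```python
# Sketch: list recent operations, including any running on-demand backup.
from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1beta4")
ops = sqladmin.operations().list(
    project="my-project", instance="my-instance"
).execute()
for op in ops.get("items", []):
    print(op["operationType"], op["status"])  # e.g. BACKUP_VOLUME RUNNING
```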
QUESTION: 32
Answer(s): A
Explanation:
In the case of high CPU utilization, as mentioned in the question, refer to https://cloud.google.com/spanner/docs/identify-latency-point: "Check the CPU utilization of the instance. If the CPU utilization of the instance is above the recommended level, you should manually add more nodes, or set up auto scaling." Indexes and schema are reviewed after identifying a query with slow performance. Refer to:
https://cloud.google.com/spanner/docs/troubleshooting-performance-regressions#review-schema
QUESTION: 33
Your company uses Bigtable for a user-facing application that displays a low-latency
real-time dashboard. You need to recommend the optimal storage type for this read-
intensive database.
What should you do?
B. Recommend splitting the Bigtable instance into two instances in order to load
balance the concurrent reads.
C. Recommend hard disk drives (HDD).
Answer(s): A
Explanation:
If you plan to store extensive historical data for a large number of remote-sensing devices and then use the data to generate daily reports, the cost savings for HDD storage might justify the performance tradeoff. On the other hand, if you plan to use the data to display a real-time dashboard, it probably would not make sense to use HDD storage; reads would be much more frequent in this case, and reads that are not scans are much slower with HDD storage.
QUESTION: 34
Your organization has a critical business app that is running with a Cloud SQL for
MySQL backend database. Your company wants to build the most fault-tolerant and
highly available solution possible. You need to ensure that the application database can
survive a zonal and regional failure with a primary region of us-central1 and the backup
region of us-east1.
What should you do?
Answer(s): B
Explanation:
https://cloud.google.com/sql/docs/sqlserver/intro-to-cloud-sql-disaster-recovery
QUESTION: 35
You are building an Android game that needs to store data on a Google Cloud
serverless database. The database will log user activity, store user preferences, and
receive in-game updates. The target audience resides in developing countries that have
intermittent internet connectivity. You need to ensure that the game can synchronize
game data to the backend database whenever an internet network is available.
What should you do?
A. Use Firestore.
Answer(s): A
Explanation:
https://firebase.google.com/docs/firestore
QUESTION: 36
You released a popular mobile game and are using a 50 TB Cloud Spanner instance to
store game data in a PITR-enabled production environment.
When you analyzed the game statistics, you realized that some players are exploiting a
loophole to gather more points to get on the leaderboard. Another DBA accidentally ran
an emergency bugfix script that corrupted some of the data in the production
environment. You need to determine the extent of the data corruption and restore the
production environment.
What should you do? (Choose two.)
A. If the corruption is significant, use backup and restore, and specify a recovery
timestamp.
D. If the corruption is insignificant, use backup and restore, and specify a recovery
timestamp.
QUESTION: 37
You are starting a large CSV import into a Cloud SQL for MySQL instance that has
many open connections. You checked memory and CPU usage, and sufficient
resources are available. You want to follow Google-recommended practices to ensure
that the import will not time out.
What should you do?
A. Close idle connections or restart the instance before beginning the import
operation.
C. Ensure that the service account has the Storage Admin role.
D. Increase the number of CPUs for the instance to ensure that it can handle the
additional import operation.
Answer(s): A
Explanation:
https://cloud.google.com/sql/docs/mysql/import-export#troubleshooting
QUESTION: 38
You are migrating your data center to Google Cloud. You plan to migrate your
applications to Compute Engine and your Oracle databases to Bare Metal Solution for
Oracle. You must ensure that the applications in different projects can communicate
securely and efficiently with the Oracle databases.
What should you do?
A. Set up a Shared VPC, configure multiple service projects, and create firewall
rules.
Answer(s): A
Explanation:
https://medium.com/google-cloud/shared-vpc-in-google-cloud-64527e0a409e: "Unlike VPC peering, Shared VPC connects projects within the same organization."
QUESTION: 39
You are running an instance of Cloud Spanner as the backend of your ecommerce
website. You learn that the quality assurance (QA) team has doubled the number of
their test cases. You need to create a copy of your Cloud Spanner database in a new
test environment to accommodate the additional test cases. You want to follow Google-
recommended practices.
What should you do?
Answer(s): C
Explanation:
https://cloud.google.com/spanner/docs/import-export-overview#file-format
QUESTION: 40
You need to redesign the architecture of an application that currently uses Cloud SQL
for PostgreSQL. The users of the application complain about slow query response
times. You want to enhance your application architecture to offer sub-millisecond query
latency.
What should you do?
Answer(s): D
Explanation:
"sub-millisecond latency" always involves Memorystore. Furthermore, as we are talking
about a relational DB (Cloud SQL), Bigtable is not a solution to be considered.
QUESTION: 41
You need to migrate existing databases from Microsoft SQL Server 2016 Standard
Edition on a single Windows Server 2019 Datacenter Edition to a single Cloud SQL for
SQL Server instance. During the discovery phase of your project, you notice that your
on-premises server peaks at around 25,000 read IOPS. You need to ensure that your
Cloud SQL instance is sized appropriately to maximize read performance.
What should you do?
A. Create a SQL Server 2019 Standard on Standard machine type with 4 vCPUs,
15 GB of RAM, and 800 GB of solid-state drive (SSD).
B. Create a SQL Server 2019 Standard on High Memory machine type with at least
16 vCPUs, 104 GB of RAM, and 200 GB of SSD.
C. Create a SQL Server 2019 Standard on High Memory machine type with 16
vCPUs, 104 GB of RAM, and 4 TB of SSD.
D. Create a SQL Server 2019 Enterprise on High Memory machine type with 16
vCPUs, 104 GB of RAM, and 500 GB of SSD.
Answer(s): C
Explanation:
Given that Google SSD performance is on the order of 30 IOPS for each GB, it would require at least 833 GB to handle 25,000 IOPS; the only answer that exceeds this value is C.
https://cloud.google.com/compute/docs/disks/performance
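The sizing arithmetic, spelled out (30 read IOPS per GB is the approximate zonal SSD persistent disk figure cited above):

```python
# Sketch: minimum SSD size needed to sustain the observed read IOPS.
required_read_iops = 25_000
read_iops_per_gb = 30            # approximate zonal SSD persistent disk rate
min_ssd_gb = required_read_iops / read_iops_per_gb
print(f"{min_ssd_gb:.0f} GB")    # ~833 GB; only the 4 TB option clears this
```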
QUESTION: 42
You are managing a small Cloud SQL instance for developers to do testing. The
instance is not critical and has a recovery point objective (RPO) of several days. You
want to minimize ongoing costs for this instance.
What should you do?
B. Take one manual backup per day, and turn off transaction log retention.
Answer(s): C
Explanation:
https://cloud.google.com/sql/docs/mysql/backup-recovery/backups
QUESTION: 43
You manage a meeting booking application that uses Cloud SQL. During an important
launch, the Cloud SQL instance went through a maintenance event that resulted in a
downtime of more than 5 minutes and adversely affected your production application.
You need to immediately address the maintenance issue to prevent any unplanned
events in the future.
What should you do?
B. Migrate the Cloud SQL instance to Cloud Spanner to avoid any future disruptions
due to maintenance.
C. Contact Support to understand why your Cloud SQL instance had a downtime of
more than 5 minutes.
QUESTION: 44
You are designing a highly available (HA) Cloud SQL for PostgreSQL instance that will
be used by 100 databases. Each database contains 80 tables that were migrated from
your on-premises environment to Google Cloud. The applications that use these
databases are located in multiple regions in the US, and you need to ensure that read
and write operations have low latency.
What should you do?
A. Deploy 2 Cloud SQL instances in the us-central1 region with HA enabled, and
create read replicas in us-east1 and us-west1.
B. Deploy 2 Cloud SQL instances in the us-central1 region, and create read replicas
in us-east1 and us-west1.
C. Deploy 4 Cloud SQL instances in the us-central1 region with HA enabled, and
create read replicas in us-central1, us-east1, and us-west1.
D. Deploy 4 Cloud SQL instances in the us-central1 region, and create read replicas
in us-central1, us-east1, and us-west1.
Answer(s): A
Explanation:
https://cloud.google.com/sql/docs/mysql/quotas#table_limit
QUESTION: 45
You work in the logistics department. Your data analysis team needs daily extracts from
Cloud SQL for MySQL to train a machine learning model. The model will be used to
optimize next-day routes. You need to export the data in CSV format. You want to follow
Google-recommended practices.
What should you do?
A. Use Cloud Scheduler to trigger a Cloud Function that will run a select * from
table(s) query to call the cloudsql.instances.export API.
B. Use Cloud Scheduler to trigger a Cloud Function through Pub/Sub to call the
cloudsql.instances.export API.
Answer(s): B
Explanation:
https://cloud.google.com/blog/topics/developers-practitioners/scheduling-cloud-sql-exports-using-cloud-functions-and-cloud-scheduler
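A sketch of the Cloud Function in that pipeline, following the pattern from the blog post above (the message fields and names are hypothetical):

```python
# Sketch: Pub/Sub-triggered function that starts a CSV export to Cloud Storage.
import base64
import json

from googleapiclient import discovery

def export_sales_csv(event, context):
    # Decode the Pub/Sub message published by Cloud Scheduler (fields assumed).
    msg = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    sqladmin = discovery.build("sqladmin", "v1beta4")
    body = {
        "exportContext": {
            "fileType": "CSV",
            "uri": msg["gcs_uri"],                   # e.g. gs://exports/routes.csv
            "databases": [msg["database"]],
            "csvExportOptions": {"selectQuery": msg["query"]},
        }
    }
    sqladmin.instances().export(
        project=msg["project"], instance=msg["instance"], body=body
    ).execute()
```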
QUESTION: 46
You are choosing a database backend for a new application. The application will ingest
data points from IoT sensors. You need to ensure that the application can scale up to
millions of requests per second with sub-10ms latency and store up to 100 TB of
history.
What should you do?
C. Use Memorystore for Memcached, and add nodes as necessary to achieve the
required throughput.
D. Use Bigtable, and add nodes as necessary to achieve the required throughput.
Answer(s): D
Explanation:
Bigtable scales horizontally to millions of requests per second at sub-10 ms latency and can store 100 TB of history by adding nodes. Memorystore, by contrast, is an in-memory service and is not suited to this data volume: https://cloud.google.com/memorystore/docs/redis/redis-overview
QUESTION: 47
You are designing a payments processing application on Google Cloud. The application
must continue to serve requests and avoid any user disruption if a regional failure
occurs. You need to use AES-256 to encrypt data in the database, and you want to
control where you store the encryption key.
What should you do?
Answer(s): A
Explanation:
Yes, default encryption uses AES-256, but the question states that you need to control where you store the encryption keys. That can be achieved with customer-managed encryption keys (CMEK).
QUESTION: 48
You are managing a Cloud SQL for MySQL environment in Google Cloud. You have
deployed a primary instance in Zone A and a read replica instance in Zone B, both in
the same region. You are notified that the replica instance in Zone B was unavailable for
10 minutes. You need to ensure that the read replica instance is still working.
What should you do?
A. Use the Google Cloud Console or gcloud CLI to manually create a new clone
database.
B. Use the Google Cloud Console or gcloud CLI to manually create a new failover
replica from backup.
Answer(s): C
Explanation:
Recovery process: once Zone B becomes available again, Cloud SQL will initiate the recovery process for the impacted read replica. The recovery process involves the following steps:
1. Synchronization: Cloud SQL will compare the data in the recovered read replica with the primary instance in Zone A. If there is any data divergence due to the unavailability period, Cloud SQL will synchronize the read replica with the primary instance to ensure data consistency.
2. Catch-up replication: the recovered read replica will start catching up on the changes that occurred on the primary instance during its unavailability. It will apply the necessary updates from the primary instance's binary logs (binlogs) to bring the replica up to date.
3. Resuming read traffic: once the synchronization and catch-up replication processes are complete, the read replica in Zone B will resume its normal operation. It will be able to serve read traffic and stay updated with subsequent changes from the primary instance.
QUESTION: 49
You are migrating an on-premises application to Google Cloud. The application requires
a high availability (HA) PostgreSQL database to support business-critical functions.
Your company's disaster recovery strategy requires a recovery time objective (RTO)
and recovery point objective (RPO) within 30 minutes of failure. You plan to use a
Google Cloud managed service.
What should you do to maximize uptime for your application?
Answer(s): C
Explanation:
The best answer is to deploy an HA configuration and keep a read replica in a different region that you can promote to primary.
QUESTION: 50
Your team is running a Cloud SQL for MySQL instance with a 5 TB database that must
be available 24/7. You need to save database backups on object storage with minimal
operational overhead or risk to your production workloads.
What should you do?
B. Create a read replica, and then use the mysqldump utility to export each table.
C. Clone the Cloud SQL instance, and then use the mysqldump utility to export the
data.
D. Use the mysqldump utility on the primary database instance to export the
backup.
Answer(s): A
Explanation:
https://cloud.google.com/blog/products/databases/introducing-cloud-sql-serverless-
exports
QUESTION: 51
You are deploying a new Cloud SQL instance on Google Cloud using the Cloud SQL
Auth proxy. You have identified snippets of application code that need to access the
new Cloud SQL instance. The snippets reside and execute on an application server
running on a Compute Engine machine. You want to follow Google-recommended
practices to set up Identity and Access Management (IAM) as quickly and securely as
possible.
What should you do?
Answer(s): C
Explanation:
https://cloud.google.com/sql/docs/mysql/sql-proxy#using-a-service-account
QUESTION: 52
Answer(s): C
Explanation:
https://cloud.google.com/sql/docs/sqlserver/features
QUESTION: 53
An analytics team needs to read data out of Cloud SQL for SQL Server and update a
table in Cloud Spanner. You need to create a service account and grant least privilege
access using predefined roles.
What roles should you assign to the service account?
Answer(s): A
Explanation:
To read data out of Cloud SQL for SQL Server, you need to use a service account with
the roles/cloudsql.viewer role on the Cloud SQL instance. This role grants the service
account permission to read data from the instance.
By contrast, roles/cloudsql.instanceUser only allows logging in to the Cloud SQL instance; it does not grant permission to view any resources.
QUESTION: 54
You are responsible for designing a new database for an airline ticketing application in
Google Cloud.
This application must be able to:
Work with transactions and offer strong consistency.
Work with structured and semi-structured (JSON) data.
Scale transparently to multiple regions globally as the operation grows.
You need a Google Cloud database that meets all the requirements of the application.
What should you do?
A. Use Cloud SQL for PostgreSQL with both cross-region read replicas.
B. Use Cloud Spanner in a multi-region configuration.
Answer(s): B
Explanation:
https://cloud.google.com/blog/products/databases/manage-semi-structured-data-in-cloud-spanner-with-json
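A sketch of the JSON support in a Spanner schema (the instance, table, and column names are hypothetical):

```python
# Sketch: structured columns and a JSON column side by side in one table.
from google.cloud import spanner

client = spanner.Client()
database = client.instance("ticketing").database("bookings")  # placeholders

op = database.update_ddl([
    """CREATE TABLE Tickets (
         TicketId STRING(36) NOT NULL,
         PassengerName STRING(MAX),
         Ancillaries JSON
       ) PRIMARY KEY (TicketId)"""
])
op.result()  # wait for the schema change to complete
```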
QUESTION: 55
You are writing an application that will run on Cloud Run and require a database
running in the Cloud SQL managed service. You want to secure this instance so that it
only receives connections from applications running in your VPC environment in Google
Cloud.
What should you do?
Answer(s): D
Explanation:
https://cloud.google.com/sql/docs/mysql/connect-run#configure
https://cloud.google.com/sql/docs/mysql/connect-run#connection-pools
QUESTION: 56
You are troubleshooting a connection issue with a newly deployed Cloud SQL instance
on Google Cloud.
While investigating the Cloud SQL Proxy logs, you see the message Error 403: Access
Not Configured.
What should you do?
Answer(s): C
Explanation:
https://cloud.google.com/sql/docs/mysql/connect-auth-proxy#troubleshooting
C, because the docs say: "Make sure to enable the Cloud SQL Admin API. If it is not, you see output like Error 403: Access Not Configured" in your Cloud SQL Auth proxy logs.
QUESTION: 57
You are working on a new centralized inventory management system to track items available in 200 stores, which each have 500 GB of data. You are planning a gradual rollout of the system to a few stores each week. You need to design an SQL database architecture that minimizes costs and user disruption during each regional rollout and can scale up or down on nights and holidays.
What should you do?
B. Use Oracle Real Application Cluster (RAC) databases on Bare Metal Solution for
Oracle.
C. Use sharded Cloud SQL instances with one or more stores per database
instance.
D. Use a Bigtable cluster with autoscaling.
Answer(s): D
Explanation:
https://cloud.google.com/spanner/docs/autoscaling-overview
1. Cloud SQL maxes out at 64 TB, so it cannot hold 100 TB of data.
https://cloud.google.com/sql/docs/quotas#metrics_collection_limit
2. Scaling is done manually on Cloud SQL.
QUESTION: 58
Your organization has strict policies on tracking rollouts to production and periodically
shares this information with external auditors to meet compliance requirements. You
need to enable auditing on several Cloud Spanner databases.
What should you do?
D. Manually capture detailed DBA audit logs when changes are rolled out to higher
environments.
Answer(s): C
Explanation:
To satisfy audit reporting you would need a way to record what was changed and when.
The best answer is one which uses some kind of source code control system (SCCS).
That rules out A and B. Any mention of anything manual in a cloud environment should
look suspicious, which leaves option C. As it happens, Liquibase is an SCCS and can be
integrated with Spanner.
https://cloud.google.com/spanner/docs/use-liquibase
QUESTION: 59
Your organization has a production Cloud SQL for MySQL instance. Your instance is
configured with 16 vCPUs and 104 GB of RAM that is running between 90% and 100%
CPU utilization for most of the day. You need to scale up the database and add vCPUs
with minimal interruption and effort.
What should you do?
A. Issue a gcloud sql instances patch command to increase the number of vCPUs.
D. Back up the database, create an instance with additional vCPUs, and restore the
database.
Answer(s): A
Explanation:
https://cloud.google.com/sdk/gcloud/reference/sql/instances/patch
QUESTION: 60
You are configuring a brand new Cloud SQL for PostgreSQL database instance in
Google Cloud. Your application team wants you to deploy one primary instance, one
standby instance, and one read replica instance. You need to ensure that you are
following Google-recommended practices for high availability.
What should you do?
A. Configure the primary instance in zone A, the standby instance in zone C, and
the read replica in zone B, all in the same region.
B. Configure the primary and standby instances in zone A and the read replica in
zone B, all in the same region.
C. Configure the primary instance in one region, the standby instance in a second
region, and the read replica in a third region.
D. Configure the primary, standby, and read replica instances in zone A, all in the
same region.
Answer(s): A
Explanation:
https://cloud.google.com/sql/docs/postgres/high-availability#failover-overview
QUESTION: 61
You are running a transactional application on Cloud SQL for PostgreSQL in Google
Cloud. The database is running in a high availability configuration within one region.
You have encountered issues with data and want to restore to the last known pristine
version of the database.
What should you do?
A. Create a clone database from a read replica database, and restore the clone in
the same region.
B. Create a clone database from a read replica database, and restore the clone into
a different zone.
C. Use the Cloud SQL point-in-time recovery (PITR) feature. Restore the copy from
two hours ago to a new database instance.
D. Use the Cloud SQL database import feature. Import last week's dump file from
Cloud Storage.
Answer(s): C
Explanation:
Using import/export with last week's dump file is slow for large databases and would only restore the database to last week's state.
QUESTION: 62
Your organization has a security policy to ensure that all Cloud SQL for PostgreSQL
databases are secure. You want to protect sensitive data by using a key that meets
specific locality or residency requirements. Your organization needs to control the key's
lifecycle activities. You need to ensure that data is encrypted at rest and in transit.
What should you do?
Answer(s): B
Explanation:
https://cloud.google.com/sql/docs/postgres/configure-cmek#createcmekinstance
QUESTION: 63
Your organization has an existing app that just went viral. The app uses a Cloud SQL
for MySQL backend database that is experiencing slow disk performance while using
hard disk drives (HDDs). You need to improve performance and reduce disk I/O wait
times.
What should you do?
A. Export the data from the existing instance, and import the data into a new
instance with solid-state drives (SSDs).
B. Edit the instance to change the storage type from HDD to SSD.
C. Create a high availability (HA) failover instance with SSDs, and perform a failover
to the new instance.
D. Create a read replica of the instance with SSDs, and perform a failover to the
new instance.
Answer(s): A
Explanation:
https://stackoverflow.com/questions/72034607/can-i-change-storage-type-from-hdd-to-ssd-on-cloud-sql-after-creating-an-instance
QUESTION: 64
You are configuring a new application that has access to an existing Cloud Spanner
database. The new application reads from this database to gather statistics for a
dashboard. You want to follow Google-recommended practices when granting Identity
and Access Management (IAM) permissions.
What should you do?
B. Create a new service account, and grant it the Cloud Spanner Database Admin
role.
C. Create a new service account, and grant it the Cloud Spanner Database Reader
role.
Answer(s): C
Explanation:
https://cloud.google.com/iam/docs/overview
QUESTION: 65
Your retail organization is preparing for the holiday season. Use of catalog services is
increasing, and your DevOps team is supporting the Cloud SQL databases that power a
microservices-based application. The DevOps team has added instrumentation through
Sqlcommenter. You need to identify the root cause of why certain microservice calls are
failing.
What should you do?
B. Watch the Cloud SQL instance monitor for CPU utilization metrics.
Answer(s): A
Explanation:
Cloud Trace doesn't support Cloud SQL. Eliminate D. Cloud SQL recommenders for overprovisioned instances would tell you about Cloud SQL instances which are too large for their workload. Eliminate C. Monitoring CPU utilization wouldn't tell you why microservice calls are failing. Eliminate B. Sqlcommenter integrates with Query Insights, so A is the best answer. https://cloud.google.com/blog/topics/developers-practitioners/introducing-sqlcommenter-open-source-orm-auto-instrumentation-library
QUESTION: 66
You are designing a database architecture for a global application that stores
information about public parks worldwide. The application uses the database for read-
only purposes, and a centralized batch job updates the database nightly. You want to
select an open source, SQL-compliant database.
What should you do?
A. Use Bigtable with multi-region clusters.
Answer(s): C
QUESTION: 67
Your company is migrating their MySQL database to Cloud SQL and cannot afford any
planned downtime during the month of December. The company is also concerned with
cost, so you need the most cost-effective solution.
What should you do?
B. Use Cloud SQL maintenance settings to prevent any maintenance during the
month of December.
C. Create MySQL read replicas in different zones so that, if any downtime occurs,
the read replicas will act as the primary instance during the month of December.
D. Create a MySQL regional instance so that, if any downtime occurs, the standby
instance will act as the primary instance during the month of December.
Answer(s): B
Explanation:
https://cloud.google.com/sql/docs/mysql/maintenance?hl=fr
QUESTION: 68
Your online delivery business that primarily serves retail customers uses Cloud SQL for
MySQL for its inventory and scheduling application. The required recovery time
objective (RTO) and recovery point objective (RPO) must be in minutes rather than
hours as a part of your high availability and disaster recovery design. You need a high
availability configuration that can recover without data loss during a zonal or a regional
failure.
What should you do?
C. Set up read replicas in different zones of the same region as the primary instance
with synchronous replication, and set up read replicas in different regions with
asynchronous replication.
D. Set up read replicas in different zones of the same region as the primary instance
with asynchronous replication, and set up read replicas in different regions with
synchronous replication.
Answer(s): C
Explanation:
This answer meets the RTO and RPO requirements by using synchronous replication within the same region, which ensures that all writes made to the primary instance are replicated to disks in both zones before a transaction is reported as committed. This minimizes data loss and downtime in case of a zonal or an instance failure, and allows for a quick failover to the standby instance. This answer also meets the high availability and disaster recovery requirements by using asynchronous replication across different regions, which ensures that the data changes made to the primary instance are replicated to the read replicas in other regions with minimal delay. This provides additional redundancy and backup in case of a regional failure, and allows for a manual failover to the read replica in another region.
QUESTION: 69
Your hotel booking company is expanding into Country A, where personally identifiable
information (PII) must comply with regional data residency requirements and audits.
You need to isolate customer data in Country A from the rest of the customer data.
Answer(s): B
Explanation:
https://cloud.google.com/solutions/implementing-multi-tenancy-cloud-spanner#multi-tenancy-data-management-patterns
https://cloud.google.com/solutions/implementing-multi-tenancy-cloud-spanner
QUESTION: 70
You work for a financial services company that wants to use fully managed database
services. Traffic volume for your consumer services products has increased annually at
a constant rate with occasional spikes around holidays. You frequently need to upgrade
the capacity of your database. You want to use Cloud Spanner and include an
automated method to increase your hardware capacity to support a higher level of
concurrency.
What should you do?
C. Upgrade the Cloud Spanner instance on a periodic basis during the scheduled
maintenance window.
D. Set up alerts that are triggered when Cloud Spanner utilization metrics breach
the threshold, and then schedule an upgrade during the scheduled maintenance
window.
Answer(s): A
Explanation:
Linear scaling is best used with load patterns that change more gradually or have a few
large peaks. The method calculates the minimum number of nodes or processing units
required to keep utilization below the scaling threshold. The number of nodes or
processing units added or removed in each scaling event is not limited to a fixed step
amount. https://cloud.google.com/spanner/docs/autoscaling-overview#linear
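The linear method's core calculation, as a sketch (65% is the recommended high-priority CPU target for regional instances; the open-source Autoscaler implements the full logic):

```python
# Sketch: scale so projected CPU utilization falls back under the target.
import math

def suggested_nodes(current_nodes: int, cpu_utilization: float,
                    target: float = 0.65) -> int:
    return max(1, math.ceil(current_nodes * cpu_utilization / target))

print(suggested_nodes(current_nodes=3, cpu_utilization=0.90))  # -> 5
```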
QUESTION: 71
Your organization has a busy transactional Cloud SQL for MySQL instance. Your
analytics team needs access to the data so they can build monthly sales reports. You
need to provide data access to the analytics team without adversely affecting
performance.
What should you do?
A. Create a read replica of the database, provide the database IP address,
username, and password to the analytics team, and grant read access to
required tables to the team.
Answer(s): B
Explanation:
"Read replicas do not have the cloudsql.iam_authentication flag enabled automatically
when it is enabled on the primary instance."
https://cloud.google.com/sql/docs/postgres/replication/create-
replica#configure_iam_replicas
QUESTION: 72
Your organization stores marketing data such as customer preferences and purchase
history on Bigtable. The consumers of this database are predominantly data analysts
and operations users. You receive a service ticket from the database operations
department citing poor database performance between 9 AM-10 AM every day. The
application team has confirmed no latency from their logs. A new cohort of pilot users
that is testing a dataset loaded from a third-party data provider is experiencing poor
database performance. Other users are not affected. You need to troubleshoot the
issue.
What should you do?
A. Isolate the data analysts and operations user groups to use different Bigtable
instances.
Answer(s): C
Explanation:
https://cloud.google.com/bigtable/docs/performance#troubleshooting