Professional Data Engineer Demo
Professional-Data-Engineer Exam
Google Cloud Certified
Version: 16.0
Question: 1
Your company built a TensorFlow neural-network model with a large number of neurons and layers.
The model fits the training data well. However, when tested against new data, it performs
poorly. What method can you employ to address this?
A. Threading
B. Serialization
C. Dropout Methods
D. Dimensionality Reduction
Answer: C
Explanation:
Reference: https://medium.com/mlreview/a-simple-deep-learning-model-for-stock-price-prediction-using-tensorflow-30505541d877
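Dropout combats overfitting by randomly zeroing a fraction of activations during training, so the network cannot rely on any single neuron. A minimal Keras sketch; the layer sizes, input shape, and 0.5 rate are illustrative assumptions, not values from the question:

import tensorflow as tf

# Deep network regularized with dropout; sizes and rates are assumptions.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(512, activation="relu", input_shape=(64,)),
    tf.keras.layers.Dropout(0.5),  # randomly zero 50% of activations during training
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

Dropout is active only during training; at inference time Keras disables it automatically, so the full network is used for predictions.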
Question: 2
You are building a model to make clothing recommendations. You know a user’s fashion preference
is likely to change over time, so you build a data pipeline to stream new data back to the model as it
becomes available. How should you use this data to train the model?
B. Continuously retrain the model on a combination of existing data and the new data.
C. Train on the existing data while using the new data as your test set.
D. Train on the new data while using the existing data as your test set.
Answer: C
Explanation:
https://cloud.google.com/automl-tables/docs/prepare
Question: 3
You designed a database for patient records as a pilot project to cover a few hundred patients in
three clinics. Your design used a single database table to represent all patients and their visits, and
you used self-joins to generate reports. The server resource utilization was at 50%. Since then, the
scope of the project has expanded. The database must now store 100 times more patient records.
You can no longer run the reports, because they either take too long or they encounter errors with
insufficient compute resources. How should you adjust the database design?
A. Add capacity (memory and disk space) to the database server by the order of 200.
B. Shard the tables into smaller ones based on date ranges, and only generate reports with
prespecified date ranges.
C. Normalize the master patient-record table into the patient table and the visits table, and create
other necessary tables to avoid self-join.
D. Partition the table into smaller tables, with one for each clinic. Run queries against the smaller
table pairs, and use unions for consolidated reports.
Answer: C
Explanation:
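To make option C concrete, here is a minimal sketch of the normalization using an in-memory SQLite database; all table and column names are hypothetical. Splitting the single wide table into a patients table and a visits table lets reports use an ordinary join instead of an expensive self-join:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- One row per patient instead of repeating patient data per visit.
    CREATE TABLE patients (
        patient_id INTEGER PRIMARY KEY,
        name       TEXT,
        clinic_id  INTEGER
    );
    -- One row per visit, linked back to the patient.
    CREATE TABLE visits (
        visit_id   INTEGER PRIMARY KEY,
        patient_id INTEGER REFERENCES patients(patient_id),
        visit_date TEXT
    );
""")

# Reports now join two narrow tables rather than self-joining one wide one.
report = conn.execute("""
    SELECT p.name, COUNT(v.visit_id) AS visit_count
    FROM patients p
    JOIN visits v ON v.patient_id = p.patient_id
    GROUP BY p.name
""").fetchall()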
Question: 4
You create an important report for your large team in Google Data Studio 360. The report uses
Google BigQuery as its data source. You notice that visualizations are not showing data that is less
than 1 hour old. What should you do?
D. Clear your browser history for the past hour then reload the tab showing the visualizations.
Answer: A
Explanation:
Reference: https://support.google.com/datastudio/answer/7020039?hl=en
Question: 5
An external customer provides you with a daily dump of data from their database. The data flows
into Google Cloud Storage (GCS) as comma-separated values (CSV) files. You want to analyze this data
in Google BigQuery, but the data could have rows that are formatted incorrectly or corrupted. How
should you build this pipeline?
A. Use federated data sources, and check data in the SQL query.
C. Import the data into BigQuery using the gcloud CLI and set max_bad_records to 0.
D. Run a Google Cloud Dataflow batch pipeline to import the data into BigQuery, and push errors to
another dead-letter table for analysis.
Answer: D
Explanation:
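A hedged sketch of option D with the Apache Beam Python SDK: rows that fail to parse are routed to a tagged dead-letter output and written to a separate table for later analysis. The bucket, table names, and three-column schema are assumptions for illustration:

import csv
import apache_beam as beam

class ParseCsvFn(beam.DoFn):
    def process(self, line):
        try:
            fields = next(csv.reader([line]))
            if len(fields) != 3:  # assumed column count
                raise ValueError("unexpected column count")
            yield {"id": fields[0], "name": fields[1], "amount": fields[2]}
        except Exception:
            # Corrupt rows go to a side output instead of failing the job.
            yield beam.pvalue.TaggedOutput("dead_letter", {"raw_line": line})

with beam.Pipeline() as pipeline:
    results = (
        pipeline
        | "Read" >> beam.io.ReadFromText("gs://example-bucket/daily-dump/*.csv")
        | "Parse" >> beam.ParDo(ParseCsvFn()).with_outputs("dead_letter", main="parsed")
    )
    results.parsed | "WriteGood" >> beam.io.WriteToBigQuery("project:dataset.records")
    results.dead_letter | "WriteBad" >> beam.io.WriteToBigQuery("project:dataset.dead_letter")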
Question: 6
Your weather app queries a database every 15 minutes to get the current temperature. The frontend
is powered by Google App Engine and serves millions of users. How should you design the frontend
to respond to a database failure?
C. Retry the query every second until it comes back online to minimize staleness of data.
D. Reduce the query frequency to once every hour until the database comes back online.
Answer: B
Explanation:
https://cloud.google.com/sql/docs/mysql/manage-connections#backoff
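The linked Cloud SQL guidance describes exponential backoff for reconnecting. A minimal sketch, assuming a run_query callable that raises ConnectionError while the database is down:

import random
import time

def query_with_backoff(run_query, max_attempts=8):
    """Retry with truncated exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return run_query()
        except ConnectionError:
            # Wait 1s, 2s, 4s, ... (capped), plus random jitter so that
            # millions of clients do not retry in lockstep.
            time.sleep(min(2 ** attempt, 64) + random.random())
    raise RuntimeError("database still unavailable after retries")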
Question: 7
You are creating a model to predict housing prices. Due to budget constraints, you must run it on a
single resource-constrained virtual machine. Which learning algorithm should you use?
A. Linear regression
B. Logistic classification
Answer: A
Explanation:
Question: 8
You are building a new real-time data warehouse for your company and will use Google BigQuery
streaming inserts. There is no guarantee that data will only be sent in once, but you do have a unique
ID for each row of data and an event timestamp. You want to ensure that duplicates are not included
while interactively querying data.
B. Use GROUP BY on the unique ID column and timestamp column and SUM on the values.
C. Use the LAG window function with PARTITION by unique ID along with WHERE LAG IS NOT NULL.
D. Use the ROW_NUMBER window function with PARTITION by unique ID along with WHERE row
equals 1.
Answer: D
Explanation:
https://cloud.google.com/bigquery/docs/reference/standard-sql/analytic-function-concepts
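A sketch of option D against a hypothetical table: ROW_NUMBER() partitioned by the unique ID keeps exactly one row per ID (here the most recent by event timestamp), filtering duplicates at query time. The project, table, and column names are illustrative:

from google.cloud import bigquery

client = bigquery.Client()
dedup_sql = """
    SELECT * EXCEPT(row_num)
    FROM (
      SELECT
        *,
        ROW_NUMBER() OVER (
          PARTITION BY unique_id
          ORDER BY event_timestamp DESC
        ) AS row_num
      FROM `my-project.warehouse.events`
    )
    WHERE row_num = 1
"""
# Each unique_id now appears exactly once in the result set.
for row in client.query(dedup_sql).result():
    print(row)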
Question: 9
Your company is using WILDCARD tables to query data across multiple tables with similar names.
The SQL statement is currently failing with the following error:
SELECT age
FROM
  bigquery-public-data.noaa_gsod.gsod
WHERE
  age != 99
  AND _TABLE_SUFFIX = '1929'
ORDER BY
  age DESC
Which table name will make the SQL statement work correctly?
A. 'bigquery-public-data.noaa_gsod.gsod'
B. bigquery-public-data.noaa_gsod.gsod*
C. 'bigquery-public-data.noaa_gsod.gsod'*
D. `bigquery-public-data.noaa_gsod.gsod*`
Answer: D
Explanation:
Question: 10
Your company is in a highly regulated industry. One of your requirements is to ensure individual users
have access only to the minimum amount of information required to do their jobs. You want to
enforce this requirement with Google BigQuery. Which three approaches can you take? (Choose
three.)
Answer: B,D,F
Explanation:
Question: 11
You are designing a basket abandonment system for an ecommerce company. The system will send a
message to a user based on these rules:
You use Google Cloud Dataflow to process the data and decide if a message should be sent. How
should you design the pipeline?
D. Use a global window with a time-based trigger with a delay of 60 minutes.
Answer: C
Explanation:
Question: 12
Your company handles data processing for a number of different clients. Each client prefers to use
their own suite of analytics tools, with some allowing direct query access via Google BigQuery. You
need to secure the data so that clients cannot see each other’s data. You want to ensure
appropriate access to the data. Which three steps should you take? (Choose three.)
F. Use the appropriate identity and access management (IAM) roles for each client’s users.
Answer: B,D,F
Explanation:
Question: 13
You want to process payment transactions in a point-of-sale application that will run on Google Cloud
Platform. Your user base could grow exponentially, but you do not want to manage infrastructure
scaling.
A. Cloud SQL
B. BigQuery
C. Cloud Bigtable
D. Cloud Datastore
Answer: A
Explanation:
Question: 14
You want to use a database of information about tissue samples to classify future tissue samples as
either normal or mutated. You are evaluating an unsupervised anomaly detection method for
classifying the tissue samples. Which two characteristics support this method? (Choose two.)
B. There are roughly equal occurrences of both normal and mutated samples in the database.
C. You expect future mutations to have different features from the mutated samples in the database.
D. You expect future mutations to have similar features to the mutated samples in the database.
E. You already have labels for which samples are mutated and which are normal in the database.
Answer: A,D
Explanation:
Unsupervised anomaly detection techniques detect anomalies in an unlabeled test data set under
the assumption that the majority of the instances in the data set are normal by looking for instances
that seem to fit least to the remainder of the data set.
https://en.wikipedia.org/wiki/Anomaly_detection
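To make the idea concrete, here is a minimal sketch of unsupervised anomaly detection with scikit-learn's IsolationForest. The feature matrix is synthetic and the contamination rate is an assumption; the point is that the model is fit without labels and flags the instances that fit least well with the rest of the data, matching the explanation's assumption that the majority of instances are normal:

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(980, 4))   # majority: normal tissue features
mutated = rng.normal(5.0, 1.0, size=(20, 4))   # rare outliers
samples = np.vstack([normal, mutated])

# Fit without any labels; the model isolates points that fit least well.
detector = IsolationForest(contamination=0.02, random_state=0).fit(samples)
predictions = detector.predict(samples)  # -1 = anomaly, 1 = normal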
Question: 15
You need to store and analyze social media postings in Google BigQuery at a rate of 10,000 messages
per minute in near real-time. You initially design the application to use streaming inserts for individual
postings. Your application also performs data aggregations right after the streaming inserts. You
discover that the queries after streaming inserts do not exhibit strong consistency, and reports from
the queries might miss in-flight data.
B. Convert the streaming insert code to batch load for individual messages.
C. Load the original message to Google Cloud SQL, and export the table every hour to BigQuery via
streaming inserts.
D. Estimate the average latency for data availability after streaming inserts, and always run queries
after waiting twice as long.
Answer: D
Explanation:
Streamed rows first land in the streaming buffer and are only later written to managed storage. Queries
that run while rows are still in the buffer can miss that in-flight data, which causes the issues described
above. Once BigQuery has written the data to storage, the problem disappears, so you estimate the
typical latency and wait twice as long before querying.
Question: 16
Your startup has never implemented a formal security policy. Currently, everyone in the company has
access to the datasets stored in Google BigQuery. Teams have freedom to use the service as they see
fit, and they have not documented their use cases. You have been asked to secure the data
warehouse. You need to discover what everyone is doing. What should you do first?
B. Get the identity and access management (IAM) policy of each table
D. Use the Google Cloud Billing API to see what account the warehouse is being billed to.
Answer: A
Explanation:
Question: 17
Your company is migrating their 30-node Apache Hadoop cluster to the cloud. They want to re-use
Hadoop jobs they have already created and minimize the management of the cluster as much as
possible. They also want to be able to persist data beyond the life of the cluster. What should you do?
B. Create a Google Cloud Dataproc cluster that uses persistent disks for HDFS.
C. Create a Hadoop cluster on Google Compute Engine that uses persistent disks.
D. Create a Cloud Dataproc cluster that uses the Google Cloud Storage connector.
E. Create a Hadoop cluster on Google Compute Engine that uses Local SSD disks.
Answer: D
Explanation:
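The Cloud Storage connector is preinstalled on Dataproc, so existing Hadoop and Spark jobs can keep their code and simply swap hdfs:// paths for gs:// paths; the data then persists in Cloud Storage after the cluster is deleted. A hedged PySpark sketch (the bucket name is hypothetical):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("gcs-connector-demo").getOrCreate()

# Read directly from Cloud Storage via the preinstalled connector;
# no HDFS copy is needed and the data outlives the cluster.
df = spark.read.csv("gs://example-bucket/input/", header=True)
df.write.parquet("gs://example-bucket/output/")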
Question: 18
Business owners at your company have given you a database of bank transactions. Each row contains
the user ID, transaction type, transaction location, and transaction amount. They ask you to
investigate what type of machine learning can be applied to the data. Which three machine learning
applications can you use? (Choose three.)
Answer: B,C,D
Explanation:
Question: 19
Your company’s on-premises Apache Hadoop servers are approaching end-of-life, and IT has decided
to migrate the cluster to Google Cloud Dataproc. A like-for-like migration of the cluster would require
50 TB of Google Persistent Disk per node. The CIO is concerned about the cost of using that much
block storage. You want to minimize the storage cost of the migration. What should you do?
B. Use preemptible virtual machines (VMs) for the Cloud Dataproc cluster.
C. Tune the Cloud Dataproc cluster so that there is just enough disk for all data.
D. Migrate some of the cold data into Google Cloud Storage, and keep only the hot data in Persistent
Disk.
Answer: B
Explanation:
Reference:
Question: 20
You work for a car manufacturer and have set up a data pipeline using Google Cloud Pub/Sub to
capture anomalous sensor events. You are using a push subscription in Cloud Pub/Sub that calls a
custom HTTPS endpoint that you have created to take action on these anomalous events as they
occur. Your custom HTTPS endpoint keeps getting an inordinate amount of duplicate messages. What
is the most likely cause of these duplicate messages?
C. The Cloud Pub/Sub topic has too many messages published to it.
D. Your custom endpoint is not acknowledging messages within the acknowledgement deadline.
Answer: B
Explanation:
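For a push subscription, Pub/Sub treats any 2xx response from the endpoint as the acknowledgement; if the endpoint responds slowly or with an error, the message is redelivered and shows up as a duplicate. A minimal Flask sketch (the route and handler are hypothetical):

from flask import Flask, request

app = Flask(__name__)

def handle_event(message):
    # Placeholder: do only fast work here and defer heavy processing,
    # so the response goes back before the acknowledgement deadline.
    pass

@app.route("/pubsub/push", methods=["POST"])
def pubsub_push():
    envelope = request.get_json()
    handle_event(envelope["message"])
    # Returning 2xx acknowledges the message; anything else (or a timeout)
    # causes Pub/Sub to redeliver it, which looks like duplicates.
    return ("", 204)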
Thank You for trying Professional-Data-Engineer PDF Demo
https://www.certsland.com/professional-data-engineer-dumps/