PDE Sample Questions

Q1. Your company built a TensorFlow neural-network model with a large number of neurons and
layers. The model fits the training data well. However, when tested against new data, it performs
poorly. What method can you employ to address this?

Dropout Methods
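
A minimal Keras sketch of how dropout layers might be inserted between dense layers; the layer widths, the 0.5 rate, and the binary output are illustrative assumptions, not part of the question.

```python
import tensorflow as tf

# Hypothetical over-parameterized network with dropout added between layers.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(512, activation="relu", input_shape=(100,)),
    tf.keras.layers.Dropout(0.5),   # randomly drop 50% of activations during training
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```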

Q2. You are building a model to make clothing recommendations. You know a user's fashion
preference is likely to change over time, so you build a data pipeline to stream new data back to the
model as it becomes available. How should you use this data to train the model?

Continuously retrain the model on a combination of existing data and the new data.

Q3. You designed a database for patient records as a pilot project to cover a few hundred patients in
three clinics. Your design used a single database table to represent all patients and their visits, and you
used self-joins to generate reports. The server resource utilization was at 50%. Since then, the scope
of the project has expanded. The database must now store 100 times more patient records. You can no
longer run the reports because they either take too long or they fail due to insufficient
compute resources. How should you adjust the database design?

Normalize the master patient-record table into the patient table and the visits table, and create other
necessary tables to avoid self-joins.
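
An illustrative sketch of the normalized layout using an in-memory SQLite database; the column names and the report query are assumptions made for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patients (
    patient_id INTEGER PRIMARY KEY,
    name       TEXT,
    clinic     TEXT
);
CREATE TABLE visits (
    visit_id   INTEGER PRIMARY KEY,
    patient_id INTEGER REFERENCES patients(patient_id),
    visit_date TEXT,
    notes      TEXT
);
""")

# Reports now join two narrow tables instead of self-joining one wide table.
report = conn.execute("""
    SELECT p.name, COUNT(v.visit_id) AS visit_count
    FROM patients p
    JOIN visits v ON v.patient_id = p.patient_id
    GROUP BY p.name
""").fetchall()
```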

Q4. You create an important report for your large team in Google Data Studio 360. The report uses
Google BigQuery as its data source. You notice that visualizations are not showing data that is less
than 1 hour old. What should you do?

Disable caching by editing the report settings.

Q5. An external customer provides you with a daily dump of data from their database. The data flows
into Google Cloud Storage (GCS) as comma-separated values
(CSV) files. You want to analyze this data in Google BigQuery, but the data could have rows that are
formatted incorrectly or corrupted. How should you build this pipeline?

Run a Google Cloud Dataflow batch pipeline to import the data into BigQuery, and push errors to
another dead-letter table for analysis.
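
A hedged Apache Beam sketch of that dead-letter pattern; the bucket path, table names, schemas, and the two-column CSV layout are assumptions for illustration.

```python
import csv
import apache_beam as beam


class ParseCsv(beam.DoFn):
    def process(self, line):
        try:
            user_id, amount = next(csv.reader([line]))
            yield {"user_id": user_id, "amount": float(amount)}
        except Exception as err:
            # Malformed or corrupted rows go to a side output instead of failing the job.
            yield beam.pvalue.TaggedOutput("errors", {"raw_line": line, "error": str(err)})


with beam.Pipeline() as p:
    parsed = (
        p
        | beam.io.ReadFromText("gs://my-bucket/daily-dump/*.csv")
        | beam.ParDo(ParseCsv()).with_outputs("errors", main="rows")
    )
    parsed.rows | "GoodRows" >> beam.io.WriteToBigQuery(
        "my-project:imports.transactions",
        schema="user_id:STRING,amount:FLOAT")
    parsed.errors | "DeadLetter" >> beam.io.WriteToBigQuery(
        "my-project:imports.transactions_errors",
        schema="raw_line:STRING,error:STRING")
```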

Q6. Your weather app queries a database every 15 minutes to get the current temperature. The
frontend is powered by Google App Engine and serves millions of users. How should you design the
frontend to respond to a database failure?

Retry the query with exponential backoff, up to a cap of 15 minutes.
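
A small sketch of the retry loop; query_database() is a hypothetical stand-in for the real database call, and the jitter is an assumption.

```python
import random
import time

MAX_BACKOFF_SECONDS = 15 * 60  # cap the wait at 15 minutes


def query_with_backoff(query_database):
    delay = 1
    while True:
        try:
            return query_database()
        except Exception:
            # Sleep, then retry; double the delay each attempt up to the cap.
            time.sleep(delay + random.uniform(0, 1))
            delay = min(delay * 2, MAX_BACKOFF_SECONDS)
```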

Q7. You are creating a model to predict housing prices. Due to budget constraints, you must run it on
a single resource-constrained virtual machine. Which learning algorithm should you use?

Linear regression
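
A quick scikit-learn sketch showing why linear regression fits the constraint, since it trains cheaply on a single small VM; the synthetic square-footage data is an assumption.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
square_feet = rng.uniform(500, 3500, size=(1000, 1))
price = 50_000 + 120 * square_feet[:, 0] + rng.normal(0, 10_000, size=1000)

# Fitting a linear model is cheap in both memory and CPU.
model = LinearRegression().fit(square_feet, price)
predicted = model.predict([[2000]])   # estimated price for a 2,000 sq ft home
```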

Q8. You are building a new real-time data warehouse for your company and will use Google BigQuery
streaming inserts. There is no guarantee that data will be sent only once, but you do have a unique
ID for each row of data and an event timestamp. You want to ensure that duplicates are not included
while interactively querying data. Which query type should you use?

Use the ROW_NUMBER window function with PARTITION BY the unique ID, and filter with WHERE the
row number equals 1.
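
A sketch of that query run through the BigQuery Python client; the project, dataset, table, and the column names unique_id and event_timestamp are assumptions.

```python
from google.cloud import bigquery

client = bigquery.Client()
query = """
SELECT * EXCEPT(row_num)
FROM (
  SELECT
    *,
    ROW_NUMBER() OVER (PARTITION BY unique_id
                       ORDER BY event_timestamp DESC) AS row_num
  FROM `my_project.my_dataset.events`
)
WHERE row_num = 1   -- keep a single copy of each duplicated row
"""
deduplicated_rows = client.query(query).result()
```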

Q9. Your company is using WILDCARD tables to query data across multiple tables with similar
names. The SQL statement is currently failing with the following error:

Which table name will make the SQL statement work correctly?

`bigquery-public-data.noaa_gsod.gsod*`
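
For context, a sketch of a wildcard query against that public dataset with the table prefix quoted in backticks; the selected column and suffix range are assumptions.

```python
from google.cloud import bigquery

client = bigquery.Client()
query = """
SELECT MAX(temp) AS max_mean_temp
FROM `bigquery-public-data.noaa_gsod.gsod*`
WHERE _TABLE_SUFFIX BETWEEN '1950' AND '1960'
"""
for row in client.query(query).result():
    print(row.max_mean_temp)
```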

Q10. Your company is in a highly regulated industry. One of your requirements is to ensure individual
users have access only to the minimum amount of information required to do their jobs. You want to
enforce this requirement with Google BigQuery. Which three approaches can you take? (Choose
three.)

A. Restrict access to tables by role.
B. Restrict BigQuery API access to approved users.
C. Use Google Stackdriver Audit Logging to determine policy violations.

Q11. You are designing a basket abandonment system for an ecommerce company. The system will
send a message to a user based on these rules:
✑ No interaction by the user on the site for 1 hour
✑ Has added more than $30 worth of products to the basket
✑ Has not completed a transaction
You use Google Cloud Dataflow to process the data and decide if a message should be sent. How
should you design the pipeline?

Use a session window with a gap time duration of 60 minutes.
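
A minimal Apache Beam sketch of the session window; the Pub/Sub topic, the JSON message shape, and the user_id field are assumptions.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms import window


def key_by_user(message_bytes):
    event = json.loads(message_bytes)
    return event["user_id"], event


with beam.Pipeline(options=PipelineOptions(streaming=True)) as p:
    per_user_sessions = (
        p
        | beam.io.ReadFromPubSub(topic="projects/my-project/topics/site-events")
        | "KeyByUser" >> beam.Map(key_by_user)
        | "SessionWindow" >> beam.WindowInto(window.Sessions(gap_size=60 * 60))
        | "GroupEvents" >> beam.GroupByKey()
        # Downstream steps would apply the >$30 basket and no-transaction rules
        # before emitting a message for the abandoned basket.
    )
```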

Q12. Your company handles data processing for a number of different clients. Each client prefers to
use their own suite of analytics tools, with some allowing direct query access via Google BigQuery.
You need to secure the data so that clients cannot see each other's data. You want to ensure appropriate
access to the data. Which three steps should you take? (Choose three.)

A. Load data into a different dataset for each client.
B. Restrict a client's dataset to approved users.
C. Use the appropriate identity and access management (IAM) roles for each client's users.
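
A hedged sketch of those steps with the BigQuery Python client; the dataset ID, the analyst email, and the READER role are placeholders.

```python
from google.cloud import bigquery

bq = bigquery.Client()

# One dataset per client keeps each client's tables isolated.
dataset = bq.create_dataset(bigquery.Dataset("my-project.client_acme"), exists_ok=True)

# Grant only this client's approved users read access to their own dataset.
entries = list(dataset.access_entries)
entries.append(bigquery.AccessEntry(role="READER",
                                    entity_type="userByEmail",
                                    entity_id="analyst@acme.example"))
dataset.access_entries = entries
bq.update_dataset(dataset, ["access_entries"])
```
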
Q13. You want to process payment transactions in a point-of-sale application that will run on Google
Cloud Platform. Your user base could grow exponentially, but you do not want to manage
infrastructure scaling. Which Google database service should you use?

Cloud SQL

Q14. You want to use a database of information about tissue samples to classify future tissue samples
as either normal or mutated. You are evaluating an unsupervised anomaly detection method for
classifying the tissue samples. Which two characteristics support this method? (Choose two.)

A. You expect future mutations to have different features from the mutated samples in the
database.
B. There are roughly equal occurrences of both normal and mutated samples in the database.

Q15. You need to store and analyze social media postings in Google BigQuery at a rate of 10,000
messages per minute in near real-time. Initially, you design the application to use streaming inserts for
individual postings. Your application also performs data aggregations right after the streaming inserts.
You discover that the queries after streaming inserts do not exhibit strong consistency, and reports
from the queries might miss in-flight data. How can you adjust your application design?

Estimate the average latency for data availability after streaming inserts, and always run queries after
waiting twice as long.

Q16. Your startup has never implemented a formal security policy. Currently, everyone in the
company has access to the datasets stored in Google BigQuery. Teams have freedom to use the service
as they see fit, and they have not documented their use cases. You have been asked to secure the data
warehouse. You need to discover what everyone is doing. What should you do first?

Use Google Stackdriver Audit Logs to review data access.

Q17. Your company is migrating their 30-node Apache Hadoop cluster to the cloud. They want to re-use
the Hadoop jobs they have already created and minimize management of the cluster. They also want to
be able to persist data beyond the life of the cluster. What should you do?

Create a Cloud Dataproc cluster that uses the Google Cloud Storage connector.
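
A short PySpark sketch of how an existing job on Dataproc can keep its data in Cloud Storage through the connector, so it outlives the cluster; the bucket names and paths are assumptions.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("migrated-hadoop-job").getOrCreate()

# gs:// paths are resolved by the Cloud Storage connector, so the data
# persists even after the Dataproc cluster is deleted.
raw_logs = spark.read.text("gs://my-datalake/raw/logs/*")
raw_logs.write.mode("overwrite").parquet("gs://my-datalake/processed/logs/")
```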

Q18. Business owners at your company have given you a database of bank transactions. Each row
contains the user ID, transaction type, transaction location, and transaction amount. They ask you to
investigate what type of machine learning can be applied to the data. Which three machine learning
applications can you use? (Choose three.)

A. Unsupervised learning to determine which transactions are most likely to be fraudulent.
B. Clustering to divide the transactions into N categories based on feature similarity.
C. Reinforcement learning to predict the location of a transaction.

Q19. Your company's on-premises Apache Hadoop servers are approaching end-of-life, and IT has
decided to migrate the cluster to Google Cloud Dataproc. A like-for-like migration of the cluster
would require 50 TB of Google Persistent Disk per node. The CIO is concerned about the cost of
using that much block storage. You want to minimize the storage cost of the migration. What should
you do?
Put the data into Google Cloud Storage.

Q20. You work for a car manufacturer and have set up a data pipeline using Google Cloud Pub/Sub to
capture anomalous sensor events. You are using a push subscription in Cloud Pub/Sub that calls a
custom HTTPS endpoint that you have created to take action on these anomalous events as they occur.
Your custom HTTPS endpoint keeps getting an inordinate amount of duplicate messages. What is the
most likely cause of these duplicate messages?

Your custom endpoint has an out-of-date SSL certificate.

Q21. Your company uses a proprietary system to send inventory data every 6 hours to a data ingestion
service in the cloud. Transmitted data includes a payload of several fields and the timestamp of the
transmission. If there are any concerns about a transmission, the system re-transmits the data. How
should you deduplicate the data most efficiently?

Maintain a database table to store the hash value and other metadata for each data entry.
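
A sketch of the hash-and-lookup idea with an in-memory dict standing in for the database table; the payload fields and the choice of SHA-256 are assumptions.

```python
import hashlib
import json

seen = {}  # hash -> metadata; in practice this is the database table


def ingest(payload: dict, transmitted_at: str) -> bool:
    """Store a new record and return True, or return False for a re-transmission."""
    # Hash only the payload, not the transmission timestamp, so re-sent
    # copies of the same data produce the same digest.
    digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
    if digest in seen:
        return False  # identical payload already ingested
    seen[digest] = {"first_seen": transmitted_at}
    return True
```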

Q22. Your company has hired a new data scientist who wants to perform complicated analyses across
very large datasets stored in Google Cloud Storage and in a
Cassandra cluster on Google Compute Engine. The scientist primarily wants to create labelled data
sets for machine learning projects, along with some visualization tasks. She reports that her laptop is
not powerful enough to perform her tasks and it is slowing her down. You want to help her perform
her tasks.
What should you do?

Grant the user access to Google Cloud Shell.

Q23. You are deploying 10,000 new Internet of Things devices to collect temperature data in your
warehouses globally. You need to process, store and analyze these very large datasets in real time.
What should you do?

Send the data to Google Cloud Pub/Sub, stream Cloud Pub/Sub to Google Cloud Dataflow, and store
the data in Google BigQuery.
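
A hedged Apache Beam sketch of that path; the topic, table, schema, and JSON message format are assumptions.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

with beam.Pipeline(options=PipelineOptions(streaming=True)) as p:
    (
        p
        | beam.io.ReadFromPubSub(topic="projects/my-project/topics/device-temperatures")
        | "Parse" >> beam.Map(json.loads)
        | "ToBigQuery" >> beam.io.WriteToBigQuery(
            "my-project:iot.temperature_readings",
            schema="device_id:STRING,temp_c:FLOAT,reading_ts:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```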

Q24. You have spent a few days loading data from comma-separated values (CSV) files into the
Google BigQuery table CLICK_STREAM. The column DT stores the epoch time of click events. For
convenience, you chose a simple schema where every field is treated as the STRING type. Now, you
want to compute web session durations of users who visit your site, and you want to change the data
type of the DT column to TIMESTAMP. You want to minimize the migration effort without making future queries
computationally expensive. What should you do?

Add two columns to the table CLICK_STREAM: TS of the TIMESTAMP type and IS_NEW of the
BOOLEAN type. Reload all data in append mode. For each appended row, set the value of IS_NEW
to true. For future queries, reference the column TS instead of the column DT, with the WHERE
clause ensuring that the value of IS_NEW must be true.
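
A sketch of a later session-duration query against the new columns via the BigQuery Python client; the user_id column and the per-user aggregation are assumptions beyond what the answer specifies.

```python
from google.cloud import bigquery

client = bigquery.Client()
query = """
SELECT
  user_id,
  TIMESTAMP_DIFF(MAX(TS), MIN(TS), SECOND) AS session_seconds
FROM `my_project.my_dataset.CLICK_STREAM`
WHERE IS_NEW = TRUE      -- only the re-loaded rows carry the TIMESTAMP column TS
GROUP BY user_id
"""
durations = client.query(query).result()
```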

Q25. You want to use Google Stackdriver Logging to monitor Google BigQuery usage. You need an
instant notification to be sent to your monitoring tool when new data is appended to a certain table
using an insert job, but you do not want to receive notifications for other tables. What should you do?

In the Stackdriver Logging admin interface, enable a log sink export to BigQuery.
