Databricks
1. A data organization leader is upset about the data analysis team’s reports
being different from the data engineering team’s reports. The leader believes
the siloed nature of their organization’s data engineering and data analysis
architectures is to blame.
Which of the following describes how a data lakehouse could alleviate this
issue?
B. Both teams would use the same source of truth for their work (Most Voted)
2. Which of the following describes a scenario in which a data team will want
to utilize cluster pools?
A. Worker node
D. Databricks Filesystem
E. Driver node
5. Which of the following describes the storage organization of a Delta table?
A. Delta tables are stored in a single file that contains data, history,
metadata, and other attributes.
B. Delta tables store their data in a single file and all metadata in a
collection of files in a separate location.
C. Delta tables are stored in a collection of files that contain data,
history, metadata, and other attributes.
D. Delta tables are stored in a collection of files that contain only the
data stored within the table.
E. Delta tables are stored in a single file that contains only the data
stored within the table.
6. Which of the following code blocks will remove the rows where the value
in column age is greater than 25 from the existing Delta table my_table and
save the updated table?
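The answer options are not shown above. As a hedged illustration only, one way such a deletion is typically expressed against a Delta table (assuming a Databricks notebook where spark is the ambient SparkSession):

# A minimal sketch, not an original answer option: Delta Lake supports
# SQL DELETE, which removes matching rows and commits a new table version.
spark.sql("DELETE FROM my_table WHERE age > 25")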
7. A data engineer has realized that they made a mistake when making a
daily update to a table. They need to use Delta time travel to restore the
table to a version that is 3 days old. However, when the data engineer
attempts to time travel to the older version, they are unable to restore the
data because the data files have been deleted.
Which of the following explains why the data files are no longer present?
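For background, a hedged sketch: time travel depends on the older versions’ data files still being present, and VACUUM permanently removes files that fall outside the retention window, which is the usual reason a restore like this fails. Table name is assumed for illustration:

# VACUUM deletes data files no longer referenced by versions inside the
# retention window (default 7 days = 168 hours); after that, time travel
# to those versions can no longer read the data.
spark.sql("VACUUM my_table RETAIN 168 HOURS")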
A. Commit
B. Pull
C. Push
D. Clone
10. A data engineer has left the organization. The data team needs to
transfer ownership of the data engineer’s Delta tables to a new data
engineer. The new data engineer is the lead engineer on the data team.
Assuming the original data engineer no longer has access, which of the
following individuals must be the one to transfer ownership of the Delta
tables in Data Explorer?
12. A data analyst has created a Delta table sales that is used by the entire
data analysis team. They want help from the data engineering team to
implement a series of tests to ensure the data is clean. However, the data
engineering team uses Python for its tests rather than SQL.
Which of the following commands could the data engineering team use to
access sales in PySpark?
C. spark.sql("sales")D. spark.delta.table("sales")
13. Which of the following commands will return the location of database
customer360?
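As a hedged illustration, DESCRIBE DATABASE (also spelled DESCRIBE SCHEMA) reports a database’s metadata, including its location:

# The result includes a "Location" row for the database.
spark.sql("DESCRIBE DATABASE customer360").show(truncate=False)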
14. A data engineer wants to create a new table containing the names of
customers that live in France.
They have written the following command:
A senior data engineer mentions that it is organization policy to include a
table property indicating that the new table includes personally identifiable
information (PII).
Which of the following lines of code fills in the above blank to successfully
complete the task?
B. "COMMENT PII"
C. TBLPROPERTIES PII
E. PII
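The original command with its blank is not reproduced above. As a hedged illustration, a hypothetical CTAS statement carrying a PII table property (table, column, and property names are assumptions, not the exam’s):

# TBLPROPERTIES attaches key/value metadata to the table definition.
spark.sql("""
    CREATE TABLE customers_fr
    TBLPROPERTIES ('contains_pii' = 'true')
    AS SELECT name FROM customers WHERE country = 'France'
""")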
15. Which of the following benefits is provided by the array functions from
Spark SQL?
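A hedged example of the kind of benefit being asked about: Spark SQL’s array functions make nested collections directly queryable, for instance flattening an array column with explode (sample data invented for the demo):

from pyspark.sql.functions import explode

# explode emits one output row per element of the "tags" array.
df = spark.createDataFrame([(1, ["a", "b"])], ["id", "tags"])
df.select("id", explode("tags").alias("tag")).show()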
16. Which of the following commands can be used to write data into a Delta
table while avoiding the writing of duplicate records?
A. DROP
B. IGNORE
C. MERGE
D. APPEND
E. INSERT
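A hedged sketch of the MERGE pattern that makes Delta writes idempotent (table and key names are assumptions):

# Rows whose id already exists in the target are skipped; only new ids
# are inserted, so re-running the write creates no duplicates.
spark.sql("""
    MERGE INTO target t
    USING updates u
    ON t.id = u.id
    WHEN NOT MATCHED THEN INSERT *
""")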
17. A data engineer needs to apply custom logic to string column city in
table stores for a specific use case. In order to apply this custom logic at
scale, the data engineer wants to create a SQL user-defined function (UDF).
Which of the following code blocks creates this SQL UDF?
A. – E. (the five code-block options are not reproduced here; option A is Most Voted)
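Since the code options are not shown, here is a hedged sketch of the general shape of a SQL UDF in Databricks (the function name and logic are assumptions; the table and column come from the question):

# A SQL UDF declares typed parameters, a return type, and a RETURN expression.
spark.sql("""
    CREATE FUNCTION clean_city(city STRING)
    RETURNS STRING
    RETURN UPPER(city)
""")
spark.sql("SELECT clean_city(city) FROM stores")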
18. A data analyst has a series of queries in a SQL program. The data analyst
wants this program to run every day. They only want the final query in the
program to run on Sundays. They ask for help from the data engineering
team to complete this task.
Which of the following approaches could be used by the data engineering
team to complete this task?
B. They could wrap the queries using PySpark and use Python’s control
flow system to determine when to run the final query. (Most Voted)
E. They could redesign the data model to separate the data used in the
final query into a new table.
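A hedged sketch of the approach described in option B above, with placeholder queries standing in for the analyst’s SQL:

from datetime import date

# Run the daily queries unconditionally (placeholder statements).
spark.sql("SELECT 1 AS daily_step")
# Python's date.weekday() returns 6 for Sunday.
if date.today().weekday() == 6:
    spark.sql("SELECT 1 AS sunday_only_final_query")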
19. A data engineer runs a statement every day to copy the previous day’s
sales into the table transactions. Each day’s sales are in their own file in the
location "/transactions/raw".
Today, the data engineer runs the following command to complete this task:
After running the command today, the data engineer notices that the
number of records in table transactions has not changed.
Which of the following describes why the statement might not have copied
any new records into the table?
A. The format of the files to be copied were not included with the
FORMAT_OPTIONS keyword.
B. The names of the files to be copied were not included with the FILES
keyword.
C. The previous day’s file has already been copied into the table. (Most Voted)
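The statement itself is not shown above. As a hedged sketch, the idempotent COPY INTO pattern the question relies on: COPY INTO tracks which source files it has already loaded and skips them on re-runs, so re-running against an already-loaded file adds no records (the file format below is an assumption):

# COPY INTO is idempotent per source file: previously loaded files are skipped.
spark.sql("""
    COPY INTO transactions
    FROM '/transactions/raw'
    FILEFORMAT = PARQUET
""")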
20. A data engineer needs to create a table in Databricks using data from
their organization’s existing SQLite database.
They run the following command:
Which of the following lines of code fills in the above blank to successfully
complete the task?
A. org.apache.spark.sql.jdbc
B. autoloader
C. DELTA
D. sqlite
E. org.apache.spark.sql.sqlite
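A hedged sketch of creating a table over an external SQLite database via Spark’s JDBC data source (the connection URL and table names are assumptions):

# USING org.apache.spark.sql.jdbc points the table at an external JDBC source.
spark.sql("""
    CREATE TABLE users_jdbc
    USING org.apache.spark.sql.jdbc
    OPTIONS (
      url 'jdbc:sqlite:/data/example.db',
      dbtable 'users'
    )
""")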
21. A data engineering team has two tables. The first table
march_transactions is a collection of all retail transactions in the month of
March. The second table april_transactions is a collection of all retail
transactions in the month of April. There are no duplicate records between
the tables.
Which of the following commands should be run to create a new table
all_transactions that contains all records from march_transactions and
april_transactions without duplicate records?
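A hedged sketch of one way to express this: UNION (which removes duplicates, harmless here since the tables share none) inside a CTAS:

# UNION combines the two months; CREATE TABLE ... AS persists the result.
spark.sql("""
    CREATE TABLE all_transactions AS
    SELECT * FROM march_transactions
    UNION
    SELECT * FROM april_transactions
""")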
22. A data engineer only wants to execute the final block of a Python program
if the Python variable day_of_week is equal to 1 and the Python variable
review_period is True.
Which of the following control flow statements should the data engineer use
to begin this conditionally executed code block?
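A hedged sketch of the conditional being described (variable values invented for the demo):

day_of_week = 1
review_period = True

# The final block runs only when both conditions hold.
if day_of_week == 1 and review_period:
    print("executing final block")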
23. A data engineer is attempting to drop a Spark SQL table my_table. The
data engineer wants to delete all table metadata and data.
They run the following command:
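The command itself is not shown above. For context, a hedged sketch of the statement in question:

# For a managed table, DROP TABLE deletes both the metadata and the
# underlying data; for an external table, only the metadata is removed.
spark.sql("DROP TABLE my_table")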
24. A data engineer wants to create a data entity from a couple of tables. The
data entity must be used by other data engineers in other sessions. It also
must be saved to a physical location.
Which of the following data entities should the data engineer create?
A. Database
B. Function
C. View
D. Temporary view
E. Table
25. A data engineer is maintaining a data pipeline. Upon data ingestion, the
data engineer notices that the source data is starting to have a lower level of
quality. The data engineer would like to automate the process of monitoring
the quality level.
Which of the following tools can the data engineer use to solve this problem?
A. Unity Catalog
B. Data Explorer
C. Delta Lake
D. Delta Live Tables
E. Auto Loader
26. A Delta Live Table pipeline includes two datasets defined using
STREAMING LIVE TABLE. Three datasets are defined against Delta Lake table
sources using LIVE TABLE.
The pipeline is configured to run in Production mode using the Continuous
Pipeline Mode.
Assuming previously unprocessed data exists and all definitions are valid,
what is the expected outcome after clicking Start to update the pipeline?
A. All datasets will be updated at set intervals until the pipeline is shut
down. The compute resources will persist to allow for additional
testing.
B. All datasets will be updated once and the pipeline will persist
without any processing. The compute resources will persist but go
unused.
C. All datasets will be updated at set intervals until the pipeline is shut
down. The compute resources will be deployed for the update and
terminated when the pipeline is stopped. (Most Voted)
D. All datasets will be updated once and the pipeline will shut down.
The compute resources will be terminated.
E. All datasets will be updated once and the pipeline will shut down.
The compute resources will persist to allow for additional testing.
27. In order for Structured Streaming to reliably track the exact progress of
the processing so that it can handle any kind of failure by restarting and/or
reprocessing, which of the following two approaches is used by Spark to
record the offset range of the data being processed in each trigger?
A. Checkpointing and Write-ahead Logs (Most Voted)
28. Which of the following describes the relationship between Gold tables and
Silver tables?
A. Gold tables are more likely to contain aggregations than Silver tables.
B. Gold tables are more likely to contain valuable data than Silver
tables.
C. Gold tables are more likely to contain a less refined view of data
than Silver tables.
D. Gold tables are more likely to contain more data than Silver tables.
E. Gold tables are more likely to contain truthful data than Silver
tables.
29. Which of the following describes the relationship between Bronze tables
and raw data?
D. Bronze tables contain a less refined view of data than raw data.
E. Bronze tables contain raw data with a schema applied. (Most Voted)
30. Which of the following tools is used by Auto Loader to process data
incrementally?
A. Checkpointing
B. Spark Structured Streaming
C. Data Explorer
D. Unity Catalog
E. Databricks SQL
31. A data engineer has configured a Structured Streaming job to read from a
table, manipulate the data, and then perform a streaming write into a new
table.
The code block used by the data engineer is below:
A. trigger("5 seconds")
B. trigger()
C. trigger(once="5 seconds")
E. trigger(continuous="5 seconds")
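A hedged sketch of a micro-batch trigger firing every 5 seconds in Structured Streaming (table names and checkpoint path are assumptions):

# processingTime triggers a micro-batch on a fixed interval.
(spark.readStream.table("source_table")
    .writeStream
    .trigger(processingTime="5 seconds")
    .option("checkpointLocation", "/tmp/checkpoints/demo")
    .toTable("target_table"))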
32. A dataset has been defined using Delta Live Tables and includes an
expectations clause:
CONSTRAINT valid_timestamp EXPECT (timestamp > '2020-01-01') ON
VIOLATION DROP ROW
What is the expected behavior when a batch containing records that violate
this constraint is processed?
A. Records that violate the expectation are dropped from the target
dataset and loaded into a quarantine table.
B. Records that violate the expectation are added to the target dataset
and flagged as invalid in a field added to the target dataset.
C. Records that violate the expectation are dropped from the target
dataset and recorded as invalid in the event log. (Most Voted)
D. Records that violate the expectation are added to the target dataset
and recorded as invalid in the event log.
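A hedged sketch of how this expectation looks in DLT’s Python API (dataset names assumed; the code runs only inside a Delta Live Tables pipeline). expect_or_drop corresponds to ON VIOLATION DROP ROW, and the drop counts surface in the pipeline event log:

import dlt

# Rows failing the expectation are dropped from the target dataset, and
# the violation counts are recorded in the DLT event log.
@dlt.table
@dlt.expect_or_drop("valid_timestamp", "timestamp > '2020-01-01'")
def events():
    return spark.readStream.table("raw_events")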
33. Which of the following describes when to use the CREATE STREAMING
LIVE TABLE (formerly CREATE INCREMENTAL LIVE TABLE) syntax over the
CREATE LIVE TABLE syntax when creating Delta Live Tables (DLT) tables
using SQL?
C. CREATE STREAMING LIVE TABLE is redundant for DLT and it does not
need to be used.
A. Unity Catalog
B. Delta Lake
C. Databricks SQL
D. Data Explorer
36. A data engineer has three tables in a Delta Live Tables (DLT) pipeline.
They have configured the pipeline to drop invalid records at each table. They
notice that some data is being dropped due to quality concerns at some
point in the DLT pipeline. They would like to determine at which table in their
pipeline the data is being dropped.
Which of the following approaches can the data engineer take to identify the
table that is dropping the records?
C. They can set up DLT to notify them via email when records are
dropped.
D. They can navigate to the DLT pipeline page, click on each table, and
view the data quality statistics. (Most Voted)
E. They can navigate to the DLT pipeline page, click on the “Error”
button, and review the present errors.
37. A data engineer has a single-task Job that runs each morning before they
begin working. After identifying an upstream data issue, they need to set up
another task to run a new notebook prior to the original task.
Which of the following approaches can the data engineer use to set up the
new task?
A. They can clone the existing task in the existing Job and update it to
run the new notebook.
B. They can create a new task in the existing Job and then add it as a
dependency of the original task. (Most Voted)
C. They can create a new task in the existing Job and then add the
original task as a dependency of the new task.
D. They can create a new job from scratch and add both tasks to run
concurrently.
E. They can clone the existing task to a new Job and then edit it to run
the new notebook.
B. They can set the query’s refresh schedule to end after a certain
number of refreshes.
C. They cannot ensure the query does not cost the organization money
beyond the first week of the project’s release.
D. They can set a limit to the number of individuals that are able to
manage the query’s refresh schedule.
E. They can set the query’s refresh schedule to end on a certain date in
the query scheduler. (Most Voted)
39. A data analysis team has noticed that their Databricks SQL queries are
running too slowly when connected to their always-on SQL endpoint. They
claim that this issue is present when many members of the team are running
small queries simultaneously. They ask the data engineering team for help.
The data engineering team notices that each of the team’s queries uses the
same SQL endpoint.
Which of the following approaches can the data engineering team use to
improve the latency of the team’s queries?
C. They can turn on the Auto Stop feature for the SQL endpoint.
D. They can turn on the Serverless feature for the SQL endpoint.
E. They can turn on the Serverless feature for the SQL endpoint and
change the Spot Instance Policy to “Reliability Optimized.”
C. They can turn on the Auto Stop feature for the SQL endpoint. (Most Voted)
E. They can ensure the dashboard’s SQL endpoint is not one of the
included query’s SQL endpoint.