Practice Test UDEMY 1
A user cannot view the result set from a query that another user executed,
except when using the ACCOUNTADMIN role. (True / False)
Your answer is incorrect
TRUE
Correct answer
FALSE
Overall explanation
A user cannot view the result set from a query that another user
executed. This behavior is intentional. For security reasons, only the user
who executed a query can access the query results. This behavior is not
connected to the Snowflake access control model for objects. Even a user
with the ACCOUNTADMIN role cannot view the results for a query
run by another user.
Domain
Account Access & Security
Question 2Incorrect
A task can execute any one of the following types of SQL code: (Select 3)
Correct selection
Procedural logic using Snowflake Scripting
Your selection is correct
Call to a stored procedure
Multiple SQL statements
Your selection is correct
Single SQL Statement
Overall explanation
A task can execute any one of the following types of SQL code:
Single SQL statement
Call to a stored procedure
Procedural logic using Snowflake Scripting.
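As a sketch, the three forms of task body might look like the following (the task, warehouse, and table names here are hypothetical):

```sql
-- Single SQL statement
CREATE TASK t_single
  WAREHOUSE = my_wh
  SCHEDULE = '5 MINUTE'
AS
  INSERT INTO my_log SELECT CURRENT_TIMESTAMP();

-- Call to a stored procedure
CREATE TASK t_proc
  WAREHOUSE = my_wh
  SCHEDULE = '5 MINUTE'
AS
  CALL my_cleanup_proc();

-- Procedural logic using Snowflake Scripting
CREATE TASK t_script
  WAREHOUSE = my_wh
  SCHEDULE = '5 MINUTE'
AS
BEGIN
  DELETE FROM my_log WHERE ts < DATEADD(day, -7, CURRENT_TIMESTAMP());
END;
```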
Domain
Snowflake Data Platform Features and Architecture
Question 3Correct
Which of these roles is dedicated to user and role management only?
ORGADMIN
ACCOUNTADMIN
SYSADMIN
Your answer is correct
USERADMIN
SECURITYADMIN
Overall explanation
USERADMIN role is dedicated to user and role management only. More
specifically, this role:
Is granted the CREATE USER and CREATE ROLE security privileges.
Can create users and roles in the account.
This role can also manage users and roles that it owns.
Only the role with the OWNERSHIP privilege on an object (i.e. user or
role), or a higher role, can modify the object properties.
Domain
Account Access & Security
Question 4Correct
Which database objects can be shared using the Snowflake Secure Data
Sharing feature? (Select all that apply)
Your selection is correct
Tables
Your selection is correct
Secure Views
Your selection is correct
Secure UDFs
Your selection is correct
Secure Materialized View
Your selection is correct
External Tables
Roles
Overall explanation
Secure Data Sharing enables sharing selected objects in a database in
your account with other Snowflake accounts. The following Snowflake
database objects can be shared:
Tables
External tables
Secure views
Secure materialized views
Secure UDFs
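A minimal sketch of sharing a table via Secure Data Sharing (the share, database, and consumer account identifiers are hypothetical):

```sql
CREATE SHARE my_share;
-- Grant usage on the containing database and schema to the share
GRANT USAGE ON DATABASE my_db TO SHARE my_share;
GRANT USAGE ON SCHEMA my_db.public TO SHARE my_share;
-- Grant SELECT on the object being shared
GRANT SELECT ON TABLE my_db.public.sales TO SHARE my_share;
-- Add the consumer account to the share
ALTER SHARE my_share ADD ACCOUNTS = consumer_org.consumer_account;
```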
Domain
Snowflake Data Platform Features and Architecture
Question 5Correct
Which of these SQL functions returns the absolute path of a staged file,
using the stage name and the path of the file relative to its location in the
stage as inputs?
BUILD_STAGE_FILE_URI
Explanation
This is incorrect. This function extracts the relative path of a staged file but
does not construct an absolute path. It is used for the reverse operation—
retrieving the path relative to the base location.
BUILD_SCOPED_FILE_URL
Question 6Incorrect
On which of these cloud platforms can a Snowflake account be hosted?
(Select 3)
IBM Cloud
Your selection is incorrect
Oracle Cloud
Your selection is correct
AWS
Your selection is correct
AZURE
Correct selection
GCP
Overall explanation
A Snowflake account can be hosted on any of the following cloud platforms:
Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure
(Azure). On each platform, Snowflake provides one or more regions where
the account is provisioned.
Domain
Snowflake Data Platform Features and Architecture
Question 7Correct
As an ACCOUNTADMIN in Snowflake, which methods can you use to view
billing details for Automatic Clustering? (Select all that apply)
Your selection is correct
Snowsight (Web Interface):
Explanation
In the Snowsight interface, you can navigate to the specified sections to view
cost insights related to Automatic Clustering.
Your selection is correct
Classic Web Interface:
Click on Account.
Explanation
The Classic Web Interface provides a Billing & Usage section where you
can find entries associated with AUTOMATIC_CLUSTERING, detailing the
billing information.
Your selection is correct
Access the AUTOMATIC_CLUSTERING_HISTORY view to retrieve detailed
information about Automatic Clustering activities, including credits
consumed.
Explanation
Example: This query provides insights into clustering history and associated
costs for January 2023.
SELECT
    TABLE_NAME,
    CREDITS_USED,
    NUM_BYTES_RECLUSTERED,
    NUM_ROWS_RECLUSTERED,
    START_TIME
FROM
    SNOWFLAKE.ACCOUNT_USAGE.AUTOMATIC_CLUSTERING_HISTORY
WHERE
    START_TIME >= '2023-01-01'
    AND START_TIME <= '2023-01-31'
ORDER BY
    START_TIME DESC;
Explanation
Third-party tools may offer additional insights but are not the primary means
provided by Snowflake for monitoring billing.
Overall explanation
Options 1, 2, and 3 are correct.
By utilizing these methods, you can effectively monitor and analyze the
billing details for Automatic Clustering in your Snowflake account.
Domain
Snowflake Data Platform Features and Architecture
Question 8Correct
If you create a Network Policy by providing both 'Allowed IP Addresses' and
'Blocked IP Addresses', which is applied first by Snowflake while validating
the access?
Allowed IP Addresses
Your answer is correct
Blocked IP Addresses
Overall explanation
If you provide both Allowed IP Addresses and Blocked IP
Addresses, Snowflake applies the Blocked List first.
Domain
Account Access & Security
Question 9Incorrect
The Snowflake Information Schema includes table functions you can query to
retrieve information about your directory tables. Which table function can be
used to query the history of data files registered in the metadata of specified
objects and the credits billed for these operations?
STAGE_DIRECTORY_FILE_REGISTRATION_HISTORY
Correct answer
AUTO_REFRESH_REGISTRATION_HISTORY
DATABASE_REFRESH_HISTORY
Your answer is incorrect
STAGE_STORAGE_USAGE_HISTORY
Overall explanation
AUTO_REFRESH_REGISTRATION_HISTORY table function can be used to
query the history of data files registered in the metadata of specified objects
and the credits billed for these operations. The table function returns the
billing history within a specified date range for your entire Snowflake
account. This function returns billing activity within the last 14 days.
Question 10Correct
Which of these Snowflake Editions automatically stores data in an encrypted
state?
Standard
Virtual Private Snowflake(VPS)
Your answer is correct
All of the Snowflake Editions
Business Critical
Enterprise
Overall explanation
All of the Snowflake Editions (Standard, Enterprise, Business Critical, Virtual
Private Snowflake) automatically store data in an encrypted state.
Domain
Data Protection and Data Sharing
Question 11Correct
Which of these system-defined roles can manage operations at the
organization level?
USERADMIN
ACCOUNTADMIN
SYSADMIN
SECURITYADMIN
Your answer is correct
ORGADMIN
Overall explanation
ORGADMIN role manages operations at the organizational level. More
specifically, this role:
Can create accounts in the organization.
Can view all accounts in the organization (using SHOW ORGANIZATION
ACCOUNTS) and all regions enabled for the organization (using SHOW
REGIONS).
Can view usage information across the organization.
Domain
Account Access & Security
Question 12Correct
A user can be assigned multiple roles. (True / False)
FALSE
Your answer is correct
TRUE
Overall explanation
Roles are the entities to which privileges on securable objects can be
granted and revoked. Roles are assigned to users to allow them to perform
actions required for business functions in their organization. A user can be
assigned multiple roles. It allows users to switch roles (i.e., choose which
role is active in the current Snowflake session) to perform different actions
using separate sets of privileges.
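A short sketch of assigning multiple roles and switching between them (the role and user names are hypothetical):

```sql
-- A user can hold several roles at once
GRANT ROLE analyst TO USER john;
GRANT ROLE developer TO USER john;

-- Within a session, the user chooses which role is active
USE ROLE developer;
```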
Domain
Account Access & Security
Question 13Incorrect
John has to create a PIPE that will be triggered for loading by calling the
Snowpipe REST endpoints. What parameter does he need to specify in
CREATE PIPE statement?
API_INGEST = FALSE
Explanation
The parameter API_INGEST = FALSE is not the correct parameter to specify
in the CREATE PIPE statement for a pipe that will be triggered for loading by
calling the Snowpipe REST endpoints. This setting does not align with the
requirement of external triggers for loading and does not provide the
necessary configuration for the described scenario.
API_INGEST = TRUE
Explanation
This is partially correct, as it considers size and activity, but does not
address the performance implications related to cache retention.
Your answer is correct
Consider the trade-off between saving credits by suspending the
warehouse versus maintaining the cache of data from the previous
queries to help with performance.
Explanation
This is partially correct, but it lacks the critical aspect of cache impact,
which is a key factor in deciding whether to suspend a warehouse
Overall explanation
Suspending a Snowflake warehouse can be cost-effective as it stops compute
billing. However, the local cache, which stores previously accessed data, is
lost during suspension. This may result in slower performance when the
warehouse is resumed, especially for queries that benefit from cached data.
It is essential to balance the cost savings from suspension against the
potential performance hit due to cache loss. For instance, if your workload
involves repetitive queries, maintaining the warehouse in an active state
might yield better performance despite the ongoing costs.
Domain
Performance Concepts
Question 15Incorrect
Which of the following languages does Snowflake support for writing UDFs
(User-Defined Functions)? (Select 4)
Your selection is correct
Python
Explanation
Snowflake supports JAVA for writing UDFs, allowing users to create custom
functions in JAVA to extend the functionality of Snowflake.
Your selection is incorrect
GO
Explanation
Snowflake does not support GO for writing UDFs. Only JAVA, JavaScript, SQL,
Python, and a few other languages are supported for UDF development in
Snowflake.
Your selection is correct
JavaScript
Explanation
SQL is supported by Snowflake for writing UDFs, providing users with the
ability to create custom functions directly in SQL to enhance data processing
capabilities.
C#
Explanation
Snowflake does not support C# for writing UDFs. Only specific languages like
JAVA, JavaScript, SQL, and Python are supported for UDF development in
Snowflake.
Overall explanation
In Snowflake, User-Defined Functions (UDFs) can be written
in SQL, JavaScript, Java, Python, and Scala. These supported languages
allow users to implement custom functions that perform complex
calculations or data transformations directly within Snowflake, using familiar
programming environments. For instance, SQL UDFs are ideal for simple,
inline data manipulations, while Java and Python UDFs enable more complex
data processing tasks that leverage external libraries. Snowflake
does not support UDFs in languages like Go or C#.
It's good to know the information provided below:
1. SQL UDF
SQL UDFs are best for simple data transformations within SQL expressions.
2. JavaScript UDF
JavaScript UDFs allow for more flexible scripting directly in Snowflake using
JavaScript syntax.
3. Java UDF
Java UDFs use Java code and require compiling Java classes into Snowflake
using the CREATE FUNCTION statement. To set up a Java UDF, you would need
to use a JAR file containing the Java class.
Java Class Example (compiled and uploaded as a JAR file):
4. Python UDF
Python UDFs are used for data transformations involving Python libraries
and syntax.
5. Scala UDF
Scala UDFs require a compiled JAR file similar to Java UDFs and use Scala
code. These UDFs are useful for data processing tasks leveraging Scala’s
capabilities. Example Scala class:
object MathUtils {
  def square(x: Int): Int = x * x
}
Domain
Data Transformation
Question 16Correct
What happens to the data when the retention period ends for an object?
SYSADMIN can restore the data from Fail-safe
Data can be restored by increasing the retention period
Data is permanently lost
Your answer is correct
Data is moved to Snowflake Fail-safe
Overall explanation
When the retention period ends for an object, the historical data is moved
into Snowflake Fail-safe. Snowflake support needs to be contacted to get
the data restored from Fail-safe.
Domain
Data Protection and Data Sharing
Question 17Incorrect
The user access history can be found by querying the
Your answer is incorrect
Information Schema ACCESS_HISTORY view
Information Schema ACCESS_REPORT view
Account Usage ACCESS_REPORT view
Correct answer
Account Usage ACCESS_HISTORY view
Overall explanation
Access History in Snowflake refers to when the user query reads column data
and when the SQL statement performs a data write operation, such as
INSERT, UPDATE, and DELETE, along with variations of the COPY command,
from the source data object to the target data object. The user access
history can be found by querying the Account Usage
ACCESS_HISTORY view.
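A sketch of querying this view (the user name filter is hypothetical):

```sql
SELECT query_id, user_name, query_start_time, direct_objects_accessed
FROM SNOWFLAKE.ACCOUNT_USAGE.ACCESS_HISTORY
WHERE user_name = 'JOHN'
ORDER BY query_start_time DESC;
```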
Domain
Account Access & Security
Question 18Correct
Which of these Snowflake features enables accessing historical data
(i.e., data that has been changed or deleted) at any point within a defined
period?
Data Sharing
Zero Copy Cloning
Search Optimization Service
Your answer is correct
Time Travel
Overall explanation
Snowflake Time Travel enables accessing historical data (i.e. data that
has been changed or deleted) at any point within a defined period. It serves
as a powerful tool for performing the following tasks:
Restoring data-related objects (tables, schemas, and databases) that
might have been accidentally or intentionally deleted.
Duplicating and backing up data from key points in the past.
Analyzing data usage/manipulation over specified periods of time.
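The tasks above can be sketched with Time Travel syntax (the table name and timestamp are hypothetical):

```sql
-- Query a table as it was 30 minutes ago
SELECT * FROM my_table AT(OFFSET => -60*30);

-- Restore an accidentally dropped table
UNDROP TABLE my_table;

-- Duplicate data from a point in the past
CREATE TABLE my_table_backup CLONE my_table
  AT(TIMESTAMP => '2023-01-15 12:00:00'::timestamp_ltz);
```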
Domain
Data Protection and Data Sharing
Question 19Correct
A stored procedure can run with both the caller’s rights and the owner’s
rights simultaneously. (True / False)
TRUE
Your answer is correct
FALSE
Overall explanation
A stored procedure runs with either the caller’s rights or the owner’s rights. It
cannot run with both at the same time. A caller’s rights stored
procedure runs with the privileges of the caller. The primary advantage of a
caller’s rights stored procedure is that it can access information about that
caller or about the caller’s current session. For example, a caller’s rights
stored procedure can read the caller’s session variables and use them in a
query. An owner’s rights stored procedure runs mostly with the
privileges of the stored procedure’s owner. The primary advantage of an
owner’s rights stored procedure is that the owner can delegate specific
administrative tasks, such as cleaning up old data, to another role without
granting that role more general privileges, such as privileges to delete all
data from a specific table. At the time that the stored procedure is created,
the creator specifies whether the procedure runs with owner’s rights or
caller’s rights. The default is owner’s rights.
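A sketch of declaring each mode at creation time via the EXECUTE AS clause (the procedure and table names are hypothetical):

```sql
-- Owner's rights (the default)
CREATE PROCEDURE cleanup_old_data()
  RETURNS STRING
  LANGUAGE SQL
  EXECUTE AS OWNER
AS
BEGIN
  DELETE FROM my_log WHERE ts < DATEADD(day, -30, CURRENT_TIMESTAMP());
  RETURN 'done';
END;

-- Caller's rights: can read the caller's session context
CREATE PROCEDURE whoami()
  RETURNS STRING
  LANGUAGE SQL
  EXECUTE AS CALLER
AS
BEGIN
  RETURN CURRENT_USER();
END;
```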
Domain
Snowflake Data Platform Features and Architecture
Question 20Correct
Dynamic Data Masking is supported by (Select all that apply)
Your selection is correct
VPS
Standard Edition
Your selection is correct
Enterprise Edition
Your selection is correct
Business Critical
Overall explanation
Dynamic Data Masking features require Enterprise Edition (or
higher).
Domain
Account Access & Security
Question 21Correct
Which is not the DML (Data Manipulation Language) command?
DELETE
UPDATE
TRUNCATE
INSERT
MERGE
Your answer is correct
UNDROP
Overall explanation
DML commands are used for managing data within database objects. In
Snowflake, typical DML commands include INSERT (to insert data into a
table), MERGE (to merge rows into a table), UPDATE (to update existing data
within a table), DELETE (to delete records from a table), and TRUNCATE (to
delete all records from a table but not the table itself).
Domain
Snowflake Data Platform Features and Architecture
Question 22Correct
Only the user who generated the scoped URL can use the URL to access the
referenced file. (True/False)
FALSE
Your answer is correct
TRUE
Overall explanation
True, only the user who generated the scoped URL can use the URL to access
the referenced file. In the case of a File URL, any role that has sufficient
privileges on the stage can access the file.
Domain
Data Transformation
Question 23Correct
You have a table with a 30-day retention period. If you decrease the
retention period to 20 days, how would it affect the data that would have
been removed after 30 days?
Your answer is correct
The data will now retain for a shorter period of 20 days
The data will still retain for 30-day before moving to Fail-safe
Overall explanation
Decreasing Retention reduces the amount of time data is retained in Time
Travel:
For active data modified after the retention period is reduced, the new
shorter period applies.
For data that is currently in Time Travel:
If the data is still within the new shorter period, it remains in
Time Travel.
If the data is outside the new period, it moves into Fail-safe.
For example, if you have a table with a 30-day retention period and you
decrease the period to 20-day, data from days 21 to 30 will be moved into
Fail-safe, leaving only the data from day 1 to 20 accessible through Time
Travel. However, the process of moving the data from Time Travel into Fail-
safe is performed by a background process, so the change is not
immediately visible. Snowflake guarantees that the data will be moved, but
does not specify when the process will complete; until the background
process completes, the data is still accessible through Time Travel.
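The retention change described above can be sketched as a single statement (the table name is hypothetical):

```sql
-- Reduce Time Travel retention from 30 days to 20 days;
-- data older than 20 days is moved to Fail-safe by a background process
ALTER TABLE my_table SET DATA_RETENTION_TIME_IN_DAYS = 20;
```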
Domain
Data Protection and Data Sharing
Question 24Incorrect
Which of these functions helps generate the FILE URL to access the
unstructured data file?
GET_RELATIVE_PATH
Explanation
This is correct. BUILD_STAGE_FILE_URL function is the correct choice as it
is specifically designed to help generate the FILE URL to access unstructured
data files within Snowflake. It constructs the URL needed to access the file in
the stage, making it the appropriate choice for this scenario.
Overall explanation
The BUILD_STAGE_FILE_URL function is specifically designed to generate a
permanent file URL for accessing unstructured data stored in a stage. It
directly uses the stage name and the file's relative path, ensuring a stable
link for data access. Other options focus on generating temporary access
points or extracting paths, not creating a durable file URL.
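A sketch of calling the function (the stage and file path are hypothetical):

```sql
-- Returns a permanent file URL for the staged file
SELECT BUILD_STAGE_FILE_URL(@my_stage, '/folder/report.pdf');
```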
Domain
Data Transformation
Question 25Incorrect
Snowflake blocks certain IPs by default to ensure that customers get the
highest level of network security. (TRUE / FALSE)
Your answer is incorrect
TRUE
Correct answer
FALSE
Overall explanation
By default, Snowflake allows users to connect to the service from
any computer or device IP address. A security administrator (or higher)
can create a network policy to allow or deny access to a single IP address or
a list of addresses.
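A sketch of such a network policy (the policy name and IP ranges are hypothetical):

```sql
CREATE NETWORK POLICY my_policy
  ALLOWED_IP_LIST = ('192.168.1.0/24')
  BLOCKED_IP_LIST = ('192.168.1.99');

-- Apply the policy at the account level
ALTER ACCOUNT SET NETWORK_POLICY = my_policy;
```

Note that when both lists are provided, Snowflake evaluates the blocked list first.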
Domain
Account Access & Security
Question 26Correct
At what frequency does Snowflake rotate the object keys?
16 Days
60 Days
1 Year
Your answer is correct
30 Days
Overall explanation
All Snowflake-managed keys are automatically rotated by Snowflake
when they are more than 30 days old. Active keys are retired, and new
keys are created. When Snowflake determines the retired key is no longer
needed, the key is automatically destroyed. When active, a key is used to
encrypt data and is available for usage by the customer. When retired, the
key is used solely to decrypt data and is only available for accessing the
data.
Domain
Account Access & Security
Question 27Incorrect
What is the expiration period of a File URL?
The URL expires when the persisted query result period ends
Your answer is incorrect
Length of time specified in the expiration_time argument
Correct answer
It is Permanent
Overall explanation
The expiration period of a File URL: it is permanent; a File URL does not
expire. By contrast, the expiration period of a Scoped URL is tied to the
persisted query result period; the Scoped URL expires when that period ends.
Domain
Data Transformation
Question 28Incorrect
Select the type of function that can operate on a subset of rows within the
set of input rows.
System Function
Explanation
This is correct. Window functions, like ROW_NUMBER , RANK , and SUM OVER , are
specifically designed to perform calculations across a subset (window) of
rows related to the current row. They allow calculations over partitions of
data, making them suitable for subset operations.
Scalar Function
Explanation
This is incorrect. Scalar functions return a single value for each row of
input. They do not operate on a subset of rows but rather on individual rows.
Your answer is incorrect
Aggregate Function
Explanation
This is incorrect. Aggregate functions like SUM or AVG operate over a set of
rows to produce a single summary value but do not operate on subsets
within the input rows. They are not specifically designed to work on
partitions or windows of data.
User-Defined Function
Explanation
This is incorrect. UDFs are custom functions created by users that can
operate at a row level or aggregate level, but they are not specifically
designed for window-based calculations.
Table Function
Domain
Account Access & Security
Question 30Correct
File URL is ideal for
None of these
business intelligence applications or reporting tools that need to
display the unstructured file contents
use in custom applications, providing unstructured data to other
accounts via a share
Your answer is correct
custom applications that require access to unstructured data files
Overall explanation
File URL: URL that identifies the database, schema, stage, and file path to a
set of files. A role that has sufficient privileges on the stage can access the
files. Ideal for custom applications that require access to
unstructured data files.
Scoped URL: Encoded URL that permits temporary access to a staged file
without granting privileges to the stage. The URL expires when the persisted
query result period ends (i.e., the results cache expires), which is currently
24 hours. Ideal for use in custom applications, providing
unstructured data to other accounts via a share, or for downloading
and ad hoc analysis of unstructured data via Snowsight.
Pre-signed URL: Simple HTTPS URL used to access a file via a web browser.
A file is temporarily accessible to users via this URL using a pre-signed
access token. The expiration time for the access token is configurable. Ideal
for business intelligence applications or reporting tools that need to
display unstructured file contents.
Domain
Data Transformation
Question 31Incorrect
Select the correct statements for Table Clustering. (Select 3)
Your selection is incorrect
Automatic Clustering doesn’t consume credit.
Explanation
This is incorrect – Automatic clustering does consume credits since it
involves compute resources provided by Snowflake to manage the clustering
process.
Snowflake doesn’t charge for Reclustering.
Explanation
Sharing internally with private data exchange or externally with public data
exchange is a capability of Snowgrid. It enables secure data sharing within
an organization or with external partners through controlled access.
Correct selection
Live, ready to query data
Explanation
Zero-copy cloning is not a capability of Snowgrid. It is a feature that allows
for fast and efficient cloning of databases in Snowflake, but it is not directly
related to Snowgrid's capabilities.
Overall explanation
Snowgrid allows you to use Secure Data Sharing features to provide
access to live data, without any ETL or movement of files across
environments.
Domain
Snowflake Data Platform Features and Architecture
Question 33Incorrect
Which stream type is supported for streams on the external table only?
Standard
Explanation
The stream type "Update-only" is not specific to external tables and does not
define the stream type supported for streams on external tables only. This
type of stream captures only the updates made to a table, regardless of
whether it is an internal or external table.
Correct answer
Insert-only
Explanation
The stream type "Insert-only" is the correct choice as it is the stream type
supported for streams on external tables only. This type of stream captures
only the new rows inserted into an external table, making it the appropriate
choice for external table streams.
Your answer is incorrect
Append-only
Explanation
The stream type "Append-only" is not exclusive to external tables and does
not specify the stream type supported for streams on external tables only.
This type of stream captures only the new rows added to a table, regardless
of its internal or external nature.
Overall explanation
Insert-only is supported for streams on external tables only. An insert-only
stream tracks row inserts only; they do not record delete operations that
remove rows from an inserted set (i.e. no-ops).
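A sketch of creating such a stream (the stream and external table names are hypothetical):

```sql
-- Insert-only streams are the only stream type supported on external tables
CREATE STREAM my_ext_stream
  ON EXTERNAL TABLE my_ext_table
  INSERT_ONLY = TRUE;
```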
Domain
Snowflake Data Platform Features and Architecture
Question 34Correct
Snowflake is available in four editions. Which are those? (Select 4)
Your selection is correct
Business Critical
Explanation
The Virtual Private Snowflake (VPS) edition is a specialized edition that offers
dedicated resources and infrastructure for organizations that require a
higher level of isolation and control over their Snowflake environment. It
provides enhanced security and customization options for specific business
requirements.
Your selection is correct
Enterprise
Explanation
The Standard edition of Snowflake is one of the available editions that
provide basic functionalities for data warehousing and analytics. It is suitable
for small to medium-sized businesses with standard data processing needs.
Professional
Explanation
The Professional Plus edition is also not one of the available editions of
Snowflake. This choice is incorrect as it is not a valid edition provided by
Snowflake for data processing and analytics.
Overall explanation
Snowflake offers four editions designed to meet varying business needs:
1. Standard Edition: This is the base edition providing essential
features like SQL data warehousing, secure data sharing, and 1-day
time travel.
2. Enterprise Edition: This edition builds on Standard and adds features
such as multi-cluster warehouses and up to 90-day time travel for
enhanced scalability.
3. Business Critical Edition: This edition provides additional security
measures, such as HIPAA, PCI compliance, and encryption everywhere,
catering to highly regulated industries.
4. Virtual Private Snowflake (VPS): Designed for organizations with
strict security requirements, offering isolated, private Snowflake
instances with dedicated resources.
Domain
Snowflake Data Platform Features and Architecture
Question 35Correct
Reader accounts enable providers to share data with consumers who are not
already Snowflake customers, without requiring those consumers to become
Snowflake customers themselves. Which role can create a Reader account?
SECURITYADMIN
Your answer is correct
ACCOUNTADMIN
USERADMIN
SYSADMIN
Overall explanation
ACCOUNTADMIN role (or a role granted the CREATE ACCOUNT global
privilege) only can create the Reader account.
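A sketch of creating a reader account (the account name and admin credentials are hypothetical):

```sql
CREATE MANAGED ACCOUNT reader_acct1
  ADMIN_NAME = 'reader_admin',
  ADMIN_PASSWORD = 'StrongPassword1!',
  TYPE = READER;
```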
Domain
Account Access & Security
Question 36Correct
If you recreate a pipe using the CREATE OR REPLACE PIPE command, what
happens to the load history when the Snowpipe is recreated?
The pipe can not be recreated
Explanation
The pipe can indeed be recreated using the CREATE OR REPLACE PIPE
command in Snowflake. This command allows for the replacement of an
existing pipe with a new definition, but it does not prevent the recreation
process or affect the load history.
The recreated Pipe still has tracks of the files loaded by the old Pipe
Explanation
The recreated pipe does not retain any tracks of the files loaded by the old
pipe. When a pipe is recreated, it starts fresh without any previous load
history or file tracking.
Snowflake still keeps load history
Explanation
Snowflake does not retain the load history when a pipe is recreated using the
CREATE OR REPLACE PIPE command. The load history is reset during the
recreation process.
Your answer is correct
The load history gets reset to empty
Explanation
This is incorrect. This description aligns with the Standard scaling policy,
not the Economy policy. The Standard policy is more aggressive in adding
clusters to minimize queuing, while the Economy policy is more
conservative.
Your answer is correct
Only if the system estimates there’s enough query load to keep the
cluster busy for at least 6 minutes.
Explanation
This is correct. Smaller batches consume less memory, making it less likely
for data to overflow to local or remote storage. This strategy ensures more
efficient memory use, leading to faster query execution.
Processing data in larger batches.
Explanation
This is incorrect. Larger batches require more memory, increasing the risk
of data spillage to local or remote storage, which can slow down query
performance even further.
Your selection is correct
Splitting the processing into multiple steps.
Explanation
This is incorrect. The query history page in Snowflake does not allow
viewing details for the past 31 days. Instead, the standard interface, like the
Classic Console and Snowsight, displays query history for the last 14 days.
Your answer is correct
FALSE
Explanation
This is correct. The default time frame for query history visibility in
Snowflake’s UI is 14 days, not 31 days. For longer retention, querying
the QUERY_HISTORY view can allow access to historical data, but this requires
using specific database views or queries that cover up to a year.
Overall explanation
In Snowflake's UI, the Query History page provides a list of queries
executed within the last 14 days, not 31 days. For more extended query
history, Snowflake’s QUERY_HISTORY view in the ACCOUNT_USAGE schema allows
access to up to 365 days of history, but this requires database-level querying
rather than using the standard UI interface.
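A sketch of querying the longer history from the ACCOUNT_USAGE schema (the 90-day window is an arbitrary example):

```sql
SELECT query_id, query_text, start_time, total_elapsed_time
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE start_time >= DATEADD(day, -90, CURRENT_TIMESTAMP())
ORDER BY start_time DESC;
```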
Domain
Performance Concepts
Question 41Correct
Which objects does the Kafka connector create for each topic?
One internal stage to temporarily store data files for each topic
One pipe to ingest the data files for each topic partition
Your answer is correct
All of these
One table for each topic. If the table specified for each topic does
not exist
Overall explanation
The connector creates the following objects for each topic:
One internal stage to temporarily store data files for each topic.
One pipe to ingest the data files for each topic partition.
One table for each topic. If the table specified for each topic does not
exist, the connector creates it; otherwise, the connector creates the
RECORD_CONTENT and RECORD_METADATA columns in the existing
table and verifies that the other columns are nullable (and produces an
error if they are not).
Domain
Snowflake Data Platform Features and Architecture
Question 42Correct
John is trying to load JSON data sets with a huge array containing multiple
records. Considering the VARIANT data type imposed size of 16 MB, what do
you recommend to John for optimally loading the data?
No need to remove the outer array structure, as Snowflake
Intelligent Engine will take care of that.
Explanation
This is correct – Enabling the STRIP_OUTER_ARRAY option removes the top-
level array from the JSON file. This makes it possible to load each element
within the array as individual rows in a Snowflake table, helping to stay
within the 16 MB size limitation of the VARIANT data type.
Overall explanation
When dealing with large JSON data sets containing arrays, enabling
the STRIP_OUTER_ARRAY option in the JSON file format for the COPY
INTO command ensures that Snowflake removes the top-level array. This
allows the system to treat each nested JSON element as an individual row,
bypassing the 16 MB limit of the VARIANT data type. Here’s a code example
demonstrating this configuration:
CREATE FILE FORMAT my_json_format
  TYPE = 'JSON'
  STRIP_OUTER_ARRAY = TRUE;

COPY INTO my_table
FROM @my_stage/file.json
FILE_FORMAT = (FORMAT_NAME = my_json_format);
This configuration helps load large JSON files effectively by breaking down
the array structure, enhancing both performance and storage handling within
Snowflake's limitations. Sources: Snowflake Documentation on JSON File
Formats, InterWorks Guide on Semi-Structured Data
Domain
Data Loading and Unloading
Question 43Correct
How can we turn off the query result cache?
Setting the parameter USE_QUERY_CACHED to FALSE
Explanation
This is incorrect. There is no parameter called USE_QUERY_CACHED in
Snowflake’s configuration settings. This term does not exist in official
Snowflake documentation.
Your answer is correct
Setting the parameter USE_CACHED_RESULT to FALSE
Overall explanation
To disable the query result cache in Snowflake, the correct parameter
is USE_CACHED_RESULT . By setting this parameter to FALSE at the session level
(using ALTER SESSION SET USE_CACHED_RESULT=FALSE ), Snowflake will not use
cached results for queries, ensuring that each query execution retrieves
fresh data. This can be useful when testing performance or validating query
results without relying on cache optimizations. Parameters
like USE_CACHED_INFO or USE_QUERY_CACHED are not valid, and the result cache
can indeed be controlled.
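As a minimal sketch, the parameter can be toggled at the session level:

```sql
-- Disable the query result cache for the current session (e.g., for performance testing)
ALTER SESSION SET USE_CACHED_RESULT = FALSE;

-- Re-enable it when finished
ALTER SESSION SET USE_CACHED_RESULT = TRUE;
```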
Domain
Performance Concepts
Question 44Correct
If we make any changes to the original table, then
The cloned table data get refreshed with the entire new data of the
source table
Your answer is correct
The changes do not reflect in the cloned table
The changes get immediately reflected in the cloned table
Overall explanation
Zero-copy cloning allows us to make a snapshot of any table, schema,
or database without actually copying data. A clone is writable and is
independent of its source (i.e., changes made to the source or clone
are not reflected in the other object). A new clone of a table points to
the original table's micro partitions, using no data storage. If we make any
changes in the cloned table, then only its changed micro partitions are
written to storage.
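A minimal sketch of this behavior (table names are illustrative):

```sql
CREATE TABLE t1 (id INT, val STRING);
INSERT INTO t1 VALUES (1, 'a');

-- Zero-copy clone: shares t1's micro-partitions, consuming no additional storage
CREATE TABLE t1_clone CLONE t1;

-- A change to the source is not reflected in the clone:
-- t1_clone still contains only the row (1, 'a')
INSERT INTO t1 VALUES (2, 'b');
```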
Domain
Data Protection and Data Sharing
Question 45Incorrect
Which of these are not supported by the Search Optimization Service?
(Select all that apply)
Your selection is correct
External Tables
Correct selection
Analytical Expressions
Correct selection
Columns defined with COLLATE clause
Your selection is correct
Column Concatenation
Correct selection
Casts on table columns
Your selection is correct
Materialized Views
Overall explanation
None of these are currently supported by the Search Optimization
Service. Additionally, tables and views protected by row access policies
cannot be used with the Search Optimization Service.
Domain
Snowflake Data Platform Features and Architecture
Question 46Correct
What is the purpose of VALIDATION_MODE in the COPY INTO <table> command?
Your answer is correct
VALIDATION_MODE is used to validate the load file for errors instead of
loading it into the specified table.
Explanation
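As a hedged sketch, a validation-only COPY might look like this (stage and table names are illustrative):

```sql
-- Validate the staged files and return any errors, without loading the data
COPY INTO my_table
FROM @my_stage
VALIDATION_MODE = RETURN_ERRORS;
```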
In this example, the COPY INTO command checks the data files
in @my_stage for errors and returns any issues found, without loading the
data into my_table .
Domain
Data Loading and Unloading
Question 47Incorrect
What are the supported file formats for data unloading in Snowflake?
Your selection is incorrect
Avro
Explanation
This is incorrect – Avro is supported for loading data into Snowflake, but
it is not a supported format for data unloading.
XML
Explanation
Snowflake's support for JSON and Parquet enables better data integration
with downstream analytical tools and data lakes. Other formats like Avro and
ORC are primarily for data loading, not unloading.
Domain
Data Loading and Unloading
Question 48Correct
Which command will list the pipes for which you have access privileges?
SHOW PIPES();
Explanation
The command SHOW PIPES(); is not a valid Snowflake SQL command. It will
not show the pipes for which you have access privileges.
DESCRIBE PIPES;
Explanation
The command DESCRIBE PIPES; is not a valid Snowflake SQL command. It will
not list the pipes for which you have access privileges.
Your answer is correct
SHOW PIPES;
Explanation
The command SHOW PIPES; is a valid Snowflake SQL command. It lists the
pipes for which you have access privileges.
Overall explanation
SHOW PIPES Command lists the pipes for which you have access privileges.
This command can list the pipes for a specified database or schema (or the
current database/schema for the session), or your entire account.
Domain
Snowflake Data Platform Features and Architecture
Question 49Incorrect
If a user is logged in to Snowflake in a federated environment and the IdP
times out, what happens to the user's Snowflake session?
Correct answer
It does not affect the user's Snowflake sessions. However, to initiate
any new Snowflake sessions, the user must log into the IdP again.
Your answer is incorrect
The Snowflake web interface is disabled, and the prompt for IdP
authentication is displayed.
Overall explanation
After a specified period of time (defined by the IdP), a user’s session in the
IdP automatically times out, but this does not affect their Snowflake sessions.
Any Snowflake sessions that are active at the time remain open and do not
require re-authentication. However, to initiate any new Snowflake sessions,
the user must log into the IdP again.
Domain
Account Access & Security
Question 50Correct
Snowflake automatically and transparently maintains materialized views.
(True/False)
FALSE
Your answer is correct
TRUE
Overall explanation
Snowflake automatically and transparently maintains materialized
views. A background service updates the materialized view after changes to
the base table. This is more efficient and less error-prone than manually
maintaining the equivalent of a materialized view at the application level.
Domain
Snowflake Data Platform Features and Architecture
Question 51Correct
Which options are available for data transformation while loading data into
a table using the COPY command? (Select all that apply)
Your selection is correct
Casts
Explanation
Casts allow you to perform data type conversions during data loading using
the COPY command. This feature is useful for ensuring data compatibility
between the source and destination tables, especially when the data types
do not match.
Your selection is correct
Column reordering
Explanation
Column reordering allows the columns in the staged data files to be loaded
in a different order than the columns in the destination table, so the file
layout does not have to match the table definition.
Join
Explanation
Join is not an option available for data transformation while loading data into
a table using the COPY command. Joins are typically used for combining data
from multiple tables, and they are not directly related to the data
transformation process during data loading.
Your selection is correct
Truncation of Text Strings
Explanation
Truncation of Text Strings option allows you to truncate text strings that
exceed the specified length during data loading using the COPY command.
This can help in handling data integrity issues and ensuring that the data fits
within the defined schema constraints.
Overall explanation
Snowflake supports transforming data while loading it into a table using the
COPY command. Options include:
Column reordering
Column omission
Casts
Truncating text strings that exceed the target column length.
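These options can be combined in a transform-on-load COPY statement; a sketch (stage, table, and column names are illustrative):

```sql
-- Reorder columns, cast, and truncate while loading staged CSV data
COPY INTO my_table (col_b, col_a)
FROM (
  SELECT t.$2::DATE,           -- cast the second file column to DATE
         SUBSTR(t.$1, 1, 50)   -- truncate the first file column to 50 characters
  FROM @my_stage t
)
FILE_FORMAT = (TYPE = 'CSV');
```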
Domain
Data Transformation
Question 52Correct
Multi-cluster warehouses are beneficial in improving the performance of
slow-running queries or data loading. (True/False)
Your answer is correct
FALSE
TRUE
Overall explanation
Multi-cluster warehouses in Snowflake are primarily designed to handle high
concurrency scenarios, where many users or queries need to be processed
simultaneously. They automatically scale by adding or reducing clusters
based on workload demands, which helps to manage the number of
concurrent users effectively. However, for improving the performance of
individual slow-running queries or data loading, increasing the size of
a standard single-cluster warehouse is usually more effective. Multi-cluster
warehouses do not inherently speed up slow queries; instead, they are better
suited for distributing workload among multiple users and queries.
Domain
Performance Concepts
Question 53Correct
What value will be returned by the following query?
Explanation
The query is attempting to flatten an empty JSON array using the FLATTEN
function. Since there are no elements in the array to flatten, the result will
not be a numerical value like 0.
[]
Explanation
The query is using the FLATTEN function on an empty JSON array. When
flattening an empty array, the result will not be another array, such as an
empty array.
NULL
Explanation
The query is trying to flatten an empty JSON array, which will not result in a
NULL value. The FLATTEN function will handle the empty array by not
returning a NULL value.
Your answer is correct
Nothing will be returned; the output for the input row will be omitted
Explanation
This is correct – Azure Blob Storage is also supported for staging, enabling
data operations directly from Azure containers.
Oracle Cloud Storage
Explanation
Domain
Data Loading and Unloading
Question 55Incorrect
An account-level resource monitor overrides the resource monitor
assignment for individual warehouses. (True/False)
Your answer is incorrect
TRUE
Correct answer
FALSE
Domain
Performance Concepts
Question 56Incorrect
Search optimization is a Database-level property applied to all the tables
within the database with supported data types. (True/False)
Correct answer
FALSE
Your answer is incorrect
TRUE
Overall explanation
Search optimization is a table-level property and applies to all columns
with supported data types. The search optimization service aims to
significantly improve the performance of selective point lookup queries on
tables. A point lookup query returns only one or a small number of distinct
rows. A user can register one or more tables to the search optimization
service.
Domain
Snowflake Data Platform Features and Architecture
Question 57Incorrect
Which objects are not available for replication in the Standard Edition of
Snowflake? (Select 3)
Database
Correct selection
Integrations
Your selection is correct
Roles
Your selection is correct
Users
Your selection is incorrect
Shares
Overall explanation
Database and share replication are available in all editions, including the
Standard edition. Replication of all other objects is only available for
Business Critical Edition (or higher).
Domain
Data Protection and Data Sharing
Question 58Correct
How long do results remain in the Query results cache?
Your answer is correct
24 hours
Explanation
This is correct. Query results are retained in the result cache for 24 hours.
Each time a cached result is reused, the 24-hour retention period is reset,
up to a maximum of 31 days from the original query execution.
1 hour
Explanation
This is incorrect. The query result cache in Snowflake is not set to expire
after just 1 hour. This is not consistent with the official caching period
provided by Snowflake.
31 hours
Explanation
This is incorrect. Snowflake does not have a default 31-hour cache period
for query results. While the 24-hour cache retention can extend if reused, 31
hours is not a specific setting.
16 hours
Explanation
This option omits rows with non-expandable elements (default behavior) but
does not affect recursive expansion.
Correct answer
RECURSIVE => TRUE
Explanation
Setting RECURSIVE to TRUE expands all sub-elements recursively, not just
the top-level elements of the VARIANT value.
RECURSIVE => FALSE
Explanation
Example -
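A minimal sketch of recursive flattening (input value is illustrative):

```sql
-- RECURSIVE => TRUE expands nested sub-elements, not just the top level
SELECT f.path, f.value
FROM TABLE(FLATTEN(INPUT => PARSE_JSON('{"a": {"b": 1}}'),
                   RECURSIVE => TRUE)) f;
```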
Domain
Data Transformation
Question 63Correct
What authentication methods does Snowflake support for REST API
authentication? (Select 2)
Snowflake Account User ID and Password
Your selection is correct
Key Pair Authentication
Authentication is not required in case Snowflake SQL API
Your selection is correct
OAuth
Overall explanation
The Snowflake SQL API supports OAuth and key pair authentication.
Domain
Data Transformation
Question 64Correct
What would you create (UDF or Stored procedure) if you need a function that
can be called as part of a SQL statement and must return a value that will be
used in the statement?
Stored Procedure
Explanation
Stored procedures are called independently using the CALL statement and
cannot be embedded within SQL expressions, so they do not meet this
requirement. UDFs, in contrast, are created to calculate and return a value
that can be used within SQL statements. They can be invoked as part of
expressions in SELECT , WHERE , or other clauses, making them suitable for
scenarios where a function's return value is needed within a SQL statement.
Overall explanation
In Snowflake, if you need a function that can be called within a SQL
statement and whose return value is utilized in that statement, you should
create a User-Defined Function (UDF). UDFs are designed to be part of
SQL expressions, allowing for seamless integration into queries. In
contrast, Stored Procedures are intended for executing a series of SQL
statements and are called independently using the CALL statement; they are
not designed to be embedded within other SQL expressions.
Example:
Creating a UDF to calculate the square of a number:
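A minimal sketch of such a UDF:

```sql
-- A SQL UDF that can be embedded directly in a query expression
CREATE OR REPLACE FUNCTION square(x NUMBER)
RETURNS NUMBER
AS
$$
  x * x
$$;

SELECT square(5);  -- returns 25
```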
Domain
Data Transformation
Question 65Correct
The suspended warehouse cannot be resized until it resumes. (True / False)
TRUE
Explanation
External Stage in Snowflake is a feature that allows users to load data from
external cloud storage locations, such as Amazon S3 or Azure Blob Storage,
into Snowflake. It is not used for loading data directly from a local file
system.
ETL tools
Explanation
ETL (Extract, Transform, Load) tools are commonly used for data integration
and loading tasks, including extracting data from various sources,
transforming it, and loading it into a target system like Snowflake. While ETL
tools can be used to load data into Snowflake, they are not the primary tool
for loading data from a local file system.
Overall explanation
SnowSQL is the primary command-line tool used to load data from a
local file system into Snowflake. It allows you to stage data using
the PUT command and then load it into tables via the COPY INTO command.
This tool is especially useful for bulk loading operations from local sources to
Snowflake's internal stages.
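A sketch of this two-step flow from a SnowSQL session (file path, stage, and table names are illustrative):

```sql
-- Stage a local file; PUT compresses it with gzip by default
PUT file:///tmp/data.csv @my_int_stage;

-- Load the staged (compressed) file into the target table
COPY INTO my_table
FROM @my_int_stage/data.csv.gz
FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1);
```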
Domain
Snowflake Data Platform Features and Architecture
Question 68Correct
Which of the following file formats is not supported by Snowflake?
JSON
Explanation
Snowflake supports JSON file format for data loading and unloading
operations. JSON is a popular format for storing semi-structured data.
Your answer is correct
EDI
Explanation
EDI (Electronic Data Interchange) is not a file format natively supported by
Snowflake for data loading or unloading. EDI data must be converted to a
supported format before it can be loaded.
PARQUET
Explanation
Snowflake supports PARQUET file format for data loading and unloading
operations. PARQUET is a columnar storage format that is highly optimized
for query performance and efficient data storage.
CSV
Explanation
Snowflake supports CSV file format for data loading and unloading
operations. It is a common format used for storing tabular data.
AVRO
Explanation
Snowflake supports AVRO file format for data loading and unloading
operations. AVRO is a compact and efficient binary format for data
serialization.
Overall explanation
Snowflake supports a range of structured and semi-structured file formats,
including CSV, JSON, AVRO, ORC, and PARQUET, making it versatile for
diverse data scenarios. These formats are ideal for structured and analytical
data, whereas EDI is not a natively supported format.
Domain
Data Loading and Unloading
Question 69Correct
Which view in the Account Usage Schema can be used to query the
replication history for a specified database?
DATA_TRANSFER_HISTORY
DATABASE_REFRESH_HISTORY
REPLICATION_GROUP_REFRESH_HISTORY
Your answer is correct
REPLICATION_USAGE_HISTORY
Overall explanation
The REPLICATION_USAGE_HISTORY view in the Account Usage
Schema can be used to query the replication history for a specified
database. The returned results include the database name, credits
consumed, and bytes transferred for replication. Usage data is retained for
365 days (1 year).
Domain
Data Protection and Data Sharing
Question 70Correct
How many maximum columns (or expressions) are recommended for a
cluster key?
12 to 16
The higher the number of columns (or expressions) in the key, the
better the performance will be
7 to 8
Your answer is correct
3 to 4
Overall explanation
A single clustering key can contain one or more columns or
expressions. Snowflake recommends a maximum of 3 or 4 columns (or
expressions) per key for most tables. Adding more than 3-4 columns
tends to increase costs more than benefits.
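For instance, a clustering key within the recommended size might be defined as follows (table and column names are illustrative):

```sql
-- A two-column clustering key, well within the recommended 3-4 column limit
ALTER TABLE sales CLUSTER BY (region, sale_date);
```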
Domain
Snowflake Data Platform Features and Architecture
Question 71Correct
Which copyOptions can help load a file with expired metadata (if the
LAST_MODIFIED date is older than 64 days and the initial set of data was
loaded into the table more than 64 days earlier (and if the file was loaded
into the table, that also occurred more than 64 days earlier))? (Select 2)
LOAD_CERTAIN_FILES = TRUE
Explanation
The option LOAD_CERTAIN_FILES is not a valid Snowflake copy option. It does
not exist in Snowflake's documentation and does not provide a solution for
the specific condition mentioned in the question.
Your selection is correct
FORCE = TRUE
Explanation
The option FORCE = TRUE is the correct choice for loading a file with expired
metadata. When set to TRUE, it forces the loading of data files into a table,
even if the metadata is outdated or if the file was loaded into the table more
than 64 days earlier. This option helps in handling files with expired
metadata effectively.
ON_ERROR = CONTINUE
Explanation
The option ON_ERROR = CONTINUE controls how errors encountered during
loading are handled; it does not address files whose load metadata has
expired.
Overall explanation
In Snowflake, the COPY INTO <table> command tracks loaded files to prevent
duplicate loading. If a file's metadata is older than 64 days, Snowflake may
skip it due to uncertainty about its load history. To load such files, you can
use the LOAD_UNCERTAIN_FILES = TRUE option, which attempts to load files
even when their load history is uncertain. Alternatively, setting FORCE =
TRUE forces the loading of all files, regardless of their load history, but may
result in duplicate data if files were previously loaded.
Example:
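A hedged sketch (stage and table names are illustrative):

```sql
-- Attempt to load files even when their load history has expired (> 64 days)
COPY INTO my_table
FROM @my_stage
LOAD_UNCERTAIN_FILES = TRUE;
```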
This command attempts to load data from @my_stage into my_table , including
files with expired metadata.
Explanation
Both external stages (cloud storage like Amazon S3, Azure Blob)
and internal stages (within Snowflake) can handle unstructured
data types, including images, videos, and documents. Snowflake
allows storage and processing of unstructured data directly, improving
flexibility in handling various data formats.
Overall explanation
In Snowflake, both external stages (linked to cloud storage) and internal
stages support unstructured data storage, allowing flexible storage and
management of diverse data formats. This helps users integrate structured
and unstructured data efficiently for analysis.
Domain
Data Transformation
Question 73Correct
Monica wants to delete all the data from table t1. She wants to keep the
table structure, so she does not need to create the table again. Which
command will be appropriate for her need?
REMOVE
Your answer is correct
TRUNCATE
UNDROP
DELETE
DROP
Overall explanation
TRUNCATE will delete all of the data from a single table. So, once Monica
truncates table t1, table t1's structure remains, but the data will be deleted.
DELETE is usually used for deleting single rows of data.
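In Monica's case the command is a one-liner:

```sql
-- Removes all rows from t1 but keeps the table definition intact
TRUNCATE TABLE t1;
```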
Domain
Snowflake Data Platform Features and Architecture
Question 74Incorrect
What would happen if we suspend a warehouse while it is executing a
SQL statement?
Your answer is incorrect
All the compute resources of the warehouse will be shut down
immediately, and the running statement will be canceled.
Explanation
This choice is incorrect because suspending the warehouse does not result in
an error while the warehouse is executing SQL statements. The suspension
process only affects idle compute resources.
All compute resources of the warehouse will be up until the
statement is complete.
Explanation
This choice is incorrect because suspending the warehouse does not keep all
compute resources up until the statement is complete. Only idle compute
resources are shut down.
Correct answer
Only idle compute resources of the warehouse will be shut down,
allowing any compute resources executing statements to continue
until the statement is complete.
Explanation
Semi-structured data does not conform to a fixed schema but retains some
structural elements (e.g., key-value pairs, tags). JSON, XML, and Avro are
typical formats, allowing data to be stored without strict organization but
enabling partial structure for easier parsing. Snowflake supports semi-
structured data with specialized functions (like FLATTEN ) to extract nested
elements.
Overall explanation
Unstructured data does not follow a predefined schema, making
traditional analysis methods challenging. Examples include documents,
multimedia, and social media data. Snowflake supports unstructured data
management, allowing it to be staged in Snowflake and cloud storage. Semi-
structured data, like JSON, has some structure without strict
organization, useful for nested or hierarchical data. Structured data, such
as tables, fits into a defined schema. Snowflake's capabilities allow handling
of all three data types, with functions like FLATTEN for semi-structured data
and specific stages for unstructured data.
Domain
Data Transformation
Question 76Correct
In what situations should you consider User-Managed Tasks over Serverless
Tasks? (Select 2)
Your selection is correct
Consider when adherence to the schedule interval is less important.
Consider when adherence to the schedule interval is highly
important.
Your selection is correct
Consider when you can fully utilize a single warehouse by
scheduling multiple concurrent tasks to take advantage of available
compute resources.
Consider when you cannot fully utilize a warehouse because too few
tasks run concurrently or they run to completion quickly (in less
than 1 minute).
Overall explanation
User-managed Tasks is recommended when you can fully utilize a
single warehouse by scheduling multiple concurrent tasks to take
advantage of available compute resources. Also, recommended when
adherence to the schedule interval is less critical. Serverless Tasks is
recommended when you cannot fully utilize a warehouse because too
few tasks run concurrently or they run to completion quickly (in less than 1
minute). Also, recommended when adherence to the schedule interval is
critical.
Domain
Snowflake Data Platform Features and Architecture
Question 77Correct
Which of these types of VIEW does Snowflake support? (Select 3)
Your selection is correct
STANDARD VIEW
Your selection is correct
MATERIALIZED VIEW
Your selection is correct
SECURE VIEW
PERMANENT VIEW
TEMPORARY VIEW
EXTERNAL VIEW
Overall explanation
Snowflake supports three types of views.
Standard View, Secure View, and Materialized View.
Secure View: A secure view is exactly like a standard view, except that users
cannot see how the view was defined. A secure view may run a little slower
than a standard view because Snowflake may bypass some query optimizations
to protect the information in the view.
Domain
Snowflake Data Platform Features and Architecture
Question 78Incorrect
A user's default role is
the name used to log in to the Snowflake WebUI.
Correct answer
the role a user gets set to each time the user logs in to Snowflake.
changed each time the user logs in to Snowflake.
Your answer is incorrect
always the default PUBLIC role.
Overall explanation
A user's default role is the role a user gets set to each time the user logs in
to Snowflake. Snowflake uses roles to control the objects (virtual
warehouses, databases, tables, etc.) that users can access:
Snowflake provides a set of predefined roles, as well as a framework
for defining a hierarchy of custom roles.
All Snowflake users are automatically assigned the predefined PUBLIC
role, which enables login to Snowflake and basic object access.
In addition to the PUBLIC role, each user can be assigned additional
roles, with one of these roles designated as their default role.
A user’s default role determines the role used in the Snowflake
sessions initiated by the user; however, this is only a default. Users can
change roles within a session at any time.
Roles can be assigned at user creation or afterward.
Domain
Account Access & Security
Question 79Incorrect
How can we add a Directory table explicitly to a stage to store a catalog of
staged files?
Your answer is incorrect
Using CREATE DIRECTORY TABLE command and then add to the
stage by ALTER STAGE command
Using CREATE DIRECTORY TABLES command and then add to the
stage by ALTER STAGE command
Correct answer
Using CREATE STAGE command
Overall explanation
A Directory table is not a separate database object; it stores a catalog of
staged files in cloud storage. Roles with sufficient privileges can query a
directory table to retrieve file URLs to access the staged files and other
metadata. A directory table can be added explicitly to a stage when the
stage is created (using CREATE STAGE) or later (using ALTER STAGE) by
supplying directoryTableParams:
directoryTableParams (for internal stages) ::=
  [ DIRECTORY = ( ENABLE = { TRUE | FALSE }
                  [ REFRESH_ON_CREATE = { TRUE | FALSE } ] ) ]
ENABLE = TRUE | FALSE specifies whether to add a directory table to the
stage. When the value is TRUE, a directory table is created with the stage.
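Both paths can be sketched as follows (the stage name is illustrative):

```sql
-- Add a directory table when the stage is created
CREATE STAGE my_stage DIRECTORY = (ENABLE = TRUE);

-- Or enable it later on an existing stage
ALTER STAGE my_stage SET DIRECTORY = (ENABLE = TRUE);
```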
Domain
Data Transformation
Question 80Incorrect
How can you unload data from Snowflake using the COPY INTO command into
a single file?
Correct answer
By specifying copy option SINGLE=TRUE
Explanation
By specifying the copy option SINGLE=TRUE, you can unload data from
Snowflake using the COPY INTO command into a single file. This option
ensures that the data is unloaded into a single file instead of multiple files,
which is the requirement specified in the question.
By specifying copy option MULTIPLE_FILES=FALSE
Explanation
Example -
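A hedged sketch (stage, table, and file names are illustrative):

```sql
-- Unload the table into one consolidated CSV file in the stage
COPY INTO @my_stage/my_data.csv
FROM my_table
FILE_FORMAT = (TYPE = 'CSV')
SINGLE = TRUE;
```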
In this example, data from my_table is unloaded into a single CSV file
named my_data.csv in the stage my_stage . The SINGLE = TRUE option ensures
that the output is consolidated into one file.
Domain
Data Loading and Unloading
Question 81Incorrect
Which services are managed by Snowflake's cloud services layer? (Select all
that apply)
Your selection is correct
Authentication
Explanation
Access Control is managed by Snowflake's cloud services layer to enforce
security policies, permissions, and roles to control access to data and
resources within the Snowflake platform.
Overall explanation
The cloud services layer in Snowflake manages several key functions,
including authentication, infrastructure management, metadata
management, query parsing and optimization, and access control.
These services ensure that user requests, from login to query execution, are
processed efficiently while Snowflake manages the underlying infrastructure.
This layer also handles security and governance, ensuring that users have
appropriate access to data. It abstracts the complexity of infrastructure and
metadata management, allowing users to focus on their data without
worrying about backend processes.
Domain
Snowflake Data Platform Features and Architecture
Question 82Correct
Which algorithm does Snowflake use to estimate the approximate number of
distinct values in a data set?
HyerAccumulateLog
Explanation
The Virtual Private Snowflake (VPS) edition of Snowflake does not specifically
mention federated authorization as a feature. It is tailored for organizations
that require a dedicated and isolated environment for their data processing
needs.
Your answer is correct
All of the Snowflake Editions
Explanation
This is incorrect; the query does not return all rows unless the table has
fewer than 100 rows.
Your answer is correct
Return a sample of 100 rows from the table
Explanation
Correct. The query returns a random sample of exactly 100 rows from t1 .
If t1 has fewer than 100 rows, all rows are returned.
Return a sample of a table in which each row has a 10% probability
of being included in the sample
Explanation
This is incorrect; the query specifies a fixed row count, not a percentage-
based sample.
Overall explanation
In Snowflake, SAMPLE ROW (100) in the SELECT statement requests a random
sample of 100 rows from the table. If the table contains fewer than 100
rows, all rows are returned. This method is useful when you want a fixed
number of rows without applying percentage-based sampling.
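A sketch of both sampling forms; note that in Snowflake's grammar a fixed row count is written with the ROWS keyword, while a bare number is a probability:

```sql
-- Fixed-size sample: up to 100 rows
SELECT * FROM t1 SAMPLE ROW (100 ROWS);

-- Probability-based sample: each row has a 10% chance of inclusion
SELECT * FROM t1 SAMPLE ROW (10);
```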
Domain
Data Transformation
Question 87Incorrect
Which of these is a kind of Cache in Snowflake?
Data/Local Disk Cache
Explanation
Data cache, also known as Local Disk Cache, stores frequently accessed data
on the local SSDs of virtual warehouses. It speeds up query performance by
avoiding repeated reads from cloud storage.
Correct answer
All of these
Explanation
All of these options (Metadata Cache, Data/Local Disk Cache, and Query
Result Cache) are types of caches in Snowflake that are used to optimize
query performance and improve overall system efficiency.
Metadata Cache
Explanation
Metadata cache stores information about table structure, row count, and
data attributes in Snowflake's cloud services layer. It allows queries that
don't need to read actual data (e.g., metadata-based queries) to execute
faster.
Your answer is incorrect
Query Result Cache
Explanation
Query result cache saves the results of previously executed queries for up to
24 hours. It is highly effective when identical queries are run multiple times
without changes to the underlying data.
Overall explanation
Snowflake has three types of cache.
The metadata cache that lives in the cloud services layer.
The data cache/local disk cache that lives on the SSD drives in the
virtual warehouses, and
The query result cache. If a result is small, it will be stored in the
cloud services layer, but larger results are going to be stored in the
storage layer.
Domain
Performance Concepts
Question 88Correct
The data objects stored by Snowflake are not directly visible nor accessible
by customers; they are only accessible through SQL query operations run
using Snowflake. (True/False)
FALSE
Explanation
FALSE. Selecting this option is incorrect. In Snowflake, the stored data
objects are not directly visible or accessible by customers; they can only be
accessed and interacted with through SQL query operations run using
Snowflake. This level of abstraction ensures data security and privacy.
Your answer is correct
TRUE
Explanation
TRUE. In Snowflake, the data objects stored are not directly visible or
accessible by customers. Customers can only access and interact with the
data objects through SQL query operations run using Snowflake. This level of
abstraction provides security and privacy for the stored data.
Overall explanation
In Snowflake, the data objects are not directly visible or accessible to
customers. All data stored by Snowflake is managed internally and can only
be accessed through SQL query operations executed within the Snowflake
environment. This ensures that Snowflake handles aspects such as file
organization, compression, and metadata management, abstracting these
complexities away from the end user.
Domain
Snowflake Data Platform Features and Architecture
Question 89Incorrect
What are the two modes available for setting up a multi-cluster warehouse in
Snowflake? Choose the correct options.
Your selection is incorrect
Standard mode
Explanation
This is incorrect. There is no "Standard mode" in the official Snowflake
multi-cluster warehouse configurations. The term may seem intuitive, but
Snowflake’s documentation clearly specifies "Maximized" and "Auto-scaling"
as the only available modes.
Correct selection
Maximized mode
Explanation
Snowflake supports User-Defined functions, which allow users to define their
own custom functions using SQL or JavaScript. These functions can
encapsulate complex logic and calculations for reuse within SQL queries.
Overall explanation
Snowflake supports a comprehensive range of SQL functions to facilitate
diverse data operations:
Scalar functions process individual values, returning a single result
per input.
Aggregate functions compute a single result from multiple input
rows, such as totals or averages.
Window functions perform calculations across a set of table rows
related to the current row, useful for tasks like ranking or running
totals.
System functions provide information about the system, session, or
database objects.
Table functions return a set of rows, often used to process semi-
structured data or generate series.
User-Defined Functions (UDFs) enable users to create custom
functions in languages like SQL, JavaScript, or Python, extending
Snowflake's capabilities.
Domain
Data Transformation
Question 91Incorrect
Time Travel can be disabled for an account by ACCOUNTADMIN. (True/False)
Your answer is incorrect
TRUE
Correct answer
FALSE
Overall explanation
Time Travel cannot be disabled for an account. A user with the
ACCOUNTADMIN role can set DATA_RETENTION_TIME_IN_DAYS to 0 at the
account level, which means that all databases (and subsequently all
schemas and tables) created in the account have no retention period by
default; however, this default can be overridden at any time for any
database, schema, or table.
Domain
Data Protection and Data Sharing
Question 92Correct
Which schema can be used to find out about storage, compute, and objects
in a Snowflake account?
SNOWFLAKE_SCHEMA
Explanation
The schema contains specific views such as TABLES, COLUMNS, and STAGES,
which allow users to see metadata about table structure, storage allocation,
and object history. Here's a code snippet example that
uses INFORMATION_SCHEMA:
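A minimal example of such a query (the database name MY_DB and schema name PUBLIC are assumed for illustration):

```sql
-- List tables with their types and creation timestamps for one schema
SELECT table_name,
       table_type,
       created
FROM MY_DB.INFORMATION_SCHEMA.TABLES
WHERE table_schema = 'PUBLIC'
ORDER BY created;
```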
This query retrieves a list of tables along with their types and creation
timestamps, filtered by schema. The INFORMATION_SCHEMA offers a robust,
standardized way to manage and analyze data assets within a Snowflake
environment, supporting effective database management and optimization.
Domain
Performance Concepts
Question 93Correct
Which object parameter can users with the ACCOUNTADMIN role use to set
the default retention period for their account?
DATA_RETENTION_TIME_IN_HOURS
DATA_RETENTION_IN_TIME_TRAVEL
Your answer is correct
DATA_RETENTION_TIME_IN_DAYS
DATA_RETENTION_TIME_MAX
Overall explanation
Users can use the DATA_RETENTION_TIME_IN_DAYS object parameter
with the ACCOUNTADMIN role to set the default retention period for their
account.
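As a sketch, the account-level default can be set with an ALTER ACCOUNT statement (the 30-day value is an assumed example):

```sql
-- Set the account-wide default Time Travel retention to 30 days
ALTER ACCOUNT SET DATA_RETENTION_TIME_IN_DAYS = 30;
```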
Domain
Data Protection and Data Sharing
Question 94Correct
You can create an account level network policy using _____ (Select all that
apply)
Your selection is correct
Snowsight
Your selection is correct
Classic Web Interface
Only Snowflake Support can create the Account level Network Policy
Your selection is correct
SQL
Overall explanation
Only security administrators (i.e., users with the SECURITYADMIN role or
higher) or a role with the global CREATE NETWORK POLICY privilege can
create network policies using Snowsight, the Classic Web Interface, or SQL.
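As a SQL sketch, a security administrator could create and apply an account-level network policy like this (the policy name and IP range are assumed):

```sql
-- Create a network policy allowing one assumed IP range
CREATE NETWORK POLICY corp_policy
  ALLOWED_IP_LIST = ('192.168.1.0/24');

-- Apply it at the account level
ALTER ACCOUNT SET NETWORK_POLICY = 'corp_policy';
```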
Domain
Account Access & Security
Question 95Incorrect
The VALIDATION_MODE parameter does not support COPY statements that
transform data during a load. (True / False)
Your answer is incorrect
FALSE
Explanation
1. RETURN_ERRORS
This mode scans the data files and reports any validation errors found. It
does not load any data into the table. Using RETURN_ERRORS helps in
identifying initial issues, like type mismatches or missing columns, without
attempting a full data load. It’s useful for catching a sample of errors to
address before proceeding with an actual load.
2. RETURN_ALL_ERRORS
Similar to RETURN_ERRORS, this mode also scans the data for errors without
loading data. However, instead of stopping at the first few
errors, RETURN_ALL_ERRORS continues to scan the entire file and returns all
errors found. This comprehensive error report is useful when you want to see
every potential issue across the data files, although it can be slower due to
the complete scan.
3. RETURN_n_ROWS
The RETURN_n_ROWS mode validates the data files and returns up to the
specified number of rows that would have been loaded, without actually
loading them into the target table. It’s an effective way to preview data
before performing a full load to ensure it appears in the expected format.
The parameter n specifies how many sample rows should be returned. This
option does not check for data errors but allows users to inspect a sample of
the data.
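The three modes above can be sketched with COPY statements like these (the table and stage names are assumed):

```sql
-- Report a sample of validation errors without loading any data
COPY INTO my_table FROM @my_stage
  VALIDATION_MODE = RETURN_ERRORS;

-- Scan the entire file set and report every error found
COPY INTO my_table FROM @my_stage
  VALIDATION_MODE = RETURN_ALL_ERRORS;

-- Preview the first 10 rows that would be loaded
COPY INTO my_table FROM @my_stage
  VALIDATION_MODE = RETURN_10_ROWS;
```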
Important Note: The VALIDATION_MODE parameter does not support COPY
statements that transform data during a load, so the statement in the
question is TRUE.
Domain
Data Transformation
Question 96Correct
The major benefits of defining Clustering Keys: (Select 2)
Your selection is correct
To help improve query performance
To help in faster data sharing
Explanation
This is incorrect. Faster data sharing is not a direct benefit of defining
Clustering Keys. Clustering Keys primarily impact table organization and
query performance rather than data sharing speed.
Your selection is correct
To help optimize table maintenance
To help in organizing small tables (<1 GB)
Explanation
This is incorrect. Clustering Keys are more beneficial for optimizing larger
tables rather than small tables (<1 GB). While they can still be used for
smaller tables, the impact on performance and maintenance optimization
may not be as significant as with larger datasets.
Overall explanation
Clustering keys provide significant benefits in improving query
performance by organizing data in a way that reduces the need for full
table scans. They also help optimize table maintenance by automatically
reclustering tables as data is modified. These advantages are particularly
impactful for large tables, where clustering ensures that queries run
efficiently. However, clustering keys do not impact data sharing or
significantly benefit small tables.
Domain
Performance Concepts
Question 97Correct
The major benefits of defining Clustering Keys: (Select 2)
To help in faster data sharing
To help in organizing small tables (<1 GB)
Your selection is correct
To help optimize table maintenance
Your selection is correct
To help improve query performance
Overall explanation
Defining clustering keys for very large tables (in the multi-terabyte
range) helps optimize table maintenance and query performance.
Small tables are not a good candidate for clustering.
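As a sketch, a clustering key can be defined on a large table like this (the table and column names are assumed):

```sql
-- Define a clustering key at creation time
CREATE TABLE sales (sale_date DATE, region VARCHAR, amount NUMBER)
  CLUSTER BY (sale_date, region);

-- Or add/change one on an existing table
ALTER TABLE sales CLUSTER BY (sale_date);
```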
Domain
Performance Concepts
Question 98Incorrect
Which SQL command determines whether a network policy is set on the
account or for a specific user?
SHOW POLICIES
SHOW PARAMETER
Your answer is incorrect
SHOW NETWORK_POLICIES
Correct answer
SHOW PARAMETERS
SHOW POLICY
Overall explanation
The SHOW PARAMETERS command determines whether a network policy
is set on the account or for a specific user.
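As a sketch, the command can target either scope (the user name is assumed):

```sql
-- Check whether a network policy is set at the account level
SHOW PARAMETERS LIKE 'network_policy' IN ACCOUNT;

-- Check whether one is set for a specific user
SHOW PARAMETERS LIKE 'network_policy' IN USER monica;
```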
Domain
Account Access & Security
Question 99Correct
Monica has successfully created a task with a 5-minute schedule. It has
been 30 minutes, but the task did not run. What could be the reason?
Monica doesn't have the authority to run the task
Monica should run the ALTER TASK command to SUSPEND the task,
and then again run the ALTER TASK command to RESUME the task
Task schedule should not be less than 60 minutes
Your answer is correct
Monica should run the ALTER TASK command to RESUME the task
Overall explanation
A task is created in a suspended state by default, so after creating it we
need to run the ALTER TASK command to RESUME the task before it starts
running on its schedule.
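As a sketch, assuming the task, warehouse, and target table names shown here:

```sql
-- Tasks are created suspended; resume so the 5-minute schedule takes effect
CREATE TASK my_task
  WAREHOUSE = my_wh
  SCHEDULE = '5 MINUTE'
AS
  INSERT INTO log_table VALUES (CURRENT_TIMESTAMP);

ALTER TASK my_task RESUME;
```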
Domain
Snowflake Data Platform Features and Architecture
Question 100Correct
What will happen if a policy is assigned to a user who is already signed in?
Your answer is correct
The user can't do anything else until signed out and signed back in
again.
The user can continue running the SQL queries in the currently
opened session.
There will be no interruption until the user logoffs and signs in
again.
Overall explanation
If a policy is assigned to a user who is already signed in, they can't do
anything else until they sign out and sign back in again to make use of the
new policy.
Domain
Account Access & Security