Practice Test UDEMY 1

The document contains a series of questions and answers related to Snowflake's functionalities, roles, and features, highlighting key aspects of account access, data sharing, and SQL functions. It emphasizes the importance of security, user management, and the capabilities of Snowflake's data platform. Additionally, it discusses best practices for managing warehouses and writing user-defined functions in supported programming languages.


Question 1 (Incorrect)

A user cannot view the result set from a query that another user executed
except for the ACCOUNTADMIN role. (True / False)
Your answer is incorrect
TRUE
Correct answer
FALSE
Overall explanation
A user cannot view the result set from a query that another user
executed. This behavior is intentional. For security reasons, only the user
who executed a query can access the query results. This behavior is not
connected to the Snowflake access control model for objects. Even a user
with the ACCOUNTADMIN role cannot view the results for a query
run by another user.
Domain
Account Access & Security

Question 2 (Incorrect)
A task can execute any one of the following types of SQL code: (Select 3)
Correct selection
Procedural logic using Snowflake Scripting
Your selection is correct
Call to a stored procedure
Multiple SQL statements
Your selection is correct
Single SQL Statement
Overall explanation
A task can execute any one of the following types of SQL code:
 Single SQL statement
 Call to a stored procedure
 Procedural logic using Snowflake Scripting.
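For illustration, all three forms can appear as the body of a CREATE TASK statement. A minimal sketch, assuming a hypothetical warehouse my_wh and table events:

```sql
-- Task running a single SQL statement on a schedule
-- (my_wh and events are hypothetical names)
CREATE OR REPLACE TASK purge_old_events
  WAREHOUSE = my_wh
  SCHEDULE = '60 MINUTE'
AS
  DELETE FROM events WHERE event_ts < DATEADD(day, -90, CURRENT_TIMESTAMP());

-- The body could instead be a stored procedure call:
--   AS CALL cleanup_proc();
-- or a BEGIN ... END block of Snowflake Scripting.

ALTER TASK purge_old_events RESUME;  -- newly created tasks start suspended
```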
Domain
Snowflake Data Platform Features and Architecture

Question 3 (Correct)
Which of these roles is dedicated to user and role management only?
ORGADMIN
ACCOUNTADMIN
SYSADMIN
Your answer is correct
USERADMIN
SECURITYADMIN
Overall explanation
USERADMIN role is dedicated to user and role management only. More
specifically, this role:
 Is granted the CREATE USER and CREATE ROLE security privileges.
 Can create users and roles in the account.
 This role can also manage users and roles that it owns.

Only the role with the OWNERSHIP privilege on an object (i.e. user or
role), or a higher role, can modify the object properties.
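As a sketch (user and role names are hypothetical), the USERADMIN role exercises exactly these privileges:

```sql
USE ROLE USERADMIN;

-- CREATE ROLE and CREATE USER are the privileges granted to USERADMIN
CREATE ROLE IF NOT EXISTS analyst;
CREATE USER IF NOT EXISTS jdoe
  PASSWORD = 'ChangeMe123!'   -- placeholder value
  DEFAULT_ROLE = analyst;
```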

Domain
Account Access & Security

Question 4 (Correct)
Which database objects can be shared using the Snowflake Secure Data
Sharing feature? (Select all that apply)
Your selection is correct
Tables
Your selection is correct
Secure Views
Your selection is correct
Secure UDFs
Your selection is correct
Secure Materialized View
Your selection is correct
External Tables
Roles
Overall explanation
Secure Data Sharing enables sharing selected objects in a database in
your account with other Snowflake accounts. The following Snowflake
database objects can be shared:
 Tables
 External tables
 Secure views
 Secure materialized views
 Secure UDFs

Snowflake enables the sharing of databases through shares created by data providers and “imported” by data consumers.

Domain
Snowflake Data Platform Features and Architecture

Question 5 (Correct)
Which of these SQL functions returns the absolute path of a staged file, using the stage name and the path of the file relative to its location in the stage as inputs?
BUILD_STAGE_FILE_URI

Explanation

This is incorrect. This option is not an officially recognized Snowflake SQL function. It is a fictitious or misleading term in this context.
GET_PRESIGNED_URL

Explanation

This is incorrect. GET_PRESIGNED_URL generates a temporary URL to access a file in an external stage but does not construct an absolute path. It focuses on creating secure access points rather than absolute path resolution.
Your answer is correct
GET_ABSOLUTE_PATH

Explanation

This is correct. The GET_ABSOLUTE_PATH function is designed to retrieve the absolute path of a file using the stage name and a relative path. It accurately returns the file’s location within the cloud storage environment, making it the right choice for constructing absolute paths from relative inputs.
GET_STAGE_LOCATION

Explanation

This is incorrect. GET_STAGE_LOCATION retrieves the URL of a storage location for a given stage, not the absolute path of a specific file within that stage. It’s useful for getting the base URL but does not handle specific file paths.
GET_RELATIVE_PATH

Explanation

This is incorrect. This function extracts the relative path of a staged file but
does not construct an absolute path. It is used for the reverse operation—
retrieving the path relative to the base location.
BUILD_SCOPED_FILE_URL

Explanation

This is incorrect. The BUILD_SCOPED_FILE_URL function generates a scoped URL for accessing a staged file temporarily. It uses the stage name and relative file path but does not retrieve the absolute path. Instead, it focuses on generating URLs for temporary file access.
Overall explanation
The GET_ABSOLUTE_PATH function in Snowflake accurately provides the
absolute path of a file located in a stage by using both the stage name and
the file's relative path as inputs. This capability is essential for scenarios
where precise file location information is needed for data processing or
access. The other functions, such
as BUILD_SCOPED_FILE_URL and GET_PRESIGNED_URL , focus on URL generation
for file access rather than file path retrieval.
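A minimal usage sketch, assuming a hypothetical stage my_stage containing the file folder1/data.csv:

```sql
-- Resolve the file's absolute path from the stage name and relative path
SELECT GET_ABSOLUTE_PATH(@my_stage, 'folder1/data.csv');
```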
Domain
Data Transformation

Question 6 (Incorrect)
On which of the cloud platforms can a Snowflake account be hosted? (Select 3)
IBM Cloud
Your selection is incorrect
Oracle Cloud
Your selection is correct
AWS
Your selection is correct
AZURE
Correct selection
GCP
Overall explanation
A Snowflake account can be hosted on any of the following cloud platforms:
Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure
(Azure). On each platform, Snowflake provides one or more regions where
the account is provisioned.
Domain
Snowflake Data Platform Features and Architecture
Question 7 (Correct)
As an ACCOUNTADMIN in Snowflake, which methods can you use to view
billing details for Automatic Clustering? (Select all that apply)
Your selection is correct
Snowsight (Web Interface):

Navigate to Admin > Cost Management.

Select the Account Overview tab.

Under Cost Insights, locate the Automatic Clustering section to view associated costs.

Explanation
In the Snowsight interface, you can navigate to the specified sections to view
cost insights related to Automatic Clustering.
Your selection is correct
Classic Web Interface:

Click on Account.

Select Billing & Usage.

Within the Billing & Usage page, identify entries related to AUTOMATIC_CLUSTERING to view corresponding billing details.

Explanation

The Classic Web Interface provides a Billing & Usage section where you
can find entries associated with AUTOMATIC_CLUSTERING, detailing the
billing information.
Your selection is correct
Access the AUTOMATIC_CLUSTERING_HISTORY view to retrieve detailed
information about Automatic Clustering activities, including credits
consumed.

Explanation

Executing the provided SQL query against the AUTOMATIC_CLUSTERING_HISTORY view will yield detailed records of Automatic Clustering activities, including the credits consumed, allowing for in-depth analysis.

Example: This query provides insights into clustering history and associated
costs for January 2023.
SELECT
  TABLE_NAME,
  CREDITS_USED,
  NUM_BYTES_RECLUSTERED,
  NUM_ROWS_RECLUSTERED,
  START_TIME,
  END_TIME
FROM
  SNOWFLAKE.ACCOUNT_USAGE.AUTOMATIC_CLUSTERING_HISTORY
WHERE
  START_TIME >= '2023-01-01'
  AND START_TIME < '2023-02-01'
ORDER BY
  START_TIME DESC;

Contacting Snowflake Support

Explanation

While contacting Snowflake Support is a viable approach for billing inquiries, it is not the most direct method for viewing Automatic Clustering billing details.
Third-Party Tools

Explanation

Third-party tools may offer additional insights but are not the primary means
provided by Snowflake for monitoring billing.
Overall explanation
Options 1, 2, and 3 are correct.

 Option 1: In the Snowsight interface, you can navigate to the specified sections to view cost insights related to Automatic Clustering.
 Option 2: The Classic Web Interface provides a Billing &
Usage section where you can find entries associated
with AUTOMATIC_CLUSTERING, detailing the billing information.
 Option 3: Executing the provided SQL query against
the AUTOMATIC_CLUSTERING_HISTORY view will yield detailed records of
Automatic Clustering activities, including the credits consumed,
allowing for in-depth analysis.
 Option 4: While contacting Snowflake Support is a viable approach for
billing inquiries, it is not the most direct method for viewing Automatic
Clustering billing details.
 Option 5: Third-party tools may offer additional insights but are not
the primary means provided by Snowflake for monitoring billing.

By utilizing these methods, you can effectively monitor and analyze the
billing details for Automatic Clustering in your Snowflake account.

Domain
Snowflake Data Platform Features and Architecture

Question 8 (Correct)
If you create a Network Policy by providing both 'Allowed IP Addresses' and
'Blocked IP Addresses', which is applied first by Snowflake while validating
the access?
Allowed IP Addresses
Your answer is correct
Blocked IP Addresses
Overall explanation
If you provide both Allowed IP Addresses and Blocked IP
Addresses, Snowflake applies the Blocked List first.
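A minimal sketch (policy name and IP ranges are hypothetical): because the blocked list is applied first, the single address below is denied even though its /24 range is allowed.

```sql
CREATE NETWORK POLICY corp_policy
  ALLOWED_IP_LIST = ('192.168.1.0/24')
  BLOCKED_IP_LIST = ('192.168.1.99');

-- Activate the policy for the whole account (requires SECURITYADMIN or higher)
ALTER ACCOUNT SET NETWORK_POLICY = corp_policy;
```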
Domain
Account Access & Security
Question 9 (Incorrect)
The Snowflake Information Schema includes table functions you can query to
retrieve information about your directory tables. Which table function can be
used to query the history of data files registered in the metadata of specified
objects and the credits billed for these operations?
STAGE_DIRECTORY_FILE_REGISTRATION_HISTORY
Correct answer
AUTO_REFRESH_REGISTRATION_HISTORY
DATABASE_REFRESH_HISTORY
Your answer is incorrect
STAGE_STORAGE_USAGE_HISTORY
Overall explanation
AUTO_REFRESH_REGISTRATION_HISTORY table function can be used to
query the history of data files registered in the metadata of specified objects
and the credits billed for these operations. The table function returns the
billing history within a specified date range for your entire Snowflake
account. This function returns billing activity within the last 14 days.

Please note, the STAGE_DIRECTORY_FILE_REGISTRATION_HISTORY table function can be used to query information about the metadata history for a directory table, including:

 Files added or removed automatically as part of a metadata refresh.
 Any errors found when refreshing the metadata.
Domain
Data Transformation

Question 10 (Correct)
Which of these Snowflake Editions automatically stores data in an encrypted
state?
Standard
Virtual Private Snowflake(VPS)
Your answer is correct
All of the Snowflake Editions
Business Critical
Enterprise
Overall explanation
All of the Snowflake Editions (Standard, Enterprise, Business Critical, Virtual
Private Snowflake) automatically store data in an encrypted state.
Domain
Data Protection and Data Sharing

Question 11 (Correct)
Which of these system-defined roles can manage operations at the
organization level?
USERADMIN
ACCOUNTADMIN
SYSADMIN
SECURITYADMIN
Your answer is correct
ORGADMIN
Overall explanation
ORGADMIN role manages operations at the organizational level. More
specifically, this role:
 Can create accounts in the organization.
 Can view all accounts in the organization (using SHOW ORGANIZATION
ACCOUNTS) and all regions enabled for the organization (using SHOW
REGIONS).
 Can view usage information across the organization.
Domain
Account Access & Security
Question 12 (Correct)
A user can be assigned multiple roles. (True / False)
FALSE
Your answer is correct
TRUE
Overall explanation
Roles are the entities to which privileges on securable objects can be
granted and revoked. Roles are assigned to users to allow them to perform
actions required for business functions in their organization. A user can be
assigned multiple roles. It allows users to switch roles (i.e., choose which
role is active in the current Snowflake session) to perform different actions
using separate sets of privileges.
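A sketch with hypothetical names: a user granted two roles switches between them within a session.

```sql
GRANT ROLE analyst TO USER jdoe;
GRANT ROLE auditor TO USER jdoe;

-- As jdoe, choose which of the granted roles is active in this session
USE ROLE auditor;
```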
Domain
Account Access & Security

Question 13 (Incorrect)
John has to create a PIPE that will be triggered for loading by calling the
Snowpipe REST endpoints. What parameter does he need to specify in
CREATE PIPE statement?
API_INGEST = FALSE

Explanation
The parameter API_INGEST = FALSE is not the correct parameter to specify
in the CREATE PIPE statement for a pipe that will be triggered for loading by
calling the Snowpipe REST endpoints. This setting does not align with the
requirement of external triggers for loading and does not provide the
necessary configuration for the described scenario.
API_INGEST = TRUE

Explanation

The parameter API_INGEST = TRUE is not the correct parameter to specify in the CREATE PIPE statement for a pipe that will be triggered for loading by calling the Snowpipe REST endpoints. This setting is not relevant to the scenario described and does not align with the requirement of external triggers for loading.
Your answer is incorrect
AUTO_INGEST = TRUE

Explanation

The parameter AUTO_INGEST = TRUE is not the correct parameter to specify in the CREATE PIPE statement for a pipe that will be triggered for loading by calling the Snowpipe REST endpoints. Setting AUTO_INGEST to TRUE would mean that the pipe automatically ingests data without the need for external triggers, which is not the desired behavior in this case.
Correct answer
AUTO_INGEST = FALSE

Explanation

The parameter AUTO_INGEST = FALSE needs to be specified in the CREATE PIPE statement when John wants to create a PIPE that will be triggered for loading by calling the Snowpipe REST endpoints. This setting indicates that the pipe will not automatically ingest data and will rely on external triggers to start the loading process.
Overall explanation
AUTO_INGEST = TRUE enables automatic data loading. Snowpipe supports
loading from external stages (Amazon S3, Google Cloud Storage, or Microsoft
Azure). AUTO_INGEST = FALSE disables automatic data loading. You must
make calls to the Snowpipe REST API endpoints to load data files.
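A sketch of the REST-triggered variant, with hypothetical stage and table names:

```sql
CREATE OR REPLACE PIPE rest_pipe
  AUTO_INGEST = FALSE   -- load only when the Snowpipe REST insertFiles endpoint is called
AS
  COPY INTO raw_events
  FROM @events_stage
  FILE_FORMAT = (TYPE = 'JSON');
```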
Domain
Data Loading and Unloading
Question 14 (Correct)
When deciding whether to suspend a warehouse or leave it running, what
should you consider?
Consider suspending the warehouse if the warehouse is large and
there are no active queries.
Explanation

This is partially correct, as it considers size and activity, but does not
address the performance implications related to cache retention.
Your answer is correct
Consider the trade-off between saving credits by suspending the
warehouse versus maintaining the cache of data from the previous
queries to help with performance.

Explanation

This is correct. Suspending a warehouse stops credit consumption but also drops the cache, which can lead to slower query performance when resumed. Keeping the warehouse running retains the cache, potentially boosting performance for frequently repeated queries.
Consider the trade-off between saving credits by suspending the
warehouse versus the operational cost of resuming the warehouse
when needed.

Explanation

This is partially correct, but it lacks the critical aspect of cache impact, which is a key factor in deciding whether to suspend a warehouse.
Overall explanation
Suspending a Snowflake warehouse can be cost-effective as it stops compute
billing. However, the local cache, which stores previously accessed data, is
lost during suspension. This may result in slower performance when the
warehouse is resumed, especially for queries that benefit from cached data.
It is essential to balance the cost savings from suspension against the
potential performance hit due to cache loss. For instance, if your workload
involves repetitive queries, maintaining the warehouse in an active state
might yield better performance despite the ongoing costs.
Domain
Performance Concepts

Question 15 (Incorrect)
Which of the following languages does Snowflake support for writing UDFs
(User-Defined Functions)? (Select 4)
Your selection is correct
Python

Explanation

Python is supported by Snowflake for writing UDFs, allowing users to develop custom functions in Python to perform specialized tasks within Snowflake.
Your selection is correct
JAVA

Explanation

Snowflake supports JAVA for writing UDFs, allowing users to create custom
functions in JAVA to extend the functionality of Snowflake.
Your selection is incorrect
GO

Explanation

Snowflake does not support GO for writing UDFs. Only JAVA, JavaScript, SQL,
Python, and a few other languages are supported for UDF development in
Snowflake.
Your selection is correct
JavaScript

Explanation

JavaScript is supported by Snowflake for writing UDFs, enabling users to write custom functions in JavaScript to meet specific requirements.
Correct selection
SQL

Explanation

SQL is supported by Snowflake for writing UDFs, providing users with the
ability to create custom functions directly in SQL to enhance data processing
capabilities.
C#

Explanation

Snowflake does not support C# for writing UDFs. Only specific languages like
JAVA, JavaScript, SQL, and Python are supported for UDF development in
Snowflake.
Overall explanation
In Snowflake, User-Defined Functions (UDFs) can be written
in SQL, JavaScript, Java, Python, and Scala. These supported languages
allow users to implement custom functions that perform complex
calculations or data transformations directly within Snowflake, using familiar
programming environments. For instance, SQL UDFs are ideal for simple,
inline data manipulations, while Java and Python UDFs enable more complex
data processing tasks that leverage external libraries. Snowflake
does not support UDFs in languages like Go or C#.
The following background on each UDF language is also useful:

1. SQL UDF

SQL UDFs are best for simple data transformations within SQL expressions.

-- SQL UDF to calculate the square of a number
CREATE OR REPLACE FUNCTION square_sql(x NUMBER)
RETURNS NUMBER
LANGUAGE SQL
AS
$$
x * x
$$;

-- Using the SQL UDF
SELECT square_sql(5); -- Output: 25

2. JavaScript UDF

JavaScript UDFs allow for more flexible scripting directly in Snowflake using
JavaScript syntax.

-- JavaScript UDF to return the length of a string
CREATE OR REPLACE FUNCTION string_length_js(input STRING)
RETURNS NUMBER
LANGUAGE JAVASCRIPT
AS
$$
return INPUT.length; // JavaScript UDF arguments are referenced in uppercase
$$;

-- Using the JavaScript UDF
SELECT string_length_js('Hello Snowflake!'); -- Output: 16

3. Java UDF

Java UDFs use Java code and require compiling Java classes into Snowflake
using the CREATE FUNCTION statement. To set up a Java UDF, you would need
to use a JAR file containing the Java class.
Java Class Example (compiled and uploaded as a JAR file):

public class StringUtils {
    public static int stringLength(String input) {
        return input.length();
    }
}

Creating the Java UDF in Snowflake:

CREATE OR REPLACE FUNCTION string_length_java(input STRING)
RETURNS NUMBER
LANGUAGE JAVA
HANDLER = 'StringUtils.stringLength'
IMPORTS = ('@my_stage/jars/StringUtils.jar'); -- precompiled JARs are referenced via IMPORTS

Using the Java UDF:

SELECT string_length_java('Snowflake'); -- Output: 9

4. Python UDF

Python UDFs are used for data transformations involving Python libraries
and syntax.

-- Python UDF to calculate the factorial of a number
-- (math is part of the Python standard library, so no PACKAGES clause is needed)
CREATE OR REPLACE FUNCTION factorial_py(n INT)
RETURNS INT
LANGUAGE PYTHON
RUNTIME_VERSION = '3.8'
HANDLER = 'factorial_func'
AS
$$
import math

def factorial_func(n):
    return math.factorial(n)
$$;

-- Using the Python UDF
SELECT factorial_py(5); -- Output: 120
5. Scala UDF

Scala UDFs require a compiled JAR file similar to Java UDFs and use Scala
code. These UDFs are useful for data processing tasks leveraging Scala’s
capabilities.

Scala Class Example (compiled and uploaded as a JAR file):

object MathUtils {
  def square(x: Int): Int = x * x
}

Creating the Scala UDF in Snowflake:


CREATE OR REPLACE FUNCTION square_scala(x INT)
RETURNS INT
LANGUAGE SCALA
RUNTIME_VERSION = '2.12'
HANDLER = 'MathUtils.square'
IMPORTS = ('@my_stage/jars/MathUtils.jar'); -- precompiled JARs are referenced via IMPORTS

Using the Scala UDF:

SELECT square_scala(6); -- Output: 36

Domain
Data Transformation
Question 16 (Correct)
What happens to the data when the retention period ends for an object?
SYSADMIN can restore the data from Fail-safe
Data can be restored by increasing the retention period
Data is permanently lost
Your answer is correct
Data is moved to Snowflake Fail-safe
Overall explanation
When the retention period ends for an object, the historical data is moved
into Snowflake Fail-safe. Snowflake support needs to be contacted to get
the data restored from Fail-safe.
Domain
Data Protection and Data Sharing
Question 17 (Incorrect)
The user access history can be found by querying the
Your answer is incorrect
Information Schema ACCESS_HISTORY view
Information Schema ACCESS_REPORT view
Account Usage ACCESS_REPORT view
Correct answer
Account Usage ACCESS_HISTORY view
Overall explanation
Access History in Snowflake refers to when the user query reads column data
and when the SQL statement performs a data write operation, such as
INSERT, UPDATE, and DELETE, along with variations of the COPY command,
from the source data object to the target data object. The user access
history can be found by querying the Account Usage
ACCESS_HISTORY view.
Domain
Account Access & Security

Question 18 (Correct)
Which of these Snowflake features enables accessing historical data (i.e., data that has been changed or deleted) at any point within a defined period?
Data Sharing
Zero Copy Cloning
Search Optimization Service
Your answer is correct
Time Travel
Overall explanation
Snowflake Time Travel enables accessing historical data (i.e. data that
has been changed or deleted) at any point within a defined period. It serves
as a powerful tool for performing the following tasks:
 Restoring data-related objects (tables, schemas, and databases) that might have been accidentally or intentionally deleted.
 Duplicating and backing up data from key points in the past.
 Analyzing data usage/manipulation over specified periods of time.
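These tasks map onto the AT | BEFORE clause and UNDROP. A sketch with a hypothetical orders table (the query ID is a placeholder):

```sql
-- Read the table as it was 30 minutes ago
SELECT * FROM orders AT(OFFSET => -60 * 30);

-- Clone the table as it existed just before a given statement ran
CREATE TABLE orders_backup CLONE orders
  BEFORE(STATEMENT => '<query_id>');

-- Restore an accidentally dropped table
UNDROP TABLE orders;
```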
Domain
Data Protection and Data Sharing
Question 19 (Correct)
A stored procedure can run with both the caller’s rights and the owner’s rights at the same time. (True / False)
TRUE
Your answer is correct
FALSE
Overall explanation
A stored procedure runs with either the caller’s rights or the owner’s rights; it cannot run with both at the same time.

A caller’s rights stored procedure runs with the privileges of the caller. The primary advantage of a caller’s rights stored procedure is that it can access information about that caller or about the caller’s current session. For example, a caller’s rights stored procedure can read the caller’s session variables and use them in a query.

An owner’s rights stored procedure runs mostly with the privileges of the stored procedure’s owner. The primary advantage of an owner’s rights stored procedure is that the owner can delegate specific administrative tasks, such as cleaning up old data, to another role without granting that role more general privileges, such as privileges to delete all data from a specific table.

At the time that the stored procedure is created, the creator specifies whether the procedure runs with owner’s rights or caller’s rights. The default is owner’s rights.
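The choice is expressed with the EXECUTE AS clause at creation time. A minimal Snowflake Scripting sketch (procedure and table names are hypothetical):

```sql
CREATE OR REPLACE PROCEDURE cleanup_old_rows()
  RETURNS STRING
  LANGUAGE SQL
  EXECUTE AS OWNER        -- or EXECUTE AS CALLER; OWNER is the default
AS
$$
BEGIN
  DELETE FROM audit_log WHERE logged_at < DATEADD(day, -30, CURRENT_TIMESTAMP());
  RETURN 'done';
END;
$$;
```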
Domain
Snowflake Data Platform Features and Architecture
Question 20 (Correct)
Dynamic Data Masking is supported by (Select all that apply)
Your selection is correct
VPS
Standard Edition
Your selection is correct
Enterprise Edition
Your selection is correct
Business Critical
Overall explanation
Dynamic Data Masking features require Enterprise Edition (or
higher).
Domain
Account Access & Security
Question 21 (Correct)
Which is not the DML (Data Manipulation Language) command?
DELETE
UPDATE
TRUNCATE
INSERT
MERGE
Your answer is correct
UNDROP
Overall explanation
DML commands are used for managing data within database objects. In
Snowflake, typical DML commands include INSERT (to insert data into a
table), MERGE (to merge rows into a table), UPDATE (to update existing data
within a table), DELETE (to delete records from a table), and TRUNCATE (to
delete all records from a table but not the table itself).

UNDROP, on the other hand, is not a DML command. It is used to restore dropped objects, such as tables, which is more aligned with data recovery operations rather than manipulation of the data within the tables.

Domain
Snowflake Data Platform Features and Architecture
Question 22 (Correct)
Only the user who generated the scoped URL can use the URL to access the
referenced file. (True/False)
FALSE
Your answer is correct
TRUE
Overall explanation
True, only the user who generated the scoped URL can use the URL to access the referenced file. In the case of a File URL, any role that has sufficient privileges on the stage can access the file.
Domain
Data Transformation
Question 23 (Correct)
You have a table with a 30-day retention period. If you decrease the
retention period to 20 days, how would it affect the data that would have
been removed after 30 days?
Your answer is correct
The data will now retain for a shorter period of 20 days
The data will still retain for 30-day before moving to Fail-safe
Overall explanation
Decreasing Retention reduces the amount of time data is retained in Time
Travel:
 For active data modified after the retention period is reduced, the new
shorter period applies.
 For data that is currently in Time Travel:
 If the data is still within the new shorter period, it remains in
Time Travel.
 If the data is outside the new period, it moves into Fail-safe.

For example, if you have a table with a 30-day retention period and you
decrease the period to 20-day, data from days 21 to 30 will be moved into
Fail-safe, leaving only the data from day 1 to 20 accessible through Time
Travel. However, the process of moving the data from Time Travel into Fail-
safe is performed by a background process, so the change is not
immediately visible. Snowflake guarantees that the data will be moved, but
does not specify when the process will complete; until the background
process completes, the data is still accessible through Time Travel.
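The retention change itself is a one-line ALTER (table name hypothetical):

```sql
ALTER TABLE my_table SET DATA_RETENTION_TIME_IN_DAYS = 20;

-- Verify the effective setting
SHOW PARAMETERS LIKE 'DATA_RETENTION_TIME_IN_DAYS' IN TABLE my_table;
```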

Domain
Data Protection and Data Sharing
Question 24 (Incorrect)
Which of these functions helps generate the FILE URL to access the
unstructured data file?
GET_RELATIVE_PATH
Explanation

This is incorrect. The GET_RELATIVE_PATH function does not help generate the FILE URL to access unstructured data files. It is used to retrieve the relative path of a file within a stage in Snowflake and is not related to generating the URL for accessing unstructured data files.
GET_PRESIGNED_URL

Explanation

This is incorrect. The GET_PRESIGNED_URL function is not the function that helps generate the FILE URL to access unstructured data files. It is typically used for generating pre-signed URLs for secure access to objects in cloud storage services and is not directly related to generating the URL for accessing unstructured data files in Snowflake.
GET_STAGE_LOCATION

Explanation

This is incorrect. The GET_STAGE_LOCATION function is not intended for generating the FILE URL to access unstructured data files. It is used to retrieve the location of a stage within Snowflake and does not directly assist in accessing unstructured data files.
Your answer is incorrect
BUILD_SCOPED_FILE_URL

Explanation

This is incorrect. The BUILD_SCOPED_FILE_URL function is not specifically designed to generate the FILE URL to access unstructured data files. It is used for a different purpose within Snowflake and does not directly relate to accessing unstructured data files.
GET_ABSOLUTE_PATH

Explanation

This is incorrect. The GET_ABSOLUTE_PATH function is not the function used to generate the FILE URL to access unstructured data files. This function is used to retrieve the absolute path of a file within a stage in Snowflake and does not directly contribute to generating the URL for accessing unstructured data files.
Correct answer
BUILD_STAGE_FILE_URL

Explanation
This is correct. BUILD_STAGE_FILE_URL function is the correct choice as it
is specifically designed to help generate the FILE URL to access unstructured
data files within Snowflake. It constructs the URL needed to access the file in
the stage, making it the appropriate choice for this scenario.
Overall explanation
The BUILD_STAGE_FILE_URL function is specifically designed to generate a
permanent file URL for accessing unstructured data stored in a stage. It
directly uses the stage name and the file's relative path, ensuring a stable
link for data access. Other options focus on generating temporary access
points or extracting paths, not creating a durable file URL.
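Usage sketch, assuming a hypothetical stage and file:

```sql
-- Permanent file URL for a staged unstructured file
SELECT BUILD_STAGE_FILE_URL(@my_stage, '/docs/contract.pdf');
```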
Domain
Data Transformation
Question 25 (Incorrect)
Snowflake blocks certain IPs by default to ensure that customers get the highest level of network security. (TRUE / FALSE)
Your answer is incorrect
TRUE
Correct answer
FALSE
Overall explanation
By default, Snowflake allows users to connect to the service from
any computer or device IP address. A security administrator (or higher)
can create a network policy to allow or deny access to a single IP address or
a list of addresses.
Domain
Account Access & Security
Question 26 (Correct)
At what frequency does Snowflake rotate the object keys?
16 Days
60 Days
1 Year
Your answer is correct
30 Days
Overall explanation
All Snowflake-managed keys are automatically rotated by Snowflake
when they are more than 30 days old. Active keys are retired, and new
keys are created. When Snowflake determines the retired key is no longer
needed, the key is automatically destroyed. When active, a key is used to
encrypt data and is available for usage by the customer. When retired, the
key is used solely to decrypt data and is only available for accessing the
data.
Domain
Account Access & Security
Question 27 (Incorrect)
What is the expiration period of a File URL?
The URL expires when the persisted query result period ends
Your answer is incorrect
Length of time specified in the expiration_time argument
Correct answer
It is Permanent
Overall explanation
The expiration period of a Scoped URL: the URL expires when the persisted query result period ends.

The expiration period of a File URL: it is permanent.

The expiration period of a Pre-Signed URL: the length of time specified in the expiration_time argument.
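The three URL types side by side, with a hypothetical stage and file:

```sql
-- Scoped URL: expires when the persisted query result period ends
SELECT BUILD_SCOPED_FILE_URL(@my_stage, 'report.pdf');

-- File URL: permanent
SELECT BUILD_STAGE_FILE_URL(@my_stage, 'report.pdf');

-- Pre-signed URL: expires after expiration_time seconds (3600 here)
SELECT GET_PRESIGNED_URL(@my_stage, 'report.pdf', 3600);
```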

Domain
Data Transformation
Question 28 (Incorrect)
Select the type of function that can operate on a subset of rows within the
set of input rows.
System Function

Explanation

This is incorrect. System functions provide system-related operations (e.g., metadata access) and do not inherently operate on subsets within the data set.
Correct answer
Window Function

Explanation

This is correct. Window functions, like ROW_NUMBER , RANK , and SUM OVER , are
specifically designed to perform calculations across a subset (window) of
rows related to the current row. They allow calculations over partitions of
data, making them suitable for subset operations.
Scalar Function

Explanation

This is incorrect. Scalar functions return a single value for each row of
input. They do not operate on a subset of rows but rather on individual rows.
Your answer is incorrect
Aggregate Function

Explanation
This is incorrect. Aggregate functions like SUM or AVG operate over a set of
rows to produce a single summary value but do not operate on subsets
within the input rows. They are not specifically designed to work on
partitions or windows of data.
User-Defined Function

Explanation

This is incorrect. UDFs are custom functions created by users that can
operate at a row level or aggregate level, but they are not specifically
designed for window-based calculations.
Table Function

Explanation

This is incorrect. Table functions in Snowflake generate a table (a set of
rows) from their input parameters, but they do not operate on subsets within
a given input set.
Overall explanation
Window functions in Snowflake allow calculations over a specified subset
of rows, called a "window," within the dataset, unlike other function types
that work on single rows or entire sets of rows without partitioning. This
makes them ideal for analytics like running totals, moving averages, and
ranking.
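As an illustrative sketch (the table and column names are hypothetical), a window function computes a value over a partition of rows while still returning one row per input row:

```sql
-- Rank salaries within each department: the window is the department partition
SELECT department,
       employee_name,
       salary,
       RANK() OVER (PARTITION BY department ORDER BY salary DESC) AS dept_rank
FROM employees;
```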
Domain
Data Transformation
Question 29Incorrect
If an account has federated authentication enabled, can Snowflake admins
still maintain user IDs and passwords in Snowflake?
Your answer is incorrect
No
Correct answer
Yes
Overall explanation
With federated authentication enabled on an account, Snowflake still allows
maintaining and using Snowflake user credentials (login name and
password). In other words:
 Account and security administrators can still create users with
passwords maintained in Snowflake.
 Users can still log into Snowflake using their Snowflake credentials.

However, if federated authentication is enabled for an account,
Snowflake does not recommend maintaining user passwords in
Snowflake. Instead, user passwords should be maintained solely in
your IdP.

Domain
Account Access & Security
Question 30Correct
File URL is ideal for
None of these
business intelligence applications or reporting tools that need to
display the unstructured file contents
use in custom applications, providing unstructured data to other
accounts via a share
Your answer is correct
custom applications that require access to unstructured data files
Overall explanation
File URL: URL that identifies the database, schema, stage, and file path to a
set of files. A role that has sufficient privileges on the stage can access the
files. Ideal for custom applications that require access to
unstructured data files.

Scoped URL: Encoded URL that permits temporary access to a staged file
without granting privileges to the stage. The URL expires when the persisted
query result period ends (i.e., the results cache expires), which is currently
24 hours. Ideal for use in custom applications, providing
unstructured data to other accounts via a share, or for downloading
and ad hoc analysis of unstructured data via Snowsight.

Pre-signed URL: Simple HTTPS URL used to access a file via a web browser.
A file is temporarily accessible to users via this URL using a pre-signed
access token. The expiration time for the access token is configurable. Ideal
for business intelligence applications or reporting tools that need to
display unstructured file contents.
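The three URL types above map to three built-in functions; a minimal sketch (the stage and file names are hypothetical):

```sql
-- File URL: permanent, access governed by stage privileges
SELECT BUILD_STAGE_FILE_URL(@images_stage, '/us/half_dome.jpg');

-- Scoped URL: expires with the persisted query result period (currently 24 hours)
SELECT BUILD_SCOPED_FILE_URL(@images_stage, '/us/half_dome.jpg');

-- Pre-signed URL: expires after the number of seconds given as the third argument
SELECT GET_PRESIGNED_URL(@images_stage, '/us/half_dome.jpg', 3600);
```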

Domain
Data Transformation
Question 31Incorrect
Select the correct statements for Table Clustering. (Select 3)
Your selection is incorrect
Automatic Clustering doesn’t consume credit.

Explanation
This is incorrect – Automatic clustering does consume credits since it
involves compute resources provided by Snowflake to manage the clustering
process.
Snowflake doesn’t charge for Reclustering.

Explanation

This is incorrect – Snowflake does charge for reclustering. Automatic
clustering utilizes Snowflake’s compute resources, and these operations
incur costs based on the volume of data being reclustered.
Correct selection
Clustering keys are not for every table.

Explanation

This is correct – Clustering is generally recommended for large tables,
especially those in the multi-terabyte range where clustering can
significantly improve query performance. Smaller tables typically do not
benefit as much from clustering.
Your selection is correct
Tables in the multi-terabyte range are good candidates for
clustering keys.

Explanation

This is correct – Tables that are multi-terabyte in size are ideal
candidates for clustering keys because they improve query performance by
reducing data scanning through efficient micro-partitioning.
Your selection is correct
Snowflake recommends a maximum of three or four columns (or
expressions) per key.

Explanation

This is correct – Snowflake advises limiting clustering keys to three or
four columns to maintain optimal performance. More columns can reduce
the effectiveness of clustering.
Automatic clustering cannot be suspended or resumed.

Explanation

This is incorrect – Snowflake allows users to suspend and resume
automatic clustering for tables. This feature can be controlled via SQL
commands.
Overall explanation
Clustering keys are not for every table. Tables in the multi-terabyte
range are good candidates for clustering keys. Both automatic
clustering and reclustering consume credit. A single clustering key can
contain one or more columns or expressions. Snowflake recommends a
maximum of three or four columns (or expressions) per key for most
tables. Adding more than 3-4 columns tends to increase costs more than
benefits.
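A minimal sketch of defining a clustering key and pausing Automatic Clustering (the table and column names are hypothetical):

```sql
-- Recommended: at most three or four columns/expressions per key
ALTER TABLE sales CLUSTER BY (region, sale_date);

-- Automatic Clustering can be suspended and resumed per table
ALTER TABLE sales SUSPEND RECLUSTER;
ALTER TABLE sales RESUME RECLUSTER;
```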
Domain
Snowflake Data Platform Features and Architecture
Question 32Incorrect
Which of these are Snowgrid's capabilities? (Select all that apply)
ETL dependent

Explanation

Snowgrid is not ETL dependent. It is a data sharing platform that focuses on
enabling secure and efficient data sharing, rather than being tied to specific
ETL processes.
Correct selection
Share internally with private data exchange or externally with
public data exchange

Explanation

Sharing internally with private data exchange or externally with public data
exchange is a capability of Snowgrid. It enables secure data sharing within
an organization or with external partners through controlled access.
Correct selection
Live, ready to query data

Explanation

Snowgrid provides live, ready-to-query data capabilities. This means that
users can access and query data in real-time without the need for complex
data processing or transformations, making data analysis more efficient.
Your selection is correct
Secure, governed data sharing

Explanation

Secure, governed data sharing is a key capability of Snowgrid. It ensures that
data sharing is done in a secure and compliant manner, with governance
controls in place to protect sensitive information.
Zero-copy cloning

Explanation
Zero-copy cloning is not a capability of Snowgrid. It is a feature that allows
for fast and efficient cloning of databases in Snowflake, but it is not directly
related to Snowgrid's capabilities.
Overall explanation
Snowgrid allows you to use Secure Data Sharing features to provide
access to live data, without any ETL or movement of files across
environments.
Domain
Snowflake Data Platform Features and Architecture
Question 33Incorrect
Which stream type is supported for streams on the external table only?
Standard

Explanation

The stream type "Standard" is a common type of stream in Snowflake that
captures all changes made to a table, whether internal or external. It is not
limited to external tables only, so it is not the correct choice for the stream
type supported for streams on external tables.
External

Explanation

The stream type "External" is not specifically related to streams on external
tables. It is a general term that can refer to external data sources or external
storage locations in Snowflake, but it does not define the stream type
supported for streams on external tables only.
Update-only

Explanation

The stream type "Update-only" is not specific to external tables and does not
define the stream type supported for streams on external tables only. This
type of stream captures only the updates made to a table, regardless of
whether it is an internal or external table.
Correct answer
Insert-only

Explanation

The stream type "Insert-only" is the correct choice as it is the stream type
supported for streams on external tables only. This type of stream captures
only the new rows inserted into an external table, making it the appropriate
choice for external table streams.
Your answer is incorrect
Append-only
Explanation

The stream type "Append-only" is not exclusive to external tables and does
not specify the stream type supported for streams on external tables only.
This type of stream captures only the new rows added to a table, regardless
of its internal or external nature.
Overall explanation
Insert-only is supported for streams on external tables only. An insert-only
stream tracks row inserts only; they do not record delete operations that
remove rows from an inserted set (i.e. no-ops).
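A minimal sketch of creating such a stream (the object names are hypothetical):

```sql
-- Streams on external tables must be insert-only
CREATE STREAM ext_stream
  ON EXTERNAL TABLE my_ext_table
  INSERT_ONLY = TRUE;
```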
Domain
Snowflake Data Platform Features and Architecture
Question 34Correct
Snowflake is available in four editions. Which are those? (Select 4)
Your selection is correct
Business Critical

Explanation

The Business Critical edition of Snowflake is tailored for organizations that
require high availability, performance, and security for their critical business
operations. It provides advanced features such as enhanced support and
service level agreements to meet demanding business needs.
Your selection is correct
Virtual Private Snowflake (VPS)

Explanation

The Virtual Private Snowflake (VPS) edition is a specialized edition that offers
dedicated resources and infrastructure for organizations that require a
higher level of isolation and control over their Snowflake environment. It
provides enhanced security and customization options for specific business
requirements.
Your selection is correct
Enterprise

Explanation

The Enterprise edition of Snowflake is designed for larger organizations with
more advanced data processing requirements. It offers additional features
and capabilities compared to the Standard edition to support complex data
analytics and warehousing tasks.
Your selection is correct
Standard

Explanation
The Standard edition of Snowflake is one of the available editions that
provide basic functionalities for data warehousing and analytics. It is suitable
for small to medium-sized businesses with standard data processing needs.
Professional

Explanation

The Professional edition is not one of the available editions of Snowflake.
This choice is incorrect as it does not align with the actual editions offered by
Snowflake for data warehousing and analytics.
Professional Plus

Explanation

The Professional Plus edition is also not one of the available editions of
Snowflake. This choice is incorrect as it is not a valid edition provided by
Snowflake for data processing and analytics.
Overall explanation
Snowflake offers four editions designed to meet varying business needs:
1. Standard Edition: This is the base edition providing essential
features like SQL data warehousing, secure data sharing, and 1-day
time travel.
2. Enterprise Edition: This edition builds on Standard and adds features
such as multi-cluster warehouses and up to 90-day time travel for
enhanced scalability.
3. Business Critical Edition: This edition provides additional security
measures, such as HIPAA, PCI compliance, and encryption everywhere,
catering to highly regulated industries.
4. Virtual Private Snowflake (VPS): Designed for organizations with
strict security requirements, offering isolated, private Snowflake
instances with dedicated resources.

These editions cater to different security, scalability, and regulatory needs.

Domain
Snowflake Data Platform Features and Architecture
Question 35Correct
Reader accounts enable providers to share data with consumers who are
not already Snowflake customers, without requiring the consumers to
become Snowflake customers themselves. Which role can create a Reader
account?
SECURITYADMIN
Your answer is correct
ACCOUNTADMIN
USERADMIN
SYSADMIN
Overall explanation
ACCOUNTADMIN role (or a role granted the CREATE ACCOUNT global
privilege) only can create the Reader account.
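A minimal sketch, with hypothetical account and credential values:

```sql
USE ROLE ACCOUNTADMIN;

-- A reader account is a managed account of TYPE = READER
CREATE MANAGED ACCOUNT reader_acct1
  ADMIN_NAME = reader_admin,
  ADMIN_PASSWORD = 'Sdfed43da!44',
  TYPE = READER;
```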
Domain
Account Access & Security
Question 36Correct
If you recreate a pipe using the CREATE OR REPLACE PIPE command, what
happens to the load history when the Snowpipe gets recreated?
The pipe cannot be recreated

Explanation

The pipe can indeed be recreated using the CREATE OR REPLACE PIPE
command in Snowflake. This command allows for the replacement of an
existing pipe with a new definition, but it does not prevent the recreation
process or affect the load history.
The recreated Pipe still has tracks of the files loaded by the old Pipe

Explanation

The recreated pipe does not retain any tracks of the files loaded by the old
pipe. When a pipe is recreated, it starts fresh without any previous load
history or file tracking.
Snowflake still keeps load history

Explanation

Snowflake does not retain the load history when a pipe is recreated using the
CREATE OR REPLACE PIPE command. The load history is reset during the
recreation process.
Your answer is correct
The load history gets reset to empty

Explanation

When a pipe is recreated using the CREATE OR REPLACE PIPE command in
Snowflake, the load history associated with the pipe is reset to empty. This
means that any previous load information is cleared, and the pipe starts with
a clean slate for tracking new file loads.
Overall explanation
In Snowflake, when you recreate a pipe using the CREATE OR REPLACE
PIPE command, the existing load history associated with that pipe is deleted.
This means the new pipe starts with an empty load history. As a result,
previously loaded files may be reprocessed if they are still present in the
stage and not managed appropriately. To prevent duplicate data loading, it's
important to handle staged files carefully after recreating a pipe.
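For illustration (the pipe, stage, and table names are hypothetical):

```sql
-- Replacing the pipe drops its load history, so files still present in @my_stage
-- may be loaded again unless they are removed or excluded
CREATE OR REPLACE PIPE my_pipe
  AUTO_INGEST = TRUE
  AS
  COPY INTO my_table
  FROM @my_stage
  FILE_FORMAT = (TYPE = 'CSV');
```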
Domain
Data Loading and Unloading
Question 37Correct
Suppose you have an auto-scaling mode setup with an Economy policy. In
what situation does Snowflake spin up an additional cluster?
The first cluster starts immediately when either a query is queued
or the system detects that there’s one more query than the
currently-running clusters can execute.

Explanation

This is incorrect. This description aligns with the Standard scaling policy,
not the Economy policy. The Standard policy is more aggressive in adding
clusters to minimize queuing, while the Economy policy is more
conservative.
Your answer is correct
Only if the system estimates there’s enough query load to keep the
cluster busy for at least 6 minutes.

Explanation

This is correct. The Economy policy in Snowflake's auto-scaling mode aims
to conserve credits. It only adds a new cluster if the system predicts that the
workload justifies keeping the additional cluster busy. This often implies
ensuring that the load can be sustained for several minutes before initiating
a new cluster.
Overall explanation
In Snowflake's auto-scaling mode, the Economy policy is designed to
prioritize cost-efficiency by minimizing the number of clusters. It waits for
substantial load before starting a new cluster, contrasting with the
Standard policy that responds faster to queries. This behavior ensures lower
credit consumption but may lead to increased query queuing.
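A sketch of a multi-cluster warehouse configured with the Economy policy (the warehouse name and sizes are hypothetical):

```sql
CREATE WAREHOUSE analytics_wh
  WAREHOUSE_SIZE = 'MEDIUM'
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 4
  SCALING_POLICY = 'ECONOMY';  -- conserves credits; 'STANDARD' scales out sooner
```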
Domain
Performance Concepts
Question 38Correct
Monica ran a SELECT query on a large table t1. The query took longer than
expected. She examined the query profile and noticed that 'Bytes spilled to
local storage' and 'Bytes spilled to remote storage' were significantly high.
What advice would you give her to improve the query performance? (Select
3)
Your selection is correct
Processing data in smaller batches.

Explanation
This is correct. Smaller batches consume less memory, making it less likely
for data to overflow to local or remote storage. This strategy ensures more
efficient memory use, leading to faster query execution.
Processing data in larger batches.

Explanation

This is incorrect. Larger batches require more memory, increasing the risk
of data spillage to local or remote storage, which can slow down query
performance even further.
Your selection is correct
Splitting the processing into multiple steps.

Explanation

This is correct. Decomposing complex queries into smaller, manageable
operations can reduce memory requirements per step, minimizing the
likelihood of data spilling to storage.
Your selection is correct
Using a larger warehouse (effectively increasing the available
memory and local disk space for the operation)

Explanation

This is correct. Increasing the warehouse size provides more memory,
reducing the need for data to spill to disk storage. This can alleviate memory
constraints, thereby improving the performance of large queries.
Increasing the number of concurrent queries running in the
warehouse.

Explanation

This is incorrect. Increasing the number of concurrent queries can place
more load on the warehouse, potentially exacerbating memory limitations.
This approach doesn’t address the problem of data spilling to storage.
Overall explanation
When a query experiences high data spillage to local or remote storage, it
indicates insufficient memory allocation for processing. Using a larger
warehouse increases available resources, processing data in smaller
batches minimizes memory usage, and splitting operations into
steps helps manage memory efficiently. These strategies directly target
memory optimization, avoiding performance degradation caused by disk-
based operations. Increasing concurrent queries or processing in larger
batches can worsen memory issues, leading to slower query performance.
Domain
Performance Concepts
Question 39Correct
Permissions on database objects such as databases or tables are granted to:
Users
Your answer is correct
Roles
Virtual Warehouses
Schemas
Overall explanation
Snowflake supports Role-Based Access control. Permissions on
database objects such as databases or tables are granted to Roles.
Domain
Account Access & Security
Question 40Correct
Snowflake Query history page allows you to view the details of all the queries
executed in the last 31 days. (True/False)
TRUE

Explanation

This is incorrect. The query history page in Snowflake does not allow
viewing details for the past 31 days. Instead, the standard interface, like the
Classic Console and Snowsight, displays query history for the last 14 days.
Your answer is correct
FALSE

Explanation

This is correct. The default time frame for query history visibility in
Snowflake’s UI is 14 days, not 31 days. For longer retention, querying
the QUERY_HISTORY view can allow access to historical data, but this requires
using specific database views or queries that cover up to a year.
Overall explanation
In Snowflake's UI, the Query History page provides a list of queries
executed within the last 14 days, not 31 days. For more extended query
history, Snowflake’s QUERY_HISTORY view in the ACCOUNT_USAGE schema allows
access to up to 365 days of history, but this requires database-level querying
rather than using the standard UI interface.
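For history beyond the UI window, a query along these lines can be used (the 90-day filter is illustrative):

```sql
-- ACCOUNT_USAGE retains up to 365 days of history, with some ingestion latency
SELECT query_id, query_text, start_time, total_elapsed_time
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE start_time > DATEADD(day, -90, CURRENT_TIMESTAMP())
ORDER BY start_time DESC;
```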
Domain
Performance Concepts
Question 41Correct
For which object the Kafka connector does create a topic?
One internal stage to temporarily store data files for each topic
One pipe to ingest the data files for each topic partition
Your answer is correct
All of these
One table for each topic. If the table specified for each topic does
not exist
Overall explanation
The connector creates the following objects for each topic:
 One internal stage to temporarily store data files for each topic.
 One pipe to ingest the data files for each topic partition.
 One table for each topic. If the table specified for each topic does not
exist, the connector creates it; otherwise, the connector creates the
RECORD_CONTENT and RECORD_METADATA columns in the existing
table and verifies that the other columns are nullable (and produces an
error if they are not).
Domain
Snowflake Data Platform Features and Architecture
Question 42Correct
John is trying to load JSON data sets with a huge array containing multiple
records. Considering the 16 MB size limit imposed by the VARIANT data type,
what would you recommend to John for loading the data optimally?
No need to remove the outer array structure, as Snowflake
Intelligent Engine will take care of that.

Explanation

This is incorrect – Snowflake does not automatically handle large JSON
arrays; manual configuration using STRIP_OUTER_ARRAY is necessary to
optimize the loading process.
Separate the documents with line break of commas.

Explanation

This is incorrect – Simply separating documents with line breaks or
commas won't help with Snowflake's JSON processing constraints related to
large arrays.
Enable VARIANT_OUTER_ARRAY file format for the COPY INTO
<table> command.

Explanation

This is incorrect – There is no VARIANT_OUTER_ARRAY option in Snowflake.
The correct file format option is STRIP_OUTER_ARRAY .
Your answer is correct
Enable the STRIP_OUTER_ARRAY file format option for the COPY INTO
<table> command.

Explanation
This is correct – Enabling the STRIP_OUTER_ARRAY option removes the top-
level array from the JSON file. This makes it possible to load each element
within the array as individual rows in a Snowflake table, helping to stay
within the 16 MB size limitation of the VARIANT data type.
Overall explanation
When dealing with large JSON data sets containing arrays, enabling
the STRIP_OUTER_ARRAY option in the JSON file format for the COPY
INTO command ensures that Snowflake removes the top-level array. This
allows the system to treat each nested JSON element as an individual row,
bypassing the 16 MB limit of the VARIANT data type. Here’s a code example
demonstrating this configuration:
CREATE FILE FORMAT my_json_format
  TYPE = 'JSON'
  STRIP_OUTER_ARRAY = TRUE;

COPY INTO my_table
FROM @my_stage/file.json
FILE_FORMAT = (FORMAT_NAME = my_json_format);

This configuration helps load large JSON files effectively by breaking down
the array structure, enhancing both performance and storage handling within
Snowflake's limitations. Sources: Snowflake Documentation on JSON File
Formats, InterWorks Guide on Semi-Structured Data
Domain
Data Loading and Unloading
Question 43Correct
How can we turn off the query result cache?
Setting the parameter USE_QUERY_CACHED to FALSE

Explanation

This is incorrect. The parameter USE_QUERY_CACHED does not exist. This is
not an official configuration setting in Snowflake.
Query result cache cannot be turned off.

Explanation

This is incorrect. The query result cache can indeed be controlled by
setting the session-level parameter USE_CACHED_RESULT to FALSE , so stating
that it cannot be turned off is inaccurate.
Setting the parameter USE_CACHED_INFO to FALSE

Explanation
This is incorrect. There is no parameter called USE_CACHED_INFO in
Snowflake’s configuration settings. This term does not exist in official
Snowflake documentation.
Your answer is correct
Setting the parameter USE_CACHED_RESULT to FALSE

Explanation

This is correct. Setting USE_CACHED_RESULT to FALSE disables the query
result cache for the current session. This means Snowflake will not use
cached results and will re-execute queries even if identical queries were
recently run, ensuring that results are freshly computed.

Overall explanation
To disable the query result cache in Snowflake, the correct parameter
is USE_CACHED_RESULT . By setting this parameter to FALSE at the session level
(using ALTER SESSION SET USE_CACHED_RESULT=FALSE ), Snowflake will not use
cached results for queries, ensuring that each query execution retrieves
fresh data. This can be useful when testing performance or validating query
results without relying on cache optimizations. Parameters
like USE_CACHED_INFO or USE_QUERY_CACHED are not valid, and the result cache
can indeed be controlled.
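For example:

```sql
-- Disable the result cache for the current session (useful when benchmarking)
ALTER SESSION SET USE_CACHED_RESULT = FALSE;

-- Re-enable it afterwards
ALTER SESSION SET USE_CACHED_RESULT = TRUE;
```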
Domain
Performance Concepts
Question 44Correct
If we make any changes to the original table, then
The cloned table data get refreshed with the entire new data of the
source table
Your answer is correct
The changes do not reflect in the cloned table
The changes get immediately reflected in the cloned table
Overall explanation
Zero-copy cloning allows us to make a snapshot of any table, schema,
or database without actually copying data. A clone is writable and is
independent of its source (i.e., changes made to the source or clone
are not reflected in the other object). A new clone of a table points to
the original table's micro partitions, using no data storage. If we make any
changes in the cloned table, then only its changed micro partitions are
written to storage.
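A minimal sketch (the table names are hypothetical):

```sql
-- Instant, storage-free snapshot of the source table
CREATE TABLE orders_clone CLONE orders;

-- Subsequent changes to the source are not reflected in the clone
UPDATE orders SET status = 'SHIPPED' WHERE order_id = 42;
```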
Domain
Data Protection and Data Sharing
Question 45Incorrect
Which of these are not supported by the Search Optimization Service?
(Select all that apply)
Your selection is correct
External Tables
Correct selection
Analytical Expressions
Correct selection
Columns defined with COLLATE clause
Your selection is correct
Column Concatenation
Correct selection
Casts on table columns
Your selection is correct
Materialized Views
Overall explanation
None of these are currently supported by the Search Optimization
Service. Additionally, tables and views protected by row access policies
cannot be used with the Search Optimization Service.
Domain
Snowflake Data Platform Features and Architecture
Question 46Correct
What is the purpose of VALIDATION_MODE in the COPY INTO <table> command?
Your answer is correct
VALIDATION_MODE is used to validate the load file for errors instead of
loading it into the specified table.

Explanation

This choice correctly explains the purpose of VALIDATION_MODE in the COPY
INTO command. It is used to validate the load file for errors before loading it
into the specified table, allowing for error checking before the data is
inserted.
VALIDATION_MODE is used to validate the load file and load it into the
specified table if there is no error.

Explanation

This choice incorrectly states that VALIDATION_MODE is used to validate the
load file and load it into the specified table if there is no error. In reality,
VALIDATION_MODE is used to validate the load file for errors without actually
loading it into the table.
VALIDATION_MODE is used to validate the load file, skip the errored
data, and then load it into the specified table.

Explanation

This choice incorrectly states that VALIDATION_MODE is used to validate the
load file, skip the errored data, and then load it into the specified table. In
fact, VALIDATION_MODE is used to validate the load file for errors without
loading it into the table, not to skip and load errored data.
Overall explanation
The VALIDATION_MODE parameter in Snowflake's COPY INTO <table> command
allows you to check data files for errors before loading them into a table. By
using this parameter, you can identify issues such as data type mismatches
or formatting errors without affecting the target table. This preemptive
validation helps ensure data quality and integrity during the loading process.
COPY INTO my_table
FROM @my_stage
VALIDATION_MODE = 'RETURN_ERRORS';

In this example, the COPY INTO command checks the data files
in @my_stage for errors and returns any issues found, without loading the
data into my_table .
Domain
Data Loading and Unloading
Question 47Incorrect
What are the supported file formats for data unloading in Snowflake?
Your selection is incorrect
Avro

Explanation

Avro is not supported for data unloading in Snowflake.


Correct selection
JSON

Explanation

This is correct – JSON is a supported format for unloading semi-structured
data, typically using Newline Delimited JSON (NDJSON) format.
Your selection is incorrect
ORC

Explanation

ORC is not a supported file format for data unloading in Snowflake.
Snowflake supports delimited formats (such as CSV), JSON, and Parquet for
data unloading.
Correct selection
Parquet

Explanation
This is correct – Parquet is a supported format for data unloading in
Snowflake, favored for analytical workloads because of its columnar
structure.
XML

Explanation

XML is not a supported file format for data unloading in Snowflake.
Snowflake supports delimited formats (such as CSV), JSON, and Parquet for
data unloading.
Overall explanation
Snowflake allows the unloading of data into formats that support both
structured and semi-structured data needs, specifically JSON and Parquet.
These formats are efficient for handling complex data structures and large
datasets. For example, you can unload data to a JSON file using:

COPY INTO @mystage FROM my_table FILE_FORMAT = (TYPE = JSON);

Snowflake's support for JSON and Parquet enables better data integration
with downstream analytical tools and data lakes. Other formats like Avro and
ORC are primarily for data loading, not unloading.

Domain
Data Loading and Unloading
Question 48Correct
Which command will list the pipes for which you have access privileges?
SHOW PIPES();

Explanation

The command SHOW PIPES(); is not a valid Snowflake SQL command. It will
not show the pipes for which you have access privileges.
DESCRIBE PIPES;

Explanation

The command DESCRIBE PIPES; is used to display the structure of a specific
pipe, not to list all pipes for which you have access privileges.
LIST PIPES;

Explanation

The command LIST PIPES; is not a valid Snowflake SQL command. It will not
list the pipes for which you have access privileges.
Your answer is correct
SHOW PIPES;

Explanation

The command SHOW PIPES; is the correct command to use in Snowflake to
list the pipes for which you have access privileges. It will display the pipes
that you can work with.
LIST PIPES();

Explanation

The command LIST PIPES(); is not a valid Snowflake SQL command. It will not
list the pipes for which you have access privileges.
Overall explanation
SHOW PIPES Command lists the pipes for which you have access privileges.
This command can list the pipes for a specified database or schema (or the
current database/schema for the session), or your entire account.
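For example (the database name and pattern are hypothetical):

```sql
SHOW PIPES;                                    -- current session database/schema
SHOW PIPES IN DATABASE mydb;                   -- all pipes in a database
SHOW PIPES LIKE 'load%' IN SCHEMA mydb.public; -- filter by name pattern
```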
Domain
Snowflake Data Platform Features and Architecture
Question 49Incorrect
If a user is logged in to Snowflake in a federated environment and the IdP
session times out, what happens to the user's Snowflake session?
Correct answer
It does not affect the user's Snowflake sessions. However, to initiate
any new Snowflake sessions, the user must log into the IdP again.
Your answer is incorrect
The Snowflake web interface is disabled, and the prompt for IdP
authentication is displayed.
Overall explanation
After a specified period of time (defined by the IdP), a user’s session in the
IdP automatically times out, but this does not affect their Snowflake sessions.
Any Snowflake sessions that are active at the time remain open and do not
require re-authentication. However, to initiate any new Snowflake sessions,
the user must log into the IdP again.
Domain
Account Access & Security
Question 50Correct
Snowflake automatically and transparently maintains materialized views.
(True/False)
FALSE
Your answer is correct
TRUE
Overall explanation
Snowflake automatically and transparently maintains materialized
views. A background service updates the materialized view after changes to
the base table. This is more efficient and less error-prone than manually
maintaining the equivalent of a materialized view at the application level.
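A minimal sketch (the object names are hypothetical):

```sql
-- Snowflake keeps this view current in the background as sales changes
CREATE MATERIALIZED VIEW mv_daily_sales AS
SELECT sale_date, SUM(amount) AS total_amount
FROM sales
GROUP BY sale_date;
```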
Domain
Snowflake Data Platform Features and Architecture
Question 51Correct
What all options are available for data transformation while loading data into
a table using the COPY command? (Select all that apply)
Your selection is correct
Casts

Explanation

Casts allow you to perform data type conversions during data loading using
the COPY command. This feature is useful for ensuring data compatibility
between the source and destination tables, especially when the data types
do not match.
Your selection is correct
Column reordering

Explanation

Column reordering allows you to change the order of columns in the


destination table while loading data using the COPY command. This can be
useful for aligning the data with the table schema or for specific data
processing requirements.
Your selection is correct
Column omission

Explanation

Column omission enables you to exclude specific columns from being loaded
into the destination table during data loading using the COPY command. This
can be helpful when certain columns are not needed or should be excluded
from the loading process.
Join

Explanation

Join is not an option available for data transformation while loading data into
a table using the COPY command. Joins are typically used for combining data
from multiple tables, and they are not directly related to the data
transformation process during data loading.
Your selection is correct
Truncation of Text Strings

Explanation
Truncation of Text Strings option allows you to truncate text strings that
exceed the specified length during data loading using the COPY command.
This can help in handling data integrity issues and ensuring that the data fits
within the defined schema constraints.
Overall explanation
Snowflake supports transforming data while loading it into a table using the
COPY command. Options include:
 Column reordering
 Column omission
 Casts
 Truncating text strings that exceed the target column length.
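The options above can be combined in a single COPY statement using a transformation subquery. A hedged sketch (stage, table, and column names are hypothetical):

```sql
-- Reorder columns, omit staged columns, and cast during loading
COPY INTO target_table (id, amount)
FROM (
  SELECT t.$2,                     -- column reordering: staged column 2 first
         t.$1::NUMBER(10,2)        -- cast staged column 1 to NUMBER
  FROM @my_stage t                 -- remaining staged columns are omitted
)
FILE_FORMAT = (TYPE = 'CSV')
TRUNCATECOLUMNS = TRUE;            -- truncate strings exceeding column length
```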
Domain
Data Transformation
Question 52Correct
Multi-cluster warehouses are beneficial in improving the performance of
slow-running queries or data loading. (True/False)
Your answer is correct
FALSE
TRUE
Overall explanation
Multi-cluster warehouses in Snowflake are primarily designed to handle high
concurrency scenarios, where many users or queries need to be processed
simultaneously. They automatically scale by adding or reducing clusters
based on workload demands, which helps to manage the number of
concurrent users effectively. However, for improving the performance of
individual slow-running queries or data loading, increasing the size of
a standard single-cluster warehouse is usually more effective. Multi-cluster
warehouses do not inherently speed up slow queries; instead, they are better
suited for distributing workload among multiple users and queries
Domain
Performance Concepts
Question 53Correct
What value will be returned by the following query?

SELECT * FROM TABLE(FLATTEN(input => parse_json('[]'))) f;

Explanation

The query is attempting to flatten an empty JSON array using the FLATTEN
function. Since there are no elements in the array to flatten, the result will
not be a numerical value like 0.
[]
Explanation

The query uses the FLATTEN function on an empty JSON array. When
flattening an empty array, the result is not an empty array; the input row is
simply omitted from the output.
NULL

Explanation

Flattening an empty JSON array does not produce a NULL value. With the
default OUTER => FALSE, the FLATTEN function omits the input row entirely
rather than returning NULL.
Your answer is correct
Nothing will return or, the output of the input row will be omitted

Explanation

This choice is correct because when attempting to flatten an empty JSON


array, the output will be omitted or nothing will be returned. The input row
will not have any elements to flatten, resulting in no output.
Overall explanation
In Snowflake, the FLATTEN function is used to convert semi-structured data,
such as arrays or objects, into a relational format by producing a lateral view.
When FLATTEN processes an empty array ( [] ) and the OUTER parameter is not
specified (defaulting to FALSE ), it omits the input row from the output,
resulting in no rows being returned. To include rows for empty arrays, set
the OUTER parameter to TRUE .
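The two behaviors can be compared side by side:

```sql
-- Default (OUTER => FALSE): an empty array produces no output rows
SELECT * FROM TABLE(FLATTEN(input => parse_json('[]'))) f;

-- OUTER => TRUE: the input row is kept, with NULL in the VALUE column
SELECT * FROM TABLE(FLATTEN(input => parse_json('[]'), OUTER => TRUE)) f;
```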
Domain
Data Transformation
Question 54Correct
Which locations does Snowflake support for staging data? (Select all that
apply)
Your selection is correct
Snowflake Internal Stages

Explanation

This is correct – Snowflake provides internal stages (like user, table, or


named stages) for storing files directly within the Snowflake environment.
Your selection is correct
Microsoft Azure Blob Storage

Explanation

This is correct – Azure Blob Storage is also supported for staging, enabling
data operations directly from Azure containers.
Oracle Cloud Storage

Explanation

This is incorrect – Snowflake does not currently support Oracle Cloud


Storage for staging data. Supported external stages are limited to the above
cloud providers.
Your selection is correct
Amazon S3

Explanation

This is correct – Snowflake supports external staging in Amazon S3, which


allows loading data directly from S3 buckets.
Your selection is correct
Google Cloud Storage

Explanation

This is correct – Snowflake can stage data in Google Cloud Storage,


providing a seamless integration for data loading from GCS buckets.
Overall explanation
Snowflake enables data staging using internal stages (stored within
Snowflake) or external stages hosted on Amazon S3, Google Cloud
Storage, or Microsoft Azure Blob Storage. These staging areas facilitate
efficient data loading operations into Snowflake tables. Internal stages are
part of the Snowflake account, while external stages utilize cloud storage via
integrations. Here's an example of creating an external stage in S3 using
SQL:
1. CREATE STAGE my_s3_stage
2. URL = 's3://mybucket/myfolder/'
3. STORAGE_INTEGRATION = my_s3_integration;

This command sets up an external stage for staging files in an S3 bucket,


making it easier to manage and load data into Snowflake tables.

Sources: Snowflake Documentation on Staging Data, Amazon S3, Google


Cloud Storage, and Microsoft Azure Blob Storage.

Domain
Data Loading and Unloading
Question 55Incorrect
An account-level resource monitor overrides the resource monitor
assignment for individual warehouses. (True/False)
Your answer is incorrect
TRUE

Explanation

This is incorrect – An account-level resource monitor does not override the


individual resource monitor settings for warehouses. Both levels of resource
monitors coexist, and if either one reaches its threshold and has a suspend
action, the warehouse will be impacted accordingly.
Correct answer
FALSE

Explanation

This is correct – Resource monitors in Snowflake operate at both account


and warehouse levels independently. An account-level monitor doesn't
supersede a warehouse-level monitor. If thresholds are set on both, the
limits of each are respected, and either monitor can trigger suspension
actions if their criteria are met.
Overall explanation
In Snowflake, resource monitors can be configured at both the account
level and the warehouse level. Account-level monitors observe cumulative
credit consumption across all warehouses, while warehouse-level monitors
focus on specific warehouses. The settings at one level do not override the
other; instead, they work in parallel. For example, if both an account and a
warehouse monitor hit their credit limits, either could trigger a suspension
independently. Here's how you can assign a resource monitor at the account
level:

ALTER ACCOUNT SET RESOURCE_MONITOR = ACCOUNT_MONITOR;

This approach allows Snowflake to manage credits effectively without


conflicts between account-wide and specific warehouse limits.

Domain
Performance Concepts
Question 56Incorrect
Search optimization is a Database-level property applied to all the tables
within the database with supported data types. (True/False)
Correct answer
FALSE
Your answer is incorrect
TRUE
Overall explanation
Search optimization is a table-level property and applies to all columns
with supported data types. The search optimization service aims to
significantly improve the performance of selective point lookup queries on
tables. A point lookup query returns only one or a small number of distinct
rows. A user can register one or more tables to the search optimization
service.
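Registering a table with the service is done per table, not per database. A short sketch (table name is hypothetical):

```sql
-- Register one table with the search optimization service
ALTER TABLE my_db.my_schema.events ADD SEARCH OPTIMIZATION;

-- Verify: the SEARCH_OPTIMIZATION column in the output shows ON/OFF
SHOW TABLES LIKE 'events';
```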
Domain
Snowflake Data Platform Features and Architecture
Question 57Incorrect
Which objects are not available for replication in the Standard Edition of
Snowflake? (Select 3)
Database
Correct selection
Integrations
Your selection is correct
Roles
Your selection is correct
Users
Your selection is incorrect
Shares
Overall explanation
Database and share replication are available in all editions, including the
Standard edition. Replication of all other objects is only available for
Business Critical Edition (or higher).
Domain
Data Protection and Data Sharing
Question 58Correct
How long do results remain in the Query results cache?
Your answer is correct
24 hours

Explanation

This is correct. Snowflake caches query results for 24 hours by default. If


identical queries are re-executed within this time frame without any changes
to the underlying data, the cached results are used to optimize performance.
12 hours

Explanation

This is incorrect. Query results in Snowflake are not cached for 12 hours.
This duration does not align with Snowflake’s official settings for query result
caching.
1 hour

Explanation
This is incorrect. The query result cache in Snowflake is not set to expire
after just 1 hour. This is not consistent with the official caching period
provided by Snowflake.
31 hours

Explanation

This is incorrect. Snowflake does not have a default 31-hour cache period
for query results. While the 24-hour cache retention can extend if reused, 31
hours is not a specific setting.
16 hours

Explanation

This is incorrect. There is no 16-hour cache retention period for query


results in Snowflake's configuration.
Overall explanation
The query results in Snowflake are cached for 24 hours. This caching
period allows the system to reuse results of identical queries, as long as the
underlying data has not changed. This caching behavior helps improve query
efficiency by avoiding redundant processing. After 24 hours, the cached
results are purged from the system unless reused, in which case the
expiration resets up to a maximum period of 31 days if accessed
regularly.
Domain
Performance Concepts
Question 59Incorrect
Micro-partitioning is an on-demand feature of Snowflake that must be
enabled explicitly by the ACCOUNTADMIN role. (True / False)
Correct answer
FALSE

Explanation

This is correct. Micro-partitioning is a core feature of Snowflake and is


automatically applied to all tables. Administrators do not need to explicitly
enable it, as Snowflake handles the partitioning based on the size and
structure of the data.
Your answer is incorrect
TRUE

Explanation

This is incorrect – Micro-partitioning in Snowflake is automatically


enabled for all tables. It does not require manual intervention by
the ACCOUNTADMIN or any other role. Snowflake automatically divides
data into micro-partitions upon data loading.
Overall explanation
Micro-partitioning is an automatic feature in Snowflake that organizes
data into small, efficient storage units called micro-partitions. This
functionality does not require manual activation and is handled internally by
Snowflake, ensuring optimized performance without administrative
intervention.
Domain
Snowflake Data Platform Features and Architecture
Question 60Correct
Which is generally the slowest option for selecting staged data files to load
from a stage?
By path (internal stages) / prefix (Amazon S3 bucket).

Explanation

This is incorrect – Selecting files by path or prefix is generally faster than


using pattern matching because it limits the set of files to those with a
specific prefix or path. This is efficient for bulk operations.
Specifying a list of specific files to load.

Explanation

This is incorrect – Providing a list of specific files is typically the fastest


method, especially when the list is small, as it directly instructs Snowflake
which files to load without further search operations.
Your answer is correct
Using pattern matching to identify specific files by pattern.

Explanation

This is correct – Pattern matching is generally the slowest option for


selecting staged data files because it requires Snowflake to apply a regular
expression across all staged files, which can take longer, especially when
dealing with large data sets.
Overall explanation
Among the options for selecting staged data files, pattern matching is the
slowest due to the overhead of applying regular expressions to the file list. In
contrast, selecting files by path or a specific list allows for more direct access
and faster processing, making those methods more efficient for data loading
tasks.
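The three selection methods, from fastest to slowest (stage, file, and path names are hypothetical):

```sql
-- Fastest: explicit file list
COPY INTO my_table FROM @my_stage FILES = ('data1.csv', 'data2.csv');

-- Fast: by path/prefix
COPY INTO my_table FROM @my_stage/2024/06/;

-- Slowest: pattern matching (regex applied to every staged file)
COPY INTO my_table FROM @my_stage PATTERN = '.*sales.*[.]csv';
```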
Domain
Data Loading and Unloading
Question 61Correct
David ran a query that took approximately 30 minutes to finish. He checked
the Query profiler and noticed a high number for 'Bytes spilled to local
storage'. What might be the problem?
David is using a comparatively larger warehouse.

Explanation

This is incorrect – A larger warehouse usually has sufficient memory to


handle queries without spilling to local storage. A high byte spill to local
storage typically indicates inadequate memory.
Warehouse size has no impact on Bytes spilling.

Explanation

This is incorrect – The size of the warehouse impacts the amount of


available memory. Insufficient memory in a smaller warehouse often leads to
spilling.
Your answer is correct
David is using a comparatively smaller warehouse.

Explanation

This is correct – A smaller warehouse may not have enough memory to


process large datasets, causing data to spill to local storage when the
memory is exhausted, which slows down query performance.
David should contact Snowflake Personnel.

Explanation

This is incorrect – The issue is related to memory capacity, which can be


addressed by adjusting the warehouse size or optimizing the query.
Overall explanation
When a query's memory requirements exceed what is available in the
current warehouse, Snowflake spills data to local storage, leading to
slower performance. This typically happens in smaller warehouses with
limited memory. Increasing the warehouse size can provide additional
memory and reduce or eliminate local storage spills.
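In David's situation, a plausible remedy (warehouse name is hypothetical) is to resize up one or two sizes and re-check the profile:

```sql
-- Move the workload to a larger warehouse to reduce memory spilling
ALTER WAREHOUSE my_wh SET WAREHOUSE_SIZE = 'LARGE';
```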
Domain
Performance Concepts
Question 62Incorrect
While transforming semi-structured data, if you want to expand all
sub-elements recursively using the FLATTEN function, which argument do you
need to set?
OUTER => FALSE

Explanation
This option omits rows with non-expandable elements (default behavior) but
does not affect recursive expansion.
Correct answer
RECURSIVE => TRUE

Explanation

Setting RECURSIVE to TRUE in the FLATTEN function enables recursive


expansion of all nested sub-elements within the specified path, allowing
thorough extraction of complex, deeply nested structures.
Your answer is incorrect
OUTER => TRUE

Explanation

Setting OUTER to TRUE ensures that rows with non-expandable elements (like
empty arrays) are included in the output with NULL values. This does not
enable recursive expansion of sub-elements.
RECURSIVE => FALSE

Explanation

Setting RECURSIVE to FALSE limits expansion to the top-level elements only,


without going deeper into nested levels.
Overall explanation
The FLATTEN function in Snowflake transforms semi-structured data (e.g.,
JSON arrays) into a relational format. By setting RECURSIVE => TRUE ,
the FLATTEN function recursively expands all nested levels, extracting
deeply nested sub-elements for comprehensive data transformation.

If RECURSIVE is set to FALSE (default), only top-level elements are expanded.


The OUTER parameter controls the handling of rows that can't be expanded,
with TRUE including them as NULL rows and FALSE omitting them.

Example -

1. -- Recursive flattening of nested JSON array


2. SELECT * FROM TABLE(FLATTEN(input => parse_json('[{"a": {"b": [1, 2]}}, {"a":
{"b": [3, 4]}}]'), RECURSIVE => TRUE)) f;

Domain
Data Transformation
Question 63Correct
What authentication methods does Snowflake support for REST API
authentication? (Select 2)
Snowflake Account User ID and Password
Your selection is correct
Key Pair Authentication
Authentication is not required in case Snowflake SQL API
Your selection is correct
OAuth
Overall explanation
The Snowflake SQL API supports OAuth and key pair authentication.
Domain
Data Transformation
Question 64Correct
What would you create (UDF or Stored procedure) if you need a function that
can be called as part of a SQL statement and must return a value that will be
used in the statement?
Stored Procedure

Explanation

Stored procedures are designed to perform administrative operations by


executing SQL statements. They are typically called independently using
the CALL statement and are not intended to be part of other SQL
expressions.
Your answer is correct
UDF

Explanation

UDFs are created to calculate and return a value that can be used within SQL
statements. They can be invoked as part of expressions in SELECT , WHERE , or
other clauses, making them suitable for scenarios where a function's return
value is needed within a SQL statement.
Overall explanation
In Snowflake, if you need a function that can be called within a SQL
statement and whose return value is utilized in that statement, you should
create a User-Defined Function (UDF). UDFs are designed to be part of
SQL expressions, allowing for seamless integration into queries. In
contrast, Stored Procedures are intended for executing a series of SQL
statements and are called independently using the CALL statement; they are
not designed to be embedded within other SQL expressions.

Example:
Creating a UDF to calculate the square of a number:

1. CREATE OR REPLACE FUNCTION square_number(x NUMBER)


2. RETURNS NUMBER
3. LANGUAGE SQL
4. AS
5. $$
6. SELECT x * x;
7. $$;

Using the UDF in a SELECT statement:

1. SELECT square_number(column_name) AS squared_value


2. FROM table_name;

This UDF can be seamlessly integrated into SQL queries, providing


calculated values as part of the result set.

Domain
Data Transformation
Question 65Correct
The suspended warehouse cannot be resized until it resumes. (True / False)
TRUE

Explanation

This is incorrect – You can resize a suspended warehouse. However,


the new resources will not be provisioned until the warehouse resumes.
Your answer is correct
FALSE

Explanation

This is correct - You can resize the warehouse while it is


suspended, but the changes take effect when the warehouse is resumed.
Overall explanation
In Snowflake, you can resize a warehouse even while it is
suspended. However, the new compute resources will only be provisioned
once the warehouse is resumed. This means the resizing command can be
issued while suspended, but its effects won't be realized until the warehouse
is active again.
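A short sketch of the sequence (warehouse name is hypothetical):

```sql
-- Resizing succeeds even while the warehouse is suspended
ALTER WAREHOUSE my_wh SUSPEND;
ALTER WAREHOUSE my_wh SET WAREHOUSE_SIZE = 'XLARGE';

-- The new size is provisioned only when the warehouse resumes
ALTER WAREHOUSE my_wh RESUME;
```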
Domain
Performance Concepts
Question 66Correct
If you drop or disable a user in Snowflake in an Okta IdP federated
environment, the user can still log in to Snowflake through Okta.
(True/False)
Your answer is correct
FALSE
TRUE
Overall explanation
Users who are dropped or disabled in Snowflake are still able to log into their
Okta accounts, but they will receive an error message when they attempt to
connect to Snowflake. You must recreate or enable the user before they can
log in.
Domain
Account Access & Security
Question 67Incorrect
Which primary tool loads data to Snowflake from a local file system?
Your answer is incorrect
Snowflake UI

Explanation

Snowflake UI is a web-based interface used for managing and monitoring


Snowflake data warehouses and databases. While it provides a user-friendly
environment for various tasks, such as querying data and managing objects,
it is not specifically designed for loading data from a local file system to
Snowflake.
Correct answer
SnowSQL

Explanation

SnowSQL is a command-line tool provided by Snowflake for interacting with


Snowflake databases. It allows users to execute SQL queries, load data, and
perform various database operations. SnowSQL is the primary tool for
loading data to Snowflake from a local file system.
External Stage

Explanation

External Stage in Snowflake is a feature that allows users to load data from
external cloud storage locations, such as Amazon S3 or Azure Blob Storage,
into Snowflake. It is not used for loading data directly from a local file
system.
ETL tools

Explanation

ETL (Extract, Transform, Load) tools are commonly used for data integration
and loading tasks, including extracting data from various sources,
transforming it, and loading it into a target system like Snowflake. While ETL
tools can be used to load data into Snowflake, they are not the primary tool
for loading data from a local file system.
Overall explanation
SnowSQL is the primary command-line tool used to load data from a
local file system into Snowflake. It allows you to stage data using
the PUT command and then load it into tables via the COPY INTO command.
This tool is especially useful for bulk loading operations from local sources to
Snowflake's internal stages
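A typical two-step load from SnowSQL looks like this (the local path and table name are hypothetical):

```sql
-- Stage a local file into the table's internal stage
PUT file:///tmp/data.csv @%my_table;

-- Load the staged file into the table
COPY INTO my_table
FROM @%my_table
FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1);
```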
Domain
Snowflake Data Platform Features and Architecture
Question 68Correct
Which of the following file format is not supported by Snowflake?
JSON

Explanation

Snowflake supports JSON file format for data loading and unloading
operations. JSON is a popular format for storing semi-structured data.
Your answer is correct
EDI

Explanation

EDI (Electronic Data Interchange) file format is not supported by Snowflake


for data loading and unloading operations. EDI is a standard format used for
exchanging structured data between different computer systems.
ORC

Explanation

Snowflake supports ORC (Optimized Row Columnar) file format for data
loading and unloading operations. ORC is a columnar storage format that
provides high compression and efficient data processing.
PARQUET

Explanation
Snowflake supports PARQUET file format for data loading and unloading
operations. PARQUET is a columnar storage format that is highly optimized
for query performance and efficient data storage.
CSV

Explanation

Snowflake supports CSV file format for data loading and unloading
operations. It is a common format used for storing tabular data.
AVRO

Explanation

Snowflake supports AVRO file format for data loading and unloading
operations. AVRO is a compact and efficient binary format for data
serialization.
Overall explanation
Snowflake supports a range of structured and semi-structured file formats,
including CSV, JSON, AVRO, ORC, and PARQUET, making it versatile for
diverse data scenarios. These formats are ideal for structured and analytical
data, whereas EDI is not a natively supported format.
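Supported formats are typically configured as named file format objects (names below are hypothetical):

```sql
-- Named file formats for two of the supported types
CREATE FILE FORMAT my_json_format TYPE = 'JSON' STRIP_OUTER_ARRAY = TRUE;
CREATE FILE FORMAT my_parquet_format TYPE = 'PARQUET';
```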
Domain
Data Loading and Unloading
Question 69Correct
Which view in the Account Usage Schema can be used to query the
replication history for a specified database?
DATA_TRANSFER_HISTORY
DATABASE_REFRESH_HISTORY
REPLICATION_GROUP_REFRESH_HISTORY
Your answer is correct
REPLICATION_USAGE_HISTORY
Overall explanation
This REPLICATION_USAGE_HISTORY view in the Account Usage
Schema can be used to query the replication history for a specified
database. The returned results include the database name, credits
consumed, and bytes transferred for replication. Usage data is retained for
365 days (1 year).
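A sketch of querying this view (the database name is hypothetical):

```sql
-- Replication credits and bytes for a given database over the last 30 days
SELECT database_name, credits_used, bytes_transferred
FROM SNOWFLAKE.ACCOUNT_USAGE.REPLICATION_USAGE_HISTORY
WHERE database_name = 'MY_DB'
  AND start_time > DATEADD(day, -30, CURRENT_TIMESTAMP());
```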
Domain
Data Protection and Data Sharing
Question 70Correct
What is the maximum number of columns (or expressions) recommended for a
cluster key?
12 to 16
The higher the number of columns (or expressions) in the key, the
better the performance will be
7 to 8
Your answer is correct
3 to 4
Overall explanation
A single clustering key can contain one or more columns or
expressions. Snowflake recommends a maximum of 3 or 4 columns (or
expressions) per key for most tables. Adding more than 3-4 columns
tends to increase costs more than benefits.
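For example, a two-column key stays well within the recommendation (table and column names are hypothetical):

```sql
-- Define a cluster key at table creation
CREATE TABLE sales (sale_date DATE, region VARCHAR, amount NUMBER)
CLUSTER BY (sale_date, region);

-- Or add/change the key on an existing table
ALTER TABLE sales CLUSTER BY (sale_date, region);
```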
Domain
Snowflake Data Platform Features and Architecture
Question 71Correct
Which copyOptions can help load a file with expired metadata (if the
LAST_MODIFIED date is older than 64 days and the initial set of data was
loaded into the table more than 64 days earlier (and if the file was loaded
into the table, that also occurred more than 64 days earlier))? (Select 2)
LOAD_CERTAIN_FILES = TRUE

Explanation

The option LOAD_CERTAIN_FILES = TRUE does not address the scenario of


loading a file with expired metadata. It is not a valid option in Snowflake for
handling files with outdated metadata.
Your selection is correct
LOAD_UNCERTAIN_FILES = TRUE

Explanation

The option LOAD_UNCERTAIN_FILES = TRUE is the correct choice for loading


a file with expired metadata. This option allows loading files with uncertain
metadata, such as when the LAST_MODIFIED date is older than 64 days. It is
suitable for scenarios where the initial data load occurred more than 64 days
earlier.
LOAD_FILES = TRUE

Explanation

The option LOAD_FILES = TRUE does not specifically address the scenario of
loading a file with expired metadata. It is a general option for loading files
and does not provide a solution for the specific condition mentioned in the
question.
Your selection is correct
FORCE = TRUE

Explanation

The option FORCE = TRUE is the correct choice for loading a file with expired
metadata. When set to TRUE, it forces the loading of data files into a table,
even if the metadata is outdated or if the file was loaded into the table more
than 64 days earlier. This option helps in handling files with expired
metadata effectively.
ON_ERROR = CONTINUE

Explanation

The option ON_ERROR = CONTINUE does not specifically address the


scenario of loading a file with expired metadata. It is used to specify the
action to take when an error occurs during the data loading process, but it
does not provide a solution for handling files with outdated metadata.
FORCE = FALSE

Explanation

The option FORCE = FALSE does not directly relate to handling files with
expired metadata. It is used to control whether to force the loading of data
files into a table, but it does not address the specific condition mentioned in
the question.
Overall explanation
In Snowflake, the COPY INTO <table> command tracks loaded files to prevent
duplicate loading. If a file's metadata is older than 64 days, Snowflake may
skip it due to uncertainty about its load history. To load such files, you can
use the LOAD_UNCERTAIN_FILES = TRUE option, which attempts to load files
even when their load history is uncertain. Alternatively, setting FORCE =
TRUE forces the loading of all files, regardless of their load history, but may
result in duplicate data if files were previously loaded.

Example:

1. COPY INTO my_table


2. FROM @my_stage
3. FILE_FORMAT = (TYPE = 'CSV')
4. LOAD_UNCERTAIN_FILES = TRUE;

This command attempts to load data from @my_stage into my_table , including
files with expired metadata.

1. COPY INTO my_table


2. FROM @my_stage
3. FILE_FORMAT = (TYPE = 'CSV')
4. FORCE = TRUE;
This command forces the loading of all files from @my_stage into my_table ,
regardless of their load history. Use these options cautiously to avoid
unintended data duplication.
Domain
Data Loading and Unloading
Question 72Correct
Both external (external cloud storage) and internal (i.e., Snowflake) stages
support unstructured data. (True / False)
FALSE

Explanation

This option is incorrect. Both internal and external stages in Snowflake


support unstructured data storage, enabling data variety and broader
data management capabilities.
Your answer is correct
TRUE

Explanation

Both external stages (cloud storage like Amazon S3, Azure Blob)
and internal stages (within Snowflake) can handle unstructured
data types, including images, videos, and documents. Snowflake
allows storage and processing of unstructured data directly, improving
flexibility in handling various data formats.
Overall explanation
In Snowflake, both external stages (linked to cloud storage) and internal
stages support unstructured data storage, allowing flexible storage and
management of diverse data formats. This helps users integrate structured
and unstructured data efficiently for analysis.
Domain
Data Transformation
Question 73Correct
Monica wants to delete all the data from table t1. She wants to keep the
table structure, so she does not need to create the table again. Which
command will be appropriate for her need?
REMOVE
Your answer is correct
TRUNCATE
UNDROP
DELETE
DROP
Overall explanation
TRUNCATE will delete all of the data from a single table. So, once Monica
truncates table t1, table t1's structure remains, but the data will be deleted.
DELETE, by contrast, is typically used to remove specific rows that match a condition.
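The command Monica needs is simply:

```sql
-- Removes all rows from t1 but keeps the table definition
TRUNCATE TABLE t1;
```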
Domain
Snowflake Data Platform Features and Architecture
Question 74Incorrect
What would happen if we suspend the warehouse while it is executing the
SQL statement?
Your answer is incorrect
All the compute resources of the warehouse will be shut down
immediately, and the running statement will be canceled.

Explanation

This choice is incorrect because suspending the warehouse does not


immediately cancel running statements. Only idle compute resources are
shut down, and running statements are allowed to continue until completion.
When trying to suspend the warehouse, we will get an error while
the same warehouse is executing SQL statements.

Explanation

This choice is incorrect because suspending the warehouse does not result in
an error while the warehouse is executing SQL statements. The suspension
process only affects idle compute resources.
All compute resources of the warehouse will be up until the
statement is complete.

Explanation

This choice is incorrect because suspending the warehouse does not keep all
compute resources up until the statement is complete. Only idle compute
resources are shut down.
Correct answer
Only idle compute resources of the warehouse will be shut down,
allowing any compute resources executing statements to continue
until the statement is complete.

Explanation

This choice is correct because when a warehouse is suspended, only the


idle compute resources are shut down. Any compute resources that are
actively executing statements will continue to do so until the statements are
complete.
Overall explanation
When we suspend a warehouse, Snowflake immediately shuts down all idle
compute resources for the warehouse. However, it allows any compute
resources executing statements to continue until the statements
are complete. At this time, the resources are shut down, and the
warehouse status changes to “Suspended”. Compute resources waiting to
shut down are considered to be in “quiesce” mode.
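The suspension itself is a single statement (warehouse name is hypothetical):

```sql
-- Idle resources shut down immediately; compute resources still executing
-- statements finish first ("quiesce" mode), then shut down
ALTER WAREHOUSE my_wh SUSPEND;
```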
Domain
Performance Concepts
Question 75Correct
Which data does not fit into a predefined data model or schema?
All of these

Explanation

Incorrect. Only unstructured data completely lacks a predefined schema,


whereas structured and semi-structured data have some degree of
organization.
Your answer is correct
Unstructured Data

Explanation

Unstructured data lacks any predefined model or schema, encompassing


formats such as text documents, images, videos, and audio. This data type is
complex and often requires specialized processing techniques. In Snowflake,
unstructured data can be managed in internal or external stages and
processed with Snowflake’s Unstructured Data feature.
Structured-data

Explanation

Structured data adheres to a strict schema, often stored in a tabular format


(rows and columns) in relational databases. It is highly organized, making it
easy to search, analyze, and process using SQL. Examples include customer
records and transaction data tables. In Snowflake, structured data benefits
from efficient querying due to predefined data models.
Semi-Structured Data

Explanation

Semi-structured data does not conform to a fixed schema but retains some
structural elements (e.g., key-value pairs, tags). JSON, XML, and Avro are
typical formats, allowing data to be stored without strict organization but
enabling partial structure for easier parsing. Snowflake supports semi-
structured data with specialized functions (like FLATTEN ) to extract nested
elements.
Overall explanation
Unstructured data does not follow a predefined schema, making
traditional analysis methods challenging. Examples include documents,
multimedia, and social media data. Snowflake supports unstructured data
management, allowing it to be staged in Snowflake and cloud storage. Semi-
structured data, like JSON, has some structure without strict
organization, useful for nested or hierarchical data. Structured data, such
as tables, fits into a defined schema. Snowflake's capabilities allow handling
of all three data types, with functions like FLATTEN for semi-structured data
and specific stages for unstructured data.
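The semi-structured path can be sketched with a VARIANT column and
FLATTEN; the table and field names here are hypothetical:

```sql
-- A VARIANT column holding JSON events.
CREATE OR REPLACE TABLE raw_events (payload VARIANT);

INSERT INTO raw_events
  SELECT PARSE_JSON('{"user":"alice","tags":["a","b"]}');

-- FLATTEN expands the nested array into one row per element.
SELECT e.payload:user::STRING AS user_name,
       t.value::STRING       AS tag
FROM raw_events e,
     LATERAL FLATTEN(input => e.payload:tags) t;
```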
Domain
Data Transformation
Question 76Correct
In what situations should you consider User-Managed Tasks over Serverless
Tasks? (Select 2)
Your selection is correct
Consider when adherence to the schedule interval is less important.
Consider when adherence to the schedule interval is highly
important.
Your selection is correct
Consider when you can fully utilize a single warehouse by
scheduling multiple concurrent tasks to take advantage of available
compute resources.
Consider when you cannot fully utilize a warehouse because too few
tasks run concurrently or they run to completion quickly (in less
than 1 minute).
Overall explanation
User-managed Tasks is recommended when you can fully utilize a
single warehouse by scheduling multiple concurrent tasks to take
advantage of available compute resources. Also, recommended when
adherence to the schedule interval is less critical. Serverless Tasks is
recommended when you cannot fully utilize a warehouse because too
few tasks run concurrently or they run to completion quickly (in less than 1
minute). Also, recommended when adherence to the schedule interval is
critical.
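The difference between the two task types shows up in the CREATE TASK
statement; the object names below are illustrative:

```sql
-- User-managed task: runs on a warehouse you size and manage.
CREATE TASK nightly_load_task
  WAREHOUSE = my_wh
  SCHEDULE = '60 MINUTE'
AS
  INSERT INTO target_table SELECT * FROM staging_table;

-- Serverless task: omit WAREHOUSE and let Snowflake manage compute,
-- starting from a hinted initial size.
CREATE TASK serverless_load_task
  USER_TASK_MANAGED_INITIAL_WAREHOUSE_SIZE = 'XSMALL'
  SCHEDULE = '60 MINUTE'
AS
  INSERT INTO target_table SELECT * FROM staging_table;
```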
Domain
Snowflake Data Platform Features and Architecture
Question 77Correct
Which of these types of VIEW does Snowflake support? (Select 3)
Your selection is correct
STANDARD VIEW
Your selection is correct
MATERIALIZED VIEW
Your selection is correct
SECURE VIEW
PERMANENT VIEW
TEMPORARY VIEW
EXTERNAL VIEW
Overall explanation
Snowflake supports three types of views.
Standard View, Secure View, and Materialized View.

Standard View: It is the default view type. Its underlying DDL is available to
any role with access to the view. When you create a standard view,
Snowflake saves a definition of the view; it does not run the query. The
query runs when someone accesses the view. The standard view will always
execute as the owning role.

Secure View: The secure view is exactly like a standard view, except users
cannot see how that view was defined. Sometimes a secure view will run a
little slower than a standard view to protect the information in a secure view.
Snowflake may bypass some of the optimizations.

Materialized View: A materialized view is more like a table. Unlike a
standard or secure view, Snowflake runs the query right away when you
create a materialized view. It takes the results set and stores that result set
as a table in Snowflake. Because Snowflake is storing that materialized view
as a table, creating micro partitions. Snowflake is creating metadata about
those micro partitions. So when you query a materialized view, if you put a
filter on the view, you get the same benefit of micro partition pruning that
you would get from a table. With Snowflake, the materialized view is
automatically refreshed every time there is a transaction against the base
table. So it is always going to be in sync. If you want, you can also create a
secure materialized view, which again will hide the logic from the user. A
note about materialized views, because Snowflake is auto-refreshing them in
the background, they use some credits, so there is a little bit of a cost there.
Moreover, there is some storage, and Snowflake stores the result set as a
table in Snowflake. So materialized views use more storage and compute
than standard or secure views.
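The three view types above can be sketched as follows; the table and
column names are hypothetical:

```sql
-- Standard view: definition visible, query runs on access.
CREATE VIEW standard_v AS
  SELECT id, amount FROM sales;

-- Secure view: definition hidden from non-owners; some optimizations
-- may be bypassed to protect the data.
CREATE SECURE VIEW secure_v AS
  SELECT id, amount FROM sales;

-- Materialized view: result set stored like a table and auto-refreshed,
-- which consumes some storage and credits.
CREATE MATERIALIZED VIEW mat_v AS
  SELECT id, SUM(amount) AS total FROM sales GROUP BY id;
```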

Domain
Snowflake Data Platform Features and Architecture
Question 78Incorrect
A user's default role is
the name used to log in to the Snowflake WebUI.
Correct answer
the role a user gets set to each time the user logs in to Snowflake.
changed each time the user logs in to Snowflake.
Your answer is incorrect
always the default PUBLIC role.
Overall explanation
A user's default role is the role a user gets set to each time the user logs in
to Snowflake. Snowflake uses roles to control the objects (virtual
warehouses, databases, tables, etc.) that users can access:
 Snowflake provides a set of predefined roles, as well as a framework
for defining a hierarchy of custom roles.
 All Snowflake users are automatically assigned the predefined PUBLIC
role, which enables login to Snowflake and basic object access.
 In addition to the PUBLIC role, each user can be assigned additional
roles, with one of these roles designated as their default role.
 A user’s default role determines the role used in the Snowflake
sessions initiated by the user; however, this is only a default. Users can
change roles within a session at any time.
 Roles can be assigned at user creation or afterward.
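A hedged sketch of assigning and changing a default role (user, role, and
password placeholder are all hypothetical):

```sql
-- Assign a default role at user creation.
CREATE USER jsmith PASSWORD = '<temp-password>' DEFAULT_ROLE = analyst_role;
GRANT ROLE analyst_role TO USER jsmith;

-- Or change the default later.
ALTER USER jsmith SET DEFAULT_ROLE = analyst_role;

-- Within a session, the user can still switch roles at any time.
USE ROLE public;
```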
Domain
Account Access & Security
Question 79Incorrect
How can we add a Directory table explicitly to a stage to store a catalog of
staged files?
Your answer is incorrect
Using CREATE DIRECTORY TABLE command and then add to the
stage by ALTER STAGE command
Using CREATE DIRECTORY TABLES command and then add to the
stage by ALTER STAGE command
Correct answer
Using CREATE STAGE command
Overall explanation
A Directory table is not a separate database object; it stores a catalog of
staged files in cloud storage. Roles with sufficient privileges can query a
directory table to retrieve file URLs to access the staged files and other
metadata. A directory table can be added explicitly to a stage when the
stage is created (using CREATE STAGE) or later (using ALTER STAGE) by
supplying directoryTableParams:

directoryTableParams (for internal stages) ::=
  [ DIRECTORY = ( ENABLE = { TRUE | FALSE }
      [ REFRESH_ON_CREATE = { TRUE | FALSE } ] ) ]

ENABLE = TRUE | FALSE specifies whether to add a directory table to the
stage. When the value is TRUE, a directory table is created with the stage.
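A short sketch of both paths, using a hypothetical stage name:

```sql
-- Add a directory table at stage creation...
CREATE STAGE my_stage
  DIRECTORY = (ENABLE = TRUE);

-- ...or enable one later on an existing stage.
ALTER STAGE my_stage SET DIRECTORY = (ENABLE = TRUE);

-- Query the catalog of staged files.
SELECT relative_path, size, file_url
FROM DIRECTORY(@my_stage);
```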
Domain
Data Transformation
Question 80Incorrect
How can you unload data from Snowflake using the COPY INTO command into
a single file?
Correct answer
By specifying copy option SINGLE=TRUE

Explanation

By specifying the copy option SINGLE=TRUE, you can unload data from
Snowflake using the COPY INTO command into a single file. This option
ensures that the data is unloaded into a single file instead of multiple files,
which is the requirement specified in the question.
By specifying copy option MULTIPLE_FILES=FALSE
Explanation

Specifying copy option MULTIPLE_FILES=FALSE does not align with unloading
data from Snowflake into a single file using the COPY INTO command. This
option does not address the specific requirement of unloading data into a
single file and is not the correct choice for this scenario.
By specifying copy option ONE_FILE=TRUE

Explanation

The copy option ONE_FILE=TRUE is not a valid option in Snowflake for
unloading data into a single file using the COPY INTO command. This option
does not exist in Snowflake's syntax and does not serve the purpose of
unloading data into a single file.
Your answer is incorrect
By specifying copy option MULTIPLE=FALSE

Explanation

Specifying copy option MULTIPLE=FALSE does not directly relate to
unloading data from Snowflake into a single file. This option is not the correct
choice for achieving the specific requirement mentioned in the question.
Overall explanation
When unloading data from Snowflake using the COPY INTO command, you
can control whether the output is a single file or multiple files. By default,
Snowflake may split the output into multiple files to optimize performance.
To ensure that the data is unloaded into a single file, you should set
the SINGLE copy option to TRUE . This approach is particularly useful when a
single consolidated output file is required for downstream processing or
compatibility reasons.

Example -

COPY INTO @my_stage/my_data.csv
FROM my_table
FILE_FORMAT = (TYPE = 'CSV')
SINGLE = TRUE;

In this example, data from my_table is unloaded into a single CSV file
named my_data.csv in the stage my_stage . The SINGLE = TRUE option ensures
that the output is consolidated into one file.
Domain
Data Loading and Unloading
Question 81Incorrect
Which services are managed by Snowflake's cloud services layer? (Select all
that apply)
Your selection is correct
Authentication

Explanation

Authentication is managed by Snowflake's cloud services layer to ensure
secure access to the Snowflake platform. It handles user authentication,
authorization, and identity management.
Your selection is correct
Query Parsing and Optimization

Explanation

Query Parsing and Optimization is managed by Snowflake's cloud services
layer to parse and optimize SQL queries submitted by users for efficient
execution on the underlying infrastructure.
Only Infrastructure Management

Explanation

Only Infrastructure Management is incorrect as Snowflake's cloud services
layer manages not only infrastructure but also authentication, metadata,
query optimization, and access control.
Correct selection
Infrastructure Management

Explanation

Infrastructure Management is managed by Snowflake's cloud services layer
to handle the provisioning, scaling, and maintenance of the underlying
infrastructure required to run Snowflake's services.
Your selection is correct
Metadata Management

Explanation

Metadata Management is managed by Snowflake's cloud services layer to
store and manage metadata related to databases, tables, views, and other
objects within the Snowflake platform.
Your selection is correct
Access Control

Explanation
Access Control is managed by Snowflake's cloud services layer to enforce
security policies, permissions, and roles to control access to data and
resources within the Snowflake platform.
Overall explanation
The cloud services layer in Snowflake manages several key functions,
including authentication, infrastructure management, metadata
management, query parsing and optimization, and access control.
These services ensure that user requests, from login to query execution, are
processed efficiently while Snowflake manages the underlying infrastructure.
This layer also handles security and governance, ensuring that users have
appropriate access to data. It abstracts the complexity of infrastructure and
metadata management, allowing users to focus on their data without
worrying about backend processes.
Domain
Snowflake Data Platform Features and Architecture
Question 82Correct
Which algorithm does Snowflake use to estimate the approximate number of
distinct values in a data set?
HyerAccumulateLog

Explanation

HyerAccumulateLog is not the algorithm used by Snowflake for estimating
the approximate number of distinct values in a data set. This choice is
incorrect as it does not correspond to Snowflake's estimation techniques.
HyperMedianLog

Explanation

HyperMedianLog is not the algorithm utilized by Snowflake for estimating the
approximate number of distinct values in a data set. This choice is incorrect
as it does not reflect Snowflake's approach to estimation.
HyperMeanLog

Explanation

HyperMeanLog is not the algorithm employed by Snowflake for estimating
the approximate number of distinct values in a data set. This choice is
incorrect as it does not match Snowflake's methodology for estimation.
HyperEstimateLog

Explanation

HyperEstimateLog is not the algorithm used by Snowflake to estimate the
approximate number of distinct values in a data set. This choice is incorrect
as it does not align with Snowflake's methods for estimation.
Your answer is correct
HyperLogLog

Explanation

HyperLogLog is the algorithm used by Snowflake to estimate the
approximate number of distinct values in a data set. It is a probabilistic data
structure that provides an efficient way to estimate cardinalities of data sets.
Overall explanation
Snowflake employs the HyperLogLog algorithm to approximate the
count of distinct values in extensive datasets. This probabilistic
method offers a balance between accuracy and resource efficiency,
providing near-accurate results with significantly reduced memory and
computational requirements compared to exact counting methods. It's
particularly beneficial when dealing with large-scale data where traditional
counting would be resource-intensive.
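In practice, the estimate is exposed through APPROX_COUNT_DISTINCT; the
table and column names here are illustrative:

```sql
-- APPROX_COUNT_DISTINCT uses HyperLogLog under the hood, trading
-- exactness for much lower memory and compute than COUNT(DISTINCT).
SELECT COUNT(DISTINCT customer_id)        AS exact_count,
       APPROX_COUNT_DISTINCT(customer_id) AS estimated_count
FROM orders;
```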
Domain
Data Transformation
Question 83Incorrect
Which privileges are provided with a share by the provider? (Select 2)
Grant access(MODIFY) to the specific tables in the database
Grant access(OPERATE) to the database and the schema containing
the tables to share
Correct selection
Grant access(USAGE) to the database and the schema containing
the tables to share
Your selection is correct
Grant access(SELECT) to the specific tables in the database
Grant access(USAGE) to the specific tables in the database
Overall explanation
Shares are named Snowflake objects that encapsulate all of the information
required to share a database. Each share consists of:
 The privileges that grant access to the database(s) and the schema
containing the objects to share.
 The privileges that grant access to the specific objects in the database.
 The consumer accounts with which the database and its objects are
shared.

Example:

CREATE SHARE "SHARED_DATA" COMMENT='';
GRANT USAGE ON DATABASE "DEMO_DB" TO SHARE "SHARED_DATA";
GRANT USAGE ON SCHEMA "DEMO_DB"."TWITTER_DATA" TO SHARE "SHARED_DATA";
GRANT SELECT ON VIEW "DEMO_DB"."TWITTER_DATA"."FOLLOWERS" TO SHARE "SHARED_DATA";
Domain
Data Protection and Data Sharing
Question 84Incorrect
Which IdP vendors provide native Snowflake support for federated
authentication and SSO? (Select 2)
Your selection is incorrect
Microsoft Azure Active Directory
Your selection is correct
Okta
Onelogin
Your selection is correct
Microsoft ADFS
Your selection is incorrect
Google G Suite
Overall explanation
Okta and Microsoft ADFS provide native Snowflake support for federated
authentication and SSO.
Domain
Account Access & Security
Question 85Correct
Which of the Snowflake editions provides a federated authorization feature?
Standard

Explanation

The Standard edition of Snowflake does not provide a federated
authorization feature. It offers basic data warehousing capabilities but lacks
advanced features like federated authorization.
Virtual Private Snowflake(VPS)

Explanation

The Virtual Private Snowflake (VPS) edition of Snowflake does not specifically
mention federated authorization as a feature. It is tailored for organizations
that require a dedicated and isolated environment for their data processing
needs.
Your answer is correct
All of the Snowflake Editions

Explanation

All of the Snowflake Editions, including Standard, Enterprise, Business
Critical, and Virtual Private Snowflake (VPS), provide the federated
authorization feature. This allows users to integrate with external identity
providers for secure access control and authentication across the Snowflake
platform.
Enterprise

Explanation

The Enterprise edition of Snowflake does not specifically provide a federated
authorization feature. While it offers more advanced features than the
Standard edition, federated authorization may not be included in its
offerings.
Business Critical

Explanation

The Business Critical edition of Snowflake does not focus on federated
authorization as a key feature. It is designed for organizations with high data
processing needs but may not include federated authorization capabilities.
Overall explanation
All Snowflake Editions (Standard, Enterprise, Business Critical, Virtual Private
Snowflake) provide Federated Authentication.
Domain
Account Access & Security
Question 86Correct
select * from t1 sample row(100); What would the above query return?
Return an empty sample

Explanation

The query does not return an empty sample.
Return an entire table, including all rows in the table

Explanation

This is incorrect; the query does not return all rows unless the table has
fewer than 100 rows.
Your answer is correct
Return a sample of 100 rows from the table

Explanation

Correct. The query returns a random sample of exactly 100 rows from t1 .
If t1 has fewer than 100 rows, all rows are returned.
Return a sample of a table in which each row has a 10% probability
of being included in the sample

Explanation
This is incorrect; the query specifies a fixed row count, not a percentage-
based sample.
Overall explanation
In Snowflake, SAMPLE ROW (100) in the SELECT statement requests a random
sample of 100 rows from the table. If the table contains fewer than 100
rows, all rows are returned. This method is useful when you want a fixed
number of rows without applying percentage-based sampling.
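As a hedged sketch of the two sampling styles (t1 as in the question), the
unambiguous fixed-size form uses the ROWS keyword, while the
fraction-based form takes a probability:

```sql
-- Fixed-size sampling: returns up to 100 random rows
-- (every row if t1 has fewer than 100).
SELECT * FROM t1 SAMPLE (100 ROWS);

-- Fraction-based sampling: each row has a 10% probability of being
-- included, so the result size varies from run to run.
SELECT * FROM t1 SAMPLE ROW (10);
```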
Domain
Data Transformation
Question 87Incorrect
Which of these is a kind of Cache in Snowflake?
Data/Local Disk Cache

Explanation

Data cache, also known as Local Disk Cache, stores frequently accessed data
on the local SSDs of virtual warehouses. It speeds up query performance by
avoiding repeated reads from cloud storage.
Correct answer
All of these

Explanation

All of these options (Metadata Cache, Data/Local Disk Cache, and Query
Result Cache) are types of caches in Snowflake that are used to optimize
query performance and improve overall system efficiency.
Metadata Cache

Explanation

Metadata cache stores information about table structure, row count, and
data attributes in Snowflake's cloud services layer. It allows queries that
don't need to read actual data (e.g., metadata-based queries) to execute
faster.
Your answer is incorrect
Query Result Cache

Explanation

Query result cache saves the results of previously executed queries for up to
24 hours. It is highly effective when identical queries are run multiple times
without changes to the underlying data.
Overall explanation
Snowflake has three types of cache.
 The metadata cache that lives in the cloud services layer.
 The data cache/local disk cache that lives on the SSD drives in the
virtual warehouses, and
 The query result cache. If a result is small, it will be stored in the
cloud services layer, but larger results are going to be stored in the
storage layer.
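The result cache can be toggled per session to observe its effect; the table
name below is hypothetical:

```sql
-- Disable result-cache reuse to force re-execution, then re-enable it.
ALTER SESSION SET USE_CACHED_RESULT = FALSE;
ALTER SESSION SET USE_CACHED_RESULT = TRUE;  -- the default

-- A query like this can often be answered from the metadata cache in
-- the cloud services layer, without a running warehouse.
SELECT COUNT(*) FROM my_table;
```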
Domain
Performance Concepts
Question 88Correct
The data objects stored by Snowflake are not directly visible nor accessible
by customers; they are only accessible through SQL query operations run
using Snowflake. (True/False)
FALSE

Explanation

FALSE. This statement is incorrect. In Snowflake, the data objects stored are
not directly visible or accessible by customers. Customers can only access
and interact with the data objects through SQL query operations run using
Snowflake. This level of abstraction ensures data security and privacy.
Your answer is correct
TRUE

Explanation

TRUE. In Snowflake, the data objects stored are not directly visible or
accessible by customers. Customers can only access and interact with the
data objects through SQL query operations run using Snowflake. This level of
abstraction provides security and privacy for the stored data.
Overall explanation
In Snowflake, the data objects are not directly visible or accessible to
customers. All data stored by Snowflake is managed internally and can only
be accessed through SQL query operations executed within the Snowflake
environment. This ensures that Snowflake handles aspects such as file
organization, compression, and metadata management, abstracting these
complexities away from the end user
Domain
Snowflake Data Platform Features and Architecture
Question 89Incorrect
What are the two modes available for setting up a multi-cluster warehouse in
Snowflake? Choose the correct options.
Your selection is incorrect
Standard mode

Explanation
This is incorrect. There is no "Standard mode" in the official Snowflake
multi-cluster warehouse configurations. The term may seem intuitive, but
Snowflake’s documentation clearly specifies "Maximized" and "Auto-scaling"
as the only available modes.
Correct selection
Maximized mode

Explanation

This is correct. "Maximized mode" is a valid configuration in Snowflake
where all clusters in a multi-cluster warehouse remain active regardless of
the workload. It is specifically designed for scenarios requiring high
concurrency, ensuring that resources are always available to handle peak
demand.
Your selection is incorrect
Economized mode

Explanation

This is incorrect. Snowflake does not have a mode called "Economized
mode." Although the concept may suggest resource efficiency, the official
modes are focused on handling workload needs through "Maximized" and
"Auto-scaling" configurations, not through an "Economized" setting.
Correct selection
Auto-scaling mode

Explanation

This is correct. In "Auto-scaling mode," Snowflake dynamically adjusts the
number of active clusters based on current workload demands. This
configuration allows for efficient use of resources, scaling up during high
demand and scaling down when the load decreases, making it a key feature
for balancing performance and cost.
Overall explanation
The two modes officially supported for multi-cluster warehouses in Snowflake
are Maximized mode and Auto-scaling mode. "Maximized mode" ensures
all clusters are active, providing consistent performance for high-
concurrency scenarios. "Auto-scaling mode" adjusts clusters dynamically,
optimizing both performance and cost efficiency. These configurations offer
flexibility in managing compute resources to handle varying query loads
effectively.
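The two modes are set through the cluster-count parameters; the warehouse
names below are illustrative:

```sql
-- Auto-scaling mode: MIN < MAX, clusters start and stop with demand.
CREATE WAREHOUSE scaling_wh
  WAREHOUSE_SIZE = 'MEDIUM'
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 4
  SCALING_POLICY = 'STANDARD';

-- Maximized mode: MIN = MAX (> 1), all clusters run whenever the
-- warehouse runs.
CREATE WAREHOUSE maximized_wh
  WAREHOUSE_SIZE = 'MEDIUM'
  MIN_CLUSTER_COUNT = 3
  MAX_CLUSTER_COUNT = 3;
```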
Domain
Performance Concepts
Question 90Incorrect
Which of these SQL functions does Snowflake support? (Select all that apply)
Your selection is correct
Table

Explanation

Snowflake supports Table functions, which generate a set of rows based on
input parameters and return the result as a table. These functions are used
to expand data sets or perform complex transformations on tables.
Your selection is correct
Scalar

Explanation

Snowflake supports Scalar functions, which operate on a single input value
and return a single output value. These functions are used to manipulate
data at a row level within SQL queries.
Correct selection
Window

Explanation

Snowflake supports Window functions, which perform calculations across a
set of table rows related to the current row. These functions are used to
calculate running totals, moving averages, and other analytics within a
specified window of rows.
Your selection is correct
Aggregate

Explanation

Snowflake supports Aggregate functions, which operate on a set of values
and return a single value as a result. These functions are commonly used
with GROUP BY clauses to perform calculations on groups of rows.
Correct selection
System

Explanation

Snowflake supports System functions, which provide information about the
system and environment in which the database is running. These functions
can be used to retrieve metadata, system parameters, and other system-
related information.
Your selection is correct
User-Defined

Explanation
Snowflake supports User-Defined functions, which allow users to define their
own custom functions using SQL or JavaScript. These functions can
encapsulate complex logic and calculations for reuse within SQL queries.
Overall explanation
Snowflake supports a comprehensive range of SQL functions to facilitate
diverse data operations:
 Scalar functions process individual values, returning a single result
per input.
 Aggregate functions compute a single result from multiple input
rows, such as totals or averages.
 Window functions perform calculations across a set of table rows
related to the current row, useful for tasks like ranking or running
totals.
 System functions provide information about the system, session, or
database objects.
 Table functions return a set of rows, often used to process semi-
structured data or generate series.
 User-Defined Functions (UDFs) enable users to create custom
functions in languages like SQL, JavaScript, or Python, extending
Snowflake's capabilities.

This extensive support allows users to perform complex data manipulations
and analyses within Snowflake.
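A UDF, for example, can be defined directly in SQL; the function name and
logic here are illustrative:

```sql
-- A simple scalar SQL UDF.
CREATE FUNCTION area_of_circle(radius FLOAT)
  RETURNS FLOAT
  AS 'PI() * radius * radius';

-- Use it like any built-in scalar function.
SELECT area_of_circle(2.0);
```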

Domain
Data Transformation
Question 91Incorrect
Time Travel can be disabled for an account by ACCOUNTADMIN. (True/False)
Your answer is incorrect
TRUE
Correct answer
FALSE
Overall explanation
Time Travel cannot be disabled for an account. A user with the
ACCOUNTADMIN role can set DATA_RETENTION_TIME_IN_DAYS to 0 at the
account level, which means that all databases (and subsequently all
schemas and tables) created in the account have no retention period by
default; however, this default can be overridden at any time for any
database, schema, or table.
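A sketch of this behavior (the table name is hypothetical):

```sql
-- ACCOUNTADMIN can set the account default retention to 0 days,
-- but this does not disable Time Travel itself.
ALTER ACCOUNT SET DATA_RETENTION_TIME_IN_DAYS = 0;

-- The default can still be overridden for any database, schema, or table.
ALTER TABLE important_table SET DATA_RETENTION_TIME_IN_DAYS = 1;
```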
Domain
Data Protection and Data Sharing
Question 92Correct
Which schema can be used to find out about storage, compute, and objects
in a Snowflake account?
SNOWFLAKE_SCHEMA
Explanation

This is incorrect – SNOWFLAKE_SCHEMA is not used for account-level
metadata; it's primarily focused on system-level information within
Snowflake's ecosystem.
RESOURCE_SCHEMA

Explanation

This is incorrect – There is no specific schema named RESOURCE_SCHEMA
in Snowflake for retrieving detailed account-level data.
USAGE_SCHEMA

Explanation

This is incorrect – USAGE_SCHEMA isn't a recognized schema in Snowflake
for accessing account data regarding storage, compute, and objects.
Your answer is correct
INFORMATION_SCHEMA

Explanation

This is correct – INFORMATION_SCHEMA in Snowflake provides metadata
about various objects, including tables, views, and other components within
databases. It offers insights into storage usage, compute resources, and
various database objects, making it the appropriate choice for retrieving
such information.
Overall explanation
The correct schema to gather information about storage, compute, and
objects in a Snowflake account is INFORMATION_SCHEMA. This schema
provides a collection of views and table functions that display metadata
regarding objects, usage statistics, storage, and various database
components. The INFORMATION_SCHEMA is standard across databases,
facilitating detailed analysis of objects like tables, views, and file formats.

The schema contains specific views such as TABLES , COLUMNS , and STAGES ,
which allows users to see metadata about table structure, storage allocation,
and object history. Here’s a code snippet example that
uses INFORMATION_SCHEMA :

SELECT table_name, table_type, created
FROM your_database.INFORMATION_SCHEMA.TABLES
WHERE table_schema = 'PUBLIC';

This query retrieves a list of tables along with their types and creation
timestamps, filtered by schema. The INFORMATION_SCHEMA offers a robust,
standardized way to manage and analyze data assets within a Snowflake
environment, supporting effective database management and optimization.
Domain
Performance Concepts
Question 93Correct
Which object parameter can users with the ACCOUNTADMIN role use to set
the default retention period for their account?
DATA_RETENTION_TIME_IN_HOURS
DATA_RETENTION_IN_TIME_TRAVEL
Your answer is correct
DATA_RETENTION_TIME_IN_DAYS
DATA_RETENTION_TIME_MAX
Overall explanation
Users can use the DATA_RETENTION_TIME_IN_DAYS object parameter
with the ACCOUNTADMIN role to set the default retention period for their
account.
Domain
Data Protection and Data Sharing
Question 94Correct
You can create an account-level network policy using _____ (Select all that
apply)
Your selection is correct
Snowsight
Your selection is correct
Classic Web Interface
Only Snowflake Support can create the Account level Network Policy
Your selection is correct
SQL
Overall explanation
Only security administrators (i.e., users with the SECURITYADMIN role) or
higher or a role with the global CREATE NETWORK POLICY privilege can
create network policies using Snowsight, Classic Web Interface, and SQL.
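The SQL path can be sketched as follows; the policy name and IP ranges are
illustrative:

```sql
-- Create a network policy restricting logins to an office range,
-- with one blocked address inside it.
CREATE NETWORK POLICY office_only
  ALLOWED_IP_LIST = ('192.168.1.0/24')
  BLOCKED_IP_LIST = ('192.168.1.99');

-- Activate it at the account level.
ALTER ACCOUNT SET NETWORK_POLICY = office_only;
```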
Domain
Account Access & Security
Question 95Incorrect
The VALIDATION_MODE parameter does not support COPY statements that
transform data during a load. (True / False)
Your answer is incorrect
FALSE

Explanation

FALSE. The statement is incorrect. The VALIDATION_MODE parameter in
Snowflake does not support COPY statements that transform data
during a load. It is important to note that this parameter is specifically for
validation checks and does not include data transformation capabilities.
Correct answer
TRUE

Explanation

TRUE. The VALIDATION_MODE parameter in Snowflake does not
support COPY statements that transform data during a load. This
parameter is used to specify whether to perform validation checks during a
data load operation, but it does not allow for data transformation within
the COPY statement.
Overall explanation
Types of VALIDATION_MODE in Snowflake's COPY INTO Command:

Snowflake’s VALIDATION_MODE parameter is essential for validating data files
before loading them into a table. It helps to catch errors or inconsistencies in
the data format or schema early, which ensures smooth data loads.
Snowflake currently supports these VALIDATION_MODE options:
Snowflake currently supports these VALIDATION_MODE options:

1. RETURN_ERRORS

This mode scans the data files and reports any validation errors found. It
does not load any data into the table. Using RETURN_ERRORS helps in
identifying initial issues, like type mismatches or missing columns, without
attempting a full data load. It’s useful for catching a sample of errors to
address before proceeding with an actual load.

COPY INTO target_table
FROM @my_stage/data_file.csv
FILE_FORMAT = (TYPE = 'CSV')
VALIDATION_MODE = 'RETURN_ERRORS';

2. RETURN_ALL_ERRORS

Similar to RETURN_ERRORS , this mode also scans the data for errors without
loading data. However, instead of stopping at the first few
errors, RETURN_ALL_ERRORS continues to scan the entire file and returns all
errors found. This comprehensive error report is useful when you want to see
every potential issue across the data files, although it can be slower due to
the complete scan.

COPY INTO target_table
FROM @my_stage/data_file.csv
FILE_FORMAT = (TYPE = 'CSV')
VALIDATION_MODE = 'RETURN_ALL_ERRORS';

3. RETURN_n_ROWS

The RETURN_n_ROWS mode validates the data files and returns up to the
specified number of rows that would have been loaded, without actually
loading them into the target table. It’s an effective way to preview data
before performing a full load to ensure it appears in the expected format.
The parameter n specifies how many sample rows should be returned. This
option does not check for data errors but allows users to inspect a sample of
the data.

COPY INTO target_table
FROM @my_stage/data_file.csv
FILE_FORMAT = (TYPE = 'CSV')
VALIDATION_MODE = 'RETURN_5_ROWS'; -- Returns up to 5 rows

Important Note:

These VALIDATION_MODE options do not support COPY statements that involve
transformations (e.g., reordering columns or applying expressions). If data
transformations are necessary, they must be handled separately, as
validation and transformation cannot be performed together.

Domain
Data Transformation
Question 96Correct
The major benefits of defining Clustering Keys: (Select 2)
Your selection is correct
To help improve query performance

Explanation

This is correct. Clustering Keys help improve query performance by
physically ordering the data in the table based on the specified keys. This
can reduce the amount of data that needs to be scanned or processed during
query execution, resulting in faster query response times.
To help in faster data sharing

Explanation
This is incorrect. Faster data sharing is not a direct benefit of defining
Clustering Keys. Clustering Keys primarily impact table organization and
query performance rather than data sharing speed.
Your selection is correct
To help optimize table maintenance

Explanation
This is correct. Defining Clustering Keys helps optimize table maintenance
by organizing data in a way that reduces the need for frequent
reorganization or restructuring of the table, leading to improved performance
and efficiency in managing the data.
To help in organizing small tables (<1 GB)

Explanation

This is incorrect. Clustering Keys are more beneficial for optimizing larger
tables rather than small tables (<1 GB). While they can still be used for
smaller tables, the impact on performance and maintenance optimization
may not be as significant as with larger datasets.
Overall explanation
Clustering keys provide significant benefits in improving query
performance by organizing data in a way that reduces the need for full
table scans. They also help optimize table maintenance by automatically
reclustering tables as data is modified. These advantages are particularly
impactful for large tables, where clustering ensures that queries run
efficiently. However, clustering keys do not impact data sharing or
significantly benefit small tables.
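As a hedged sketch (the table and column names are hypothetical), a clustering key is defined with CLUSTER BY, and clustering quality can be inspected with the SYSTEM$CLUSTERING_INFORMATION function:

```sql
-- Define a clustering key on a large table (at creation time or later).
ALTER TABLE sales CLUSTER BY (sale_date, region);

-- Inspect how well the table is clustered on those columns.
SELECT SYSTEM$CLUSTERING_INFORMATION('sales', '(sale_date, region)');
```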
Domain
Performance Concepts
Question 97Correct
The major benefits of defining Clustering Keys: (Select 2)
To help in faster data sharing
To help in organizing small tables (<1 GB)
Your selection is correct
To help optimize table maintenance
Your selection is correct
To help improve query performance
Overall explanation
Defining clustering keys for very large tables (in the multi-terabyte
range) helps optimize table maintenance and query performance.
Small tables are not a good candidate for clustering.
Domain
Performance Concepts
Question 98Incorrect
Which SQL command determines whether a network policy is set on the
account or for a specific user?
SHOW POLICIES
SHOW PARAMETER
Your answer is incorrect
SHOW NETWORK_POLICIES
Correct answer
SHOW PARAMETERS
SHOW POLICY
Overall explanation
The SHOW PARAMETERS command determines whether a network policy
is set on the account or for a specific user.

For Account level: SHOW PARAMETERS LIKE 'network_policy' IN ACCOUNT;

For User level: SHOW PARAMETERS LIKE 'network_policy' IN USER <username>;

Example: SHOW PARAMETERS LIKE 'network_policy' IN USER john;
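For context, a hedged sketch of how such a network policy might be created and assigned in the first place (the policy name, IP range, and username are hypothetical):

```sql
-- Create a network policy allowing a single CIDR block.
CREATE NETWORK POLICY office_only ALLOWED_IP_LIST = ('192.168.1.0/24');

-- Assign it at the account level...
ALTER ACCOUNT SET NETWORK_POLICY = office_only;

-- ...or for a specific user.
ALTER USER john SET NETWORK_POLICY = office_only;
```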

Domain
Account Access & Security
Question 99Correct
Monica has successfully created a task with the 5 minutes schedule. It has
been 30 minutes, but the task did not run. What could be the reason?
Monica doesn't have the authority to run the task
Monica should run the ALTER TASK command to SUSPEND the task,
and then again run the ALTER TASK command to RESUME the task
Task schedule should not be less than 60 minutes
Your answer is correct
Monica should run the ALTER TASK command to RESUME the task
Overall explanation
A newly created task is suspended by default, so it will not run on its
schedule until it is explicitly started with the ALTER TASK ... RESUME
command.
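A minimal sketch of the create-then-resume sequence (the task, warehouse, and table names are hypothetical):

```sql
-- A task is created in a SUSPENDED state by default.
CREATE TASK my_task
  WAREHOUSE = my_wh
  SCHEDULE = '5 MINUTE'
AS
  INSERT INTO audit_log SELECT CURRENT_TIMESTAMP();

-- It will not run on its schedule until explicitly resumed.
ALTER TASK my_task RESUME;
```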
Domain
Snowflake Data Platform Features and Architecture
Question 100Correct
What will happen if a policy is assigned to a user who is already signed in?
Your answer is correct
The user can't do anything else until signed out and signed back in
again.
The user can continue running the SQL queries in the currently
opened session.
There will be no interruption until the user logoffs and signs in
again.
Overall explanation
If a policy is assigned to a user who is already signed in, the user cannot
do anything else until they sign out and sign back in again, at which point
the new policy takes effect.
Domain
Account Access & Security
