
[COF-C02] SnowPro Core Certification Questions Part -3

Question 1: Incorrect
Federated Authentication is supported by the following:

(Select all that apply)

Business Critical

(Correct)

Enterprise Edition

(Correct)

Standard Edition

(Correct)

VPS

(Correct)

Explanation
Federated authentication is supported by all of the Snowflake editions.
Question 2:
Skipped
What are the key benefits of The Data Cloud? (Select 3)

Action

(Correct)


Governance

(Correct)

Maintenance

Access

(Correct)

Backup

Explanation
The benefits of The Data Cloud are Access, Governance, and Action.

Access means that organizations can easily discover data and share it internally or with
third parties without regard to geographical location.

Governance is about setting policies and rules and protecting the data in a way that can
unlock new value and collaboration while maintaining the highest levels of security and
compliance.

Action means you can empower every part of your business with data to build better
products, make faster decisions, create new revenue streams and realize the value of
your greatest untapped asset, your data.

Question 3:
Skipped
Which of these are Snowflake Cloud Partner Categories? (Select 3)

Data Integration

(Correct)

Native Programmatic Interfaces


(Correct)

Machine Learning & Data Science

(Correct)

Application Integration

Explanation
Snowflake has the following Cloud Partner Categories:
• Data Integration
• Business Intelligence (BI)
• Machine Learning & Data Science
• Security, Governance & Observability
• SQL Development & Management, and
• Native Programmatic Interfaces.
Question 4:
Skipped
What actions can a consumer perform on a share? (Select 2)

Import the same share to more than one database

Copy shared data into another table in their own account with CREATE TABLE
AS

(Correct)

Clone a share

Query the shared data and join it with an existing table in their own account

(Correct)

Execute Time Travel on a share

Re-share the share

Explanation
Shared databases are read-only. A consumer cannot UPDATE a share. However, the
consumer can do a CREATE TABLE AS to make a point-in-time copy of the data that's
been shared. The consumer cannot clone a share, re-share it, or forward it. Also,
Time Travel on shared data is not available to the consumer. A share can be imported
into only one database.

Note: In the exam, you may be asked for Reader Account as well.

Question 5:
Skipped
Scoped URL is ideal for

Ideal for business intelligence applications or reporting tools that need to display the unstructured file contents

Ideal for custom applications that require access to unstructured data files

Ideal for use in custom applications, providing unstructured data to other accounts via a share

(Correct)

None of these

Explanation
Scoped URL: Encoded URL that permits temporary access to a staged file without
granting privileges to the stage. The URL expires when the persisted query result period
ends (i.e., the results cache expires), which is currently 24 hours. Ideal for use in
custom applications, providing unstructured data to other accounts via a share, or for
downloading and ad hoc analysis of unstructured data via Snowsight.

File URL: URL that identifies the database, schema, stage, and file path to a set of files.
A role that has sufficient privileges on the stage can access the files. Ideal for custom
applications that require access to unstructured data files.

Pre-signed URL: Simple HTTPS URL used to access a file via a web browser. A file is
temporarily accessible to users via this URL using a pre-signed access token. The
expiration time for the access token is configurable. Ideal for business intelligence
applications or reporting tools that need to display unstructured file contents.

Question 6:
Skipped
Which is the default timestamp in Snowflake?

TIMESTAMP_NTZ

(Correct)

None of these

TIMESTAMP_TZ

TIMESTAMP_LTZ

Explanation
TIMESTAMP_NTZ is the default timestamp type if you just define a column as a
timestamp. Hint to remember: NTZ represents NO TIME ZONES.
Question 7:
Skipped
Which is the fastest option for selecting staged data files to load from a stage?

Using pattern matching to identify specific files by pattern

By path (internal stages) / prefix (Amazon S3 bucket)

Specifying a list of specific files to load

(Correct)

Explanation
Of the three options for identifying/specifying data files to load from a stage, providing
a discrete list of files is generally the fastest; however, the FILES parameter supports a
maximum of 1,000 files, meaning a COPY command executed with the FILES parameter
can only load up to 1,000 files. Example: copy into load1 from @%load1/data1/
files=('test1.csv', 'test2.csv', 'test3.csv', 'test4.csv')
Question 8:
Skipped
John has a table EMPLOYEE_DATA, and he wants to create another table
EMPLOYEE_DATA_OTHER, which should be the same as EMPLOYEE_DATA table with
the same data. What is the best option for John?

Create the table with LIKE SQL command as follows - CREATE TABLE
EMPLOYEE_DATA_OTHER LIKE EMPLOYEE_DATA;

Create the table with same data with SQL command as follows - CREATE TABLE
EMPLOYEE_DATA_OTHER AS SELECT * FROM EMPLOYEE_DATA;

Clone the table with same data with SQL command as follows - CREATE TABLE
EMPLOYEE_DATA_OTHER CLONE EMPLOYEE_DATA;

(Correct)


CREATE SHARE EMPLOYEE_DATA;

Explanation
The best option is to Clone the table as EMPLOYEE_DATA and
EMPLOYEE_DATA_OTHER have the same structure and data. It will help save the
storage cost. The LIKE command only creates an empty table. CREATE TABLE … AS
SELECT (also referred to as CTAS) creates a new table populated with the data returned
by a query but consumes additional storage.
Question 9:
Skipped
In which layer does Snowflake perform query execution?

Query Processing

(Correct)

Cloud Services

Database Storage

None of these

Explanation
Query execution is performed in the processing layer. Snowflake processes queries
using “virtual warehouses.”
Question 10:
Skipped
What is a key benefit of scaling out a warehouse?

Scaling out improves performance.

Scaling out improves concurrency.


(Correct)

Explanation
Scaling out is explicitly designed for handling queuing and performance issues related
to large numbers of concurrent users and/or queries.
Question 11:
Skipped
Which of these roles can configure a network policy? (Select 2)

SECURITYADMIN

(Correct)

USERADMIN

ACCOUNTADMIN

(Correct)

SYSADMIN

PUBLIC

Explanation
Only security administrators (i.e., users with the SECURITYADMIN role) or higher or a
role with the global CREATE NETWORK POLICY privilege can create network policies.
Question 12:
Skipped
Which of these are applicable for Snowflake Connector for Kafka? (Select all that
apply)


If the topics are not mapped, then the Kafka connector creates a new table for
each topic using the topic name

(Correct)

The Kafka connector subscribes to one or more Kafka topics

(Correct)

Reads data from one or more Kafka topics and loads the data into a Snowflake
table

(Correct)

Kafka connector requires a pre-configured Snowflake table to map the topics with that Snowflake table

Kafka topics can be mapped to existing Snowflake tables in the Kafka configuration

(Correct)

Explanation
Kafka topics can be mapped to existing Snowflake tables in the Kafka configuration. If
the topics are not mapped, then the Kafka connector creates a new table for each topic
using the topic name. The Kafka connector subscribes to one or more Kafka topics
based on the configuration information provided via the Kafka configuration file or
command line (Or the Confluent Control Center; Confluent only).
Question 13:
Skipped
Snowflake network policies currently support both Internet Protocol versions 4 and 6
(i.e., IPv4 as well as IPv6). (True/False)

FALSE
(Correct)

TRUE

Explanation
Network policies currently support only Internet Protocol version 4 (i.e. IPv4) addresses.
Question 14:
Skipped
How can you create a "Super-User" or "Super-Role" in Snowflake who can bypass all
the authorization checks?

Contact Snowflake personnel to create a Super-Role or Super-User for your account

ACCOUNTADMIN role is the same as Super-Role

CREATE ROLE SUPER_ROLE;

There is no concept of SUPER-ROLE or SUPER-USER in Snowflake

(Correct)

Explanation
There is no concept of a “super-user” or “super-role” in Snowflake that can bypass
authorization checks. All access requires appropriate access privileges.
Question 15:
Skipped
Direct data sharing can only be done with accounts in the same region and the same
cloud provider. (TRUE/FALSE)

FALSE

TRUE

(Correct)

Explanation
Direct data sharing can only be done with accounts in the same region and the same
cloud provider. Suppose you want to share with someone outside of your region. In that
case, you simply do a replication of that database into the region you want to share with
and share from there.
Question 16:
Skipped
Snowflake supports SQL UDFs that return a set of rows. Which keyword in CREATE
FUNCTION statement does need to be specified to enable UDF (i.e., UDTF) to return a
set of rows?

MULTIPLE

ROWS

TABLE

(Correct)

SCALAR

Explanation
TABLE keyword after RETURNS needs to be specified to create a UDTF (user-defined
table function). Example :

create function t()
returns table(msg varchar)
as
$$
select 'Hello'
union
select 'World'
$$;

Remember - UDF returns a singular scalar value or if defined as a TABLE function, a set
of rows. If you see UDTF in the exam, that simply means UDF that returns a set of rows.

Question 17:
Skipped
What is the preferred way to distinguish empty strings from NULLs while unloading in
CSV files?

Enclose strings in quotes by setting the FIELD_OPTIONALLY_ENCLOSED_BY option.

(Correct)

Set EMPTY_FIELD_AS_NULL to FALSE

Set EMPTY_FIELD_AS_NULL to TRUE

Leave string fields unenclosed by setting the FIELD_OPTIONALLY_ENCLOSED_BY option to NONE.

Explanation
An empty string is typically represented by a quoted empty string (e.g. '') to indicate that
the string contains zero characters. The preferred way is to enclose strings in quotes
by setting the FIELD_OPTIONALLY_ENCLOSED_BY option, to distinguish empty strings
from NULLs in output CSV files.
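
For example, a minimal unload sketch (assuming a hypothetical table T1 and named stage MY_STAGE) that quotes string fields so empty strings stay distinguishable from NULLs in the output:

COPY INTO @my_stage/unload/
FROM t1
FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"');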
Question 18:
Skipped
If you make any changes (e.g., insert, update) in a cloned table, then __

Only the changed micro partitions are written to the data storage

(Correct)

Cloned tables are read-only, you can not make any changes

The source table also gets updated with the new changes in the cloned table

The entire table is written to data storage

Explanation
Zero-copy cloning allows us to make a snapshot of any table, schema, or database
without actually copying data. A clone is writable and is independent of its source (i.e.,
changes made to the source or clone are not reflected in the other object). A new clone
of a table points to the original table's micro partitions, using no data storage. If we
make any changes in the cloned table, then only its changed micro partitions are
written to storage.
Question 19:
Skipped
Is it possible to create a user without a password?

Yes

(Correct)

No
Explanation
Yes, it is possible to create a user in Snowflake without a password. We cannot use the
Snowflake web interface to create users with no passwords or remove passwords from
existing users, and we must use CREATE USER or ALTER USER. Without a password in
Snowflake, a user cannot log in using Snowflake authentication and must use federated
authentication instead.
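
As a sketch (the user and login names are illustrative), such a user could be created with:

CREATE USER federated_user LOGIN_NAME = 'federated_user@example.com';
-- No PASSWORD property is set, so this user can only log in through federated authentication (SSO).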
Question 20:
Skipped
The warehouse performance can be evaluated by querying the

Account Usage LOAD_HISTORY view

Information Schema LOAD_HISTORY view

Information Schema QUERY_HISTORY view

Account Usage QUERY_HISTORY view

(Correct)

Explanation
The warehouse performance can be evaluated by querying the Account Usage
QUERY_HISTORY view.
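
A sketch of such an evaluation (the columns and filter chosen here are illustrative):

SELECT warehouse_name,
       AVG(total_elapsed_time) AS avg_elapsed_ms,
       AVG(queued_overload_time) AS avg_queued_ms
FROM snowflake.account_usage.query_history
WHERE start_time > DATEADD('day', -7, CURRENT_TIMESTAMP())
GROUP BY warehouse_name;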
Question 21:
Skipped
Which privilege is required to execute queries using a virtual warehouse?

OPERATE

MONITOR


MODIFY

USAGE

(Correct)

Explanation
Virtual Warehouse Privileges: USAGE: Enables using a virtual warehouse and, as a
result, executing queries on the warehouse. If the warehouse is configured to auto-
resume when a SQL statement (e.g. query) is submitted to it, the warehouse resumes
automatically and executes the statement.

MODIFY: Enables altering any properties of a warehouse, including changing its size.
Required for assigning a warehouse to a resource monitor. Note that only the
ACCOUNTADMIN role can assign warehouses to resource monitors.

MONITOR: Enables viewing of current and past queries executed on a warehouse as
well as usage statistics on that warehouse.

OPERATE: Enables changing the state of a warehouse (stop, start, suspend, resume). In
addition, enables viewing current and past queries executed on a warehouse and
aborting any executing queries.

OWNERSHIP: Grants full control over a warehouse. Only a single role can hold this
privilege on a specific object at a time.

Question 22:
Skipped
Which table function in the Snowflake Information Schema can be used to query the
replication history for a specified database within a specified date range?

REPLICATION_GROUP_REFRESH_HISTORY

DATA_TRANSFER_HISTORY

REPLICATION_USAGE_HISTORY
(Correct)

DATABASE_REFRESH_HISTORY

Explanation
The table function REPLICATION_USAGE_HISTORY in Snowflake Information
Schema can be used to query the replication history for a specified database within a
specified date range. The information returned by the function includes the database
name, credits consumed and bytes transferred for replication.
Question 23:
Skipped
Suppose a file in a stage has a LAST_MODIFIED date older than 64 days and the initial set of
data was loaded into the table more than 64 days earlier. In this case, to prevent any
data loss, the COPY command loads the file by default. (True / False)

TRUE

FALSE

(Correct)

Explanation
The COPY command cannot definitively determine whether a file has been loaded
already if the LAST_MODIFIED date is older than 64 days and the initial set of data was
loaded into the table more than 64 days earlier (and if the file was loaded into the table,
that also occurred more than 64 days earlier). In this case, to prevent accidental reload,
the command skips the file by default.
Question 24:
Skipped
Which Snowsight interface does help in setting up Multi-factor authentication (MFA)?

Admin Interface

Account Selector Interface


Left Nav interface

You can not setup Multi-factor authentication (MFA) using Snowsight interface

User Menu Interface

(Correct)

Explanation
There are three interfaces in Snowsight. Left Nav, User Menu, and Account Selector.

Left Navigation consists of Worksheets, Dashboards, Data, Marketplace, Activity, Admin, and Help & Support.

The User Menu lets you switch roles and access your Profile (including multi-factor authentication (MFA) setup), Partner Connect, Documentation, Support, and Sign Out.

The account selector, located at the bottom of the left nav, lets you sign in to other
Snowflake accounts.

Question 25:
Skipped
Snowflake stores metadata about all rows stored in a micro-partition, including (Select
3)

The number of similar values

Additional properties used for both optimization and efficient query processing

(Correct)

The number of distinct values


(Correct)

The range of values for each of the columns in the micro-partition

(Correct)

The range of values for the first column in the micro-partition

Explanation
Micro-partitioning is automatically performed on all Snowflake tables. Tables are
transparently partitioned using the Ordering of the data as inserted/loaded. Snowflake
stores metadata about all rows stored in a micro-partition, including:
• The range of values for each of the columns in the micro-partition.
• The number of distinct values.
• Additional properties used for both optimization and efficient query processing.
Question 26:
Skipped
What is the maximum data retention period for transient databases, schemas, and
tables for Snowflake Enterprise Edition (and higher)?

90 days

0 days

1 day

(Correct)

30 days

Explanation
For Snowflake Enterprise Edition (and higher):
• For transient databases, schemas, and tables, the retention period can be set to
0 (or unset back to the default of 1 day). The same is also true for temporary
tables.
• For permanent databases, schemas, and tables, the retention period can be set
to any value from 0 up to 90 days.
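
For illustration (the table name is hypothetical), the retention period on a transient table can be set at creation or lowered afterwards:

CREATE TRANSIENT TABLE stage_events (id INT) DATA_RETENTION_TIME_IN_DAYS = 1;
ALTER TABLE stage_events SET DATA_RETENTION_TIME_IN_DAYS = 0;  -- disables Time Travel for this table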
Question 27:
Skipped
Select the type of function that returns one value per group of rows (for example -
AVG, MAX, MIN)

Scalar Function

Table Function

User-Defined Function

Window Function

Aggregate Function

(Correct)

System Function

Explanation
Aggregate functions operate on values across rows to perform mathematical
calculations such as sum, average, counting, minimum/maximum values, standard
deviation, and estimation, as well as some non-mathematical operations. An aggregate
function takes multiple rows (actually, zero, one, or more rows) as input and produces a
single output. In contrast, scalar functions take one row as input and produce one row
(one value) as output. An aggregate function always returns exactly one row, even when
the input contains zero rows. Typically, if the input contained zero rows, the output is
NULL. However, an aggregate function could return 0, an empty string, or some other
value when passed zero rows.
Question 28:
Skipped
John wants to load data files from an external stage to Snowflake. He has split the
large file into smaller 100 - 250 MB data files, and there is a total of 16 smaller data
files. What warehouse size would you recommend him to use for loading these data
files quickly and cost-effectively?

S

(Correct)

XS

XL

Explanation
An XS-sized warehouse can load eight files in parallel. An S-sized warehouse can load sixteen
files in parallel. An M-sized warehouse can load thirty-two files in parallel. An L-sized
warehouse can load sixty-four files in parallel. An XL-sized warehouse can load one hundred
twenty-eight files in parallel, and so on.
Question 29:
Skipped
Fail-safe helps access historical data after the Time Travel retention period has ended.
(True/False)

FALSE
(Correct)

TRUE

Explanation
Fail-safe is not provided as a means for accessing historical data after the Time Travel
retention period has ended. It is for use only by Snowflake to recover data that may
have been lost or damaged due to extreme operational failures. Data recovery through
Fail-safe may take from several hours to several days to complete.
Question 30:
Skipped
Which of these objects are not replicated? (Select 2)

Permanent Tables

Views

Temporary Tables

(Correct)

Transient Tables

External Tables

(Correct)

Explanation
Temporary Tables, External Tables, Stages, Temporary Stages, Streams, and Tasks do
not get replicated.
Question 31:
Skipped
Which of these are types of the stream? (Select 3)

Update-only

Insert-only

(Correct)

External

Standard

(Correct)

Append-only

(Correct)

Explanation
The following stream types are available based on the metadata recorded by each:

Standard - Supported for streams on tables, directory tables, or views. A standard (i.e.,
delta) stream tracks all DML changes to the source object, including inserts, updates,
and deletes (including table truncates).

Append-only - Supported for streams on standard tables, directory tables, or views. An
append-only stream tracks row inserts only. Update and delete operations (including
table truncates) are not recorded.

Insert-only - Supported for streams on external tables only. An insert-only stream tracks
row inserts only; it does not record delete operations that remove rows from an
inserted set (i.e., no-ops).

Question 32:
Skipped
You can create a user-level network policy using _____

SQL

(Correct)

Only Snowflake Support can create the Account level Network Policy

Classic Web Interface

Snowsight

Explanation
You can set a policy at the user level, but this can only be done through the SQL
commands.

To activate a network policy for an individual user, set the NETWORK_POLICY parameter for the user using ALTER USER.

Question 33:
Skipped
Which are the required parameters for creating a Network Policy? (Select 2)

Blocked IP Addresses

Comment

Allowed IP Addresses

(Correct)

Policy Name

(Correct)

Explanation
Policy Name and Allowed IP Addresses are mandatory for creating a Network Policy.

Blocked IP Addresses and Comment Parameters are optional.

Please Note - If you provide both Allowed IP Addresses and Blocked IP Addresses,
Snowflake applies the Blocked List first.
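
A minimal sketch (the policy name and IP ranges are made up) of creating and activating a network policy:

CREATE NETWORK POLICY corp_policy
  ALLOWED_IP_LIST = ('192.168.1.0/24')
  BLOCKED_IP_LIST = ('192.168.1.99');

-- Activate for the whole account (requires SECURITYADMIN or higher):
ALTER ACCOUNT SET NETWORK_POLICY = corp_policy;

-- Or activate for a single user:
ALTER USER monica SET NETWORK_POLICY = corp_policy;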

Question 34:
Skipped
What are the three layers in Snowflake's unique architecture? (Select 3)

Cloud Services

(Correct)

Query Processing

(Correct)

Database Storage

(Correct)

Computation Services

Explanation
Snowflake's unique architecture consists of three key layers:
• Database Storage
• Query Processing
• Cloud Services
Question 35:
Skipped
VARIANT is used to FLATTEN hierarchical data. (True / False)

TRUE

FALSE

(Correct)

Explanation
VARIANT is a data type that can hold a value of any other data type (including ARRAY
and OBJECT). VARIANT is used to build and store hierarchical data. VARIANT is not a
function to FLATTEN. FLATTEN is a table function that is used to produce a lateral view
of a VARIANT, OBJECT, or ARRAY column.
Question 36:
Skipped
You have a dashboard that connects to Snowflake via JDBC. The dashboard is
refreshed hundreds of times per day. The data is very stable, only changing once or
twice per day. The query run by the dashboard user never changes. How will
Snowflake manage changing and non-changing data? Mark all true statements.

Snowflake will show the most up-to-date data each time the dashboard is
refreshed.

(Correct)

Snowflake will spin up a warehouse only if the underlying data has changed.

(Correct)

Snowflake will compile result cache data from all user results, so no warehouse
is needed.

Snowflake will re-use data from the Results Cache as long as it is still the most
up-to-date data available.

(Correct)

Explanation
As long as the underlying data has not changed and the query is the same, Snowflake reuses the data from
the cache. Please note, each time the persisted result for a query is reused, Snowflake
resets the 24-hour retention period for the result up to a maximum of 31 days from the
date and time that the query was first executed. After 31 days, the result is purged, and
the next time the query is submitted, a new result is generated and persisted.
Question 37:
Skipped
A role inherits all the privileges of those higher in the hierarchy. (True / False)

TRUE

FALSE

(Correct)

Explanation
A role inherits all the privileges of its underlying roles (those "lower" in the hierarchy).
• ACCOUNTADMIN inherits privileges from SECURITYADMIN, USERADMIN, SYSADMIN, and PUBLIC.
• SECURITYADMIN inherits privileges from USERADMIN and PUBLIC.
• USERADMIN and SYSADMIN inherit privileges from PUBLIC.
• PUBLIC inherits nothing.
Question 38:
Skipped
What size of the virtual warehouse needs to be created by the sysadmin while loading
using Snowpipe? (Select the best answer)

L Size


4XL Size

M Size

None of these

(Correct)

XS Size

Explanation
Snowpipe uses compute resources provided and managed by Snowflake (i.e. a
serverless compute model). These Snowflake-provided resources are automatically
resized and scaled up or down as required, and are charged and itemized using per-
second billing. Data ingestion is charged based upon the actual workloads. The user doesn't
need to create any warehouse, as it is taken care of by Snowflake.
Question 39:
Skipped
What size limit does VARIANT data type impose on individual rows?

10 GB

16 MB

(Correct)

100 MB

10 MB

16 GB

Explanation
The VARIANT data type imposes a 16 MB size limit on individual rows.
Question 40:
Skipped
What is the default encoding used by Snowflake while unloading data?

UTF-16

UTF-32

UTF-8

(Correct)

UTF

Explanation
Output files are always encoded using UTF-8, regardless of the file format; no other
character sets are supported.
Question 41:
Skipped
What key insights can we get from the Explain plan in Snowflake? (Select 3)

Join Ordering

(Correct)

Estimated Query Time


Partition Pruning

(Correct)

Join Types

(Correct)

Exact Query Time

Explanation
The key insights that the explain plan gives us in its results output are information
on partition pruning, join ordering, and join types.

The explain plan is a useful tool for determining the efficiency of your query. It's a
command that compiles your query to figure out all the steps Snowflake would have to
work through if it were actually to run the query.

Question 42:
Skipped
Which type of object key is only used for decryption?

Active key

Retired Key

(Correct)

None of these


Destroyed key

Explanation
Retired Key is used for decryption only.

Active Key is used for both encryption and decryption.

Destroyed Key is no longer used.

Question 43:
Skipped
Snowflake prunes micro-partitions based on a predicate with a subquery, even if the
subquery result is constant. (TRUE/FALSE)

FALSE

(Correct)

TRUE

Explanation
Please note, not all predicate expressions can be used to prune. Snowflake does not
prune micro-partitions based on a predicate with a subquery, even if the subquery
results in a constant.
Question 44:
Skipped
Which of these SQL functions helps generate the Scoped URL to access the
unstructured data file?

BUILD_SCOPED_FILE_URL

(Correct)

GET_ABSOLUTE_PATH


BUILD_STAGE_FILE_URL

GET_STAGE_LOCATION

GET_RELATIVE_PATH

GET_PRESIGNED_URL

Explanation
BUILD_SCOPED_FILE_URL generates a scoped Snowflake-hosted URL to a staged file
using the stage name and relative file path as inputs. A scoped URL is encoded and
permits access to a specified file for a limited period of time. Scoped URL: Encoded URL
that permits temporary access to a staged file without granting privileges to the stage.
The URL expires when the persisted query result period ends (i.e., the results cache
expires), which is currently 24 hours.
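
As a sketch (the stage name and file path are hypothetical):

SELECT BUILD_SCOPED_FILE_URL(@images_stage, 'photos/2023/half_dome.jpg');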
Question 45:
Skipped
Which of the following commands are File staging commands? (Select all that apply)

PUT

(Correct)

REMOVE

(Correct)

COPY INTO <table>

UNDROP

LIST

(Correct)

GET

(Correct)

Explanation

File Staging Commands – PUT (to a stage), GET (from a stage), LIST, and REMOVE.
These commands are specific for working with stages.
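
A sketch of a typical round trip with these commands (the stage name and file paths are illustrative; PUT and GET run from a client such as SnowSQL, not from the web UI):

PUT file:///tmp/data/mydata.csv @my_int_stage;           -- upload a local file to the stage
LIST @my_int_stage;                                       -- list files currently in the stage
GET @my_int_stage/mydata.csv.gz file:///tmp/downloads/;   -- download a staged file
REMOVE @my_int_stage/mydata.csv.gz;                       -- delete the staged file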

Question 46:
Skipped
What are all operations performed using Snowflake SQL API? (Select 4)

Fetch query results concurrently

(Correct)

Cancel the execution of a statement

(Correct)

Check the status of the execution of a statement

(Correct)

Submit SQL statements for execution

(Correct)

Calling stored procedures that return a table

Explanation
The Snowflake SQL API provides operations that we can use to:
• Submit SQL statements for execution.
• Check the status of the execution of a statement.
• Cancel the execution of a statement.
• Fetch query results concurrently.

Currently, Snowflake SQL API has limitations for the call command with stored
procedures that return a table (stored procedures with the RETURNS TABLE clause).

Question 47:
Skipped
Which table types does Snowflake support? (Select all that apply)

SECURED TABLE

PERMANENT TABLE

(Correct)

EXTERNAL TABLE

(Correct)

TRANSIENT TABLE

(Correct)


MATERIALIZED TABLE

TEMPORARY TABLE

(Correct)

Explanation
Snowflake supports four different table types: Permanent Table, Temporary Table,
Transient Table, and External Table.

Permanent Table: It persists until dropped. It is designed for data requiring the highest
data protection and recovery level and is the default table type. Permanent Tables can
be protected by up to 90 days of Time Travel with Enterprise Edition or above. Moreover,
Fail-safe covers all Permanent Tables.

Temporary Table: A Temporary table is tied to a specific session, which means it is tied
to a single user. Temporary tables are used for things like materializing subquery results.
Temporary tables are covered by at most one day of Time Travel and are not covered by
Fail-safe.

Transient Table: A Transient table is essentially a temporary table that can be shared by
more than one user. You have to drop it when you are finished with it, and it is also
covered by at most one day of Time Travel and is not covered by Fail-safe. NOTE - WE
CAN ALSO HAVE TRANSIENT DATABASES AND SCHEMAS.

External Table: An External Table is used to access data in a data lake. It is always
read-only because it is based on files that live outside of Snowflake and are not
managed by Snowflake, and Time Travel and Failsafe do not cover it.

Question 48:
Skipped
John has a SECURITYADMIN role. He created a custom DBA_ROLE and granted the
SYSADMIN role to DBA_ROLE. Then, John created a user, 'Monica.' John then granted
DBA_ROLE to Monica. Monica creates a Database Monica_DB. Monica then created a
Table T1 in Monica_DB under the PUBLIC schema. What should John do to access
Table T1, created by Monica?

USE ROLE SECURITYADMIN; USE DATABASE monica_db; Select * from t1;


GRANT ROLE DBA_ROLE TO John; USE ROLE DBA_ROLE; USE DATABASE
monica_db; Select * from t1;

(Correct)

USE ROLE dba_role; USE DATABASE monica_db; Select * from t1;

GRANT ROLE DBA_ROLE TO John; USE DATABASE monica_db; Select * from t1;

Explanation
It does not matter if John has created the DBA_ROLE. If John wants to access the
object created by DBA_ROLE, he needs to grant DBA_ROLE to himself.
Question 49:
Skipped
Materialized views can improve the performance of queries that use the same
subquery results repeatedly. (True/False)

TRUE

(Correct)

FALSE

Explanation
Materialized views are designed to improve query performance for workloads
composed of common, repeated query patterns. However, materializing intermediate
results incurs additional costs. As such, before creating any materialized views, you
should consider whether the costs are offset by the savings from re-using these results
frequently enough.
Question 50:
Skipped
Which of these are types of Snowflake releases? (Select 3)

Patch Release
(Correct)

Full Release

(Correct)

Part Release

Behavior Change Release

(Correct)

Bug Fix Release

Explanation
There are three types of releases:
• Full Release: A full release may include any of the following:
  • New features
  • Feature enhancements or updates
  • Fixes
• Patch Release: A patch release includes fixes only.
• Behavior Change Release: Every month, Snowflake deploys one behavior change release.
Behavior change releases contain changes to existing behaviors that may impact
customers.
Question 51:
Skipped
What are the security layers that Snowflake takes care of? (Select 4)

Data Protection

(Correct)


Authentication

(Correct)

Authorization

(Correct)

Access

(Correct)

Infrastructure

Explanation
Infrastructure Security is managed by the cloud provider.
Question 52:
Skipped
Suppose we resize a warehouse to a smaller size while it is executing SQL statements.
There will be no impact on already executing SQL statements. (True / False)

TRUE

(Correct)

FALSE

Explanation
Resizing a warehouse does not impact the statements that the warehouse is
executing. When resizing to a smaller size, compute resources are removed from the
warehouse only when they are no longer used to execute any current statements.
Question 53:
Skipped
Select the type of function that is used to execute an action in the system or return
information about the system.

Table Function

Aggregate Function

System Function

(Correct)

User-Defined Function

Window Function

Scalar Function

Explanation
System function that is used to execute an action in the system or return information
about the system.

Snowflake provides the following types of system functions:

• Control functions that allow you to execute actions in the system (e.g. aborting a
query).
• Information functions that return information about the system (e.g. calculating
the clustering depth of a table).
• Information functions that return information about queries (e.g. information
about EXPLAIN plans).
Question 54:
Skipped
Which data types are not supported by the Search Optimization Service? (Select 2)


DATE, TIME, and TIMESTAMP

BINARY

Fixed-point numbers (e.g. INTEGER, NUMERIC)

Semi-structured data types

(Correct)

Floating-point data types

(Correct)

VARCHAR

Explanation

The search optimization service currently supports equality predicate and IN list
predicate searches for the following data types: Fixed-point numbers (e.g. INTEGER,
NUMERIC). DATE, TIME, and TIMESTAMP. VARCHAR. BINARY. Currently, the search
optimization service does not support floating point data types, semi-structured data
types, or other data types not listed above.

[Important term for exam: EQUALITY for Search Optimization Service]

Question 55:
Skipped
Choose the true statements about Secure views. (Select 2)

Secure views provide improved data privacy and data sharing


(Correct)

Only non-materialized views can be defined as secure

Both non-materialized and materialized views can be defined as secure

(Correct)

Secure views allow faster access than Standard views

Only materialized views can be defined as secure

Explanation
Both non-materialized and materialized views can be defined as secure. Secure views
have advantages over standard views, including improved data privacy and data
sharing; however, they also have some performance impacts to take into consideration.
Question 56:
Skipped
UDF runs with either the caller’s or the owner’s rights. (TRUE / FALSE)

FALSE

(Correct)

TRUE

Explanation
UDF only runs as the function owner. A stored procedure runs with either the caller’s
rights or the owner’s rights. It cannot run with both at the same time.
A caller’s rights stored procedure runs with the privileges of the caller. The primary
advantage of a caller’s rights stored procedure is that it can access information about
that caller or about the caller’s current session. For example, a caller’s rights stored
procedure can read the caller’s session variables and use them in a query.

An owner’s rights stored procedure runs mostly with the privileges of the stored
procedure’s owner. The primary advantage of an owner’s rights stored procedure is that
the owner can delegate specific administrative tasks, such as cleaning up old data, to
another role without granting that role more general privileges, such as privileges to
delete all data from a specific table.

At the time that the stored procedure is created, the creator specifies whether the
procedure runs with the owner’s rights or the caller’s rights. The default is owner’s
rights.

Question 57:
Skipped
How can you produce a lateral view of a VARIANT, OBJECT or ARRAY Column?

Using FLATTEN table function

(Correct)

Using RESULT_SCAN table function

Using SPLIT_TO_TABLE table function

Using INFER_SCHEMA table function

Explanation
FLATTEN is a table function that produces a lateral view of a VARIANT, OBJECT, or
ARRAY column. INFER_SCHEMA table function is used to detect the file metadata
schema in a set of staged data files that contain semi-structured data and retrieves the
column definitions. RESULT_SCAN returns the result set of a previous command (within
24 hours of when you executed the query) as if the result was a table. SPLIT_TO_TABLE
table function splits a string (based on a specified delimiter) and flattens the results
into rows.
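
For example, a sketch (the table and column names are hypothetical) of flattening a VARIANT array into rows:

SELECT t.id,
       f.value:name::STRING AS employee_name
FROM raw_json t,
     LATERAL FLATTEN(input => t.payload:employees) f;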
Question 58:
Skipped
Snowflake supports various actions for Unstructured Data. Which one is not supported
by Snowflake?

Extract actual data from PDF and load it into Snowflake tables using Snowflake
WebUI out of the box option.

(Correct)

Share file access URLs with collaborators and partners.

Securely access data files located in cloud storage.

Load file access URLs and other file metadata into Snowflake tables.

Explanation
Snowflake supports the following actions for Unstructured data:
• Securely access data files located in cloud storage.
• Share file access URLs with collaborators and partners.
• Load file access URLs and other file metadata into Snowflake tables.
Question 59:
Skipped
Monica wants to share a UDF with other users. She wants to permit other users to use
it, but she doesn't want them to be able to see how it was defined or the underlying
logic behind it. What would you recommend to Monica?

Underlying logic can not be hidden with UDF


Monica should create a secure UDF

(Correct)

Monica should create an unsecure UDF and then block the logic in the sql
statement

Explanation
We can create a user-defined function to be unsecure or secure. A secure user-defined
function means if we permit someone else to use this UDF, they will not be able to see
how it was defined or the underlying logic behind it.
Question 60:
Skipped
Monica has run a query SELECT * FROM t1; After a couple of hours, John ran the same
query. John has the same role as Monica and has the SELECT permissions on table t1.
John got the result sooner than Monica. What could be the reason for the faster
result?

John's query resulted from the Local Disk cache.

John's query resulted from Remote disk.

John's query resulted from the Metadata cache.

John's query resulted from the Query Result cache.

(Correct)

Explanation
It is a typical use case of Query Result Cache. It is stored and managed by the Cloud
Services Layer. It is used if the identical query is run and base tables (t1 in this case)
have not changed. Query Result Cache doesn't require Virtual Warehouse and is
available for other users in the same role with SELECT permissions on all tables in the
query.
Question 61:
Skipped
Monica is confused about which sampling method she should use with one of the very
large tables, considering better performance. Which sampling method would you
recommend from BERNOULLI | ROW and SYSTEM | BLOCK?

BERNOULLI | ROW

SYSTEM | BLOCK

(Correct)

Explanation
SYSTEM | BLOCK sampling is often faster than BERNOULLI | ROW sampling. Also,
BERNOULLI | ROW method is good for Smaller Tables, and SYSTEM | BLOCK method is
for Larger Tables.
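
As an illustration (the table name is hypothetical), sampling roughly 1% of a very large table by blocks versus by rows:

SELECT * FROM big_events SAMPLE SYSTEM (1);     -- block-level sampling, usually faster on large tables
SELECT * FROM big_events SAMPLE BERNOULLI (1);  -- row-level sampling, better suited to smaller tables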
Question 62:
Skipped
Which command can be used to resume Automatic Clustering for a table?

START TABLE

ALTER TABLE

(Correct)

TRIGGER CLUSTERING

RESUME RECLUSTER

Explanation
Example - ALTER TABLE EMPLOYEE RESUME RECLUSTER; please note that RESUME
RECLUSTER is a clause, not a command.
Question 63:
Skipped
Which of these roles is granted the MANAGE GRANTS security privilege to be able to
modify any grant globally, including revoking it?

SECURITYADMIN

(Correct)

USERADMIN

SYSADMIN

ORGADMIN

ACCOUNTADMIN

Explanation
SECURITYADMIN role can manage any object grant globally, as well as create,
monitor, and manage users and roles.

More specifically, this role:

• Is granted the MANAGE GRANTS security privilege to be able to modify any grant, including revoking it.
• Inherits the privileges of the USERADMIN role via the system role hierarchy (i.e.
USERADMIN role is granted to SECURITYADMIN).
Question 64:
Skipped
Snowflake supports _______


SOAP for unstructured data

None of these

REST API for unstructured data

(Correct)

Both of these

Explanation
Currently, Snowflake only supports REST API for unstructured data.
Question 65:
Skipped
The automatic refresh of metadata of the directory table in the cloud storage does not
incur any charges to Snowflake Customers. (True/False)

FALSE

(Correct)

TRUE

Explanation
Snowflake customers' charges include an overhead to manage event notifications for
automatically refreshing directory table metadata. This overhead increases in relation
to the number of files added in cloud storage for customers' stages that include
directory tables. Snowflake charges 0.06 credits per 1000 event notifications received.
Question 66:
Skipped
Choose the false statements. (Select 2)


If Virtual Warehouse is suspended, then results in Local Disk Cache will be lost

Results are stored in SSD in Virtual Warehouse in case of Local Disk Cache.

Group and Execute similar queries on the different virtual warehouses to maximize local disk cache reuse, for performance and cost optimization.

(Correct)

Results are stored in the Cloud Storage layer in the case of Local Disk Cache.

(Correct)

Group and Execute similar queries on the same virtual warehouse to maximize
local disk cache reuse, for performance and cost optimization.

Explanation
As a best practice, Group and Execute similar queries on the same virtual warehouse to
maximize local disk cache reuse for performance and cost optimization. The results get
stored in the SSD of the Virtual Warehouse. So, if the Virtual Warehouse gets
suspended, then results get lost.
Question 67:
Skipped
If you want to create a warehouse that remains in a suspended state initially, which
property do you need to set for that warehouse?

INITIALLY_SUSPENDED = TRUE

(Correct)

AUTO_RESUME = TRUE

AUTO_RESUME = FALSE

AUTO_SUSPEND = 0

AUTO_SUSPEND = TRUE

Explanation
INITIALLY_SUSPENDED = TRUE | FALSE Specifies whether the warehouse is created
initially in the ‘Suspended’ state. The valid values are TRUE and FALSE. TRUE:
The warehouse is created, but suspended. FALSE: The warehouse starts running
after it is created. Default is FALSE
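
A minimal sketch (the warehouse name and size are illustrative):

CREATE WAREHOUSE reporting_wh
  WITH WAREHOUSE_SIZE = 'XSMALL'
       AUTO_SUSPEND = 300
       AUTO_RESUME = TRUE
       INITIALLY_SUSPENDED = TRUE;  -- created in the 'Suspended' state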
Question 68:
Skipped
Suppose you have an auto-scaling mode setup with a Standard policy. In what
situation does Snowflake spin up an additional cluster?

Only if the system estimates there’s enough query load to keep the cluster busy
for at least 6 minutes.

The first cluster starts immediately when either a query is queued or the system
detects that there’s one more query than the currently-running clusters can
execute.

(Correct)

Explanation
In Standard Scaling policy, the first cluster starts immediately when either a query is
queued, or the system detects that there’s one more query than the currently-running
clusters can execute.

Each successive cluster waits to start 20 seconds after the prior one has started. For
example, if your warehouse is configured with ten max clusters, it can take 200+
seconds to start all 10 clusters.
Question 69:
Skipped
During data loading using COPY INTO <table> command, if the string exceeds the
target column length, what options do you have to truncate the string? (Select 2)

ENFORCE_LENGTH = FALSE

(Correct)

ENFORCE_LENGTH = TRUE

TRUNCATECOLUMNS = FALSE

TRUNCATECOLUMNS = TRUE

(Correct)

Explanation
ENFORCE_LENGTH:
• If TRUE, the COPY statement produces an error if a loaded string exceeds the
target column length.
• If FALSE, strings are automatically truncated to the target column length.

TRUNCATECOLUMNS:

• If TRUE, strings are automatically truncated to the target column length.


• If FALSE, the COPY statement produces an error if a loaded string exceeds the
target column length.
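
For example, a sketch of a load that silently truncates over-length strings (the table, stage, and path are hypothetical):

COPY INTO employee_data
FROM @my_stage/data/
FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
TRUNCATECOLUMNS = TRUE;   -- or equivalently, ENFORCE_LENGTH = FALSE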
Question 70:
Skipped
Which one is not the Snowflake System-Defined role?

SYSADMIN

USERADMIN

ACCOUNTADMIN

SECURITYADMIN

ORGADMIN

DATABASEADMIN

(Correct)

Explanation
The following are the Snowflake System-Defined roles: ORGADMIN, ACCOUNTADMIN,
SECURITYADMIN, USERADMIN, SYSADMIN, PUBLIC. System-defined roles cannot be
dropped. In addition, the privileges granted to these roles by Snowflake cannot be
revoked.
Question 71:
Skipped
An HTTP client that sends a URL (either scoped URL or file URL) to the REST API must
be configured to allow redirects. (True/False)

FALSE

TRUE

(Correct)

Explanation
True, An HTTP client that sends a URL (either scoped URL or file URL) to the REST API
must be configured to allow redirects.
Question 72:
Skipped
What sized tables will experience the most benefit from clustering?

Tables with sizes between the range of 1 GB to 10 GB compressed

Tables in the multi-terabyte (TB) range

(Correct)

Tables with sizes between the range of 100 MB to 1 GB compressed

All sizes of tables

Explanation
Generally, tables in the multi-terabyte (TB) range will experience the most benefit from
clustering, mainly if DML is performed regularly/continually on these tables.
Question 73:
Skipped
Which copy option is used to delete the file from the Snowflake stage when data from
staged files are loaded successfully?

REMOVE = TRUE

DEL = TRUE

DELETE = TRUE

PURGE = TRUE

(Correct)

Explanation
Staged files can be deleted from a Snowflake stage (user stage, table stage, or named
stage) using the following methods:

1. Files that were loaded successfully can be deleted from the stage during a load
by specifying the PURGE copy option in the COPY INTO <table> command.

2. After the load completes, use the REMOVE command to remove the files in the
stage.

Please note, DELETE or REMOVE are not COPY command options. REMOVE is a
different DML command which is used to remove files in the stage.
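
A sketch of both approaches (the table and stage names are illustrative):

-- Delete successfully loaded files as part of the load itself:
COPY INTO employee_data FROM @my_stage/data/ FILE_FORMAT = (TYPE = CSV) PURGE = TRUE;

-- Or clean up afterwards with the separate REMOVE command:
REMOVE @my_stage/data/;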

Question 74:
Skipped
Snowpark is a new developer framework for Snowflake. It allows data engineers, data
scientists, and data developers to code in their familiar way with their language of
choice and execute the pipeline, ML workflow, and data apps faster and more securely
in a single platform. Which of the following languages does Snowpark support?
(Select 3)

Python

(Correct)

Java

(Correct)

C++

C#

Scala

(Correct)

Explanation
Snowpark support starts with Scala API, Java UDFs, and External Functions and
expands to Java & Python.

Snowpark is a new developer framework for Snowflake. It allows data engineers, data
scientists, and data developers to code in their familiar way with their language of
choice and execute the pipeline, ML workflow, and data apps faster and more securely
in a single platform. It brings deeply integrated, DataFrame-style programming to the
languages developers like to use and functions to help you efficiently expand more data
use cases. Now all these can be executed inside Snowflake using the elastic
performance engine.

[Please note about External Functions, Important for the exam: ]

External Functions - External functions are user-defined functions that are stored and
executed outside of Snowflake.

Question 75:
Skipped
You have a table with a 30-day retention period. If you increase the retention period to
40 days, how would it affect the data that would have been removed after 30 days?

The data will now retain an additional 10 days before moving into Fail-safe

(Correct)


The data will still be moved to Fail-safe at the end of the 30-day retention
period

Explanation
Increasing Retention causes the data currently in Time Travel to be retained for a more
extended time. For example, suppose you have a table with a 30-day retention period
and increase the period to 40 days. In that case, data that would have been removed
after 30 days is now retained for an additional 10 days before moving into Fail-safe.

Note that this does not apply to any data that is older than 30 days and has already
moved into Fail-safe.

Question 76:
Skipped
Which of these configurations will set up a warehouse in auto-scale mode?

Minimum Clusters = 6 and Maximum Clusters = 6

Minimum Clusters = 2 and Maximum Clusters = 6

(Correct)

Explanation
Auto-scale mode is enabled by specifying different values for the maximum and the
minimum number of clusters.

Note: In the exam, you may be asked what you will be set to make it maximized. Look
for an answer which says - both (Minimum and Maximum) with the same value.
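
For illustration, a multi-cluster warehouse in auto-scale mode (the name and sizes are hypothetical; multi-cluster warehouses require Enterprise Edition or higher):

CREATE WAREHOUSE analytics_wh
  WITH WAREHOUSE_SIZE = 'MEDIUM'
       MIN_CLUSTER_COUNT = 2
       MAX_CLUSTER_COUNT = 6          -- different min/max values enable auto-scale mode
       SCALING_POLICY = 'STANDARD';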

Question 77:
Skipped
A DBA_ROLE created a database. Later the DBA_ROLE was dropped. Who will own the
database now, which was created by the DBA_ROLE?

The database will get dropped too.


No one will be able to access the database.

The role that dropped the DBA_ROLE will own the database.

(Correct)

The DBA_ROLE can't get dropped as it is the database owner.

Explanation
The role that dropped the DBA_ROLE will own the database. It is an important question
for the exam.
Question 78:
Skipped
When staging uncompressed files in a Snowflake stage, Snowflake automatically
compresses the files unless compression is explicitly disabled. Which of the options is
used by Snowflake for compressing the file?

Brotli

Zstandard

bzip2

gzip

(Correct)

deflate

Explanation
When staging uncompressed files in a Snowflake stage, the files are automatically
compressed using gzip, unless compression is explicitly disabled.
Question 79:
Skipped
What is the default standard data retention period automatically enabled for all
Snowflake accounts?

30 days

1 day

(Correct)

90 days

0 days

Explanation
The standard retention period is 1 day (24 hours) and is automatically enabled for all
Snowflake accounts.
Question 80:
Skipped
Which AWS service is used to create private VPC endpoints that allow direct, secure
connectivity between your AWS VPCs and the Snowflake VPC without traversing the
public internet?

AWS PrivateVPC

AWS Direct Connect


Snowflake PrivateLink

AWS PrivateLink

(Correct)

Explanation
AWS PrivateLink is an AWS service for creating private VPC endpoints that allow direct,
secure connectivity between your AWS VPCs and the Snowflake VPC without traversing
the public internet. The connectivity is for AWS VPCs in the same AWS region.

For External Functions, you can also use AWS PrivateLink with private endpoints.

In addition, if you have an on-premises environment (e.g. a non-hosted data center), you
can choose to use AWS Direct Connect, in conjunction with AWS PrivateLink, to connect
all your virtual and physical environments in a single, private network.

Question 81:
Skipped
In the case of cloning massive databases or schemas, the original databases and
schemas get locked while the cloning operation is running. While cloning is in
progress, no DML operation can be done on the original databases and schemas.
(True/False)

TRUE

FALSE

(Correct)

Explanation
Cloning is not instantaneous, particularly for large objects (databases, schemas,
tables), and does not lock the object being cloned. A clone does not reflect any DML
statements applied to table data, if applicable, while the cloning operation is still
running.
Question 82:
Skipped
A Directory table is a separate database object that stores a catalog of staged files in
cloud storage. (True/False)

FALSE

(Correct)

TRUE

Explanation
A Directory table is not a separate database object; it stores a catalog of staged files in
cloud storage. Roles with sufficient privileges can query a directory table to retrieve file
URLs to access the staged files and other metadata.
Question 83:
Skipped
User-managed Tasks is recommended when you can fully utilize a single warehouse
by scheduling multiple concurrent tasks to take advantage of available compute
resources. (True /False)

FALSE

TRUE

(Correct)

Explanation
User-managed Tasks is recommended when you can fully utilize a single
warehouse by scheduling multiple concurrent tasks to take advantage of available
compute resources.

Serverless Tasks is recommended when you cannot fully utilize a warehouse because
too few tasks run concurrently or they run to completion quickly (in less than 1 minute).

Question 84:
Skipped
What is the best way to analyze the optimum warehouse size?


Execute queries of widely-varying size and/or complexity on the same
warehouse

Execute relatively homogeneous queries (size, complexity, data sets, etc.) on the same warehouse

(Correct)

Explanation
To achieve the best results, try to execute relatively homogeneous queries (size,
complexity, data sets, etc.) on the same warehouse; executing queries of widely-
varying size and/or complexity on the same warehouse makes it more difficult to
analyze warehouse load, which can make it more difficult to select the best size to
match the size, composition, and number of queries in your workload.
Question 85:
Skipped
What value will be returned by the following query? SELECT * FROM
TABLE(FLATTEN(input => parse_json('[]'), outer => true)) f;

NULL

(Correct)

[]

Explanation
The OUTER => TRUE argument with FLATTEN generates exactly one row for zero-row
expansions (with NULL in the KEY, INDEX, and VALUE columns).
Question 86:
Skipped
What does OVERWRITE parameter do with the INSERT command?


It helps ignore any errors while inserting the values into the table.

It de-duplicates while inserting and skips the insert if there is an exact similar
record in the table.

It drops the table, recreates, and inserts the values into the table.

It specifies that the target table should be truncated before inserting the values
into the table.

(Correct)

Explanation
OVERWRITE specifies that the target table should be truncated before inserting the
values into the table. Note that specifying this option does not affect the access control
privileges on the table.
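
A sketch (the table names are hypothetical):

INSERT OVERWRITE INTO daily_summary
SELECT order_date, SUM(amount)
FROM orders
GROUP BY order_date;   -- truncates DAILY_SUMMARY first, then inserts the query results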
Question 87:
Skipped
Which object parameter can users with the ACCOUNTADMIN role use to set the
minimum retention period for their account?

DATA_RETENTION_TIME_IN_DAYS

MIN_DATA_RETENTION_TIME_IN_DAYS

(Correct)

MIN_DATA_RETENTION_TIME_IN_HOURS

DATA_RETENTION_TIME_IN_MIN_DAYS
Explanation
The MIN_DATA_RETENTION_TIME_IN_DAYS account parameter can be set by users
with the ACCOUNTADMIN role to set a minimum retention period for the account. This
parameter does not alter or replace the DATA_RETENTION_TIME_IN_DAYS parameter
value. However, it may change the effective data retention time. When this parameter is
set at the account level, the effective minimum data retention period for an object is
determined by MAX(DATA_RETENTION_TIME_IN_DAYS,
MIN_DATA_RETENTION_TIME_IN_DAYS).
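
As a sketch, an account administrator could enforce a minimum retention of 7 days like this (the value is illustrative):

USE ROLE ACCOUNTADMIN;
ALTER ACCOUNT SET MIN_DATA_RETENTION_TIME_IN_DAYS = 7;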
Question 88:
Skipped
Which of the following is not a type of Snowflake's Internal stage?

Schema Stage

(Correct)

Table Stage

User Stage

Named Stage

Explanation
An internal stage is a cloud repository that resides within a Snowflake account and is
managed by Snowflake. An external stage is a pointer to a cloud file repository outside
a Snowflake account, which the customer manages independently.

There are three types of stages, and they are table stage, user stage, and named
stage.

Table Stage: When you create a table, the system will create a table stage with the
same name but with the prefix @%.
User Stage: A user stage is created whenever you create a new user in Snowflake. The
user stage uses the @~.

Named Stage: Named stages are created manually. They can be internal or external and
are prefixed with an @ and then the stage's name.

[Please note, Important for the exam: Table Stage and User Stage can not be dropped.]

Question 89:
Skipped
Which roles can use SQL to view the task history within a specified date range? (Select
all that apply)

Task Owner having OWNERSHIP privilege on a task

(Correct)

Role that has the global MONITOR EXECUTION privilege

(Correct)

Account Administrator (ACCOUNTADMIN)

(Correct)

Explanation
All of these roles can use SQL to view the task history within a specified date range. To
view the run history for a single task: Query the TASK_HISTORY table function (in the
Snowflake Information Schema). To view details on a DAG run that is currently
scheduled or is executing: Query the CURRENT_TASK_GRAPHS table function (in the
Snowflake Information Schema). To view the history for DAG runs that executed
successfully, failed, or were canceled in the past 60 minutes: Query the
COMPLETE_TASK_GRAPHS table function (in the Snowflake Information Schema).
Query the COMPLETE_TASK_GRAPHS View (in Account Usage).
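
For example, a sketch of querying the run history of a single task over the last 24 hours (the task name is hypothetical):

SELECT *
FROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY(
    SCHEDULED_TIME_RANGE_START => DATEADD('hour', -24, CURRENT_TIMESTAMP()),
    TASK_NAME => 'MY_NIGHTLY_TASK'));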
Question 90:
Skipped
How many servers are available in a large-sized cluster warehouse?

8

(Correct)

16

128

64

32

Explanation
There are eight servers available in a large-sized cluster warehouse.

Important: You may be asked for a Medium or any other-sized warehouse. (XS - 1, S - 2, M - 4, L - 8, XL - 16, and so on)

Question 91:
Skipped
What is the expiration period of a Scoped URL?

The URL expires when the persisted query result period ends

(Correct)

The URL never expires. It is permanent

Length of time specified in the expiration_time argument

Explanation
The expiration period of Scoped URL: The URL expires when the persisted query result
period ends. The expiration period of the File URL: It is permanent. The expiration period
of Pre-Signed URL: Length of time specified in the expiration_time argument.
Question 92:
Skipped
Which products does Snowflake offer for secure data sharing? (Select 3)

Direct share

(Correct)

Data Replication

Data Marketplace

(Correct)

Indirect share

Data Exchange

(Correct)

Explanation
Snowflake provides three product offerings for data sharing that utilize Snowflake
Secure Data Sharing to connect providers of data with consumers. Direct Share: It is
the simplest form of data sharing that enables account-to-account sharing of data
utilizing Snowflake’s Secure Data Sharing. As a data provider, you can easily share data
with another company so that your data shows up in their Snowflake account without
having to copy it over or move it.

Data Exchange: With a Snowflake data exchange, you actually set up a private exchange
between partners that you want to have in this exchange, and any member of that
exchange can share data in this private exchange. And any member of the exchange
can also consume data from that exchange. So instead of one-to-one or one-to-many,
it's many-to-many. But it's a very exclusive club. Only people who are invited into this
exchange can access any of that data.

Data Marketplace: The Snowflake Data Marketplace is where companies can publish
their data to be consumed by anybody who has a Snowflake account and wants to
connect to the marketplace and download that data.

Question 93:
Skipped
How long does Snowflake keep Snowpipe's load history?

31 days

64 days

30 days

14 days

(Correct)

1 day

Explanation
Snowflake keeps the Snowpipe's load history for 14 days. If you recreate [CREATE OR
REPLACE ..] the PIPE then the load history will reset to empty [ very important for the
exam ].
Question 94:
Skipped
Snowflake’s approach to access control combines aspects from which of the following
models?(Select 2)

Rule-based access control (RBAC)

Role-based Access Control (RBAC)

(Correct)

Mandatory Access Control (MAC)

Discretionary Access Control (DAC)

(Correct)

Explanation
Snowflake’s approach to access control combines aspects from both of the following
models:
• Discretionary Access Control (DAC): Each object has an owner, who can in turn
grant access to that object.
• Role-based Access Control (RBAC): Access privileges are assigned to roles,
which are in turn assigned to users.
Question 95:
Skipped
Which of the following data storage does incur the cost? (Select 3)

Only Active and Fail-Safe storage

All storage except Fail-Safe storage

Only Active and Time Travel Storage

Active data Storage

(Correct)

Time Travel Storage

(Correct)

Fail-Safe Storage

(Correct)

Explanation
Storage is calculated and charged for data regardless of whether it is in the Active,
Time Travel, or Fail-safe state.
Question 96:
Skipped
Which of these Sampling methods does Snowflake support? (Select 2)

Sample a fixed, specified number of rows

(Correct)


Sample exact rows of the table with the specified sequence keys

Sample a fraction of the table with a specified probability of including a given row

(Correct)

Explanation
SAMPLE / TABLESAMPLE returns a subset of rows sampled randomly from the
specified table. The following sampling methods are supported: Sample a fraction of a
table, with a specified probability for including a given row. The number of rows
returned depends on the size of the table and the requested probability. A seed can be
specified to make the sampling deterministic. Sample a fixed, specified number of
rows. The exact number of specified rows is returned unless the table contains fewer
rows. SAMPLE and TABLESAMPLE are synonymous and can be used interchangeably.
Question 97:
Skipped
In a Snowflake federated environment, Snowflake serves as the Identity provider (IdP).
(True/False)

FALSE

(Correct)

TRUE

Explanation
In a Snowflake federated environment, Snowflake serves as the Service Provider
(SP). An external, independent entity, such as Okta, serves as the Identity Provider (IdP).
Question 98:
Skipped
If the micro-partitions are constant, how much is the Clustering Overlap Depth?


2

10

20

1

(Correct)

Explanation
When there is no overlap in the range of values across all micro-partitions, the micro-
partitions are considered to be in a constant state (i.e. they cannot be improved by
clustering).

In the case of micro-partitions are in a constant state Overlap depth = 1.

Question 99:
Skipped
Which of these SQL functions helps extract the path of a staged file relative to its
location in the stage using the stage name and absolute file path in cloud storage as
inputs?

GET_RELATIVE_PATH

(Correct)

BUILD_SCOPED_FILE_URL

GET_ABSOLUTE_PATH

GET_PRESIGNED_URL

GET_STAGE_LOCATION

BUILD_STAGE_FILE_URL

Explanation
GET_RELATIVE_PATH extracts the path of a staged file relative to its location in the
stage using the stage name and absolute file path in cloud storage as inputs.
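
A sketch (the stage name and path are made up):

SELECT GET_RELATIVE_PATH(@images_stage, 's3://my-bucket/images/us/yosemite/half_dome.jpg');
-- returns the path relative to the stage location, e.g. 'us/yosemite/half_dome.jpg'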
Question 100:
Skipped
ACCOUNTADMIN role should not be used to create objects in Snowflake? (True/False)

TRUE

(Correct)

FALSE

Explanation
The ACCOUNTADMIN role is intended for performing initial setup tasks in the system
and managing account-level objects and tasks on a day-to-day basis. It should not be
used to create objects in your account unless you absolutely need these objects to have
the highest level of secure access.
