
Securing Snowflake

Snowflake provides industry-leading features that ensure the highest levels of security for your
account and users, as well as all the data you store in Snowflake.
These topics are intended primarily for administrators (i.e. users with the ACCOUNTADMIN,
SYSADMIN, or SECURITYADMIN roles).
Federated Authentication & SSO
Topics related to federated authentication to Snowflake.
Key-pair authentication and key-pair rotation
Using key-pair authentication to Snowflake.
Multi-factor authentication (MFA)
Using multi-factor authentication with Snowflake.
Snowflake OAuth
Topics related to using Snowflake OAuth to connect to Snowflake.
External OAuth
Topics related to using External OAuth to connect to Snowflake.
Authentication policies
Using authentication policies to restrict account and user authentication by client,
authentication methods, and more.
Controlling network traffic with network policies
Using network policies to restrict access to Snowflake.
Network rules
Using network rules with other Snowflake features to restrict access to and from Snowflake.
AWS VPC interface endpoints for internal stages
Using private connectivity to connect to Snowflake internal stages on AWS.
Azure Private Endpoints for Internal Stages
Using private connectivity to connect to Snowflake internal stages on Azure.
AWS PrivateLink and Snowflake
Using private connectivity to connect to your Snowflake account on AWS.
Azure Private Link and Snowflake
Using private connectivity to connect to your Snowflake account on Azure.
Google Cloud Private Service Connect and Snowflake
Using private connectivity to connect to your Snowflake account on Google Cloud Platform.
Snowflake Sessions & Session Policies
Using session policies to manage your Snowflake session.
SCIM
Topics related to using SCIM to provision users and groups to Snowflake.
Access Control
Topics related to role-based access control (RBAC) in Snowflake.
End to End Encryption
Using end-to-end encryption in Snowflake.
Encryption Key Management
Using encryption key management in Snowflake.

Overview of federated authentication and SSO


This topic describes the components that comprise a federated environment for
authenticating users, and the SSO (single sign-on) workflows supported by Snowflake.
What is a federated environment?
In a federated environment, user authentication is separated from user access through the use of one
or more external entities that provide independent authentication of user credentials. The
authentication is then passed to one or more services, enabling users to access the services through
SSO. A federated environment consists of the following components:
 Service provider (SP):
In a Snowflake federated environment, Snowflake serves as the SP.
 Identity provider (IdP):
The external, independent entity responsible for providing the following services to the SP:
 Creating and maintaining user credentials and other profile information.
 Authenticating users for SSO access to the SP.
Snowflake supports most SAML 2.0-compliant vendors as an IdP; however, certain vendors include
native support for Snowflake (see below for details).
Supported identity providers
The following vendors provide native Snowflake support for federated authentication and SSO:
 Okta — hosted service
 Microsoft AD FS (Active Directory Federation Services) — on-premises software (installed
on Windows Server)
In addition to the native Snowflake support provided by Okta and AD FS, Snowflake supports
using most SAML 2.0-compliant vendors as an IdP, including:
 Google G Suite
 Microsoft Azure Active Directory
 OneLogin
 Ping Identity PingOne
Note
To use an IdP other than Okta or AD FS, you must define a custom application for Snowflake in the
IdP.
For details about configuring Okta, AD FS, or another SAML 2.0-compliant vendor as the IdP for
Snowflake, see Configuring an identity provider (IdP) for Snowflake.
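For orientation only, the trust with a custom IdP is established in Snowflake with a SAML2 security integration along the lines of the following sketch; the integration name, issuer, SSO URL, certificate, and label are placeholders that must come from your IdP's metadata:

CREATE SECURITY INTEGRATION my_custom_idp              -- hypothetical integration name
  TYPE = SAML2
  ENABLED = TRUE
  SAML2_ISSUER = 'https://<idp_host>/<entity_id>'        -- placeholder: entity ID from the IdP metadata
  SAML2_SSO_URL = 'https://<idp_host>/sso/saml'          -- placeholder: IdP SSO endpoint
  SAML2_PROVIDER = 'CUSTOM'                             -- use 'OKTA' or 'ADFS' for the natively supported IdPs
  SAML2_X509_CERT = '<base64_encoded_idp_certificate>'  -- placeholder: IdP signing certificate
  SAML2_SP_INITIATED_LOGIN_PAGE_LABEL = 'My IdP';       -- label shown on the Snowflake login page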
Using multiple identity providers
You can configure Snowflake so different users authenticate using different identity providers.
Once you have configured all of the identity providers, follow the guidance in Using multiple
identity providers for federated authentication.
Note
Currently, only a subset of Snowflake drivers support the use of multiple identity providers. These
drivers include JDBC, ODBC, and Python.
Supported SSO workflows
Federated authentication enables the following SSO workflows:
 Logging into Snowflake.
 Logging out of Snowflake.
 System timeout due to inactivity.
The behavior for each workflow is determined by whether the action is initiated within Snowflake or
your IdP.
Login workflow
When a user logs in, the behavior of the system is determined by whether the login is initiated
through Snowflake or the IdP:
 Snowflake-initiated login:
To log in through Snowflake:
1. User goes to the Snowflake web interface.
2. User chooses to log in using the IdP configured for your account (Okta, AD FS, or a
custom IdP).
3. User authenticates with the IdP using their IdP credentials (e.g. email address and
password).
4. If authentication is successful, the IdP sends a SAML response to Snowflake to
initiate a session and displays the Snowflake web interface.
Note
For Snowflake-initiated login, the Snowflake login page provides two options for
authentication (IdP or Snowflake). To use federated authentication, users must choose the IdP
option and then enter their credentials when prompted. Choosing the Snowflake option
bypasses federated authentication and logs the user in using Snowflake’s native
authentication.
 IdP-initiated login:
To log in through the IdP for your account:
1. User goes to the IdP site/application and authenticates using their IdP credentials (e.g.
email address and password).
2. In the IdP, user clicks on the Snowflake application (if using Okta or AD FS) or the
custom application that has been defined in the IdP (if using another IdP).
3. The IdP sends a SAML response to Snowflake to initiate a session and then displays
the Snowflake web interface.
Logout workflow
When a user logs out, the available options are dictated by whether the IdP supports global logout or
only standard logout:
Standard
Requires users to explicitly log out of both the IdP and Snowflake to completely disconnect.
All IdPs support standard logout.
Global
Enables a user to log out of the IdP and subsequently all their Snowflake sessions. Support
for global logout is IdP-dependent.
In addition, the behavior of the system is determined by whether the logout is initiated through
Snowflake or the IdP:
 Snowflake-initiated logout:
Global logout is not supported from within Snowflake, regardless of whether the IdP supports
it. When a user logs out of a Snowflake session, they are logged out of that session only. All
their other current Snowflake sessions stay open, as does their IdP session. As a result, they
can continue working in their other sessions or they can initiate additional sessions without
having to re-authenticate through the IdP.
To completely disconnect, users must explicitly log out of both Snowflake and the IdP.
 IdP-initiated logout:
When a user logs out through an IdP, the behavior depends on whether the IdP supports
standard logout only or also global logout:
 AD FS supports both standard and global logout. If global logout is enabled, the AD
FS IdP login page provides an option for signing out from all sites that the user has
accessed. Selecting this option and clicking Sign Out logs the user out of AD FS and
all their Snowflake sessions. To access Snowflake again, they must re-authenticate
using AD FS.
 Okta supports standard logout only. When a user logs out of Okta, they are not
automatically logged out of any of their active Snowflake sessions and they can
continue working. However, to initiate any new Snowflake sessions, they must
authenticate again through Okta.
 All custom providers support standard logout; support for global logout varies by
provider.
Note
For a web-based IdP (e.g. Okta), closing the browser tab/window does not necessarily end the
IdP session. If a user’s IdP session is still active, they can still access Snowflake until the IdP
session times out.
Timeout workflow
When a user’s session times out, the behavior is determined by whether it is their Snowflake session
or IdP session that timed out:
 Snowflake timeout:
If a user logs into Snowflake using SSO and their Snowflake session expires due to
inactivity, the Snowflake web interface is disabled and the prompt for IdP authentication is
displayed:
 To continue using their expired Snowflake session, the user must authenticate again
through the IdP.
 The user can exit the session by clicking the Cancel button.
 The user can also go to the IdP site/application directly and relaunch Snowflake, but
this initiates a new Snowflake session.
 IdP timeout:
After a specified period of time (defined by the IdP), a user’s session in the IdP automatically
times out, but this does not affect their Snowflake sessions. Any Snowflake sessions that are
active at the time remain open and do not require re-authentication. However, to initiate any
new Snowflake sessions, the user must log into the IdP again.
SSO with private connectivity
Snowflake supports SSO with private connectivity to the Snowflake service for Snowflake accounts
on Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).
Currently, for any given Snowflake account, SSO works with only one account URL at a time: either
the public account URL or the URL associated with the private connectivity service on AWS,
Microsoft Azure, or Google Cloud Platform.
Snowflake supports using SSO with organizations, and you can use the corresponding URL in the
SAML2 security integration. For more information, see Configuring Snowflake to use federated
authentication.
To use SSO with private connectivity to Snowflake, configure private
connectivity before configuring SSO:
 If your Snowflake account is on AWS or Azure, follow the self-service instructions as listed
in AWS PrivateLink and Snowflake and Azure Private Link and Snowflake.
 If your Snowflake account is on GCP, you must contact Snowflake Support and provide the Snowflake account URL to use with Google Cloud Private Service Connect and Snowflake. To determine the correct URL to use, call the SYSTEM$GET_PRIVATELINK_CONFIG function in your Snowflake account on GCP, as shown in the example after this list.
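A minimal usage sketch: run the function and, optionally, flatten its JSON output into key/value rows to locate the private connectivity URLs. The exact keys returned depend on your cloud platform and configuration.

SELECT SYSTEM$GET_PRIVATELINK_CONFIG();

-- Optional: flatten the JSON result into key/value rows for readability.
SELECT key, value
  FROM TABLE(FLATTEN(INPUT => PARSE_JSON(SYSTEM$GET_PRIVATELINK_CONFIG())));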
Replicate the SSO Configuration
Snowflake supports replication and failover/failback of the SAML2 security integration from a
source account to a target account.
For details, see Replication of security integrations & network policies across multiple accounts.


Multi-factor authentication (MFA)


Snowflake supports multi-factor authentication (i.e. MFA) to provide increased login security for
users connecting to Snowflake. MFA support is provided as an integrated Snowflake feature,
powered by the Duo Security service, which is managed completely by Snowflake.
Users do not need to separately sign up with Duo or perform any tasks, other than installing the Duo
Mobile application, which is supported on multiple smartphone platforms (iOS, Android, Windows,
etc.). See the Duo User Guide for more information about supported platforms/devices and how Duo
multi-factor authentication works.
MFA is enabled on a per-user basis; however, at this time, users are not automatically enrolled in
MFA. To use MFA, users must enroll themselves.
Attention
At a minimum, Snowflake strongly recommends that all users with the ACCOUNTADMIN role be
required to use MFA.
Prerequisite
The Duo application service communicates through TCP port 443.
To ensure consistent behavior, update your firewall settings to include the Duo application service on
TCP port 443.
*.duosecurity.com:443
For more information, see the Duo documentation.
MFA login flow
A user enrolled in MFA follows the same overall login flow regardless of the interface used to connect: after entering their Snowflake credentials, the user completes a Duo challenge (push notification, phone call, or passcode) before the Snowflake session is created.
Enrolling a Snowflake user in MFA
Any Snowflake user can self-enroll in MFA through the web interface. For more information,
see Manage your user profile by using Snowsight.
Switching phones used for MFA
Instant Restore is a Duo feature that allows a user to back up the Duo app before switching to a new
phone. As long as a Snowflake user backs up their old phone first, they can use Instant Restore to
enable authentication on the new phone without interrupting MFA for Snowflake.
If a user does not back up the old phone or loses the old phone, the Snowflake account administrator
must disable MFA for each username that used the old phone before MFA can be re-enabled on the
new phone.
Managing MFA for an account and users
At the account level, MFA requires no management. It is automatically enabled for the account and available for all users to self-enroll. However, the account administrator (i.e. a user granted the ACCOUNTADMIN system role) may need to disable MFA for a user, either temporarily or permanently, for example if the user loses their phone or changes their phone number and cannot log in with MFA.
The account administrator can use the following properties of the ALTER USER command to perform these tasks (an example follows the list):
 MINS_TO_BYPASS_MFA
Specifies the number of minutes to temporarily disable MFA for the user so that they can log
in. After the time passes, MFA is enforced and the user cannot log in without the temporary
token generated by the Duo Mobile application.
 DISABLE_MFA
Disables MFA for the user, effectively canceling their enrollment. It may be necessary to
refresh the browser to verify that the user is no longer enrolled in MFA. To use MFA again,
the user must re-enroll.
Note
DISABLE_MFA is not a column in any Snowflake table or view. After an account
administrator executes the ALTER USER command to set DISABLE_MFA to TRUE, the value
for the EXT_AUTHN_DUO property is automatically set to FALSE.
To verify that MFA is disabled for a given user, execute a DESCRIBE USER statement and
check the value for the EXT_AUTHN_DUO property.
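As an illustration, the statements below temporarily bypass MFA, cancel MFA enrollment, and verify the result for a hypothetical user named JSMITH; adjust the user name and duration to your situation:

-- Let the user log in without MFA for the next 30 minutes:
ALTER USER jsmith SET MINS_TO_BYPASS_MFA = 30;

-- Cancel the user's MFA enrollment (the user must re-enroll to use MFA again):
ALTER USER jsmith SET DISABLE_MFA = TRUE;

-- Confirm enrollment status by checking the EXT_AUTHN_DUO property in the output:
DESCRIBE USER jsmith;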
Connecting to Snowflake with MFA
MFA login is designed primarily for connecting to Snowflake through the web interface, but is also
fully supported by SnowSQL and the Snowflake JDBC and ODBC drivers.
Using MFA token caching to minimize the number of prompts during authentication — optional
MFA token caching can help to reduce the number of prompts that must be acknowledged while
connecting and authenticating to Snowflake, especially when multiple connection attempts are made
within a relatively short time interval.
A cached MFA token is valid for up to four hours.
The cached MFA token is invalid if any of the following conditions are met:
1. The ALLOW_CLIENT_MFA_CACHING parameter is set to FALSE for the account.
2. The method of authentication changes.
3. The authentication credentials change (i.e. username and/or password).
4. The authentication credentials are not valid.
5. The cached token expires or is not cryptographically valid.
6. The account name associated with the cached token changes.
The overall process Snowflake uses to cache MFA tokens is similar to that used to cache connection
tokens for browser-based federated single sign-on. The client application stores the MFA token in
the keystore of the client-side operating system. Users can delete the cached MFA token from the
keystore at any time.
Snowflake supports MFA token caching with the following drivers and connectors on macOS and
Windows. This feature is not supported on Linux.
 ODBC driver version 2.23.0 (or later).
 JDBC driver version 3.12.16 (or later).
 Python Connector for Snowflake version 2.3.7 (or later).
Snowflake recommends consulting with internal security and compliance officers prior to enabling
MFA token caching.
Tip
MFA token caching can be combined with connection caching in federated single sign-on.
To combine these two features, ensure that the ALLOW_ID_TOKEN parameter is set to true in
tandem with the ALLOW_CLIENT_MFA_CACHING parameter.
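For example, an account administrator could enable the two parameters together; this is a sketch of the account-level statements described above:

-- Enable connection caching for federated single sign-on:
ALTER ACCOUNT SET ALLOW_ID_TOKEN = TRUE;

-- Enable MFA token caching in tandem:
ALTER ACCOUNT SET ALLOW_CLIENT_MFA_CACHING = TRUE;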
To enable MFA token caching, complete the following steps:
1. As an account administrator (i.e. a user with the ACCOUNTADMIN system role), set the ALLOW_CLIENT_MFA_CACHING parameter to TRUE for the account using the ALTER ACCOUNT command:
ALTER ACCOUNT SET ALLOW_CLIENT_MFA_CACHING = TRUE;
2. In the client connection string, update the authenticator value to authenticator = username_password_mfa.
3. Add the package or libraries needed by the driver or connector:
 If you are using the Snowflake Connector for Python, install the optional keyring
package by running:
 pip install "snowflake-connector-python[secure-local-storage]"
You must enter the square brackets ( [ and ]) as shown in the command. The square
brackets specify the extra part of the package that should be installed.
Use quotes around the name of the package as shown to prevent the square brackets
from being interpreted as a wildcard.
If you need to install other extras (for example, pandas for using the Python Connector
APIs for Pandas), use a comma between the extras:
pip install "snowflake-connector-python[secure-local-storage,pandas]"
 For the Snowflake JDBC Driver, see Adding the JNA classes to your classpath.
To disable MFA token caching, unset the ALLOW_CLIENT_MFA_CACHING parameter:
ALTER ACCOUNT UNSET ALLOW_CLIENT_MFA_CACHING;
To find all users who use MFA token caching as the second-factor authentication to log in, you can
execute the following SQL statement as an account administrator (a user with the
ACCOUNTADMIN role):
SELECT EVENT_TIMESTAMP,
USER_NAME,
IS_SUCCESS
FROM SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY
WHERE SECOND_AUTHENTICATION_FACTOR = 'MFA_TOKEN';
Using MFA with Snowsight
To sign in to Snowsight with MFA:
1. Sign in to Snowsight.
2. Enter your credentials (user login name and password).
3. If Duo Push is enabled, you may select a notification method. If Send Me a Push is selected,
a push notification is sent to your Duo Mobile application. When you receive the notification,
select Approve and you will be logged into Snowflake.

Instead of using the push notification, you can also choose to:
 Select Call Me to receive login instructions from a phone call to the registered mobile
device.
 Select Enter a Passcode to log in by manually entering a passcode provided by the
Duo Mobile application.
Using MFA with the Classic Console web interface
To log into the Classic Console with MFA:
1. Point your browser at the URL for your account. For example: https://myorg-account1.snowflakecomputing.com.
2. Enter your credentials (user login name and password).
3. If Duo Push is enabled, a push notification is sent to your Duo Mobile application. When you
receive the notification, select Approve and you will be logged into Snowflake.
Instead of using the push notification, you can also choose to:
 Select Call Me to receive login instructions from a phone call to the registered mobile
device.
 Select Enter a Passcode to log in by manually entering a passcode provided by the
Duo Mobile application.
Using MFA with SnowSQL
MFA can be used for connecting to Snowflake through SnowSQL. By default, the Duo Push
authentication mechanism is used when a user is enrolled in MFA.
To use a Duo-generated passcode instead of the push mechanism, the login parameters must include
one of the following connection options:
--mfa-passcode <string> OR --mfa-passcode-in-password
For more details, see SnowSQL (CLI client).
Using MFA with JDBC
MFA can be used for connecting to Snowflake via the Snowflake JDBC driver. By default, the Duo
Push authentication mechanism is used when a user is enrolled in MFA; no changes to the JDBC
connection string are required.
To use a Duo-generated passcode instead of the push mechanism, one of the following parameters
must be included in the JDBC connection string:
passcode=<passcode_string> OR passcodeInPassword=on
Where:
 passcode_string is a Duo-generated passcode for the user who is connecting. This can be a
passcode generated by the Duo Mobile application or an SMS passcode.
 If passcodeInPassword=on , then the password and passcode are concatenated, in the form
of <password_string><passcode_string> .
For more details, see JDBC Driver.
Examples of JDBC connection strings using Duo
JDBC connection string for user demo connecting to the xy12345 account (in the US West region)
using a Duo passcode:
jdbc:snowflake://xy12345.snowflakecomputing.com/?user=demo&passcode=123456
JDBC connection string for user demo connecting to the xy12345 account (in the US West region)
using a Duo passcode that is embedded in the password:
jdbc:snowflake://xy12345.snowflakecomputing.com/?user=demo&passcodeInPassword=on
Using MFA with ODBC
MFA can be used for connecting to Snowflake via the Snowflake ODBC driver. By default, the Duo
Push authentication mechanism is used when a user is enrolled in MFA; no changes to the ODBC
settings are required.
To use a Duo-generated passcode instead of the push mechanism, one of the following parameters
must be specified for the driver:
passcode=<passcode_string> OR passcodeInPassword=on
Where:
 passcode_string is a Duo-generated passcode for the user who is connecting. This can be a
passcode generated by the Duo Mobile application or an SMS passcode.
 If passcodeInPassword=on , then the password and passcode are concatenated, in the form
of <password_string><passcode_string> .
For more details, see ODBC Driver.
Using MFA with Python
MFA can be used for connecting to Snowflake via the Snowflake Python Connector. By default, the
Duo Push authentication mechanism is used when a user is enrolled in MFA; no changes to the
Python API calls are required.
To use a Duo-generated passcode instead of the push mechanism, one of the following parameters
must be specified for the driver in the connect() method:
passcode=<passcode_string> OR passcode_in_password=True
Where:
 passcode_string is a Duo-generated passcode for the user who is connecting. This can be a
passcode generated by the Duo Mobile application or an SMS passcode.
 If passcode_in_password=True , then the password and passcode are concatenated, in the form
of <password_string><passcode_string> .
For more details, see the description of the connect() method in the Functions section of the Python
Connector API documentation.
MFA error codes
The following are error codes associated with MFA that can be returned during the authentication
flow.
The errors are displayed with each failed login attempt. Historical data is also available in Snowflake
Information Schema and Account Usage:
 Information Schema provides data from within the past 7 days and can be queried using the LOGIN_HISTORY and LOGIN_HISTORY_BY_USER table functions.
 The Account Usage LOGIN_HISTORY View provides data from within the past year.
Error Code | Error | Description
390120 | EXT_AUTHN_DENIED | Duo Security authentication is denied.
390121 | EXT_AUTHN_PENDING | Duo Security authentication is pending.
390122 | EXT_AUTHN_NOT_ENROLLED | User is not enrolled in Duo Security. Contact your local system administrator.
390123 | EXT_AUTHN_LOCKED | User is locked from Duo Security. Contact your local system administrator.
390124 | EXT_AUTHN_REQUESTED | Duo Security authentication is required.
390125 | EXT_AUTHN_SMS_SENT | Duo Security temporary passcode is sent via SMS. Please authenticate using the passcode.
390126 | EXT_AUTHN_TIMEOUT | Timed out waiting for your login request approval via Duo Mobile. If your mobile device has no data service, generate a Duo passcode and enter it in the connect string.
390127 | EXT_AUTHN_INVALID | Incorrect passcode was specified.
390128 | EXT_AUTHN_SUCCEEDED | Duo Security authentication is successful.
390129 | EXT_AUTHN_EXCEPTION | Request could not be completed due to a communication problem with the external service provider. Try again later.
390132 | EXT_AUTHN_DUO_PUSH_DISABLED | Duo Push is not enabled for your MFA. Provide a passcode as part of the connection string.


Introduction to OAuth
Snowflake enables OAuth for clients through integrations. An integration is a Snowflake object that
provides an interface between Snowflake and third-party services. Administrators configure OAuth
using a Security integration, which enables clients that support OAuth to redirect users to an
authorization page and generate access tokens (and optionally, refresh tokens) for accessing
Snowflake.
Snowflake supports the OAuth 2.0 protocol for authentication and authorization using one of the
options below:
 Snowflake OAuth
 External OAuth
The following table compares Snowflake OAuth and External OAuth:
Category | Snowflake OAuth | External OAuth
Modify client application | Required | Required
Client application browser access | Required | Not required
Programmatic clients | Requires a browser | Best fit
Driver property | authenticator = oauth | authenticator = oauth
Security integration syntax | CREATE SECURITY INTEGRATION TYPE = OAUTH ... | CREATE SECURITY INTEGRATION TYPE = EXTERNAL_OAUTH ...
OAuth flow | OAuth 2.0 code grant flow | Any OAuth flow that the client can initiate with the External OAuth server
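To make the security integration syntax row concrete, the following is a minimal sketch of a Snowflake OAuth integration for a hypothetical custom client; the integration name, redirect URI, and token validity are illustrative placeholders, and additional parameters may apply to your client:

CREATE SECURITY INTEGRATION my_oauth_client                      -- hypothetical integration name
  TYPE = OAUTH
  ENABLED = TRUE
  OAUTH_CLIENT = CUSTOM
  OAUTH_CLIENT_TYPE = 'CONFIDENTIAL'
  OAUTH_REDIRECT_URI = 'https://<client_host>/oauth-redirect'     -- placeholder redirect URI
  OAUTH_ISSUE_REFRESH_TOKENS = TRUE
  OAUTH_REFRESH_TOKEN_VALIDITY = 86400;                           -- refresh token lifetime in seconds (example value)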
Auditing OAuth logins
To query login attempts by Snowflake users, Snowflake provides a login history:
 LOGIN_HISTORY, LOGIN_HISTORY_BY_USER (table functions)
 LOGIN_HISTORY View (view)
When OAuth is used to authenticate (successfully or unsuccessfully), the
FIRST_AUTHENTICATION_FACTOR column in the output has the value
OAUTH_ACCESS_TOKEN.
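For example, an account administrator could audit OAuth logins over the Account Usage view with a query along these lines:

SELECT event_timestamp,
       user_name,
       is_success
  FROM SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY
 WHERE first_authentication_factor = 'OAUTH_ACCESS_TOKEN'
 ORDER BY event_timestamp DESC;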
Private connectivity
Snowflake supports External OAuth with private connectivity to the Snowflake service.
Snowflake OAuth and Tableau can be used with private connectivity to Snowflake as follows:
Tableau Desktop
Starting with Tableau 2020.4, Tableau contains an embedded OAuth client that supports
connecting to Snowflake with the account URL for private connectivity to the Snowflake
service.
After upgrading to Tableau 2020.4, no further configuration is needed; use the corresponding
private connectivity URL for either AWS or Azure to connect to Snowflake.
Tableau Server
Starting with Tableau 2020.4, users can optionally configure Tableau Server to use the
embedded OAuth Client to connect to Snowflake with the account URL for private
connectivity to the Snowflake service.
To use this feature, create a new Custom Client security integration and follow the Tableau
instructions.
Tableau Online
Tableau Online does not support the Snowflake account URL for private connectivity to the
Snowflake service because Tableau Online requires access to the public Internet.
Please contact Tableau for more information regarding when Tableau Online will support the
private connectivity Snowflake account URLs for private connectivity to the Snowflake
service.
Important
To determine the account URL to use with private connectivity to the Snowflake service, call
the SYSTEM$GET_PRIVATELINK_CONFIG function.
Looker
Currently, combining Snowflake OAuth and Looker requires access to the public Internet.
Therefore, you cannot use Snowflake OAuth and Looker with private connectivity to the
Snowflake service.
For more information, refer to:
 SSO with private connectivity
 Configure Snowflake OAuth for partner applications
Clients, drivers, and connectors
Supported clients, drivers, and connectors can use OAuth to verify user login credentials.
Note the following:
 It is necessary to set the authenticator parameter to oauth and the token parameter to
the oauth_access_token.
 When passing the token value as a URL query parameter, it is necessary to URL-encode
the oauth_access_token value.
 When passing the token value to a Properties object (e.g. JDBC Driver), no modifications are
necessary.
For more information about connection parameters, refer to the reference documentation for the
following clients, drivers, or connectors:
 SnowSQL
 Python
 Go
 JDBC
 ODBC
 Spark Connector
 .NET Driver
 Node.js Driver
Client Redirect
Snowflake supports using Client Redirect with Snowflake OAuth and External OAuth, including
using Client Redirect and OAuth with supported Snowflake Clients.
For more information, refer to Redirecting Client Connections.
Replication
Snowflake supports replication and failover/failback with both the Snowflake OAuth and External
OAuth security integrations from the source account to the target account.
For details, refer to Replication of security integrations & network policies across multiple accounts.

External OAuth overview


This topic teaches you how to configure External OAuth servers that use OAuth 2.0 for
accessing Snowflake.
External OAuth integrates the customer’s OAuth 2.0 server to provide a seamless SSO
experience, enabling external client access to Snowflake.
Snowflake supports the following external authorization servers, custom clients, and
partner applications:
 Okta
 Microsoft Azure AD
 Ping Identity PingFederate
 External OAuth Custom Clients
 Microsoft Power BI
 Sigma
After configuring your organization’s External OAuth server, which includes any
necessary OAuth 2.0 Scopes mapping to Snowflake roles, the user can connect to
Snowflake securely and programmatically without having to enter any additional
authentication or authorization factors or methods. The user’s access to Snowflake data is
dependent on both their role and the role being integrated into the access token for the
session. For more information, refer to Scopes (in this topic).
Use cases and benefits
1. Snowflake delegates the token issuance to a dedicated authorization server to ensure that the
OAuth Client and user properly authenticate. The result is centralized management of tokens
issued to Snowflake.
2. Customers can integrate their policies for authentication (e.g. multi-factor, subnet, biometric)
and authorization (e.g. no approval, manager approval required) into the authorization server.
The result is greater security leading to more robust data protection by issuing challenges to
the user. If the user doesn’t pass the policy challenge(s), the Snowflake session is not
instantiated, and access to Snowflake data does not occur.
3. For programmatic clients that can access Snowflake and users that only initiate their
Snowflake sessions through External OAuth, no additional authentication configuration (i.e. setting a password) is necessary in Snowflake. The result is that service accounts or users used
exclusively for programmatic access will only ever be able to use Snowflake data when going
through the External OAuth configured service.
4. Clients can authenticate to Snowflake without browser access, allowing ease of integration
with the External OAuth server.
5. Snowflake’s integration with External OAuth servers is cloud-agnostic.
 It does not matter whether the authorization server exists in a cloud provider’s cloud
or if the authorization server is on-premises. The result is that customers have many
options in terms of configuring the authorization server to interact with Snowflake.
General workflow
For each of the supported identity providers, the workflow for OAuth relating to External OAuth
authorization servers can be summarized as follows. Note that the first step only occurs once and the
remaining steps occur with each attempt to access Snowflake data.
1. Configure your External OAuth authorization server in your environment and the security integration in Snowflake to establish a trust (a sketch of the integration follows this list).
2. A user attempts to access Snowflake data through their business intelligence application, and
the application attempts to verify the user.
3. On verification, the authorization server sends a JSON Web Token (i.e. OAuth token) to the
client application.
4. The Snowflake driver passes a connection string to Snowflake with the OAuth token.
5. Snowflake validates the OAuth token.
6. Snowflake performs a user lookup.
7. On verification, Snowflake instantiates a session for the user to access data in Snowflake
based on their role.
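As referenced in step 1, the trust on the Snowflake side is established with a security integration of type EXTERNAL_OAUTH. The following is a sketch for a hypothetical Okta authorization server; the issuer, JWS keys URL, audience, and mapping attribute are placeholders that must match your own authorization server and account:

CREATE SECURITY INTEGRATION external_oauth_okta                   -- hypothetical integration name
  TYPE = EXTERNAL_OAUTH
  ENABLED = TRUE
  EXTERNAL_OAUTH_TYPE = OKTA
  EXTERNAL_OAUTH_ISSUER = 'https://<okta_domain>/oauth2/<authorization_server_id>'                 -- placeholder
  EXTERNAL_OAUTH_JWS_KEYS_URL = 'https://<okta_domain>/oauth2/<authorization_server_id>/v1/keys'   -- placeholder
  EXTERNAL_OAUTH_AUDIENCE_LIST = ('https://<account_identifier>.snowflakecomputing.com')           -- placeholder
  EXTERNAL_OAUTH_TOKEN_USER_MAPPING_CLAIM = 'sub'                  -- claim in the JWT that identifies the user
  EXTERNAL_OAUTH_SNOWFLAKE_USER_MAPPING_ATTRIBUTE = 'LOGIN_NAME';  -- Snowflake user attribute to match against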
Scopes
The scope parameter in the authorization server limits the operations and roles permitted by the
access token and what the user can access after instantiating a Snowflake session.
Note that the ACCOUNTADMIN, ORGADMIN, and SECURITYADMIN roles are blocked by
default. If it is necessary to use one or more of these roles, use the ALTER ACCOUNT command to
set the EXTERNAL_OAUTH_ADD_PRIVILEGED_ROLES_TO_BLOCKED_LIST account
parameter to FALSE.
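For example, to allow these privileged roles to be requested through External OAuth scopes (weigh the security implications first), an account administrator would run:

-- Allow ACCOUNTADMIN, ORGADMIN, and SECURITYADMIN to be used with External OAuth:
ALTER ACCOUNT SET EXTERNAL_OAUTH_ADD_PRIVILEGED_ROLES_TO_BLOCKED_LIST = FALSE;

-- Revert to the default behavior that blocks these roles:
ALTER ACCOUNT SET EXTERNAL_OAUTH_ADD_PRIVILEGED_ROLES_TO_BLOCKED_LIST = TRUE;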
 For Okta, PingFederate, and Custom, use the role scope pattern in the following table.
 For Azure AD, refer to Prerequisite Step: Determine the OAuth Flow in Azure AD
 If you do not want to manage Snowflake roles in your External OAuth server, pass the static
value of SESSION:ROLE-ANY in the scope attribute of the token.
The following table summarizes External OAuth scopes. Note that if you do not define a scope, the
connection attempt to Snowflake will fail.
Scope/Role Connection Parameter | Description
session:role-any | Maps to the ANY role in Snowflake. Use this scope if the user's default role in Snowflake is desirable. The external_oauth_any_role_mode security integration parameter must be configured in order to enable the ANY role for a given External OAuth provider; for configuration details, refer to the ANY role section in Okta, Azure AD, PingFederate, or Custom. Note that with a Power BI to Snowflake integration, a Power BI user cannot switch roles using this scope.
session:role:custom_role | Maps to a custom Snowflake role. For example, if your custom role is ANALYST, your scope is session:role:analyst.
session:role:public | Maps to the PUBLIC Snowflake role.
Using secondary roles with External OAuth
Snowflake supports using secondary roles with External OAuth.
Snowflake OAuth does not support in-session role switching to secondary roles.
For more information, refer to:
 Secondary roles with Okta
 Secondary roles with Azure
 Secondary roles with PingFederate
 Secondary roles with Custom Clients
 Using secondary roles with Power BI SSO to Snowflake
Configuring External OAuth support
Snowflake supports the use of partner applications and custom clients that support External OAuth.
Refer to the list below if you need to configure partner applications or custom clients:
 Configuring partner applications .
 Configuring custom clients configured by your organization .
OAuth and secondary roles
Snowflake supports using secondary roles with External OAuth.
For more information, refer to Using secondary roles with External OAuth.
Error codes
Refer to the table below for descriptions of error codes associated with External OAuth:
390318 OAUTH_ACCESS_TOKEN_EXPIRED: OAuth access token expired. {0}
390144 JWT_TOKEN_INVALID: JWT token is invalid.
Troubleshooting
 Use the SYSTEM$VERIFY_EXTERNAL_OAUTH_TOKEN function to determine whether
your External OAuth access token is valid or needs to be regenerated (see the examples below).
 If you encounter an error message associated with a failed External OAuth login attempt, and
the error message has a UUID, you can ask an administrator that has a MONITOR privilege
assigned to their role to use the UUID from the error message to get a more detailed
description of the error using the SYSTEM$GET_LOGIN_FAILURE_DETAILS function.
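For example, assuming you have an access token and an error UUID on hand (both placeholders below), you could run:
-- Check whether an External OAuth access token is still valid
SELECT SYSTEM$VERIFY_EXTERNAL_OAUTH_TOKEN('<access_token>');

-- Retrieve details for a failed login attempt, using the UUID from the error message
SELECT SYSTEM$GET_LOGIN_FAILURE_DETAILS('<uuid>');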
Authentication policies
PREVIEW FEATURE— OPEN
Available to all accounts.
Authentication policies provide you with control over how a client or user authenticates
by allowing you to specify:
 The clients that users can use to connect to Snowflake, such
as Snowsight or Classic Console, drivers, or SnowSQL (CLI client). For more
information, see Limitations.
 The allowed authentication methods, such as SAML, passwords, OAuth, or Key
pair authentication.
 The SAML2 security integrations that are available to users during the login
experience. For example, if there are multiple security integrations, you can specify
which identity provider (IdP) can be selected and used to authenticate.
If you are using authentication policies to control which IdP a user can use to
authenticate, you can further refine that control using
the ALLOWED_USER_DOMAINS and ALLOWED_EMAIL_PATTERNS properties of the
SAML2 security integrations associated with the IdPs, as shown in the sketch below. For more details, see Using
multiple identity providers for federated authentication.
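As a minimal sketch of setting one of these properties on an existing SAML2 security integration; the integration name and domain are placeholders, and the property list you need depends on your IdP setup:
ALTER SECURITY INTEGRATION example_okta_integration SET
  ALLOWED_USER_DOMAINS = ('example.com');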
You can set authentication policies on the account or users in the account. If you set an
authentication policy on the account, then the authentication policy applies to all users in
the account. If you set an authentication policy on both an account and a user, then the
user-level authentication policy overrides the account-level authentication policy.
Note
If you already have access to the identifier-first login flow, you need to migrate your
account from the unsupported SAML_IDENTITY_PROVIDER account parameter using
the SYSTEM$MIGRATE_SAML_IDP_REGISTRATION function.
Use cases
The following list describes non-exhaustive use cases for authentication policies:
 You want to control the user login flows when there are multiple login options.
 You want to control the authentication methods, specific client types, and security
integrations available for specific users or all users.
 You have customers building services on top of Snowflake using Snowflake drivers, but the
customers do not want their users accessing Snowflake through Snowsight or the Classic
Console.
 You want to offer multiple identity providers as authentication options for specific users.
Limitations
The CLIENT_TYPES property of an authentication policy is a best effort method to block user logins
based on specific clients. It should not be used as the sole control to establish a security boundary.
Considerations
 Ensure authentication methods and security integrations listed in your authentication policies
do not conflict. For example, if you add a SAML2 security integration in the list of allowed
security integrations, and you only allow OAuth as an allowed authentication method, then
you cannot create an authentication policy.
 Use an additional non-restrictive authentication policy for administrators in case users are
locked out. For an example, see Preventing a lockout.
Security policy precedence
When more than one type of security policy is activated, precedence between the policies occur. For
example, network policies take precedence over authentication policies, so if the IP address of a
request matches an IP address in the blocked list of the network policy, then the authentication policy
is not checked, and evaluation stops at the network policy.
The following list describes the order in which security policies are evaluated:
1. Network policies: Allow or deny IP addresses, VPC IDs, and VPCE IDs.
2. Authentication policies - Allow or deny clients, authentication methods, and security
integrations.
3. Password policies (For local authentication only): Specify password requirements such as
character length, characters, password age, retries, and lockout time.
4. Session policies: Require users to re-authenticate after a period of inactivity
If a policy is assigned to both the account and the user authenticating, the user-level policy is
enforced.
Combining identifier-first login with authentication policies
By default, Snowsight and the Classic Console provide a generic login experience that presents
several options for logging in, regardless of whether those options are relevant to the user. This means that
authentication is attempted even when the selected login option is not valid for the user.
You can alter this behavior by enabling an identifier-first login flow for Snowsight or the Classic
Console. In this flow, Snowflake prompts the user for an email address or username before
presenting authentication options. Snowflake uses the email address or username to identify the user,
and then displays only the login options that are relevant to the user and allowed by the
authentication policy set on the account or user.
For instructions for enabling the identifier-first login flow, see Identifier-first login.
The following examples show how the identifier-first login flow and authentication policies can be
combined to control the login experience of the user.

Configuration: The authentication policy’s AUTHENTICATION_METHODS parameter only contains PASSWORD.
Result: Snowflake prompts the user for an email address or username, and a password.

Configuration: The authentication policy’s AUTHENTICATION_METHODS parameter only contains SAML, and there
is an active SAML2 security integration.
Result: Snowflake redirects the user to the identity provider’s login page if the email address or
username matches only one SAML2 security integration.

Configuration: The authentication policy’s AUTHENTICATION_METHODS parameter contains both PASSWORD and
SAML, and there is an active SAML2 security integration.
Result: Snowflake displays a SAML SSO button if the email address or username matches only one
SAML2 security integration, along with the option to log in with an email address or username, and password.

Configuration: The authentication policy’s AUTHENTICATION_METHODS parameter only contains SAML, and there
are multiple active SAML2 security integrations.
Result: Snowflake displays multiple SAML SSO buttons if the email address or username matches
multiple SAML2 security integrations.

Configuration: The authentication policy’s AUTHENTICATION_METHODS parameter contains both PASSWORD and
SAML, and there are multiple active SAML2 security integrations.
Result: Snowflake displays multiple SAML SSO buttons if the email address or username matches
multiple SAML2 security integrations, along with the option to log in with an email address or username, and password.
Creating an authentication policy
An administrator can use the CREATE AUTHENTICATION POLICY command to create a new
authentication policy, specifying which clients can connect to Snowflake, which authentication
methods can be used, and which security integrations are available to users. By default, all client
types, authentication methods, and security integrations can be used to connect to Snowflake.
See client type limitations in authentication policies.
For example, you can use the following commands to create a custom policy_admin role and an
authentication policy that only allows access through Snowsight or the Classic Console, and only
allows an account or user to authenticate using OAuth or a password:
USE ROLE ACCOUNTADMIN;

CREATE OR REPLACE DATABASE my_database;
USE DATABASE my_database;

CREATE OR REPLACE SCHEMA my_schema;
USE SCHEMA my_schema;

CREATE ROLE policy_admin;

GRANT USAGE ON DATABASE my_database TO ROLE policy_admin;
GRANT USAGE ON SCHEMA my_database.my_schema TO ROLE policy_admin;
GRANT CREATE AUTHENTICATION POLICY ON SCHEMA my_database.my_schema TO ROLE policy_admin;
GRANT APPLY AUTHENTICATION POLICY ON ACCOUNT TO ROLE policy_admin;

USE ROLE policy_admin;

CREATE AUTHENTICATION POLICY my_example_authentication_policy
  CLIENT_TYPES = ('SNOWFLAKE_UI')
  AUTHENTICATION_METHODS = ('OAUTH', 'PASSWORD');
For detailed examples, see Example login configurations.
Setting an authentication policy on an account or user
When you set an authentication policy on an account or user, the restrictions specified in the
authentication policy apply to the account or user. You can use the ALTER ACCOUNT or ALTER
USER commands to set an authentication policy on an account or user.
In a Snowsight worksheet, use either of the following commands to set an authentication policy on
an account or user:
ALTER ACCOUNT SET AUTHENTICATION POLICY my_example_authentication_policy;
ALTER USER example_user SET AUTHENTICATION POLICY
my_example_authentication_policy;
Only a security administrator (a user with the SECURITYADMIN role) or users with a role that has
the APPLY AUTHENTICATION POLICY privilege can set authentication policies on accounts or
users. To grant this privilege to a role so users can set an authentication policy on an account or user,
execute one of the following commands:
GRANT APPLY AUTHENTICATION POLICY ON ACCOUNT TO ROLE my_policy_admin;
GRANT APPLY AUTHENTICATION POLICY ON USER TO ROLE my_policy_admin;
For detailed examples, see Example login configurations.
Tracking authentication policy usage
Use the Information Schema table function POLICY_REFERENCES to return a row for each user
that is assigned to the specified authentication policy and a row for the authentication policy assigned
to the Snowflake account.
The following syntax is supported for authentication policies:
POLICY_REFERENCES( POLICY_NAME => '<authentication_policy_name>' )
POLICY_REFERENCES( REF_ENTITY_DOMAIN => 'USER', REF_ENTITY_NAME => '<username>' )
POLICY_REFERENCES( REF_ENTITY_DOMAIN => 'ACCOUNT', REF_ENTITY_NAME => '<accountname>' )
Where authentication_policy_name is the fully qualified name of the authentication policy.
For example, execute the following query to return a row for each user that is assigned the
authentication policy named authentication_policy_prod_1, which is stored in the database
named my_db and the schema named my_schema:
SELECT *
FROM TABLE(
my_db.INFORMATION_SCHEMA.POLICY_REFERENCES(
POLICY_NAME => 'my_db.my_schema.authentication_policy_prod_1'
)
);
Preventing a lockout
In situations where the authentication policy governing an account is strict, you can create a non-
restrictive authentication policy for an administrator to use as a recovery option in case of a lockout
caused by a security integration. For example, you can include the PASSWORD authentication method
for the administrator only. The user-level authentication policy overrides the more restrictive
account-level policy.
CREATE AUTHENTICATION POLICY admin_authentication_policy
AUTHENTICATION_METHODS = ('SAML', 'PASSWORD')
CLIENT_TYPES = ('SNOWFLAKE_UI', 'SNOWSQL', 'DRIVERS')
SECURITY_INTEGRATIONS = ('EXAMPLE_OKTA_INTEGRATION');
You can then assign this policy to an administrator:
ALTER USER <administrator_name> SET AUTHENTICATION POLICY admin_authentication_policy;
Replication of authentication policies
You can replicate authentication policies using failover and replication groups. For details,
see Replication and security policies.
Example login configurations
This section provides examples of how you can use and combine authentication policies and SAML2
security integrations to control login flow and security.
Restricting user access to Snowflake by client type
See client type limitations in authentication policies.
Create an authentication policy named restrict_client_type_policy that only allows access through
Snowsight or the Classic Console:
CREATE AUTHENTICATION POLICY restrict_client_type_policy
CLIENT_TYPES = ('SNOWFLAKE_UI')
COMMENT = 'Only allows access through the web interface';
Set the authentication policy on a user:
ALTER USER example_user SET AUTHENTICATION POLICY restrict_client_type_policy;
Allow authentication from multiple identity providers on an account
Create a SAML2 security integration that allows users to login through SAML using Okta as an IdP:
CREATE SECURITY INTEGRATION example_okta_integration
TYPE = SAML2
SAML2_SSO_URL = 'https://okta.example.com';
...
Create a security integration that allows users to login through SAML using Microsoft Azure as an
IdP:
CREATE SECURITY INTEGRATION example_azure_integration
TYPE = SAML2
SAML2_SSO_URL = 'https://azure-example_acme.com';
...
Create an authentication policy associated with
the example_okta_integration and example_azure_integration integrations:
CREATE AUTHENTICATION POLICY multiple_idps_authentication_policy
AUTHENTICATION_METHODS = ('SAML')
SECURITY_INTEGRATIONS = ('EXAMPLE_OKTA_INTEGRATION',
'EXAMPLE_AZURE_INTEGRATION');
Set the authentication policy on an account:
ALTER ACCOUNT SET AUTHENTICATION POLICY multiple_idps_authentication_policy;
Privileges and commands
Authentication Policy Privilege Reference
Snowflake supports the following authentication policy privileges to determine whether users can
create, set, and own authentication policies.
Note that operating on any object in a schema also requires the USAGE privilege on the parent
database and schema.
CREATE AUTHENTICATION POLICY
Enables creating a new authentication policy in a schema.
APPLY AUTHENTICATION POLICY
Enables applying an authentication policy at the account or user level.
OWNERSHIP
Grants full control over the authentication policy. Required to alter most properties of an
authentication policy.
Authentication policy DDL reference
For details about authentication policy privileges and commands, see the following reference
documentation:
CREATE AUTHENTICATION POLICY
Privilege: CREATE AUTHENTICATION POLICY on SCHEMA. Creates a new authentication policy.
ALTER AUTHENTICATION POLICY
Privilege: OWNERSHIP on AUTHENTICATION POLICY. Modifies an existing authentication policy.
DROP AUTHENTICATION POLICY
Privilege: OWNERSHIP on AUTHENTICATION POLICY. Removes an existing authentication policy from the system.
DESCRIBE AUTHENTICATION POLICY
Privilege: OWNERSHIP on AUTHENTICATION POLICY. Describes the properties of an existing authentication policy.
SHOW AUTHENTICATION POLICIES
Privilege: OWNERSHIP on AUTHENTICATION POLICY or USAGE on SCHEMA. Lists all of the authentication policies in the system.
Controlling network traffic with network policies
Using SQL to work with network policies and network rules is generally available.
SNOWSIGHT PREVIEW FEATURE— OPEN
Using Snowsight to work with network policies and network rules is currently in preview.
If you need a solution that is generally available, use SQL to work with network policies and
network rules.
Note
Network policies that existed before the introduction of network rules can no longer be
modified in Snowsight. Use the ALTER NETWORK POLICY command instead.
You can use network policies to control inbound access to the Snowflake service and
internal stage.
If you want to control outbound traffic from Snowflake to an external network
destination, see External network access overview.
About network policies
By default, Snowflake allows users to connect to the service and internal stage from any computer or
device. A security administrator (or higher) can use a network policy to allow or deny access to a
request based on its origin. The allowed list of the network policy controls which requests are
allowed to access the Snowflake service or internal stage, while the blocked list controls which
requests should be explicitly blocked.
A network policy does not directly specify the network identifiers in its allowed list or blocked list.
Rather, a network policy adds network rules to its allowed and blocked lists. These network rules
group related identifiers into logical units that are added to the allowed list and blocked list of a
network policy.
Important
Network policies that existed before the introduction of network rules still work. However, all new
network policies should use network rules, not the ALLOWED_IP_LIST and BLOCKED_IP_LIST
parameters, to control access from IP addresses. Best practice is to avoid using both ways to restrict
access in the same network policy.
Workflow
The general workflow of using network policies to control inbound network traffic is as follows (a combined example appears after the list):
1. Create network rules based on their purpose and type of network identifier.
2. Create one or more network policies that include the network rules that contain the identifiers
to be allowed or blocked.
3. Activate the network policy for an account, user, or security integration . A network policy
does not restrict network traffic until it is activated.
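The following minimal sketch ties these steps together; the rule name, policy name, and IP range are placeholders:
-- Step 1: group allowed client addresses into a network rule
CREATE NETWORK RULE corp_office_rule
  MODE = INGRESS
  TYPE = IPV4
  VALUE_LIST = ('192.168.1.0/24');

-- Step 2: add the rule to the allowed list of a network policy
CREATE NETWORK POLICY corp_office_policy
  ALLOWED_NETWORK_RULE_LIST = ('corp_office_rule');

-- Step 3: activate the policy (here, at the account level)
ALTER ACCOUNT SET NETWORK_POLICY = corp_office_policy;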
Interaction between allowed lists and blocked lists
When you add a network rule to the allowed list of a network policy, you do not have to use the
blocked list to explicitly block other identifiers of the same type; only the allowed identifiers have
access. However, identifiers of a different type are not automatically blocked. For example, if you
add an IPV4 network rule with a single IP address to the allowed list, all other IPv4 addresses are
blocked. There is no need to use the blocked list to restrict access from other IP addresses. However,
VPC endpoints still have access unless additional network rules are added to the network policy.
As an example, a network rule that uses private endpoint identifiers such as Azure LinkIDs or AWS
VPCE IDs to restrict access has no effect on requests coming from the public network. If you want
to restrict access based on private endpoint identifiers, and then completely block requests from
public IPv4 addresses, you must create two separate network rules, one for the allowed list and
another for the blocked list.
The following network rules could be combined in a network policy to allow a VPCE ID while
blocking public network traffic.
CREATE NETWORK RULE block_public_access
  MODE = INGRESS
  TYPE = IPV4
  VALUE_LIST = ('0.0.0.0/0');

CREATE NETWORK RULE allow_vpceid_access
  MODE = INGRESS
  TYPE = AWSVPCEID
  VALUE_LIST = ('vpce-0fa383eb170331202');

CREATE NETWORK POLICY allow_vpceid_block_public_policy
  ALLOWED_NETWORK_RULE_LIST = ('allow_vpceid_access')
  BLOCKED_NETWORK_RULE_LIST = ('block_public_access');
IP ranges
If you want to allow a range of IP addresses with the exception of a single IP address, you can create
two network rules, one for the allowed list and another for the blocked list.
For example, the following would allow requests from all IP addresses in the range
of 192.168.1.0 to 192.168.1.255, except 192.168.1.99. IP addresses outside the range are also blocked.
CREATE NETWORK RULE allow_access_rule
  MODE = INGRESS
  TYPE = IPV4
  VALUE_LIST = ('192.168.1.0/24');

CREATE NETWORK RULE block_access_rule
  MODE = INGRESS
  TYPE = IPV4
  VALUE_LIST = ('192.168.1.99');

CREATE NETWORK POLICY public_network_policy
  ALLOWED_NETWORK_RULE_LIST = ('allow_access_rule')
  BLOCKED_NETWORK_RULE_LIST = ('block_access_rule');
Network policy precedence
You can apply a network policy to an account, a security integration, or a user. If there are network
policies applied to more than one of these, the most specific network policy overrides more general
network policies. The following summarizes the order of precedence:
Account
Network policies applied to an account are the most general network policies. They are
overridden by network policies applied to a security integration or user.
Security Integration
Network policies applied to a security integration override network policies applied to the
account, but are overridden by a network policy applied to a user.
User
Network policies applied to a user are the most specific network policies. They override network
policies applied to both the account and security integrations.
Bypassing a network policy
It is possible to temporarily bypass a network policy for a set number of minutes by configuring the
user object property MINS_TO_BYPASS_NETWORK_POLICY, which can be viewed by
executing DESCRIBE USER. Only Snowflake can set the value for this object property. Please
contact Snowflake Support to set a value for this property.
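Once Snowflake Support has set a value, a sketch of how to check it for a hypothetical user:
-- MINS_TO_BYPASS_NETWORK_POLICY appears in the DESCRIBE USER output
DESCRIBE USER example_user;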
About network rules
While restrictions on incoming requests to Snowflake are ultimately applied to an account, user, or
security integration with a network policy, the administrator can organize these restrictions
using network rules, which are schema-level objects.
Each network rule groups together the identifiers for a particular type of request origin. For example,
one network rule might include all of the IPv4 addresses that should be allowed to access Snowflake
while another groups together all of the private endpoints that should be blocked.
A network rule, however, does not specify whether it is allowing or blocking the origin of a request.
It simply organizes related origins into a logical unit. Administrators specify whether that unit should
be allowed or blocked when they create or modify a network policy.
If you already understand the strategies for using network rules with network policies, see Working
with network rules.
Best practices
 Limit the scope. Network rules are designed to group together small units of related network
identifiers. Previously, network policies often contained a large, monolithic list of IP
addresses that should be allowed or blocked. The introduction of network rules changes this
strategy. For example, you could break up network identifiers by:
 Creating a network rule to contain client IP addresses for the North American region,
and a different rule for the Europe and Middle Eastern region.
 Creating a network rule whose purpose is to allow access for a special population,
such as highly privileged users and service account users. This network rule can be
added to a network policy that is applied to individual users.
 Creating a network rule that is scoped to one or more data apps.
With the introduction of network rules, Snowflake recommends that you also limit the scope
of network policies. Whenever possible, narrowly scope a network policy to a group of users
or a security integration rather than an entire account.
 Add comments. When creating a network rule, use the COMMENT property to keep track of
what the rule is supposed to do. Comments are important because Snowflake encourages a
large number of small targeted rules over fewer monolithic ones.
You can use the SHOW NETWORK RULES command to list all of the network rules,
including their comments (see the example after this list).
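For example, a sketch of a narrowly scoped, commented rule; the rule name, IP ranges, and comment text are placeholders:
CREATE NETWORK RULE na_office_access_rule
  MODE = INGRESS
  TYPE = IPV4
  VALUE_LIST = ('198.51.100.0/24', '203.0.113.0/24')
  COMMENT = 'Allowed client IP ranges for North American offices';

-- List all network rules, including their comments
SHOW NETWORK RULES;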
Supported identifiers
Each network rule contains a list of one or more network identifiers of the same type (e.g. an IPv4
address rule or a private endpoint rule).
A network rule’s TYPE property identifies what type of identifiers the network rule contains.
For a complete list of the types of identifiers that can be restricted using network rules,
see Supported network identifiers.
Protecting the Snowflake service
This section discusses how to use network rules to restrict access to the Snowflake service only. If
you want to restrict access to both the service and the internal stage of an account on AWS,
see Protecting internal stages on AWS.
To restrict access to the Snowflake service, set the MODE property of the network rule to INGRESS.
You can then use the TYPE property to specify the identifiers that should be allowed or blocked.
Protecting internal stages on AWS
PREVIEW FEATURE— OPEN
Using network rules to protect internal stages is in preview for all accounts on AWS. Using network
rules to protect the Snowflake service is generally available.
This section discusses how to use network rules to restrict access to internal stages on AWS,
including how to simultaneously restrict access to the Snowflake service and internal stage. It
includes:
 Limitations
 Prerequisite: Enabling internal stage restrictions
 Guidelines for internal stages
 Strategy for protecting the internal stage only
 Strategies for protecting both service and internal stage
Note
You cannot use a network rule to restrict access to an internal stage on Microsoft Azure. However,
you can block all public access to an internal stage on Azure if you are using Azure Private Link. For
details, see Blocking public access (Optional).
Limitations
 A network policy that is activated for a security integration does not restrict access to an
internal stage.
 Network rule restrictions don’t apply to requests accessing an internal stage using a presigned
URL that has been generated by the GET_PRESIGNED_URL function.
Prerequisite: Enabling internal stage restrictions
To use network rules to restrict access to the internal stage of an account, the account administrator
must enable the ENFORCE_NETWORK_RULES_FOR_INTERNAL_STAGES parameter.
Network rules do not protect an internal stage until this parameter is enabled, regardless of the rule’s
mode.
To allow network rules to restrict access to internal stages, execute:
USE ROLE ACCOUNTADMIN;
ALTER ACCOUNT SET ENFORCE_NETWORK_RULES_FOR_INTERNAL_STAGES = true;
Guidelines for internal stages
In addition to the best practices for network rules, you should adhere to the following guidelines
when creating network rules to restrict access to internal stages.
 Limit the number of identifiers. Due to restrictions enforced by AWS S3 session policies,
your strategy for protecting an internal stage must conform to the following limits:
 A network rule that contains IPv4 addresses cannot contain more than 15 IP address
ranges per rule. If you have more than 15 address ranges, create an additional
network rule.
 When you are allowing or blocking traffic based on the VPCE IDs of VPC endpoints,
there is a cumulative limit of 10 VPCE IDs per network policy. For example, if one
network rule contains 5 VPCE IDs and another contains 6 VPCE IDs, you cannot add
both rules to the same network policy.
If you encounter PolicySizeExceeded exceptions when fetching the scoped credentials from
AWS STS, break up the network identifiers into smaller network rules.
 Use same rule to protect both service and internal stage. When a rule contains IPv4
addresses and the mode of a network rule is INGRESS, a single rule can protect both the
Snowflake service and the internal stage of the account. Snowflake recommends using a
single rule even when the IP addresses accessing the service are different from the IP
addresses accessing the internal stage. This approach improves organization, manageability,
and auditing.
 Test Network Policies. Snowflake recommends testing network rules using user-level
network policies. If you encounter PolicySizeExceeded exceptions when fetching the scoped
credentials from AWS STS, break up the network identifiers into smaller network rules.
Strategy for protecting the internal stage only
To restrict access to an AWS internal stage without affecting how network traffic accesses the
Snowflake service, create a network rule with the following settings (a sketch appears after the note below):
 Set the MODE parameter to INTERNAL_STAGE.
 Set the TYPE parameter to AWSVPCEID.
Note
You cannot restrict access to the internal stage based on the IP address of the request without also
restricting access to the Snowflake service.
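A minimal sketch of this strategy, assuming a hypothetical VPCE ID and that the ENFORCE_NETWORK_RULES_FOR_INTERNAL_STAGES parameter is enabled:
-- Restricts the internal stage only; the Snowflake service is unaffected
CREATE NETWORK RULE stage_only_vpce_rule
  MODE = INTERNAL_STAGE
  TYPE = AWSVPCEID
  VALUE_LIST = ('vpce-0fa383eb170331202');

CREATE NETWORK POLICY stage_only_policy
  ALLOWED_NETWORK_RULE_LIST = ('stage_only_vpce_rule');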
Strategies for protecting both service and internal stage
When restricting access to both the Snowflake service and internal stage, the implementation strategy
varies based on whether network traffic is traversing the public internet or AWS Private Link.
In the following comparison, “Public” indicates that traffic to the service or internal stage is
traversing the public internet while “Private” indicates traffic is using AWS Private Link. Find the
combination that matches your environment, and then choose the implementation strategy
accordingly.
Service Connection: Public. Internal Stage Connection: Public.
Create a single network rule with TYPE=IPV4 and MODE=INGRESS. Include all IP addresses that
access the service and internal stage.
Service Connection: Private. Internal Stage Connection: Private.
The strategy depends on whether you want to restrict access using private IP addresses or the
VPCE IDs of the VPC endpoints:
 (Recommended) If using VPCE IDs, you must create two network rules, even if the same VPC
endpoint is connecting to both the service and the internal stage:
 For the service, create a network rule with TYPE=AWSVPCEID and MODE=INGRESS.
 For the internal stage, create a network rule with TYPE=AWSVPCEID and MODE=INTERNAL_STAGE.
 If using private IP addresses, create a network rule with TYPE=IPV4 and MODE=INGRESS.
Include all private IP addresses that access the service and internal stage. [1]
Service Connection: Public. Internal Stage Connection: Private.
The strategy depends on whether you want to restrict access to the internal stage using private IP
addresses or the VPCE IDs of the VPC endpoints:
 (Recommended) If using VPCE IDs, create two network rules, one for the service and one for
the internal stage:
 For the service, create a network rule with TYPE=IPV4 and MODE=INGRESS.
 For the internal stage, create a network rule with TYPE=AWSVPCEID and MODE=INTERNAL_STAGE.
 If using private IP addresses, create a single network rule with TYPE=IPV4 and MODE=INGRESS.
Include all IP addresses that access the service and internal stage.
Service Connection: Private. Internal Stage Connection: Public. [1]
You must use private IP addresses for the service (you cannot use VPCE IDs). Create a single
network rule with TYPE=IPV4 and MODE=INGRESS. Include all IP addresses that access the
service and internal stage.
[1] If you have implemented private connectivity to either the service or the internal stage, Snowflake
recommends implementing it for both.
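As a sketch of the recommended Private/Private strategy above, assuming the same hypothetical VPC endpoint connects to both the service and the internal stage:
-- Rule for traffic to the Snowflake service
CREATE NETWORK RULE service_vpce_rule
  MODE = INGRESS
  TYPE = AWSVPCEID
  VALUE_LIST = ('vpce-0fa383eb170331202');

-- Rule for traffic to the internal stage
CREATE NETWORK RULE stage_vpce_rule
  MODE = INTERNAL_STAGE
  TYPE = AWSVPCEID
  VALUE_LIST = ('vpce-0fa383eb170331202');

CREATE NETWORK POLICY private_connectivity_policy
  ALLOWED_NETWORK_RULE_LIST = ('service_vpce_rule', 'stage_vpce_rule');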
Working with network rules
You can use Snowsight or SQL to manage the lifecycle of a network rule.
Create a network rule
You need the CREATE NETWORK RULE privilege on the schema to create a network rule. By
default, only the ACCOUNTADMIN and SECURITYADMIN roles, along with the schema owner,
have this privilege.
The mode of a network rule that will be used by a network policy must
be INGRESS or INTERNAL_STAGE.
To gain a better understanding of best practices and strategies for creating network rules, see About
network rules.
You can create a network rule using Snowsight or by executing a SQL command:
Snowsight
1. Sign in to Snowsight.
2. Select Admin » Security.
3. Select the Network Rules tab.
4. Select + Network Rule.
5. Enter the name of the network rule.
6. Select the schema of the network rule. Network rules are schema-level objects.
7. Optionally, add a descriptive comment for the network rule to help organize and
maintain network rules in the schema.
8. In the Type drop-down, select the type of identifier being defined in the network rule.
The Host Port type is not a valid option for network rules being used with network
policies.
9. In the Mode drop-down, select Ingress or Internal Stage. The Egress mode is not a
valid option for network rules being used with network policies.
10. Enter a comma-separated list of the identifiers that will be allowed or blocked when
the network rule is added to a network policy. The identifiers in this list must all be of
the type specified in the Type drop-down.
11. Select Create Network Rule.
SQL
An administrator can execute the CREATE NETWORK RULE command to create a new
network rule, specifying a list of network identifiers along with the type of those identifiers.
For example, to use a custom role to create a network rule that can be used to allow or block
traffic from a range of IP addresses:
GRANT USAGE ON DATABASE securitydb TO ROLE network_admin;
GRANT USAGE ON SCHEMA securitydb.myrules TO ROLE network_admin;
GRANT CREATE NETWORK RULE ON SCHEMA securitydb.myrules TO ROLE network_admin;

USE ROLE network_admin;

CREATE NETWORK RULE cloud_network TYPE = IPV4 VALUE_LIST = ('47.88.25.32/27');
Modify a network rule
You can modify the identifiers and comment of an existing network rule, but you cannot modify its
type, mode, name, or schema.
You can add or remove identifiers and comments from an existing network rule using Snowsight or
SQL.
Snowsight
1. Sign in to Snowsight.
2. Select Admin » Security.
3. Select the Network Rules tab.
4. Find the network rule, select the … button, and then select Edit.
5. Modify the comma-delimited list of identifiers or the comment.
6. Select Update Network Rule.
SQL
Execute an ALTER NETWORK RULE statement.
Working with network policies
Once you have grouped network identifiers into network rules, you are ready to add those network
rules to the allowed list and blocked list of a new or existing network policy. There is no limit on
how many network rules can be added to a network policy.
For general information about how network policies control inbound access to the Snowflake service
and internal stage, see About network policies.
Create a network policy
Only security administrators (i.e. users with the SECURITYADMIN role) or higher or a role with
the global CREATE NETWORK POLICY privilege can create network policies. Ownership of a
network policy can be transferred to another role.
You can create a network policy using Snowsight or SQL:
Snowsight
1. Sign in to Snowsight.
2. Select Admin » Security.
3. Select the Network Policies tab.
4. Select + Network Policy.
5. Enter the name of the network policy.
6. Optionally, enter a descriptive comment.
7. To add a network rule to the allowed list, select Allowed, and then select Select rule.
You can add multiple network rules to the allowed list by re-selecting Select rule.
8. To add a network rule to the blocked list, select Blocked, and then select Select rule.
You can add multiple network rules to the blocked list by re-selecting Select rule.
9. Select Create Network Policy.
SQL
Execute a CREATE NETWORK POLICY statement.
Identify network policies in your account
You can identify the network policies in your account using Snowsight or SQL.
Snowsight
1. Sign in to Snowsight.
2. Select Admin » Security.
3. Select the Network Policies tab.
SQL
Call the POLICY_REFERENCES Information Schema table function, or query
the POLICY_REFERENCES Account Usage view.
Modify a network policy
You can add or remove network rules from the allowed list and blocked list of an existing network
policy using Snowsight or SQL. If you are editing a network policy that uses the
ALLOWED_IP_LIST and BLOCKED_IP_LIST parameters instead of a network rule, you must use
SQL to modify the network policy.
Snowsight
1. Sign in to Snowsight.
2. Select Admin » Security.
3. Select the Network Policies tab.
4. Find the network policy, select the … button, and then select Edit.
5. To add a network rule to the allowed list, select Allowed, and then select Select rule.
You can add multiple network rules to the allowed list by re-selecting Select rule.
6. To add a network rule to the blocked list, select Blocked, and then select Select rule.
You can add multiple network rules to the blocked list by re-selecting Select rule.
7. To remove a network rule from the allowed list or blocked list of the network policy:
a. Select Allowed or Blocked.
b. Find the network rule in the list and select X to remove.
SQL
Use the ALTER NETWORK POLICY command to add or remove network rules from an
existing network policy.
When adding a network rule to the allowed list or blocked list, you can either replace all
existing network rules in the list or add the new rule while keeping the existing list. The
following examples show each of these options:
 Use the SET clause to replace network rules in the blocked list with a new network
rule named other_network:
ALTER NETWORK POLICY my_policy SET BLOCKED_NETWORK_RULE_LIST = ( 'other_network' );
 Use the ADD clause to add a single network rule to the allowed list of an existing
network policy. Network rules that were previously added to the policy’s allowed list
remain in effect.
ALTER NETWORK POLICY my_policy ADD ALLOWED_NETWORK_RULE_LIST = ( 'new_rule' );
You can also remove a network rule from an existing list without replacing the entire list. For
example, to remove a network rule from the network policy’s blocked list:
ALTER NETWORK POLICY my_policy REMOVE BLOCKED_NETWORK_RULE_LIST = ( 'other_network' );
Activating a network policy
A network policy does not restrict inbound network traffic until it has been activated for an account,
user, or security integration. For instructions on how to activate at each level, see:
 Activate a network policy for your account
 Activate network policies for individual users
 Activate network policies for security integrations
If you are activating multiple network policies at different levels (for example, both account- and
user-level network policies), see Network policy precedence.
Activate a network policy for your account
Activating a network policy for an account enforces the policy for all users in the account.
Only security administrators (i.e. users with the SECURITYADMIN role) or higher or a role with
the global ATTACH POLICY privilege can activate a network policy for an account.
Once the policy is associated with your account, Snowflake restricts access to your account based on
the allowed list and blocked list. Any user who attempts to log in from a network origin restricted
by the rules is denied access. In addition, when a network policy is associated with your account, any
restricted users who are already logged into Snowflake are prevented from executing further queries.
You can create multiple network policies, however only one network policy can be associated with
an account at any one time. Associating a network policy with your account automatically removes
the currently-associated network policy (if any).
Note that your current IP address or private endpoint identifier must be included in the allowed list in
the policy. Otherwise, when you activate the policy, Snowflake returns an error. In addition, your
current identifier cannot be included in the blocked list.
If you want to determine whether there is already an account-level network policy before activating a
new one, see Identifying a network policy activated at the account or user level.
You can activate a network policy for your account using Snowsight or SQL:
Snowsight
1. Select Admin » Security.
2. Select the Network Policies tab.
3. Find the network policy, select the … button, and then select Activate.
4. Select Activate policy.
SQL
Execute the ALTER ACCOUNT statement to set the NETWORK_POLICY parameter for
the account. For example:
ALTER ACCOUNT SET NETWORK_POLICY = my_policy;
Activate network policies for individual users
To enforce a network policy for a specific user in your Snowflake account, activate the network
policy for the user. Only a single network policy can be activated for each user at a time. The ability
to activate different network policies for different users allows for granular control. Associating a
network policy with a user automatically removes the currently-associated network policy (if any).
Note
Only the role with the OWNERSHIP privilege on both the user and the network policy, or a higher
role, can activate a network policy for an individual user.
Once the policy is associated with the user, Snowflake restricts access to the user based on the
allowed list and blocked list. If the user with an activated user-level network policy attempts to log in
from a network location restricted by the rules, the user is denied access to Snowflake.
In addition, when a user-level network policy is associated with the user and the user is already
logged into Snowflake, if the user’s network location does not match the user-level network policy
rules, Snowflake prevents the user from executing further queries.
If you want to determine whether there is already a user-level network policy before activating a new
one, see Identifying a network policy activated at the account or user level.
To activate a network policy for an individual user, execute the ALTER USER command to set
the NETWORK_POLICY parameter for the user. For example, execute:
ALTER USER joe SET NETWORK_POLICY = my_policy;
Activate network policies for security integrations
Some security integrations support activating a network policy to control network traffic that is
governed by that integration. These security integrations have a NETWORK_POLICY parameter
that activates the network policy for the integration. Currently, SCIM and Snowflake OAuth support
integration-level network policies.
Note
A network policy that is activated for a security integration does not restrict access to an internal
stage.
For example, you could activate a network policy when creating a new Snowflake OAuth security
integration. The network policy would restrict the access of requests trying to authenticate.
CREATE SECURITY INTEGRATION oauth_kp_int
TYPE = oauth
ENABLED = true
OAUTH_CLIENT = custom
OAUTH_CLIENT_TYPE = 'CONFIDENTIAL'
OAUTH_REDIRECT_URI = 'https://example.com'
NETWORK_POLICY = mypolicy;
You can execute the ALTER SECURITY INTEGRATION … SET NETWORK_POLICY statement
to activate a network policy for an existing security integration.
Identifying a network policy activated at the account or user level
You can identify which network policy is activated at the account or user level.
Account
1. Select Admin » Security.
2. Select the Network Policies tab.
3. Sort the Status column to view the active network policy. This is the account-level
policy.
Alternatively, you can execute the following to list the account-level network policy:
SHOW PARAMETERS LIKE 'network_policy' IN ACCOUNT;
User
To determine whether a network policy is set for a specific user, execute the SHOW
PARAMETERS command.
SHOW PARAMETERS LIKE 'network_policy' IN USER <username>;
For example:
SHOW PARAMETERS LIKE 'network_policy' IN USER jsmith;
Using replication with network policies and network rules
Snowflake supports replication and failover/failback for network policies and network rules,
including the assignment of the network policy.
For details, refer to Replication of security integrations & network policies across multiple accounts.
Using the Classic Console
Note
New features are not being released for Classic Console. Snowflake recommends using Snowsight or
SQL so you can use network rules in conjunction with network policies.
Creating a network policy with Classic Console
Only security administrators (i.e. users with the SECURITYADMIN role) or higher or a role with
the global CREATE NETWORK POLICY privilege can create network policies. Ownership of a
network policy can be transferred to another role.
1. Click Account » Policies. The Policies page appears.
2. Click the Create button. The Create Network Policy dialog appears.
3. In the Name field, enter a name for the network policy.
4. In the Allowed IP Addresses field, enter one or more IPv4 addresses that are allowed access
to this Snowflake account, separated by commas.
Note
To block all IP addresses except for a set of specific addresses, you only need to define an
allowed IP address list. Snowflake automatically blocks all IP addresses not included in the
allowed list.
5. In the Blocked IP Addresses field, optionally enter one or more IPv4 addresses that are
denied access to this Snowflake account, separated by commas. Note that this field is not
required and is used primarily to deny specific addresses in a range of addresses in the
allowed list.
Caution
 When a network policy includes values in both the allowed and blocked IP address
lists, Snowflake applies the blocked IP address list first.
 Do not add 0.0.0.0/0 to the blocked IP address list. 0.0.0.0/0 is interpreted to be “all IPv4
addresses on the local machine”. Because Snowflake resolves this list first, this would
block your own access. Also, note that it is not necessary to include this IP address in
the allowed IP address list.
6. Enter other information for the network policy, as needed, and click Finish.
Modifying a network policy with Classic Console
If you are using the Classic Console, do the following to modify a network policy:
1. Click Account » Policies.
2. Click on a policy to select it and populate the side panel on the right.
2. Click on a policy to select it and populate the side panel on the right.
3. Click the Edit button in the right panel.
4. Modify the fields as necessary:
 To remove an IP address from the Allowed IP Addresses or Blocked IP
Addresses list, click the x next to the entry.
 To add an IP address to either list, enter one or more comma-separated IPv4 addresses
in the appropriate field, and click the Add button.
5. Click Save.
Activating a network policy with Classic Console
If you are using the Classic Console, you can enforce a network policy for all users in your
Snowflake account by activating the network policy for your account.
1. Click Account » Policies.
2. Click on a policy to select it and populate the side panel on the right.
2. Click on a policy to select it and populate the side panel on the right.
3. Click the Activate button in the right panel.
Network rules
Using SQL to work with network rules is generally available.
SNOWSIGHT PREVIEW FEATURE— OPEN
Using Snowsight to work with network rules is currently in preview.
If you need a solution that is generally available, use SQL to work with network rules.
Network rules are schema-level objects that group network identifiers into logical units.
Snowflake features that restrict network traffic can reference network rules rather than
defining network identifiers directly in the feature. A network rule does not define
whether its identifiers should be allowed or blocked. The Snowflake feature that uses the
network rule specifies whether the identifiers in the rule are permitted or prohibited.
The following features use network rules to control network traffic:
 Network policies use network rules to control inbound network traffic to the
Snowflake service and internal stages.
 External network access uses network rules to restrict access to external network
locations from a Snowflake UDF or procedure.
Supported network identifiers
Administrators need to be able to restrict access based on the network identifier associated with the
origin or destination of a request. Network rules allow administrators to allow or block the following
network identifiers:
Incoming requests
 IPv4 addresses. Snowflake supports ranges of IP addresses using Classless Inter-
Domain Routing (CIDR) notation. For example, 192.168.1.0/24 represents all IPv4
addresses in the range of 192.168.1.0 to 192.168.1.255.
 VPCE IDs of AWS VPC endpoints. VPC IDs are not supported.
 LinkIDs of Azure private endpoints. Execute
the SYSTEM$GET_PRIVATELINK_AUTHORIZED_ENDPOINTS function to
retrieve the LinkID associated with an account.
Outgoing requests
Domains, including a port range.
The valid port range is 1-65535. If you do not specify a port, it defaults to 80 and 443. If an
external network location supports dynamic ports, you need to specify all possible ports.
To allow access to all ports, define the port as 0. For example, company.com:0.
Each network rule contains a list of one or more network identifiers of the same type. The network
rule’s TYPE property indicates the type of identifiers that are included in the rule. For example, if
the TYPE property is IPV4, then the network rule’s value list must contain valid IPv4 addresses or
address ranges in CIDR notation.
Incoming vs. outgoing requests
The mode of a network rule indicates whether the Snowflake feature that uses the rule restricts
incoming or outgoing requests.
Incoming requests
Network policies protect the Snowflake service and internal stages from incoming traffic. When a
network rule is used with a network policy, the administrator can set the mode to one of the
following:
INGRESS
The behavior of the INGRESS mode depends on the value of the network
rule’s TYPE property.
 If TYPE=IPV4, by default the network rule controls access to the Snowflake service
only.
If the account administrator enables
the ENFORCE_NETWORK_RULES_FOR_INTERNAL_STAGES parameter,
then MODE=INGRESS and TYPE=IPV4 also protects an AWS internal stage.
 If TYPE=AWSVPCEID, then the network rule controls access to the Snowflake service
only.
If you want to restrict access to the AWS internal stage based on the VPCE ID of an
interface endpoint, you must create a separate network rule using
the INTERNAL_STAGE mode.
INTERNAL_STAGE
Controls access to an AWS internal stage without restricting access to the Snowflake service.
Using this mode requires the following:
 The account administrator must enable
the ENFORCE_NETWORK_RULES_FOR_INTERNAL_STAGES parameter.
 The TYPE property of the network rule must be AWSVPCEID.
For accounts on Microsoft Azure, you cannot use a network rule to restrict access to the
internal stage. However, you can block all public network traffic from accessing the internal
stage.
Outgoing requests
Administrators can use network rules with features that control where requests can be sent. In these
cases, the administrator defines the network rule with the following mode:
EGRESS
Indicates that the network rule is used for traffic sent from Snowflake.
Currently used with external network access, which allows a UDF or procedure to send
requests to an external network location.
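For example, a sketch of an EGRESS rule referenced by an external access integration; the rule name, integration name, and domain are placeholders, and the integration shown omits optional properties such as secrets:
CREATE NETWORK RULE external_api_rule
  MODE = EGRESS
  TYPE = HOST_PORT
  VALUE_LIST = ('api.example.com:443');

-- The integration (not the rule) decides that these destinations are allowed
CREATE EXTERNAL ACCESS INTEGRATION external_api_integration
  ALLOWED_NETWORK_RULES = (external_api_rule)
  ENABLED = true;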
Creating a network rule
You need the CREATE NETWORK RULE privilege on the schema to create a network rule. By
default, only the ACCOUNTADMIN and SECURITYADMIN roles, along with the schema owner,
have this privilege.
You can create a network rule using Snowsight or by executing a SQL command:
Snowsight
1. Sign in to Snowsight.
2. Select Admin » Security.
3. Select the Network Rules tab.
4. Select + Network Rule.
5. Enter the name of the network rule.
6. Select the schema of the network rule. Network rules are schema-level objects.
7. Optionally, add a descriptive comment for the network rule to help organize and
maintain network rules in the schema.
8. In the Type drop-down, select the type of identifier being defined in the network rule.
9. In the Mode drop-down, select the mode of the network rule.
The INGRESS and INTERNAL STAGE modes indicate the network rule will be used
with a network policy to restrict incoming requests and the EGRESS mode indicates
the network rule will be used with an external access integration to restrict outgoing
requests.
10. Enter a comma-separated list of the identifiers that will be allowed or blocked when
the network rule is added to a network policy. The identifiers in this list must all be of
the type specified in the Type drop-down.
11. Select Create Network Rule.
SQL
An administrator can execute the CREATE NETWORK RULE command to create a new
network rule, specifying a list of network identifiers along with the type of those identifiers.
For example, to use a custom role to create a network rule that can be used to allow or block
traffic from a range of IP addresses:
GRANT USAGE ON DATABASE securitydb TO ROLE network_admin;
GRANT USAGE ON SCHEMA securitydb.myrules TO ROLE network_admin;
GRANT CREATE NETWORK RULE ON SCHEMA securitydb.myrules TO ROLE network_admin;

USE ROLE network_admin;

CREATE NETWORK RULE cloud_network TYPE = IPV4 MODE = INGRESS VALUE_LIST = ('47.88.25.32/27');
IPv4 addresses
When specifying IP addresses for a network rule, Snowflake supports ranges of IP addresses
using Classless Inter-Domain Routing (CIDR) notation.
For example, 192.168.1.0/24 represents all IPv4 addresses in the range of 192.168.1.0 to 192.168.1.255.
Identifying network rules in your account
You can identify the network rules in your account using Snowsight or SQL.
Snowsight
1. Sign in to Snowsight.
2. Select Admin » Security.
3. Select the Network Rules tab.
SQL
Call the NETWORK_RULE_REFERENCES Information Schema table function, or query
the NETWORK_RULE_REFERENCES Account Usage view.
Modifying a network rule
You can modify the identifiers and comment of an existing network rule, but you cannot modify its
type, mode, name, or schema.
To add or remove identifiers and comments from an existing network rule using Snowsight or SQL,
do one of the following:
Snowsight
1. Sign in to Snowsight.
2. Select Admin » Security.
3. Select the Network Rules tab.
4. Find the network rule, select the … button, and then select Edit.
5. Modify the comma-delimited list of identifiers or the comment.
6. Select Update Network Rule.
SQL
Execute an ALTER NETWORK RULE statement.
Replication of network rules
Network rules are schema-level objects and are replicated with the database in which they are
contained.
For information about replicating the network policies that use network rules, see Replicating
network policies.
Privileges and commands
CREATE NETWORK RULE
Privilege: CREATE NETWORK RULE on SCHEMA. Creates a new network rule.
ALTER NETWORK RULE
Privilege: OWNERSHIP on NETWORK RULE. Modifies an existing network rule.
DROP NETWORK RULE
Privilege: OWNERSHIP on NETWORK RULE. Removes an existing network rule from the system.
DESCRIBE NETWORK RULE
Privilege: OWNERSHIP on NETWORK RULE. Describes the properties of an existing network rule.
SHOW NETWORK RULES
Privilege: OWNERSHIP on NETWORK RULE or USAGE on SCHEMA. Lists all of the network rules in the system.
AWS VPC interface endpoints for internal stages
BUSINESS CRITICAL FEATURE
This feature requires Business Critical (or higher).
To inquire about upgrading, please contact Snowflake Support.
This topic provides concepts as well as detailed instructions for connecting to Snowflake
internal stages through AWS VPC Interface Endpoints.
Overview
AWS VPC interface endpoints and AWS PrivateLink for Amazon S3 can be combined to provide
secure connectivity to Snowflake internal stages. This setup ensures that data loading and data
unloading operations to Snowflake internal stages use the AWS internal network and do not take
place over the public Internet.
Prior to AWS supporting VPC interface endpoints for internal stage access, it was necessary to create
a proxy farm within the AWS VPC to facilitate secure access to Snowflake internal stages. With the
added support of VPC interface endpoints for Snowflake internal stages, users and client applications
can now access Snowflake internal stages over the private AWS network. The following diagram
summarizes this new support:
[Diagram: connecting to Snowflake and Snowflake internal stages, before and after VPC interface endpoint support]
Note the following regarding the numbers in the BEFORE diagram:
 Users have two options to connect to a Snowflake internal stage:
 Option A allows an on-premises connection directly to the internal stage as shown by
the number 1.
 Option B allows a connection to the internal stage through a proxy farm as shown by
the numbers 2 and 3.
 If using the proxy farm, users can also connect to Snowflake directly as denoted by the
number 4.
Note the following regarding the numbers in the AFTER diagram:
 The updates in this feature remove the need to connect to Snowflake or a Snowflake internal
stage through a proxy farm.
 An on-premises user can connect to Snowflake directly as shown in number 5.
 To connect to a Snowflake internal stage, on-premises users connect to an interface endpoint,
number 6, and then use AWS PrivateLink for Amazon S3 to connect to the Snowflake
internal stage as shown in number 7.
There is a single Amazon S3 bucket per internal stage deployment. A prefix in the internal stage
Amazon S3 bucket is used to organize the data in each Snowflake account. The Amazon S3 bucket
endpoint URLs are different depending on whether the connection to the bucket uses private
connectivity (i.e. AWS PrivateLink for S3).
Public Amazon S3 Global Endpoint URL
<bucket_name>.s3.<region>.amazonaws.com/<prefix>
Private Amazon S3 Endpoint URL
<bucket_name>.<vpceID>.s3.<region>.vpce.amazonaws.com/<prefix>
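If you need to confirm the private connectivity endpoints configured for your account, the SYSTEM$GET_PRIVATELINK_CONFIG function returns them as JSON; the exact keys returned (including the one for the internal stage URL) depend on your deployment, so treat the output below as a starting point:
-- Returns the private connectivity URLs for the current account as a JSON object
SELECT SYSTEM$GET_PRIVATELINK_CONFIG();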
Benefits
Implementing VPC interface endpoints to access Snowflake internal stages provides the following
advantages:
 Internal stage data does not traverse the public Internet.
 Client and SaaS applications, such as Microsoft PowerBI, that run outside of the AWS VPC
can connect to Snowflake securely.
 Administrators are not required to modify firewall settings to access internal stage data.
 Administrators can implement consistent security and monitoring regarding how users
connect to storage accounts.
Limitations
For limitations of AWS PrivateLink, see the AWS documentation.
Getting started
Before configuring AWS and Snowflake to allow requests to access a Snowflake internal stage via
AWS PrivateLink, you must:
 Meet the prerequisites .
 Choose the implementation strategy that fits your environment.
Prerequisites
 Set the ENABLE_INTERNAL_STAGES_PRIVATELINK parameter to enable support for
connecting to an internal stage over AWS PrivateLink. For both implementation strategies
discussed in this topic, the account administrator must execute:
 use role accountadmin;
 alter account set ENABLE_INTERNAL_STAGES_PRIVATELINK = true;
 Enable AWS PrivateLink for S3.
Important
AWS PrivateLink for S3 is an AWS service that must be enabled in your cloud environment.
For help with configuring and implementing this service, contact your internal AWS
administrator.
 Update the firewall allow-listing as follows:
 If using an outbound firewall, ensure that it allows all the URLs required by
Snowflake. For details, see SnowCD (Connectivity Diagnostic Tool).
 For us-east-1 customers only: If using one of the following Snowflake clients to connect to
Snowflake, please upgrade to the client version as follows:
 JDBC driver: 3.13.3 (or higher)
 ODBC driver: 2.23.2 (or higher)
 Python Connector for Snowflake: 2.5.1 (or higher)
 SnowSQL: 1.2.17 (or higher)
 Upgrade SnowSQL before using this feature. For more information,
see Installing SnowSQL.
 Note that if using the SnowSQL --noup option, SnowSQL auto-upgrade is
disabled and a new SnowSQL version cannot be downloaded. To upgrade,
disable the --noup option and re-enable after the upgrade is complete.
Choosing an implementation strategy
Choosing the right implementation strategy depends on whether your organization is using AWS
PrivateLink to access a single internal stage or multiple internal stages.
 If your organization is accessing the internal stage of a single account, see Accessing an
internal stage with an interface endpoint.
 If your organization is accessing the internal stages of multiple accounts, see Accessing
Internal stages with dedicated interface endpoints. This strategy uses multiple interface
endpoints to connect, one for each internal stage.
Accessing an internal stage with an interface endpoint
The following implementation strategy is recommended when your organization is accessing the
internal stage of a single account. If you are accessing multiple internal stages from your VPC,
see Accessing Internal stages with dedicated interface endpoints.
To configure a VPC interface endpoint to access a Snowflake internal stage, it is necessary to have
support from the following three roles in your organization:
1. The Snowflake account administrator (i.e. a user with the Snowflake ACCOUNTADMIN
system role).
2. The AWS administrator.
3. The network administrator.
Depending on the organization, it might be necessary to coordinate the configuration efforts with
more than one person or team to implement the following configuration steps.
Procedure
Complete the following steps to configure and implement secure access to a Snowflake internal stage
through a VPC endpoint:
1. As a Snowflake account administrator, execute the following statements in your Snowflake
account and record the value defined by the privatelink_internal_stage key. Note that the Amazon
S3 bucket name is defined in the first segment of the URL when read from left to right. For
more information, see ENABLE_INTERNAL_STAGES_PRIVATELINK and SYSTEM$GET_PRIVATELINK_CONFIG.
use role accountadmin;
alter account set ENABLE_INTERNAL_STAGES_PRIVATELINK = true;
select key, value from table(flatten(input=>parse_json(system$get_privatelink_config())));
2. As the AWS administrator, create a VPC endpoint for AWS PrivateLink for S3 using the
AWS Console. Record the VPCE DNS Name for use in the next step; do not record any
VPCE DNS zonal names.
The VPCE DNS Name can be found by describing an interface endpoint once the endpoint is
created.
In this example, a wildcard (i.e. *) is listed as the leading character in the VPCE DNS Name.
Replace the leading wildcard with the Amazon S3 bucket name from the previous step. For
example:
Replace
*.vpce-000000000000a12-abc00ef0.s3.us-west-2.vpce.amazonaws.com
With
<bucket_name>.vpce-000000000000a12-abc00ef0.s3.us-west-2.vpce.amazonaws.com
3. As the network administrator, update the DNS settings to resolve the following URL:
<bucket_name>.s3.<region>.amazonaws.com to the VPCE DNS name after the leading wildcard is
replaced with the Amazon S3 bucket name.
In this example, resolve <bucket_name>.s3.<region>.amazonaws.com to <bucket_name>.vpce-
000000000000a12-abc00ef0.s3.us-west-2.vpce.amazonaws.com .
Tip
 Do not use wildcard characters (i.e. *) with DNS mapping because of the possible
impact of accessing other Amazon S3 buckets outside of Snowflake.
 Use a separate Snowflake account for testing, and configure a private hosted DNS
zone in a test VPC to test the feature so that the testing is isolated and does not impact
your other workloads.
 If using a separate Snowflake account is not possible, use a test user to access
Snowflake from a test VPC where the DNS changes are made.
 To test from on-premises applications, use DNS forwarding to forward requests to the
AWS private hosted zone in the VPC where the DNS settings are made. If there are
client applications in both the VPC and on-premises, use AWS Transit Gateway.
 Execute the following command from the client machine to verify that the IP address
returned is a private IP address for the interface endpoint:
 dig <bucket_name>.s3.<region>.amazonaws.com
4. For Snowflake accounts in us-east-1, verify that your Snowflake clients meet the minimum versions listed in the prerequisites.
Accessing Internal stages with dedicated interface endpoints
The following implementation strategy is recommended when your organization is accessing the
internal stages of multiple accounts.
The S3_STAGE_VPCE_DNS_NAME parameter allows users to associate a Snowflake account with
the DNS name of an Amazon S3 interface endpoint. This allows organizations with multiple
Snowflake accounts in an AWS deployment to associate each internal stage with a different interface
endpoint. When each internal stage has its own interface endpoint, network traffic to a specific
internal stage is isolated from network traffic to other internal stages.
Before continuing, make sure you have met the prerequisites.
Benefits
The strategy in which an internal stage within an AWS deployment has a dedicated Amazon S3
interface endpoint has the following benefits:
Security
Each account can have a different security strategy because individual interface endpoints can
have different security configurations.
Chargeback models
Companies can isolate network traffic based on the type of account (for example, production
vs. development), and attribute costs associated with data flowing through an endpoint to the
correct account.
DNS Management
The DNS name of an Amazon S3 interface endpoint is a globally unique name that locates
the specific endpoint within a specific region. AWS automatically registers this DNS name in
its public DNS service, meaning it is publicly resolvable. For these reasons, an administrator
does not need to do any additional DNS configuration to route traffic through an Amazon S3
interface endpoint to an internal stage. For example, the administrator does not need to set up
a private hosted zone (PHZ) when configuring the Amazon Route 53 DNS service or register
a DNS name to point to an endpoint.
Configuration
The network isolation strategy consists of the following:
1. In AWS, an administrator creates a new Amazon S3 interface endpoint for every Snowflake
account in the organization. For example, if an organization has two accounts in the
Snowflake deployment, the administrator creates two interface endpoints.
2. In Snowflake, an administrator uses the S3_STAGE_VPCE_DNS_NAME parameter to
associate each Snowflake account with the DNS name of its dedicated interface endpoint. All
traffic to the account’s internal stage goes through this interface endpoint.
AWS configuration
In your VPC as an AWS administrator:
1. Create a separate Amazon S3 interface endpoint for each of your Snowflake accounts.
2. For each of these endpoints, use the AWS VPC Management Console to:
a. Open the endpoint to view its Details.
b. Find the DNS Names field, and copy the region-scoped DNS name. The Snowflake
S3_STAGE_VPCE_DNS_NAME parameter will be set to this value.
The format of the region-scoped DNS name looks like *.vpce-sd98fs0d9f8g.s3.us-west-
2.vpce.amazonaws.com . Though AWS also provides an availability zone DNS name,
Snowflake recommends the region-scoped DNS name because it provides high
availability with failover capabilities.
Snowflake configuration
After the AWS administrator creates the interface endpoint for a Snowflake account’s internal stage,
the Snowflake administrator can use the S3_STAGE_VPCE_DNS_NAME parameter to associate
the DNS name of that endpoint with the account.
The S3_STAGE_VPCE_DNS_NAME parameter should be set to the region-scoped DNS Name of
the interface endpoint associated with a specific internal stage. The standard format begins with an
asterisk (*) and ends with vpce.amazonaws.com (for example, *.vpce-sd98fs0d9f8g.s3.us-west-
2.vpce.amazonaws.com ).
As an example, the account administrator can execute the following to associate an endpoint with the
current account:
alter account set S3_STAGE_VPCE_DNS_NAME='*.vpce-sd98fs0d9f8g.s3.us-west-2.vpce.amazonaws.com';
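One way to confirm the association afterwards is to inspect the parameter at the account level, for example:
show parameters like 'S3_STAGE_VPCE_DNS_NAME' in account;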
Final DNS value
The final DNS name associated with an account has the form: <bucketname>.bucket.vpce-
<vpceid>.s3.<region>.vpce.amazonaws.com
Where:
 <bucketname> is the name of the internal stage’s Amazon S3 bucket.
 <vpceid> is the unique identifier of the Amazon S3 interface endpoint associated with the
account.
 <region> is the cloud region that hosts your Snowflake account.
The final DNS name appears in logs for each driver that connects to the internal stage.
Azure Private Endpoints for Internal Stages
BUSINESS CRITICAL FEATURE
This feature requires Business Critical (or higher).
To inquire about upgrading, please contact Snowflake Support.
This topic provides concepts as well as detailed instructions for connecting to Snowflake
internal stages through Microsoft Azure Private Endpoints.
Overview
Azure Private Endpoints and Azure Private Link can be combined to provide secure connectivity to
Snowflake internal stages. This setup ensures that data loading and data unloading operations to
Snowflake internal stages use the Azure internal network and do not take place over the public
Internet.
Prior to Microsoft supporting Private Endpoints for internal stage access, it was necessary to create a
proxy farm within the Azure VNet to facilitate secure access to Snowflake internal stages. With the
added support of Private Endpoints for Snowflake internal stages, users and client applications can
now access Snowflake internal stages over the private Azure network. The following diagram
summarizes this new support:
Note the following regarding the numbers in the BEFORE diagram:
 Users have two options to connect to a Snowflake internal stage:
 Option A allows an on-premises connection directly to the internal stage as shown by
the number 1.
 Option B allows a connection to the internal stage through a proxy farm as shown by
the numbers 2 and 3.
 If using the proxy farm, users can also connect to Snowflake directly as denoted by the
number 4.
Note the following regarding the numbers in the AFTER diagram:
 For clarity, the diagram shows a single Private Endpoint from one Azure VNet pointing to a
single Snowflake internal stage (6 and 7).
Note that it is possible to configure multiple Private Endpoints, each within a different VNet,
that point to the same Snowflake internal stage.
 The updates in this feature remove the need to connect to Snowflake or a Snowflake internal
stage through a proxy farm.
 An on-premises user can connect to Snowflake directly as shown in number 5.
 To connect to a Snowflake internal stage, an on-premises user connects to a Private Endpoint,
number 6, and then uses Azure Private Link to connect to the Snowflake internal stage as
shown in number 7.
In Azure, each Snowflake account has a dedicated storage account to use as an internal stage. The
storage account URIs are different depending on whether the connection to the storage account uses
private connectivity (i.e. Azure Private Link). The private connectivity URL includes
a privatelink segment in the URL.
Public storage account URI
<storage_account_name>.blob.core.windows.net
Private connectivity storage account URI
<storage_account_name>.privatelink.blob.core.windows.net
Benefits
Implementing Private Endpoints to access Snowflake internal stages provides the following
advantages:
 Internal stage data does not traverse the public Internet.
 Client and SaaS applications, such as Microsoft PowerBI, that run outside of the Azure VNet
can connect to Snowflake securely.
 Administrators are not required to modify firewall settings to access internal stage data.
 Administrators can implement consistent security and monitoring regarding how users
connect to storage accounts.
Limitations
Microsoft Azure defines how a Private Endpoint can interact with Snowflake:
 A single Private Endpoint can communicate to a single Snowflake Service Endpoint. You can
have multiple one-to-one configurations that connect to the same Snowflake internal stage.
 The maximum number of private endpoints in your storage account that can connect to a
Snowflake internal stage is fixed. For details, see Standard storage account limits.
Configuring private endpoints to access Snowflake internal stages
To configure Private Endpoints to access Snowflake internal stages, it is necessary to have support
from the following three roles in your organization:
1. The Snowflake account administrator (i.e. a user with the Snowflake ACCOUNTADMIN
system role).
2. The Microsoft Azure administrator.
3. The network administrator.
Depending on the organization, it may be necessary to coordinate the configuration efforts with more
than one person or team to implement the following configuration steps.
Complete the following steps to configure and implement secure access to Snowflake internal stages
through Azure Private Endpoints:
1. Verify that your Azure subscription is registered with the Azure Storage resource manager.
This step allows you to connect to the internal stage from a private endpoint.
2. As a Snowflake account administrator, execute the following statements in your Snowflake
account and record the ResourceID of the internal stage storage account defined by
the privatelink_internal_stage key. For more information,
see ENABLE_INTERNAL_STAGES_PRIVATELINK and SYSTEM$GET_PRIVATELINK_CONFIG.
use role accountadmin;
alter account set ENABLE_INTERNAL_STAGES_PRIVATELINK = true;
select key, value from table(flatten(input=>parse_json(system$get_privatelink_config())));
3. As the Azure administrator, create a Private Endpoint through the Azure portal.
View the Private Endpoint properties and record the resource ID value. This value will be
the privateEndpointResourceID value in the next step.
Verify that the Target sub-resource value is set to blob.
For more information, see the Microsoft Azure Private Link documentation.
4. As the Snowflake administrator, call
the SYSTEM$AUTHORIZE_STAGE_PRIVATELINK_ACCESS function using
the privateEndpointResourceID value as the function argument. This step authorizes access to the
Snowflake internal stage through the Private Endpoint.
use role accountadmin;
select system$authorize_stage_privatelink_access('<privateEndpointResourceID>');
If necessary, complete these steps to revoke access to the internal stage.
5. As the network administrator, update the DNS settings to resolve the URLs as follows:
<storage_account_name>.blob.core.windows.net to <storage_account_name>.privatelink.blob.core.windows.n
et
When using a private DNS zone in an Azure VNet, create the alias record
for <storage_account_name>.privatelink.blob.core.windows.net .
For more information, see Azure Private Endpoint DNS configuration.
Tip
 Use a separate Snowflake account for testing, and configure a private DNS zone in a
test VNet to test the feature so that the testing is isolated and does not impact your
other workloads.
 If using a separate Snowflake account is not possible, use a test user to access
Snowflake from a test VPC where the DNS changes are made.
 To test from on-premises applications, use DNS forwarding to forward requests to the
Azure private DNS in the VNet where the DNS settings are made. Execute the
following command from the client machine to verify that the IP address returned is
the private IP address for the storage account:
 dig <storage_account_name>.blob.core.windows.net
Blocking public access (Optional)
Once you have configured Private Endpoints to access the internal stage via Azure Private Link, you
can optionally block requests from public IP addresses to the internal stage. Once public access is
blocked, all traffic must be through the Private Endpoint.
Controlling public access to an Azure internal stage is different from controlling public access to the
Snowflake service. You use
the SYSTEM$BLOCK_INTERNAL_STAGES_PUBLIC_ACCESS function, not a network policy,
to block requests to the internal stage. Unlike network policies, this function cannot block some
public IP addresses while allowing others. When you call
SYSTEM$BLOCK_INTERNAL_STAGES_PUBLIC_ACCESS, all public IP addresses are
blocked.
Important
Confirm that traffic via private connectivity is successfully reaching the internal
stage before blocking public access. Blocking public access without configuring private connectivity
can cause unintended disruptions, including interference with managed services like Azure Data
Factory.
The SYSTEM$BLOCK_INTERNAL_STAGES_PUBLIC_ACCESS function enforces its
restrictions by altering the Networking settings of the Azure storage account where the internal
stage is located. These Azure settings are commonly referred to as the “storage account firewall
settings”. Executing the Snowflake function does the following in Azure:
 Sets the Public network access field to Enabled from selected virtual networks and IP
addresses.
 Adds Snowflake VNet subnet ids to the Virtual Networks section.
 Clears all IP addresses from the Firewall section.
To block all traffic from public IP addresses to the internal stage, execute:
SELECT SYSTEM$BLOCK_INTERNAL_STAGES_PUBLIC_ACCESS();
The function can take a few minutes to finish executing.
Ensuring public access is blocked
You can determine whether public IP addresses are able to access an internal stage by executing
the SYSTEM$INTERNAL_STAGES_PUBLIC_ACCESS_STATUS function.
If the Azure settings are currently blocking all public traffic, the function
returns Public Access to internal stages is blocked. This verifies that the settings have not been changed
since the SYSTEM$BLOCK_INTERNAL_STAGES_PUBLIC_ACCESS function was executed.
If at least some public IP addresses can access the internal stage, the function
returns Public Access to internal stages is unblocked.
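For example, run the function from a worksheet:
select system$internal_stages_public_access_status();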
Unblocking public access
You can execute the SYSTEM$UNBLOCK_INTERNAL_STAGES_PUBLIC_ACCESS function to
allow public access to an internal stage that was previously blocked.
Executing the function alters the Networking settings of the Azure storage account where the
internal stage is located. It sets the Azure Public network access field to Enabled from all
networks.
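For example, to re-allow public access, execute:
select system$unblock_internal_stages_public_access();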
Revoking Private Endpoints to access Snowflake internal stages
Complete the following steps to revoke access to Snowflake internal stages through Microsoft Azure
Private Endpoints:
1. As a Snowflake administrator, set
the ENABLE_INTERNAL_STAGES_PRIVATELINK parameter to FALSE and call
the SYSTEM$REVOKE_STAGE_PRIVATELINK_ACCESS function to revoke access to
the Private Endpoint, using the same privateEndpointResourceID value that was used to originally
authorize access to the Private Endpoint.
2. use role accountadmin;
3. alter account set enable_internal_stages_privatelink = false;
4. select system$revoke_stage_privatelink_access('<privateEndpointResourceID>');
5. As an Azure administrator, delete the Private Endpoint through the Azure portal.
6. As a network administrator, remove the DNS and alias records that were used to resolve the
storage account URLs.
At this point, access to the Private Endpoint is revoked and the query result from calling
the SYSTEM$GET_PRIVATELINK_CONFIG function should not return
the privatelink_internal_stage key and its value.
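For example, you can re-run the configuration query used earlier in this topic and confirm that the key is no longer listed:
select key, value from table(flatten(input=>parse_json(system$get_privatelink_config())));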
Troubleshooting
Azure applications that access Snowflake stages over the public Internet and also use a private DNS
service to resolve service hostnames cannot access Snowflake stages if a private endpoint connection
is established to the stage as described in this topic.
Once a private endpoint connection is created, Microsoft Azure automatically creates a CNAME
record in the public DNS service that points the storage account host to its Azure Private Link
counterpart (i.e. .privatelink.blob.core.windows.net ). If any application has configured a private DNS
zone for the same domain, then Microsoft Azure tries to resolve the storage account host by
querying the private DNS service. If the entry for the storage account is not found in the private DNS
service, a connection error occurs.
There are two options to address this issue:
1. Remove or dissociate the private DNS zone from the application.
2. Create a CNAME record for the storage account private hostname
(i.e. <storage_account_name>.privatelink.blob.core.windows.net ) in the private DNS service and point
it to the hostname specified by the output of this command:
dig CNAME <storage_account_name>.privatelink.blob.core.windows.net
AWS PrivateLink and Snowflake
BUSINESS CRITICAL FEATURE
This feature requires Business Critical (or higher).
This topic describes how to configure AWS PrivateLink to directly connect your
Snowflake account to one or more AWS VPCs.
Note that AWS PrivateLink is not a service provided by Snowflake. It is an AWS service
that Snowflake supports to use with your Snowflake account.
What is AWS PrivateLink?
AWS PrivateLink is an AWS service for creating private VPC endpoints that allow direct, secure
connectivity between your AWS VPCs and the Snowflake VPC without traversing the public
Internet. The connectivity is for AWS VPCs in the same AWS region.
For Writing External Functions, you can also use AWS PrivateLink with private endpoints.
In addition, if you have an on-premises environment (e.g. a non-hosted data center), you can choose
to use AWS Direct Connect, in conjunction with AWS PrivateLink, to connect all your virtual and
physical environments in a single, private network.
Note
AWS Direct Connect is a separate AWS service that must be implemented independently from AWS
PrivateLink and is outside the scope of this topic. To inquire about implementing AWS Direct
Connect, please contact Amazon.
Enabling AWS PrivateLink
Note
Currently, the self-service enablement process in this section does not support authorizing an AWS
account identifier from a managed cloud service or a third party vendor.
To authorize an AWS account identifier for this use case, please retrieve the AWS account identifier
from the vendor and contact Snowflake Support.
To enable AWS PrivateLink for your Snowflake account, complete the following steps:
1. In your command line environment, run the following AWS CLI STS command and save the
output. The output will be used as the value for the federated_token argument in the next step.
aws sts get-federation-token --name sam
Note that get-federation-token requires either an identity and access management user in AWS
or the AWS account root user. For details, refer to the AWS documentation.
Extract the 12-digit number in the "FederatedUserId" value (truncated). For example, if your
token contains:
{
...
"FederatedUser": {
"FederatedUserId": "185...:sam",
"Arn": "arn:aws:sts::185...:federated-user/sam"
},
"PackedPolicySize": 0
}
Extract 185.... This 12-digit number will be the value for the aws_id in the next step.
2. As a Snowflake account administrator (i.e. a user with the ACCOUNTADMIN system role),
call the SYSTEM$AUTHORIZE_PRIVATELINK function to authorize (i.e. enable) AWS
PrivateLink for your Snowflake account:
select SYSTEM$AUTHORIZE_PRIVATELINK ( '<aws_id>' , '<federated_token>' );
Where:
 'aws_id'
The 12-digit identifier that uniquely identifies your Amazon Web Services (AWS) account,
as a string.
 'federated_token'
The federated token value that contains access credentials for a federated user as a string.
For example:
use role accountadmin;
select SYSTEM$AUTHORIZE_PRIVATELINK (
'185...',
'{
"Credentials": {
"AccessKeyId": "ASI...",
"SecretAccessKey": "enw...",
"SessionToken": "Fwo...",
"Expiration": "2021-01-07T19:06:23+00:00"
},
"FederatedUser": {
"FederatedUserId": "185...:sam",
"Arn": "arn:aws:sts::185...:federated-user/sam"
},
"PackedPolicySize": 0
}'
);
To verify your authorized configuration, call the SYSTEM$GET_PRIVATELINK function in your
Snowflake account on AWS. This function uses the same argument values
for 'aws_id' and 'federated_token' that were used to authorize your Snowflake account.
Snowflake returns Account is authorized for PrivateLink. for a successful authorization.
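For example, a sketch using the same placeholder values:
use role accountadmin;
select SYSTEM$GET_PRIVATELINK ( '<aws_id>' , '<federated_token>' );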
If it is necessary to disable AWS PrivateLink in your Snowflake account, call
the SYSTEM$REVOKE_PRIVATELINK function, using the same argument values for 'aws_id' and 'federated_token'.
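For example, to revoke the authorization:
select SYSTEM$REVOKE_PRIVATELINK ( '<aws_id>' , '<federated_token>' );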
Important
The federated_token expires after 12 hours.
If calling any of the system functions to authorize, verify, or disable your Snowflake account to use
AWS PrivateLink and the token is not valid, regenerate the token using the AWS CLI STS command
shown at the beginning of the procedure in this section.
Configuring your AWS VPC environment
Attention
This section only covers the Snowflake-specific details for configuring your VPC environment.
Also, note that Snowflake is not responsible for the actual configuration of the required AWS VPC
endpoints, security group rules, and DNS records. If you encounter issues with any of these
configuration tasks, please contact AWS Support directly.
Create and configure a VPC endpoint (VPCE)
Complete the following steps in your AWS VPC environment to create and configure a VPC endpoint:
1. As a Snowflake account administrator (i.e. a user with the ACCOUNTADMIN system role),
call the SYSTEM$GET_PRIVATELINK_CONFIG function and record the privatelink-vpce-
id value.
2. In your AWS environment, create a VPC endpoint using the privatelink-vpce-id value from the
previous step.
3. In your AWS environment, configure the VPCE security group to allow connections from your network to the Snowflake service on ports 443 and 80 of the VPCE CIDR (Classless Inter-Domain Routing) range.
For details, see the AWS documentation:
 Working with VPCs and subnets
 VPC endpoints
 VPC endpoint services (AWS PrivateLink)
 Security groups for your VPC
Configure your VPC network
To access Snowflake via an AWS PrivateLink endpoint, it is necessary to create CNAME records in
your DNS to resolve the endpoint values from
the SYSTEM$GET_PRIVATELINK_CONFIG function to the DNS name of your VPC Endpoint.
These endpoint values allow you to access Snowflake, Snowsight, and the Snowflake Marketplace
while also using OCSP to determine whether a certificate is revoked when Snowflake clients attempt
to connect to an endpoint through HTTPS and connection URLs.
The function values to obtain are:
 privatelink-account-url
 privatelink-connection-ocsp-urls
 privatelink-connection-urls
 privatelink-ocsp-url
 regionless-privatelink-account-url
 regionless-snowsight-privatelink-url
 snowsight-privatelink-url
Note that the values for regionless-snowsight-privatelink-url
and snowsight-privatelink-url allow access to
Snowsight and the Snowflake Marketplace using private connectivity. However, there is additional
configuration if you want to enable URL redirects.
For details, see Snowsight & Private Connectivity.
For additional help with DNS configuration, please contact your internal AWS administrator.
Important
The structure of the OCSP cache server hostname depends on the version of your installed clients, as
described in Step 1 of Configuring Your Snowflake Clients (in this topic):
 If you are using the listed versions (or higher), use the form described above, which allows
for better DNS resolution when you have multiple Snowflake accounts (e.g. dev, test, and
production) in the same region. When updating client drivers and using OCSP with
PrivateLink, update the firewall rules to allow the OCSP hostname.
 If you are using older client versions, then the OCSP cache server hostname takes the
form ocsp.<region_id>.privatelink.snowflakecomputing.com (i.e. no account identifier).
 Note that your DNS record must resolve to private IP addresses within your VPC. If it
resolves to public IP addresses, the record is not configured correctly.
Create AWS VPC interface endpoints for Amazon S3
This step is required for Amazon S3 traffic from Snowflake clients to stay on the AWS backbone.
The Snowflake clients (e.g. SnowSQL, JDBC driver) require access to Amazon S3 to perform
various runtime operations.
If your AWS VPC network does not allow access to the public Internet, you can configure private
connectivity to internal stages or create gateway endpoints for the Amazon S3 hostnames required by
the Snowflake clients.
Overall, there are three options to configure access to Amazon S3. The first two options avoid the
public Internet and the third option does not:
1. Configure an AWS VPC interface endpoint for internal stages. This option is recommended.
2. Configure an Amazon S3 gateway endpoint. For more information, see the note below.
3. Do not configure an interface endpoint or a gateway endpoint. This results in access using the
public Internet.
Attention
To prevent communications between an Amazon S3 bucket and an AWS VPC with Snowflake from
using the public Internet, you can set up an Amazon S3 gateway endpoint in the same AWS region
as the Amazon S3 bucket. The reason for this is that AWS PrivateLink only allows communications
between VPCs, and the Amazon S3 bucket is not included in the VPC.
You can configure the Amazon S3 gateway endpoint to limit access to specific users, S3 resources,
routes, and subnets; however, Snowflake does not require this configuration. For more details,
see Endpoints for Amazon S3.
To configure the Amazon S3 gateway endpoint policies to specifically restrict them to use only the
Amazon S3 resources for Snowflake, choose one of the following options:
 Use the specific Amazon S3 hostname addresses used by your Snowflake account. For the
complete list of hostnames used by your account, see SYSTEM$ALLOWLIST.
 Use an Amazon S3 hostname pattern that matches the Snowflake S3 hostnames. In this
scenario, there are two possible types of connections to Snowflake, VPC-to-VPC or On-
Premises-to-VPC.
Based on your connection type, note the following:
VPC-to-VPC
Ensure the Amazon S3 gateway endpoint exists. Optionally modify the S3 gateway endpoint
policy to match the specific hostname patterns shown in the Amazon S3 Hostnames table.
On-Premises-to-VPC
You must define a setup to include the S3 hostname patterns in the firewall or proxy
configuration if Amazon S3 traffic is not permitted on the public gateway.
The following table lists the Amazon S3 hostname patterns for which you may create gateway
endpoints if you do not require them to be specific to your account’s Snowflake-managed S3
buckets:
All regions:
sfc-*-stage.s3.amazonaws.com:443
All regions other than US East:
sfc-*-stage.s3-<region_id>.amazonaws.com:443 (this pattern uses a hyphen (-) before the region ID)
sfc-*-stage.s3.<region_id>.amazonaws.com:443 (this pattern uses a period (.) before the region ID)
For details about creating gateway endpoints, see Gateway VPC endpoints.
Connect to Snowflake
Prior to connecting to Snowflake, you can optionally leverage SnowCD (Snowflake Connectivity
Diagnostic tool) to evaluate the network connection with Snowflake and AWS PrivateLink.
For more information, see SnowCD and SYSTEM$ALLOWLIST_PRIVATELINK.
Otherwise, connect to Snowflake with your private connectivity account URL.
Note that if you want to connect to Snowsight via AWS PrivateLink, follow the instructions in
the Snowsight documentation.
Blocking public access — Optional
After testing private connectivity to Snowflake using AWS PrivateLink, you can optionally block
public access to Snowflake. This means that users can access Snowflake only if their connection
request originates from an IP address within a particular CIDR block range specified in a Snowflake
network policy.
To block public access using a network policy:
1. Create a new network policy or edit an existing network policy. Add the CIDR block range
for your organization.
2. Activate the network policy for your account.
For details, see Controlling network traffic with network policies.
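As an illustrative sketch only (the policy name and CIDR range are placeholders; see the linked topic for the full syntax and required privileges):
create network policy corp_private_access allowed_ip_list = ('10.10.0.0/16');  -- allow only your organization's range
alter account set network_policy = corp_private_access;                        -- activate the policy for the account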
Configuring your Snowflake clients
Ensure Snowflake clients support OCSP cache server
The Snowflake OCSP cache server mitigates connectivity issues between Snowflake clients and the
server. To enable your installed Snowflake clients to take advantage of the OCSP server cache,
ensure you are using the following client versions:
 SnowSQL 1.1.57 (or higher)
 Python Connector 1.8.2 (or higher)
 JDBC Driver 3.8.3 (or higher)
 ODBC Driver 2.19.3 (or higher)
Note
The Snowflake OCSP cache server listens on port 80, which is why you were instructed in Create
and configure a VPC endpoint (VPCE) to configure your AWS PrivateLink VPCE security group to
accept this port, along with port 443 (required for all other Snowflake traffic).
Specify hostname for Snowflake clients
Each Snowflake client requires a hostname to connect to your Snowflake account.
The hostname is the same as the hostname you specified in the CNAME record(s) in Configure your
VPC network.
This step is not applicable to access the Snowflake Marketplace.
For example, for an account named xy12345:
 If the account is in US West, the hostname is xy12345.us-west-2.privatelink.snowflakecomputing.com .
 If the account is in EU (Frankfurt), the hostname is xy12345.eu-central-
1.privatelink.snowflakecomputing.com .
Important
The method for specifying the hostname differs depending on the client:
 For the Spark connector and the ODBC and JDBC drivers, specify the entire hostname.
 For all the other clients, do not specify the entire hostname.
Instead, specify the account identifier with the privatelink segment
(i.e. <account_identifier>.privatelink ), which Snowflake concatenates
with snowflakecomputing.com to dynamically construct the hostname.
For more details about specifying the account name or hostname for a Snowflake client, see the
documentation for each client.
Using SSO with AWS PrivateLink
Snowflake supports using SSO with AWS PrivateLink. For more information, see:
 SSO with private connectivity
 Partner applications
Using Client Redirect with AWS PrivateLink
Snowflake supports using Client Redirect with AWS PrivateLink.
For more information, see Redirecting Client Connections.
Using replication and Tri-Secret Secure with private connectivity
Snowflake supports replicating your data from the source account to the target account, regardless of
whether you enable Tri-Secret Secure or this feature in the target account.
For details, refer to Database replication and encryption.
Troubleshooting
Note the following Snowflake Community articles:
 How to retrieve a Federation Token from AWS for PrivateLink Self-Service
 FAQ: PrivateLink Self-Service with AWS
 Troubleshooting: Snowflake self-service functions for AWS PrivateLink
Azure Private Link and Snowflake
BUSINESS CRITICAL FEATURE
This feature requires Business Critical (or higher).
This topic describes how to configure Azure Private Link to connect your Azure Virtual
Network (VNet) to the Snowflake VNet in Azure.
Note that Azure Private Link is not a service provided by Snowflake. It is a Microsoft
service that Snowflake enables for use with your Snowflake account.
Overview
Azure Private Link provides private connectivity to Snowflake by ensuring that access to Snowflake
is through a private IP address. Traffic can only occur from the customer virtual network (VNet) to
the Snowflake VNet using the Microsoft backbone and avoids the public Internet. This significantly
simplifies the network configuration by keeping access rules private while providing secure and
private communication.
The following diagram summarizes the Azure Private Link architecture with respect to the customer
VNet and the Snowflake VNet.
From either a virtual machine (1) or through peering (2), you can connect to the Azure Private Link
endpoint (3) in your virtual network. That endpoint then connects to the Private Link Service (4) and
routes to Snowflake.
Here are the high-level steps to integrate Snowflake with Azure Private Link:
1. Create a Private Endpoint.
2. Generate and retrieve an access token from your Azure subscription.
Note that if you plan to use Azure Private Link to connect to a Snowflake internal stage on
Azure, you must register your subscription with the Azure Storage resource provider before
connecting to the internal stage from a private endpoint.
3. Enable your Snowflake account on Azure to use Azure Private Link.
4. Update your outbound firewall settings to allow the Snowflake account URL and OCSP
URL.
5. Update your DNS server to resolve your account URL and OCSP URL to the Private Link IP
address. You can add the DNS entry to your on-premises DNS server or private DNS on your
VNet, and use DNS forwarding to direct queries for the entry from other locations where
your users will access Snowflake.
6. After the Private Endpoint displays a CONNECTION STATE value of Approved, test your
connection to Snowflake with SnowCD (Connectivity Diagnostic
Tool) and SYSTEM$ALLOWLIST_PRIVATELINK.
7. Connect to Snowflake using your private connectivity account URL.
Requirements and limitations
Before attempting to configure Azure Private Link to connect your Azure VNet to the Snowflake
VNet on Azure, note the following:
 In Azure at the subnet level, optionally enable a network policy for the Private Endpoint.
Verify that the TCP ports 443 and 80 allow traffic to 0.0.0.0 in the network security group of
the Private Endpoint network card.
For help with the port configuration, contact your internal Azure administrator.
 Use ARM VNets.
 Use IPv4 TCP traffic only.
 Currently, the self-service enablement process described in this topic does not support
authorizing a managed Private Endpoint from Azure Data Factory, Synapse, or other
managed services.
For details on how to configure a managed private endpoint for this use case, see
this article (in the Snowflake community).
For more information on the requirements and limitations of Microsoft Azure Private Link, see the
Microsoft documentation on Private Endpoint Limitations and Private Link Service Limitations.
Configuring access to Snowflake with Azure Private Link
Attention
This section only covers the Snowflake-specific details for configuring your VNet environment.
Also, note that Snowflake is not responsible for the actual configuration of the required firewall
updates and DNS records. If you encounter issues with any of these configuration tasks, please
contact Microsoft Support directly.
This section describes how to configure your Azure VNet to connect to the Snowflake VNet on
Azure using Azure Private Link. After initiating the connection to Snowflake using Azure Private
Link, you can determine the approval state of the connection in the Azure portal.
For installation help, see the Microsoft documentation on the Azure CLI or Azure PowerShell.
Complete the configuration procedure to configure your Microsoft Azure VNet and initiate the
Azure Private Link connection to Snowflake.
Procedure
This procedure manually creates and initializes the necessary Azure Private Link resources to use
Azure Private Link to connect to Snowflake on Azure. Note that this procedure assumes that your
use case does not involve Using SSO with Azure Private Link (in this topic).
1. As a representative example using the Azure CLI, execute az account list --output table.
Note the output values in the Name, CloudName, and SubscriptionId columns. For example:
Name CloudName SubscriptionId State IsDefault
------- ---------- ------------------------------------ ------- ----------
MyCloud AzureCloud 13c... Enabled True
2. Navigate to the Azure portal. Search for Private Link and click Private Link.

3. Click Private endpoints and then click Add.

4. In the Basics section, complete the Subscription, Resource group, Name, and Region fields for your environment and then click Next: Resource.
5. In the Resource section, complete the Connection method and the Resource ID or alias fields.
 For Connection Method, select Connect to an Azure resource by resource ID or alias.
 In Snowflake, execute SYSTEM$GET_PRIVATELINK_CONFIG and input the value for privatelink-pls-id into the Resource ID or alias field. Note that the alias value for the east-us-2 region is used here as a representative example, and that Azure confirms a valid alias value with a green checkmark.
 If you receive an error message regarding the alias value, contact Snowflake Support to receive the resource ID value and then repeat this step using the resource ID value.
6. Return to the Private endpoints section and wait a few minutes. The Private Endpoint displays a CONNECTION STATE value of Pending. This value updates to Approved after you complete the authorization in the next step.
7. Enable your Snowflake account on Azure to use Azure Private Link by completing the
following steps:
 In your command line environment, record the private endpoint resource ID value
using the following Azure CLI network command:
 az network private-endpoint show
The private endpoint was created in the previous steps. The resource ID value takes the following form, which has a truncated value:
/subscriptions/26d.../resourcegroups/sf-1/providers/microsoft.network/
privateendpoints/test-self-service
 In your command line environment, execute the following Azure CLI
account command and save the output. The output will be used as the value for
the federated_token argument in the next step.
 az account get-access-token --subscription <SubscriptionID>
Extract the access token value from the command output. This value will be used as
the federated_token value in the next step. In this example, the values are truncated
and the access token value is eyJ...:
{
"accessToken": "eyJ...",
"expiresOn": "2021-05-21 21:38:31.401332",
"subscription": "0cc...",
"tenant": "d47...",
"tokenType": "Bearer"
}
Important
The user generating the Azure access Token must have Read permissions on the
Subscription. The least privilege permission
is Microsoft.Subscription/subscriptions/acceptOwnershipStatus/read. Alternatively,
the default role Reader grants more coarse-grained permissions.
The accessToken value is sensitive information and should be treated like a password
value — do not share this value.
If it is necessary to contact Snowflake Support, redact the access token from any
commands and URLs before creating a support ticket.
 In Snowflake, call the SYSTEM$AUTHORIZE_PRIVATELINK function, using
the private-endpoint-resource-id value and the federated_token value as arguments,
which are truncated in this example:
 USE ROLE ACCOUNTADMIN;
 SELECT SYSTEM$AUTHORIZE_PRIVATELINK (
 '/subscriptions/26d.../resourcegroups/sf-1/providers/microsoft.network/
privateendpoints/test-self-service',
 'eyJ...'
 );
To verify your authorized configuration, call the SYSTEM$GET_PRIVATELINK function
in your Snowflake account on Azure. Snowflake
returns Account is authorized for PrivateLink. for a successful authorization.
If it is necessary to disable Azure Private Link in your Snowflake account, call
the SYSTEM$REVOKE_PRIVATELINK function, using the argument values for private-
endpoint-resource-id and federated_token.
8. DNS Setup. All requests to Snowflake need to be routed via the Private Endpoint. Update
your DNS to resolve the Snowflake account and OCSP URLs to the private IP address of
your Private Endpoint.
 To get the endpoint IP address, navigate to Azure portal search bar and enter the name
of the endpoint (i.e. the NAME value from Step 4). Locate the Network Interface
result and click it.
 Copy the value for the Private IP address (i.e. 10.0.27.5).
 Configure your DNS to have the following endpoint values from
the SYSTEM$GET_PRIVATELINK_CONFIG function resolve to the private IP
address.
These endpoint values allow you to access Snowflake, Snowsight, and the Snowflake
Marketplace while also using OCSP to determine whether a certificate is revoked
when Snowflake clients attempt to connect to an endpoint through HTTPS
and connection URLs.
The function values to obtain are:
 privatelink-account-url
 privatelink-connection-ocsp-urls
 privatelink-connection-urls
 privatelink-ocsp-url
 regionless-privatelink-account-url
 regionless-snowsight-privatelink-url
 snowsight-privatelink-url
Note that the values for regionless-snowsight-privatelink-url and snowsight-
privatelink-url allow access to Snowsight and the Snowflake Marketplace using
private connectivity. However, there is additional configuration if you want to enable
URL redirects.
For details, see Snowsight & Private Connectivity.
Note
A full explanation of DNS configuration is beyond the scope of this procedure. For
example, you can choose to integrate an Azure Private DNS zone into your
environment. Please consult your internal Azure and Cloud Infrastructure
administrators to configure and resolve the URLs in DNS properly.
9. After verifying your outbound firewall settings and DNS records include your Azure Private
Link account and OCSP URLs, test your connection to Snowflake with SnowCD
(Connectivity Diagnostic Tool) and SYSTEM$ALLOWLIST_PRIVATELINK.
10. Connect to Snowflake with your private connectivity account URL.
Note that if you want to connect to Snowsight via Azure Private Link, follow the instructions
in the Snowsight documentation.
Using SSO with Azure Private Link
Snowflake supports using SSO with Azure Private Link. For more information, see:
 SSO with private connectivity
 Partner applications
Using Client Redirect with Azure Private Link
Snowflake supports using Client Redirect with Azure Private Link.
For more information, see Redirecting Client Connections.
Using Replication & Tri-Secret Secure with Private Connectivity
Snowflake supports replicating your data from the source account to the target account, regardless of
whether you enable Tri-Secret Secure or this feature in the target account.
For details, refer to Database replication and encryption.
Blocking public access — Optional
After testing the Azure Private Link connectivity with Snowflake, you can optionally block public
access to Snowflake using Controlling network traffic with network policies.
Configure the CIDR block range to block public access to Snowflake using your organization’s IP
address range. This range can be from within your virtual network.
Once the CIDR Block ranges are set, only IP addresses within the CIDR block range can access
Snowflake.
To block public access using a network policy:
1. Create a new network policy or edit an existing network policy. Add the CIDR block range
for your organization.
2. Activate the network policy for your account.
Google Cloud Private Service Connect and Snowflake
BUSINESS CRITICAL FEATURE
This feature requires Business Critical (or higher).
If you are using Business Critical (or higher) and wish to use this feature with your account, please
contact Snowflake Support and request it to be enabled, as described in this topic.
This topic describes concepts and how to configure Google Cloud Private Service
Connect to connect your Google Cloud Virtual Private Cloud (VPC) network subnet to
your Snowflake account hosted on Google Cloud Platform without traversing the public
Internet.
Note that Google Cloud Private Service Connect is not a service provided by Snowflake.
It is a Google service that Snowflake enables for use with your Snowflake account.
Overview
Google Cloud Private Service Connect provides private connectivity to Snowflake by ensuring that
access to Snowflake is through a private IP address. Snowflake appears as a resource in your
network (i.e. customer network), but the traffic flows one-way from your VPC to the Snowflake
VPC over the Google networking backbone. This setup significantly simplifies the network
configuration while providing secure and private communication.
The following diagram summarizes the Google Cloud Private Service Connect architecture with
respect to the customer Google Cloud VPC and the Snowflake service.
A Google Compute Engine instance (i.e. a virtual machine) connects to a private virtual IP address which
routes to a forwarding rule (1). The forwarding rule connects to the service attachment through a
private connection (2). The connection is routed through a load balancer (3) that redirects to
Snowflake (4).
Limitations
 The Snowflake system functions for self-service management are not supported. For details,
see Current Limitations for Accounts on GCP.
See also:
 Account Identifiers
 Connecting to Your Accounts
Configuration procedure
This section describes how to configure Google Cloud Private Service Connect to connect to
Snowflake.
Attention
This section only covers the Snowflake-specific details for configuring your Google Cloud VPC
environment. Also, note that Snowflake is not responsible for the actual configuration of the required
firewall updates and DNS records.
If you encounter issues with any of these configuration tasks, please contact Google Support directly.
For installation help, see the Google documentation on the Cloud SDK: Command Line Interface.
For additional help, contact your internal Google Cloud administrator.
1. Contact Snowflake Support and provide a list of your Google Cloud <project_id> values and
the corresponding URLs that you use to access Snowflake with a note to enable Google
Cloud Private Service Connect. After receiving a response from Snowflake Support, continue
to the next step.
Important
If you are using VPC Service Controls in your VPC, ensure that the policy allows access to
the Snowflake service before contacting Snowflake Support.
If this action is not taken, Snowflake will not be able to add your project ID to the Snowflake
service attachment allow list. The result is that you will be blocked from being able to
connect to Snowflake using this feature.
2. In a Snowflake worksheet, run the SYSTEM$GET_PRIVATELINK_CONFIG function with
the ACCOUNTADMIN system role, and save the command output for use in the following
steps:
use role accountadmin;
select key, value from table(flatten(input=>parse_json(system$get_privatelink_config())));
3. In a command line interface (e.g. the Terminal application), update the gcloud library to the
latest version:
gcloud components update
4. Authenticate to Google Cloud Platform using the following command:
gcloud auth login
5. In your Google Cloud VPC, set the project ID in which the forwarding rule should reside.
gcloud config set project <project_id>
To obtain a list of project IDs, execute the following command:
gcloud projects list --sort-by=projectId
6. In your Google Cloud VPC, create a virtual IP address:
gcloud compute addresses create <customer_vip_name> \
--subnet=<subnet_name> \
--addresses=<customer_vip_address> \
--region=<region>
For example:
gcloud compute addresses create psc-vip-1 \
--subnet=psc-subnet \
--addresses=192.168.3.3 \
--region=us-central1
# returns
Created [https://www.googleapis.com/compute/v1/projects/docstest-123456/regions/us-central1/addresses/psc-vip-1].
Where:
 <customer_vip_name> specifies the name of the virtual IP rule (i.e. psc-vip-1 ).
 <subnet_name> specifies the name of the subnet.
 <customer_vip_address> : all private connectivity URLs resolve to this address. Specify
an IP address from your network or use CIDR notation to specify a range of IP
addresses.
 <region> specifies the cloud region where your Snowflake account is located.
7. Create a forwarding rule to have your subnet route to the Private Service Connect endpoint
and then to the Snowflake service endpoint:
gcloud compute forwarding-rules create <name> \
--region=<region> \
--network=<network_name> \
--address=<customer_vip_name> \
--target-service-attachment=<privatelink-gcp-service-attachment>
For example:
gcloud compute forwarding-rules create test-psc-rule \
--region=us-central1 \
--network=psc-vpc \
--address=psc-vip-1 \
--target-service-attachment=projects/us-central1-deployment1-c8cc/regions/us-central1/
serviceAttachments/snowflake-us-central1-psc
# returns
Created [https://www.googleapis.com/compute/projects/mdlearning-293607/regions/us-central1/forwardingRules/test-psc-rule].
Where:
 <name> specifies the name of the forwarding rule.
 <region> specifies the cloud region where your Snowflake account is located.
 <network_name> specifies the name of the network for this forwarding rule.
 <customer_vip_name> specifies the <name> value (i.e. psc-vip-1 ) of the virtual IP address
created in the previous step.
 <privatelink-gcp-service-attachment> specifies the endpoint for the Snowflake service (see
step 2).
8. Use the following command to verify the forwarding rule was created successfully:
gcloud compute forwarding-rules list --regions=<region>
The cloud region in this command must match the cloud region where your Snowflake
account is located.
For example, if your Snowflake account is located in the europe-west-2 region,
replace <region> with europe-west2.
For a complete list of Google Cloud regions and their formatting, see Viewing a list of
available regions.
9. Update your DNS settings.
All requests to Snowflake need to be routed through the Private Service Connect endpoint so
that the URLs in step 2 (from the SYSTEM$GET_PRIVATELINK_CONFIG function)
resolve to the VIP address that you created ( <customer_vip_address>).
These endpoint values allow you to access Snowflake, Snowsight, and the Snowflake
Marketplace while also using OCSP to determine whether a certificate is revoked when
Snowflake clients attempt to connect to an endpoint through HTTPS and connection URLs.
The function values to obtain are:
 privatelink-account-url
 privatelink-connection-ocsp-urls
 privatelink-connection-urls
 privatelink-ocsp-url
 regionless-privatelink-account-url
 regionless-snowsight-privatelink-url
 snowsight-privatelink-url
Note that the values for regionless-snowsight-privatelink-url
and snowsight-privatelink-url allow access
to Snowsight and the Snowflake Marketplace using private connectivity. However, there is
additional configuration if you want to enable URL redirects.
For details, see Snowsight & Private Connectivity.
Note
A full explanation of DNS configuration is beyond the scope of this procedure. For example,
you can choose to integrate a private DNS zone into your environment using Cloud DNS.
Please consult your internal Google Cloud and cloud infrastructure administrators to
configure and resolve the URLs in DNS properly.
10. Test your connection to Snowflake using SnowCD (Connectivity Diagnostic Tool).
11. Connect to Snowflake with your private connectivity account URL.
Note that if you want to connect to Snowsight via Google Cloud Private Service Connect,
follow the instructions in the Snowsight documentation.
Using SSO with Google Private Service Connect
Snowflake supports using SSO with Google Cloud Private Service Connect. For more information,
see:
• SSO with private connectivity
• Partner applications
Using Client Redirect with Google Cloud Private Service Connect
Snowflake supports using Client Redirect with Google Cloud Private Service Connect.
For more information, see Redirecting Client Connections.
Using Replication & Tri-Secret Secure with Private Connectivity
Snowflake supports replicating your data from the source account to the target account, regardless of whether you enable Tri-Secret Secure or Google Cloud Private Service Connect in the target account.
For details, refer to Database replication and encryption.
Blocking public access (Optional)
After testing the Google Cloud Private Service Connect connectivity with Snowflake, you
can optionally block public access to Snowflake using Controlling network traffic with network
policies.
Configure the CIDR block range to block public access to Snowflake using your organization’s IP
address range. This range can be from within your virtual network.
Once the CIDR Block ranges are set, only IP addresses within the CIDR block range can access
Snowflake.
To block public access using a network policy (a sketch follows these steps):
1. Create a new network policy or edit an existing network policy. Add the CIDR block range
for your organization.
2. Activate the network policy for your account.
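For illustration, a minimal sketch of these two steps (the policy name and CIDR range are placeholders for your organization's values):
-- Step 1: create a network policy that only allows the organization's CIDR range (hypothetical range)
CREATE NETWORK POLICY allow_corp_only ALLOWED_IP_LIST = ('192.168.1.0/24');

-- Step 2: activate the network policy for the account
ALTER ACCOUNT SET NETWORK_POLICY = allow_corp_only;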
Snowflake Sessions & Session Policies
ENTERPRISE EDITION FEATURE
Session policies require Enterprise Edition or higher. To inquire about upgrading, please contact Snowflake Support.
This topic describes Snowflake sessions and session policies and provides instructions for
configuring session policies at the account or user level.
Snowflake Sessions
A session begins when a user connects to Snowflake and authenticates successfully using a
Snowflake programmatic client, Snowsight, or the Classic Console. A session is independent of an
identity provider (i.e. IdP) session. If the Snowflake session expires but the IdP session remains
active, a user can log in to Snowflake without entering their login credentials again (i.e. silent
authentication).
A session is maintained indefinitely with continued user activity. After a period of inactivity in the
session, known as the idle session timeout, the user must authenticate to Snowflake again. The idle
session timeout has a maximum value of four hours and a session policy can modify the idle session
timeout period. The idle session timeout applies to the following:
• Classic Console.
• Snowsight.
• SnowSQL (CLI client).
• Supported connectors and drivers.
• Third-party clients that connect to Snowflake using a supported connector or driver.
Snowflake recommends reusing existing sessions when possible and to close the connection to
Snowflake when a session is no longer needed.
Snowsight Sessions
Snowflake creates a new session for each worksheet in Snowsight. A worksheet session enforces the
session policy that applies to the user that creates the worksheet.
Caution
Active queries are not canceled when the session ends and the user is logged out, even if
the ABORT_DETACHED_QUERY parameter is set to true.
Classic Console Sessions
In the Worksheets tab, Snowflake creates a new session every time a new worksheet is created. Each worksheet is limited to a maximum of 4 hours of idle time, and the idle timeout for each worksheet is tracked separately.
When a worksheet is closed, the user session for the worksheet ends.
After the 4-hour time limit expires for any open worksheet, Snowflake logs the user out of the web
interface.
Note
Note that passive behaviors such as scrolling through the query result set or sorting a data set do not
reset the idle session timeout tracker.
To prevent a session from closing too early and being logged out of the Classic Console, save any
necessary SQL statements to a local file and close any open worksheets that are not in use.
Session Policies
A session policy defines the idle session timeout period in minutes and provides the option to
override the default idle timeout value of 4 hours.
The session policy can be set for an account or user with configurable idle timeout periods to address
compliance requirements. If a user is associated with both an account and user-level session policy,
the user-level session policy takes precedence. After the session policy is set on the account or user,
Snowflake enforces the session policy.
There are two properties that govern the session policy behavior:
• SESSION_IDLE_TIMEOUT_MINS for programmatic and Snowflake Clients.
• SESSION_UI_IDLE_TIMEOUT_MINS for the Classic Console and Snowsight.
The timeout period begins upon a successful authentication to Snowflake. If a session policy is not
set, Snowflake uses a default value of 240 minutes (i.e. 4 hours). The minimum configurable idle
timeout value for a session policy is 5 minutes. When the session expires, the user must authenticate
to Snowflake again. However, Snowflake does not enforce any setting defined by the Custom logout
endpoint.
For more information, see:
• Managing session policies.
• The SESSIONS view to monitor session usage (a sample query follows this list).
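For example, a minimal monitoring sketch against the Account Usage views (the columns shown are a subset, and Account Usage views have some latency):
-- One row per session policy defined in the account
SELECT * FROM SNOWFLAKE.ACCOUNT_USAGE.SESSION_POLICIES;

-- Recent sessions, to observe how sessions are being created
SELECT session_id, user_name, created_on, authentication_method
  FROM SNOWFLAKE.ACCOUNT_USAGE.SESSIONS
  ORDER BY created_on DESC
  LIMIT 10;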
Considerations
• If a client supports the CLIENT_SESSION_KEEP_ALIVE option and the option is set to TRUE, the client preserves the Snowflake session indefinitely as long as the connection to Snowflake is active. Otherwise, if the option is set to FALSE, the session ends after 4 hours. When possible, avoid using this option since it can result in many open sessions and place a greater demand on resources, which can lead to performance degradation.
• You can use the CLIENT_SESSION_KEEP_ALIVE_HEARTBEAT_FREQUENCY parameter to specify the number of seconds between client attempts to update the token for the session. The web interface session can be refreshed as Snowflake objects continue to be used, such as executing DDL and DML statements. Snowflake checks for this behavior every 30 seconds.
• Creating a new worksheet or opening an existing worksheet continues to use the established user session but with its idle session timeout reset to 0.
• Tracking session policy usage:
  • Query the Account Usage SESSION_POLICIES view to return a row for each session policy in your Snowflake account.
  • Use the Information Schema table function POLICY_REFERENCES to return a row for each user that is assigned to the specified session policy and a row for the session policy assigned to the Snowflake account.
Currently, only the following syntax is supported for session policies:
POLICY_REFERENCES( POLICY_NAME => '<session_policy_name>' )
Where session_policy_name is the fully qualified name of the session policy.
For example, execute the following query to return a row for each user that is
assigned the session policy named session_policy_prod_1, which is stored in the database
named my_db and the schema named my_schema:
SELECT *
FROM TABLE(
MY_DB.INFORMATION_SCHEMA.POLICY_REFERENCES(
POLICY_NAME => 'my_db.my_schema.session_policy_prod_1'
)
);
Limitations
Future grants
Future grants of privileges on session policies are not supported.
As a workaround, grant the APPLY SESSION POLICY privilege to a custom role to allow
that role to apply session policies on a user or the Snowflake account.
Implementing a Session Policy
The following steps are a representative guide to implementing a session policy.
These steps assume a centralized management approach in which a custom role
named policy_admin owns the session policy (i.e. has the OWNERSHIP privilege on the session
policy) and is responsible for setting the session policy on an account or user (i.e. has the APPLY
SESSION POLICY on ACCOUNT privilege or the APPLY SESSION POLICY ON USER
privilege).
Note
To set a policy on an account, the policy_admin custom role must have the following permissions:
• USAGE on the database and schema that contain the session policy.
• CREATE SESSION POLICY on the schema that contains the session policy.
Follow these steps to implement a session policy.
Step 1: Create the POLICY_ADMIN Custom Role
Create a custom role that allows users to create and manage session policies. Throughout this topic,
the example custom role is named policy_admin, although the role could have any appropriate name.
If the custom role already exists, continue to the next step.
Otherwise, create the POLICY_ADMIN custom role.
USE ROLE USERADMIN;

CREATE ROLE policy_admin;
Step 2: Grant Privileges to the POLICY_ADMIN Custom Role
If the POLICY_ADMIN custom role does not already have the following privileges, grant these
privileges as shown below:
• USAGE on the database and schema that will contain the session policy.
• CREATE SESSION POLICY on the schema that will contain the session policy.
• APPLY SESSION POLICY on the account.
• APPLY SESSION POLICY on each user, if you plan to set session policies at the user level.
USE ROLE SECURITYADMIN;

GRANT USAGE ON DATABASE mydb TO ROLE policy_admin;

GRANT USAGE, CREATE SESSION POLICY ON SCHEMA mydb.policies TO ROLE policy_admin;

GRANT APPLY SESSION POLICY ON ACCOUNT TO ROLE policy_admin;
If associating a session policy with an individual user:
GRANT APPLY SESSION POLICY ON USER jsmith TO ROLE policy_admin;
For more information, see Summary of DDL Commands, Operations, and Privileges.
Step 3: Create a New Session Policy
Using the POLICY_ADMIN custom role, create a new session policy where the idle timeout value
for programmatic clients, Snowflake clients, and the web interface is 60 minutes each. For more
information, see CREATE SESSION POLICY.
USE ROLE POLICY_ADMIN;

CREATE SESSION POLICY mydb.policies.session_policy_prod_1
  SESSION_IDLE_TIMEOUT_MINS = 60
  SESSION_UI_IDLE_TIMEOUT_MINS = 60
  COMMENT = 'Session policy for the prod_1 environment'
;
Where:
mydb.policies.session_policy_prod_1
The fully qualified name of the session policy.
session_idle_timeout_mins = 60
The idle timeout period in minutes for Snowflake Clients and programmatic clients.
session_ui_idle_timeout_mins = 60
The idle timeout period in minutes for the Snowflake web interface.
comment = 'Session policy for the prod_1 environment'
A comment specifying the purpose of the session policy.
Step 4: Set the Session Policy on an Account or User
Using the POLICY_ADMIN custom role, set the policy on an account with the ALTER
ACCOUNT command, or a user (e.g. username jsmith) with the ALTER USER command.
USE ROLE policy_admin;

ALTER ACCOUNT SET SESSION POLICY mydb.policies.session_policy_prod_1;

ALTER USER jsmith SET SESSION POLICY mydb.policies.session_policy_prod_1_jsmith;
Important
To replace a session policy that is already set for an account or user, unset the session policy first and
then set the new session policy for the account or user. For example:
ALTER ACCOUNT UNSET SESSION POLICY;

ALTER ACCOUNT SET SESSION POLICY mydb.policies.session_policy_prod_2;
Step 5: Replicate the Session Policy to a Target Account
A session policy and its references (i.e. assignments to a user or the account) can be replicated from
the source account to the target account using database replication and account replication. For
details, refer to:
• Account replication.
• Database replication.
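As an illustration, a minimal sketch using database replication to carry the database that contains the session policy to another account (organization and account names are hypothetical; replicating the policy assignments themselves requires account replication):
-- On the source account: enable replication for the database that contains the policy
ALTER DATABASE mydb ENABLE REPLICATION TO ACCOUNTS myorg.target_account;

-- On the target account: create and refresh the secondary database
CREATE DATABASE mydb AS REPLICA OF myorg.source_account.mydb;
ALTER DATABASE mydb REFRESH;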
Managing Session Policies
Session Policy Privilege Reference
Snowflake supports the following session policy privileges to determine whether users can create,
set, and own session policies.
Note that operating on any object in a schema also requires the USAGE privilege on the parent
database and schema.
• CREATE: Enables creating a new session policy in a schema.
• APPLY SESSION POLICY: Enables applying a session policy at the account or user level.
• OWNERSHIP: Grants full control over the session policy. Required to alter most properties of a session policy.
Summary of DDL Commands, Operations, and Privileges
The following list summarizes the relationship between the session policy DDL operations and their necessary privileges.
• Create session policy: A role with the CREATE SESSION POLICY privilege on the schema.
• Alter session policy: A role with the OWNERSHIP privilege on the session policy.
• Drop session policy: A role with the OWNERSHIP privilege on the session policy.
• Describe session policy: A role with the OWNERSHIP privilege on the session policy or the APPLY SESSION POLICY privilege on the account.
• Show session policies: A role with the OWNERSHIP privilege on the session policy or the APPLY SESSION POLICY privilege on the account.
• Set & unset session policy: For accounts, a role with the APPLY SESSION POLICY privilege on the account and the OWNERSHIP privilege on the session policy, or a role with the APPLY SESSION POLICY privilege on the account and the APPLY ON SESSION POLICY privilege on a specific session policy. For users, a role with the APPLY SESSION POLICY ON USER <username> privilege.
Session Policy DDL Reference
Snowflake provides the following DDL commands to manage session policy objects:
• CREATE SESSION POLICY
• ALTER SESSION POLICY
• DROP SESSION POLICY
• SHOW SESSION POLICIES
• DESCRIBE SESSION POLICY
To set or unset a session policy on the account, execute the ALTER ACCOUNT command as shown
below.
ALTER ACCOUNT SET SESSION POLICY mydb.policies.session_policy_prod_1;
ALTER ACCOUNT UNSET SESSION POLICY;
To set or unset a user-level session policy, execute the ALTER USER command as shown below.
ALTER USER jsmith SET SESSION POLICY mydb.policies.session_policy_prod_1_jsmith;
ALTER USER jsmith UNSET SESSION POLICY;
Troubleshooting Session Policies
• If a session policy is assigned to an account or a user and the database or schema that contains the session policy is dropped, and then a new session policy is assigned to the account or user, the user will not be held to the idle session timeout value(s) in the new session policy.
  The workaround is to unset the original session policy from the account using an ALTER ACCOUNT command or from the user using an ALTER USER command as shown in this topic.
• The following list summarizes some error messages that can occur with session policies.
  • Behavior: Cannot create a session policy.
    Error message: Cannot perform CREATE SESSION POLICY. This session does not have a current database. Call 'USE DATABASE', or use a qualified name.
    Troubleshooting action: Specify a database prior to executing CREATE SESSION POLICY or use the fully qualified object name in the CREATE SESSION POLICY statement.
  • Behavior: Cannot create a session policy.
    Error message: SQL access control error: Insufficient privileges to operate on schema '<schema_name>'.
    Troubleshooting action: Verify that the role executing the CREATE SESSION POLICY statement has the CREATE SESSION POLICY on SCHEMA privilege.
  • Behavior: Cannot create a session policy.
    Error message: SQL compilation error: Database '<database_name>' does not exist or not authorized.
    Troubleshooting action: Verify that the database exists and that the role executing the CREATE SESSION POLICY statement has the USAGE privilege on the schema in which the session policy should exist.
  • Behavior: Cannot execute a describe statement.
    Error message: SQL compilation error: Schema '<schema_name>' does not exist or not authorized.
    Troubleshooting action: Verify that the role executing the DESC SESSION POLICY statement has the OWNERSHIP privilege on the session policy or the APPLY privilege on the session policy.
  • Behavior: Cannot drop a session policy.
    Error message: SQL compilation error: Session policy '<policy_name>' does not exist or not authorized.
    Troubleshooting action: Verify that the role executing the DROP SESSION POLICY statement has the OWNERSHIP privilege on the session policy.
  • Behavior: Cannot drop a session policy.
    Error message: Session policy <policy_name> cannot be dropped because it is attached to an account.
    Troubleshooting action: Unset the session policy from the account with an ALTER ACCOUNT statement and try the drop statement again.
  • Behavior: Cannot set a session policy on an account.
    Error message: Session policy '<policy_name>' is already attached to account <account_name>.
    Troubleshooting action: An account can only have one active session policy. Determine which session policy should be set for the account. If necessary, unset the current session policy from the account with an ALTER ACCOUNT command; then set the other session policy on the account with another ALTER ACCOUNT command.
  • Behavior: Cannot set a timeout value.
    Error message: SQL compilation error: invalid value '<integer>' for property 'session_idle_timeout_mins'.
    Troubleshooting action: The session timeout value, in minutes, must be an integer between 5 and 240, inclusive. Choose a valid integer for the session timeout and execute the CREATE or ALTER SESSION POLICY statement again.
  • Behavior: Cannot update an existing session policy.
    Error message: SQL compilation error: Session policy '<policy_name>' does not exist or not authorized.
    Troubleshooting action: Verify the name of the session policy, the syntax of the ALTER SESSION POLICY command, and the privileges to operate on the session policy, database, and schema.
Managing users and groups with SCIM
This topic describes:
1. SCIM integrations with Snowflake.
2. SCIM use cases with Snowflake.
3. Using the Snowflake SCIM APIs to make requests to an identity provider.
SCIM overview
SCIM is an open specification to help facilitate the automated management of user identities and
groups (i.e. roles) in cloud applications using RESTful APIs.
These APIs use common methods (e.g. GET, POST) with key-value pair attributes in JSON format.
The set of user attributes is unique to the user. Similarly, the set of group attributes is unique to the group.
Currently, Snowflake supports SCIM 2.0 to integrate Snowflake with Okta and Microsoft Azure AD,
which both function as identity providers. Snowflake also supports identity providers that are neither
Okta nor Microsoft Azure (i.e. Custom). Users and groups from the identity provider can be
provisioned into Snowflake, which functions as the service provider.
Important
The specific SCIM role in Snowflake must own any users and roles that are imported from the
identity provider. If the Snowflake SCIM role does not own the imported users or roles, updates in
the identity provider will not be synced to Snowflake. The Snowflake SCIM role is specific to the
identity provider (IdP) and is as follows:
• Okta SCIM Role: okta_provisioner
• Azure AD SCIM Role: aad_provisioner
• Custom SCIM Role: generic_scim_provisioner
For more information on how to use the Snowflake SCIM Role, see the SCIM configuration sections
for Okta, Azure AD, and the Custom SCIM integration.
The identity provider uses a SCIM client to make the RESTful API request to the Snowflake SCIM
server. Upon validating the API request, Snowflake performs actions on the user or group.
The authentication process uses an OAuth Bearer token and this token is valid for six months.
Customers must keep track of their authentication token and can generate a new token on demand.
During each API request, the token is passed into the header as an authorization Bearer parameter.
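For context, the token is typically generated when the SCIM security integration is created. A minimal sketch for an Okta integration, following the role naming above (see the Okta SCIM configuration topic for the authoritative steps):
USE ROLE ACCOUNTADMIN;

-- Create the provisioner role and grant the privileges SCIM provisioning requires
CREATE ROLE IF NOT EXISTS okta_provisioner;
GRANT CREATE USER ON ACCOUNT TO ROLE okta_provisioner;
GRANT CREATE ROLE ON ACCOUNT TO ROLE okta_provisioner;
GRANT ROLE okta_provisioner TO ROLE ACCOUNTADMIN;

-- Create the SCIM security integration and generate the OAuth Bearer token
CREATE OR REPLACE SECURITY INTEGRATION okta_provisioning
  TYPE = SCIM
  SCIM_CLIENT = 'OKTA'
  RUN_AS_ROLE = 'OKTA_PROVISIONER';

SELECT SYSTEM$GENERATE_SCIM_ACCESS_TOKEN('OKTA_PROVISIONING');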
After the initial Snowflake SCIM configuration, you can use the Snowflake SCIM API to address the
following use cases:
1. User Lifecycle Management
2. Map Active Directory Groups to Snowflake Roles
Caution
Currently, the Snowflake SCIM API allows administrators to manage users and groups from the
Customer’s identity provider to Snowflake.
Therefore, if using the Snowflake SCIM API for user and group identity and access management
administration, do not make user and group changes in Snowflake (because those changes will not
synchronize back to the Customer identity provider).
SCIM use cases
The Snowflake SCIM API can address the following use cases.
Manage Users
Administrators can provision and manage their users from their organization’s identity
provider to Snowflake. User management is a one-to-one mapping from the identity provider
to Snowflake.
Manage Roles
Administrators can provision and manage their groups (i.e. Roles) from their organization’s
identity provider to Snowflake. Role management is a one-to-one mapping from the identity
provider to Snowflake.
Auditing
Administrators can query the rest_event_history table to determine whether the identity provider
is sending updates (i.e. SCIM API requests) to Snowflake.
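For example, a sketch of an auditing query using the REST_EVENT_HISTORY table function (the time window and row limit are illustrative):
-- Return SCIM API requests received during the past hour (up to 200 rows)
SELECT *
  FROM TABLE(SNOWFLAKE.INFORMATION_SCHEMA.REST_EVENT_HISTORY(
    'scim',
    DATEADD('hour', -1, CURRENT_TIMESTAMP()),
    CURRENT_TIMESTAMP(),
    200));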
Managing users with SCIM
Snowflake supports user lifecycle management with SCIM APIs. User lifecycle management is an
administrative activity that coincides with the pathway the user takes throughout the organization.
Common events in the user’s lifecycle include these activities.
1. Creating and activating a user.
2. Updating user attributes, such as email or password.
3. Deactivating a user upon termination.
In Snowflake, user lifecycle automation using SCIM requires passing user attributes in the body of
the API request. A successful API request from the identity provider affects the user in Snowflake
almost immediately.
User attributes
User attributes are the key-value pairs that are associated with the user. These pairs are the
information about the user, such as their display name and their email address. Identity providers
sometimes define attribute keys differently, such as surname, familyName, or lastName, all of which
indicate the user’s last name. Snowflake does not support multi-valued user attributes.
The Snowflake SCIM API passes user attributes in JSON format, which are shown in the
corresponding User API examples.
Snowflake supports the following SCIM attributes for user lifecycle management. Attributes are
writable unless otherwise noted.
Important
In the table, the Snowflake userName and loginName attributes are both mapped to the SCIM
attribute userName. Snowflake supports different values for the Snowflake attribute
of userName and loginName. This attribute value separation allows customers to have more granular
control over which attribute value can be used to access Snowflake.
For more information, see the User API examples for POST and PATCH.
• id (Snowflake attribute: id; type: string): The immutable, unique identifier (i.e. GUID) of the user in Snowflake. Snowflake does not expose this value in the DESCRIBE USER or SHOW USERS output. For certain request paths that contain this attribute (e.g. PATCH), the id can be found by calling the REST_EVENT_HISTORY table function. Check the IdP logs to ensure the values match.
• userName (Snowflake attributes: userName, loginName; type: string): The identifier used to log in to Snowflake. For details about these Snowflake attributes, see CREATE USER.
• name.givenName (Snowflake attribute: firstName; type: string): The first name of the user.
• name.familyName (Snowflake attribute: familyName; type: string): The last name of the user.
• emails (Snowflake attribute: email; type: string): The user email address. The value supports a single email address.
• displayName (Snowflake attribute: displayName; type: string): The name the user interface displays for the user.
• externalID (Snowflake attribute: N/A; type: string): The unique identifier set by the provisioning client (e.g. Azure, Okta).
• password (Snowflake attribute: password; type: string): The password for the user. This value is not returned in the JSON response. Note: If the SCIM security integration SYNC_PASSWORD property is set to FALSE and the SCIM API request specifies the password attribute, Snowflake ignores the value for the password attribute. All other attributes in the API request are processed accordingly.
• active (Snowflake attribute: disabled; type: boolean): Disables the user when set to false.
• groups (Snowflake attribute: N/A; type: string): A list of groups to which the user belongs. The group displayName is required. The user's groups are read-only and their membership must be updated with the SCIM Groups API.
• meta.created (Snowflake attribute: createdOn; type: string): The time the user is added to Snowflake.
• meta.lastModified (Snowflake attribute: lastModified; type: string): The time the user is last modified in Snowflake.
• meta.resourceType (Snowflake attribute: N/A; type: string): Users should have a value of user.
• schemas (Snowflake attribute: N/A; type: string): A comma-separated array of strings to indicate the namespace URIs. Supported values include: urn:ietf:params:scim:schemas:core:2.0:User, urn:ietf:params:scim:schemas:extension:enterprise:2.0:User, and urn:ietf:params:scim:schemas:extension:2.0:User.
Custom attributes
Snowflake supports setting custom attributes that are not defined by RFC 7643, such
as defaultRole, defaultWarehouse, and defaultSecondaryRoles.
Currently, Snowflake supports using two namespaces to set custom attributes for the following
POST, PUT, and PATCH API requests:
urn:ietf:params:scim:schemas:extension:enterprise:2.0:User
This namespace was part of the original SCIM implementation in Snowflake and can be used
to set custom attributes for Okta SCIM integrations only.
This namespace is not supported to set custom attributes for Microsoft Azure SCIM
integrations or custom SCIM integrations.
urn:ietf:params:scim:schemas:extension:2.0:User
This newer namespace can be used to set custom attributes for all SCIM integrations
and must be used for custom attributes with either a custom SCIM integration or a Microsoft
Azure SCIM integration.
User APIs
Snowflake supports the following APIs to facilitate user lifecycle management. You can obtain user
information with filters and also create, update, deactivate, and delete users.
• Check if a user exists: GET scim/v2/Users?filter=userName eq "{{user_name}}"
  Returns details of a user account with the given userName and an HTTP status code of 200 if successful.
• Get details for a specific user: GET scim/v2/Users/{{user_id}}
  Returns details of a user account with the given user_id and an HTTP status code of 200 if successful.
• Get details for a specific user: GET scim/v2/Users?startIndex=0&count=1
  Returns a sample user to allow the SCIM client to read the schema and an HTTP status code of 200 if successful.
• Create a user: POST scim/v2/Users
  Creates a user in Snowflake and returns an HTTP 201 status code if successful. If the user already exists or another conflict arises, Snowflake returns an HTTP 409 status code.
• Update a user with partial attributes: PATCH scim/v2/Users/{id}
  Deactivates the corresponding user if the active key is set to false. To activate a currently deactivated user, set active to true. PATCH requires an Operations key (i.e. op) with an array stating to replace an attribute value. Returns an HTTP status code of 200 if successful or 204 (i.e. no content) if not successful.
• Update a user: PUT scim/v2/Users/{id}
  Checks if a user with the given id exists. This operation fails if immutable attributes are requested to change. Otherwise, it replaces the read-write or write-only attributes based on the request. Returns an HTTP status code of 400 if the immutable attribute values in the request body do not match the values in Snowflake.
• Delete a user: DELETE scim/v2/Users/{id}
For more information on SCIM API requests, see Making a SCIM API request.
Overview of Access Control
This topic provides information on the main access control topics in Snowflake.
Access control framework
Snowflake’s approach to access control combines aspects from both of the following models:
• Discretionary Access Control (DAC): Each object has an owner, who can in turn grant access to that object.
• Role-based Access Control (RBAC): Access privileges are assigned to roles, which are in turn assigned to users.
The key concepts to understanding access control in Snowflake are:
• Securable object: An entity to which access can be granted. Unless allowed by a grant, access is denied.
• Role: An entity to which privileges can be granted. Roles are in turn assigned to users. Note that roles can also be assigned to other roles, creating a role hierarchy.
• Privilege: A defined level of access to an object. Multiple distinct privileges may be used to control the granularity of access granted.
• User: A user identity recognized by Snowflake, whether associated with a person or program.
In the Snowflake model, access to securable objects is allowed via privileges assigned to roles,
which are in turn assigned to users or other roles. Granting a role to another role creates a role
hierarchy, which is explained in the Role hierarchy and privilege inheritance section (in this topic).
In addition, each securable object has an owner that can grant access to other roles. This model is
different from a user-based access control model in which rights and privileges are assigned to each
user or groups of users. The Snowflake model is designed to provide a significant amount of both
control and flexibility.
Securable objects
Every securable object resides within a logical container in a hierarchy of containers. The top-most
container is the customer organization. Securable objects such as tables, views, functions, and stages
are contained in a schema object, which are in turn contained in a database. All databases for your
Snowflake account are contained in the account object. This hierarchy of objects and containers is
illustrated below:
To own an object means that a role has the OWNERSHIP privilege on the object. Each securable object is owned by a single role, which by default is the role used to create the object. When this role is assigned to users, they effectively have shared control over the object. The GRANT OWNERSHIP command lets you transfer the ownership of an object from one role to another role, including to database roles.
In a regular schema, the owner role has all privileges on the object by default, including the ability to
grant or revoke privileges on the object to other roles. In addition, ownership can be transferred from
one role to another. However, in a managed access schema, object owners lose the ability to make
grant decisions. Only the schema owner (i.e. the role with the OWNERSHIP privilege on the
schema) or a role with the MANAGE GRANTS privilege can grant privileges on objects in the
schema.
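For example, hedged sketches of both behaviors (object and role names are placeholders):
-- Transfer ownership of a table to another role, preserving existing grants
GRANT OWNERSHIP ON TABLE mydb.myschema.mytable TO ROLE analyst_role COPY CURRENT GRANTS;

-- Create a managed access schema; only the schema owner or a role with
-- MANAGE GRANTS can grant privileges on objects it contains
CREATE SCHEMA mydb.sensitive WITH MANAGED ACCESS;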
The ability to perform SQL actions on objects is defined by the privileges granted to the active role
in a user session. The following are examples of SQL actions available on various objects in
Snowflake:
• Ability to create a warehouse.
• Ability to list tables contained in a schema.
• Ability to add data to a table.
Roles
Roles are the entities to which privileges on securable objects can be granted and revoked. Roles are
assigned to users to allow them to perform actions required for business functions in their
organization. A user can be assigned multiple roles. This allows users to switch roles (i.e. choose
which role is active in the current Snowflake session) to perform different actions using separate sets
of privileges.
There are a small number of system-defined roles in a Snowflake account. System-defined roles
cannot be dropped. In addition, the privileges granted to these roles by Snowflake cannot be revoked.
Users who have been granted a role with the necessary privileges can create custom roles to meet
specific business and security needs.
Roles can be also granted to other roles, creating a hierarchy of roles. The privileges associated with
a role are inherited by any roles above that role in the hierarchy. For more information about role
hierarchies and privilege inheritance, see Role Hierarchy and Privilege Inheritance (in this topic).
Note
A role owner (i.e. the role that has the OWNERSHIP privilege on the role) does not inherit the
privileges of the owned role. Privilege inheritance is only possible within a role hierarchy.
Although additional privileges can be granted to the system-defined roles, it is not recommended.
System-defined roles are created with privileges related to account-management. As a best practice,
it is not recommended to mix account-management privileges and entity-specific privileges in the
same role. If additional privileges are needed, Snowflake recommends granting the additional
privileges to a custom role and assigning the custom role to the system-defined role.
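A minimal sketch of that recommendation (the custom role and warehouse names are hypothetical):
-- Keep entity-specific privileges in a custom role ...
USE ROLE USERADMIN;
CREATE ROLE reporting_role;

-- ... grant the extra privileges to the custom role ...
USE ROLE SECURITYADMIN;
GRANT USAGE ON WAREHOUSE report_wh TO ROLE reporting_role;

-- ... and grant the custom role to the system-defined role instead of
-- granting privileges to SYSADMIN directly
GRANT ROLE reporting_role TO ROLE SYSADMIN;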
Types of roles
The following role types are essentially the same, except for their scope. Both types enable
administrators to authorize and restrict access to objects in your account.
Note
Except where noted in the product documentation, the term role refers to either type.
Account roles
To permit SQL actions on any object in your account, grant privileges on the object to an
account role.
Database roles
To limit SQL actions to a single database, as well as any object in the database, grant
privileges on the object to a database role in the same database.
Note that database roles cannot be activated directly in a session. Grant database roles to account roles, which can be activated in a session (see the sketch after the following list).
For more information about database roles, see:
• Role hierarchy and privilege inheritance (in this topic)
• Database roles and role hierarchies (in this topic)
• Managing database object access using database roles
• Database roles in the shared SNOWFLAKE database
• CREATE <object> … CLONE
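As an illustration of the database role pattern described above (database, schema, and role names are hypothetical):
-- Create a database role and grant it privileges scoped to one database
CREATE DATABASE ROLE mydb.read_only;
GRANT USAGE ON SCHEMA mydb.public TO DATABASE ROLE mydb.read_only;
GRANT SELECT ON ALL TABLES IN SCHEMA mydb.public TO DATABASE ROLE mydb.read_only;

-- Grant the database role to an account role so it can be activated in a session
GRANT DATABASE ROLE mydb.read_only TO ROLE analyst_role;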
Instance roles
To permit access to an instance of a class, grant an instance role to an account role.
A class may have one or more class roles with different privileges granted to each role. When
an instance of a class is created, the instance role(s) can be granted to account roles to grant
access to instance methods.
Note that instance roles cannot be activated directly in a session. Grant instance roles to
account roles, which can be activated in a session.
For more information, see Instance Roles.
Active roles
Active roles serve as the source of authorization for any action taken by a user in a session. Both
the primary role and any secondary roles can be activated in a user session.
A role becomes an active role in either of the following ways:
• When a session is first established, the user's default role and default secondary roles are activated as the session primary and secondary roles, respectively.
  Note that client connection properties used to establish the session could explicitly override the primary role or secondary roles to use.
• Executing a USE ROLE or USE SECONDARY ROLES statement activates a different primary role or secondary roles, respectively. These roles can change over the course of a session if either command is executed again.
System-defined roles
ORGADMIN
(aka Organization Administrator)
Role that manages operations at the organization level. More specifically, this role:
• Can create accounts in the organization.
• Can view all accounts in the organization (using SHOW ORGANIZATION ACCOUNTS) as well as all regions enabled for the organization (using SHOW REGIONS).
• Can view usage information across the organization.
ACCOUNTADMIN
(aka Account Administrator)
Role that encapsulates the SYSADMIN and SECURITYADMIN system-defined roles. It is
the top-level role in the system and should be granted only to a limited/controlled number of
users in your account.
SECURITYADMIN
(aka Security Administrator)
Role that can manage any object grant globally, as well as create, monitor, and manage users
and roles. More specifically, this role:
• Is granted the MANAGE GRANTS security privilege to be able to modify any grant, including revoking it.
• Inherits the privileges of the USERADMIN role via the system role hierarchy (i.e. USERADMIN role is granted to SECURITYADMIN).
USERADMIN
(aka User and Role Administrator)
Role that is dedicated to user and role management only. More specifically, this role:
• Is granted the CREATE USER and CREATE ROLE security privileges.
• Can create users and roles in the account.
  This role can also manage users and roles that it owns. Only the role with the OWNERSHIP privilege on an object (i.e. user or role), or a higher role, can modify the object properties.
SYSADMIN
(aka System Administrator)
Role that has privileges to create warehouses and databases (and other objects) in an account.
If, as recommended, you create a role hierarchy that ultimately assigns all custom roles to the
SYSADMIN role, this role also has the ability to grant privileges on warehouses, databases,
and other objects to other roles.
PUBLIC
Pseudo-role that is automatically granted to every user and every role in your account. The
PUBLIC role can own securable objects, just like any other role; however, the objects owned
by the role are, by definition, available to every other user and role in your account.
This role is typically used in cases where explicit access control is not needed and all users
are viewed as equal with regard to their access rights.
Custom roles
Custom account roles can be created using the USERADMIN role (or a higher role) as well as by
any role to which the CREATE ROLE privilege has been granted.
Custom database roles can be created by the database owner (i.e. the role that has the OWNERSHIP
privilege on the database).
By default, a newly-created role is not assigned to any user, nor granted to any other role.
When creating roles that will serve as the owners of securable objects in the system, Snowflake
recommends creating a hierarchy of custom roles, with the top-most custom role assigned to the
system role SYSADMIN. This role structure allows system administrators to manage all objects in
the account, such as warehouses and database objects, while restricting management of users and
roles to the USERADMIN role.
Conversely, if a custom role is not assigned to SYSADMIN through a role hierarchy, the system
administrators cannot manage the objects owned by the role. Only those roles granted the MANAGE
GRANTS privilege (only the SECURITYADMIN role by default) can view the objects and modify
their access grants.
For instructions to create custom roles, see Creating custom roles.
Privileges
Access control privileges determine who can access and perform operations on specific objects in
Snowflake. For each securable object, there is a set of privileges that can be granted on it. For
existing objects, privileges must be granted on individual objects (e.g. the SELECT privilege on
the mytable table). To simplify grant management, future grants allow defining an initial set of
privileges on objects created in a schema (i.e. grant the SELECT privilege on all new tables created
in the myschema schema to a specified role).
Privileges are managed using the GRANT <privileges> and REVOKE <privileges> commands.
• In regular (i.e. non-managed) schemas, use of these commands is restricted to the role that owns an object (i.e. has the OWNERSHIP privilege on the object) or any roles that have the MANAGE GRANTS global privilege for the object (only the SECURITYADMIN role by default).
• In managed access schemas, object owners lose the ability to make grant decisions. Only the schema owner or a role with the MANAGE GRANTS privilege can grant privileges on objects in the schema, including future grants, centralizing privilege management.
Note that a role that holds the global MANAGE GRANTS privilege can grant additional privileges
to the current (grantor) role.
For more details, see Access control privileges.
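For example, the individual grant and the future grant mentioned above could look like this (names are placeholders):
-- Privilege on an existing object
GRANT SELECT ON TABLE mydb.myschema.mytable TO ROLE analyst_role;

-- Future grant: every new table created in the schema is readable by the role
GRANT SELECT ON FUTURE TABLES IN SCHEMA mydb.myschema TO ROLE analyst_role;

-- Revoking works the same way
REVOKE SELECT ON TABLE mydb.myschema.mytable FROM ROLE analyst_role;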
Role hierarchy and privilege inheritance
The following diagram illustrates the hierarchy for the system-defined roles, as well as the
recommended structure for additional, user-defined account roles and database roles. The highest-
level database role in the example hierarchy is granted to a custom (i.e. user-defined) account role. In
turn, this role is granted to another custom role in a recommended structure that allows the system-
defined SYSADMIN role to inherit the privileges of custom account roles and database roles:
Note
ORGADMIN is a separate system role that manages operations at the organization level. This role is
not included in the hierarchy of system roles.
For a more specific example of role hierarchy and privilege inheritance, consider the following scenario (expressed as GRANT statements in the sketch after this list):
• Role 3 has been granted to Role 2.
• Role 2 has been granted to Role 1.
• Role 1 has been granted to User 1.
In this scenario:
• Role 2 inherits Privilege C.
• Role 1 inherits Privileges B and C.
• User 1 has all three privileges.
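A minimal sketch of the scenario above expressed as grants (the specific privileges A, B, and C are hypothetical stand-ins):
-- Hypothetical privileges standing in for Privileges A, B, and C
GRANT SELECT ON TABLE db1.sch1.t1 TO ROLE role1;   -- Privilege A
GRANT SELECT ON TABLE db1.sch1.t2 TO ROLE role2;   -- Privilege B
GRANT SELECT ON TABLE db1.sch1.t3 TO ROLE role3;   -- Privilege C

-- Build the hierarchy: Role 3 -> Role 2 -> Role 1 -> User 1
GRANT ROLE role3 TO ROLE role2;
GRANT ROLE role2 TO ROLE role1;
GRANT ROLE role1 TO USER user1;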
Database roles and role hierarchies
The following limitations currently apply to database roles:
• If a database role is granted to a share, then no other database roles can be granted to that database role. For example, if database role d1.r1 is granted to a share, then attempting to grant database role d1.r2 to d1.r1 is blocked.
  In addition, if a database role is granted to another database role, the grantee database role cannot be granted to a share.
  Database roles that are not granted to a share can be granted to other database roles, as well as account roles.
• Account roles cannot be granted to database roles in a role hierarchy.
Enforcement model with primary role and secondary roles
Every active user session has a “current role,” also referred to as a primary role. When a session is
initiated (e.g. a user connects via JDBC/ODBC or logs in to the Snowflake web interface), the
current role is determined based on the following criteria:
1. If a role was specified as part of the connection and that role is a role that has already been
granted to the connecting user, the specified role becomes the current role.
2. If no role was specified and a default role has been set for the connecting user, that role
becomes the current role.
3. If no role was specified and a default role has not been set for the connecting user, the system
role PUBLIC is used.
In addition, a set of secondary roles can be activated in a user session. A user can perform SQL
actions on objects in a session using the aggregate privileges granted to the primary and secondary
roles. The roles must be granted to the user before they can be activated in a session. Note that while
a session must have exactly one active primary role at a time, one can activate any number of
secondary roles at the same time.
Note
A database role can neither be a primary nor a secondary role. To assume the privileges granted to a
database role, grant the database role to an account role. Only account roles can be activated in a
session.
Authorization to execute CREATE <object> statements comes from the primary role only. When an
object is created, its ownership is set to the currently active primary role. However, for any other
SQL action, any permission granted to any active primary or secondary role can be used to authorize
the action. For example, if any role in a secondary role hierarchy owns an object (i.e. has the
OWNERSHIP privilege on the object), the secondary roles would authorize performing any DDL
actions on the object. Both the primary role as well as all secondary roles inherit privileges from any
roles lower in their role hierarchies.
For organizations whose security model includes a large number of roles, each with a fine granularity
of authorization via permissions, the use of secondary roles simplifies role management. All roles
that were granted to a user can be activated in a session. Secondary roles are particularly useful for
SQL operations such as cross-database joins that would otherwise require creating a parent role of
the roles that have permissions to access the objects in each database.
During the course of a session, the user can use the USE ROLE or USE SECONDARY
ROLES command to change the current primary or secondary roles, respectively. The user can use
the CURRENT_SECONDARY_ROLES function to show all active secondary roles for the current
session.
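For example, a minimal sketch of switching roles within a session (role names are hypothetical):
-- Make analyst_role the primary role for the current session
USE ROLE analyst_role;

-- Activate every other role granted to the user as secondary roles
USE SECONDARY ROLES ALL;

-- Show the secondary roles that are currently active
SELECT CURRENT_SECONDARY_ROLES();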
When you create an object that requires one or more privileges to use, only the primary role and
those roles that it directly or indirectly inherits are considered when searching for the grants of those
privileges.
For any other statement that requires one or more privileges (e.g. querying a table requires the
SELECT privilege on a table with the USAGE privilege on the database and schema), the primary
role, the secondary roles, and any other roles that are inherited are considered when searching for the
grants of those privileges.
Note
There is no concept of a “super-user” or “super-role” in Snowflake that can bypass authorization
checks. All access requires appropriate access privileges.
Understanding end-to-end encryption in Snowflake
This topic provides concepts related to end-to-end encryption in Snowflake.
Overview
End-to-end encryption (E2EE) is a method to secure data that prevents third parties from reading data while at rest or in transit to and from Snowflake, and minimizes the attack surface.
The figure illustrates the E2EE system in Snowflake:
The E2EE system includes the following components:
• The Snowflake customer in a corporate network.
• A customer-provided or Snowflake-provided data file staging area.
• Snowflake, which runs in a secure virtual private cloud (VPC) or virtual network (VNet), depending on the cloud platform.
Snowflake supports both internal (Snowflake-provided) and external (customer-provided) stages for
data files. Snowflake provides internal stages where you can upload and group your data files before
loading the data into tables (image B).
Customer-provided stages are containers or directories in a supported cloud storage service (e.g.
Amazon S3) that you control and manage (image A). Customer-provided stages are an attractive
option for customers that already have data stored in a cloud storage service that they want to copy
into Snowflake.
Per the figure in this section, the flow of E2EE in Snowflake is as follows:
1. A user uploads one or more data files to a stage.
If the stage is an external stage (Image A), the user may optionally encrypt the data files
using client-side encryption (see Client-Side Encryption for more information). We
recommend client-side encryption for data files in external stages; but if the data is not
encrypted, Snowflake immediately encrypts the data when it is loaded into a table.
If the stage is an internal (i.e. Snowflake) stage (Image B) data files are automatically
encrypted by the Snowflake client on the user’s local machine prior to being transmitted to
the internal stage, in addition to being encrypted after they are loaded into the stage.
2. The user loads the data from the stage into a table.
The data is transformed into Snowflake’s proprietary file format and stored in a cloud storage
container. In Snowflake, all data at rest is always encrypted, and data in transit is protected with TLS. Snowflake also decrypts data when data is transformed or operated on in a table, and then re-encrypts the data when the transformations and operations are complete.
3. The user can unload query results into an external or internal stage.
Results are optionally encrypted using client-side encryption when unloaded into a customer-
managed stage, and are automatically encrypted when unloaded to a Snowflake-provided
stage.
4. The user downloads data files from the stage and decrypts the data on the client side.
Client-side encryption
Client-side encryption means that a client encrypts data before copying it into a cloud storage staging
area. Client-side encryption provides a secure system for managing data in cloud storage.
Client-side encryption follows a specific protocol defined by the cloud storage service. The service
SDK and third-party tools implement this protocol.
The following image summarizes client-side encryption:
The client-side encryption protocol works as follows:
1. The customer creates a secret master key, which is shared with Snowflake.
2. The client, which is provided by the cloud storage service, generates a random encryption key
and encrypts the file before uploading it into cloud storage. The random encryption key, in
turn, is encrypted with the customer’s master key.
3. Both the encrypted file and the encrypted random key are uploaded to the cloud storage
service. The encrypted random key is stored with the file’s metadata.
When downloading data, the client downloads both the encrypted file and the encrypted random key.
The client decrypts the encrypted random key using the customer’s master key.
Next, the client decrypts the encrypted file using the now decrypted random key. This encryption and
decryption happens on the client side.
At no time does the cloud storage service or any other third party (such as an ISP) see the data in the
clear. Customers may upload client-side encrypted data using any client or tool that supports client-
side encryption.
Ingesting client-side encrypted data into Snowflake
Snowflake supports the client-side encryption protocol using a client-side master key when reading
or writing data between a cloud storage service stage and Snowflake, as shown in the following
image:
To load client-side encrypted data from a customer-provided stage, you create a named stage object
with an additional MASTER_KEY parameter using a CREATE STAGE command, and then load data
from the stage into your Snowflake tables. The MASTER_KEY parameter requires either a 128-bit or
256-bit Advanced Encryption Standard (AES) key encoded in Base64.
A named stage object stores settings related to a stage and provides a convenient way to load or
unload data between Snowflake and a specific container in cloud storage. The following SQL snippet
creates an example Amazon S3 stage object in Snowflake that supports client-side encryption:
-- create encrypted stage
create stage encrypted_customer_stage
url='s3://customer-bucket/data/'
credentials=(AWS_KEY_ID='ABCDEFGH' AWS_SECRET_KEY='12345678')
encryption=(MASTER_KEY='eSxX...=');
The truncated master key specified in this SQL command is the Base64-encoded string of the
customer’s secret master key. As with all other credentials, this master key is transmitted over
Transport Layer Security (HTTPS) to Snowflake and is stored encrypted in metadata storage. Only
the customer and the query-processing components of Snowflake are exposed to the master key.
A benefit of named stage objects is that they can be granted to other users within a Snowflake
account without revealing access credentials or client-side encryption keys to those users. Users with
the appropriate access control privileges simply reference the named stage object when loading or
unloading data.
The following SQL commands create a table named users and copy data from the encrypted stage
into the users table:
-- create table and ingest data from stage
CREATE TABLE users (id bigint, name varchar(500), purchases int);
COPY INTO users FROM @encrypted_customer_stage/users;
The data is now ready to be analyzed using Snowflake.
You can also unload data into the stage. The following SQL command creates a most_purchases table
and populates it with the results of a query that finds the top 10 users with the most purchases, and
then unloads the table data into the stage:
-- find top 10 users by purchases, unload into stage
CREATE TABLE most_purchases as select * FROM users ORDER BY purchases desc LIMIT 10;
COPY INTO @encrypted_customer_stage/most_purchases FROM most_purchases;
Snowflake encrypts the data files copied into the customer’s stage using the master key stored in the
stage object. Snowflake adheres to the client-side encryption protocol for the cloud storage service.
A customer can download the encrypted data files using any client or tool that supports client-side
encryption.
Next Topics:
• Understanding Encryption Key Management in Snowflake
Understanding Encryption Key Management
in Snowflake
This topic provides concepts related to Snowflake-managed keys, customer-managed
keys, and Tri-Secret Secure.
Overview
Snowflake manages data encryption keys to protect customer data. This management occurs
automatically without any need for customer intervention.
Customers can use the key management service in the cloud platform that hosts their Snowflake
account to maintain their own additional encryption key.
When enabled, the combination of a Snowflake-maintained key and a customer-managed key creates
a composite master key to protect the Snowflake data. This is called Tri-Secret Secure.
Snowflake-managed keys
All Snowflake customer data is encrypted by default using the latest security standards and best
practices. Snowflake uses strong AES 256-bit encryption with a hierarchical key model rooted in a
hardware security module.
Keys are automatically rotated on a regular basis by the Snowflake service, and data can be
automatically re-encrypted (“rekeyed”) on a regular basis. Data encryption and key management is
entirely transparent and requires no configuration or management.
Hierarchical key model
A hierarchical key model provides a framework for Snowflake’s encryption key management. The
hierarchy is composed of several layers of keys in which each higher layer of keys (parent keys)
encrypts the layer below (child keys). In security terminology, a parent key encrypting all child keys
is known as “wrapping”.
Snowflake’s hierarchical key model consists of four levels of keys:
• The root key
• Account master keys
• Table master keys
• File keys
Each customer account has a separate key hierarchy of account-level, table-level, and file-level keys,
as shown in the following image:
In a multi-tenant cloud service like Snowflake, the hierarchical key model isolates every account
with the use of separate account master keys. In addition to the access control model, which
separates storage of customer data, the hierarchical key model provides another layer of account
isolation.
A hierarchical key model reduces the scope of each layer of keys. For example, a table master key
encrypts a single table. A file key encrypts a single file. A hierarchical key model constrains the
amount of data each key protects and the duration of time for which it is usable.
Encryption key rotation
All Snowflake-managed keys are automatically rotated by Snowflake when they are more than 30
days old. Active keys are retired, and new keys are created. When Snowflake determines the retired
key is no longer needed, the key is automatically destroyed. When active, a key is used to encrypt
data and is available for usage by the customer. When retired, the key is used solely to decrypt data
and is only available for accessing the data.
When wrapping child keys in the key hierarchy, or when inserting data into a table, only the current,
active key is used to encrypt data. When a key is destroyed, it is not used for either encryption or
decryption. Regular key rotation limits the life cycle for the keys to a limited period of time.
The following image illustrates key rotation for one table master key (TMK) over a period of three
months:
The TMK rotation works as follows:
• Version 1 of the TMK is active in April. Data inserted into this table in April is protected with TMK v1.
• In May, this TMK is rotated: TMK v1 is retired and a new, completely random key, TMK v2, is created. TMK v1 is now used only to decrypt data from April. New data inserted into the table is encrypted using TMK v2.
• In June, the TMK is rotated again: TMK v2 is retired and a new TMK, v3, is created. TMK v1 is used to decrypt data from April, TMK v2 is used to decrypt data from May, and TMK v3 is used to encrypt and decrypt new data inserted into the table in June.
As stated previously, key rotation limits the duration of time in which a key is actively used to
encrypt data. In conjunction with the hierarchical key model, key rotation further constrains the
amount of data a key version protects. Limiting the lifetime of a key is recommended by the National
Institute of Standards and Technology (NIST) to enhance security.
Periodic rekeying
ENTERPRISE EDITION FEATURE
Periodic rekeying requires Enterprise Edition (or higher). To inquire about upgrading, please
contact Snowflake Support.
This section continues the explanation of the account and table master key lifecycle. Encryption Key
Rotation described key rotation, which replaces active keys with new keys on a periodic basis and
retires the old keys. Periodic data rekeying completes the life cycle.
While key rotation ensures that a key is transferred from its active state to a retired state, rekeying
ensures that a key is transferred from its retired state to being destroyed.
If periodic rekeying is enabled, then when the retired encryption key for a table is older than one
year, Snowflake automatically creates a new encryption key and re-encrypts all data previously
protected by the retired key using the new key. The new key is used to decrypt the table data going
forward.
Note
For Enterprise Edition accounts, users with the ACCOUNTADMIN role (i.e. your account
administrators) can enable rekeying using ALTER ACCOUNT and
the PERIODIC_DATA_REKEYING parameter:
ALTER ACCOUNT SET PERIODIC_DATA_REKEYING = true;
The following image shows periodic rekeying for a TMK for a single table:
Periodic rekeying works as follows:
• In April of the following year, after TMK v1 has been retired for an entire year, it is rekeyed (generation 2) using a fully new random key.
  The data files protected by TMK v1 generation 1 are decrypted and re-encrypted using TMK v1 generation 2. Having no further purpose, TMK v1 generation 1 is destroyed.
• In May, Snowflake performs the same rekeying process on the table data protected by TMK v2.
• And so on.
In this example, the lifecycle of a key is limited to a total duration of one year.
Rekeying constrains the total duration for which a key remains in use, following NIST
recommendations. Furthermore, when rekeying data, Snowflake can increase encryption key sizes
and adopt improved encryption algorithms that may have been standardized since the previous key
generation was created.
Rekeying, therefore, ensures that all customer data, new and old, is encrypted with the latest security
technology.
Snowflake rekeys data files online, in the background, without any impact to currently running
customer workloads. Data that is being rekeyed is always available to you. No service downtime is
necessary to rekey data, and you encounter no performance impact on your workload. This benefit is
a direct result of Snowflake’s architecture of separating storage and compute resources.
Impact of rekeying on Time Travel and Fail-safe
Time Travel and Fail-safe retention periods are not affected by rekeying. Rekeying is transparent to
both features. However, some additional storage charges are associated with rekeying of data in Fail-
safe (see next section).
Impact of rekeying on storage utilization
Snowflake customers are charged for the additional storage needed for Fail-safe protection of data
files that were rekeyed. For these files, 7 days of Fail-safe protection is charged.
That is, for example, the data files with the old key on Amazon S3 are already protected by Fail-safe,
and the data files with the new key on Amazon S3 are also added to Fail-safe, leading to a second
charge, but only for the 7-day period.
Hardware security module
Snowflake relies on cloud-hosted hardware security modules (HSMs) to ensure that key storage and
usage is secure. Each cloud platform has different HSM services and that affects how Snowflake
uses the HSM service on each platform:
 On AWS and Azure, Snowflake uses the HSM to create and store the root key.
 On Google Cloud, the HSM service is made available through the Google Cloud KMS (key
management service) API. Snowflake uses Google Cloud KMS to create and store the root
key in multi-tenant HSM partitions.
For all cloud platforms and all keys in the key hierarchy, a key that is stored in the HSM is used to
unwrap a key in the hierarchy. For example, to decrypt the table master key, the key in the HSM
unwraps the account master key. This process occurs in the HSM. After this process completes, a
software operation decrypts the table master key with the account master key.
The following image shows the relationship between the HSM, the account master keys, table master
keys, and the file keys:
[Image: the HSM root key wrapping the account master keys, table master keys, and file keys]
Customer-managed keys
A customer-managed key is a master encryption key that the customer maintains in the key
management service for the cloud provider that hosts your Snowflake account. The key management
services for each platform are:
 AWS: AWS Key Management Service (KMS)
 Google Cloud: Cloud Key Management Service (Cloud KMS)
 Microsoft Azure: Azure Key Vault
The customer-managed key can then be combined with a Snowflake-managed key to create a
composite master key. When this occurs, Snowflake refers to this as Tri-Secret Secure (in this topic).
You can call these system functions in your Snowflake account to obtain information about your
keys:
 AWS: SYSTEM$GET_CMK_KMS_KEY_POLICY
 Microsoft Azure: SYSTEM$GET_CMK_AKV_CONSENT_URL
 Google Cloud: SYSTEM$GET_GCP_KMS_CMK_GRANT_ACCESS_CMD
Important
Snowflake does not support key rotation for customer-managed keys and does not recommend
implementing an automatic key rotation policy on the customer-managed key.
The reason for this recommendation is that the key rotation can lead to a loss of data if the rotated
key is deleted because Snowflake will not be able to decrypt the data. For more information, see Tri-
Secret Secure (in this topic).
Benefits of customer-managed keys
Benefits of customer-managed keys include:
Control over data access
You have complete control over your master key in the key management service and,
therefore, your data in Snowflake. It is impossible to decrypt data stored in your Snowflake
account without you releasing this key.
Disable access due to a data breach
If you experience a security breach, you can disable access to your key and halt all data
operations running in your Snowflake account.
Ownership of the data lifecycle
Using customer-managed keys, you can align your data protection requirements with your
business processes. Explicit control over your key provides safeguards throughout the entire
data lifecycle, from creation to deletion.
Important requirements for customer-managed keys
Customer-managed keys provide significant security benefits, but they also have crucial,
fundamental requirements that you must continuously follow to safeguard your master key:
Confidentiality
You must keep your key secure and confidential at all times.
Integrity
You must ensure your key is protected against improper modification or deletion.
Availability
To execute queries and access your data, you must ensure your key is continuously available
to Snowflake.
By design, an invalid or unavailable key will result in a disruption to your Snowflake data operations
until a valid key is made available again to Snowflake.
However, Snowflake is designed to handle temporary availability issues (up to 10 minutes) caused
by common issues, such as network communication failures. After 10 minutes, if the key remains
unavailable, all data operations in your Snowflake account will cease completely. Once access to the
key is restored, data operations can be started again.
Failure to comply with these requirements can significantly jeopardize the integrity of your data,
ranging from your data being temporarily inaccessible to it being permanently disabled. In addition,
Snowflake is not responsible for third-party issues or for administrative mishaps caused by
your organization in the course of maintaining your key.
For example, if an issue with the key management service results in your key becoming unavailable,
your data operations will be impacted. These issues must be resolved between you and the Support
team for the key management service. Similarly, if your key is tampered with or destroyed, all
existing data in your Snowflake account will become unreadable until the key is restored.
Tri-Secret Secure
BUSINESS CRITICAL FEATURE
Requires Business Critical Edition (or higher). To inquire about upgrading, please contact Snowflake
Support.
Tri-Secret Secure is the combination of a Snowflake-maintained key and a customer-managed key in
the cloud provider platform that hosts your Snowflake account to create a composite master key to
protect your Snowflake data. The composite master key acts as an account master key and wraps all
of the keys in the hierarchy; however, the composite master key never encrypts raw data.
If the customer-managed key in the composite master key hierarchy is revoked, your data can no
longer be decrypted by Snowflake, providing a level of security and control above Snowflake’s
standard encryption. This dual-key encryption model, together with Snowflake’s built-in user
authentication, enables the three levels of data protection offered by Tri-Secret Secure.
Attention
Before engaging with Snowflake to enable Tri-Secret Secure for your account, you should carefully
consider your responsibility for safeguarding your key as mentioned in the Customer-Managed
Keys section (in this topic). If you have any questions or concerns, we are more than happy to
discuss them with you.
Note that Snowflake also bears the same responsibility for the keys that we maintain. As with all
security-related aspects of our service, we treat this responsibility with the utmost care and vigilance.
All of our keys are maintained under strict policies that have enabled us to earn the highest security
accreditations, including SOC 2 Type II, PCI-DSS, HIPAA and HITRUST CSF.
Note
Image registry currently does not support Tri-Secret Secure.
Enabling Tri-Secret Secure
To enable Snowflake Tri-Secret Secure for your Business Critical (or higher) account, please
contact Snowflake Support.
Optimizing performance in Snowflake
The following topics help guide efforts to improve the performance of Snowflake.
Exploring execution times
Gain insights into the historical performance of queries using the web interface or
by writing queries against data in the ACCOUNT_USAGE schema.
Optimizing warehouses for performance
Learn about strategies to fine-tune computing power in order to improve the
performance of a query or set of queries running on a warehouse, including
enabling the Query Acceleration Service.
Optimizing storage for performance
Learn how storing similar data together, creating optimized data structures, and
defining specialized data sets can improve the performance of queries.
Helpful when choosing between Automatic Clustering, Search Optimization
Service, and materialized views.
Exploring execution times
This topic explains how to examine the past performance of queries and tasks. This
information helps identify candidates for performance optimizations and allows you to
see whether your optimization strategies are having the desired effect.
You can explore historical performance using Snowsight or by writing queries against
views in the ACCOUNT_USAGE schema. A user without access to the
ACCOUNT_USAGE schema can query similar data using the Information Schema.
View execution times and load
You can use Snowsight to gain visual insights into the performance of queries and tasks as well as
the load of a warehouse.
Queries
1. Sign in to Snowsight.
2. Select Monitoring » Query History.
3. Use the Duration column to understand how long it took a query to execute. You can
sort the column to find the queries that ran the longest.
4. If you want to focus on a particular user’s queries, use the User drop-down to select
the user.
5. If you want to focus on the queries that ran on a particular warehouse,
select Filters » Warehouse, and then select the warehouse.
Warehouses
1. Sign in to Snowsight.
2. Switch to a role that has privileges for the warehouse.
3. Select Admin » Warehouses.
4. Select a warehouse.
5. Use the Warehouse Activity chart to visualize the load of the warehouse, including
whether queries were queued.
Tasks
1. Sign in to Snowsight.
2. Select Monitoring » Task History to view how long it took to execute a task’s SQL
code.
Drill down into execution times
The Query Profile allows you to examine which parts of a query are taking the longest to execute. It
includes a Most Expensive Nodes pane that identifies the operator nodes that are taking the longest
to execute. You can drill down even further by viewing what percentage of a node’s execution time
was spent in a particular category of query processing.
To access the Query Profile for a query:
1. Sign in to Snowsight.
2. Select Monitoring » Query History.
3. Select the query ID of a query.
4. Select the Query Profile tab.
Tip
You can programmatically access the performance statistics of the Query Profile by executing
the GET_QUERY_OPERATOR_STATS function.
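For example, to retrieve operator-level statistics for the most recent query executed in your session:
SELECT * FROM TABLE(GET_QUERY_OPERATOR_STATS(LAST_QUERY_ID()));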
Write queries to explore execution times
The Account Usage schema contains views related to the execution times of queries and tasks. It also
contains a view related to the load of a warehouse as it executes queries. You can write queries
against these views to drill down into performance data and create custom reports and dashboards.
By default, only the account administrator (i.e. user with the ACCOUNTADMIN role) can access
views in the ACCOUNT_USAGE schema. To allow other users to access these views, refer
to Enabling the SNOWFLAKE Database Usage for Other Roles.
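For example, access to these views can be granted to another role by granting the imported privileges on the shared SNOWFLAKE database (the role name below is illustrative):
GRANT IMPORTED PRIVILEGES ON DATABASE snowflake TO ROLE analyst_role;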
Users without access to the ACCOUNT_USAGE schema (e.g. a user who ran a query or a
warehouse administrator) can still return recent execution times and other query metadata using
the QUERY_HISTORY table functions of the Information Schema.
Be aware that the ACCOUNT_USAGE views are not updated immediately after running a query or
task. If you want to check the execution time of a query right after running it, use Snowsight to view
its performance. The Information Schema is also updated more quickly than the ACCOUNT_USAGE
views.
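For example, a sketch of an Information Schema query that returns recent queries ordered by execution time (run in the context of a database you have access to):
SELECT query_id, query_text, total_elapsed_time/1000 AS seconds
FROM TABLE(information_schema.query_history(RESULT_LIMIT => 100))
ORDER BY total_elapsed_time DESC;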
ACCOUNT_USAGE View | Description | Latency
QUERY_HISTORY | Used to analyze the Snowflake query history by various dimensions (time range, execution time, session, user, warehouse, etc.) within the last 365 days (1 year). | Up to 45 minutes
WAREHOUSE_LOAD_HISTORY | Used to analyze the workload on a warehouse within a specified date range. | Up to 3 hours
TASK_HISTORY | Used to retrieve the history of task usage within the last 365 days (1 year). | Up to 45 minutes
Example queries
The following queries against the ACCOUNT_USAGE schema provide insight into the past
performance of queries, warehouses, and tasks. Click the name of a query to see the full SQL
example.
Query Performance
 Query: Top n longest-running queries
 Query: Queries organized by execution time over past month
 Query: Find long running repeated queries
 Query: Track the average performance of a query over time
Warehouse Load
 Query: Total warehouse load
Task Performance
 Query: Longest running tasks
Query performance
Query: Top n longest-running queries
This query provides a listing of the top n (50 in the example below) longest-running queries in the
last day. You can adjust the DATEADD function to focus on a shorter or longer period of time.
Replace my_warehouse with the name of a warehouse.
SELECT query_id,
       ROW_NUMBER() OVER(ORDER BY partitions_scanned DESC) AS query_id_int,
       query_text,
       total_elapsed_time/1000 AS query_execution_time_seconds,
       partitions_scanned,
       partitions_total
FROM snowflake.account_usage.query_history Q
WHERE warehouse_name = 'my_warehouse'
  AND TO_DATE(Q.start_time) > DATEADD(day, -1, TO_DATE(CURRENT_TIMESTAMP()))
  AND total_elapsed_time > 0 --only get queries that actually used compute
  AND error_code IS NULL
  AND partitions_scanned IS NOT NULL
ORDER BY total_elapsed_time DESC
LIMIT 50;
Query: Queries organized by execution time over past month
This query groups queries for a given warehouse by buckets for execution time over the last month.
These trends in query completion time can help inform decisions to resize warehouses or separate
out some queries to another warehouse. Replace MY_WAREHOUSE with the name of a warehouse.
SELECT
  CASE
    WHEN Q.total_elapsed_time <= 60000 THEN 'Less than 60 seconds'
    WHEN Q.total_elapsed_time <= 300000 THEN '60 seconds to 5 minutes'
    WHEN Q.total_elapsed_time <= 1800000 THEN '5 minutes to 30 minutes'
    ELSE 'more than 30 minutes'
  END AS buckets,
  COUNT(query_id) AS number_of_queries
FROM snowflake.account_usage.query_history Q
WHERE TO_DATE(Q.start_time) > DATEADD(month, -1, TO_DATE(CURRENT_TIMESTAMP()))
  AND total_elapsed_time > 0
  AND warehouse_name = 'my_warehouse'
GROUP BY 1;
Query: Find long running repeated queries
You can use the query hash (the value of the query_hash column in the ACCOUNT_USAGE
QUERY_HISTORY view) to find patterns in query performance that might not be obvious. For
example, although a query might not be excessively expensive during any single execution, a
frequently repeated query could lead to high costs, based on the number of times the query runs.
You can use the query hash to identify the queries that you should focus on optimizing first. For
example, the following query uses the value in the query_hash column to identify the query IDs for the
100 longest-running queries:
SELECT
query_hash,
COUNT(*),
SUM(total_elapsed_time),
ANY_VALUE(query_id)
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE warehouse_name = 'MY_WAREHOUSE'
AND DATE_TRUNC('day', start_time) >= CURRENT_DATE() - 7
GROUP BY query_hash
ORDER BY SUM(total_elapsed_time) DESC
LIMIT 100;
Query: Track the average performance of a query over time
The following statement computes the daily average total elapsed time for all queries that have a
specific parameterized query hash (cbd58379a88c37ed6cc0ecfebb053b03).
SELECT
DATE_TRUNC('day', start_time),
SUM(total_elapsed_time),
ANY_VALUE(query_id)
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE query_parameterized_hash = 'cbd58379a88c37ed6cc0ecfebb053b03'
AND DATE_TRUNC('day', start_time) >= CURRENT_DATE() - 30
GROUP BY DATE_TRUNC('day', start_time);
Warehouse load
Query: Total warehouse load
This query provides insight into the total load of a warehouse for executed and queued queries.
These load values represent the ratio of the total execution time (in seconds) of all queries in a
specific state in an interval by the total time (in seconds) for that interval.
For example, if 276 seconds was the total time for 4 queries in a 5 minute (300 second) interval, then
the query load value is 276 / 300 = 0.92.
SELECT TO_DATE(start_time) AS date,
warehouse_name,
SUM(avg_running) AS sum_running,
SUM(avg_queued_load) AS sum_queued
FROM snowflake.account_usage.warehouse_load_history
WHERE TO_DATE(start_time) >= DATEADD(month,-1,CURRENT_TIMESTAMP())
GROUP BY 1,2
HAVING SUM(avg_queued_load) >0;
Task performance
Query: Longest running tasks
This query lists the longest running tasks in the last day, which can indicate an opportunity to
optimize the SQL being executed by the task.
SELECT DATEDIFF(seconds, query_start_time, completed_time) AS duration_seconds, *
FROM snowflake.account_usage.task_history
WHERE state = 'SUCCEEDED'
AND query_start_time >= DATEADD (day, -1, CURRENT_TIMESTAMP())
ORDER BY duration_seconds DESC;
Optimizing warehouses for performance
In the Snowflake architecture, virtual warehouses provide the computing power that is required to
execute queries. Fine-tuning the compute resources provided by a warehouse can improve the
performance of a query or set of queries.
A warehouse owner or administrator can try the following warehouse-related strategies as they
attempt to improve the performance of one or more queries. As they adjust a warehouse based on one
of these strategies, they can test the change by re-running the query and checking its execution time.
Warehouse-related strategies are just one way to boost the performance of queries. For performance
strategies involving how data is stored, refer to Optimizing storage for performance.
Strategy | Description
Reduce queues | Minimizing queuing can improve performance because the time between submitting a query and getting its results is longer when the query must wait in a queue before starting.
Resolve memory spillage | Adjusting the available memory of a warehouse can improve performance because a query runs substantially slower when a warehouse runs out of memory, which results in bytes “spilling” onto storage.
Increase warehouse size | The larger a warehouse, the more compute resources are available to execute a query or set of queries.
Try query acceleration | The query acceleration service offloads portions of query processing to serverless compute resources, which speeds up the processing of a query while reducing its demand on the warehouse’s compute resources.
Optimize the warehouse cache | Query performance improves if a query can read from the warehouse’s cache instead of from tables.
Limit concurrently running queries | Limiting the number of queries that are running concurrently in a warehouse can improve performance because there are fewer queries putting demands on the warehouse’s resources.
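Several of these strategies correspond to warehouse properties that can be adjusted with ALTER WAREHOUSE. The following sketch assumes a warehouse named my_wh; after each change, re-run the affected queries to verify the effect:
-- Increase warehouse size to provide more compute resources
ALTER WAREHOUSE my_wh SET WAREHOUSE_SIZE = 'LARGE';

-- Enable the Query Acceleration Service for eligible queries
ALTER WAREHOUSE my_wh SET ENABLE_QUERY_ACCELERATION = TRUE;

-- Limit the number of concurrently running queries
ALTER WAREHOUSE my_wh SET MAX_CONCURRENCY_LEVEL = 4;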
Tip
Optimizing a warehouse for query performance is more straightforward when the warehouse runs
similar workloads. For example, if a warehouse runs significantly different queries, the cost of a
performance enhancement might be wasted on a query that does not benefit from the optimization.
For general guidelines about distributing workloads to your organization’s warehouses, see the
Analyzing Your Workloads section of the Managing Snowflake’s Compute Resources (Snowflake
blog).
Optimizing storage for performance
This topic discusses storage optimizations that can improve query performance, such as storing
similar data together, creating optimized data structures, and defining specialized data sets.
Snowflake provides three of these storage strategies: automatic clustering, search optimization, and
materialized views.
In general, these storage strategies do not substantially improve the performance of queries that
already execute in a second or faster.
The strategies discussed in this topic are just one way to boost the performance of queries. For
strategies related to the computing resources used to execute a query, refer to Optimizing warehouses
for performance.
Introduction to storage strategies
Automatic Clustering
Snowflake stores a table’s data in micro-partitions. Among these micro-partitions, Snowflake
organizes (i.e. clusters) data based on dimensions of the data. If a query filters, joins, or aggregates
along those dimensions, fewer micro-partitions must be scanned to return results, which speeds up
the query considerably.
You can set a cluster key to change the default organization of the micro-partitions so data is
clustered around specific dimensions (i.e. columns). Choosing a cluster key improves the
performance of queries that filter, join, or aggregate by the columns defined in the cluster key.
Snowflake enables Automatic Clustering to maintain the clustering of the table as soon as you define
a cluster key. Once enabled, Automatic Clustering updates micro-partitions as new data is added to
the table. Learn More
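For example, a minimal sketch of defining a cluster key on an existing table (the table and column names are illustrative):
ALTER TABLE lineitem CLUSTER BY (shipdate);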
Search Optimization Service
The Search Optimization Service improves the performance of point lookup queries (i.e. “needle in a
haystack searches”) that return a small number of rows from a table using highly selective filters.
The Search Optimization Service is ideal when it is critical to have low-latency point lookup queries
(e.g. investigative log searches, threat or anomaly detection, and critical dashboards with selective
filters).
The Search Optimization Service reduces the latency of point lookup queries by building a persistent
data structure that is optimized for a particular type of search.
You can enable the Search Optimization Service for an entire table or for specific columns. As long
as they are selective enough, equality searches, substring searches, and geo searches against those
columns can be sped up significantly.
The Search Optimization Service supports both structured and semi-structured data (see supported
data types).
The Search Optimization Service requires Snowflake Enterprise Edition or higher. Learn More
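For example, a sketch of enabling search optimization for an entire table or only for equality searches on a single column (the names are illustrative):
-- Entire table
ALTER TABLE logs ADD SEARCH OPTIMIZATION;

-- Only equality lookups on one column
ALTER TABLE logs ADD SEARCH OPTIMIZATION ON EQUALITY(sender_ip);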
Materialized views
A materialized view is a pre-computed data set derived from a SELECT statement that is stored for
later use. Because the data is pre-computed, querying a materialized view is faster than executing a
query against the base table on which the view is defined. For example, if you
specify SELECT SUM(column1) when creating the materialized view, then a query that
returns SUM(column1) from the view executes faster because column1 has already been aggregated.
Materialized views are designed to improve query performance for workloads composed of common,
repeated query patterns that return a small number of rows and/or columns relative to the base table.
A materialized view cannot be based on more than one table.
Materialized views require Snowflake Enterprise Edition or higher. Learn More
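For example, a sketch of a materialized view that pre-aggregates a column (the base table name orders is assumed for illustration):
CREATE MATERIALIZED VIEW mv_view1 AS
  SELECT orderdate, SUM(totalprice) AS sum_totalprice
  FROM orders
  GROUP BY orderdate;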
Choosing an optimization strategy
Different types of queries benefit from different storage strategies. You can use the following
sections to discover which strategy best fits a workload.
Automatic Clustering is the broadest option that can benefit a range of queries that access the same
columns of a table. An administrator often picks the most important queries based on frequency and
latency requirements, and then chooses a cluster key that maximizes the performance of those
queries. Automatic Clustering makes sense when many queries filter, join, or aggregate the same few
columns.
The Search Optimization Service and materialized views have a narrower scope. When specific
queries access a well-defined subset of a table’s data, the administrator can use the characteristics of
the query to decide whether using the Search Optimization Service or a materialized view might
improve performance. For example, administrators could identify important point lookup queries and
implement the Search Optimization Service for a table or column. Likewise, the administrator could
optimize specific query patterns by creating a materialized view.
You can implement more than one of these strategies for a table, and an individual query with
multiple filters could potentially benefit from both Automatic Clustering and the Search
Optimization Service. However, enabling the Search Optimization Service or creating a materialized
view on a clustered table can be more expensive. To learn why this increases compute costs, refer
to Ongoing Costs (in this topic).
If more than one strategy could potentially improve the performance of a particular query, you might
want to start with Automatic Clustering or the Search Optimization Service because other queries
with similar access patterns could also be improved.
Differentiating considerations
The following is not an exhaustive comparison of the storage strategies, but rather provides the most
important considerations when differentiating between them.
Automatic Clustering
 Biggest performance boost comes from a WHERE clause that filters on a column of
the cluster key, but it can also improve the performance of other clauses and functions
that act upon that same column (e.g. joins and aggregations).
 Ideal for range queries or queries with an inequality filter. Also improves an equality
filter, but the Search Optimization Service is usually faster for point lookup queries.
 Available in Standard Edition of Snowflake.
 There can be only one cluster key. [1] If different queries against a table act upon
different columns, consider using the Search Optimization Service or a materialized
view instead.
Search Optimization Service
 Improves point lookup queries that return a small number of rows. If the query returns
more than a few records, consider Automatic Clustering instead.
 Includes support for point lookup queries that:
 Match substrings or regular expressions using predicates such as LIKE and
RLIKE.
 Search for specific fields in VARIANT, ARRAY, or OBJECT columns.
 Use geospatial functions with GEOGRAPHY values.
Materialized view
 Improves intensive and frequent calculations such as aggregation and analyzing semi-
structured data (not just filtering).
 Usually focused on a specific query/subquery calculation.
 Improves queries against external tables.
[1]
If there is an important reason to define multiple cluster keys, you could create multiple
materialized views, each with its own cluster key.
Prototypical queries
The following examples are intended to highlight which type of query typically runs faster with a
particular storage strategy.
Prototypical Query for Clustering
Automatic Clustering provides a performance boost for range queries with large table scans.
For example, the following query will execute faster if the shipdate column is the table’s
cluster key because the WHERE clause scans a lot of data.
SELECT
SUM(quantity) AS sum_qty,
SUM(extendedprice) AS sum_base_price,
AVG(quantity) AS avg_qty,
AVG(extendedprice) AS avg_price,
COUNT(*) AS count_order
FROM lineitem
WHERE shipdate >= DATEADD(day, -90, TO_DATE('2023-01-01'));
For an additional example of a query that might run faster if the table was clustered, refer
to Benefits of Defining Clustering Keys (for Very Large Tables).
Prototypical Query for Search Optimization
The Search Optimization Service can provide a performance boost for point lookup
queries that scan a large table to return a small subset of records. For example, the following
query will execute faster with the Search Optimization Service if the sender_ip column has a
large number of distinct values.
SELECT error_message, receiver_ip
FROM logs
WHERE sender_ip IN ('198.2.2.1', '198.2.2.2');
To review other queries that might run faster with the Search Optimization Service, refer to
the following examples:
 Equality operators
 Geospatial functions
 Substring and Regular Expressions
 Fields in VARIANT Columns
Prototypical Query for Materialized View
A materialized view can provide a performance boost for queries that access a small subset of
data using expensive operations like aggregation. As an example, suppose that an
administrator aggregated the totalprice column when creating a materialized view mv_view1.
The following query against the materialized view will execute faster than it would against
the base table.
SELECT
orderdate,
SUM(totalprice)
FROM mv_view1
GROUP BY 1;
For more use cases where materialized views can speed up queries, refer to Examples of Use
Cases For Materialized Views.
Implementation and cost considerations
This section discusses cost considerations of using a storage strategy to improve query performance,
along with implementation considerations as you balance cost and performance.
Initial investment
Implementing a storage strategy can require a bigger time commitment and upfront financial
investment than other types of performance optimizations (e.g. re-writing SQL statements
or optimizing the warehouse running the query), but the performance improvements can be
significant.
Snowflake uses serverless compute resources to implement each storage strategy, which consumes
credits before you can test how well the optimization improves performance. In addition, it can take
Snowflake a significant amount of time to fully implement Automatic Clustering and the Search
Optimization Service (e.g. a week for a very large table).
The Search Optimization Service and materialized views also require the Enterprise Edition or
higher, which increases the price of a credit.
Ongoing cost
Storage strategies incur both compute and storage costs.
Compute Costs
Snowflake uses serverless compute resources to maintain storage optimizations as new data is
added to a table. The more changes to a table, the higher the maintenance costs. If a table is
constantly updated, the cost of maintaining a storage optimization might be prohibitive.
The cost of maintaining materialized views or the Search Optimization Service can be
significant when Automatic Clustering is enabled for the underlying table. With Automatic
Clustering, Snowflake is constantly reclustering its micro-partitions around the dimensions of
the cluster key. Every time the base table is reclustered, Snowflake must use serverless
compute resources to update the storage used by materialized views and the Search
Optimization Service. As a result, Automatic Clustering activities on the base table can
trigger maintenance costs for materialized views and the Search Optimization Service beyond
the cost of the DML commands on the base table.
Storage Costs
Automatic Clustering
Unlike the Search Optimization Service and materialized views, Automatic Clustering
reorganizes existing data rather than creating additional storage. However, reclustering can
incur additional storage costs if it increases the size of Fail-safe storage. For details, refer
to Credit and Storage Impact of Reclustering.
Search Optimization / Materialized Views
Materialized views and the Search Optimization Service incur the cost of additional storage,
which is billed at the standard rate.
Estimating costs
Search Optimization Service
You can run the SYSTEM$ESTIMATE_SEARCH_OPTIMIZATION_COSTS function to
help estimate the cost of adding the Search Optimization Service to a column or entire table.
The estimated costs are proportional to the number of columns that will be enabled and how
much the table has recently changed.
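For example, to estimate the cost of adding search optimization to a specific table (the table name is illustrative):
SELECT SYSTEM$ESTIMATE_SEARCH_OPTIMIZATION_COSTS('my_db.my_schema.logs');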
Implementation strategy
Because the compute costs and storage costs of a storage strategy can be significant, you might want
to start small and carefully track the initial and ongoing costs before committing to a more extensive
implementation. For example, you might choose a cluster key for just one or two tables, and then
assess the cost before choosing a key for other tables.
When tracking the ongoing cost associated with a storage strategy, remember that virtual warehouses
consume credits only during the time they are running a query, so a faster query costs less to run.
Snowflake recommends carefully reporting on the cost of running a query before the storage
optimization and comparing it to the cost of running the same query after the storage optimization so
it can be factored into the cost assessment.
Aggregation policies
ENTERPRISE EDITION FEATURE
This feature requires Enterprise Edition (or higher). To inquire about upgrading, please
contact Snowflake Support.
PREVIEW FEATURE— OPEN
Available to all accounts that are Enterprise Edition (or higher).
To inquire about upgrading, please contact Snowflake Support.
An aggregation policy is a schema-level object that controls what type of query can access data from
a table or view. When an aggregation policy is applied to a table, queries against that table must
aggregate data into groups of a minimum size in order to return results, thereby preventing a query
from returning information from an individual record. A table or view with an aggregation policy
assigned to it is said to be aggregation-constrained.
Overview
A core feature of Snowflake is the ability to share data sets with other entities. Aggregation policies
allow a provider (data owner) to exercise control over what can be done with their data even after it
is shared with a consumer. Specifically, the provider can require a consumer of a table to aggregate
the data rather than retrieve individual records.
When creating an aggregation policy, the provider’s policy administrator specifies a minimum group
size (i.e. the number of rows that must be aggregated together into a group). The larger the minimum
group size, the less likely it is that a consumer could use the query results to deduce the contents of a
single record.
Once the aggregation policy is applied to a table or view, a query against it must conform to two
requirements:
 The query must aggregate the data. If the query uses an aggregation function, it must be one
of the allowed aggregation functions.
 Each group created by the query must include the aggregate of at least X records, where X is
the minimum group size of the aggregation policy.
If the query returns a group that contains fewer records than the minimum group size of the policy,
then Snowflake combines those groups into a remainder group. Snowflake applies the aggregation
function to the appropriate column to return a value for the remainder group. However, because that
value is calculated from rows that belong to more than one group, the value of the GROUP BY key
column is NULL. For example, if the query includes the clause GROUP BY state, then the value
of state in the remainder group is NULL.
A query that does not return enough results to populate a remainder group still works, but returns a
NULL value in every field of the results.
Limitations
For this preview:
 If the query uses an explicit grouping construct, it must be a GROUP BY clause. The query
cannot use related constructs like GROUP BY ROLLUP, GROUP BY CUBE, or GROUP
BY GROUPING SETS.
 Most set operators are not allowed when one of the queries acts on an aggregation-
constrained table. As an exception, UNION ALL is supported, but each result group must
satisfy the minimum group size of the aggregation-constrained tables being queried
(see Query requirements for details).
 If a column of an aggregation-constrained table is protected by a projection policy, a query
against that table cannot use the column as an argument of the COUNT function.
 Recursive CTEs are not allowed in queries against an aggregation-constrained table or view.
 Window functions are not allowed in queries against an aggregation-constrained table or
view.
 A query against an aggregation-constrained table cannot use a correlated subquery or lateral
join when there are references to or from the portion of the query that meets the requirements
of the aggregation policy. The following examples illustrate the types of queries that are
prohibited.
Example 1
Assuming protected_table is aggregation-constrained, the following query is not allowed
because the portion of the query that aggregates data references another part of the query
outside of the subquery:
SELECT c1, c2
FROM open_table
WHERE c1 = (SELECT x FROM protected_table WHERE y = open_table.c2);
Example 2
Assuming protected_table is aggregation-constrained, the following query is not allowed
because the subquery references the part of the query that aggregates data, which is outside of
the subquery:
SELECT
SUM(SELECT COUNT(*) FROM open_table ot WHERE pt.id = ot.id)
FROM protected_table pt;
Considerations
Consider the following when using aggregation policies to protect sensitive data:
 Aggregation policies protect data for an individual record, not an entity. If a data set contains
multiple records belonging to the same entity, an aggregation policy only protects the privacy
of a specific record pertaining to that entity, not the entire entity.
 While aggregation policies limit access to individual records, they do not guarantee a
malicious actor could not use deliberate queries to obtain potentially sensitive data from an
aggregation-constrained table. With enough query attempts, a malicious actor could
potentially work around the aggregation requirements to ascertain a value from an individual
row. Aggregation policies are best suited for use with partners and customers with whom you
have an existing level of trust. In addition, providers should be vigilant about potential
misuses of their data (for example, reviewing the access history for their listings).
Create an aggregation policy
The syntax for creating an aggregation policy is:
CREATE [ OR REPLACE ] AGGREGATION POLICY <name>
AS () RETURNS AGGREGATION_CONSTRAINT -> <body>
[ COMMENT = '<string_literal>' ];
Where:
 name specifies the name of the policy.
 AS () RETURNS AGGREGATION_CONSTRAINT is the signature and return type of the policy.
The signature does not accept any arguments and the return type is
AGGREGATION_CONSTRAINT, which is an internal data type. All aggregation policies
have the same signature and return type.
 body is a SQL expression that determines the restrictions of an aggregation policy.
Calling internal functions from the body
The body of an aggregation policy uses two internal functions to define the constraints of the
policy: NO_AGGREGATION_CONSTRAINT and AGGREGATION_CONSTRAINT. When the
conditions of the body call one of these functions, the return value from the function determines how
queries against the aggregation-constrained table or view must be formulated to return results.
NO_AGGREGATION_CONSTRAINT
When the policy body returns a value from this function, queries can return data from an
aggregation-constrained table or view without restriction. For example, the body of the policy
could call this function when an administrator needs to obtain unaggregated results from the
aggregation-constrained table or view.
Call NO_AGGREGATION_CONSTRAINT without an argument.
AGGREGATION_CONSTRAINT
When the policy body returns a value from this function, queries must aggregate data in order
to return results. Use the MIN_GROUP_SIZE argument to specify how many records must
be included in each aggregation group.
The syntax of the AGGREGATION_CONSTRAINT function is:
AGGREGATION_CONSTRAINT ( MIN_GROUP_SIZE => <integer_expression> )
Where integer_expression resolves to the minimum group size of the policy.
There is a difference between passing a 1 and a 0 as the argument to the function. Both
require results to be aggregated.
 Passing a 1 also requires that each aggregation group contain at least one record from
the aggregation-constrained table. So for outer joins, at least one record from the
aggregation-constrained table must match a record from an unprotected table.
 Passing a 0 allows the query to return groups that consist entirely of records from
another table. So for outer joins between an aggregation-constrained table and an
unprotected table, a group could consist of records from the unprotected table that do
not match any records in the aggregation-constrained table.
Note
The body of an aggregation policy cannot reference a user-defined function, table, or view.
Example policies
Fixed minimum group size
The simplest aggregation policy calls the AGGREGATION_CONSTRAINT function
directly and defines a constant minimum group size that is applied to all queries against the
table. For example, the following command creates an aggregation policy with a minimum
group size of 5:
CREATE AGGREGATION POLICY my_agg_policy
AS () RETURNS AGGREGATION_CONSTRAINT ->
AGGREGATION_CONSTRAINT(MIN_GROUP_SIZE => 5);
Conditional policy
Policy administrators can define the SQL expression of an aggregation policy so different
queries have different restrictions based on factors such as the role of the user executing the
query. This strategy can allow one user to query a table without restriction while requiring
others to aggregate results.
For example, the following aggregation policy gives users with the role ADMIN unrestricted
access to a table while requiring all other queries to aggregate data into groups of at least 5
rows.
CREATE AGGREGATION POLICY my_agg_policy
AS () RETURNS AGGREGATION_CONSTRAINT ->
CASE
WHEN CURRENT_ROLE() = 'ADMIN'
THEN NO_AGGREGATION_CONSTRAINT()
ELSE AGGREGATION_CONSTRAINT(MIN_GROUP_SIZE => 5)
END;
Modify an aggregation policy
You can use the ALTER AGGREGATION POLICY command to modify the SQL expression that
determines the minimum group size of the aggregation policy. You can also rename the policy or
change its comment.
Before modifying an aggregation policy, you can execute the DESCRIBE AGGREGATION
POLICY command or GET_DDL function to review the current SQL expression of the policy. The
SQL expression that determines the minimum group size appears in the BODY column.
As an example, you can execute the following command to change the SQL expression of the
aggregation policy my_policy to require a minimum group size of 2 rows in all circumstances:
ALTER AGGREGATION POLICY my_policy SET BODY ->
AGGREGATION_CONSTRAINT(MIN_GROUP_SIZE=>2);
Assign an aggregation policy
Once created, an aggregation policy can be applied to one or more tables or views to make it
aggregation-constrained. A table or view can only have one aggregation policy attached.
Use the SET AGGREGATION POLICY clause of a ALTER TABLE or ALTER VIEW command
to assign an aggregation policy to an existing table or view:
ALTER { TABLE | VIEW } <name> SET AGGREGATION POLICY <policy_name> [ FORCE ]
Where:
 name specifies the name of the table or view.
 policy_name specifies the name of the aggregation policy.
 FORCE is an optional parameter that allows the command to assign the aggregation policy to a
table or view that already has an aggregation policy assigned to it. The new aggregation
policy atomically replaces the existing one.
For example, to assign the policy my_agg_policy to the table t1, execute:
ALTER TABLE t1 SET AGGREGATION POLICY my_agg_policy;
You can also use the WITH clause of the CREATE TABLE and CREATE VIEW commands to
assign an aggregation policy to a table or view at creation time. For example, to assign the
policy my_agg_policy to a new table, execute:
CREATE TABLE t1 (c1 INT) WITH AGGREGATION POLICY my_agg_policy;
Replace an aggregation policy
The recommended method of replacing an aggregation policy is to use the FORCE parameter to
detach the existing aggregation policy and assign the new one in a single command. This allows you
to atomically replace the old policy, leaving no gap in protection.
For example, to assign a new aggregation policy to a table that is already aggregation-constrained:
ALTER TABLE privacy SET AGGREGATION POLICY agg_policy_2 FORCE;
You can also detach the aggregation policy from a table or view in one statement (… UNSET
AGGREGATION POLICY) and then set a new policy on the table or view in a different statement
(… SET AGGREGATION POLICY <name>). If you choose this method, the table is not protected
by an aggregation policy in between detaching one policy and assigning another. A query could
potentially access sensitive data during this time.
Detach an aggregation policy
Use the UNSET AGGREGATION POLICY clause of an ALTER TABLE or ALTER VIEW
command to detach an aggregation policy from a table or view in order to remove the need to
aggregate data. The name of the aggregation policy is not required because a table or view cannot
have more than one aggregation policy attached.
ALTER {TABLE | VIEW} <name> UNSET AGGREGATION POLICY
Where:
 name specifies the name of the table or view.
For example, to detach an aggregation policy from view v1, execute:
ALTER VIEW v1 UNSET AGGREGATION POLICY;
Monitor aggregation policies
It can be helpful to think of two general approaches to determine how to monitor aggregation policy
usage.
 Discover aggregation policies
 Identify aggregation policy references
Discover aggregation policies
You can use the AGGREGATION_POLICIES view in the Account Usage schema of the shared
SNOWFLAKE database. This view is a catalog for all aggregation policies in your Snowflake
account. For example:
SELECT * FROM SNOWFLAKE.ACCOUNT_USAGE.AGGREGATION_POLICIES
ORDER BY POLICY_NAME;
Identify aggregation policy references
The POLICY_REFERENCES Information Schema table function can identify aggregation policy
references. There are two different syntax options:
1. Return a row for each object (i.e. table or view) that has the specified aggregation policy set
on it:
USE DATABASE my_db;
USE SCHEMA information_schema;

SELECT policy_name,
       policy_kind,
       ref_entity_name,
       ref_entity_domain,
       ref_column_name,
       ref_arg_column_names,
       policy_status
FROM TABLE(information_schema.policy_references(policy_name => 'my_db.my_schema.aggpolicy'));
2. Return a row for each policy assigned to the table named my_table:
USE DATABASE my_db;
USE SCHEMA information_schema;

SELECT policy_name,
       policy_kind,
       ref_entity_name,
       ref_entity_domain,
       ref_column_name,
       ref_arg_column_names,
       policy_status
FROM TABLE(information_schema.policy_references(ref_entity_name => 'my_db.my_schema.my_table', ref_entity_domain => 'table'));
Query requirements
After an aggregation policy has been applied to a table or view, queries against that table or view
must conform to certain requirements. This section discusses what is and isn’t allowed in a query
against an aggregation-constrained table or view.
Note
Once part of the query properly aggregates data to satisfy the requirements of the aggregation policy,
these query restrictions do not apply, and another part of the query can include things that are
otherwise prohibited.
For example, the following query can use a SELECT statement that does not aggregate results
because another part of the query has already satisfied the aggregation requirements of the policy
that is assigned to protected_table:
SELECT * FROM open_table ot WHERE ot.a > (SELECT SUM(id) FROM protected_table pt)
For additional restrictions on what can be included in a query, refer to Limitations.
Aggregation Functions
The following aggregation functions are allowed in a query against an aggregation-
constrained table:
 AVG
 COUNT [DISTINCT]
 HLL
 SUM
A query can contain more than one of these allowed aggregation functions. A query fails if it
attempts to use an aggregation function that is not allowed.
Grouping Statement
A query against an aggregation-constrained table must aggregate data into groups of a
minimum size. It can use an explicit grouping statement (i.e. a GROUP BY clause) or a
scalar aggregation function that aggregates the entire data set (e.g. COUNT(*)).
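For example, both of the following forms satisfy the grouping requirement (using the t1 table from the extended example later in this topic):
-- Explicit grouping
SELECT state, COUNT(*) FROM t1 GROUP BY state;

-- Scalar aggregation over the entire data set
SELECT COUNT(*) FROM t1;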
Filters
In general, Snowflake does not restrict how a query uses WHERE and ON clauses to filter
the aggregation-constrained table as long as it aggregates the rows selected by the filter.
Joins
A query can join an aggregation-constrained table with another table, including another
aggregation-constrained table.
Snowflake checks each aggregation group to make sure that the number of rows taken from
an aggregation-constrained table meets or exceeds the minimum group size of that table. For
example, if an aggregation-constrained table table_a with a minimum group size of 5 is joined
with table_b with a minimum group size of 3, each group returned by the query must be
created using at least 5 rows from table_a and 3 rows from table_b.
Whether a query with a join meets the requirements of an aggregation-constrained table is
determined by the number of rows taken from the table, not the size of a group. As a result,
the size of a group created from the joined data could be greater than the minimum group size
of the aggregation-constrained table, but still result in filtered data. For example, suppose:
 agg_t is aggregation constrained with a minimum group size of 2. This table contains a
single integer column c that has the following content: { 1, 2, 2 }.
 open_t is unconstrained, and contains an integer column c with the following content:
{ 1, 1, 1, 2 }.
A user executes the following query that joins the two tables:
SELECT c, COUNT(*)
FROM agg_t, open_t
WHERE agg_t.c = open_t.c
GROUP BY agg_t.c;
The query will return:
+------+----------+
| c    | COUNT(*) |
|------+----------|
| 2    |        2 |
| null |        3 |
+------+----------+
Even though the second group has 3 records, which is greater than the minimum group size,
all of those records correspond to a single record in the aggregation-constrained table, so the
value is filtered out.
UNION ALL
A query can use UNION ALL to combine results of two subqueries, even if one or more of
the queried tables are aggregation-constrained. Similar to joins, each group in the results must
satisfy the minimum group size of every aggregation-constrained table being queried. For
example, suppose:
 Table protected_table1 has a minimum group size of 2.
 Table protected_table2 has a minimum group size of 5.
If you run the query:
SELECT a, COUNT(*)
FROM (
SELECT a, b FROM protected_table1
UNION ALL
SELECT a, b FROM protected_table2
)
GROUP BY a;
Each group formed by the key a must contain 2 records from protected_table1 and 5 records
from protected_table2, otherwise the records are placed in a remainder group.
External Functions
A query cannot call an external function unless another part of the query has properly
aggregated results to meet the requirements of the aggregation-constrained table.
Logging & Metrics
A query cannot log a column of an aggregation-constrained table via UDF logging or metrics.
Data Type Conversions
A query that includes a data type conversion function in the SELECT statement must use the
TRY version of the function. For example, the TRY_CAST function is allowed, but the
CAST function is prohibited. The following data type conversion functions are allowed for
numeric types:
 TRY_CAST
 TRY_TO_DECIMAL
 TRY_TO_DOUBLE
 TRY_TO_NUMBER
 TRY_TO_NUMERIC
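For example, a sketch of an allowed conversion in a query against an aggregation-constrained table (the table and column names are illustrative, and the amount_text column is assumed to store numbers as text):
SELECT region, AVG(TRY_TO_NUMBER(amount_text)) AS avg_amount
FROM protected_table
GROUP BY region;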
PIVOT
A query cannot use the PIVOT operator against a column in an aggregation-constrained
table.
Extended example
Creating an aggregation policy and assigning the aggregation policy to a table follows the same
general procedure as creating and assigning other policies, such as masking and projection policies:
1. If you are using a centralized management approach, create a custom role
(e.g. agg_policy_admin) to manage the policy. Alternatively, you can use an existing role.
2. Grant this role the privileges to create and assign an aggregation policy.
3. Create the aggregation policy.
4. Assign the aggregation policy to a table.
Once the aggregation policy is assigned to a table, successful queries against the table must
aggregate its data.
The following extended example provides insight into each step in this process, from the provider’s
access control administrator creating a custom role to a data consumer executing a query to return
aggregated results.
Access Control Administrator Tasks
1. Create a custom role to manage the aggregation policy. You could also re-use an
existing role.
USE ROLE USERADMIN;

CREATE ROLE AGG_POLICY_ADMIN;
2. Grant the agg_policy_admin custom role the privileges to create an aggregation policy in
a schema and assign the aggregation policy to a table or view in the Snowflake
account.
This step assumes the aggregation policy will be stored in a database and schema
named privacy.agg_policies and this database and schema already exist:
GRANT USAGE ON DATABASE privacy TO ROLE agg_policy_admin;
GRANT USAGE ON SCHEMA privacy.agg_policies TO ROLE agg_policy_admin;

GRANT CREATE AGGREGATION POLICY
  ON SCHEMA privacy.agg_policies TO ROLE agg_policy_admin;

GRANT APPLY AGGREGATION POLICY ON ACCOUNT TO ROLE agg_policy_admin;
The agg_policy_admin role can now be assigned to one or more users.
For details about the privileges needed to work with aggregation policies, refer
to Privileges and commands (in this topic).
Aggregation Policy Administrator Tasks
1. Create an aggregation policy to require aggregation and define a minimum group size
of 3:
USE ROLE agg_policy_admin;
USE SCHEMA privacy.agg_policies;

CREATE AGGREGATION POLICY my_policy
  AS () RETURNS AGGREGATION_CONSTRAINT ->
  AGGREGATION_CONSTRAINT(MIN_GROUP_SIZE => 3);
2. Assign the aggregation policy to a table t1:
ALTER TABLE t1 SET AGGREGATION POLICY my_policy;
Consumer Query
Once the provider shares the aggregation-constrained table, the data consumer can execute
queries against it. For this example, assume the aggregation-constrained table t1 contains the
following rows:
peak | state | elevation
washington | NH | 6288
cannon | NH | 4080
kearsarge | NH | 2937
mansfield | VT | 4395
killington | VT | 4229
wachusett | MA | 2006
Now, assume that the consumer executes the following query against t1:
SELECT state, AVG(elevation) AS avg_elevation
FROM t1
GROUP BY state;
The results are:
+-------+---------------+
| STATE | AVG_ELEVATION |
|-------+---------------|
| NH    |          4435 |
| NULL  |          3543 |
+-------+---------------+
Note that the value of state in the second group is NULL because it is a remainder group that
averages the elevation of peaks in both VT and MA.
Aggregation policies with Snowflake features
The following subsections briefly summarize how aggregation policies interact with various
Snowflake features and services.
Other policies
This section describes how an aggregation policy interacts with other policies, including masking
policies, row access policies, and projection policies.
You can attach other policies to an aggregation-constrained table. A successful query against the
table must meet the requirements of all policies.
If a row access policy is assigned to an aggregation-constrained table, a row excluded from the query
results based on the row access policy is not included when calculating the aggregated results.
The body of a masking policy, row access policy, or projection policy cannot reference an
aggregation-constrained table, including its columns. Similarly, the body of the other policy cannot
include a UDF that references the aggregation-constrained table.
Views and materialized views
You can assign an aggregation policy to both views and materialized views. When an aggregation
policy is applied to a view, the underlying table does not become aggregation-constrained. This base
table can still be queried without restriction.
To avoid the possibility of exposing sensitive data, all aggregation-constrained views are treated as if
they are secure views even if they are not.
Whether you can create a view from an aggregation-constrained table depends on the type of view:
 You can create a regular view from one or more aggregation-constrained tables, however
queries against that view must aggregate data in a way that meets the restrictions of those
base tables.
 You cannot create a materialized view based on an aggregation-constrained table or view, nor
can you assign an aggregation policy to a table or view upon which a materialized view is
based.
Cloned objects
The following approach helps to safeguard data from users with the SELECT privilege on a cloned
table or view that is stored in the cloned database or schema:
 Cloning an individual aggregation policy object is not supported.
 Cloning a database results in the cloning of all aggregation policies within the database.
 Cloning a schema results in the cloning of all aggregation policies within the schema.
 A cloned table maps to the same aggregation policies as the source table.
 When a table is cloned in the context of its parent schema cloning, if the source table
has a reference to an aggregation policy in the same parent schema (i.e. a local
reference), the cloned table will have a reference to the cloned aggregation policy.
 If the source table refers to an aggregation policy in a different schema (i.e. a foreign
reference), then the cloned table retains the foreign reference.
For more information, see CREATE <object> … CLONE.
Replication
Aggregation policies and their assignments can be replicated using database replication and
replication groups.
For database replication, the replication operation fails if either of the following conditions is true:
 The primary database is in an Enterprise (or higher) account and contains a policy but one or
more of the accounts approved for replication are on lower editions.
 A table or view contained in the primary database has a dangling reference to an aggregation
policy in another database.
The dangling reference behavior for database replication can be avoided when replicating multiple
databases in a replication group.
Privileges and commands
The following subsections provide information to help manage aggregation policies.
Aggregation policy privileges
Snowflake supports the following privileges on the aggregation policy object.
Note that operating on any object in a schema also requires the USAGE privilege on the parent
database and schema.
 APPLY: Enables the set and unset operations for an aggregation policy on a table.
 OWNERSHIP: Transfers ownership of the aggregation policy, which grants full control over the aggregation policy. Required to alter most properties of an aggregation policy.
For details, see Summary of DDL commands, operations, and privileges (in this topic).
Aggregation policy DDL reference
Snowflake supports the following DDL to create and manage aggregation policies.
 CREATE AGGREGATION POLICY
 ALTER AGGREGATION POLICY
 DESCRIBE AGGREGATION POLICY
 DROP AGGREGATION POLICY
 SHOW AGGREGATION POLICIES
Summary of DDL commands, operations, and privileges
The following table summarizes the relationship between aggregation policy privileges and DDL
operations.
Note that operating on any object in a schema also requires the USAGE privilege on the parent
database and schema.
 Create aggregation policy: A role with the CREATE AGGREGATION POLICY privilege in the same schema.
 Alter aggregation policy: The role with the OWNERSHIP privilege on the aggregation policy.
 Describe aggregation policy: One of the following: a role with the global APPLY AGGREGATION POLICY privilege, a role with the OWNERSHIP privilege on the aggregation policy, or a role with the APPLY privilege on the aggregation policy.
 Drop aggregation policy: A role with the OWNERSHIP privilege on the aggregation policy.
 Show aggregation policies: One of the following: a role with the USAGE privilege on the schema in which the aggregation policy exists, or a role with the APPLY AGGREGATION POLICY privilege on the account.
 Set or unset an aggregation policy on a table: One of the following: a role with the APPLY AGGREGATION POLICY privilege on the account, or a role with the APPLY privilege on the aggregation policy and the OWNERSHIP privilege on the table or view.
Snowflake supports different permissions to create and set an aggregation policy on an object.
1. For a centralized aggregation policy management approach in which the agg_policy_admin custom role creates and sets aggregation policies on all tables, the following permissions are necessary:
USE ROLE securityadmin;

GRANT USAGE ON DATABASE mydb TO ROLE agg_policy_admin;
GRANT USAGE ON SCHEMA mydb.schema TO ROLE agg_policy_admin;
GRANT CREATE AGGREGATION POLICY ON SCHEMA mydb.schema TO ROLE agg_policy_admin;
GRANT APPLY AGGREGATION POLICY ON ACCOUNT TO ROLE agg_policy_admin;
2. In a hybrid management approach, a single role has the CREATE AGGREGATION POLICY privilege to ensure aggregation policies are named consistently, and individual teams or roles have the APPLY privilege for a specific aggregation policy.
For example, the custom role finance_role can be granted permission to set the aggregation policy cost_center on tables and views the role owns (i.e. the role has the OWNERSHIP privilege on the table or view):
USE ROLE securityadmin;

GRANT CREATE AGGREGATION POLICY ON SCHEMA mydb.schema TO ROLE agg_policy_admin;
GRANT APPLY ON AGGREGATION POLICY cost_center TO ROLE finance_role;
Projection policies
ENTERPRISE EDITION FEATURE
This feature requires Enterprise Edition (or higher). To inquire about upgrading, please contact Snowflake Support.
PREVIEW FEATURE - OPEN
Available to all accounts that are Enterprise Edition (or higher).
This topic provides information about using projection policies to allow or prevent column
projection in the output of a SQL query result.
Overview
A projection policy is a first-class, schema-level object that defines whether a column can be
projected in the output of a SQL query result. A column with a projection policy assigned to it is said
to be projection constrained.
Projection policies can be used to constrain identifier columns (e.g. name, phone number) for objects
when sharing data securely. For example, consider two companies that would like to work as
solution partners and want to identify a set of common customers prior to developing an integrated
solution. The provider can assign a projection policy to the identifier columns in the share that could otherwise be used to expose sensitive information. The consumer can use the shared data to perform a match based on previously agreed columns that are necessary to the solution, but cannot return values from those columns.
The benefits of using the projection policy in this example include:
 The consumer can match records based on a particular value without being able to view that
value.
 Sensitive provider information cannot be output in the SQL query result. For details, see
the Considerations section (in this topic).
After creating the projection policy, a policy administrator can assign the projection policy to a
column. A column can only have one projection policy assigned to it at any given time. A user can
project the column only if their active role matches a projection policy condition that allows the
column to be projected.
Note that a projection constrained column can also be protected by a masking policy and the table
containing the projection constrained column can be protected by a row access policy. For more
details, see Masking & row access policies (in this topic).
Column usage
Snowflake tracks column usage. Indirect references to a column, such as a view definition, UDF (in
this topic), and common table expression, impact column projection when a projection policy is set
on a column.
When a projection policy is set on the column and the column cannot be projected, the column:
 Is not included in the output of a query result.
 Cannot be inserted into another table.
 Cannot be an argument for an external function or stored procedure.
Limitations
UDFs
For limitations regarding user-defined functions (UDFs), see User-defined functions
(UDFs) (in this topic).
Policy
A projection policy cannot be applied to:
 A tag, meaning that tag-based projection policies (i.e. assigning a projection policy to a tag and then assigning that tag to a table or column) are not supported.
 A virtual column or to the VALUE column in an external table.
As a workaround, create a view and assign a projection policy to each column that
should not be projected.
 The value_column in a PIVOT construct. For related details, see UNPIVOT (in this
topic).
A projection policy body cannot reference a column protected by a masking policy or a table
protected by a row access policy. For additional details, see Masking & row access
policies (in this topic).
Considerations
Use projection policies when the use case calls for querying a sensitive column without directly
exposing the column value to an analyst or similar role. The column value within a projection
constrained column can be analyzed with greater flexibility than a masked or tokenized value.
However, consider the following prior to setting a projection policy on a column:
 A projection policy does not prevent the targeting of an individual.
For example, a user can filter rows where the name column corresponds to a particular
individual, even if the column is projection constrained. However, the user cannot run a
SELECT statement to view names of the individuals in the table (see the sketch after this list).
 When a projection constrained column is the join key for a query that combines data from the
protected table with data from an unprotected table, nothing prevents the user from projecting
values from the column in the unprotected table. As a result, if a value in the unprotected
table matches a value in the protected column, the user can obtain that value by projecting it
from the unprotected table.
For example, suppose a projection policy was assigned to the email column of
the t_protected table. A user can still ascertain values in the t_protected.email column by
executing:
SELECT t_unprotected.email
FROM t_unprotected JOIN t_protected ON t_unprotected.email = t_protected.email;
 A projection constraint does not guarantee that a malicious actor could not use deliberate queries to obtain potentially sensitive data from a projection-constrained column. Projection policies are best suited for use with partners and customers with whom you have an existing level of trust. In addition, providers should be vigilant about potential misuse of their data (e.g. reviewing the access history for their listings).
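As an illustration of the first consideration, a minimal sketch assuming a hypothetical table t_protected whose name column is projection constrained:
-- Allowed: filtering on the constrained column, so an individual can still be targeted.
SELECT COUNT(*) FROM t_protected WHERE name = 'Jane Doe';

-- Blocked: projecting the constrained column.
SELECT name FROM t_protected;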
Create a projection policy
A projection policy contains a body that calls the internal PROJECTION_CONSTRAINT function to
determine whether to project a column.
CREATE OR REPLACE PROJECTION POLICY <name>
AS () RETURNS PROJECTION_CONSTRAINT -> <body>
Where:
 name specifies the name of the policy.
 AS () RETURNS PROJECTION_CONSTRAINT is the signature and return type of the
policy. The signature does not accept any arguments and the return type is
PROJECTION_CONSTRAINT, which is an internal data type. All projection policies have
the same signature and return type.
 body is the SQL expression that determines whether to project the column. The expression
calls the internal PROJECTION_CONSTRAINT function to allow or prevent the projection
of a column:
 PROJECTION_CONSTRAINT(ALLOW => true) allows projecting a column.
 PROJECTION_CONSTRAINT(ALLOW => false) does not allow projecting a
column.
Example policies
The simplest projection policies call the PROJECTION_CONSTRAINT function directly:
Allow column projection
CREATE OR REPLACE PROJECTION POLICY mypolicy
AS () RETURNS PROJECTION_CONSTRAINT ->
PROJECTION_CONSTRAINT(ALLOW => true)
Prevent column projection
CREATE OR REPLACE PROJECTION POLICY mypolicy
AS () RETURNS PROJECTION_CONSTRAINT ->
PROJECTION_CONSTRAINT(ALLOW => false)
More complicated SQL expressions can be written to call the PROJECTION_CONSTRAINT
function. The expression can use Conditional Expression Functions and Context Functions to
introduce logic to allow certain users with a particular role to project a column and prevent all other
users from projecting a column.
For example, use a CASE expression to only allow users with the analyst custom role to project a
column:
CREATE OR REPLACE PROJECTION POLICY mypolicy
AS () RETURNS PROJECTION_CONSTRAINT ->
CASE
WHEN CURRENT_ROLE() = 'ANALYST'
THEN PROJECTION_CONSTRAINT(ALLOW => true)
ELSE PROJECTION_CONSTRAINT(ALLOW => false)
END;
For data sharing use cases, the provider can write a projection policy to constrain column projection
for all consumer accounts using the CURRENT_ACCOUNT context function, or selectively restrict
column projection in specific shares using the INVOKER_SHARE context function. For example:
Restrict all consumer accounts
In this example, provider.account is the account identifier in the account name format:
CREATE OR REPLACE PROJECTION POLICY restrict_consumer_accounts
AS () RETURNS PROJECTION_CONSTRAINT ->
CASE
WHEN CURRENT_ACCOUNT() = 'provider.account'
THEN PROJECTION_CONSTRAINT(ALLOW => true)
ELSE PROJECTION_CONSTRAINT(ALLOW => false)
END;
Restrict to specific shares
Consider a data sharing provider account that has a projection policy set on a column of a
secure view. There are two different shares (SHARE1 and SHARE2) that can access the
secure view to support two different data sharing consumers.
If a user in a data sharing consumer account attempts to project the column through either of these shares, the column can be projected; otherwise, the column cannot be projected:
CREATE OR REPLACE PROJECTION POLICY projection_share
AS () RETURNS PROJECTION_CONSTRAINT ->
CASE
WHEN INVOKER_SHARE() IN ('SHARE1', 'SHARE2')
THEN PROJECTION_CONSTRAINT(ALLOW => true)
ELSE PROJECTION_CONSTRAINT(ALLOW => false)
END;
Assign a projection policy
A projection policy is applied to a table column using an ALTER TABLE … ALTER
COLUMN command and a view column using an ALTER VIEW command. Each column supports
only one projection policy.
ALTER { TABLE | VIEW } <name>
{ ALTER | MODIFY } COLUMN <col1_name>
SET PROJECTION POLICY <policy_name> [ FORCE ]
[ , <col2_name> SET PROJECTION POLICY <policy_name> [ FORCE ] ... ]
Where:
 name specifies the name of the table or view.
 col1_name specifies the name of the column in the table or view.
 col2_name specifies the name of an additional column in the table or view.
 policy_name specifies the name of the projection policy set on the column.
 FORCE is an optional parameter that allows the command to assign the projection policy to a
column that already has a projection policy assigned to it. The new projection policy
atomically replaces the existing one.
For example, to set a projection policy proj_policy_acctnumber on the account_number column of a
table:
ALTER TABLE finance.accounting.customers
MODIFY COLUMN account_number
SET PROJECTION POLICY proj_policy_acctnumber;
You can also use the WITH clause of the CREATE TABLE and CREATE VIEW commands to
assign a projection policy to a column when the table or view is created. For example, to assign the
policy my_proj_policy to the account_number column of a new table, execute:
CREATE TABLE t1 (account_number NUMBER WITH PROJECTION POLICY
my_proj_policy);
Replace a projection policy
The recommended method of replacing a projection policy is to use the FORCE parameter to detach
the existing projection policy and assign the new one in a single command. This allows you to
atomically replace the old policy, leaving no gap in protection.
For example, to assign a new projection policy to a column that is already projection-constrained:
ALTER TABLE finance.accounting.customers
MODIFY COLUMN account_number
SET PROJECTION POLICY proj_policy2 FORCE;
You can also detach the projection policy from a column in one statement (… UNSET
PROJECTION POLICY) and then set a new policy on the column in a different statement (… SET
PROJECTION POLICY <name>). If you choose this method, the column is not protected by a
projection policy in between detaching one policy and assigning another. A query could potentially
access sensitive data during this time.
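For example, a sketch of the two-statement approach using the same objects as the earlier examples:
-- Detach the existing policy; the column is unprotected until the next statement runs.
ALTER TABLE finance.accounting.customers
  MODIFY COLUMN account_number
  UNSET PROJECTION POLICY;

-- Assign the new policy.
ALTER TABLE finance.accounting.customers
  MODIFY COLUMN account_number
  SET PROJECTION POLICY proj_policy2;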
Detach a projection policy
Use the UNSET PROJECTION POLICY clause of an ALTER TABLE or ALTER VIEW command
to detach a projection policy from the column of a table or view. The name of the projection policy is
not required because a column cannot have more than one projection policy attached.
ALTER { TABLE | VIEW } <name>
{ ALTER | MODIFY } COLUMN <col1_name>
UNSET PROJECTION POLICY
[ , <col2_name> UNSET PROJECTION POLICY ... ]
Where:
 name specifies the name of the table or view.
 col1_name specifies the name of the column in the table or view.
 col2_name specifies the name of an additional column in the table or view.
For example, to remove the projection policy from the account_number column:
ALTER TABLE finance.accounting.customers
MODIFY COLUMN account_number
UNSET PROJECTION POLICY;
Monitor projection policies with SQL
It can be helpful to think of two general approaches to determine how to monitor projection policy
usage.
 Discover projection policies
 Identify projection policy references
Discover projection policies
You can use the PROJECTION_POLICIES view in the Account Usage schema of the shared
SNOWFLAKE database. This view is a catalog for all projection policies in your Snowflake
account. For example:
SELECT * FROM SNOWFLAKE.ACCOUNT_USAGE.PROJECTION_POLICIES
ORDER BY POLICY_NAME;
Identify projection policy references
The POLICY_REFERENCES Information Schema table function can identify projection policy
references. There are two different syntax options:
1. Return a row for each object (i.e. table or view) that has the specified projection policy set on a column:
USE DATABASE my_db;

SELECT policy_name,
  policy_kind,
  ref_entity_name,
  ref_entity_domain,
  ref_column_name,
  ref_arg_column_names,
  policy_status
FROM TABLE(information_schema.policy_references(policy_name => 'my_db.my_schema.projpolicy'));
2. Return a row for each policy assigned to the table named my_table:
USE DATABASE my_db;
USE SCHEMA information_schema;

SELECT policy_name,
  policy_kind,
  ref_entity_name,
  ref_entity_domain,
  ref_column_name,
  ref_arg_column_names,
  policy_status
FROM TABLE(information_schema.policy_references(ref_entity_name => 'my_db.my_schema.my_table', ref_entity_domain => 'table'));
Extended example
Creating a projection policy and assigning the projection policy to a column follows the same
general procedure as creating and assigning other policies, such as masking and row access policies:
1. For a centralized management approach, create a custom role (e.g. proj_policy_admin) to
manage the policy.
2. Grant this role the privileges to create and assign a projection policy.
3. Create the projection policy.
4. Assign the projection policy to a column.
Based on this general procedure, complete the following steps to assign a projection policy to a
column:
1. Create a custom role to manage the projection policy:
USE ROLE useradmin;

CREATE ROLE proj_policy_admin;
2. Grant the proj_policy_admin custom role the privileges to create a projection policy in a schema and assign the projection policy to any table or view column in the Snowflake account.
This step assumes the projection policy will be stored in a database and schema named privacy.projpolicies and this database and schema already exist:
GRANT USAGE ON DATABASE privacy TO ROLE proj_policy_admin;
GRANT USAGE ON SCHEMA privacy.projpolicies TO ROLE proj_policy_admin;

GRANT CREATE PROJECTION POLICY ON SCHEMA privacy.projpolicies TO ROLE proj_policy_admin;
GRANT APPLY PROJECTION POLICY ON ACCOUNT TO ROLE proj_policy_admin;
For details, see Privileges and commands (in this topic).
3. Create a projection policy to prevent column projection:
USE ROLE proj_policy_admin;
USE SCHEMA privacy.projpolicies;

CREATE OR REPLACE PROJECTION POLICY proj_policy_false
  AS () RETURNS PROJECTION_CONSTRAINT ->
  PROJECTION_CONSTRAINT(ALLOW => false);
4. Assign the projection policy to a table column:
ALTER TABLE customers MODIFY COLUMN active
  SET PROJECTION POLICY privacy.projpolicies.proj_policy_false;
Projection policies with Snowflake features
The following subsections briefly summarize how projection policies interact with various
Snowflake features and services.
Masking & row access policies
This section describes how a projection policy interacts with a masking policy and a row access
policy.
Multiple policies
A column can have a masking policy and a projection policy at the same time, and the table
containing this column can be protected by a row access policy. If all three policies are
present, Snowflake processes the table and policies as follows:
1. Apply row filters according to the row access policy.
2. Determine if the query is attempting to project any columns that are restricted by the
projection policy, and if so, reject the query.
3. Apply column masks according to the masking policy.
A column protected by a masking policy can also be projection constrained. For example, a
masking policy set on a column containing account numbers can have a condition that allows
users with the finance_admin custom role to see the account numbers and another condition
to replace the account numbers with a hash for all other roles.
A projection policy can further restrict the column such that users with the analyst custom
role cannot project the column. Note that users with the analyst custom role can still analyze
the column by grouping hashes or joining on these hashes.
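The following sketch illustrates this combination; the policy names, role names, table, and column are hypothetical:
-- Masking policy: finance_admin sees raw account numbers, everyone else sees a hash.
CREATE MASKING POLICY mask_acct AS (val STRING) RETURNS STRING ->
  CASE
    WHEN CURRENT_ROLE() = 'FINANCE_ADMIN' THEN val
    ELSE SHA2(val)
  END;

-- Projection policy: the analyst custom role cannot project the column at all.
CREATE PROJECTION POLICY no_project_analyst AS () RETURNS PROJECTION_CONSTRAINT ->
  CASE
    WHEN CURRENT_ROLE() = 'ANALYST' THEN PROJECTION_CONSTRAINT(ALLOW => false)
    ELSE PROJECTION_CONSTRAINT(ALLOW => true)
  END;

-- Assign both policies to the same column.
ALTER TABLE customers MODIFY COLUMN account_number SET MASKING POLICY mask_acct;
ALTER TABLE customers MODIFY COLUMN account_number SET PROJECTION POLICY no_project_analyst;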
Snowflake recommends that policy administrators work with internal compliance and
regulatory officers to determine the columns that should be projection constrained.
Policy evaluation
A projection constrained column cannot be referenced by a masking policy or a row access
policy when:
 Assigning a row access policy to a table.
 Enumerating one or more columns in a conditional masking policy.
 Performing a mapping table lookup.
As mentioned in the Limitations (in this topic), a projection policy body cannot reference a
column protected by a masking policy or a table protected by a row access policy.
Dependent objects with other projection policies
Consider the following series of objects:
base_table » v1 » v2
Where:
 v1 is a view built from the table named base_table.
 v2 is a view built from v1.
If there is a query on a column in a view that is projection-constrained and that column depends on a
projection constrained column in base_table, the view column will be projected only
if both projection policies allow the column to be projected.
Snowflake checks the column lineage chain all the way to the base table to ensure that any references
to the column are not projection constrained. If any column in the lineage chain is projection
constrained and the column is not allowed to be projected, Snowflake blocks the query.
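A minimal sketch of this lineage check, using hypothetical object and policy names:
-- base_table.email is constrained by policy p_base; v1.email is constrained by p_view.
CREATE VIEW v1 AS SELECT id, email FROM base_table;
CREATE VIEW v2 AS SELECT email FROM v1;

ALTER TABLE base_table MODIFY COLUMN email SET PROJECTION POLICY p_base;
ALTER VIEW v1 MODIFY COLUMN email SET PROJECTION POLICY p_view;

-- SELECT email FROM v2 projects the column only if both p_base and p_view
-- allow projection for the querying role; otherwise Snowflake blocks the query.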
Views & materialized views
A projection policy on a view column constrains the view column and not the underlying base table
column.
Regarding references, a projection policy that constrains a table column carries over to a view that
references the constrained table column.
Streams & tasks
Projection policies on columns in a table carry over to a stream on the same table. Note that a
projection policy cannot be set on a stream.
Similarly, a projection constrained column remains constrained when a task references the
constrained column.
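For example, a minimal sketch assuming the email column of a hypothetical table t1 is projection constrained:
-- Create a stream on the table; the projection constraint carries over.
CREATE STREAM t1_stream ON TABLE t1;

-- Allowed: the query does not project the constrained column.
SELECT COUNT(*) FROM t1_stream;

-- Blocked: attempting to project the constrained column from the stream.
SELECT email FROM t1_stream;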
UNPIVOT
The result of an UNPIVOT construct depends on whether a column was initially constrained by a
projection policy. Note:
 Constrained columns prior to and after executing UNPIVOT remain projection constrained.
 The name_column always appears in the query result.
 If any columns in the column_list are projection constrained, the value_column is also
projection constrained.
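For example, a hedged sketch using the standard UNPIVOT shape, where the monthly_sales table and its month columns are hypothetical; if any of the month columns (the column_list) is projection constrained, the sales value_column is constrained as well, while the month name_column still appears in the result:
SELECT *
FROM monthly_sales
  UNPIVOT (sales FOR month IN (jan, feb, mar, apr));
-- Here: sales is the value_column, month is the name_column,
-- and (jan, feb, mar, apr) is the column_list.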
Cloned objects
The following approach helps to safeguard data from users with the SELECT privilege on a cloned
table or view that is stored in the cloned database or schema:
 Cloning an individual projection policy object is not supported.
 Cloning a schema results in the cloning of all projection policies within the schema.
 A cloned table maps to the same projection policies as the source table.
 When a table is cloned in the context of its parent schema cloning, if the source table
has a reference to a projection policy in the same parent schema (i.e. a local
reference), the cloned table will have a reference to the cloned projection policy.
 If the source table refers to a projection policy in a different schema (i.e. a foreign
reference), then the cloned table retains the foreign reference.
For more information, see CREATE <object> … CLONE.
Replication
Projection policies and their assignments can be replicated using database replication and replication
groups.
For database replication, the replication operation fails if either of the following conditions is true:
 The primary database is in an Enterprise (or higher) account and contains a policy but one or
more of the accounts approved for replication are on lower editions.
 A table or view contained in the primary database has a dangling reference to a projection
policy in another database.
The dangling reference behavior for database replication can be avoided when replicating multiple
databases in a replication group.
User-defined functions (UDFs)
Note the following regarding projection constraints and UDFs:
Scalar SQL UDFs
Snowflake evaluates the UDF and then applies the projection policy to the projection
constrained column.
If a column in a SELECT statement is transitively derived from a UDF, which is also derived
from a projection constrained column, Snowflake blocks the query. In other words:
pc_column » UDF » column (in SELECT statement)
Where:
 pc_column refers to a projection constrained column.
Because the column in the SELECT statement can be traced to a projection constrained
column, Snowflake blocks the query.
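A minimal sketch of this lineage, assuming a hypothetical scalar SQL UDF and a projection-constrained email column on t_protected:
-- Hypothetical scalar SQL UDF that derives a value from its argument.
CREATE FUNCTION local_part(e STRING)
  RETURNS STRING
  AS $$ SPLIT_PART(e, '@', 1) $$;

-- The selected column is derived from the constrained column through the UDF,
-- so Snowflake blocks this query.
SELECT local_part(email) FROM t_protected;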
SQL UDTFs
SQL user-defined table functions (UDTF) follow the same behavior as SQL UDFs, except
that because rows are returned in the function output, Snowflake evaluates each table column
independently to determine whether to project the column in the function output.
Other UDFs
The following applies to Java UDFs, JavaScript UDFs, and Python UDFs:
 A projection constrained column remains constrained in the UDF or UDTF output.
Logging & Event Tables
When a UDF, UDTF, or JavaScript UDF has a projection-constrained argument, Snowflake
does not capture log and event details in the corresponding event table. However, Snowflake
allows the UDF/UDTF to execute and does not fail the statement calling the UDF/UDTF due
to logging reasons.
Privileges and commands
The following subsections provide information to help manage projection policies.
Projection policy privileges
Snowflake supports the following privileges on the projection policy object.
Note that operating on any object in a schema also requires the USAGE privilege on the parent
database and schema.
 APPLY: Enables the set and unset operations for a projection policy on a column.
 OWNERSHIP: Transfers ownership of the projection policy, which grants full control over the projection policy. Required to alter most properties of a projection policy.
For details, see Summary of DDL commands, operations, and privileges (in this topic).
Projection policy DDL reference
Snowflake supports the following DDL to create and manage projection policies.
 CREATE PROJECTION POLICY
 ALTER PROJECTION POLICY
 DESCRIBE PROJECTION POLICY
 DROP PROJECTION POLICY
 SHOW PROJECTION POLICIES
Summary of DDL commands, operations, and privileges
The following table summarizes the relationship between projection policy privileges and DDL
operations.
Note that operating on any object in a schema also requires the USAGE privilege on the parent
database and schema.
 Create projection policy: A role with the CREATE PROJECTION POLICY privilege in the same schema.
 Alter projection policy: The role with the OWNERSHIP privilege on the projection policy.
 Describe projection policy: One of the following: a role with the global APPLY PROJECTION POLICY privilege, a role with the OWNERSHIP privilege on the projection policy, or a role with the APPLY privilege on the projection policy.
 Drop projection policy: A role with the OWNERSHIP privilege on the projection policy.
 Show projection policies: One of the following: a role with the USAGE privilege on the schema in which the projection policy exists, or a role with the APPLY PROJECTION POLICY privilege on the account.
 Set or unset a projection policy on a column: One of the following: a role with the APPLY PROJECTION POLICY privilege on the account, or a role with the APPLY privilege on the projection policy and the OWNERSHIP privilege on the table or view.
Snowflake supports different permissions to create and set a projection policy on an object.
1. For a centralized projection policy management approach in which the projection_policy_admin custom role creates and sets projection policies on all columns, the following permissions are necessary:
USE ROLE securityadmin;

GRANT USAGE ON DATABASE mydb TO ROLE projection_policy_admin;
GRANT USAGE ON SCHEMA mydb.schema TO ROLE projection_policy_admin;
GRANT CREATE PROJECTION POLICY ON SCHEMA mydb.schema TO ROLE projection_policy_admin;
GRANT APPLY PROJECTION POLICY ON ACCOUNT TO ROLE projection_policy_admin;
2. In a hybrid management approach, a single role has the CREATE PROJECTION POLICY privilege to ensure projection policies are named consistently, and individual teams or roles have the APPLY privilege for a specific projection policy.
For example, the custom role finance_role can be granted permission to set the projection policy cost_center on tables and views the role owns (i.e. the role has the OWNERSHIP privilege on the table or view):
USE ROLE securityadmin;

GRANT CREATE PROJECTION POLICY ON SCHEMA mydb.schema TO ROLE projection_policy_admin;
GRANT APPLY ON PROJECTION POLICY cost_center TO ROLE finance_role;