Securing Snowflake
Snowflake provides industry-leading features that ensure the highest levels of security for your
account and users, as well as all the data you store in Snowflake.
These topics are intended primarily for administrators (i.e. users with the ACCOUNTADMIN,
SYSADMIN, or SECURITYADMIN roles).
Federated Authentication & SSO
Topics related to federated authentication to Snowflake.
Key-pair authentication and key-pair rotation
Using key-pair authentication to Snowflake.
Multi-factor authentication (MFA)
Using multi-factor authentication with Snowflake.
Snowflake OAuth
Topics related to using Snowflake OAuth to connect to Snowflake.
External OAuth
Topics related to using External OAuth to connect to Snowflake.
Authentication policies
Using authentication policies to restrict account and user authentication by client,
authentication methods, and more.
Controlling network traffic with network policies
Using network policies to restrict access to Snowflake.
Network rules
Using network rules with other Snowflake features to restrict access to and from Snowflake.
AWS VPC interface endpoints for internal stages
Using private connectivity to connect to Snowflake internal stages on AWS.
Azure Private Endpoints for Internal Stages
Using private connectivity to connect to Snowflake internal stages on Azure.
AWS PrivateLink and Snowflake
Using private connectivity to connect to your Snowflake account on AWS.
Azure Private Link and Snowflake
Using private connectivity to connect to your Snowflake account on Azure.
Google Cloud Private Service Connect and Snowflake
Using private connectivity to connect to your Snowflake account on Google Cloud Platform.
Snowflake Sessions & Session Policies
Using session policies to manage your Snowflake session.
SCIM
Topics related to using SCIM to provision users and groups to Snowflake.
Access Control
Topics related to role-based access control (RBAC) in Snowflake.
End-to-End Encryption
Using end-to-end encryption in Snowflake.
Encryption Key Management
Using encryption key management in Snowflake.
Multi-factor authentication (MFA)
Using MFA with the Classic Console web interface
To log into the Classic Console with MFA:
1. Point your browser at the URL for your account. For example: https://myorg-account1.snowflakecomputing.com.
2. Enter your credentials (user login name and password).
3. If Duo Push is enabled, a push notification is sent to your Duo Mobile application. When you
receive the notification, select Approve and you will be logged into Snowflake.
Instead of approving the push notification, you can also choose to:
Select Call Me to receive login instructions from a phone call to the registered mobile
device.
Select Enter a Passcode to log in by manually entering a passcode provided by the
Duo Mobile application.
Using MFA with SnowSQL
MFA can be used for connecting to Snowflake through SnowSQL. By default, the Duo Push
authentication mechanism is used when a user is enrolled in MFA.
To use a Duo-generated passcode instead of the push mechanism, the login parameters must include
one of the following connection options:
--mfa-passcode <string> OR --mfa-passcode-in-password
For more details, see SnowSQL (CLI client).
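For example, a hypothetical SnowSQL invocation that supplies a Duo passcode at login (the account and user values are placeholders):
snowsql -a xy12345 -u demo --mfa-passcode 123456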
Using MFA with JDBC
MFA can be used for connecting to Snowflake via the Snowflake JDBC driver. By default, the Duo
Push authentication mechanism is used when a user is enrolled in MFA; no changes to the JDBC
connection string are required.
To use a Duo-generated passcode instead of the push mechanism, one of the following parameters
must be included in the JDBC connection string:
passcode=<passcode_string> OR passcodeInPassword=on
Where:
passcode_string is a Duo-generated passcode for the user who is connecting. This can be a
passcode generated by the Duo Mobile application or an SMS passcode.
If passcodeInPassword=on , then the password and passcode are concatenated, in the form
of <password_string><passcode_string> .
For more details, see JDBC Driver.
Examples of JDBC connection strings using Duo
JDBC connection string for user demo connecting to the xy12345 account (in the US West region)
using a Duo passcode:
jdbc:snowflake://xy12345.snowflakecomputing.com/?user=demo&passcode=123456
JDBC connection string for user demo connecting to the xy12345 account (in the US West region)
using a Duo passcode that is embedded in the password:
jdbc:snowflake://xy12345.snowflakecomputing.com/?user=demo&passcodeInPassword=on
Using MFA with ODBC
MFA can be used for connecting to Snowflake via the Snowflake ODBC driver. By default, the Duo
Push authentication mechanism is used when a user is enrolled in MFA; no changes to the ODBC
settings are required.
To use a Duo-generated passcode instead of the push mechanism, one of the following parameters
must be specified for the driver:
passcode=<passcode_string> OR passcodeInPassword=on
Where:
passcode_string is a Duo-generated passcode for the user who is connecting. This can be a
passcode generated by the Duo Mobile application or an SMS passcode.
If passcodeInPassword=on , then the password and passcode are concatenated, in the form
of <password_string><passcode_string> .
For more details, see ODBC Driver.
Using MFA with Python
MFA can be used for connecting to Snowflake via the Snowflake Python Connector. By default, the
Duo Push authentication mechanism is used when a user is enrolled in MFA; no changes to the
Python API calls are required.
To use a Duo-generated passcode instead of the push mechanism, one of the following parameters
must be specified for the driver in the connect() method:
passcode=<passcode_string> OR passcode_in_password=True
Where:
passcode_string is a Duo-generated passcode for the user who is connecting. This can be a
passcode generated by the Duo Mobile application or an SMS passcode.
If passcode_in_password=True , then the password and passcode are concatenated, in the form
of <password_string><passcode_string> .
For more details, see the description of the connect() method in the Functions section of the Python
Connector API documentation.
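For example, a minimal sketch using the Python Connector (the account, user, and credential values are placeholders):
import snowflake.connector

# Connect using a Duo-generated passcode instead of a Duo Push notification.
ctx = snowflake.connector.connect(
    account='xy12345',
    user='demo',
    password='<password>',
    passcode='123456'  # passcode from the Duo Mobile application or SMS
)

# Alternatively, append the passcode to the password and set passcode_in_password=True:
# snowflake.connector.connect(..., password='<password>123456', passcode_in_password=True)

ctx.close()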
MFA error codes
The following are error codes associated with MFA that can be returned during the authentication
flow.
The errors are displayed with each failed login attempt. Historical data is also available in Snowflake
Information Schema and Account Usage:
Information Schema provides data from within the past 7 days and can be queried using the LOGIN_HISTORY and LOGIN_HISTORY_BY_USER table functions.
The Account Usage LOGIN_HISTORY View provides data from within the past year.
390120 EXT_AUTHN_DENIED: Duo Security authentication is denied.
390121 EXT_AUTHN_PENDING: Duo Security authentication is pending.
390122 EXT_AUTHN_NOT_ENROLLED: User is not enrolled in Duo Security. Contact your local system administrator.
390123 EXT_AUTHN_LOCKED: User is locked out of Duo Security. Contact your local system administrator.
390124 EXT_AUTHN_REQUESTED: Duo Security authentication is required.
390125 EXT_AUTHN_SMS_SENT: A Duo Security temporary passcode was sent via SMS. Authenticate using the passcode.
390126 EXT_AUTHN_TIMEOUT: Timed out waiting for approval of your login request via Duo Mobile. If your mobile device has no data service, generate a Duo passcode and enter it in the connection string.
390127 EXT_AUTHN_INVALID: An incorrect passcode was specified.
390128 EXT_AUTHN_SUCCEEDED: Duo Security authentication was successful.
390129 EXT_AUTHN_EXCEPTION: The request could not be completed due to a communication problem with the external service provider. Try again later.
390132 EXT_AUTHN_DUO_PUSH_DISABLED: Duo Push is not enabled for your MFA. Provide a passcode as part of the connection string.
Introduction to OAuth
Snowflake enables OAuth for clients through integrations. An integration is a Snowflake object that
provides an interface between Snowflake and third-party services. Administrators configure OAuth
using a Security integration, which enables clients that support OAuth to redirect users to an
authorization page and generate access tokens (and optionally, refresh tokens) for accessing
Snowflake.
Snowflake supports the OAuth 2.0 protocol for authentication and authorization using one of the
options below:
Snowflake OAuth
External OAuth
The following compares Snowflake OAuth and External OAuth by category:
Modify client application: Required (Snowflake OAuth); Required (External OAuth).
Client application browser access: Required (Snowflake OAuth); Not required (External OAuth).
Programmatic clients: Requires a browser (Snowflake OAuth); Best fit (External OAuth).
Driver property: authenticator = oauth for both.
Security integration syntax: create security integration type = oauth ... (Snowflake OAuth); create security integration type = external_oauth ... (External OAuth).
OAuth flow: OAuth 2.0 code grant flow (Snowflake OAuth); any OAuth flow that the client can initiate with the External OAuth server (External OAuth).
Auditing OAuth logins
To query login attempts by Snowflake users, Snowflake provides a login history:
LOGIN_HISTORY , LOGIN_HISTORY_BY_USER (table function)
LOGIN_HISTORY View (view)
When OAuth is used to authenticate (successfully or unsuccessfully), the
FIRST_AUTHENTICATION_FACTOR column in the output has the value
OAUTH_ACCESS_TOKEN.
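For example, the following sketch queries the Information Schema table function for logins over the past 7 days that used OAuth (the database name is illustrative):
SELECT event_timestamp, user_name, first_authentication_factor, is_success
FROM TABLE(my_db.INFORMATION_SCHEMA.LOGIN_HISTORY(
    TIME_RANGE_START => DATEADD('day', -7, CURRENT_TIMESTAMP())))
WHERE first_authentication_factor = 'OAUTH_ACCESS_TOKEN';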
Private connectivity
Snowflake supports External OAuth with private connectivity to the Snowflake service.
Snowflake OAuth and Tableau can be used with private connectivity to Snowflake as follows:
Tableau Desktop
Starting with Tableau 2020.4, Tableau contains an embedded OAuth client that supports
connecting to Snowflake with the account URL for private connectivity to the Snowflake
service.
After upgrading to Tableau 2020.4, no further configuration is needed; use the corresponding
private connectivity URL for either AWS or Azure to connect to Snowflake.
Tableau Server
Starting with Tableau 2020.4, users can optionally configure Tableau Server to use the
embedded OAuth Client to connect to Snowflake with the account URL for private
connectivity to the Snowflake service.
To use this feature, create a new Custom Client security integration and follow the Tableau
instructions.
Tableau Online
Tableau Online does not support the Snowflake account URL for private connectivity to the
Snowflake service because Tableau Online requires access to the public Internet.
Please contact Tableau for more information regarding when Tableau Online will support Snowflake account URLs for private connectivity to the Snowflake service.
Important
To determine the account URL to use with private connectivity to the Snowflake service, call
the SYSTEM$GET_PRIVATELINK_CONFIG function.
Looker
Currently, combining Snowflake OAuth and Looker requires access to the public Internet.
Therefore, you cannot use Snowflake OAuth and Looker with private connectivity to the
Snowflake service.
For more information, refer to:
SSO with private connectivity
Configure Snowflake OAuth for partner applications
Clients, drivers, and connectors
Supported clients, drivers, and connectors can use OAuth to verify user login credentials.
Note the following:
It is necessary to set the authenticator parameter to oauth and the token parameter to
the oauth_access_token.
When passing the token value as a URL query parameter, it is necessary to URL-encode
the oauth_access_token value.
When passing the token value to a Properties object (e.g. JDBC Driver), no modifications are
necessary.
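For example, a hypothetical JDBC connection string (the account identifier and token value are placeholders; the token must be URL-encoded in this form):
jdbc:snowflake://<account_identifier>.snowflakecomputing.com/?authenticator=oauth&token=<url_encoded_oauth_access_token>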
For more information about connection parameters, refer to the reference documentation for the
following clients, drivers, or connectors:
SnowSQL
Python
Go
JDBC
ODBC
Spark Connector
.NET Driver
Node.js Driver
Client Redirect
Snowflake supports using Client Redirect with Snowflake OAuth and External OAuth, including
using Client Redirect and OAuth with supported Snowflake Clients.
For more information, refer to Redirecting Client Connections.
Replication
Snowflake supports replication and failover/failback with both the Snowflake OAuth and External
OAuth security integrations from the source account to the target account.
For details, refer to Replication of security integrations & network policies across multiple accounts.
Authentication policies
PREVIEW FEATURE — OPEN
Available to all accounts.
Authentication policies provide you with control over how a client or user authenticates
by allowing you to specify:
The clients that users can use to connect to Snowflake, such
as Snowsight or Classic Console, drivers, or SnowSQL (CLI client). For more
information, see Limitations.
The allowed authentication methods, such as SAML, passwords, OAuth, or Key
pair authentication.
The SAML2 security integrations that are available to users during the login
experience. For example, if there are multiple security integrations, you can specify
which identity provider (IdP) can be selected and used to authenticate.
If you are using authentication policies to control which IdP a user can use to
authenticate, you can further refine that control using
the ALLOWED_USER_DOMAINS and ALLOWED_EMAIL_PATTERNS properties of the
SAML2 security integrations associated with the IdPs. For more details, see Using
multiple identity providers for federated authentication.
You can set authentication policies on the account or users in the account. If you set an
authentication policy on the account, then the authentication policy applies to all users in
the account. If you set an authentication policy on both an account and a user, then the
user-level authentication policy overrides the account-level authentication policy.
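For example, assuming an existing authentication policy named restrict_client_types_policy (an illustrative name), an administrator could set it at either level:
ALTER ACCOUNT SET AUTHENTICATION POLICY restrict_client_types_policy;
ALTER USER example_user SET AUTHENTICATION POLICY restrict_client_types_policy;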
Note
If you already have access to the identifier-first login flow, you need to migrate your
account from the unsupported SAML_IDENTITY_PROVIDER account parameter using
the SYSTEM$MIGRATE_SAML_IDP_REGISTRATION function.
Use cases
The following list describes non-exhaustive use cases for authentication policies:
You want to control the user login flows when there are multiple login options.
You want to control the authentication methods, specific client types, and security
integrations available for specific users or all users.
You have customers building services on top of Snowflake using Snowflake drivers, but the
customers do not want their users accessing Snowflake through Snowsight or the Classic
Console.
You want to offer multiple identity providers as authentication options for specific users.
Limitations
The CLIENT_TYPES property of an authentication policy is a best-effort method of blocking user logins based on specific clients. It should not be used as the sole control to establish a security boundary.
Considerations
Ensure authentication methods and security integrations listed in your authentication policies
do not conflict. For example, if you add a SAML2 security integration in the list of allowed
security integrations, and you only allow OAuth as an allowed authentication method, then
you cannot create an authentication policy.
Use an additional non-restrictive authentication policy for administrators in case users are
locked out. For an example, see Preventing a lockout.
Security policy precedence
When more than one type of security policy is activated, an order of precedence applies. For example, network policies take precedence over authentication policies: if the IP address of a request matches an IP address in the blocked list of the network policy, the authentication policy is not checked, and evaluation stops at the network policy.
The following list describes the order in which security policies are evaluated:
1. Network policies: Allow or deny IP addresses, VPC IDs, and VPCE IDs.
2. Authentication policies: Allow or deny clients, authentication methods, and security integrations.
3. Password policies (for local authentication only): Specify password requirements such as character length, characters, password age, retries, and lockout time.
4. Session policies: Require users to re-authenticate after a period of inactivity.
If a policy is assigned to both the account and the user authenticating, the user-level policy is
enforced.
Combining identifier-first login with authentication policies
By default, Snowsight and the Classic Console provide a generic login experience that presents several login options, regardless of whether those options are relevant to the user. This means that authentication is attempted even when the login option is not valid for the user.
You can alter this behavior by enabling an identifier-first login flow for Snowsight or the Classic Console. In this flow, Snowflake prompts the user for an email address or username before presenting authentication options. Snowflake uses the email address or username to identify the user, and then displays only the login options that are relevant to the user and allowed by the authentication policy set on the account or user.
For instructions for enabling the identifier-first login flow, see Identifier-first login.
The following examples show how identifier-first login and authentication policies can be combined to control the user’s login experience.
Configuration: The authentication policy’s AUTHENTICATION_METHODS parameter only contains PASSWORD.
Result: Snowflake prompts the user for an email address or username, and a password.
Configuration: The authentication policy’s AUTHENTICATION_METHODS parameter only contains SAML, and there is an active SAML2 security integration.
Result: Snowflake redirects the user to the identity provider’s login page if the email address or username matches only one SAML2 security integration.
Configuration: The authentication policy’s AUTHENTICATION_METHODS parameter contains both PASSWORD and SAML, and there is an active SAML2 security integration.
Result: Snowflake displays a SAML SSO button if the email address or username matches only one SAML2 security integration, along with the option to log in with an email address or username and password.
Configuration: The authentication policy’s AUTHENTICATION_METHODS parameter only contains SAML, and there are multiple active SAML2 security integrations.
Result: Snowflake displays multiple SAML SSO buttons if the email address or username matches multiple SAML2 security integrations.
Configuration: The authentication policy’s AUTHENTICATION_METHODS parameter contains both PASSWORD and SAML, and there are multiple active SAML2 security integrations.
Result: Snowflake displays multiple SAML SSO buttons if the email address or username matches multiple SAML2 security integrations, along with the option to log in with an email address or username and password.
Creating an authentication policy
An administrator can use the CREATE AUTHENTICATION POLICY command to create a new
authentication policy, specifying which clients can connect to Snowflake, which authentication
methods can be used, and which security integrations are available to users. By default, all client
types, authentication methods, and security integrations can be used to connect to Snowflake.
See client type limitations in authentication policies.
For example, you can use the following commands to create a custom policy_admin role and an authentication policy that only allows connections from Snowsight or the Classic Console and only allows an account or user to authenticate using OAuth or a password.
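The commands below are a minimal sketch, assuming the policy is created in my_db.my_schema (all object and user names are illustrative):
USE ROLE ACCOUNTADMIN;
CREATE ROLE policy_admin;
GRANT USAGE ON DATABASE my_db TO ROLE policy_admin;
GRANT USAGE ON SCHEMA my_db.my_schema TO ROLE policy_admin;
GRANT CREATE AUTHENTICATION POLICY ON SCHEMA my_db.my_schema TO ROLE policy_admin;
GRANT ROLE policy_admin TO USER example_admin;

USE ROLE policy_admin;
CREATE AUTHENTICATION POLICY my_db.my_schema.web_ui_auth_policy
  CLIENT_TYPES = ('SNOWFLAKE_UI')
  AUTHENTICATION_METHODS = ('OAUTH', 'PASSWORD');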
AWS PrivateLink and Snowflake
To authorize your Snowflake account to use AWS PrivateLink, call the SYSTEM$AUTHORIZE_PRIVATELINK function with your AWS account ID ('aws_id') and a federated token ('federated_token') as arguments (values are truncated in this example):
USE ROLE ACCOUNTADMIN;
SELECT SYSTEM$AUTHORIZE_PRIVATELINK (
'185...',
'{
"Credentials": {
"AccessKeyId": "ASI...",
"SecretAccessKey": "enw...",
"SessionToken": "Fwo...",
"Expiration": "2021-01-07T19:06:23+00:00"
},
"FederatedUser": {
"FederatedUserId": "185...:sam",
"Arn": "arn:aws:sts::185...:federated-user/sam"
},
"PackedPolicySize": 0
}'
);
To verify your authorized configuration, call the SYSTEM$GET_PRIVATELINK function in your
Snowflake account on AWS. This function uses the same argument values
for 'aws_id' and 'federated_token' that were used to authorize your Snowflake account.
Snowflake returns Account is authorized for PrivateLink. for a successful authorization.
If it is necessary to disable AWS PrivateLink in your Snowflake account, call the SYSTEM$REVOKE_PRIVATELINK function, using the same argument values for 'aws_id' and 'federated_token'.
Important
The federated_token expires after 12 hours.
If calling any of the system functions to authorize, verify, or disable your Snowflake account to use
AWS PrivateLink and the token is not valid, regenerate the token using the AWS CLI STS command
shown at the beginning of the procedure in this section.
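A typical federation token request uses the AWS CLI STS get-federation-token command; for example (the user name is illustrative, and your environment may require additional arguments):
aws sts get-federation-token --name sam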
Configuring your AWS VPC environment
Attention
This section only covers the Snowflake-specific details for configuring your VPC environment.
Also, note that Snowflake is not responsible for the actual configuration of the required AWS VPC
endpoints, security group rules, and DNS records. If you encounter issues with any of these
configuration tasks, please contact AWS Support directly.
Create and configure a VPC endpoint (VPCE)
Complete the following steps to create and configure a VPC endpoint in your AWS VPC environment:
1. As a Snowflake account administrator (i.e. a user with the ACCOUNTADMIN system role), call the SYSTEM$GET_PRIVATELINK_CONFIG function and record the privatelink-vpce-id value.
2. In your AWS environment, create a VPC endpoint using the privatelink-vpce-id value from the
previous step.
3. In your AWS environment, configure your security group to allow the services that connect to Snowflake to send outbound traffic to ports 443 and 80 of the VPCE CIDR (Classless Inter-Domain Routing) range.
For details, see the AWS documentation:
Working with VPCs and subnets
VPC endpoints
VPC endpoint services (AWS PrivateLink)
Security groups for your VPC
Configure your VPC network
To access Snowflake via an AWS PrivateLink endpoint, it is necessary to create CNAME records in
your DNS to resolve the endpoint values from
the SYSTEM$GET_PRIVATELINK_CONFIG function to the DNS name of your VPC Endpoint.
These endpoint values allow you to access Snowflake, Snowsight, and the Snowflake Marketplace
while also using OCSP to determine whether a certificate is revoked when Snowflake clients attempt
to connect to an endpoint through HTTPS and connection URLs.
The function values to obtain are:
privatelink-account-url
privatelink-connection-ocsp-urls
privatelink-connection-urls
privatelink-ocsp-url
regionless-privatelink-account-url
regionless-snowsight-privatelink-url
snowsight-privatelink-url
Note that the values for regionless-snowsight-privatelink-url and snowsight-privatelink-url allow access to Snowsight and the Snowflake Marketplace using private connectivity. However, additional configuration is required if you want to enable URL redirects.
For details, see Snowsight & Private Connectivity.
For additional help with DNS configuration, please contact your internal AWS administrator.
Important
The structure of the OCSP cache server hostname depends on the version of your installed clients, as
described in Step 1 of Configuring Your Snowflake Clients (in this topic):
If you are using the listed versions (or higher), use the form described above, which allows
for better DNS resolution when you have multiple Snowflake accounts (e.g. dev, test, and
production) in the same region. When updating client drivers and using OCSP with
PrivateLink, update the firewall rules to allow the OCSP hostname.
If you are using older client versions, then the OCSP cache server hostname takes the
form ocsp.<region_id>.privatelink.snowflakecomputing.com (i.e. no account identifier).
Note that your DNS record must resolve to private IP addresses within your VPC. If it
resolves to public IP addresses, the record is not configured correctly.
Create AWS VPC interface endpoints for Amazon S3
This step is required for Amazon S3 traffic from Snowflake clients to stay on the AWS backbone.
The Snowflake clients (e.g. SnowSQL, JDBC driver) require access to Amazon S3 to perform
various runtime operations.
If your AWS VPC network does not allow access to the public internet, you can configure private connectivity to internal stages or create gateway endpoints to the Amazon S3 hostnames required by the Snowflake clients.
Overall, there are three options to configure access to Amazon S3. The first two options avoid the
public Internet and the third option does not:
1. Configure an AWS VPC interface endpoint for internal stages. This option is recommended.
2. Configure an Amazon S3 gateway endpoint. For more information, see the note below.
3. Do not configure an interface endpoint or a gateway endpoint. This results in access using the
public Internet.
Attention
To prevent communications between an Amazon S3 bucket and an AWS VPC with Snowflake from
using the public Internet, you can set up an Amazon S3 gateway endpoint in the same AWS region
as the Amazon S3 bucket. The reason is that AWS PrivateLink only allows communications between VPCs, and the Amazon S3 bucket is not included in the VPC.
You can configure the Amazon S3 gateway endpoint to limit access to specific users, S3 resources,
routes, and subnets; however, Snowflake does not require this configuration. For more details,
see Endpoints for Amazon S3.
To configure the Amazon S3 gateway endpoint policies to specifically restrict them to use only the
Amazon S3 resources for Snowflake, choose one of the following options:
Use the specific Amazon S3 hostname addresses used by your Snowflake account. For the
complete list of hostnames used by your account, see SYSTEM$ALLOWLIST.
Use an Amazon S3 hostname pattern that matches the Snowflake S3 hostnames. In this scenario, there are two possible types of connections to Snowflake: VPC-to-VPC or On-Premises-to-VPC.
Based on your connection type, note the following:
VPC-to-VPC
Ensure the Amazon S3 gateway endpoint exists. Optionally modify the S3 gateway endpoint
policy to match the specific hostname patterns shown in the Amazon S3 Hostnames table.
On-Premises-to-VPC
If Amazon S3 traffic is not permitted on the public gateway, you must include the S3 hostname patterns in your firewall or proxy configuration.
The following lists the Amazon S3 hostname patterns for which you may create gateway endpoints if you do not require them to be specific to your account’s Snowflake-managed S3 buckets:
All regions:
sfc-*-stage.s3.amazonaws.com:443
All regions other than US East:
sfc-*-stage.s3-<region_id>.amazonaws.com:443 (note that the pattern uses a hyphen (-) before the region ID)
sfc-*-stage.s3.<region_id>.amazonaws.com:443 (note that the pattern uses a period (.) before the region ID)
For details about creating gateway endpoints, see Gateway VPC endpoints.
Connect to Snowflake
Prior to connecting to Snowflake, you can optionally leverage SnowCD (Snowflake Connectivity
Diagnostic tool) to evaluate the network connection with Snowflake and AWS PrivateLink.
For more information, see SnowCD and SYSTEM$ALLOWLIST_PRIVATELINK.
Otherwise, connect to Snowflake with your private connectivity account URL.
Note that if you want to connect to Snowsight via AWS PrivateLink, follow the instructions in
the Snowsight documentation.
Blocking public access — Optional
After testing private connectivity to Snowflake using AWS PrivateLink, you can optionally block
public access to Snowflake. This means that users can access Snowflake only if their connection
request originates from an IP address within a particular CIDR block range specified in a Snowflake
network policy.
To block public access using a network policy:
1. Create a new network policy or edit an existing network policy. Add the CIDR block range
for your organization.
2. Activate the network policy for your account.
For details, see Controlling network traffic with network policies.
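A minimal sketch, assuming an illustrative policy name and CIDR range:
CREATE NETWORK POLICY block_public_access
  ALLOWED_IP_LIST = ('192.168.1.0/24');

ALTER ACCOUNT SET NETWORK_POLICY = 'block_public_access';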
Configuring your Snowflake clients
Ensure Snowflake clients support OCSP cache server
The Snowflake OCSP cache server mitigates connectivity issues between Snowflake clients and the
server. To enable your installed Snowflake clients to take advantage of the OCSP server cache,
ensure you are using the following client versions:
SnowSQL 1.1.57 (or higher)
Python Connector 1.8.2 (or higher)
JDBC Driver 3.8.3 (or higher)
ODBC Driver 2.19.3 (or higher)
Note
The Snowflake OCSP cache server listens on port 80, which is why you were instructed in Create
and configure a VPC endpoint (VPCE) to configure your AWS PrivateLink VPCE security group to
accept this port, along with port 443 (required for all other Snowflake traffic).
Specify hostname for Snowflake clients
Each Snowflake client requires a hostname to connect to your Snowflake account.
The hostname is the same as the hostname you specified in the CNAME record(s) in Configure your
VPC network.
This step is not applicable to access the Snowflake Marketplace.
For example, for an account named xy12345:
If the account is in US West, the hostname is xy12345.us-west-2.privatelink.snowflakecomputing.com.
If the account is in EU (Frankfurt), the hostname is xy12345.eu-central-1.privatelink.snowflakecomputing.com.
Important
The method for specifying the hostname differs depending on the client:
For the Spark connector and the ODBC and JDBC drivers, specify the entire hostname.
For all the other clients, do not specify the entire hostname.
Instead, specify the account identifier with the privatelink segment (i.e. <account_identifier>.privatelink), which Snowflake concatenates with snowflakecomputing.com to dynamically construct the hostname.
For more details about specifying the account name or hostname for a Snowflake client, see the
documentation for each client.
Using SSO with AWS PrivateLink
Snowflake supports using SSO with AWS PrivateLink. For more information, see:
SSO with private connectivity
Partner applications
Using Client Redirect with AWS PrivateLink
Snowflake supports using Client Redirect with AWS PrivateLink.
For more information, see Redirecting Client Connections.
Using replication and Tri-Secret Secure with private connectivity
Snowflake supports replicating your data from the source account to the target account, regardless of whether you enable Tri-Secret Secure or private connectivity in the target account.
For details, refer to Database replication and encryption.
Troubleshooting
Note the following Snowflake Community articles:
How to retrieve a Federation Token from AWS for PrivateLink Self-Service
FAQ: PrivateLink Self-Service with AWS
Troubleshooting: Snowflake self-service functions for AWS PrivateLink
Azure Private Link and Snowflake
10. Enable your Snowflake account on Azure to use Azure Private Link by completing the following steps:
In your command line environment, record the private endpoint resource ID value
using the following Azure CLI network command:
az network private-endpoint show
The private endpoint was created in the previous steps using the template files. The
resource ID value takes the following form, which has a truncated value:
/subscriptions/26d.../resourcegroups/sf-1/providers/microsoft.network/privateendpoints/test-self-service
In your command line environment, execute the following Azure CLI
account command and save the output. The output will be used as the value for
the federated_token argument in the next step.
az account get-access-token --subscription <SubscriptionID>
Extract the access token value from the command output. This value will be used as
the federated_token value in the next step. In this example, the values are truncated
and the access token value is eyJ...:
{
"accessToken": "eyJ...",
"expiresOn": "2021-05-21 21:38:31.401332",
"subscription": "0cc...",
"tenant": "d47...",
"tokenType": "Bearer"
}
Important
The user generating the Azure access Token must have Read permissions on the
Subscription. The least privilege permission
is Microsoft.Subscription/subscriptions/acceptOwnershipStatus/read. Alternatively,
the default role Reader grants more coarse-grained permissions.
The accessToken value is sensitive information and should be treated like a password
value — do not share this value.
If it is necessary to contact Snowflake Support, redact the access token from any
commands and URLs before creating a support ticket.
In Snowflake, call the SYSTEM$AUTHORIZE_PRIVATELINK function, using
the private-endpoint-resource-id value and the federated_token value as arguments,
which are truncated in this example:
USE ROLE ACCOUNTADMIN;
SELECT SYSTEM$AUTHORIZE_PRIVATELINK (
'/subscriptions/26d.../resourcegroups/sf-1/providers/microsoft.network/privateendpoints/test-self-service',
'eyJ...'
);
To verify your authorized configuration, call the SYSTEM$GET_PRIVATELINK function
in your Snowflake account on Azure. Snowflake
returns Account is authorized for PrivateLink. for a successful authorization.
If it is necessary to disable Azure Private Link in your Snowflake account, call the SYSTEM$REVOKE_PRIVATELINK function, using the same argument values for private-endpoint-resource-id and federated_token.
11. DNS Setup. All requests to Snowflake need to be routed via the Private Endpoint. Update
your DNS to resolve the Snowflake account and OCSP URLs to the private IP address of
your Private Endpoint.
To get the endpoint IP address, navigate to the Azure portal search bar and enter the name of the endpoint (i.e. the NAME value from Step 5). Locate the Network Interface result and click it.
Google Cloud Private Service Connect and Snowflake
The Google Compute Engine instance (i.e. a virtual machine) connects to a private virtual IP address, which routes to a forwarding rule (1). The forwarding rule connects to the service attachment through a private connection (2). The connection is routed through a load balancer (3) that redirects to Snowflake (4).
Limitations
The Snowflake system functions for self-service management are not supported. For details,
see Current Limitations for Accounts on GCP.
For details, see:
Account Identifiers
Connecting to Your Accounts
Configuration procedure
This section describes how to configure Google Cloud Private Service Connect to connect to
Snowflake.
Attention
This section only covers the Snowflake-specific details for configuring your Google Cloud VPC
environment. Also, note that Snowflake is not responsible for the actual configuration of the required
firewall updates and DNS records.
If you encounter issues with any of these configuration tasks, please contact Google Support directly.
For installation help, see the Google documentation on the Cloud SDK: Command Line Interface.
For additional help, contact your internal Google Cloud administrator.
1. Contact Snowflake Support and provide a list of your Google Cloud <project_id> values and
the corresponding URLs that you use to access Snowflake with a note to enable Google
Cloud Private Service Connect. After receiving a response from Snowflake Support, continue
to the next step.
Important
If you are using VPC Service Controls in your VPC, ensure that the policy allows access to
the Snowflake service before contacting Snowflake Support.
If this action is not taken, Snowflake will not be able to add your project ID to the Snowflake
service attachment allow list. The result is that you will be blocked from being able to
connect to Snowflake using this feature.
2. In a Snowflake worksheet, run the SYSTEM$GET_PRIVATELINK_CONFIG function with the ACCOUNTADMIN system role, and save the command output for use in the following steps:
use role accountadmin;
select key, value from table(flatten(input=>parse_json(system$get_privatelink_config())));
3. In a command line interface (e.g. the Terminal application), update the gcloud library to the latest version:
gcloud components update
4. Authenticate to Google Cloud Platform using the following command:
gcloud auth login
5. In your Google Cloud VPC, set the project ID in which the forwarding rule should reside:
gcloud config set project <project_id>
To obtain a list of project IDs, execute the following command:
gcloud projects list --sort-by=projectId
6. In your Google Cloud VPC, create a virtual IP address:
gcloud compute addresses create <customer_vip_name> \
--subnet=<subnet_name> \
--addresses=<customer_vip_address> \
--region=<region>
For example:
gcloud compute addresses create psc-vip-1 \
--subnet=psc-subnet \
--addresses=192.168.3.3 \
--region=us-central1
# returns
Created [https://www.googleapis.com/compute/v1/projects/docstest-123456/regions/us-central1/addresses/psc-vip-1].
Where:
<customer_vip_name> specifies the name of the virtual IP rule (i.e. psc-vip-1 ).
<subnet_name> specifies the name of the subnet.
<customer_vip_address> : all private connectivity URLs resolve to this address. Specify
an IP address from your network or use CIDR notation to specify a range of IP
addresses.
<region> specifies the cloud region where your Snowflake account is located.
7. Create a forwarding rule to have your subnet route to the Private Service Connect endpoint and then to the Snowflake service endpoint:
gcloud compute forwarding-rules create <name> \
--region=<region> \
--network=<network_name> \
--address=<customer_vip_name> \
--target-service-attachment=<privatelink-gcp-service-attachment>
For example:
gcloud compute forwarding-rules create test-psc-rule \
--region=us-central1 \
--network=psc-vpc \
--address=psc-vip-1 \
--target-service-attachment=projects/us-central1-deployment1-c8cc/regions/us-central1/serviceAttachments/snowflake-us-central1-psc
# returns
Created [https://www.googleapis.com/compute/projects/mdlearning-293607/regions/us-central1/forwardingRules/test-psc-rule].
Where:
<name> specifies the name of the forwarding rule.
<region> specifies the cloud region where your Snowflake account is located.
<network_name> specifies the name of the network for this forwarding rule.
<customer_vip_name> specifies the <name> value (i.e. psc-vip-1 ) of the virtual IP address
created in the previous step.
<privatelink-gcp-service-attachment> specifies the endpoint for the Snowflake service (see
step 2).
8. Use the following command to verify that the forwarding rule was created successfully:
gcloud compute forwarding-rules list --regions=<region>
The cloud region in this command must match the cloud region where your Snowflake
account is located.
For example, if your Snowflake account is located in the europe-west-2 region,
replace <region> with europe-west2.
For a complete list of Google Cloud regions and their formatting, see Viewing a list of
available regions.
9. Update your DNS settings.
All requests to Snowflake need to be routed through the Private Service Connect endpoint so
that the URLs in step 2 (from the SYSTEM$GET_PRIVATELINK_CONFIG function)
resolve to the VIP address that you created ( <customer_vip_address>).
These endpoint values allow you to access Snowflake, Snowsight, and the Snowflake
Marketplace while also using OCSP to determine whether a certificate is revoked when
Snowflake clients attempt to connect to an endpoint through HTTPS and connection URLs.
The function values to obtain are:
privatelink-account-url
privatelink-connection-ocsp-urls
privatelink-connection-urls
privatelink-ocsp-url
regionless-privatelink-account-url
regionless-snowsight-privatelink-url
snowsight-privatelink-url
Note that the values for regionless-snowsight-privatelink-url and snowsight-privatelink-url allow access to Snowsight and the Snowflake Marketplace using private connectivity. However, additional configuration is required if you want to enable URL redirects.
For details, see Snowsight & Private Connectivity.
Note
A full explanation of DNS configuration is beyond the scope of this procedure. For example,
you can choose to integrate a private DNS zone into your environment using Cloud DNS.
Please consult your internal Google Cloud and cloud infrastructure administrators to
configure and resolve the URLs in DNS properly.
10. Test your connection to Snowflake using SnowCD (Connectivity Diagnostic Tool).
11. Connect to Snowflake with your private connectivity account URL.
Note that if you want to connect to Snowsight via Google Cloud Private Service Connect,
follow the instructions in the Snowsight documentation.
Using SSO with Google Cloud Private Service Connect
Snowflake supports using SSO with Google Cloud Private Service Connect. For more information,
see:
SSO with private connectivity
Partner applications
Using Client Redirect with Google Cloud Private Service Connect
Snowflake supports using Client Redirect with Google Cloud Private Service Connect.
For more information, see Redirecting Client Connections.
Using Replication & Tri-Secret Secure with Private Connectivity
Snowflake supports replicating your data from the source account to the target account, regardless of whether you enable Tri-Secret Secure or private connectivity in the target account.
For details, refer to Database replication and encryption.
Blocking public access — Optional
After testing the Google Cloud Private Service Connect connectivity with Snowflake, you
can optionally block public access to Snowflake using Controlling network traffic with network
policies.
Configure the CIDR block range to block public access to Snowflake using your organization’s IP
address range. This range can be from within your virtual network.
Once the CIDR block ranges are set, only IP addresses within the CIDR block range can access Snowflake.
To block public access using a network policy:
1. Create a new network policy or edit an existing network policy. Add the CIDR block range
for your organization.
2. Activate the network policy for your account.
Snowflake Sessions & Session Policies
In the Worksheets tab, Snowflake creates a new session every time a new worksheet is created. Each worksheet is limited to a maximum of 4 hours of idle behavior, and the idle timeout for each worksheet is tracked separately.
When a worksheet is closed, the user session for the worksheet ends.
After the 4-hour time limit expires for any open worksheet, Snowflake logs the user out of the web
interface.
Note
Passive behaviors, such as scrolling through the query result set or sorting a data set, do not reset the idle session timeout tracker.
To avoid having a session close too early and being logged out of the Classic Console, save any necessary SQL statements to a local file and close any open worksheets that are not in use.
Session Policies
A session policy defines the idle session timeout period in minutes and provides the option to
override the default idle timeout value of 4 hours.
The session policy can be set for an account or user with configurable idle timeout periods to address
compliance requirements. If a user is associated with both an account and user-level session policy,
the user-level session policy takes precedence. After the session policy is set on the account or user,
Snowflake enforces the session policy.
There are two properties that govern the session policy behavior:
SESSION_IDLE_TIMEOUT_MINS for programmatic and Snowflake Clients.
SESSION_UI_IDLE_TIMEOUT_MINS for the Classic Console and Snowsight.
The timeout period begins upon a successful authentication to Snowflake. If a session policy is not
set, Snowflake uses a default value of 240 minutes (i.e. 4 hours). The minimum configurable idle
timeout value for a session policy is 5 minutes. When the session expires, the user must authenticate
to Snowflake again. However, Snowflake does not enforce any setting defined by the Custom logout
endpoint.
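For example, a minimal sketch that sets a 30-minute idle timeout at the account level (the policy name, location, and timeout values are illustrative):
CREATE SESSION POLICY my_db.my_schema.session_policy_prod_1
  SESSION_IDLE_TIMEOUT_MINS = 30
  SESSION_UI_IDLE_TIMEOUT_MINS = 30;

ALTER ACCOUNT SET SESSION POLICY my_db.my_schema.session_policy_prod_1;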
For more information, see:
Managing session policies.
The SESSIONS view to monitor session usage.
Considerations
If a client supports the CLIENT_SESSION_KEEP_ALIVE option and the option is set
to TRUE, the client preserves the Snowflake session indefinitely as long as the connection to
Snowflake is active. Otherwise, if the option is set to FALSE, the session ends after 4 hours.
When possible, avoid using this option since it can result in many open sessions and place a
greater demand on resources which can lead to a performance degradation.
You can use
the CLIENT_SESSION_KEEP_ALIVE_HEARTBEAT_FREQUENCY parameter to specify
the number of seconds between client attempts to update the token for the session. The web
interface session can be refreshed as Snowflake objects continue to be used, such as
executing DDL and DML statements. Snowflake checks for this behavior every 30 seconds.
Creating a new worksheet or opening an existing worksheet continues to use the established
user session but with its idle session timeout reset to 0.
Tracking session policy usage:
Query the Account Usage SESSION_POLICIES view to return a row for each session
policy in your Snowflake account.
Use the Information Schema table function POLICY_REFERENCES to return a row
for each user that is assigned to the specified session policy and a row for the session
policy assigned to the Snowflake account.
Currently, only the following syntax is supported for session policies:
POLICY_REFERENCES( POLICY_NAME => '<session_policy_name>' )
Where session_policy_name is the fully qualified name of the session policy.
For example, execute the following query to return a row for each user that is
assigned the session policy named session_policy_prod_1, which is stored in the database
named my_db and the schema named my_schema:
SELECT *
FROM TABLE(
MY_DB.INFORMATION_SCHEMA.POLICY_REFERENCES(
POLICY_NAME => 'my_db.my_schema.session_policy_prod_1'
)
);
Limitations
Future grants
Future grants of privileges on session policies are not supported.
As a workaround, grant the APPLY SESSION POLICY privilege to a custom role to allow
that role to apply session policies on a user or the Snowflake account.
Implementing a Session Policy
The following steps are a representative guide to implementing a session policy.
These steps assume a centralized management approach in which a custom role
named policy_admin owns the session policy (i.e. has the OWNERSHIP privilege on the session
policy) and is responsible for setting the session policy on an account or user (i.e. has the APPLY
SESSION POLICY on ACCOUNT privilege or the APPLY SESSION POLICY ON USER
privilege).
Note
To set a policy on an account, the policy_admin custom role must have the following permissions:
USAGE on the database and schema that contain the session policy.
CREATE SESSION POLICY on the schema that contains the session policy.
Follow these steps to implement a session policy.
Step 1: Create the POLICY_ADMIN Custom Role
Create a custom role that allows users to create and manage session policies. Throughout this topic,
the example custom role is named policy_admin, although the role could have any appropriate name.
If the custom role already exists, continue to the next step.
Otherwise, create the POLICY_ADMIN custom role.
USE ROLE USERADMIN;
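-- A minimal continuation sketch; the schema that will contain the session
-- policy (my_db.my_schema) is an illustrative assumption.
CREATE ROLE policy_admin;

-- Grant the privileges described in the note above (these grants require a
-- role authorized to make them, such as SECURITYADMIN):
USE ROLE SECURITYADMIN;
GRANT USAGE ON DATABASE my_db TO ROLE policy_admin;
GRANT USAGE ON SCHEMA my_db.my_schema TO ROLE policy_admin;
GRANT CREATE SESSION POLICY ON SCHEMA my_db.my_schema TO ROLE policy_admin;
GRANT APPLY SESSION POLICY ON ACCOUNT TO ROLE policy_admin;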
Access Control
To own an object means that a role has the OWNERSHIP privilege on the object. Each securable
object is owned by a single role, which by default is the role used to create the object. When this role
is assigned to users, they effectively have shared control over the object. The GRANT
OWNERSHIP command lets you transfer the ownership of an object from one role to another role,
including to database roles. This command also specifies the securable objects in each container.
In a regular schema, the owner role has all privileges on the object by default, including the ability to
grant or revoke privileges on the object to other roles. In addition, ownership can be transferred from
one role to another. However, in a managed access schema, object owners lose the ability to make
grant decisions. Only the schema owner (i.e. the role with the OWNERSHIP privilege on the
schema) or a role with the MANAGE GRANTS privilege can grant privileges on objects in the
schema.
The ability to perform SQL actions on objects is defined by the privileges granted to the active role
in a user session. The following are examples of SQL actions available on various objects in
Snowflake:
Ability to create a warehouse.
Ability to list tables contained in a schema.
Ability to add data to a table.
Roles
Roles are the entities to which privileges on securable objects can be granted and revoked. Roles are
assigned to users to allow them to perform actions required for business functions in their
organization. A user can be assigned multiple roles. This allows users to switch roles (i.e. choose
which role is active in the current Snowflake session) to perform different actions using separate sets
of privileges.
There are a small number of system-defined roles in a Snowflake account. System-defined roles
cannot be dropped. In addition, the privileges granted to these roles by Snowflake cannot be revoked.
Users who have been granted a role with the necessary privileges can create custom roles to meet
specific business and security needs.
Roles can be also granted to other roles, creating a hierarchy of roles. The privileges associated with
a role are inherited by any roles above that role in the hierarchy. For more information about role
hierarchies and privilege inheritance, see Role Hierarchy and Privilege Inheritance (in this topic).
Note
A role owner (i.e. the role that has the OWNERSHIP privilege on the role) does not inherit the
privileges of the owned role. Privilege inheritance is only possible within a role hierarchy.
Although additional privileges can be granted to the system-defined roles, it is not recommended.
System-defined roles are created with privileges related to account-management. As a best practice,
it is not recommended to mix account-management privileges and entity-specific privileges in the
same role. If additional privileges are needed, Snowflake recommends granting the additional
privileges to a custom role and assigning the custom role to the system-defined role.
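For example, a minimal sketch of that recommendation (the custom role name is illustrative):
USE ROLE USERADMIN;
CREATE ROLE data_engineer;
GRANT ROLE data_engineer TO ROLE SYSADMIN;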
Types of roles
The following role types are essentially the same, except for their scope. Both types enable
administrators to authorize and restrict access to objects in your account.
Note
Except where noted in the product documentation, the term role refers to either type.
Account roles
To permit SQL actions on any object in your account, grant privileges on the object to an
account role.
Database roles
To limit SQL actions to a single database, as well as any object in the database, grant
privileges on the object to a database role in the same database.
Note that database roles cannot be activated directly in a session. Grant database roles to account roles, which can be activated in a session (see the sketch after the following list).
For more information about database roles, see:
Role hierarchy and privilege inheritance (in this topic)
Database roles and role hierarchies (in this topic)
Managing database object access using database roles
Database roles in the shared SNOWFLAKE database.
CREATE <object> … CLONE
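For example, a minimal sketch of granting a database role to an account role so that it can be activated in a session (the role names are illustrative):
GRANT DATABASE ROLE my_db.read_only TO ROLE analyst;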
Instance roles
To permit access to an instance of a class, grant an instance role to an account role.
A class may have one or more class roles with different privileges granted to each role. When
an instance of a class is created, the instance role(s) can be granted to account roles to grant
access to instance methods.
Note that instance roles cannot be activated directly in a session. Grant instance roles to
account roles, which can be activated in a session.
For more information, see Instance Roles.
Active roles
Active roles serve as the source of authorization for any action taken by a user in a session. Both
the primary role and any secondary roles can be activated in a user session.
A role becomes an active role in either of the following ways:
When a session is first established, the user’s default role and default secondary roles are
activated as the session primary and secondary roles, respectively.
Note that client connection properties used to establish the session could explicitly override
the primary role or secondary roles to use.
Executing a USE ROLE or USE SECONDARY ROLES statement activates a different
primary role or secondary roles, respectively. These roles can change over the course of a
session if either command is executed again.
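For example, assuming the illustrative roles below have been granted to the user, the active roles can be changed during the session as follows:

-- Change the primary role for the session.
USE ROLE analyst;
-- Activate all roles granted to the user as secondary roles.
USE SECONDARY ROLES ALL;
-- Deactivate all secondary roles.
USE SECONDARY ROLES NONE;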
System-defined roles
ORGADMIN
(aka Organization Administrator)
Role that manages operations at the organization level. More specifically, this role:
Can create accounts in the organization.
Can view all accounts in the organization (using SHOW ORGANIZATION
ACCOUNTS) as well as all regions enabled for the organization (using SHOW
REGIONS).
Can view usage information across the organization.
ACCOUNTADMIN
(aka Account Administrator)
Role that encapsulates the SYSADMIN and SECURITYADMIN system-defined roles. It is
the top-level role in the system and should be granted only to a limited/controlled number of
users in your account.
SECURITYADMIN
(aka Security Administrator)
Role that can manage any object grant globally, as well as create, monitor, and manage users
and roles. More specifically, this role:
Is granted the MANAGE GRANTS security privilege to be able to modify any grant,
including revoking it.
Inherits the privileges of the USERADMIN role via the system role hierarchy (i.e.
USERADMIN role is granted to SECURITYADMIN).
USERADMIN
(aka User and Role Administrator)
Role that is dedicated to user and role management only. More specifically, this role:
Is granted the CREATE USER and CREATE ROLE security privileges.
Can create users and roles in the account.
This role can also manage users and roles that it owns. Only the role with the
OWNERSHIP privilege on an object (i.e. user or role), or a higher role, can modify
the object properties.
SYSADMIN
(aka System Administrator)
Role that has privileges to create warehouses and databases (and other objects) in an account.
If, as recommended, you create a role hierarchy that ultimately assigns all custom roles to the
SYSADMIN role, this role also has the ability to grant privileges on warehouses, databases,
and other objects to other roles.
PUBLIC
Pseudo-role that is automatically granted to every user and every role in your account. The
PUBLIC role can own securable objects, just like any other role; however, the objects owned
by the role are, by definition, available to every other user and role in your account.
This role is typically used in cases where explicit access control is not needed and all users
are viewed as equal with regard to their access rights.
Custom roles
Custom account roles can be created using the USERADMIN role (or a higher role) as well as by
any role to which the CREATE ROLE privilege has been granted.
Custom database roles can be created by the database owner (i.e. the role that has the OWNERSHIP
privilege on the database).
By default, a newly-created role is not assigned to any user, nor granted to any other role.
When creating roles that will serve as the owners of securable objects in the system, Snowflake
recommends creating a hierarchy of custom roles, with the top-most custom role assigned to the
system role SYSADMIN. This role structure allows system administrators to manage all objects in
the account, such as warehouses and database objects, while restricting management of users and
roles to the USERADMIN role.
Conversely, if a custom role is not assigned to SYSADMIN through a role hierarchy, the system
administrators cannot manage the objects owned by the role. Only those roles granted the MANAGE
GRANTS privilege (only the SECURITYADMIN role by default) can view the objects and modify
their access grants.
For instructions to create custom roles, see Creating custom roles.
Privileges
Access control privileges determine who can access and perform operations on specific objects in
Snowflake. For each securable object, there is a set of privileges that can be granted on it. For
existing objects, privileges must be granted on individual objects (e.g. the SELECT privilege on
the mytable table). To simplify grant management, future grants allow defining an initial set of
privileges on objects created in a schema (i.e. grant the SELECT privilege on all new tables created
in the myschema schema to a specified role).
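For example, the two approaches described above might look like the following sketch (the role, table, and schema names are illustrative):

-- Grant a privilege on an existing, individual object.
GRANT SELECT ON TABLE mydb.myschema.mytable TO ROLE analyst;
-- Future grant: apply SELECT to all tables subsequently created in the schema.
GRANT SELECT ON FUTURE TABLES IN SCHEMA mydb.myschema TO ROLE analyst;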
Privileges are managed using the GRANT <privileges> and REVOKE <privileges> commands.
In regular (i.e. non-managed) schemas, use of these commands is restricted to the role that
owns an object (i.e. has the OWNERSHIP privilege on the object) or any roles that have the
MANAGE GRANTS global privilege for the object (only the SECURITYADMIN role by
default).
In managed access schemas, object owners lose the ability to make grant decisions. Only the
schema owner or a role with the MANAGE GRANTS privilege can grant privileges on
objects in the schema, including future grants, centralizing privilege management.
Note that a role that holds the global MANAGE GRANTS privilege can grant additional privileges
to the current (grantor) role.
For more details, see Access control privileges.
Role hierarchy and privilege inheritance
The following diagram illustrates the hierarchy for the system-defined roles, as well as the
recommended structure for additional, user-defined account roles and database roles. The highest-
level database role in the example hierarchy is granted to a custom (i.e. user-defined) account role. In
turn, this role is granted to another custom role in a recommended structure that allows the system-
defined SYSADMIN role to inherit the privileges of custom account roles and database roles:
Note
ORGADMIN is a separate system role that manages operations at the organization level. This role is
not included in the hierarchy of system roles.
For a more specific example of role hierarchy and privilege inheritance, consider the following scenario, in which Role 1 is granted Privilege A, Role 2 is granted Privilege B, and Role 3 is granted Privilege C:
Role 3 has been granted to Role 2.
Role 2 has been granted to Role 1.
Role 1 has been granted to User 1.
In this scenario:
Role 2 inherits Privilege C.
Role 1 inherits Privileges B and C.
User 1 has all three privileges.
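A sketch of the GRANT statements that produce this hierarchy (the role and user names are illustrative):

-- Build the hierarchy: Role 3 -> Role 2 -> Role 1 -> User 1.
-- Each role also holds its own direct privilege (A, B, and C, respectively).
GRANT ROLE role3 TO ROLE role2;
GRANT ROLE role2 TO ROLE role1;
GRANT ROLE role1 TO USER user1;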
Database roles and role hierarchies
The following limitations currently apply to database roles:
If a database role is granted to a share, then no other database roles can be granted to that
database role. For example, if database role d1.r1 is granted to a share, then attempting to
grant database role d1.r2 to d1.r1 is blocked.
In addition, if a database role is granted to another database role, the grantee database role
cannot be granted to a share.
Database roles that are granted to a share can be granted to other database roles, as well as
account roles.
Account roles cannot be granted to database roles in a role hierarchy.
Enforcement model with primary role and secondary roles
Every active user session has a “current role,” also referred to as a primary role. When a session is
initiated (e.g. a user connects via JDBC/ODBC or logs in to the Snowflake web interface), the
current role is determined based on the following criteria:
1. If a role was specified as part of the connection and that role is a role that has already been
granted to the connecting user, the specified role becomes the current role.
2. If no role was specified and a default role has been set for the connecting user, that role
becomes the current role.
3. If no role was specified and a default role has not been set for the connecting user, the system
role PUBLIC is used.
In addition, a set of secondary roles can be activated in a user session. A user can perform SQL
actions on objects in a session using the aggregate privileges granted to the primary and secondary
roles. The roles must be granted to the user before they can be activated in a session. Note that while
a session must have exactly one active primary role at a time, one can activate any number of
secondary roles at the same time.
Note
A database role can neither be a primary nor a secondary role. To assume the privileges granted to a
database role, grant the database role to an account role. Only account roles can be activated in a
session.
Authorization to execute CREATE <object> statements comes from the primary role only. When an
object is created, its ownership is set to the currently active primary role. However, for any other
SQL action, any permission granted to any active primary or secondary role can be used to authorize
the action. For example, if any role in a secondary role hierarchy owns an object (i.e. has the
OWNERSHIP privilege on the object), the secondary roles would authorize performing any DDL
actions on the object. Both the primary role and all secondary roles inherit privileges from any roles lower in their role hierarchies.
For organizations whose security model includes a large number of roles, each with a fine granularity
of authorization via permissions, the use of secondary roles simplifies role management. All roles
that were granted to a user can be activated in a session. Secondary roles are particularly useful for
SQL operations such as cross-database joins that would otherwise require creating a parent role of
the roles that have permissions to access the objects in each database.
During the course of a session, the user can use the USE ROLE or USE SECONDARY
ROLES command to change the current primary or secondary roles, respectively. The user can use
the CURRENT_SECONDARY_ROLES function to show all active secondary roles for the current
session.
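For example, a cross-database join like the one described above might run as follows; the role and object names are illustrative:

-- Activate all roles granted to the user as secondary roles.
USE ROLE db1_reader;
USE SECONDARY ROLES ALL;
-- Privileges from db1_reader (primary) and db2_reader (secondary) together
-- authorize the join across databases.
SELECT o.id, s.status
FROM db1.s1.orders o
JOIN db2.s2.shipments s ON o.id = s.order_id;
-- Show all active secondary roles for the current session.
SELECT CURRENT_SECONDARY_ROLES();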
When you create an object that requires one or more privileges to use, only the primary role and
those roles that it directly or indirectly inherits are considered when searching for the grants of those
privileges.
For any other statement that requires one or more privileges (e.g. querying a table requires the SELECT privilege on the table, along with the USAGE privilege on the database and schema), the primary role, the secondary roles, and any other roles that are inherited are considered when searching for the grants of those privileges.
Note
There is no concept of a “super-user” or “super-role” in Snowflake that can bypass authorization
checks. All access requires appropriate access privileges.
Customer-managed keys
A customer-managed key is a master encryption key that the customer maintains in the key
management service for the cloud provider that hosts your Snowflake account. The key management
services for each platform are:
AWS: AWS Key Management Service (KMS)
Google Cloud: Cloud Key Management Service (Cloud KMS)
Microsoft Azure: Azure Key Vault
The customer-managed key can then be combined with a Snowflake-managed key to create a composite master key. Snowflake refers to this combination as Tri-Secret Secure (in this topic).
You can call these system functions in your Snowflake account to obtain information about your
keys:
AWS: SYSTEM$GET_CMK_KMS_KEY_POLICY
Microsoft Azure: SYSTEM$GET_CMK_AKV_CONSENT_URL
Google Cloud: SYSTEM$GET_GCP_KMS_CMK_GRANT_ACCESS_CMD
Important
Snowflake does not support key rotation for customer-managed keys and does not recommend implementing an automatic key rotation policy on the customer-managed key.
The reason for this recommendation is that key rotation can lead to data loss: if the rotated key is deleted, Snowflake can no longer decrypt the data. For more information, see Tri-Secret Secure (in this topic).
Benefits of customer-managed keys
Benefits of customer-managed keys include:
Control over data access
You have complete control over your master key in the key management service and,
therefore, your data in Snowflake. It is impossible to decrypt data stored in your Snowflake
account without you releasing this key.
Disable access due to a data breach
If you experience a security breach, you can disable access to your key and halt all data
operations running in your Snowflake account.
Ownership of the data lifecycle
Using customer-managed keys, you can align your data protection requirements with your
business processes. Explicit control over your key provides safeguards throughout the entire
data lifecycle, from creation to deletion.
Important requirements for customer-managed keys
Customer-managed keys provide significant security benefits, but they also have crucial,
fundamental requirements that you must continuously follow to safeguard your master key:
Confidentiality
You must keep your key secure and confidential at all times.
Integrity
You must ensure your key is protected against improper modification or deletion.
Availability
To execute queries and access your data, you must ensure your key is continuously available
to Snowflake.
By design, an invalid or unavailable key will result in a disruption to your Snowflake data operations
until a valid key is made available again to Snowflake.
However, Snowflake is designed to handle temporary availability issues (up to 10 minutes) caused by common problems, such as network communication failures. After 10 minutes, if the key remains unavailable, all data operations in your Snowflake account cease completely. Once access to the key is restored, data operations can be started again.
Failure to comply with these requirements can significantly jeopardize the integrity of your data, ranging from your data being temporarily inaccessible to it being permanently disabled. In addition, Snowflake cannot be responsible for third-party issues, or for administrative mishaps caused by your organization in the course of maintaining your key.
For example, if an issue with the key management service results in your key becoming unavailable,
your data operations will be impacted. These issues must be resolved between you and the Support
team for the key management service. Similarly, if your key is tampered with or destroyed, all
existing data in your Snowflake account will become unreadable until the key is restored.
Tri-Secret Secure
BUSINESS CRITICAL FEATURE
Requires Business Critical Edition (or higher). To inquire about upgrading, please contact Snowflake
Support.
Tri-Secret Secure is the combination of a Snowflake-maintained key and a customer-managed key in
the cloud provider platform that hosts your Snowflake account to create a composite master key to
protect your Snowflake data. The composite master key acts as an account master key and wraps all
of the keys in the hierarchy; however, the composite master key never encrypts raw data.
If the customer-managed key in the composite master key hierarchy is revoked, your data can no
longer be decrypted by Snowflake, providing a level of security and control above Snowflake’s
standard encryption. This dual-key encryption model, together with Snowflake’s built-in user
authentication, enables the three levels of data protection offered by Tri-Secret Secure.
Attention
Before engaging with Snowflake to enable Tri-Secret Secure for your account, you should carefully
consider your responsibility for safeguarding your key as mentioned in the Customer-Managed
Keys section (in this topic). If you have any questions or concerns, we are more than happy to
discuss them with you.
Note that Snowflake also bears the same responsibility for the keys that we maintain. As with all
security-related aspects of our service, we treat this responsibility with the utmost care and vigilance.
All of our keys are maintained under strict policies that have enabled us to earn the highest security
accreditations, including SOC 2 Type II, PCI-DSS, HIPAA and HITRUST CSF.
Note
Image registry currently does not support Tri-Secret Secure.
Enabling Tri-Secret Secure
To enable Snowflake Tri-Secret Secure for your Business Critical (or higher) account, please
contact Snowflake Support.
Projection policies
ENTERPRISE EDITION FEATURE
This feature requires Enterprise Edition (or higher). To inquire about upgrading, please
contact Snowflake Support.
PREVIEW FEATURE — OPEN
Available to all accounts that are Enterprise Edition (or higher).
To inquire about upgrading, please contact Snowflake Support.
This topic provides information about using projection policies to allow or prevent column
projection in the output of a SQL query result.
Overview
A projection policy is a first-class, schema-level object that defines whether a column can be
projected in the output of a SQL query result. A column with a projection policy assigned to it is said
to be projection constrained.
Projection policies can be used to constrain identifier columns (e.g. name, phone number) for objects when sharing data securely. For example, consider two companies that would like to work as solution partners and want to identify a set of common customers prior to developing an integrated solution. The provider can assign a projection policy to identifier columns in the share to prevent the consumer from viewing sensitive information. The consumer can use the shared data to perform a match based on previously agreed columns that are necessary to the solution, but cannot return values from those columns.
The benefits of using the projection policy in this example include:
The consumer can match records based on a particular value without being able to view that
value.
Sensitive provider information cannot be output in the SQL query result. For details, see
the Considerations section (in this topic).
After creating the projection policy, a policy administrator can assign the projection policy to a
column. A column can only have one projection policy assigned to it at any given time. A user can
project the column only if their active role matches a projection policy condition that allows the
column to be projected.
Note that a projection constrained column can also be protected by a masking policy and the table
containing the projection constrained column can be protected by a row access policy. For more
details, see Masking & row access policies (in this topic).
Column usage
Snowflake tracks column usage. Indirect references to a column, such as a view definition, UDF (in
this topic), and common table expression, impact column projection when a projection policy is set
on a column.
When a projection policy is set on the column and the column cannot be projected, the column:
Is not included in the output of a query result.
Cannot be inserted into another table.
Cannot be an argument for an external function or stored procedure.
Limitations
UDFs
For limitations regarding user-defined functions (UDFs), see User-defined functions
(UDFs) (in this topic).
Policy
A projection policy cannot be applied to:
A tag. A projection policy cannot be assigned to a tag that is then assigned to a table or column (i.e. “tag-based projection policies” are not supported).
A virtual column or the VALUE column in an external table.
As a workaround, create a view and assign a projection policy to each column that should not be projected.
The value_column in a PIVOT construct. For related details, see UNPIVOT (in this
topic).
A projection policy body cannot reference a column protected by a masking policy or a table
protected by a row access policy. For additional details, see Masking & row access
policies (in this topic).
Considerations
Use projection policies when the use case calls for querying a sensitive column without directly
exposing the column value to an analyst or similar role. The column value within a projection
constrained column can be analyzed with greater flexibility than a masked or tokenized value.
However, consider the following prior to setting a projection policy on a column:
A projection policy does not prevent the targeting of an individual.
For example, a user can filter rows where the name column corresponds to a particular
individual, even if the column is projection constrained. However, the user cannot run a
SELECT statement to view names of the individuals in the table.
When a projection constrained column is the join key for a query that combines data from the
protected table with data from an unprotected table, nothing prevents the user from projecting
values from the column in the unprotected table. As a result, if a value in the unprotected
table matches a value in the protected column, the user can obtain that value by projecting it
from the unprotected table.
For example, suppose a projection policy was assigned to the email column of
the t_protected table. A user can still ascertain values in the t_protected.email column by
executing:
SELECT t_unprotected.email
FROM t_unprotected JOIN t_protected ON t_unprotected.email = t_protected.email;
A projection constraint does not guarantee that a malicious actor could not use deliberate queries to obtain potentially sensitive data from a projection-constrained column. Projection policies are best suited for use with partners and customers with whom you have an existing level of trust. In addition, providers should be vigilant about potential misuse of their data (e.g. reviewing the access history for their listings).
Create a projection policy
A projection policy contains a body that calls the internal PROJECTION_CONSTRAINT function to
determine whether to project a column.
CREATE OR REPLACE PROJECTION POLICY <name>
AS () RETURNS PROJECTION_CONSTRAINT -> <body>
Where:
name specifies the name of the policy.
AS () RETURNS PROJECTION_CONSTRAINT is the signature and return type of the
policy. The signature does not accept any arguments and the return type is
PROJECTION_CONSTRAINT, which is an internal data type. All projection policies have
the same signature and return type.
body is the SQL expression that determines whether to project the column. The expression
calls the internal PROJECTION_CONSTRAINT function to allow or prevent the projection
of a column:
PROJECTION_CONSTRAINT(ALLOW => true) allows projecting a column.
PROJECTION_CONSTRAINT(ALLOW => false) does not allow projecting a
column.
Example policies
The simplest projection policies call the PROJECTION_CONSTRAINT function directly:
Allow column projection
CREATE OR REPLACE PROJECTION POLICY mypolicy
AS () RETURNS PROJECTION_CONSTRAINT ->
PROJECTION_CONSTRAINT(ALLOW => true)
Prevent column projection
CREATE OR REPLACE PROJECTION POLICY mypolicy
AS () RETURNS PROJECTION_CONSTRAINT ->
PROJECTION_CONSTRAINT(ALLOW => false)
More complicated SQL expressions can be written to call the PROJECTION_CONSTRAINT
function. The expression can use Conditional Expression Functions and Context Functions to
introduce logic to allow certain users with a particular role to project a column and prevent all other
users from projecting a column.
For example, use a CASE expression to only allow users with the analyst custom role to project a
column:
CREATE OR REPLACE PROJECTION POLICY mypolicy
AS () RETURNS PROJECTION_CONSTRAINT ->
CASE
WHEN CURRENT_ROLE() = 'ANALYST'
THEN PROJECTION_CONSTRAINT(ALLOW => true)
ELSE PROJECTION_CONSTRAINT(ALLOW => false)
END;
For data sharing use cases, the provider can write a projection policy to constrain column projection
for all consumer accounts using the CURRENT_ACCOUNT context function, or selectively restrict
column projection in specific shares using the INVOKER_SHARE context function. For example:
Restrict all consumer accounts
In this example, provider.account is the account identifier in the account name format:
CREATE OR REPLACE PROJECTION POLICY restrict_consumer_accounts
AS () RETURNS PROJECTION_CONSTRAINT ->
CASE
WHEN CURRENT_ACCOUNT() = 'provider.account'
THEN PROJECTION_CONSTRAINT(ALLOW => true)
ELSE PROJECTION_CONSTRAINT(ALLOW => false)
END;
Restrict to specific shares
Consider a data sharing provider account that has a projection policy set on a column of a
secure view. There are two different shares (SHARE1 and SHARE2) that can access the
secure view to support two different data sharing consumers.
If a user in the data sharing consumer account accesses the secure view through either share, they can project the column; otherwise, the column cannot be projected:
CREATE OR REPLACE PROJECTION POLICY projection_share
AS () RETURNS PROJECTION_CONSTRAINT ->
CASE
WHEN INVOKER_SHARE() IN ('SHARE1', 'SHARE2')
THEN PROJECTION_CONSTRAINT(ALLOW => true)
ELSE PROJECTION_CONSTRAINT(ALLOW => false)
END;
Assign a projection policy
A projection policy is applied to a table column using an ALTER TABLE … ALTER
COLUMN command and a view column using an ALTER VIEW command. Each column supports
only one projection policy.
ALTER { TABLE | VIEW } <name>
{ ALTER | MODIFY } COLUMN <col1_name>
SET PROJECTION POLICY <policy_name> [ FORCE ]
[ , <col2_name> SET PROJECTION POLICY <policy_name> [ FORCE ] ... ]
Where:
name specifies the name of the table or view.
col1_name specifies the name of the column in the table or view.
col2_name specifies the name of an additional column in the table or view.
policy_name specifies the name of the projection policy set on the column.
FORCE is an optional parameter that allows the command to assign the projection policy to a
column that already has a projection policy assigned to it. The new projection policy
atomically replaces the existing one.
For example, to set a projection policy proj_policy_acctnumber on the account_number column of a
table:
ALTER TABLE finance.accounting.customers
MODIFY COLUMN account_number
SET PROJECTION POLICY proj_policy_acctnumber;
You can also use the WITH clause of the CREATE TABLE and CREATE VIEW commands to
assign a projection policy to a column when the table or view is created. For example, to assign the
policy my_proj_policy to the account_number column of a new table, execute:
CREATE TABLE t1 (account_number NUMBER WITH PROJECTION POLICY my_proj_policy);
Replace a projection policy
The recommended method of replacing a projection policy is to use the FORCE parameter to detach
the existing projection policy and assign the new one in a single command. This allows you to
atomically replace the old policy, leaving no gap in protection.
For example, to assign a new projection policy to a column that is already projection-constrained:
ALTER TABLE finance.accounting.customers
MODIFY COLUMN account_number
SET PROJECTION POLICY proj_policy2 FORCE;
You can also detach the projection policy from a column in one statement (… UNSET
PROJECTION POLICY) and then set a new policy on the column in a different statement (… SET
PROJECTION POLICY <name>). If you choose this method, the column is not protected by a
projection policy in between detaching one policy and assigning another. A query could potentially
access sensitive data during this time.
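For completeness, the two-statement alternative looks like the following; the column is unprotected between the two statements:

ALTER TABLE finance.accounting.customers
MODIFY COLUMN account_number
UNSET PROJECTION POLICY;

ALTER TABLE finance.accounting.customers
MODIFY COLUMN account_number
SET PROJECTION POLICY proj_policy2;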
Detach a projection policy
Use the UNSET PROJECTION POLICY clause of an ALTER TABLE or ALTER VIEW command
to detach a projection policy from the column of a table or view. The name of the projection policy is
not required because a column cannot have more than one projection policy attached.
ALTER { TABLE | VIEW } <name>
{ ALTER | MODIFY } COLUMN <col1_name>
UNSET PROJECTION POLICY
[ , <col2_name> UNSET PROJECTION POLICY ... ]
Where:
name specifies the name of the table or view.
col1_name specifies the name of the column in the table or view.
col2_name specifies the name of an additional column in the table or view.
For example, to remove the projection policy from the account_number column:
ALTER TABLE finance.accounting.customers
MODIFY COLUMN account_number
UNSET PROJECTION POLICY;
Monitor projection policies with SQL
It can be helpful to think of two general approaches to determine how to monitor projection policy
usage.
Discover projection policies
Identify projection policy references
Discover projection policies
You can use the PROJECTION_POLICIES view in the Account Usage schema of the shared
SNOWFLAKE database. This view is a catalog for all projection policies in your Snowflake
account. For example:
SELECT * FROM SNOWFLAKE.ACCOUNT_USAGE.PROJECTION_POLICIES
ORDER BY POLICY_NAME;
Identify projection policy references
The POLICY_REFERENCES Information Schema table function can identify projection policy
references. There are two different syntax options:
1. Return a row for each object (i.e. table or view) that has the specified projection policy set on a column:

USE DATABASE my_db;

SELECT policy_name,
  policy_kind,
  ref_entity_name,
  ref_entity_domain,
  ref_column_name,
  ref_arg_column_names,
  policy_status
FROM TABLE(information_schema.policy_references(policy_name => 'my_db.my_schema.projpolicy'));

2. Return a row for each policy assigned to the table named my_table:

USE DATABASE my_db;
USE SCHEMA information_schema;

SELECT policy_name,
  policy_kind,
  ref_entity_name,
  ref_entity_domain,
  ref_column_name,
  ref_arg_column_names,
  policy_status
FROM TABLE(information_schema.policy_references(ref_entity_name => 'my_db.my_schema.my_table', ref_entity_domain => 'table'));
Extended example
Creating a projection policy and assigning the projection policy to a column follows the same
general procedure as creating and assigning other policies, such as masking and row access policies:
1. For a centralized management approach, create a custom role (e.g. proj_policy_admin) to
manage the policy.
2. Grant this role the privileges to create and assign a projection policy.
3. Create the projection policy.
4. Assign the projection policy to a column.
Based on this general procedure, complete the following steps to assign a projection policy to a
column:
1. Create a custom role to manage the projection policy:

USE ROLE useradmin;

CREATE ROLE proj_policy_admin;

2. Grant the proj_policy_admin custom role the privileges to create a projection policy in a schema and assign the projection policy to any table or view column in the Snowflake account.
This step assumes the projection policy will be stored in a database and schema
named privacy.projpolicies and this database and schema already exist:
GRANT USAGE ON DATABASE privacy TO ROLE proj_policy_admin;
GRANT USAGE ON SCHEMA privacy.projpolicies TO ROLE proj_policy_admin;