Access Control Overview -2
Roles
A role is an entity to which privileges on securable objects
can be granted or revoked.
Here are some example commands granting privileges to
roles.
In the second command, we're granting SELECT on a table
object to the role TEST_ROLE.
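The commands themselves aren't reproduced in these notes; a representative pair might look like the following sketch, where the database, schema, and table names are illustrative:
GRANT USAGE ON DATABASE TEST_DB TO ROLE TEST_ROLE;
GRANT SELECT ON TABLE TEST_DB.PUBLIC.CUSTOMERS TO ROLE TEST_ROLE;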
Roles are then assigned to users, giving them authorization
to perform certain actions.
A user can have multiple roles assigned to them, and
switch between them within a Snowflake session.
A role itself is a securable object. This means roles can be
granted to other roles, creating a role hierarchy.
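As a minimal sketch, assigning a role to a user and granting one role to another might look like this (the user and role names are illustrative):
GRANT ROLE TEST_ROLE TO USER USER1;
-- nesting TEST_ROLE under SYSADMIN creates a hierarchy
GRANT ROLE TEST_ROLE TO ROLE SYSADMIN;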
ORGADMIN
• Manages operations at organization level.
• Can create accounts in an organization.
• Can view all accounts in an organization.
• Can view usage information across an organization.
ACCOUNTADMIN
• Top-level and most powerful role for an account.
• Encapsulates SYSADMIN & SECURITYADMIN.
• Responsible for configuring account-level
parameters.
• View and operate on all objects in an account.
• View and manage Snowflake billing and credit data.
• Stop any running SQL statements.
SYSADMIN
• Can create warehouses, databases, schemas and
other objects in an account.
SECURITYADMIN
• Manage grants globally via the MANAGE GRANTS
privilege.
• Create, monitor and manage users and roles.
USERADMIN
• User and Role management via CREATE USER and
CREATE ROLE security privileges.
• Can create users and roles in an account.
PUBLIC
• Automatically granted to every user and every role
in an account.
• Can own securable objects; however, objects owned by
the PUBLIC role are available to every other user and role
in an account.
Custom Roles
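Custom roles are created with the CREATE ROLE privilege; a common convention is to grant each custom role up the hierarchy to SYSADMIN so administrators retain visibility of the objects it owns. A minimal sketch, with an illustrative role name:
CREATE ROLE ANALYST;
GRANT ROLE ANALYST TO ROLE SYSADMIN;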
User Authentication
So far, we've discussed authorization: setting up a system
of privileges, roles, and users, which determines levels of
access to Snowflake objects.
We now turn to authentication.
User authentication is the process of authenticating with
Snowflake via user-provided username and password
credentials.
It is the default method of authentication.
Users with the USERADMIN role can create additional
Snowflake users; this makes use of the CREATE USER
privilege.
A password can be any case-sensitive string up to 256
characters. It:
• Must be at least 8 characters long.
• Must contain at least 1 digit.
• Must contain at least 1 uppercase letter and 1 lowercase letter.
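Putting those rules together, a minimal sketch of creating a user with a compliant password might look like this (the user name and password are illustrative):
CREATE USER USER1
PASSWORD = 'Xy12abcd'
MUST_CHANGE_PASSWORD = TRUE;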
Let's quickly run through the login flow for a user who's
enrolled in MFA.
The top box contains actions performed in the Snowflake
UI, and the bottom box contains actions performed on the
Duo Security application.
So as usual, you would enter your Snowflake credentials.
However, once you're enrolled in multi-factor
authentication and have the Duo Security app running,
there are three ways to provide your second factor.
The quickest is to approve a Duo Push notification that
pops up on your phone.
The second is to click 'Call Me'. You then follow the
instructions from a phone call, which enables you to
successfully log in.
And lastly, you can click 'Enter a Passcode'. You then enter
a passcode generated by the Duo app into Snowflake,
allowing you to log in.
A user with the ALTER USER privilege can configure the
following MFA-related properties:
MINS_TO_BYPASS_MFA (temporarily bypass MFA for the specified number of minutes)
ALTER USER USER1 SET MINS_TO_BYPASS_MFA=10;
DISABLE_MFA (disable MFA for a user, cancelling their enrolment)
ALTER USER USER1 SET DISABLE_MFA=TRUE;
ALLOW_CLIENT_MFA_CACHING (cache an MFA token so each connection in a session doesn't prompt again)
ALTER USER USER1 SET ALLOW_CLIENT_MFA_CACHING=TRUE;
SAML_IDENTITY_PROVIDER
SAML_IDENTITY_PROVIDER is the account-level property
that enables federated authentication.
The property accepts a JSON object with the following
fields.
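The field list isn't reproduced in these notes; based on the fields this property is documented to accept, a sketch might look like the following, where the certificate, URL, and label values are illustrative placeholders:
ALTER ACCOUNT SET SAML_IDENTITY_PROVIDER = '{
"certificate": "MIICr...",
"ssoUrl": "https://myidp.example.com/sso/saml",
"type": "Custom",
"label": "MyIdP"
}';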
Network Policies
Network Policies provide the user with the ability to
allow or deny access to their Snowflake account based
on a single IP address or list of addresses.
Network Policies currently support only IPv4
addresses.
Network policies use CIDR notation to express an IP
subnet range.
Network Policies can be applied on the account level
or to individual users.
If a user is associated with both an account-level and a
user-level network policy, the user-level policy takes
precedence.
Network Policies are composed of an allowed IP range
and optionally a blocked IP range. Blocked IP ranges
are applied first.
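As a sketch, a policy that allows a subnet while blocking one address inside it might be created and then applied at the account level like this (the policy name and IP ranges are illustrative):
CREATE NETWORK POLICY MYPOLICY
ALLOWED_IP_LIST = ('192.168.1.0/24')
BLOCKED_IP_LIST = ('192.168.1.99');
ALTER ACCOUNT SET NETWORK_POLICY = MYPOLICY;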
USER
Only one Network Policy can be associated with a user at
any one time.
ALTER USER USER1 SET NETWORK_POLICY = MYPOLICY;
Data Encryption
All data in the storage layer, that is, data loaded into
Snowflake tables, is automatically encrypted using strong
AES-256 encryption during the loading process.
All files stored in internal stages for data loading and
unloading are also automatically encrypted using AES-256
strong encryption. The virtual warehouse cache and query
result cache are encrypted as well.
The main takeaway here is that all data that Snowflake has
control over is automatically encrypted at rest.
Data encryption in Snowflake is entirely transparent and
requires no configuration or management by the user.
And because Snowflake is a remote service generally
connected to over the internet, it's also important to
discuss the data that is traversing the network.
Things like the text of queries when issuing commands on
the UI or the data of input files during data loading.
These also need to be encrypted.
Secure HTTPS is always used when connecting to a
Snowflake account URL, whether through a browser on the
UI or using a JDBC driver, for example.
Snowflake makes use of the TLS 1.2 protocol to encrypt all
network communications from your client machine to the
Snowflake endpoints.
Using both encryption at rest and encryption in transit at
every stage during the data loading and unloading process
gives us end-to-end encryption.
This reduces the attack surface of Snowflake and ensures
data is only exposed to authorized users.
Our aim is to get a raw file you have on a client machine
into a Snowflake table in a secure way.
There are two main flows when thinking about end-to-end
encryption in Snowflake, and they're determined by which
type of stage you're copying data from.
We've not yet reviewed stages, but for now, understand
that they are areas to temporarily store raw files used in
data loading and unloading.
There are two types: internal stages, which Snowflake
manages, and external stages, which you as a user can set
up yourself.
When uploading a file to an internal stage, for example
with a PUT command (more on that in the data loading
section), Snowflake transparently encrypts the uploaded files.
The encryption computation itself is performed on the
client machine when uploading to an internal stage.
Then, using a different key, the data is encrypted again when
loaded into long-term table storage.
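A minimal sketch of this internal-stage flow, with illustrative file, stage, and table names:
-- files are encrypted on the client machine during upload
PUT file:///tmp/data.csv @MY_INT_STAGE;
-- on load, data is re-encrypted under a different key in table storage
COPY INTO MY_TABLE FROM @MY_INT_STAGE;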
If you're loading from an external stage, say an S3 bucket,
which is not managed by Snowflake, you can choose to
leave the contents of the external stage unencrypted.
In that case, when data is loaded into Snowflake tables
from the external stage, it will be encrypted at that point.
However, if you choose to use client-side encryption in the
external stage prior to loading data into a table, decryption
information will have to be provided so Snowflake can
decrypt and then re-encrypt when loading into long-term
table storage.
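As a sketch, that decryption information can be supplied when defining the external stage; the bucket, credentials, and master key below are illustrative placeholders:
CREATE STAGE MY_EXT_STAGE
URL = 's3://mybucket/load/'
CREDENTIALS = (AWS_KEY_ID = '<key_id>' AWS_SECRET_KEY = '<secret_key>')
ENCRYPTION = (TYPE = 'AWS_CSE' MASTER_KEY = '<base64_encoded_key>');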
So how does Snowflake manage the keys used to encrypt
all this data?
Secure Views
Secure views are a type of view designed to limit access to
the underlying tables or internal structural details of the
view.
Both standard and materialized views can be
designated as secure.
A secure view is created by adding the keyword
SECURE in the view DDL.
The definition of a secure view is only available to the
object owner.
Secure views bypass certain query optimizations that
might inadvertently expose data in the underlying tables.
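A minimal sketch of adding the SECURE keyword to the view DDL, with illustrative view and table names:
CREATE SECURE VIEW MY_SECURE_VIEW AS
SELECT ID, REGION FROM MY_TABLE;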
Account Usage
Snowflake provides a shared read-only database
called SNOWFLAKE, imported using a Share object
called ACCOUNT_USAGE.
It is composed of 6 schemas, which contain many
views providing fine-grained usage metrics at the
account and object level.
By default, only users with the ACCOUNTADMIN role
can access the SNOWFLAKE database.
Account usage views record dropped objects, not just
those that are currently active.
There is latency between an event and when that
event is recorded in an account usage view.
Certain account usage views provide historical usage
metrics. The retention period for these views is 1 year.
The SNOWFLAKE database is imported using a secure data
share called ACCOUNT_USAGE.
Its purpose is to share views containing many different
types of fine-grained usage metrics in order to query and
report on account and object usage.
For example, you can use it to programmatically check how
many queries were executed in the last hour or do
something like check how many credits an individual
warehouse used in the last three months.
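As a sketch of those two examples, using the QUERY_HISTORY and WAREHOUSE_METERING_HISTORY views from the ACCOUNT_USAGE schema:
-- queries executed in the last hour
SELECT COUNT(*)
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE START_TIME > DATEADD(HOUR, -1, CURRENT_TIMESTAMP());
-- credits used per warehouse over the last three months
SELECT WAREHOUSE_NAME, SUM(CREDITS_USED)
FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY
WHERE START_TIME > DATEADD(MONTH, -3, CURRENT_TIMESTAMP())
GROUP BY WAREHOUSE_NAME;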
Essentially, it's a home for long-term historical metadata
about what's going on in your Snowflake account.
The SNOWFLAKE database is composed of six schemas.
The first is, perhaps confusingly, called ACCOUNT_USAGE,
the same name as the share object.
This is the main schema you'll be using. It contains views
that display object metadata and historical usage metrics
for your account.
For example, it has a view called TABLES, which contains
metadata for all the tables created in your account.
It's by interacting with these views that you can govern and
maintain your Snowflake account.