Snowflake SQL

SnowSQL

SQL Workbench/J

SnowSQL is the next-generation command line client for connecting to Snowflake to execute SQL queries and perform all DDL and DML operations, including loading data into and unloading data out of database tables.
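As a sketch of the kind of DDL and DML SnowSQL can run, assuming an active session and database (the table and column names here are illustrative, not from the original):

```sql
-- Create a table, insert a row, and read it back.
CREATE TABLE employees (
  id    INTEGER,
  name  VARCHAR,
  hired DATE
);

INSERT INTO employees (id, name, hired)
VALUES (1, 'Ada', '2017-01-15');

SELECT name, hired FROM employees WHERE id = 1;
```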

Snowflake Connector for Python

Related Topics
• Client Versions & Support Policy

Related Info
• Python Connector Release Notes (GitHub)

The Snowflake Connector for Python provides an interface for developing Python
applications that can connect to Snowflake and perform all standard operations. It
provides a programming alternative to developing applications in Java or C/C++
using the Snowflake JDBC or ODBC drivers.

Loading Data into Snowflake:


Bulk Loading Using COPY

This section describes bulk data loading into Snowflake tables using the COPY INTO <table> command. The process is largely the same whether you are loading from data files on your local file system, in Amazon S3 buckets, or in Microsoft Azure containers.

Data Loading Process

Data loading is performed in two separate steps:

1. Upload (i.e. stage) one or more data files to either an internal stage (i.e. within Snowflake) or an external location:
Internal: Use the PUT command to stage the files.
External: Currently, Amazon S3 and Microsoft Azure are the only services supported for staging external data. Snowflake assumes the files have already been staged in one of these locations; if they haven't, use the upload interfaces/utilities provided by the service that hosts the location.
2. Use the COPY INTO <table> command to load the contents of the staged file(s) into a Snowflake database table. This step requires a running virtual warehouse that is also the current warehouse (i.e. in use) for the session. The warehouse provides the compute resources to perform the actual insertion of rows into the table.
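The two steps above might look like the following in a SnowSQL session. This is a sketch: the local file path, table name, and file format are hypothetical, and it assumes a warehouse is already in use for the session.

```sql
-- Step 1: stage a local CSV file into the table's internal (table) stage.
PUT file:///tmp/contacts.csv @%contacts;

-- Step 2: load the staged file into the table using the current warehouse.
COPY INTO contacts
  FROM @%contacts
  FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);
```

Here `@%contacts` refers to the table stage that Snowflake creates automatically for each table; a named internal stage (e.g. `@my_stage`) would work the same way.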
