Snowflake SQL
The Snowflake Connector for Python provides an interface for developing Python
applications that can connect to Snowflake and perform all standard operations. It
provides a programming alternative to developing applications in Java or C/C++
using the Snowflake JDBC or ODBC drivers.
This section describes bulk data loading into Snowflake tables using the COPY INTO
<table> command. The information is similar regardless of whether you are loading
from data files on your local file system, in Amazon S3 buckets, or in Microsoft
Azure containers.
1. Upload (i.e. stage) one or more data files into either an internal stage (i.e.
within Snowflake) or an external location:
Internal:
Use the PUT command to stage the files.
External:
Currently, Amazon S3 and Microsoft Azure are the only services supported for
staging external data. Snowflake assumes the files have already been staged in one
of these locations. If they haven't been staged yet, use the upload
interfaces/utilities provided by the service that hosts the location.
2. Use the COPY INTO <table> command to load
the contents of the staged file(s) into a Snowflake database table.
This step requires a running virtual warehouse that is also the current warehouse
(i.e. in use) for the session. The warehouse provides the compute resources to
perform the actual insertion of rows into the table.
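The two steps above can be sketched as follows. This is a minimal illustration, not a complete recipe: the table name, file path, stage references, and file format options are all hypothetical, and the external-stage variant assumes credentials have been configured separately.

```sql
-- Step 1 (internal stage): upload a local CSV file into the table stage.
-- @%mytable refers to the stage that Snowflake maintains for the table "mytable".
PUT file:///tmp/mydata.csv @%mytable;

-- Step 2: load the staged file(s) into the table.
-- Requires a running virtual warehouse that is current for the session.
COPY INTO mytable
  FROM @%mytable
  FILE_FORMAT = (TYPE = 'CSV' FIELD_DELIMITER = ',' SKIP_HEADER = 1);
```

For files already staged externally (e.g. in an S3 bucket), the PUT step is skipped and COPY INTO references the external location instead, typically through a named external stage created beforehand (the stage name here is illustrative):

```sql
-- Assumes a named external stage "my_s3_stage" was created earlier,
-- pointing at the S3 bucket and carrying the necessary credentials.
COPY INTO mytable
  FROM @my_s3_stage
  FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1);
```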