Gcloud Python
Release 0.27.1
1 Configuration
1.1 Overview
1.2 Authentication
2 Authentication
2.1 Overview
2.2 Client-Provided Authentication
2.3 Explicit Credentials
2.4 Troubleshooting
3 Long-Running Operations
5 BigQuery
5.1 Client
5.2 Datasets
5.3 Jobs
5.4 Query
5.5 Schemas
5.6 Tables
5.7 Authentication / Configuration
5.8 Projects
5.9 Datasets
5.10 Tables
5.11 Jobs
6 Bigtable
6.1 Base for Everything
6.2 Client
6.3 Cluster
6.4 Instance
6.5 Instance Admin API
6.6 Table
6.7 Table Admin API
6.8 Column Families
6.9 Bigtable Row
6.10 Row Data
6.11 Bigtable Row Filters
6.12 Data API
7 Datastore
7.1 Datastore Client
7.2 Entities
7.3 Keys
7.4 Queries
7.5 Transactions
7.6 Batches
7.7 Helpers
7.8 Modules
8 DNS
8.1 DNS Client
8.2 Managed Zones
8.3 Resource Record Sets
8.4 Change Sets
8.5 Client
8.6 Projects
8.7 Project Quotas
8.8 Managed Zones
8.9 Resource Record Sets
8.10 Change requests
10 Pub/Sub
10.1 Authentication and Configuration
10.2 Publishing
10.3 Subscribing
10.4 Learn More
12 Runtimeconfig
12.1 Runtime Configuration Client
12.2 Configuration
12.3 Variables
12.4 Modules
13 Spanner
13.1 Client
13.2 Instance Admin API
13.3 Database Admin API
13.4 Non-Admin Database Usage
13.5 Batching Modifications
13.6 Read-only Transactions via Snapshots
13.7 Read-write Transactions
13.8 Advanced Session Pool Topics
13.9 Spanner Client
13.10 Instance API
13.11 Database API
13.12 Session API
13.13 Session Pools API
13.14 Keyset API
13.15 Snapshot API
13.16 Batch API
13.17 Transaction API
13.18 StreamedResultSet API
14 Speech
14.1 Authentication and Configuration
14.2 Asynchronous Recognition
14.3 Synchronous Recognition
14.4 Streaming Recognition
14.5 API Reference
17.5 Sinks
17.6 Integration with Python logging module
17.7 Python Logging Module Handler
17.8 Google App Engine flexible Log Handler
17.9 Google Container Engine Log Handler
17.10 Python Logging Handler Sync Transport
17.11 Python Logging Handler Threaded Transport
17.12 Python Logging Handler Sync Transport
17.13 Authentication and Configuration
17.14 Writing log entries
17.15 Retrieving log entries
17.16 Delete all entries for a logger
17.17 Manage log metrics
17.18 Export log entries using sinks
17.19 Integration with Python logging module
18 Storage
18.1 Blobs / Objects
18.2 Buckets
18.3 ACL
18.4 Batches
19 Translation
19.1 Translation Client
19.2 Authentication / Configuration
19.3 Methods
20 Vision
20.1 Authentication and Configuration
20.2 Annotate an Image
20.3 Single-feature Shortcuts
20.4 No results found
20.5 API Reference
CHAPTER 1
Configuration
1.1 Overview
When creating a client without an explicit project ID, the project is determined by searching the following
locations, in order:
1. GOOGLE_CLOUD_PROJECT environment variable
2. GOOGLE_APPLICATION_CREDENTIALS JSON file
3. Default service configuration path from $ gcloud beta auth application-default login.
4. Google App Engine application ID
5. Google Compute Engine project ID (from the metadata server)
You can override the detection of your default project by setting the project parameter when creating client
objects.
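For example, a minimal sketch (BigQuery is chosen arbitrarily here; any google-cloud client that accepts a
project argument behaves the same way):

from google.cloud import bigquery

# An explicit project bypasses the environment-based detection above.
client = bigquery.Client(project='my-project')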
You can see what project ID a client is referencing by accessing the project property on the client object.
>>> client.project
u'my-project'
1.2 Authentication
Authentication credentials can be determined implicitly from the environment or provided explicitly. See Authentication.
Logging in via gcloud beta auth application-default login will automatically configure a JSON
key file with your default project ID and credentials.
Setting the GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT environment variables
will override the automatically configured credentials.
You can change your default project ID to my-new-default-project by using the gcloud CLI tool to change
the configuration:
$ gcloud config set project my-new-default-project
CHAPTER 2
Authentication
2.1 Overview
If you're running in Compute Engine or App Engine, authentication should just work.
If you're developing locally, the easiest way to authenticate is using the Google Cloud SDK:
$ gcloud auth application-default login
Note that this command generates credentials for client libraries. To authenticate the CLI itself, use:
$ gcloud auth login
Previously, gcloud auth login was used for both use cases. If your gcloud installation does not support
the new command, please update it:
$ gcloud components update
If you're running your application elsewhere, you should download a service account JSON keyfile and point
to it using an environment variable:
$ export GOOGLE_APPLICATION_CREDENTIALS="/path/to/keyfile.json"
Every package uses a Client as a base for interacting with an API. For example:
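A minimal sketch, using the Datastore package (any other google.cloud package follows the same pattern):

from google.cloud import datastore

# With no arguments, credentials and project are picked up from the environment.
client = datastore.Client()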
Passing no arguments at all will just work if you've followed the instructions in the Overview. The credentials are
inferred from your local environment by using Google Application Default Credentials.
When loading the Application Default Credentials, the library will check for credentials in your environment by
following the precedence outlined by google.auth.default().
The Application Default Credentials discussed above can be useful if your code needs to run in many different
environments or if you just don't want authentication to be a focus in your code.
However, you may want to use explicit credentials because:
- your code will only run in one place
- you may have code which needs to run as a specific service account every time (rather than with the locally inferred credentials)
- you may want to use two separate accounts to simultaneously access data from different projects
In these situations, you can create an explicit Credentials object suited to your environment. After creation, you
can pass it directly to a Client:
client = Client(credentials=credentials)
client = Client.from_service_account_json('/path/to/keyfile.json')
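One common way to construct such a Credentials object is from a service-account keyfile via the
google.oauth2 package (a sketch; the path is a placeholder):

from google.oauth2 import service_account

# Build explicit credentials from a downloaded JSON keyfile.
credentials = service_account.Credentials.from_service_account_file(
    '/path/to/keyfile.json')
client = Client(credentials=credentials)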
Tip: Previously the Google Cloud Console would issue a PKCS12/P12 key for your service account. This library
does not support that key format. You can generate a new JSON key for the same service account from the console.
The majority of cases are intended to authenticate machines or workers rather than actual user accounts. However, it's
also possible to call Google Cloud APIs with a user account via OAuth 2.0.
Tip: A production application should use a service account, but you may wish to use your own personal user account
when first getting started with the google-cloud-python library.
The simplest way to use credentials from a user account is via Application Default Credentials using gcloud
auth application-default login (as mentioned above) and google.auth.default():
import google.auth

credentials, project = google.auth.default()
This will still follow the precedence described above, so be sure none of the other possible environments conflict with
your user-provided credentials.
Advanced users of oauth2client can also use custom flows to create credentials, using client secrets or a webserver
flow. After creation, Credentials can be serialized with to_json(), stored in a file, and later deserialized
with from_json(). In order to use oauth2client's credentials with this library, you'll need to convert them.
2.4 Troubleshooting
If your application is not running on Google Compute Engine, you need a Google Developers Service Account.
1. Visit the Google Developers Console.
2. Create a new project or click on an existing project.
3. Navigate to APIs & auth > APIs and enable the APIs that your application requires.
Note: You may need to enable billing in order to use these services.
- BigQuery: BigQuery API
- Datastore: Google Cloud Datastore API
- Pub/Sub: Google Cloud Pub/Sub
- Storage: Google Cloud Storage
If your code is running on Google Compute Engine, using the inferred Google Application Default Credentials will
be sufficient for retrieving credentials.
However, by default your credentials may not grant you access to the services you intend to use. Be sure that, when
you set up the GCE instance, you add the correct scopes for the APIs you want to access:
All APIs:
- https://www.googleapis.com/auth/cloud-platform
- https://www.googleapis.com/auth/cloud-platform.read-only
BigQuery:
- https://www.googleapis.com/auth/bigquery
- https://www.googleapis.com/auth/bigquery.insertdata
Datastore:
- https://www.googleapis.com/auth/datastore
- https://www.googleapis.com/auth/userinfo.email
Pub/Sub:
- https://www.googleapis.com/auth/pubsub
Storage:
- https://www.googleapis.com/auth/devstorage.full_control
- https://www.googleapis.com/auth/devstorage.read_only
- https://www.googleapis.com/auth/devstorage.read_write
CHAPTER 3
Long-Running Operations
client (Client) The client used to poll for the status of the operation.
caller_metadata (dict) caller-assigned metadata about the operation
Return type Operation
Returns new instance, with attributes based on the protobuf.
classmethod from_pb(operation_pb, client, **caller_metadata)
Factory: construct an instance from a protobuf.
Parameters
operation_pb (Operation) Protobuf to be parsed.
client (object: must provide _operations_stub accessor.) The client used to
poll for the status of the operation.
caller_metadata (dict) caller-assigned metadata about the operation
Return type Operation
Returns new instance, with attributes based on the protobuf.
metadata = None
Metadata about the current operation (as a protobuf).
Code that uses operations must register the metadata types (via register_type()) to ensure that the
metadata fields can be converted into the correct types.
poll()
Check if the operation has finished.
Return type bool
Returns A boolean indicating if the current operation has completed.
Raises ValueError if the operation has already completed.
response = None
Response returned from completed operation.
Only one of this and error can be populated.
target = None
Instance associated with the operation; callers may set.
google.cloud.operation.register_type(klass, type_url=None)
Register a klass as the factory for a given type URL.
Parameters
klass (type) class to be used as a factory for the given type
type_url (str) (Optional) URL naming the type. If not provided, infers the URL from
the type descriptor.
Raises ValueError if a registration already exists for the URL.
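As a minimal polling sketch, assuming operation is an Operation instance returned by some API call (the
one-second sleep interval is an arbitrary choice):

import time

# poll() returns True once the server reports the operation finished
# (calling it again after completion raises ValueError).
while not operation.poll():
    time.sleep(1)

print(operation.response)  # populated only after successful completion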
Base classes for clients used to interact with Google Cloud APIs.
class google.cloud.client.Client(credentials=None, _http=None)
Bases: google.cloud.client._ClientFactoryMixin
Client to bundle configuration needed for API requests.
Stores credentials and an HTTP object so that subclasses can pass them along to a connection class.
If no value is passed in for _http, a requests.Session object will be created and authorized with the
credentials. If not, the credentials and _http need not be related.
Callers and subclasses may seek to use the private key from credentials to sign data.
Parameters
credentials (Credentials) (Optional) The OAuth2 Credentials to use for this
client. If not passed (and if no _http object is passed), falls back to the default inferred
from the environment.
_http (Session) (Optional) HTTP object to make requests. Can be any object that
defines request() with the same interface as requests.Session.request(). If
not passed, an _http object is created that is bound to the credentials for the current
object. This parameter should be considered private, and could change in the future.
SCOPE = None
The scopes required for authenticating with a service.
Needs to be set by subclasses.
from_service_account_json(json_credentials_path, *args, **kwargs)
Factory to retrieve JSON credentials while creating client.
Parameters
json_credentials_path (str) The path to a private key file (this file was given
to you when you created the service account). This file must contain a JSON object with a
private key and other credentials information (downloaded from the Google APIs console).
args (tuple) Remaining positional arguments to pass to constructor.
kwargs (dict) Remaining keyword arguments to pass to constructor.
Return type _ClientFactoryMixin
Returns The client created with the retrieved JSON credentials.
Raises TypeError if there is a conflict with the kwargs and the credentials created by the
factory.
class google.cloud.client.ClientWithProject(project=None, credentials=None, _http=None)
Bases: google.cloud.client.Client, google.cloud.client._ClientProjectMixin
Client that also stores a project.
Parameters
project (str) the project which the client acts on behalf of. If not passed falls back to
the default inferred from the environment.
credentials (Credentials) (Optional) The OAuth2 Credentials to use for this
client. If not passed (and if no _http object is passed), falls back to the default inferred
from the environment.
_http (Session) (Optional) HTTP object to make requests. Can be any object that
defines request() with the same interface as requests.Session.request(). If not
passed, an _http object is created that is bound to the credentials for the current object.
This parameter should be considered private, and could change in the future.
Raises ValueError if the project is neither passed in nor set in the environment.
from_service_account_json(json_credentials_path, *args, **kwargs)
Factory to retrieve JSON credentials while creating client.
Parameters
json_credentials_path (str) The path to a private key file (this file was given
to you when you created the service account). This file must contain a JSON object with a
private key and other credentials information (downloaded from the Google APIs console).
args (tuple) Remaining positional arguments to pass to constructor.
kwargs (dict) Remaining keyword arguments to pass to constructor.
Return type _ClientFactoryMixin
Returns The client created with the retrieved JSON credentials.
Raises TypeError if there is a conflict with the kwargs and the credentials created by the
factory.
4.2 Exceptions
google.cloud.exceptions.GrpcRendezvous
Exception class raised by gRPC stable.
alias of _Rendezvous
static authenticated_users()
Factory method for a member representing all authenticated users.
Return type str
Returns A member string representing all authenticated users.
static domain(domain)
Factory method for a domain member.
Parameters domain (str) The domain for this member.
Return type str
Returns A member string corresponding to the given domain.
editors
Legacy access to editor role.
classmethod from_api_repr(resource)
Create a policy from the resource returned from the API.
Parameters resource (dict) resource returned from the getIamPolicy API.
Return type Policy
Returns the parsed policy
static group(email)
Factory method for a group member.
Parameters email (str) An id or e-mail for this particular group.
Return type str
Returns A member string corresponding to the given group.
owners
Legacy access to owner role.
static service_account(email)
Factory method for a service account member.
Parameters email (str) E-mail for this particular service account.
Return type str
Returns A member string corresponding to the given service account.
to_api_repr()
Construct a Policy resource.
Return type dict
Returns a resource to be passed to the setIamPolicy API.
static user(email)
Factory method for a user member.
Parameters email (str) E-mail for this particular user.
Return type str
Returns A member string corresponding to the given user.
viewers
Legacy access to viewer role.
google.cloud.iam.VIEWER_ROLE = 'roles/viewer'
Generic role implying rights to access an object.
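The factory methods above simply build IAM member strings; a small sketch of their output, assuming they
are the static helpers on google.cloud.iam.Policy documented here (addresses are placeholders, and the
formats follow the standard IAM member syntax):

from google.cloud.iam import Policy

print(Policy.user('[email protected]'))     # 'user:[email protected]'
print(Policy.group('[email protected]'))  # 'group:[email protected]'
print(Policy.domain('example.com'))        # 'domain:example.com'
print(Policy.authenticated_users())        # 'allAuthenticatedUsers'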
CHAPTER 5
BigQuery
5.1 Client
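A minimal construction sketch (assumes credentials and project are inferred from the environment, and that
list_datasets() follows the 0.27-era client API):

from google.cloud import bigquery

client = bigquery.Client()
for dataset in client.list_datasets():
    print(dataset.name)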
5.2 Datasets
class google.cloud.bigquery.dataset.AccessGrant(rol‍e, entity_type, entity_id)
Represents a grant of a role to an entity.
Every entry in the access list will have exactly one of userByEmail, groupByEmail, domain,
specialGroup or view set. If anything but view is set, it will also have a role specified; role is
omitted for a view, since views are always read-only.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/datasets.
Parameters
role (str) Role granted to the entity. One of
'OWNER'
'WRITER'
'READER'
May also be None if the entity_type is view.
entity_type (str) Type of entity being granted the role. One of ENTITY_TYPES.
entity_id (str) ID of entity being granted the role.
Raises ValueError if the entity_type is not among ENTITY_TYPES, or if a view has
role set, or if a non-view does not have a role set.
ENTITY_TYPES = frozenset(['specialGroup', 'groupByEmail', 'userByEmail', 'domain', 'view'])
Allowed entity types.
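A construction sketch (the e-mail address is a placeholder):

from google.cloud.bigquery.dataset import AccessGrant

# Grant read access on a dataset to a single user.
grant = AccessGrant(role='READER', entity_type='userByEmail',
                    entity_id='[email protected]')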
class google.cloud.bigquery.dataset.Dataset(name, client, access_grants=(), project=None)
Bases: object
Datasets are containers for tables.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/datasets
Parameters
name (str) the name of the dataset
client (google.cloud.bigquery.client.Client) A client which holds credentials and project configuration for the dataset (which requires a project).
access_grants (list of AccessGrant) roles granted to entities for this dataset
project (str) (Optional) project ID for the dataset (defaults to the project of the client).
access_grants
Dataset's access grants.
Return type list of AccessGrant
Returns roles granted to entities for this dataset
create(client=None)
API call: create the dataset via a POST request.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/datasets/insert
Parameters client (Client or NoneType) the client to use. If not passed, falls back to
the client stored on the current dataset.
created
Datetime at which the dataset was created.
Return type datetime.datetime, or NoneType
Returns the creation time (None until set from the server).
dataset_id
ID for the dataset resource.
Return type str, or NoneType
Returns the ID (None until set from the server).
default_table_expiration_ms
Default expiration time for tables in the dataset.
Return type int, or NoneType
Returns The time in milliseconds, or None (the default).
delete(client=None)
API call: delete the dataset via a DELETE request.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/datasets/delete
Parameters client (Client or NoneType) the client to use. If not passed, falls back to
the client stored on the current dataset.
description
Description of the dataset.
Return type str, or NoneType
Returns The description as set by the user, or None (the default).
etag
ETag for the dataset resource.
Return type str, or NoneType
Returns the ETag (None until set from the server).
exists(client=None)
API call: test for the existence of the dataset via a GET request
See https://cloud.google.com/bigquery/docs/reference/rest/v2/datasets/get
Parameters client (Client or NoneType) the client to use. If not passed, falls back to
the client stored on the current dataset.
Return type bool
Returns Boolean indicating existence of the dataset.
friendly_name
Title of the dataset.
Return type str, or NoneType
Returns The name as set by the user, or None (the default).
classmethod from_api_repr(resource, client)
Factory: construct a dataset given its API representation
Parameters
resource (dict) dataset resource representation returned from the API
client (google.cloud.bigquery.client.Client) Client which holds credentials and project configuration for the dataset.
Return type google.cloud.bigquery.dataset.Dataset
Returns Dataset parsed from resource.
list_tables(max_results=None, page_token=None)
List tables in the dataset.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/tables/list
Parameters
max_results (int) (Optional) Maximum number of tables to return. If not passed,
defaults to a value set by the API.
page_token (str) (Optional) Opaque marker for the next page of tables. If not
passed, the API will return the first page of tables.
Return type Iterator
Returns Iterator of Table contained within the current dataset.
location
Location in which the dataset is hosted.
Return type str, or NoneType
Returns The location as set by the user, or None (the default).
modified
Datetime at which the dataset was last modified.
Return type datetime.datetime, or NoneType
Returns the modification time (None until set from the server).
patch(client=None, **kw)
API call: update individual dataset properties via a PATCH request.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/datasets/patch
Parameters
client (Client or NoneType) the client to use. If not passed, falls back to the
client stored on the current dataset.
kw (dict) properties to be patched.
Raises ValueError for invalid value types.
path
URL path for the dataset's APIs.
Return type str
Returns the path based on project and dataset name.
project
Project bound to the dataset.
Return type str
Returns the project (derived from the client).
reload(client=None)
API call: refresh dataset properties via a GET request.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/datasets/get
Parameters client (Client or NoneType) the client to use. If not passed, falls back to
the client stored on the current dataset.
self_link
URL for the dataset resource.
Return type str, or NoneType
Returns the URL (None until set from the server).
table(name, schema=())
Construct a table bound to this dataset.
Parameters
name (str) Name of the table.
schema (list of google.cloud.bigquery.table.SchemaField) The table's schema
Return type google.cloud.bigquery.table.Table
Returns a new Table instance
update(client=None)
API call: update dataset properties via a PUT request.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/datasets/update
Parameters client (Client or NoneType) the client to use. If not passed, falls back to
the client stored on the current dataset.
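Putting the calls above together, a sketch of a typical dataset round-trip (assumes client is a
bigquery.Client, and that its dataset() factory follows the 0.27-era API):

dataset = client.dataset('my_dataset')
if not dataset.exists():
    dataset.create()    # create the dataset server-side
dataset.reload()        # pull server-assigned properties
print(dataset.created, dataset.self_link)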
5.3 Jobs
add_done_callback(fn)
Add a callback to be executed when the operation is complete.
If the operation is not already complete, this will start a helper thread to poll for the status of the operation
in the background.
Parameters fn (Callable[Future]) The callback to execute when the operation is complete.
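A sketch of attaching a callback (job is assumed to be one of the asynchronous job instances documented in
this section):

def on_complete(job):
    # Invoked with the job itself once it reaches a terminal state.
    print('job finished with state:', job.state)

job.add_done_callback(on_complete)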
begin(client=None)
API call: begin the job via a POST request
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/insert
Parameters client (Client or NoneType) the client to use. If not passed, falls back to
the client stored on the current dataset.
Raises ValueError if the job has already begun.
cancel(client=None)
API call: cancel job via a POST request
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/cancel
Parameters client (Client or NoneType) the client to use. If not passed, falls back to
the client stored on the current dataset.
Return type bool
Returns Boolean indicating that the cancel request was sent.
cancelled()
Check if the job has been cancelled.
This always returns False. It's not possible to check if a job was cancelled in the API. This method is here
to satisfy the interface for google.api.core.future.Future.
Return type bool
Returns False
create_disposition
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.copy.createDisposition
created
Datetime at which the job was created.
Return type datetime.datetime, or NoneType
Returns the creation time (None until set from the server).
done()
Refresh the job and check if it is complete.
Return type bool
Returns True if the job is complete, False otherwise.
ended
Datetime at which the job finished.
Return type datetime.datetime, or NoneType
Returns the end time (None until set from the server).
error_result
Error information about the job as a whole.
project
Project bound to the job.
Return type str
Returns the project (derived from the client).
reload(client=None)
API call: refresh job properties via a GET request.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/get
Parameters client (Client or NoneType) the client to use. If not passed, falls back to
the client stored on the current dataset.
result(timeout=None)
Start the job and wait for it to complete and get the result.
Parameters timeout (int) How long to wait for job to complete before raising a
TimeoutError.
Return type _AsyncJob
Returns This instance.
Raises GoogleCloudError if the job failed or TimeoutError if the job did not complete
in the given timeout.
running()
True if the operation is currently running.
self_link
URL for the job resource.
Return type str, or NoneType
Returns the URL (None until set from the server).
set_exception(exception)
Set the Future's exception.
set_result(result)
Set the Future's result.
started
Datetime at which the job was started.
Return type datetime.datetime, or NoneType
Returns the start time (None until set from the server).
state
Status of the job.
Return type str, or NoneType
Returns the state (None until set from the server).
user_email
E-mail address of user who submitted the job.
Return type str, or NoneType
Returns the e-mail address (None until set from the server).
write_disposition
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.copy.writeDisposition
class google.cloud.bigquery.job.CreateDisposition(name)
Bases: google.cloud.bigquery._helpers._EnumProperty
Pseudo-enum for create_disposition properties.
class google.cloud.bigquery.job.DestinationFormat(name)
Bases: google.cloud.bigquery._helpers._EnumProperty
Pseudo-enum for destination_format properties.
class google.cloud.bigquery.job.Encoding(name)
Bases: google.cloud.bigquery._helpers._EnumProperty
Pseudo-enum for encoding properties.
class google.cloud.bigquery.job.ExtractTableToStorageJob(name, source, destination_uris, client)
Bases: google.cloud.bigquery.job._AsyncJob
Asynchronous job: extract data from a table into Cloud Storage.
Parameters
name (str) the name of the job
source (google.cloud.bigquery.table.Table) Table from which data is to be extracted.
destination_uris (list of string) URIs describing Cloud Storage blobs into which extracted data will be written, in format gs://<bucket_name>/<object_name_or_glob>.
client (google.cloud.bigquery.client.Client) A client which holds credentials and project configuration for the dataset (which requires a project).
add_done_callback(fn)
Add a callback to be executed when the operation is complete.
If the operation is not already complete, this will start a helper thread to poll for the status of the operation
in the background.
Parameters fn (Callable[Future]) The callback to execute when the operation is complete.
begin(client=None)
API call: begin the job via a POST request
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/insert
Parameters client (Client or NoneType) the client to use. If not passed, falls back to
the client stored on the current dataset.
Raises ValueError if the job has already begun.
cancel(client=None)
API call: cancel job via a POST request
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/cancel
Parameters client (Client or NoneType) the client to use. If not passed, falls back to
the client stored on the current dataset.
Return type bool
Returns Boolean indicating that the cancel request was sent.
cancelled()
Check if the job has been cancelled.
This always returns False. It's not possible to check if a job was cancelled in the API. This method is here
to satisfy the interface for google.api.core.future.Future.
Return type bool
Returns False
compression
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.extract.compression
created
Datetime at which the job was created.
Return type datetime.datetime, or NoneType
Returns the creation time (None until set from the server).
destination_format
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.extract.destinationFormat
done()
Refresh the job and check if it is complete.
Return type bool
Returns True if the job is complete, False otherwise.
ended
Datetime at which the job finished.
Return type datetime.datetime, or NoneType
Returns the end time (None until set from the server).
error_result
Error information about the job as a whole.
Return type mapping, or NoneType
Returns the error information (None until set from the server).
errors
Information about individual errors generated by the job.
Return type list of mappings, or NoneType
Returns the error information (None until set from the server).
etag
ETag for the job resource.
Return type str, or NoneType
Returns the ETag (None until set from the server).
exception(timeout=None)
Get the exception from the operation, blocking if necessary.
Parameters timeout (int) How long to wait for the operation to complete. If None, wait
indefinitely.
Returns The operations error.
Parameters timeout (int) How long to wait for job to complete before raising a
TimeoutError.
Return type _AsyncJob
Returns This instance.
Raises GoogleCloudError if the job failed or TimeoutError if the job did not complete
in the given timeout.
running()
True if the operation is currently running.
self_link
URL for the job resource.
Return type str, or NoneType
Returns the URL (None until set from the server).
set_exception(exception)
Set the Future's exception.
set_result(result)
Set the Future's result.
started
Datetime at which the job was started.
Return type datetime.datetime, or NoneType
Returns the start time (None until set from the server).
state
Status of the job.
Return type str, or NoneType
Returns the state (None until set from the server).
user_email
E-mail address of user who submitted the job.
Return type str, or NoneType
Returns the e-mail address (None until set from the server).
class google.cloud.bigquery.job.LoadTableFromStorageJob(name, destination, source_uris, client, schema=())
Bases: google.cloud.bigquery.job._AsyncJob
Asynchronous job for loading data into a table from Cloud Storage.
Parameters
name (str) the name of the job
destination (google.cloud.bigquery.table.Table) Table into which
data is to be loaded.
source_uris (sequence of string) URIs of one or more data files to be loaded,
in format gs://<bucket_name>/<object_name_or_glob>.
client (google.cloud.bigquery.client.Client) A client which holds credentials and project configuration for the dataset (which requires a project).
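A load-job sketch, assuming client is a bigquery.Client, table is the destination Table, and
load_table_from_storage() is the 0.27-era client factory for this class:

job = client.load_table_from_storage(
    'my-load-job', table, 'gs://my-bucket/data-*.csv')
job.source_format = 'CSV'
job.begin()     # start the job
job.result()    # block until the load completes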
quote_character
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.load.quote
reload(client=None)
API call: refresh job properties via a GET request.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/get
Parameters client (Client or NoneType) the client to use. If not passed, falls back to
the client stored on the current dataset.
result(timeout=None)
Start the job and wait for it to complete and get the result.
Parameters timeout (int) How long to wait for job to complete before raising a
TimeoutError.
Return type _AsyncJob
Returns This instance.
Raises GoogleCloudError if the job failed or TimeoutError if the job did not complete
in the given timeout.
running()
True if the operation is currently running.
schema
Table's schema.
Return type list of SchemaField
Returns fields describing the schema
self_link
URL for the job resource.
Return type str, or NoneType
Returns the URL (None until set from the server).
set_exception(exception)
Set the Future's exception.
set_result(result)
Set the Future's result.
skip_leading_rows
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.load.skipLeadingRows
source_format
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.load.sourceFormat
started
Datetime at which the job was started.
Return type datetime.datetime, or NoneType
Returns the start time (None until set from the server).
state
Status of the job.
Return type str, or NoneType
Returns the state (None until set from the server).
user_email
E-mail address of user who submitted the job.
Return type str, or NoneType
Returns the e-mail address (None until set from the server).
write_disposition
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.load.writeDisposition
class google.cloud.bigquery.job.QueryJob(name, query, client, udf_resources=(), query_parameters=())
Bases: google.cloud.bigquery.job._AsyncJob
Asynchronous job: query tables.
Parameters
name (str) the name of the job
query (str) SQL query string
client (google.cloud.bigquery.client.Client) A client which holds credentials and project configuration for the dataset (which requires a project).
udf_resources (tuple) An iterable of google.cloud.bigquery._helpers.UDFResource (empty by default)
query_parameters (tuple) An iterable of google.cloud.bigquery._helpers.AbstractQueryParameter (empty by default)
add_done_callback(fn)
Add a callback to be executed when the operation is complete.
If the operation is not already complete, this will start a helper thread to poll for the status of the operation
in the background.
Parameters fn (Callable[Future]) The callback to execute when the operation is complete.
allow_large_results
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.query.allowLargeResults
begin(client=None)
API call: begin the job via a POST request
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/insert
Parameters client (Client or NoneType) the client to use. If not passed, falls back to
the client stored on the current dataset.
Raises ValueError if the job has already begun.
cancel(client=None)
API call: cancel job via a POST request
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/cancel
Parameters client (Client or NoneType) the client to use. If not passed, falls back to
the client stored on the current dataset.
Return type bool
Returns Boolean indicating that the cancel request was sent.
cancelled()
Check if the job has been cancelled.
This always returns False. It's not possible to check if a job was cancelled in the API. This method is here
to satisfy the interface for google.api.core.future.Future.
Return type bool
Returns False
create_disposition
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.query.createDisposition
created
Datetime at which the job was created.
Return type datetime.datetime, or NoneType
Returns the creation time (None until set from the server).
default_dataset
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.query.defaultDataset
destination
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.query.destinationTable
done()
Refresh the job and check if it is complete.
Return type bool
Returns True if the job is complete, False otherwise.
dry_run
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.dryRun
ended
Datetime at which the job finished.
Return type datetime.datetime, or NoneType
Returns the end time (None until set from the server).
error_result
Error information about the job as a whole.
Return type mapping, or NoneType
Returns the error information (None until set from the server).
errors
Information about individual errors generated by the job.
Return type list of mappings, or NoneType
Returns the error information (None until set from the server).
etag
ETag for the job resource.
Return type str, or NoneType
Returns the ETag (None until set from the server).
exception(timeout=None)
Get the exception from the operation, blocking if necessary.
Parameters timeout (int) How long to wait for the operation to complete. If None, wait
indefinitely.
Returns The operations error.
Return type Optional[google.gax.GaxError]
exists(client=None)
API call: test for the existence of the job via a GET request
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/get
Parameters client (Client or NoneType) the client to use. If not passed, falls back to
the client stored on the current dataset.
Return type bool
Returns Boolean indicating existence of the job.
flatten_results
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.query.flattenResults
classmethod from_api_repr(resource, client)
Factory: construct a job given its API representation
Parameters
resource (dict) dataset job representation returned from the API
client (google.cloud.bigquery.client.Client) Client which holds credentials and project configuration for the dataset.
Return type google.cloud.bigquery.job.QueryJob
Returns Job parsed from resource.
job_type
Type of job
Return type str
Returns one of load, copy, extract, query
maximum_billing_tier
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.query.maximumBillingTier
maximum_bytes_billed
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.query.maximumBytesBilled
path
URL path for the job's APIs.
Return type str
Returns the path based on project and job name.
priority
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.query.priority
project
Project bound to the job.
Return type str
Returns the project (derived from the client).
query_results()
Construct a QueryResults instance, bound to this job.
Return type QueryResults
Returns results instance
reload(client=None)
API call: refresh job properties via a GET request.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/get
Parameters client (Client or NoneType) the client to use. If not passed, falls back to
the client stored on the current dataset.
result(timeout=None)
Start the job and wait for it to complete and get the result.
Parameters timeout (int) How long to wait for job to complete before raising a
TimeoutError.
Return type Iterator
Returns Iterator of row-data tuples. During each page, the iterator will have the total_rows
attribute set, which counts the total number of rows in the result set (this is distinct from the
total number of rows in the current page: iterator.page.num_items).
Raises GoogleCloudError if the job failed or TimeoutError if the job did not complete
in the given timeout.
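A sketch of the asynchronous query flow, assuming run_async_query() is the 0.27-era client factory for
QueryJob (job names must be unique within the project):

job = client.run_async_query('unique-job-name', 'SELECT 17')
job.begin()
for row in job.result():    # waits for completion, then iterates rows
    print(row)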
running()
True if the operation is currently running.
self_link
URL for the job resource.
Return type str, or NoneType
Returns the URL (None until set from the server).
set_exception(exception)
Set the Future's exception.
set_result(result)
Set the Future's result.
started
Datetime at which the job was started.
Return type datetime.datetime, or NoneType
Returns the start time (None until set from the server).
state
Status of the job.
Return type str, or NoneType
Returns the state (None until set from the server).
use_legacy_sql
See https://cloud.google.com/bigquery/docs/reference/v2/jobs#configuration.query.useLegacySql
use_query_cache
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.query.useQueryCache
user_email
E-mail address of user who submitted the job.
Return type str, or NoneType
Returns the e-mail address (None until set from the server).
write_disposition
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.query.writeDisposition
class google.cloud.bigquery.job.QueryPriority(name)
Bases: google.cloud.bigquery._helpers._EnumProperty
Pseudo-enum for QueryJob.priority property.
class google.cloud.bigquery.job.SourceFormat(name)
Bases: google.cloud.bigquery._helpers._EnumProperty
Pseudo-enum for source_format properties.
class google.cloud.bigquery.job.WriteDisposition(name)
Bases: google.cloud.bigquery._helpers._EnumProperty
Pseudo-enum for write_disposition properties.
5.4 Query
Returns True if the query completed on the server (None until set by the server).
default_dataset
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/query#defaultDataset
dry_run
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/query#dryRun
errors
Errors generated by the query.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/query#errors
Return type list of mapping, or NoneType
Returns Mappings describing errors generated on the server (None until set by the server).
fetch_data(max_results=None, page_token=None, start_index=None, timeout_ms=None, client=None)
API call: fetch a page of query result data via a GET request
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/getQueryResults
Parameters
max_results (int) (Optional) maximum number of rows to return.
page_token (str) (Optional) token representing a cursor into the table's rows.
start_index (int) (Optional) zero-based index of starting row
timeout_ms (int) (Optional) How long to wait for the query to complete, in milliseconds,
before the request times out and returns. Note that this is only a timeout for the request,
not the query. If the query takes longer to run than the timeout value, the call returns
without any results and with the jobComplete flag set to false. You can call
getQueryResults() to wait for the query to complete and read the results. The default value
is 10000 milliseconds (10 seconds).
client (Client or NoneType) the client to use. If not passed, falls back to the
client stored on the current dataset.
Return type Iterator
Returns Iterator of row-data tuples. During each page, the iterator will have the total_rows
attribute set, which counts the total number of rows in the result set (this is distinct from the
total number of rows in the current page: iterator.page.num_items).
Raises ValueError if the query has not yet been executed.
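A synchronous-query sketch, assuming run_sync_query() is the 0.27-era client factory for QueryResults:

query = client.run_sync_query('SELECT 17')
query.run()                      # POST .../jobs/query
for row in query.fetch_data():   # pages through getQueryResults
    print(row)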
classmethod from_query_job(job)
Factory: construct from an existing job.
Parameters job (QueryJob) existing job
Return type QueryResults
Returns the instance, bound to the job
job
Job instance used to run the query.
Return type google.cloud.bigquery.job.QueryJob, or NoneType
Returns Job instance used to run the query (None until jobReference property is set by the
server).
max_results
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/query#maxResults
name
Job name, generated by the back-end.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/query#jobReference
Return type str, or NoneType
Returns the job name (None until set by the server).
num_dml_affected_rows
Total number of rows affected by a DML query.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/query#numDmlAffectedRows
Return type int, or NoneType
Returns Count generated on the server (None until set by the server).
page_token
Token for fetching next batch of results.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/query#pageToken
Return type str, or NoneType
Returns Token generated on the server (None until set by the server).
preserve_nulls
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/query#preserveNulls
project
Project bound to the job.
Return type str
Returns the project (derived from the client).
rows
Query results.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/query#rows
Return type list of tuples of row values, or NoneType
Returns row values (None until set by the server).
run(client=None)
API call: run the query via a POST request
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/query
Parameters client (Client or NoneType) the client to use. If not passed, falls back to
the client stored on the current dataset.
schema
Schema for query results.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/query#schema
Return type list of SchemaField, or NoneType
Returns fields describing the schema (None until set by the server).
timeout_ms
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/query#timeoutMs
total_bytes_processed
Total number of bytes processed by the query.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/query#totalBytesProcessed
Return type int, or NoneType
Returns Count generated on the server (None until set by the server).
total_rows
Total number of rows returned by the query.
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/query#totalRows
Return type int, or NoneType
Returns Count generated on the server (None until set by the server).
use_legacy_sql
See https://cloud.google.com/bigquery/docs/reference/v2/jobs/query#useLegacySql
use_query_cache
See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/query#useQueryCache
5.5 Schemas
5.6 Tables
fetch_data(max_results=None, page_token=None, client=None)
API call: fetch a page of the table's rows via a GET request
Note: This method assumes that the instance's schema attribute is up-to-date with the schema as defined
on the back-end: if the two schemas are not identical, the values returned may be incomplete. To ensure
that the local copy of the schema is up-to-date, call reload().
Parameters
max_results (int) (Optional) Maximum number of rows to return.
page_token (str) (Optional) Token representing a cursor into the tables rows.
client (Client) (Optional) The client to use. If not passed, falls back to the client
stored on the current dataset.
Return type Iterator
Returns Iterator of row data tuples. During each page, the iterator will have the total_rows
attribute set, which counts the total number of rows in the table (this is distinct from the total
number of rows in the current page: iterator.page.num_items).
friendly_name
Title of the table.
Return type str, or NoneType
Returns The name as set by the user, or None (the default).
classmethod from_api_repr(resource, dataset)
Factory: construct a table given its API representation
Parameters
resource (dict) table resource representation returned from the API
dataset (google.cloud.bigquery.dataset.Dataset) The dataset containing the table.
Return type google.cloud.bigquery.table.Table
Returns Table parsed from resource.
insert_data(rows, row_ids=None, skip_invalid_rows=None, ignore_unknown_values=None, template_suffix=None, client=None)
API call: insert table data via a POST request
See https://cloud.google.com/bigquery/docs/reference/rest/v2/tabledata/insertAll
Parameters
rows (list of tuples) Row data to be inserted. Each tuple should contain data
for each schema field on the current table and in the same order as the schema fields.
row_ids (list of string) Unique ids, one per row being inserted. If not passed,
no de-duplication occurs.
skip_invalid_rows (bool) (Optional) Insert all valid rows of a request, even if
invalid rows exist. The default value is False, which causes the entire request to fail if any
invalid rows exist.
ignore_unknown_values (bool) (Optional) Accept rows that contain values that
do not match the schema. The unknown values are ignored. Default is False, which treats
unknown values as errors.
template_suffix (str) (Optional) treat name as a template table and provide a suffix.
BigQuery will create the table <name> + <template_suffix> based on the schema of the
template table. See https://cloud.google.com/bigquery/streaming-data-into-bigquery#template-tables
client (Client or NoneType) the client to use. If not passed, falls back to the
client stored on the current dataset.
Return type list of mappings
Returns One mapping per row with insert errors: the index key identifies the row, and the
errors key contains a list of the mappings describing one or more problems with the row.
Raises ValueError if the table's schema is not set
list_partitions(client=None)
List the partitions in a table.
Parameters client (Client or NoneType) the client to use. If not passed, falls back to
the client stored on the current dataset.
Return type list
Returns a list of time partitions
location
Location in which the table is hosted.
Return type str, or NoneType
Returns The location as set by the user, or None (the default).
modified
Datetime at which the table was last modified.
Return type datetime.datetime, or NoneType
Returns the modification time (None until set from the server).
num_bytes
The size of the table in bytes.
Return type int, or NoneType
Returns the byte count (None until set from the server).
num_rows
The number of rows in the table.
Return type int, or NoneType
Returns the row count (None until set from the server).
partition_expiration
Expiration time in ms for a partition.
Return type int, or NoneType
Returns The time in ms for partition expiration.
partitioning_type
Time partitioning of the table.
Return type str, or NoneType
Returns The type if the table is partitioned, None otherwise.
patch(client=None, friendly_name=<object object>, description=<object object>, location=<object object>, expires=<object object>, view_query=<object object>, schema=<object object>)
API call: update individual table properties via a PATCH request
See https://cloud.google.com/bigquery/docs/reference/rest/v2/tables/patch
Parameters
client (Client or NoneType) the client to use. If not passed, falls back to the
client stored on the current dataset.
friendly_name (str) (Optional) a descriptive name for this table.
description (str) (Optional) a description of this table.
location (str) (Optional) the geographic location where the table resides.
expires (datetime.datetime) (Optional) point in time at which the table expires.
view_query (str) SQL query defining the table as a view
schema (list of SchemaField) fields describing the schema
Raises ValueError for invalid value types.
path
URL path for the table's APIs.
Return type str
Returns the path based on project and dataset name.
project
Project bound to the table.
Return type str
Returns the project (derived from the dataset).
reload(client=None)
API call: refresh table properties via a GET request
See https://cloud.google.com/bigquery/docs/reference/rest/v2/tables/get
Parameters client (Client or NoneType) the client to use. If not passed, falls back to
the client stored on the current dataset.
row_from_mapping(mapping)
Convert a mapping to a row tuple using the schema.
Parameters mapping (dict) Mapping of row data: must contain keys for all required fields
in the schema. Keys which do not correspond to a field in the schema are ignored.
Return type tuple
Returns Tuple whose elements are ordered according to the table's schema.
Raises ValueError if the table's schema is not set
schema
Table's schema.
Return type list of SchemaField
Returns fields describing the schema
self_link
URL for the table resource.
Return type str, or NoneType
Returns the URL (None until set from the server).
table_id
ID for the table resource.
Return type str, or NoneType
Returns the ID (None until set from the server).
table_type
The type of the table.
Possible values are TABLE, VIEW, or EXTERNAL.
Return type str, or NoneType
Returns the table type (None until set from the server).
update(client=None)
API call: update table properties via a PUT request
See https://cloud.google.com/bigquery/docs/reference/rest/v2/tables/update
Parameters client (Client or NoneType) the client to use. If not passed, falls back to
the client stored on the current dataset.
upload_from_file(file_obj, source_format, rewind=False, size=None, num_retries=6, allow_jagged_rows=None, allow_quoted_newlines=None, create_disposition=None, encoding=None, field_delimiter=None, ignore_unknown_values=None, max_bad_records=None, quote_character=None, skip_leading_rows=None, write_disposition=None, client=None, job_name=None, null_marker=None)
Upload the contents of this table from a file-like object.
Parameters
file_obj (file) A file handle opened in binary mode for reading.
source_format (str) Any supported format. The full list of supported formats is
documented under the configuration.load.sourceFormat property on this page:
https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs
rewind (bool) If True, seek to the beginning of the file handle before writing the file.
size (int) The number of bytes to read from the file handle. If not provided, we'll try
to guess the size using os.fstat(). (If the file handle is not from the filesystem this
won't be possible.)
num_retries (int) Number of upload retries. Defaults to 6.
allow_jagged_rows (bool) job configuration option; see google.cloud.bigquery.job.LoadJob().
allow_quoted_newlines (bool) job configuration option; see google.cloud.bigquery.job.LoadJob().
create_disposition (str) job configuration option; see google.cloud.bigquery.job.LoadJob().
encoding (str) job configuration option; see google.cloud.bigquery.job.LoadJob().
field_delimiter (str) job configuration option; see google.cloud.bigquery.job.LoadJob().
ignore_unknown_values (bool) job configuration option; see google.cloud.bigquery.job.LoadJob().
max_bad_records (int) job configuration option; see google.cloud.bigquery.job.LoadJob().
quote_character (str) job configuration option; see google.cloud.bigquery.job.LoadJob().
skip_leading_rows (int) job configuration option; see google.cloud.bigquery.job.LoadJob().
write_disposition (str) job configuration option; see google.cloud.bigquery.job.LoadJob().
client (Client) (Optional) The client to use. If not passed, falls back to the client
stored on the current table.
job_name (str) Optional. The id of the job. Generated if not explicitly passed in.
null_marker (str) Optional. A custom null marker (example: \N)
Return type LoadTableFromStorageJob
Returns the job instance used to load the data (e.g., for querying status). Note that the job is
already started: do not call job.begin().
Raises ValueError if size is not passed in and cannot be determined, or if the file_obj
can be detected to be a file opened in text mode.
view_query
SQL query defining the table as a view.
Return type str, or NoneType
Returns The query as set by the user, or None (the default).
view_use_legacy_sql
Specifies whether to execute the view with legacy or standard SQL.
If not set, None is returned. BigQuery's default mode is equivalent to useLegacySql = True.
Return type bool, or NoneType
Returns The boolean for view.useLegacySql as set by the user, or None (the default).
5.8 Projects
A project is the top-level container in the BigQuery API: it is tied closely to billing, and can provide default access
control across all its datasets. If no project is passed to the client constructor, the library attempts to infer a project
from the environment (including explicit environment variables, GAE, and GCE).
To override the project inferred from the environment, pass an explicit project to the constructor, or to either of the
alternative classmethod factories:
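For example, a minimal sketch (the project ID is a placeholder):
from google.cloud import bigquery

client = bigquery.Client(project='my-project')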
Each project has an access control list granting reader / writer / owner permission to one or more entities. This list
cannot be queried or set via the API: it must be managed using the Google Developer Console.
5.9 Datasets
A dataset represents a collection of tables, and applies several default policies to tables as they are created:
An access control list (ACL). When created, a dataset has an ACL which maps to the ACL inherited from its
project.
A default table expiration period. If set, tables created within the dataset will have the value as their expiration
period.
dataset = client.dataset(DATASET_NAME)
dataset.create() # API request
Refresh metadata for a dataset (to pick up changes made by another client):
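A minimal sketch, reusing the dataset object from above:
dataset.reload()  # API request
Patch metadata for a dataset: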
ONE_DAY_MS = 24 * 60 * 60 * 1000
assert dataset.description == ORIGINAL_DESCRIPTION
dataset.patch(
description=PATCHED_DESCRIPTION,
default_table_expiration_ms=ONE_DAY_MS
) # API request
assert dataset.description == PATCHED_DESCRIPTION
assert dataset.default_table_expiration_ms == ONE_DAY_MS
Replace the ACL for a dataset, and update all writeable fields:
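A sketch, assuming AccessGrant is importable from google.cloud.bigquery.dataset and USER_EMAIL is defined elsewhere:
from google.cloud.bigquery.dataset import AccessGrant

grants = list(dataset.access_grants)
grants.append(AccessGrant('READER', 'userByEmail', USER_EMAIL))
dataset.access_grants = grants
dataset.update()  # API request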
Delete a dataset:
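A one-line sketch, reusing the dataset object from above:
dataset.delete()  # API request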
5.10 Tables
Create a table:
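A sketch, assuming SchemaField is importable from google.cloud.bigquery.schema and TABLE_NAME is defined elsewhere:
from google.cloud.bigquery.schema import SchemaField

SCHEMA = [
    SchemaField('full_name', 'STRING', mode='REQUIRED'),
    SchemaField('age', 'INTEGER', mode='REQUIRED'),
]
table = dataset.table(TABLE_NAME, SCHEMA)
table.create()  # API request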
Refresh metadata for a table (to pick up changes made by another client):
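A minimal sketch:
table.reload()  # API request
Insert rows into the table's data with insert_data():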
ROWS_TO_INSERT = [
(u'Phred Phlyntstone', 32),
(u'Wylma Phlyntstone', 29),
]
table.insert_data(ROWS_TO_INSERT)
Upload table data from a local CSV file. A sketch of writing the file first, using a temporary file (the setup here is illustrative):
import csv
import tempfile

csv_file = tempfile.NamedTemporaryFile(mode='w', suffix='.csv', delete=False)
writer = csv.writer(csv_file)
writer.writerow(('full_name', 'age'))
writer.writerow(('Phred Phlyntstone', '32'))
writer.writerow(('Wylma Phlyntstone', '29'))
csv_file.flush()
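Then upload it with upload_from_file() (a sketch; see the method reference above for the full option list):
with open(csv_file.name, 'rb') as readable:
    table.upload_from_file(readable, source_format='CSV', skip_leading_rows=1)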
Delete a table:
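A one-line sketch:
table.delete()  # API request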
5.11 Jobs
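Run a query which can be expected to complete within bounded time; a sketch, assuming QUERY, TIMEOUT_MS and LIMIT are defined elsewhere:
query = client.run_sync_query(QUERY)
query.timeout_ms = TIMEOUT_MS
query.run()  # API request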
assert query.complete
assert len(query.rows) == LIMIT
assert [field.name for field in query.schema] == ['name']
If the rows returned by the query do not fit into the initial response, then we need to fetch the remaining rows via
fetch_data():
query = client.run_sync_query(LIMITED)
query.timeout_ms = TIMEOUT_MS
query.max_results = PAGE_SIZE
query.run() # API request
assert query.complete
assert query.page_token is not None
assert len(query.rows) == PAGE_SIZE
assert [field.name for field in query.schema] == ['name']
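The remaining rows are then retrieved with fetch_data(); a sketch (do_something_with is a hypothetical handler):
iterator = query.fetch_data()  # API request(s) during iteration
for row in iterator:
    do_something_with(row)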
If the query takes longer than the timeout allowed, query.complete will be False. In that case, we need to poll
the associated job until it is done, and then fetch the results:
query = client.run_sync_query(QUERY)
query.timeout_ms = TIMEOUT_MS
query.use_query_cache = False
query.run() # API request
job = query.job
job.reload()  # API request
retry_count = 0
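A polling sketch (the backoff schedule here is arbitrary):
import time

while retry_count < 10 and job.state != 'DONE':
    time.sleep(1.5 ** retry_count)  # exponential backoff
    retry_count += 1
    job.reload()  # API request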
Note:
The created and state fields are not set until the job is submitted to the BigQuery back-end.
Start a job loading data asynchronously from a set of CSV files, located on Google Cloud Storage, appending rows
into an existing table. First, create the job locally:
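A sketch; the job name, table object and GCS URI are placeholders, and the configuration attributes shown are optional:
job = client.load_table_from_storage(
    'load-job-name', table, 'gs://bucket-name/object-name')
job.source_format = 'CSV'
job.write_disposition = 'WRITE_APPEND'  # append into the existing table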
Note:
google.cloud.bigquery generates a UUID for each job.
The created and state fields are not set until the job is submitted to the BigQuery back-end.
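Then submit the job to the server with begin():
job.begin()  # API request: starts the job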
Start a job exporting a tables data asynchronously to a set of CSV files, located on Google Cloud Storage. First, create
the job locally:
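A sketch; the job name, source table object and destination URI are placeholders:
job = client.extract_table_to_storage(
    'extract-job-name', table, 'gs://bucket-name/export-*.csv')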
Note:
google.cloud.bigquery generates a UUID for each job.
The created and state fields are not set until the job is submitted to the BigQuery back-end.
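As with the load job above, submit it with begin():
job.begin()  # API request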
Once the job has completed, timing fields such as ended are populated:
>>> job.ended
datetime.datetime(2015, 7, 23, 9, 30, 21, 334792, tzinfo=<UTC>)
CHAPTER 6
Bigtable
6.1 Base for Everything
To use the API, the Client class defines a high-level interface which handles authorization and creating other objects:
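A minimal sketch:
from google.cloud import bigtable

client = bigtable.Client()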
When creating a Client, the user_agent argument has a sensible default (DEFAULT_USER_AGENT). However,
you may override it, and the value will be used throughout all API requests made with the client you create.
6.1.2 Configuration
Tip: Be sure to use the Project ID, not the Project Number.
If you'll be using your client to make Instance Admin and Table Admin API requests, you'll need to pass the admin
argument:
client = bigtable.Client(admin=True)
If, on the other hand, you only have (or want) read access to the data, you can pass the read_only argument:
client = bigtable.Client(read_only=True)
This will ensure that the READ_ONLY_SCOPE is used for API requests (so any accidental requests that would modify
data will fail).
After a Client, the next highest-level object is an Instance. You'll need one before you can interact with tables
or data.
Head next to learn about the Instance Admin API.
6.2 Client
Note: Since the Cloud Bigtable API requires the gRPC transport, no _http argument is accepted by this class.
Parameters
project (str or unicode) (Optional) The ID of the project which owns the instances,
tables and data. If not provided, will attempt to determine from the environment.
credentials (Credentials) (Optional) The OAuth2 Credentials to use for this
client. If not passed, falls back to the default inferred from the environment.
read_only (bool) (Optional) Boolean indicating if the data scope should be for reading
only (or for writing as well). Defaults to False.
admin (bool) (Optional) Boolean indicating if the client will be used to interact with
the Instance Admin or Table Admin APIs. This requires the ADMIN_SCOPE. Defaults to
False.
user_agent (str) (Optional) The user agent to be used with API request. Defaults to
DEFAULT_USER_AGENT.
Raises ValueError if both read_only and admin are True
copy()
Make a copy of this client.
Copies the local data stored as simple types but does not copy the current state of any open connections
with the Cloud Bigtable API.
Return type Client
Returns A copy of the current client.
credentials
Getter for the client's credentials.
Return type OAuth2Credentials
Returns The credentials stored on the client.
instance(instance_id, location='see-existing-cluster', display_name=None, serve_nodes=3)
Factory to create an instance associated with this client.
Parameters
instance_id (str) The ID of the instance.
location (str) location name, in form projects/<project>/locations/<location>; used to set up the instance's cluster.
display_name (str) (Optional) The display name for the instance in the Cloud Console UI.
(Must be between 4 and 30 characters.) If this value is not set in the constructor, will fall
back to the instance ID.
serve_nodes (int) (Optional) The number of nodes in the instance's cluster; used to set up
the instance's cluster.
Return type Instance
Returns an instance owned by this client.
list_instances()
List instances owned by the project.
Return type tuple
Returns A pair of results, the first is a list of Instance objects returned and the second is a
list of strings (the failed locations in the request).
project_name
Project name to be used with Instance Admin API.
Note: This property will not change if project does not, but the return value is not cached.
google.cloud.bigtable.client.DATA_API_HOST = 'bigtable.googleapis.com'
Data API request host.
google.cloud.bigtable.client.DATA_SCOPE = 'https://www.googleapis.com/auth/bigtable.data'
Scope for reading and writing table data.
google.cloud.bigtable.client.INSTANCE_ADMIN_HOST = 'bigtableadmin.googleapis.com'
Cluster Admin API request host.
google.cloud.bigtable.client.READ_ONLY_SCOPE = 'https://www.googleapis.com/auth/bigtable.data.readonly'
Scope for reading table data.
google.cloud.bigtable.client.TABLE_ADMIN_HOST = 'bigtableadmin.googleapis.com'
Table Admin API request host.
6.3 Cluster
Note: For now, we leave out the default_storage_type (an enum) which if not sent will end up as
data_v2_pb2.STORAGE_SSD.
Parameters
cluster_id (str) The ID of the cluster.
instance (Instance) The instance where the cluster resides.
serve_nodes (int) (Optional) The number of nodes in the cluster. Defaults to
DEFAULT_SERVE_NODES.
copy()
Make a copy of this cluster.
Copies the local data stored as simple types and copies the client attached to this instance.
Return type Cluster
Returns A copy of the current cluster.
create()
Create this cluster.
Note: Uses the project, instance and cluster_id on the current Cluster in addition to the
serve_nodes. To change them before creating, reset the values via
cluster.serve_nodes = 8
cluster.cluster_id = 'i-changed-my-mind'
delete()
Delete this cluster.
Marks a cluster and all of its tables for permanent deletion in 7 days.
Immediately upon completion of the request:
Billing will cease for all of the cluster's reserved resources.
The cluster's delete_time field will be set 7 days in the future.
Soon afterward:
All tables within the cluster will become unavailable.
At the cluster's delete_time:
The cluster and all of its tables will immediately and irrevocably disappear from the API, and their
data will be permanently deleted.
classmethod from_pb(cluster_pb, instance)
Creates a cluster instance from a protobuf.
Parameters
cluster_pb (instance_pb2.Cluster) A cluster protobuf object.
instance (Instance) The instance that owns the cluster.
Return type Cluster
Returns The cluster parsed from the protobuf response.
Raises ValueError if the cluster name does not match
projects/{project}/instances/{instance}/clusters/{cluster_id} or if the parsed project ID
does not match the project ID on the client.
name
Cluster name used in requests.
Note: This property will not change if _instance and cluster_id do not, but the return value is
not cached.
reload()
Reload the metadata for this cluster.
update()
Update this cluster.
Note: Updates the serve_nodes. If you'd like to change them before updating, reset the values via
cluster.serve_nodes = 8
google.cloud.bigtable.cluster.DEFAULT_SERVE_NODES = 3
Default number of nodes to use when creating a cluster.
6.4 Instance
Note: For now, we leave out the default_storage_type (an enum) which if not sent will end up as
data_v2_pb2.STORAGE_SSD.
Parameters
instance_id (str) The ID of the instance.
client (Client) The client that owns the instance. Provides authorization and a project
ID.
location_id (str) ID of the location in which the instance will be created. Required
for instances which do not yet exist.
display_name (str) (Optional) The display name for the instance in the Cloud Console UI.
(Must be between 4 and 30 characters.) If this value is not set in the constructor, will fall
back to the instance ID.
serve_nodes (int) (Optional) The number of nodes in the instance's cluster; used to set up
the instance's cluster.
cluster(cluster_id, serve_nodes=3)
Factory to create a cluster associated with this client.
Parameters
cluster_id (str) The ID of the cluster.
serve_nodes (int) (Optional) The number of nodes in the cluster. Defaults to 3.
Return type Cluster
Returns The cluster owned by this client.
copy()
Make a copy of this instance.
Copies the local data stored as simple types and copies the client attached to this instance.
Return type Instance
Returns A copy of the current instance.
create()
Create this instance.
Note: Uses the project and instance_id on the current Instance in addition to the
display_name. To change them before creating, reset the values via
instance.display_name = 'New display name'
instance.instance_id = 'i-changed-my-mind'
delete()
Delete this instance.
Marks an instance and all of its tables for permanent deletion in 7 days.
Immediately upon completion of the request:
Billing will cease for all of the instance's reserved resources.
The instance's delete_time field will be set 7 days in the future.
Soon afterward:
All tables within the instance will become unavailable.
At the instance's delete_time:
The instance and all of its tables will immediately and irrevocably disappear from the API, and their
data will be permanently deleted.
classmethod from_pb(instance_pb, client)
Creates an instance from a protobuf.
Parameters
instance_pb (instance_pb2.Instance) An instance protobuf object.
client (Client) The client that owns the instance.
Return type Instance
Returns The instance parsed from the protobuf response.
Raises ValueError if the instance name does not match
projects/{project}/instances/{instance_id} or if the parsed project ID does not match the
project ID on the client.
list_clusters()
Lists clusters in this instance.
Return type tuple
Returns A pair of results, the first is a list of Cluster objects returned and the second is a list of
strings (the failed locations in the request).
list_tables()
List the tables in this instance.
Return type list of Table
Returns The list of tables owned by the instance.
Raises ValueError if one of the returned tables has a name that is not of the expected format.
name
Instance name used in requests.
Note: This property will not change if instance_id does not, but the return value is not cached.
reload()
Reload the metadata for this instance.
table(table_id)
Factory to create a table associated with this instance.
Parameters table_id (str) The ID of the table.
Return type Table
Returns The table owned by this instance.
update()
Update this instance.
Note: Updates the display_name. To change that value before updating, reset it on the instance first.
6.5 Instance Admin API
After creating a Client, you can interact with individual instances for a project.
If you want a comprehensive list of all existing instances, make a ListInstances API request with
Client.list_instances():
instances = client.list_instances()
location_id is the ID of the location in which the instance's cluster will be hosted, e.g.
'us-central1-c'. location_id is required for instances which do not already exist.
display_name is optional. When not provided, display_name defaults to the instance_id value.
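A sketch (the IDs are placeholders):
instance = client.instance('my-instance', 'us-central1-c')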
You can also use Client.instance() to create a local wrapper for instances that have already been created with
the API, or through the web console:
instance = client.instance(existing_instance_id)
instance.reload()
After creating the instance object, make a CreateInstance API request with create():
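For example:
operation = instance.create()  # API request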
Note: When modifying an instance (via a CreateInstance request), the Bigtable API will return a long-running
operation and a corresponding Operation object will be returned by create().
You can check if a long-running operation (for a create() has finished by making a GetOperation request with
Operation.finished():
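A polling sketch (the interval is arbitrary):
import time

while not operation.finished():  # GetOperation API request
    time.sleep(5)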
Note: Once an Operation object has returned True from finished(), the object should not be re-used.
Subsequent calls to finished() will result in a ValueError.
After creating the instance object, make a GetInstance API request with reload():
instance.reload()
After creating the instance object, make an UpdateInstance API request with update():
instance.update()
Delete an instance with delete():
instance.delete()
6.6 Table
Note: We don't define any properties on a table other than the name. The only other fields
are column_families and granularity. The column_families are not stored locally and
granularity is an enum with only one value.
Parameters
table_id (str) The ID of the table.
instance (Instance) The instance that owns the table.
column_family(column_family_id, gc_rule=None)
Factory to create a column family associated with this table.
Parameters
column_family_id (str) The ID of the column family. Must be of the form
[_a-zA-Z0-9][-_.a-zA-Z0-9]*.
gc_rule (GarbageCollectionRule) (Optional) The garbage collection settings
for this column family.
Return type ColumnFamily
Returns A column family owned by this table.
create(initial_split_keys=None, column_families=())
Creates this table.
Note: A create request returns a _generated.table_pb2.Table, but we don't use this response.
Parameters
initial_split_keys (list) (Optional) List of row keys that will be used to initially split the
table into several tablets (Tablets are similar to HBase regions). Given two split keys,
"s1" and "s2", three tablets will be created, spanning the key ranges: [, s1), [s1, s2), [s2, ).
column_families (list) (Optional) List or other iterable of ColumnFamily
instances.
delete()
Delete this table.
list_column_families()
List the column families owned by this table.
Return type dict
Returns Dictionary of column families attached to this table. Keys are strings (column family
names) and values are ColumnFamily instances.
Raises ValueError if the column family name from the response does not agree with the
computed name from the column family ID.
mutate_rows(rows)
Mutates multiple rows in bulk.
The method tries to update all specified rows. If some of the rows weren't updated, their mutations are
not removed; they can be applied to the row separately. Mutations which finished successfully are
cleaned up.
Parameters rows (list) List or other iterable of DirectRow instances.
Return type list
Returns A list of response statuses (google.rpc.status_pb2.Status) corresponding to success or
failure of each row mutation sent. These will be in the same order as the rows.
name
Table name used in requests.
Note: This property will not change if table_id does not, but the return value is not cached.
read_row(row_key, filter_=None)
Read a single row from this table.
Parameters
row_key (bytes) The key of the row to read from.
filter_ (RowFilter) (Optional) The filter to apply to the contents of the row. If
unset, returns the entire row.
Return type PartialRowData, NoneType
Returns The contents of the row if any chunks were returned in the response, otherwise None.
row(row_key, filter_=None, append=False)
Factory to create a row associated with this table.
Parameters
row_key (bytes) The key for the row being created.
filter_ (RowFilter) (Optional) Filter to be used for conditional mutations. See
ConditionalRow for more details.
append (bool) (Optional) Flag to determine if the row should be used for append
mutations.
Return type Row
Returns A row owned by this table.
Raises ValueError if both filter_ and append are used.
sample_row_keys()
Read a sample of row keys in the table.
The returned row keys will delimit contiguous sections of the table of approximately equal size, which can
be used to break up the data for distributed tasks like mapreduces.
The elements in the iterator are a SampleRowKeys response and they have the properties offset_bytes
and row_key. They occur in sorted order. The table might have contents before the first row key in the
list and after the last one, but a key containing the empty string indicates end of table and will be the last
response given, if present.
Note: Row keys in this list may not have ever been written to or read from, and users should therefore not
make any assumptions about the row key structure that are specific to their use case.
The offset_bytes field on a response indicates the approximate total storage space used by all rows
in the table which precede row_key. Buffering the contents of all rows between two subsequent samples
would require space roughly equal to the difference in their offset_bytes fields.
Return type GrpcRendezvous
Returns A cancel-able iterator. Can be consumed by calling next() or by casting to a list
and can be cancelled by calling cancel().
exception google.cloud.bigtable.table.TableMismatchError
Bases: exceptions.ValueError
Row from another table.
exception google.cloud.bigtable.table.TooManyMutationsError
Bases: exceptions.ValueError
The number of mutations for bulk request is too big.
6.7 Table Admin API
After creating an Instance, you can interact with individual tables, groups of tables or column families within a
table.
If you want a comprehensive list of all existing tables in an instance, make a ListTables API request with
Instance.list_tables():
>>> instance.list_tables()
[<google.cloud.bigtable.table.Table at 0x7ff6a1de8f50>,
<google.cloud.bigtable.table.Table at 0x7ff6a1de8350>]
To create a local Table object:
table = instance.table(table_id)
Even if this Table already has been created with the API, you'll want this object to use as a parent of a
ColumnFamily or Row.
After creating the table object, make a CreateTable API request with create():
table.create()
If you would like to initially split the table into several tablets (tablets are similar to HBase regions):
table.create(initial_split_keys=['s1', 's2'])
Make a DeleteTable API request with delete():
table.delete()
Though there is no official method for retrieving column families associated with a table, the GetTable API method
returns a table object with the names of the column families.
To retrieve the list of column families use list_column_families():
column_families = table.list_column_families()
column_family = table.column_family(column_family_id)
There is no real reason to use this factory unless you intend to create or delete a column family.
In addition, you can specify an optional gc_rule (a GarbageCollectionRule or similar):
column_family = table.column_family(column_family_id,
gc_rule=gc_rule)
This rule helps the backend determine when and how to clean up old cells in the column family.
See Column Families for more information about GarbageCollectionRule and related classes.
After creating the column family object, make a CreateColumnFamily API request with
ColumnFamily.create():
column_family.create()
Delete the column family with delete():
column_family.delete()
Update the column family with update():
column_family.update()
Now we go down the final step of the hierarchy from Table to Row as well as streaming data directly via a Table.
Head next to learn about the Data API.
6.8 Column Families
When creating a ColumnFamily, it is possible to set garbage collection rules for expired data.
By setting a rule, cells in the table matching the rule will be deleted during periodic garbage collection (which executes
opportunistically in the background).
The types MaxAgeGCRule, MaxVersionsGCRule, GCRuleUnion and GCRuleIntersection can all be
used as the optional gc_rule argument in the ColumnFamily constructor. This value is then used in the
create() and update() methods.
These rules can be nested arbitrarily, with a MaxAgeGCRule or MaxVersionsGCRule at the lowest level of the
nesting:
import datetime
from google.cloud.bigtable.column_family import (
    GCRuleUnion, MaxAgeGCRule, MaxVersionsGCRule)

max_age = datetime.timedelta(days=3)
rule1 = MaxAgeGCRule(max_age)
rule2 = MaxVersionsGCRule(1)
gc_rule = GCRuleUnion(rules=[rule1, rule2])
class google.cloud.bigtable.column_family.ColumnFamily(column_family_id, table, gc_rule=None)
Representation of a Google Cloud Bigtable Column Family.
Parameters
column_family_id (str) The ID of the column family. Must be of the form
[_a-zA-Z0-9][-_.a-zA-Z0-9]*.
table (Table) The table that owns the column family.
gc_rule (GarbageCollectionRule) (Optional) The garbage collection settings for this column family.
create()
Create this column family.
delete()
Delete this column family.
name
Column family name used in requests.
Note: This property will not change if column_family_id does not, but the return value is not cached.
to_pb()
Converts the column family to a protobuf.
Return type table_v2_pb2.ColumnFamily
Returns The converted current object.
update()
Update this column family.
Note: Only the GC rule can be updated. By changing the column family ID, you will simply be referring
to a different column family.
class google.cloud.bigtable.column_family.GCRuleIntersection(rules)
Bases: google.cloud.bigtable.column_family.GarbageCollectionRule
Intersection of garbage collection rules.
Parameters rules (list) List of GarbageCollectionRule.
to_pb()
Converts the intersection into a single GC rule as a protobuf.
Return type table_v2_pb2.GcRule
Returns The converted current object.
class google.cloud.bigtable.column_family.GCRuleUnion(rules)
Bases: google.cloud.bigtable.column_family.GarbageCollectionRule
Union of garbage collection rules.
Parameters rules (list) List of GarbageCollectionRule.
to_pb()
Converts the union into a single GC rule as a protobuf.
Return type table_v2_pb2.GcRule
Returns The converted current object.
Note: A string gc_expression can also be used with API requests, but that value would be superseded by
a gc_rule. As a result, we don't support that feature and instead support it via the native classes.
class google.cloud.bigtable.column_family.MaxAgeGCRule(max_age)
Bases: google.cloud.bigtable.column_family.GarbageCollectionRule
Garbage collection limiting the age of a cell.
Parameters max_age (datetime.timedelta) The maximum age allowed for a cell in the
table.
to_pb()
Converts the garbage collection rule to a protobuf.
Return type table_v2_pb2.GcRule
Returns The converted current object.
class google.cloud.bigtable.column_family.MaxVersionsGCRule(max_num_versions)
Bases: google.cloud.bigtable.column_family.GarbageCollectionRule
Garbage collection limiting the number of versions of a cell.
Parameters max_num_versions (int) The maximum number of versions
to_pb()
Converts the garbage collection rule to a protobuf.
Return type table_v2_pb2.GcRule
Returns The converted current object.
6.9 Bigtable Row
class google.cloud.bigtable.row.AppendRow(row_key, table)
Google Cloud Bigtable Row for sending append mutations.
Parameters
row_key (bytes) The key for the current row.
table (Table) The table that owns the row.
append_cell_value(column_family_id, column, value)
Appends a value to an existing cell.
Note: This method adds a read-modify rule protobuf to the accumulated read-modify rules on this row,
but does not make an API request. To actually send an API request (with the rules) to the Google Cloud
Bigtable API, call commit().
Parameters
column_family_id (str) The column family that contains the column. Must be of
the form [_a-zA-Z0-9][-_.a-zA-Z0-9]*.
column (bytes) The column within the column family where the cell is located.
value (bytes) The value to append to the existing value in the cell. If the targeted
cell is unset, it will be treated as containing the empty string.
clear()
Removes all currently accumulated modifications on current row.
commit()
Makes a ReadModifyWriteRow API request.
This commits modifications made by append_cell_value() and increment_cell_value().
If no modifications were made, makes no API request and just returns {}.
Modifies a row atomically, reading the latest existing timestamp / value from the specified columns and
writing a new value by appending / incrementing. The new cell created uses either the current server time
or the highest timestamp of a cell in that column (if it exceeds the server time).
After committing the accumulated mutations, resets the local mutations.
>>> append_row.commit()
{
u'col-fam-id': {
b'col-name1': [
(b'cell-val', datetime.datetime(...)),
(b'cell-val-newer', datetime.datetime(...)),
],
b'col-name2': [
(b'altcol-cell-val', datetime.datetime(...)),
],
},
u'col-fam-id2': {
b'col-name3-but-other-fam': [
(b'foo', datetime.datetime(...)),
],
},
}
Returns The new contents of all modified cells. Returned as a dictionary of column families,
each of which holds a dictionary of columns. Each column contains a list of cells modified.
Each cell is represented with a two-tuple with the value (in bytes) and the timestamp for the
cell.
Raises ValueError if the number of mutations exceeds the MAX_MUTATIONS.
increment_cell_value(column_family_id, column, int_value)
Increments a value in an existing cell.
Note: This method adds a read-modify rule protobuf to the accumulated read-modify rules on this row,
but does not make an API request. To actually send an API request (with the rules) to the Google Cloud
Bigtable API, call commit().
Parameters
column_family_id (str) The column family that contains the column. Must be of
the form [_a-zA-Z0-9][-_.a-zA-Z0-9]*.
column (bytes) The column within the column family where the cell is located.
int_value (int) The value to increment the existing value in the cell by. If the
targeted cell is unset, it will be treated as containing a zero. Otherwise, the targeted cell
must contain an 8-byte value (interpreted as a 64-bit big-endian signed integer), or the
entire request will fail.
row_key
Row key.
Return type bytes
Returns The key for the current row.
table
Row table.
Return type Table
Returns The table that owns the row.
class google.cloud.bigtable.row.ConditionalRow(row_key, table, filter_)
Bases: google.cloud.bigtable.row._SetDeleteRow
Google Cloud Bigtable Row for sending mutations conditionally.
Each mutation has an associated state: True or False. When commit()-ed, the mutations for the True
state will be applied if the filter matches any cells in the row, otherwise the False state will be applied.
A ConditionalRow accumulates mutations in the same way a DirectRow does:
set_cell()
delete()
delete_cell()
delete_cells()
with the only change being the extra state parameter:
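A sketch (the filter, family and column names are placeholders):
row = table.row(b'row-key', filter_=row_filter)
row.set_cell(u'fam', b'col', b'cell-val', state=True)
row.delete_cell(u'fam', b'other-col', state=False)
row.commit()  # CheckAndMutateRow API request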
Note: As with DirectRow, to actually send these mutations to the Google Cloud Bigtable API, you must call
commit().
Parameters
row_key (bytes) The key for the current row.
table (Table) The table that owns the row.
filter_ (RowFilter) Filter to be used for conditional mutations.
clear()
Removes all currently accumulated mutations on the current row.
commit()
Makes a CheckAndMutateRow API request.
If no mutations have been created in the row, no request is made.
The mutations will be applied conditionally, based on whether the filter matches any cells in the
ConditionalRow or not. (Each method which adds a mutation has a state parameter for this purpose.)
Mutations are applied atomically and in order, meaning that earlier mutations can be masked / negated by
later ones. Cells already present in the row are left unchanged unless explicitly changed by a mutation.
After committing the accumulated mutations, resets the local mutations.
Return type bool
Returns Flag indicating if the filter was matched (which also indicates which set of mutations
were applied by the server).
Raises ValueError if the number of mutations exceeds the MAX_MUTATIONS.
delete(state=True)
Deletes this row from the table.
Note: This method adds a mutation to the accumulated mutations on this row, but does not make an
API request. To actually send an API request (with the mutations) to the Google Cloud Bigtable API, call
commit().
Parameters state (bool) (Optional) The state that the mutation should be applied in. De-
faults to True.
delete_cell(column_family_id, column, time_range=None, state=True)
Deletes cell in this row.
Note: This method adds a mutation to the accumulated mutations on this row, but does not make an
API request. To actually send an API request (with the mutations) to the Google Cloud Bigtable API, call
commit().
Parameters
column_family_id (str) The column family that contains the column or columns
with cells being deleted. Must be of the form [_a-zA-Z0-9][-_.a-zA-Z0-9]*.
column (bytes) The column within the column family that will have a cell deleted.
time_range (TimestampRange) (Optional) The range of time within which cells
should be deleted.
state (bool) (Optional) The state that the mutation should be applied in. Defaults to
True.
delete_cells(column_family_id, columns, time_range=None, state=True)
Deletes cells in this row.
Note: This method adds a mutation to the accumulated mutations on this row, but does not make an
API request. To actually send an API request (with the mutations) to the Google Cloud Bigtable API, call
commit().
Parameters
column_family_id (str) The column family that contains the column or columns
with cells being deleted. Must be of the form [_a-zA-Z0-9][-_.a-zA-Z0-9]*.
columns (list of str / unicode, or object) The columns within the column
family that will have cells deleted. If ALL_COLUMNS is used then the entire column
family will be deleted from the row.
time_range (TimestampRange) (Optional) The range of time within which cells
should be deleted.
state (bool) (Optional) The state that the mutation should be applied in. Defaults to
True.
row_key
Row key.
Return type bytes
Returns The key for the current row.
set_cell(column_family_id, column, value, timestamp=None, state=True)
Sets a value in this row.
The cell is determined by the row_key of this ConditionalRow and the column. The column must
be in an existing ColumnFamily (as determined by column_family_id).
Note: This method adds a mutation to the accumulated mutations on this row, but does not make an
API request. To actually send an API request (with the mutations) to the Google Cloud Bigtable API, call
commit().
Parameters
column_family_id (str) The column family that contains the column. Must be of
the form [_a-zA-Z0-9][-_.a-zA-Z0-9]*.
column (bytes) The column within the column family where the cell is located.
value (bytes or int) The value to set in the cell. If an integer is used, will be interpreted
as a 64-bit big-endian signed integer (8 bytes).
timestamp (datetime.datetime) (Optional) The timestamp of the operation.
state (bool) (Optional) The state that the mutation should be applied in. Defaults to
True.
table
Row table.
Return type Table
Returns The table that owns the row.
class google.cloud.bigtable.row.DirectRow(row_key, table)
Bases: google.cloud.bigtable.row._SetDeleteRow
Google Cloud Bigtable Row for sending direct mutations.
These mutations directly set or delete cell contents:
set_cell()
delete()
delete_cell()
delete_cells()
These methods can be used directly:
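A sketch (family and column names are placeholders):
row = table.row(b'row-key1')
row.set_cell(u'fam', b'col1', b'cell-val')
row.delete_cell(u'fam', b'col2')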
Note: A DirectRow accumulates mutations locally via the set_cell(), delete(), delete_cell()
and delete_cells() methods. To actually send these mutations to the Google Cloud Bigtable API, you
must call commit().
Parameters
row_key (bytes) The key for the current row.
table (Table) The table that owns the row.
clear()
Removes all currently accumulated mutations on the current row.
commit()
Makes a MutateRow API request.
If no mutations have been created in the row, no request is made.
Mutations are applied atomically and in order, meaning that earlier mutations can be masked / negated by
later ones. Cells already present in the row are left unchanged unless explicitly changed by a mutation.
After committing the accumulated mutations, resets the local mutations to an empty list.
Raises ValueError if the number of mutations exceeds the MAX_MUTATIONS.
delete()
Deletes this row from the table.
Note: This method adds a mutation to the accumulated mutations on this row, but does not make an
API request. To actually send an API request (with the mutations) to the Google Cloud Bigtable API, call
commit().
delete_cell(column_family_id, column, time_range=None)
Deletes cell in this row.
Note: This method adds a mutation to the accumulated mutations on this row, but does not make an
API request. To actually send an API request (with the mutations) to the Google Cloud Bigtable API, call
commit().
Parameters
column_family_id (str) The column family that contains the column or columns
with cells being deleted. Must be of the form [_a-zA-Z0-9][-_.a-zA-Z0-9]*.
column (bytes) The column within the column family that will have a cell deleted.
time_range (TimestampRange) (Optional) The range of time within which cells
should be deleted.
delete_cells(column_family_id, columns, time_range=None)
Deletes cells in this row.
Note: This method adds a mutation to the accumulated mutations on this row, but does not make an
API request. To actually send an API request (with the mutations) to the Google Cloud Bigtable API, call
commit().
Parameters
column_family_id (str) The column family that contains the column or columns
with cells being deleted. Must be of the form [_a-zA-Z0-9][-_.a-zA-Z0-9]*.
columns (list of str / unicode, or object) The columns within the column
family that will have cells deleted. If ALL_COLUMNS is used then the entire column
family will be deleted from the row.
time_range (TimestampRange) (Optional) The range of time within which cells
should be deleted.
row_key
Row key.
Return type bytes
Returns The key for the current row.
set_cell(column_family_id, column, value, timestamp=None)
Sets a value in this row.
The cell is determined by the row_key of this DirectRow and the column. The column must be in
an existing ColumnFamily (as determined by column_family_id).
Note: This method adds a mutation to the accumulated mutations on this row, but does not make an
API request. To actually send an API request (with the mutations) to the Google Cloud Bigtable API, call
commit().
Parameters
column_family_id (str) The column family that contains the column. Must be of
the form [_a-zA-Z0-9][-_.a-zA-Z0-9]*.
column (bytes) The column within the column family where the cell is located.
value (bytes or int) The value to set in the cell. If an integer is used, will be interpreted
as a 64-bit big-endian signed integer (8 bytes).
timestamp (datetime.datetime) (Optional) The timestamp of the operation.
table
Row table.
Return type Table
Returns The table that owns the row.
google.cloud.bigtable.row.MAX_MUTATIONS = 100000
The maximum number of mutations that a row can accumulate.
class google.cloud.bigtable.row.Row(row_key, table)
Bases: object
Base representation of a Google Cloud Bigtable Row.
This class has three subclasses corresponding to the three RPC methods for sending row mutations:
DirectRow for MutateRow
ConditionalRow for CheckAndMutateRow
AppendRow for ReadModifyWriteRow
Parameters
row_key (bytes) The key for the current row.
table (Table) The table that owns the row.
row_key
Row key.
Return type bytes
Returns The key for the current row.
table
Row table.
Return type Table
Returns The table that owns the row.
6.10 Row Data
Container for Google Cloud Bigtable Cells and Streaming Row Contents.
class google.cloud.bigtable.row_data.Cell(value, timestamp, labels=())
Bases: object
Representation of a Google Cloud Bigtable Cell.
Parameters
value (bytes) The value stored in the cell.
timestamp (datetime.datetime) The timestamp when the cell was stored.
labels (list) (Optional) List of strings. Labels applied to the cell.
classmethod from_pb(cell_pb)
Create a new cell from a Cell protobuf.
Parameters cell_pb (_generated.data_pb2.Cell) The protobuf to convert.
Return type Cell
Returns The cell corresponding to the protobuf.
exception google.cloud.bigtable.row_data.InvalidChunk
Bases: exceptions.RuntimeError
Exception raised due to invalid chunk data from the back-end.
exception google.cloud.bigtable.row_data.InvalidReadRowsResponse
Bases: exceptions.RuntimeError
Exception raised due to invalid response data from the back-end.
class google.cloud.bigtable.row_data.PartialCellData(row_key, family_name, qualifier, timestamp_micros, labels=(), value=b'')
Bases: object
Bases: object
Representation of partial cell in a Google Cloud Bigtable Table.
These are expected to be updated directly from a _generated.bigtable_service_messages_pb2.ReadRowsResponse
Parameters
row_key (bytes) The key for the row holding the (partial) cell.
family_name (str) The family name of the (partial) cell.
qualifier (bytes) The column qualifier of the (partial) cell.
timestamp_micros (int) The timestamp (in microseconds) of the (partial) cell.
labels (list of str) labels assigned to the (partial) cell
value (bytes) The (accumulated) value of the (partial) cell.
append_value(value)
Append bytes from a new chunk to value.
Parameters value (bytes) bytes to append
class google.cloud.bigtable.row_data.PartialRowData(row_key)
Bases: object
Representation of partial row in a Google Cloud Bigtable Table.
These are expected to be updated directly from a _generated.bigtable_service_messages_pb2.ReadRowsResponse
Parameters row_key (bytes) The key for the row holding the (partial) data.
cells
Property returning all the cells accumulated on this partial row.
Return type dict
Returns Dictionary of the Cell objects accumulated. This dictionary has two-levels of keys
(first for column families and second for column names/qualifiers within a family). For a
given column, a list of Cell objects is stored.
row_key
Getter for the current (partial) row's key.
Return type bytes
Returns The current (partial) row's key.
to_dict()
Convert the cells to a dictionary.
This is intended to be used with HappyBase, so the column family and column qualifiers are combined (with
':').
Return type dict
Returns Dictionary containing all the data in the cells of this row.
class google.cloud.bigtable.row_data.PartialRowsData(response_iterator)
Bases: object
Convenience wrapper for consuming a ReadRows streaming response.
Parameters response_iterator (GrpcRendezvous) A streaming iterator returned from
a ReadRows request.
cancel()
Cancels the iterator, closing the stream.
consume_all(max_loops=None)
Consume the streamed responses until there are no more.
This simply calls consume_next() until there are no more to consume.
Parameters max_loops (int) (Optional) Maximum number of times to try to consume an
additional ReadRowsResponse. You can use this to avoid long wait times.
consume_next()
Consume the next ReadRowsResponse from the stream.
Parse the response and its chunks into a new/existing row in _rows. Rows are returned in order by row
key.
rows
Property returning all rows accumulated from the stream.
Return type dict
Returns Dictionary of the rows accumulated, keyed by row key.
6.11 Bigtable Row Filters
It is possible to use a RowFilter when adding mutations to a ConditionalRow and when reading row data with
read_row() or read_rows().
As laid out in the RowFilter definition, the following basic filters are provided:
SinkFilter
PassAllFilter
BlockAllFilter
RowKeyRegexFilter
RowSampleFilter
FamilyNameRegexFilter
ColumnQualifierRegexFilter
TimestampRangeFilter
ColumnRangeFilter
ValueRegexFilter
ValueRangeFilter
CellsRowOffsetFilter
CellsRowLimitFilter
CellsColumnLimitFilter
StripValueTransformerFilter
ApplyLabelFilter
In addition, these filters can be combined into composite filters with
RowFilterChain
RowFilterUnion
ConditionalRowFilter
These rules can be nested arbitrarily, with a basic filter at the lowest level. For example:
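A sketch combining basic filters (the particular filters chosen here are arbitrary):
from google.cloud.bigtable.row_filters import (
    CellsColumnLimitFilter, RowFilterChain, StripValueTransformerFilter)

filter1 = StripValueTransformerFilter(True)
filter2 = CellsColumnLimitFilter(1)
chain = RowFilterChain(filters=[filter1, filter2])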
class google.cloud.bigtable.row_filters.ApplyLabelFilter(label)
Filter to apply labels to cells.
Note: Due to a technical limitation of the backend, it is not currently possible to apply multiple labels to a cell.
Parameters label (str) Label to apply to cells in the output row. Values must be at most 15
characters long, and match the pattern [a-z0-9\-]+.
to_pb()
Converts the row filter to a protobuf.
Return type data_v2_pb2.RowFilter
Returns The converted current object.
class google.cloud.bigtable.row_filters.BlockAllFilter(flag)
Bases: google.cloud.bigtable.row_filters._BoolFilter
Row filter that doesn't match any cells.
Parameters flag (bool) Does not match any cells, regardless of input. Useful for temporarily
disabling just part of a filter.
to_pb()
Converts the row filter to a protobuf.
Return type data_v2_pb2.RowFilter
Returns The converted current object.
class google.cloud.bigtable.row_filters.CellsColumnLimitFilter(num_cells)
Bases: google.cloud.bigtable.row_filters._CellCountFilter
Row filter to limit cells in a column.
Parameters num_cells (int) Matches only the most recent N cells within each column. This
filters a (family name, column) pair, based on timestamps of each cell.
to_pb()
Converts the row filter to a protobuf.
Return type data_v2_pb2.RowFilter
Returns The converted current object.
class google.cloud.bigtable.row_filters.CellsRowLimitFilter(num_cells)
Bases: google.cloud.bigtable.row_filters._CellCountFilter
Row filter to limit cells in a row.
Parameters num_cells (int) Matches only the first N cells of the row.
to_pb()
Converts the row filter to a protobuf.
Return type data_v2_pb2.RowFilter
Returns The converted current object.
class google.cloud.bigtable.row_filters.CellsRowOffsetFilter(num_cells)
Bases: google.cloud.bigtable.row_filters._CellCountFilter
Row filter to skip cells in a row.
Parameters num_cells (int) Skips the first N cells of the row.
to_pb()
Converts the row filter to a protobuf.
Return type data_v2_pb2.RowFilter
Returns The converted current object.
class google.cloud.bigtable.row_filters.ColumnQualifierRegexFilter(regex)
Bases: google.cloud.bigtable.row_filters._RegexFilter
Row filter for a column qualifier regular expression.
The regex must be a valid RE2 pattern. See Google's RE2 reference for the accepted syntax.
Note: Special care must be taken with the expression used. Since each of these properties can contain arbitrary
bytes, the \C escape sequence must be used if a true wildcard is desired. The . character will not match the
new line character \n, which may be present in a binary value.
Parameters regex (bytes) A regular expression (RE2) to match cells from column that match
this regex (irrespective of column family).
to_pb()
Converts the row filter to a protobuf.
Return type data_v2_pb2.RowFilter
Returns The converted current object.
class google.cloud.bigtable.row_filters.ColumnRangeFilter(column_family_id, start_column=None, end_column=None, inclusive_start=None, inclusive_end=None)
Bases: google.cloud.bigtable.row_filters.RowFilter
class google.cloud.bigtable.row_filters.ConditionalRowFilter(base_filter, true_filter=None, false_filter=None)
Conditional row filter which exhibits ternary behavior.
Note: The base_filter does not execute atomically with the true and false filters, which may lead to
inconsistent or unexpected results.
Additionally, executing a ConditionalRowFilter has poor performance on the server, especially when
false_filter is set.
Parameters
base_filter (RowFilter) The filter to condition on before executing the true/false filters.
true_filter (RowFilter) (Optional) The filter to execute if there are any cells
matching base_filter. If not provided, no results will be returned in the true case.
false_filter (RowFilter) (Optional) The filter to execute if there are no cells
matching base_filter. If not provided, no results will be returned in the false case.
to_pb()
Converts the row filter to a protobuf.
Return type data_v2_pb2.RowFilter
Returns The converted current object.
class google.cloud.bigtable.row_filters.FamilyNameRegexFilter(regex)
Bases: google.cloud.bigtable.row_filters._RegexFilter
Row filter for a family name regular expression.
The regex must be a valid RE2 pattern. See Google's RE2 reference for the accepted syntax.
Parameters regex (str) A regular expression (RE2) to match cells from columns in a given
column family. For technical reasons, the regex must not contain the ':' character, even if it is
not being used as a literal.
to_pb()
Converts the row filter to a protobuf.
Return type data_v2_pb2.RowFilter
Returns The converted current object.
class google.cloud.bigtable.row_filters.PassAllFilter(flag)
Bases: google.cloud.bigtable.row_filters._BoolFilter
Row filter equivalent to not filtering at all.
Parameters flag (bool) Matches all cells, regardless of input. Functionally equivalent to leav-
ing filter unset, but included for completeness.
to_pb()
Converts the row filter to a protobuf.
Return type data_v2_pb2.RowFilter
Returns The converted current object.
class google.cloud.bigtable.row_filters.RowFilter
Bases: object
Basic filter to apply to cells in a row.
These values can be combined via RowFilterChain, RowFilterUnion and
ConditionalRowFilter.
Note: This class is a do-nothing base class for all row filters.
class google.cloud.bigtable.row_filters.RowFilterChain(filters=None)
Bases: google.cloud.bigtable.row_filters._FilterCombination
Chain of row filters.
Sends rows through several filters in sequence. The filters are chained together to process a row. After the
first filter is applied, the second is applied to the filtered output and so on for subsequent filters.
Parameters filters (list) List of RowFilter
to_pb()
Converts the row filter to a protobuf.
Return type data_v2_pb2.RowFilter
Returns The converted current object.
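As a quick illustration of how these classes compose, the following is a minimal sketch (the family name 'fam1' is a hypothetical example) that chains a family-name regex with a per-column cell limit and converts the result to its protobuf form:
from google.cloud.bigtable.row_filters import (
    CellsColumnLimitFilter, FamilyNameRegexFilter, RowFilterChain)

# Keep only the most recent cell from columns in family 'fam1'.
chain = RowFilterChain(filters=[
    FamilyNameRegexFilter('fam1'),
    CellsColumnLimitFilter(1),
])
filter_pb = chain.to_pb()  # the protobuf sent with a read request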
class google.cloud.bigtable.row_filters.RowFilterUnion(filters=None)
Bases: google.cloud.bigtable.row_filters._FilterCombination
Union of row filters.
Sends rows through several filters simultaneously, then merges / interleaves all the filtered results together.
If multiple cells are produced with the same column and timestamp, they will all appear in the output row in an
unspecified mutual order.
Parameters filters (list) List of RowFilter
to_pb()
Converts the row filter to a protobuf.
Return type data_v2_pb2.RowFilter
Returns The converted current object.
class google.cloud.bigtable.row_filters.RowKeyRegexFilter(regex)
Bases: google.cloud.bigtable.row_filters._RegexFilter
Row filter for a row key regular expression.
The regex must be a valid RE2 pattern. See Google's RE2 reference for the accepted syntax.
Note: Special care must be taken with the expression used. Since each of these properties can contain arbitrary
bytes, the \C escape sequence must be used if a true wildcard is desired. The . character will not match the
new line character \n, which may be present in a binary value.
Parameters regex (bytes) A regular expression (RE2) to match cells from rows with row keys
that satisfy this regex. For a CheckAndMutateRowRequest, this filter is unnecessary since
the row key is already specified.
to_pb()
Converts the row filter to a protobuf.
Return type data_v2_pb2.RowFilter
Returns The converted current object.
class google.cloud.bigtable.row_filters.RowSampleFilter(sample)
Bases: google.cloud.bigtable.row_filters.RowFilter
Matches all cells from a row with probability p.
Parameters sample (float) The probability of matching a cell (must be in the interval [0,
1]).
to_pb()
Converts the row filter to a protobuf.
Return type data_v2_pb2.RowFilter
Returns The converted current object.
class google.cloud.bigtable.row_filters.SinkFilter(flag)
Bases: google.cloud.bigtable.row_filters._BoolFilter
Advanced row filter to skip parent filters.
Parameters flag (bool) ADVANCED USE ONLY. Hook for introspection into the row fil-
ter. Outputs all cells directly to the output of the read rather than to any parent filter. Can-
not be used within the predicate_filter, true_filter, or false_filter of a
ConditionalRowFilter.
to_pb()
Converts the row filter to a protobuf.
Return type data_v2_pb2.RowFilter
Returns The converted current object.
class google.cloud.bigtable.row_filters.StripValueTransformerFilter(flag)
Bases: google.cloud.bigtable.row_filters._BoolFilter
Row filter that transforms cells into an empty string (0 bytes).
Parameters flag (bool) If True, replaces each cell's value with the empty string. As the name
indicates, this is more useful as a transformer than a generic query / filter.
to_pb()
Converts the row filter to a protobuf.
Return type data_v2_pb2.RowFilter
Returns The converted current object.
class google.cloud.bigtable.row_filters.TimestampRange(start=None, end=None)
Bases: object
Range of time with inclusive lower and exclusive upper bounds.
Parameters
start (datetime.datetime) (Optional) The (inclusive) lower bound of the times-
tamp range. If omitted, defaults to Unix epoch.
end (datetime.datetime) (Optional) The (exclusive) upper bound of the timestamp
range. If omitted, no upper bound is used.
to_pb()
Converts the TimestampRange to a protobuf.
Return type data_v2_pb2.TimestampRange
Returns The converted current object.
class google.cloud.bigtable.row_filters.TimestampRangeFilter(range_)
Bases: google.cloud.bigtable.row_filters.RowFilter
Row filter that limits cells to a range of time.
Parameters range_ (TimestampRange) Range of time that cells should match against.
to_pb()
Converts the row filter to a protobuf.
First converts the range_ on the current object to a protobuf and then uses it in the
timestamp_range_filter field.
Return type data_v2_pb2.RowFilter
Returns The converted current object.
class google.cloud.bigtable.row_filters.ValueRegexFilter(regex)
Bases: google.cloud.bigtable.row_filters._RegexFilter
Row filter for a cell value regular expression.
The regex must be a valid RE2 pattern. See Google's RE2 reference for the accepted syntax.
Note: Special care must be taken with the expression used. Since each of these properties can contain arbitrary
bytes, the \C escape sequence must be used if a true wildcard is desired. The . character will not match the
new line character \n, which may be present in a binary value.
Parameters regex (bytes) A regular expression (RE2) to match cells with values that match
this regex.
to_pb()
Converts the row filter to a protobuf.
Return type data_v2_pb2.RowFilter
Returns The converted current object.
6.12 Data API
After creating a Table and some column families, you are ready to store and retrieve data.
As explained in the table overview, tables can have many column families.
As described below, a table can also have many rows which are specified by row keys.
Within a row, data is stored in a cell. A cell simply has a value (as bytes) and a timestamp. The number of cells
in each row can be different, depending on what was stored in each row.
Each cell lies in a column (not a column family). A column is really just a more specific modifier within a
column family. A column can be present in every column family, in only one, or anywhere in between.
Within a column family there can be many columns. For example, within the column family foo we could have
columns bar and baz. These would typically be represented as foo:bar and foo:baz.
Since data is stored in cells, which are stored in rows, we use the metaphor of a row in classes that are used to modify
(write, update, delete) data in a Table.
There are three ways to modify data in a table, described by the MutateRow, CheckAndMutateRow and ReadModify-
WriteRow API methods.
The direct way is via MutateRow which involves simply adding, overwriting or deleting cells. The
DirectRow class handles direct mutations.
The conditional way is via CheckAndMutateRow. This method first checks if some filter is matched in a
given row, then applies one of two sets of mutations, depending on whether a match occurred. (These mutation
sets are called the true mutations and false mutations.) The ConditionalRow class handles conditional
mutations.
The append way is via ReadModifyWriteRow. This simply appends (as bytes) or increments (as an integer)
data in a presumed existing cell in a row. The AppendRow class handles append mutations.
Row Factory
A single factory can be used to create any of the three row types. To create a DirectRow:
row = table.row(row_key)
Unlike the string values we've used before, the row key must be bytes.
To create a ConditionalRow, first create a RowFilter and then pass it to the factory, as in the sketch below.
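A minimal sketch, assuming table and row_key as above and using a RowSampleFilter purely for illustration:
from google.cloud.bigtable.row_filters import RowSampleFilter

filter_ = RowSampleFilter(0.25)
row = table.row(row_key, filter_=filter_)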
To create an AppendRow, pass an append flag to the same factory:
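A minimal sketch, assuming the factory accepts the append keyword:
row = table.row(row_key, append=True)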
Building Up Mutations
In all three cases, a set of mutations (or two sets) is built up on a row before being sent off in a batch via
row.commit()
Direct Mutations
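A minimal sketch of adding a cell with set_cell(); the column_family_id, column, value and timestamp names are placeholders:
row.set_cell(column_family_id, column, value,
             timestamp=timestamp)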
If the timestamp is omitted, the current time on the Google Cloud Bigtable server will be used when the cell
is stored.
The value can either be bytes or an integer, which will be converted to bytes as a signed 64-bit integer.
delete_cell() deletes all cells (i.e. for all timestamps) in a given column:
row.delete_cell(column_family_id, column)
The deleted cells can also be limited to a TimestampRange:
row.delete_cell(column_family_id, column,
                time_range=time_range)
delete_cells() does the same thing as delete_cell(), but accepts a list of columns in a column
family rather than a single one.
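For example (a sketch; column1 and column2 are placeholder column names):
row.delete_cells(column_family_id, [column1, column2],
                 time_range=time_range)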
In addition, if we want to delete cells from every column in a column family, the special ALL_COLUMNS value
can be used:
row.delete_cells(column_family_id, row.ALL_COLUMNS,
time_range=time_range)
Finally, to delete an entire row, use delete():
row.delete()
Conditional Mutations
Making conditional modifications is essentially identical to direct modifications: it uses the exact same methods to
accumulate mutations.
However, each mutation added must specify a state: whether the mutation will be applied if the filter matches
or if it fails to match.
For example:
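A minimal sketch, assuming row is a ConditionalRow created with a filter; the state keyword selects the mutation set:
row.set_cell(column_family_id, column, value, state=True)
row.delete_cell(column_family_id, column, state=False)
row.commit()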
Append Mutations
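A minimal sketch of building up append mutations on an AppendRow (bytes_value and int_value are placeholders):
row.append_cell_value(column_family_id, column, bytes_value)
row.increment_cell_value(column_family_id, column, int_value)
row.commit()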
Since only bytes are stored in a cell, the cell value is decoded as a signed 64-bit integer before being incremented.
(This happens on the Google Cloud Bigtable server, not in the library.)
Notice that no timestamp was specified. This is because append mutations operate on the latest value of the specified
column.
If there are no cells in the specified column, then the empty string (bytes case) or zero (integer case) is the
assumed value.
Starting Fresh
If accumulated mutations need to be dropped, use clear():
row.clear()
To make a ReadRows API request for a single row key, use Table.read_row():
>>> row_data = table.read_row(row_key)
>>> row_data.cells
{
    u'fam1': {
        b'col1': [
            <google.cloud.bigtable.row_data.Cell at 0x7f80d150ef10>,
        ],
    },
    u'fam2': {
        b'col3': [
            <google.cloud.bigtable.row_data.Cell at 0x7f80d150ef10>,
            <google.cloud.bigtable.row_data.Cell at 0x7f80d150ef10>,
            <google.cloud.bigtable.row_data.Cell at 0x7f80d150ef10>,
        ],
    },
}
>>> cell = row_data.cells[u'fam1'][b'col1'][0]
>>> cell
<google.cloud.bigtable.row_data.Cell at 0x7f80d150ef10>
>>> cell.value
b'val1'
>>> cell.timestamp
datetime.datetime(2016, 2, 27, 3, 41, 18, 122823, tzinfo=<UTC>)
Rather than returning a DirectRow or similar class, this method returns a PartialRowData instance. This class
is used for reading and parsing data rather than for modifying data (as DirectRow is).
A filter can also be applied to the results:
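For example (a sketch, with filter_ an existing RowFilter):
row_data = table.read_row(row_key, filter_=filter_)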
The allowable filter_ values are the same as those used for a ConditionalRow. For more information, see the
Table.read_row() documentation.
row_data = table.read_rows()
Using gRPC over HTTP/2, a continual stream of responses will be delivered. In particular:
consume_next() pulls the next result from the stream, parses it and stores it on the PartialRowsData instance
consume_all() pulls results from the stream until there are no more
cancel() closes the stream
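A minimal sketch of consuming the stream in full:
row_data = table.read_rows()
row_data.consume_all()
rows = row_data.rows  # mapping of row key to PartialRowData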
See the PartialRowsData documentation for more information.
As with Table.read_row(), an optional filter_ can be applied. In addition, a start_key and / or end_key
can be supplied for the stream, a limit can be set, and a boolean allow_row_interleaving can be specified
to allow faster streamed results at the potential cost of non-sequential reads.
See the Table.read_rows() documentation for more information on the optional arguments.
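For example, a sketch restricting the stream to a key range (the keys shown are placeholders):
row_data = table.read_rows(start_key=b'row-key-0000',
                           end_key=b'row-key-9999',
                           limit=100, filter_=filter_)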
keys_iterator = table.sample_row_keys()
The returned row keys will delimit contiguous sections of the table of approximately equal size, which can be used to
break up the data for distributed tasks like MapReduce.
next_key = keys_iterator.next()
keys_iterator.cancel()
API requests are sent to the Google Cloud Bigtable API via RPC over HTTP/2. In order to support this, we'll rely on
gRPC. We are working with the gRPC team to rapidly make the install story more user-friendly.
Get started by learning about the Client on the Base for Everything page.
In the hierarchy of API concepts:
a Client owns an Instance
an Instance owns a Table
a Table owns a ColumnFamily
a Table owns a Row (and all the cells in the row)
CHAPTER 7
Datastore
7.1 Datastore Client
class google.cloud.datastore.client.Client(project=None, namespace=None, credentials=None, _http=None, _use_grpc=None)
Convenience wrapper for invoking APIs/factories w/ a project.
Parameters
project (str) (optional) The project to pass to proxied API methods.
namespace (str) (optional) namespace to pass to proxied API methods.
credentials (Credentials) (Optional) The OAuth2 Credentials to use for this
client. If not passed (and if no _http object is passed), falls back to the default inferred
from the environment.
_http (Session) (Optional) HTTP object to make requests. Can be any object that
defines request() with the same interface as requests.Session.request(). If
not passed, an _http object is created that is bound to the credentials for the current
object. This parameter should be considered private, and could change in the future.
_use_grpc (bool) (Optional) Explicitly specifies whether to use the gRPC transport
(via GAX) or HTTP. If unset, falls back to the GOOGLE_CLOUD_DISABLE_GRPC envi-
ronment variable. This parameter should be considered private, and could change in the
future.
SCOPE = ('https://www.googleapis.com/auth/datastore',)
The scopes required for authenticating as a Cloud Datastore consumer.
allocate_ids(incomplete_key, num_ids)
Allocate a list of IDs from a partial key.
Parameters
incomplete_key (google.cloud.datastore.key.Key) Partial key to use
as base for allocated IDs.
num_ids (int) The number of IDs to allocate.
Return type list of google.cloud.datastore.key.Key
Returns The (complete) keys allocated with incomplete_key as root.
Raises ValueError if incomplete_key is not a partial key.
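For example, a minimal sketch allocating ten IDs under a hypothetical kind:
keys = client.allocate_ids(client.key('EntityKind'), 10)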
batch()
Proxy to google.cloud.datastore.batch.Batch.
current_batch
Currently-active batch.
Return type google.cloud.datastore.batch.Batch, or an object implementing its
API, or NoneType (if no batch is active).
Returns The batch/transaction at the top of the batch stack.
current_transaction
Currently-active transaction.
Return type google.cloud.datastore.transaction.Transaction, or an object
implementing its API, or NoneType (if no transaction is active).
Returns The transaction at the top of the batch stack.
delete(key)
Delete the key in the Cloud Datastore.
Note: This is just a thin wrapper over delete_multi(). The backend API does not make a distinction
between a single key or multiple keys in a commit request.
delete_multi(keys)
Delete keys from the Cloud Datastore.
Parameters keys (list of google.cloud.datastore.key.Key) The keys to be
deleted from the Datastore.
get(key, missing=None, deferred=None, transaction=None)
Retrieve an entity from a single key (if it exists).
Note: This is just a thin wrapper over get_multi(). The backend API does not make a distinction
between a single key or multiple keys in a lookup request.
Parameters
key (google.cloud.datastore.key.Key) The key to be retrieved from the
datastore.
missing (list) (Optional) If a list is passed, the key-only entities returned by the
backend as missing will be copied into it.
deferred (list) (Optional) If a list is passed, the keys returned by the backend as
deferred will be copied into it.
transaction (Transaction) (Optional) Transaction to use for read consistency.
If not passed, uses current transaction, if set.
Return type google.cloud.datastore.entity.Entity or NoneType
Returns The requested entity if it exists.
put(entity)
Save an entity in the Cloud Datastore.
Note: This is just a thin wrapper over put_multi(). The backend API does not make a distinction
between a single entity or multiple entities in a commit request.
put_multi(entities)
Save entities in the Cloud Datastore.
Parameters entities (list of google.cloud.datastore.entity.Entity) The
entities to be saved to the datastore.
Raises ValueError if entities is a single entity.
query(**kwargs)
Proxy to google.cloud.datastore.query.Query.
Passes our project.
Using query to search a datastore:
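A minimal sketch (the kind, property and value are hypothetical):
query = client.query(kind='EntityKind')
query.add_filter('property', '=', 'value')
results = list(query.fetch())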
transaction()
Proxy to google.cloud.datastore.transaction.Transaction.
7.2 Entities
Use get() to retrieve an existing entity:
>>> client.get(key)
<Entity('EntityKind', 1234) {'property': 'value'}>
You can then set values on the entity just like you would on any other dictionary:
>>> entity['age'] = 20
>>> entity['name'] = 'JJ'
However, not all types are allowed as a value for a Google Cloud Datastore entity. The following basic types
are supported by the API:
datetime.datetime
Key
bool
float
int (as well as long in Python 2)
unicode (called str in Python 3)
bytes (called str in Python 2)
GeoPoint
None
In addition, three container types are supported:
list
Entity
dict (will just be treated like an Entity without a key or exclude_from_indexes)
Each entry in a list must be one of the value types (basic or container) and each value in an Entity must as
well. In this case an Entity as a container acts as a dict, but also has the special annotations of key and
exclude_from_indexes.
And you can treat an entity like a regular Python dictionary:
>>> sorted(entity.keys())
['age', 'name']
>>> sorted(entity.items())
[('age', 20), ('name', 'JJ')]
Note: When saving an entity to the backend, values which are text (unicode in Python2, str in Python3)
will be saved using the text_value field, after being encoded to UTF-8. When retrieved from the back-end,
such values will be decoded to text again. Values which are bytes (str in Python2, bytes in Python3),
will be saved using the blob_value field, without any decoding / encoding step.
Parameters
key (google.cloud.datastore.key.Key) Optional key to be set on entity.
exclude_from_indexes (tuple of string) Names of fields whose values are
not to be indexed for this entity.
exclude_from_indexes = None
Names of fields which are not to be indexed for this entity.
kind
Get the kind of the current entity.
Note: This relies entirely on the google.cloud.datastore.key.Key set on the entity. That
means that we're not storing the kind of the entity at all, just the properties and a pointer to a Key which
knows its Kind.
7.3 Keys
class google.cloud.datastore.key.Key(*path_args, **kwargs)
An immutable representation of a datastore Key.
Parameters
path_args (tuple of string and integer) May represent a partial (odd
length) or full (even length) key path.
kwargs (dict) Keyword arguments to be passed in.
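For example, a sketch of a full (even length) path under hypothetical kinds:
key = client.key('Parent', 'foo', 'Child', 1234)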
name
Name getter. Based on the last element of path.
Return type str
Returns The (string) name of the key.
namespace
Namespace getter.
Return type str
Returns The namespace of the current key.
parent
The parent of the current key.
Return type google.cloud.datastore.key.Key or NoneType
Returns A new Key instance, whose path consists of all but the last element of current path. If
the current key has only one path element, returns None.
path
Path getter.
Returns a copy so that the key remains immutable.
Return type list of dict
Returns The (key) path of the current key.
project
Project getter.
Return type str
Returns The key's project.
to_legacy_urlsafe()
Convert to a base64-encoded urlsafe string for App Engine.
This is intended to work with the legacy representation of a datastore Key used within Google App
Engine (a so-called Reference). The returned string can be used as the urlsafe argument to
ndb.Key(urlsafe=...). The base64 encoded values will have padding removed.
Note: The string returned by to_legacy_urlsafe is equivalent, but not identical, to the string
returned by ndb.
to_protobuf()
Return a protobuf corresponding to the key.
Return type entity_pb2.Key
Returns The protobuf representing the key.
7.4 Queries
Expressions for add_filter() take the form of:
.add_filter('<property>', '<operator>', <value>)
where property is a property stored on the entity in the datastore and operator is one of OPERATORS (i.e.,
=, <, <=, >, >=):
Parameters
property_name (str) A property name.
operator (str) One of =, <, <=, >, >=.
value (int, str, bool, float, NoneType, datetime.datetime,
google.cloud.datastore.key.Key) The value to filter on.
Raises ValueError if operator is not one of the specified values, or if a filter names
'__key__' but passes an invalid value (a key is required).
ancestor
The ancestor key for the query.
Return type Key or None
Returns The ancestor for the query.
distinct_on
Names of fields used to group query results.
Return type sequence of string
Returns The distinct on fields set on the query.
fetch(limit=None, offset=0, start_cursor=None, end_cursor=None, client=None)
Execute the Query; return an iterator for the matching entities.
For example:
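(A minimal sketch; the kind 'Person' is hypothetical:)
>>> query = client.query(kind='Person')
>>> result = list(query.fetch(limit=5))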
Parameters
limit (int) (Optional) limit passed through to the iterator.
offset (int) (Optional) offset passed through to the iterator.
filters
Filters set on the query.
Return type tuple[str, str, str]
Returns The filters set on the query. The sequence is (property_name, operator,
value).
key_filter(key, operator='=')
Filter on a key.
Parameters
key (google.cloud.datastore.key.Key) The key to filter on.
operator (str) (Optional) One of =, <, <=, >, >=. Defaults to =.
keys_only()
Set the projection to include only keys.
kind
Get the Kind of the Query.
Return type str
Returns The kind for the query.
namespace
This query's namespace.
Return type str or None
Returns The namespace assigned to this query.
order
Names of fields used to sort query results.
Return type sequence of string
Returns The order(s) set on the query.
project
Get the project for this Query.
Return type str
Returns The project for the query.
projection
Field names returned by the query.
Return type sequence of string
Returns Names of fields in query results.
7.5 Transactions
Because it derives from Batch, Transaction also provides put() and delete() methods.
By default, the transaction is rolled back if the transaction block exits with an error:
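A minimal sketch; the raised exception aborts the transaction and triggers the rollback:
with client.transaction():
    client.put(entity)
    raise Exception('Something bad happened')  # rolled back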
Warning:
Inside a transaction, automatically assigned IDs for entities will not be available at save time!
That means, if you try:
>>> with client.transaction():
... entity = Entity(key=client.key('Thing'))
... client.put(entity)
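... print(entity.key.is_partial)  # no ID is assigned inside the block
True
>>> entity.key.is_partial  # after commit, the ID has been assigned
False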
If you don't want to use the context manager you can initialize a transaction manually:
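A minimal sketch, assuming datastore is the imported google.cloud.datastore module:
transaction = client.transaction()
transaction.begin()
entity = datastore.Entity(key=client.key('Thing'))
transaction.put(entity)
transaction.commit()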
begin()
Begins a transaction.
This method is called automatically when entering a with statement, however it can be called explicitly if
you don't want to use a context manager.
Raises ValueError if the transaction has already begun.
commit()
Commits the transaction.
This is called automatically upon exiting a with statement, however it can be called explicitly if you don't
want to use a context manager.
This method has necessary side-effects:
Sets the current transaction's ID to None.
current()
Return the topmost transaction.
Note: If the topmost element on the stack is not a transaction, returns None.
delete(key)
Remember a key to be deleted during commit().
Parameters key (google.cloud.datastore.key.Key) the key to be deleted.
Raises ValueError if the batch is not in progress, if key is not complete, or if the key's
project does not match ours.
id
Getter for the transaction ID.
Return type str
Returns The ID of the current transaction.
mutations
Getter for the changes accumulated by this batch.
Every batch is committed with a single commit request containing all the work to be done as mutations.
Inside a batch, calling put() with an entity, or delete() with a key, builds up the request by adding a
new mutation. This getter returns the protobuf that has been built-up so far.
Note: Any existing properties for the entity will be replaced by those currently set on this instance.
Already-stored properties which do not correspond to keys set on this instance will be removed from the
datastore.
Note: Property values which are text (unicode in Python2, str in Python3) map to string_value in
the datastore; values which are bytes (str in Python2, bytes in Python3) map to blob_value.
When an entity has a partial key, calling commit() sends it as an insert mutation and the key is
completed. On return, the key for the entity passed in is updated to match the key ID assigned by the
server.
Parameters entity (google.cloud.datastore.entity.Entity) the entity to be
saved.
Raises ValueError if the batch is not in progress, if entity has no key assigned, or if the key's
project does not match ours.
rollback()
Rolls back the current transaction.
This method has necessary side-effects:
Sets the current transaction's ID to None.
7.6 Batches
For example, the following snippet of code will put the two save operations and the delete operation into
the same mutation, and send them to the server in a single API request:
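A minimal sketch (entity1, entity2 and key3 are placeholders):
batch = client.batch()
batch.begin()
batch.put(entity1)
batch.put(entity2)
batch.delete(key3)
batch.commit()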
You can also use a batch as a context manager, in which case commit() will be called automatically if its
block exits without raising an exception:
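For example (same placeholder entities as above):
with client.batch() as batch:
    batch.put(entity1)
    batch.put(entity2)
    batch.delete(key3)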
begin()
Begins a batch.
This method is called automatically when entering a with statement, however it can be called explicitly if
you don't want to use a context manager.
Overridden by google.cloud.datastore.transaction.Transaction.
Raises ValueError if the batch has already begun.
commit()
Commits the batch.
This is called automatically upon exiting a with statement, however it can be called explicitly if you don't
want to use a context manager.
Raises ValueError if the batch is not in progress.
current()
Return the topmost batch / transaction, or None.
delete(key)
Remember a key to be deleted during commit().
Parameters key (google.cloud.datastore.key.Key) the key to be deleted.
Raises ValueError if the batch is not in progress, if key is not complete, or if the key's
project does not match ours.
mutations
Getter for the changes accumulated by this batch.
Every batch is committed with a single commit request containing all the work to be done as mutations.
Inside a batch, calling put() with an entity, or delete() with a key, builds up the request by adding a
new mutation. This getter returns the protobuf that has been built-up so far.
Return type iterable
Returns The list of datastore_pb2.Mutation protobufs to be sent in the commit request.
namespace
Getter for namespace in which the batch will run.
Return type str
Returns The namespace in which the batch will run.
project
Getter for project in which the batch will run.
Return type str
Returns The project in which the batch will run.
put(entity)
Remember an entity's state to be saved during commit().
Note: Any existing properties for the entity will be replaced by those currently set on this instance.
Already-stored properties which do not correspond to keys set on this instance will be removed from the
datastore.
Note: Property values which are text (unicode in Python2, str in Python3) map to string_value in
the datastore; values which are bytes (str in Python2, bytes in Python3) map to blob_value.
When an entity has a partial key, calling commit() sends it as an insert mutation and the key is
completed. On return, the key for the entity passed in is updated to match the key ID assigned by the
server.
Parameters entity (google.cloud.datastore.entity.Entity) the entity to be
saved.
Raises ValueError if the batch is not in progress, if entity has no key assigned, or if the key's
project does not match ours.
rollback()
Rolls back the current batch.
Marks the batch as aborted (cant be used again).
Overridden by google.cloud.datastore.transaction.Transaction.
Raises ValueError if the batch is not in progress.
7.7 Helpers
7.8 Modules
>>> entity = datastore.Entity(client.key('EntityKind', 1234))
>>> entity['answer'] = 42
>>> entity
<Entity('EntityKind', 1234) {'answer': 42}>
>>> query = client.query(kind='EntityKind')
class google.cloud.datastore.Batch(client)
An abstraction representing a collected group of updates / deletes.
You can also use a batch as a context manager, in which case commit() will be called automatically if its
block exits without raising an exception.
begin()
Begins a batch.
This method is called automatically when entering a with statement, however it can be called explicitly if
you don't want to use a context manager.
Overridden by google.cloud.datastore.transaction.Transaction.
Raises ValueError if the batch has already begun.
commit()
Commits the batch.
This is called automatically upon exiting a with statement, however it can be called explicitly if you don't
want to use a context manager.
Raises ValueError if the batch is not in progress.
current()
Return the topmost batch / transaction, or None.
delete(key)
Remember a key to be deleted during commit().
Parameters key (google.cloud.datastore.key.Key) the key to be deleted.
Raises ValueError if the batch is not in progress, if key is not complete, or if the key's
project does not match ours.
mutations
Getter for the changes accumulated by this batch.
Every batch is committed with a single commit request containing all the work to be done as mutations.
Inside a batch, calling put() with an entity, or delete() with a key, builds up the request by adding a
new mutation. This getter returns the protobuf that has been built-up so far.
Return type iterable
Returns The list of datastore_pb2.Mutation protobufs to be sent in the commit request.
namespace
Getter for namespace in which the batch will run.
Return type str
Returns The namespace in which the batch will run.
project
Getter for project in which the batch will run.
Return type str
Returns The project in which the batch will run.
put(entity)
Remember an entity's state to be saved during commit().
Note: Any existing properties for the entity will be replaced by those currently set on this instance.
Already-stored properties which do not correspond to keys set on this instance will be removed from the
datastore.
Note: Property values which are text (unicode in Python2, str in Python3) map to string_value in
the datastore; values which are bytes (str in Python2, bytes in Python3) map to blob_value.
When an entity has a partial key, calling commit() sends it as an insert mutation and the key is
completed. On return, the key for the entity passed in is updated to match the key ID assigned by the
server.
class google.cloud.datastore.Client(project=None, namespace=None, credentials=None, _http=None, _use_grpc=None)
Convenience wrapper for invoking APIs/factories w/ a project.
Parameters
project (str) (optional) The project to pass to proxied API methods.
namespace (str) (optional) namespace to pass to proxied API methods.
credentials (Credentials) (Optional) The OAuth2 Credentials to use for this
client. If not passed (and if no _http object is passed), falls back to the default inferred
from the environment.
_http (Session) (Optional) HTTP object to make requests. Can be any object that
defines request() with the same interface as requests.Session.request(). If
not passed, an _http object is created that is bound to the credentials for the current
object. This parameter should be considered private, and could change in the future.
_use_grpc (bool) (Optional) Explicitly specifies whether to use the gRPC transport
(via GAX) or HTTP. If unset, falls back to the GOOGLE_CLOUD_DISABLE_GRPC envi-
ronment variable. This parameter should be considered private, and could change in the
future.
allocate_ids(incomplete_key, num_ids)
Allocate a list of IDs from a partial key.
Parameters
incomplete_key (google.cloud.datastore.key.Key) Partial key to use
as base for allocated IDs.
num_ids (int) The number of IDs to allocate.
Return type list of google.cloud.datastore.key.Key
Returns The (complete) keys allocated with incomplete_key as root.
Raises ValueError if incomplete_key is not a partial key.
batch()
Proxy to google.cloud.datastore.batch.Batch.
current_batch
Currently-active batch.
Return type google.cloud.datastore.batch.Batch, or an object implementing its
API, or NoneType (if no batch is active).
Returns The batch/transaction at the top of the batch stack.
current_transaction
Currently-active transaction.
Return type google.cloud.datastore.transaction.Transaction, or an object
implementing its API, or NoneType (if no transaction is active).
Returns The transaction at the top of the batch stack.
delete(key)
Delete the key in the Cloud Datastore.
Note: This is just a thin wrapper over delete_multi(). The backend API does not make a distinction
between a single key or multiple keys in a commit request.
delete_multi(keys)
Delete keys from the Cloud Datastore.
Parameters keys (list of google.cloud.datastore.key.Key) The keys to be
deleted from the Datastore.
get(key, missing=None, deferred=None, transaction=None)
Retrieve an entity from a single key (if it exists).
Note: This is just a thin wrapper over get_multi(). The backend API does not make a distinction
between a single key or multiple keys in a lookup request.
Parameters
key (google.cloud.datastore.key.Key) The key to be retrieved from the
datastore.
missing (list) (Optional) If a list is passed, the key-only entities returned by the
backend as missing will be copied into it.
deferred (list) (Optional) If a list is passed, the keys returned by the backend as
deferred will be copied into it.
transaction (Transaction) (Optional) Transaction to use for read consistency.
If not passed, uses current transaction, if set.
Return type google.cloud.datastore.entity.Entity or NoneType
Returns The requested entity if it exists.
Parameters
keys (list of google.cloud.datastore.key.Key) The keys to be retrieved
from the datastore.
missing (list) (Optional) If a list is passed, the key-only entities returned by the
backend as missing will be copied into it. If the list is not empty, an error will occur.
deferred (list) (Optional) If a list is passed, the keys returned by the backend as
deferred will be copied into it. If the list is not empty, an error will occur.
transaction (Transaction) (Optional) Transaction to use for read consistency.
If not passed, uses current transaction, if set.
Return type list of google.cloud.datastore.entity.Entity
Returns The requested entities.
Raises ValueError if one or more of keys has a project which does not match our project.
key(*path_args, **kwargs)
Proxy to google.cloud.datastore.key.Key.
Passes our project.
put(entity)
Save an entity in the Cloud Datastore.
Note: This is just a thin wrapper over put_multi(). The backend API does not make a distinction
between a single entity or multiple entities in a commit request.
put_multi(entities)
Save entities in the Cloud Datastore.
Parameters entities (list of google.cloud.datastore.entity.Entity) The
entities to be saved to the datastore.
Raises ValueError if entities is a single entity.
query(**kwargs)
Proxy to google.cloud.datastore.query.Query.
Passes our project.
Using query to search a datastore:
transaction()
Proxy to google.cloud.datastore.transaction.Transaction.
class google.cloud.datastore.Entity(key=None, exclude_from_indexes=())
Bases: dict
Entities are akin to rows in a relational database.
An entity storing the actual instance of data.
Each entity is officially represented with a Key, however it is possible that you might create an entity with only
a partial key (that is, a key with a kind, and possibly a parent, but without an ID). In such a case, the datastore
service will automatically assign an ID to the partial key.
Entities in this API act like dictionaries with extras built in that allow you to delete or persist the data stored on
the entity.
Entities are mutable and act like a subclass of a dictionary. This means you could take an existing entity and
change the key to duplicate the object.
Use get() to retrieve an existing entity:
>>> client.get(key)
<Entity('EntityKind', 1234) {'property': 'value'}>
You can then set values on the entity just like you would on any other dictionary:
>>> entity['age'] = 20
>>> entity['name'] = 'JJ'
However, not all types are allowed as a value for a Google Cloud Datastore entity. The following basic types
are supported by the API:
datetime.datetime
Key
bool
float
int (as well as long in Python 2)
unicode (called str in Python 3)
bytes (called str in Python 2)
GeoPoint
None
In addition, three container types are supported:
list
Entity
dict (will just be treated like an Entity without a key or exclude_from_indexes)
Each entry in a list must be one of the value types (basic or container) and each value in an Entity must as
well. In this case an Entity as a container acts as a dict, but also has the special annotations of key and
exclude_from_indexes.
And you can treat an entity like a regular Python dictionary:
>>> sorted(entity.keys())
['age', 'name']
>>> sorted(entity.items())
[('age', 20), ('name', 'JJ')]
Note: When saving an entity to the backend, values which are text (unicode in Python2, str in Python3)
will be saved using the text_value field, after being encoded to UTF-8. When retrieved from the back-end,
such values will be decoded to text again. Values which are bytes (str in Python2, bytes in Python3),
will be saved using the blob_value field, without any decoding / encoding step.
Parameters
key (google.cloud.datastore.key.Key) Optional key to be set on entity.
exclude_from_indexes (tuple of string) Names of fields whose values are
not to be indexed for this entity.
kind
Get the kind of the current entity.
Note: This relies entirely on the google.cloud.datastore.key.Key set on the entity. That
means that we're not storing the kind of the entity at all, just the properties and a pointer to a Key which
knows its Kind.
class google.cloud.datastore.Key(*path_args, **kwargs)
An immutable representation of a datastore Key.
Parameters
path_args (tuple of string and integer) May represent a partial (odd
length) or full (even length) key path.
kwargs (dict) Keyword arguments to be passed in.
to_legacy_urlsafe()
Convert to a base64-encoded urlsafe string for App Engine.
This is intended to work with the legacy representation of a datastore Key used within Google App
Engine (a so-called Reference). The returned string can be used as the urlsafe argument to
ndb.Key(urlsafe=...). The base64 encoded values will have padding removed.
Note: The string returned by to_legacy_urlsafe is equivalent, but not identical, to the string
returned by ndb.
to_protobuf()
Return a protobuf corresponding to the key.
Return type entity_pb2.Key
Returns The protobuf representing the key.
class google.cloud.datastore.Query(client, kind=None, project=None, namespace=None, ancestor=None, filters=(), projection=(), order=(), distinct_on=())
Bases: object
A Query against the Cloud Datastore.
This class serves as an abstraction for creating a query over data stored in the Cloud Datastore.
Parameters
client (google.cloud.datastore.client.Client) The client used to con-
nect to Datastore.
kind (str) The kind to query.
project (str) (Optional) The project associated with the query. If not passed, uses the
clients value.
namespace (str) (Optional) The namespace to which to restrict results. If not passed,
uses the clients value.
ancestor (Key) (Optional) key of the ancestor to which this query's results are
restricted.
filters (tuple[str, str, str]) Property filters applied by this query. The
sequence is (property_name, operator, value).
projection (sequence of string) fields returned as part of query results.
order (sequence of string) field names used to order query results. Prepend -
to a field name to sort it in descending order.
distinct_on (sequence of string) field names used to group query results.
Raises ValueError if project is not passed and no implicit default is set.
add_filter(property_name, operator, value)
Filter the query based on a property name, operator and a value.
Expressions take the form of:
.add_filter('<property>', '<operator>', <value>)
where property is a property stored on the entity in the datastore and operator is one of OPERATORS (i.e.,
=, <, <=, >, >=):
Parameters
property_name (str) A property name.
operator (str) One of =, <, <=, >, >=.
value (int, str, bool, float, NoneType, datetime.datetime,
google.cloud.datastore.key.Key) The value to filter on.
Raises ValueError if operator is not one of the specified values, or if a filter names
'__key__' but passes an invalid value (a key is required).
ancestor
The ancestor key for the query.
Return type Key or None
Returns The ancestor for the query.
distinct_on
Names of fields used to group query results.
Return type sequence of string
Returns The distinct on fields set on the query.
fetch(limit=None, offset=0, start_cursor=None, end_cursor=None, client=None)
Execute the Query; return an iterator for the matching entities.
For example:
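(A minimal sketch; the kind 'Task' is hypothetical:)
>>> query = client.query(kind='Task')
>>> tasks = list(query.fetch(limit=10, offset=5))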
Parameters
limit (int) (Optional) limit passed through to the iterator.
offset (int) (Optional) offset passed through to the iterator.
start_cursor (bytes) (Optional) cursor passed through to the iterator.
end_cursor (bytes) (Optional) cursor passed through to the iterator.
filters
Filters set on the query.
Return type tuple[str, str, str]
Returns The filters set on the query. The sequence is (property_name, operator,
value).
key_filter(key, operator='=')
Filter on a key.
Parameters
key (google.cloud.datastore.key.Key) The key to filter on.
operator (str) (Optional) One of =, <, <=, >, >=. Defaults to =.
keys_only()
Set the projection to include only keys.
kind
Get the Kind of the Query.
Return type str
Returns The kind for the query.
namespace
This query's namespace.
Return type str or None
Returns The namespace assigned to this query.
order
Names of fields used to sort query results.
Return type sequence of string
Returns The order(s) set on the query.
project
Get the project for this Query.
Return type str
Returns The project for the query.
projection
Field names returned by the query.
Return type sequence of string
Returns Names of fields in query results.
class google.cloud.datastore.Transaction(client)
Bases: google.cloud.datastore.batch.Batch
An abstraction representing datastore Transactions.
Transactions can be used to build up a bulk mutation and ensure all or none succeed (transactionally).
For example, the following snippet of code will put the two save operations (either insert or upsert) into
the same mutation, and execute those within a transaction:
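A minimal sketch (entity1 and entity2 are placeholders):
with client.transaction():
    client.put(entity1)
    client.put(entity2)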
Because it derives from Batch, Transaction also provides put() and delete() methods.
By default, the transaction is rolled back if the transaction block exits with an error.
Warning:
Inside a transaction, automatically assigned IDs for entities will not be available at save time!
That means, if you try:
>>> with client.transaction():
... entity = Entity(key=client.key('Thing'))
... client.put(entity)
If you don't want to use the context manager you can initialize a transaction manually.
begin()
Begins a transaction.
This method is called automatically when entering a with statement, however it can be called explicitly if
you don't want to use a context manager.
Raises ValueError if the transaction has already begun.
commit()
Commits the transaction.
This is called automatically upon exiting a with statement, however it can be called explicitly if you don't
want to use a context manager.
This method has necessary side-effects:
Sets the current transaction's ID to None.
current()
Return the topmost transaction.
Note: If the topmost element on the stack is not a transaction, returns None.
id
Getter for the transaction ID.
Return type str
Returns The ID of the current transaction.
rollback()
Rolls back the current transaction.
This method has necessary side-effects:
Sets the current transaction's ID to None.
CHAPTER 8
DNS
page_token (str) opaque marker for the next page of zones. If not passed, the
API will return the first page of zones.
Return type Iterator
Returns Iterator of ManagedZone belonging to this project.
quotas()
Return DNS quotas for the project associated with this client.
See https://cloud.google.com/dns/api/v1/projects/get
Return type mapping
Returns keys for the mapping correspond to those of the quota sub-mapping of the project
resource.
zone(name, dns_name=None, description=None)
Construct a zone bound to this client.
Parameters
name (str) Name of the zone.
dns_name (str) (Optional) DNS name of the zone. If not passed, then calls to
zone.create() will fail.
description (str) (Optional) the description for the zone. If not passed, defaults to
the value of dns_name.
Return type google.cloud.dns.zone.ManagedZone
Returns a new ManagedZone instance.
page_token (str) opaque marker for the next page of zones. If not passed, the
API will return the first page of zones.
client (google.cloud.dns.client.Client) (Optional) the client to use. If
not passed, falls back to the client stored on the current zone.
Return type Iterator
Returns Iterator of Changes belonging to this zone.
list_resource_record_sets(max_results=None, page_token=None, client=None)
List resource record sets for this zone.
See https://cloud.google.com/dns/api/v1/resourceRecordSets/list
Parameters
max_results (int) maximum number of zones to return, If not passed, defaults to a
value set by the API.
page_token (str) opaque marker for the next page of zones. If not passed, the
API will return the first page of zones.
client (google.cloud.dns.client.Client) (Optional) the client to use. If
not passed, falls back to the client stored on the current zone.
Return type Iterator
Returns Iterator of ResourceRecordSet belonging to this zone.
name_server_set
Named set of DNS name servers that all host the same ManagedZones.
Most users will leave this blank.
See https://cloud.google.com/dns/api/v1/managedZones#nameServerSet
Return type str, or NoneType
Returns The name as set by the user, or None (the default).
name_servers
The assigned name servers for this zone.
Return type list of strings, or NoneType.
Returns the assigned name servers (None until set from the server).
path
URL path for the zones APIs.
Return type str
Returns the path based on project and zone name.
project
Project bound to the zone.
Return type str
Returns the project (derived from the client).
reload(client=None)
API call: refresh zone properties via a GET request
See https://cloud.google.com/dns/api/v1/managedZones/get
8.5 Client
Client objects provide a means to configure your DNS applications. Each instance holds both a project and an
authenticated connection to the DNS service.
For an overview of authentication in google-cloud-python, see Authentication.
Assuming your environment is set up as described in that document, create an instance of Client.
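For example, relying on the project inferred from the environment:
>>> from google.cloud import dns
>>> client = dns.Client()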
8.6 Projects
A project is the top-level container in the DNS API: it is tied closely to billing, and can provide default access control
across all its managed zones. If no project is passed to the client container, the library attempts to infer a project using the
environment (including explicit environment variables, GAE, or GCE).
To override the project inferred from the environment, pass an explicit project to the constructor, or to either of the
alternative classmethod factories:
>>> from google.cloud import dns
>>> client = dns.Client(project='PROJECT_ID')
Each project has an access control list granting reader / writer / owner permission to one or more entities. This list
cannot be queried or set via the API: it must be managed using the Google Developer Console.
A managed zone is the container for DNS records for the same DNS name suffix and has a set of name servers that
accept and respond to queries:
>>> from google.cloud import dns
>>> client = dns.Client(project='PROJECT_ID')
>>> zone = client.zone('acme-co', 'example.com',
... description='Acme Company zone')
Update the resource record set for a zone by creating a change request bundling additions to or deletions from the set.
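A minimal sketch (the record name, type, TTL and data are placeholders):
record_set = zone.resource_record_set(
    'www.example.com.', 'CNAME', 3600, ['example.com.'])
changes = zone.changes()
changes.add_record_set(record_set)
changes.create()  # sends the change request to the API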
List changes made to the resource record set for a given zone:
Note: The page_token returned from zone.list_changes() will be an opaque string if there are more
changes than can be returned in a single request. To enumerate them all, repeat calling zone.list_changes(),
passing the page_token, until the token is None. E.g.:
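A sketch of that loop (assuming the iterator exposes next_page_token):
all_changes = []
token = None
while True:
    iterator = zone.list_changes(page_token=token)
    all_changes.extend(iterator)
    token = iterator.next_page_token
    if token is None:
        break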
CHAPTER 9
Natural Language
The Google Natural Language API can be used to reveal the structure and meaning of text via powerful machine
learning models. You can use it to extract information about people, places, events and much more, mentioned in text
documents, news articles or blog posts. You can use it to understand sentiment about your product on social media or
parse intent from customer conversations happening in a call center or a messaging app. You can analyze text uploaded
in your request or integrate with your document storage on Google Cloud Storage.
9.2 Documents
The Google Natural Language API provides three methods:
analyzeEntities
analyzeSentiment
annotateText
and each method uses a Document for representing text.
>>> document = language.types.Document(
... content='Google, headquartered in Mountain View, unveiled the '
... 'new Android phone at the Consumer Electronic Show. '
... 'Sundar Pichai said in his keynote that users love '
... 'their new Android phones.',
... language='en',
... type='PLAIN_TEXT',
... )
The document's language defaults to None, which will cause the API to auto-detect the language.
In addition, you can construct an HTML document:
>>> html_content = """\
... <html>
... <head>
... <title>El Tiempo de las Historias</title>
... </head>
... <body>
... <p>La vaca saltó sobre la luna.</p>
... </body>
... </html>
... """
>>> document = language.types.Document(
... content=html_content,
... language='es',
... type='HTML',
... )
The language argument can be either ISO-639-1 or BCP-47 language codes. The API reference page contains the
full list of supported languages.
In addition to supplying the text / HTML content, a document can refer to content stored in Google Cloud Storage.
>>> document = language.types.Document(
... gcs_content_uri='gs://my-text-bucket/sentiment-me.txt',
... type=language.enums.Document.Type.HTML,
... )
The analyze_entities() method finds named entities (i.e. proper names) in the text. This method returns a
AnalyzeEntitiesResponse.
>>> document = language.types.Document(
... content='Michelangelo Caravaggio, Italian painter, is '
... 'known for "The Calling of Saint Matthew".',
... type=language.enums.Document.Type.PLAIN_TEXT,
... )
>>> response = client.analyze_entities(
... document=document,
... encoding_type='UTF32',
... )
>>> for entity in response.entities:
... print('=' * 20)
... print(' name: {0}'.format(entity.name))
... print(' type: {0}'.format(entity.type))
... print(' metadata: {0}'.format(entity.metadata))
... print(' salience: {0}'.format(entity.salience))
====================
name: Michelangelo Caravaggio
type: PERSON
metadata: {'wikipedia_url': 'https://en.wikipedia.org/wiki/Caravaggio'}
salience: 0.7615959
====================
name: Italian
type: LOCATION
metadata: {'wikipedia_url': 'https://en.wikipedia.org/wiki/Italy'}
salience: 0.19960518
====================
name: The Calling of Saint Matthew
type: EVENT
metadata: {'wikipedia_url': 'https://en.wikipedia.org/wiki/The_Calling_of_St_Matthew_(Caravaggio)'}
salience: 0.038798928
Note: It is recommended to send an encoding_type argument to Natural Language methods, so they provide
useful offsets for the data they return. While the correct value varies by environment, in Python you usually want
UTF32.
The analyze_sentiment() method analyzes the sentiment of the provided text. This method returns a
AnalyzeSentimentResponse.
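For example, a minimal sketch (the text is arbitrary):
>>> document = language.types.Document(
...     content='Jogging is not very fun.',
...     type='PLAIN_TEXT',
... )
>>> response = client.analyze_sentiment(
...     document=document,
...     encoding_type='UTF32',
... )
>>> sentiment = response.document_sentiment
>>> print(sentiment.score, sentiment.magnitude)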
Note: It is recommended to send an encoding_type argument to Natural Language methods, so they provide
useful offsets for the data they return. While the correct value varies by environment, in Python you usually want
UTF32.
The annotate_text() method analyzes a document and is intended for users who are familiar with machine
learning and need in-depth text features to build upon. This method returns a AnnotateTextResponse.
This package includes clients for multiple versions of the Natural Language API. By default, you will get v1, the
latest GA version.
Example
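A minimal sketch of an analyze_entities() call, assuming the v1 LanguageServiceClient and a plain dict document:
>>> from google.cloud import language_v1
>>> client = language_v1.LanguageServiceClient()
>>> document = {'content': 'Tokyo is the capital of Japan.',
...             'type': 'PLAIN_TEXT'}
>>> response = client.analyze_entities(document=document)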
Parameters
document (Union[dict, Document]) Input document. If a dict is provided, it
must be of the same form as the protobuf message Document
encoding_type (EncodingType) The encoding type used by the API to calculate
offsets.
options (CallOptions) Overrides the default settings for this call, e.g, timeout,
retries etc.
Returns A AnalyzeEntitiesResponse instance.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
Example
Parameters
document (Union[dict, Document]) Input document. If a dict is provided, it
must be of the same form as the protobuf message Document
encoding_type (EncodingType) The encoding type used by the API to calculate
offsets.
options (CallOptions) Overrides the default settings for this call, e.g, timeout,
retries etc.
Returns A AnalyzeEntitySentimentResponse instance.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
Example
Parameters
document (Union[dict, Document]) Input document. If a dict is provided, it
must be of the same form as the protobuf message Document
encoding_type (EncodingType) The encoding type used by the API to calculate
sentence offsets.
options (CallOptions) Overrides the default settings for this call, e.g, timeout,
retries etc.
Returns A AnalyzeSentimentResponse instance.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
Example
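A minimal sketch of an analyze_syntax() call under the same assumptions:
>>> from google.cloud import language_v1
>>> client = language_v1.LanguageServiceClient()
>>> document = {'content': 'The quick brown fox jumps over the lazy dog.',
...             'type': 'PLAIN_TEXT'}
>>> response = client.analyze_syntax(document=document)
>>> for token in response.tokens:
...     print(token.text.content, token.part_of_speech.tag)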
Parameters
document (Union[dict, Document]) Input document. If a dict is provided, it
must be of the same form as the protobuf message Document
encoding_type (EncodingType) The encoding type used by the API to calculate
offsets.
options (CallOptions) Overrides the default settings for this call, e.g, timeout,
retries etc.
Returns A AnalyzeSyntaxResponse instance.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
Example
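A minimal sketch of an annotate_text() call, enabling two features via a plain dict:
>>> from google.cloud import language_v1
>>> client = language_v1.LanguageServiceClient()
>>> document = {'content': 'Hello, world!', 'type': 'PLAIN_TEXT'}
>>> features = {'extract_syntax': True, 'extract_entities': True}
>>> response = client.annotate_text(document=document, features=features)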
Parameters
document (Union[dict, Document]) Input document. If a dict is provided, it
must be of the same form as the protobuf message Document
features (Union[dict, Features]) The enabled features. If a dict is provided,
it must be of the same form as the protobuf message Features
encoding_type (EncodingType) The encoding type used by the API to calculate
offsets.
options (CallOptions) Overrides the default settings for this call, e.g, timeout,
retries etc.
Returns A AnnotateTextResponse instance.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
class google.cloud.language_v1.types.AnalyzeEntitiesRequest
The entity analysis request message.
document
Input document.
encoding_type
The encoding type used by the API to calculate offsets.
class google.cloud.language_v1.types.AnalyzeEntitiesResponse
The entity analysis response message.
entities
The recognized entities in the input document.
language
The language of the text, which will be the same as the language specified in the request or, if not specified,
the automatically-detected language. See the Document.language field for more details.
class google.cloud.language_v1.types.AnalyzeEntitySentimentRequest
The entity-level sentiment analysis request message.
document
Input document.
encoding_type
The encoding type used by the API to calculate offsets.
class google.cloud.language_v1.types.AnalyzeEntitySentimentResponse
The entity-level sentiment analysis response message.
entities
The recognized entities in the input document with associated sentiments.
language
The language of the text, which will be the same as the language specified in the request or, if not specified,
the automatically-detected language. See the Document.language field for more details.
class google.cloud.language_v1.types.AnalyzeSentimentRequest
The sentiment analysis request message.
document
Input document.
encoding_type
The encoding type used by the API to calculate sentence offsets.
class google.cloud.language_v1.types.AnalyzeSentimentResponse
The sentiment analysis response message.
document_sentiment
The overall sentiment of the input document.
language
The language of the text, which will be the same as the language specified in the request or, if not specified,
the automatically-detected language. See the Document.language field for more details.
sentences
The sentiment for all the sentences in the document.
class google.cloud.language_v1.types.AnalyzeSyntaxRequest
The syntax analysis request message.
document
Input document.
encoding_type
The encoding type used by the API to calculate offsets.
class google.cloud.language_v1.types.AnalyzeSyntaxResponse
The syntax analysis response message.
sentences
Sentences in the input document.
tokens
Tokens, along with their syntactic information, in the input document.
language
The language of the text, which will be the same as the language specified in the request or, if not specified,
the automatically-detected language. See the Document.language field for more details.
class google.cloud.language_v1.types.AnnotateTextRequest
The request message for the text annotation API, which can perform multiple analysis types (sentiment, entities,
and syntax) in one call.
extract_syntax
Extract syntax information.
extract_entities
Extract entities.
extract_document_sentiment
Extract document-level sentiment.
extract_entity_sentiment
Extract entities and their associated sentiment.
document
Input document.
features
The enabled features.
encoding_type
The encoding type used by the API to calculate offsets.
class Features
All available features for sentiment, syntax, and semantic analysis. Setting each one to true will enable
that specific analysis for the input.
class google.cloud.language_v1.types.AnnotateTextResponse
The text annotations response message.
sentences
Sentences in the input document. Populated if the user enables
AnnotateTextRequest.Features.extract_syntax.
tokens
Tokens, along with their syntactic information, in the input document. Populated if the user enables
AnnotateTextRequest.Features.extract_syntax.
entities
Entities, along with their semantic information, in the input document. Populated if the user enables
AnnotateTextRequest.Features.extract_entities.
document_sentiment
The overall sentiment for the document. Populated if the user enables
AnnotateTextRequest.Features.extract_document_sentiment.
language
The language of the text, which will be the same as the language specified in the request or, if not specified,
the automatically-detected language. See the Document.language field for more details.
class google.cloud.language_v1.types.DependencyEdge
Represents dependency parse tree information for a token. (For more information on dependency labels, see http://www.aclweb.org/anthology/P13-2017.)
head_token_index
Represents the head of this token in the dependency tree. This is the index of the token which has an
arc going to this token. The index is the position of the token in the array of tokens returned by the API
method. If this token is a root token, then the head_token_index is its own index.
label
The parse label for the token.
class google.cloud.language_v1.types.Document
Represents the input to API methods.
type
Required. If the type is not set or is TYPE_UNSPECIFIED, returns an INVALID_ARGUMENT error.
source
The source of the document: a string containing the content or a Google Cloud Storage URI.
content
The content of the input in string format.
gcs_content_uri
The Google Cloud Storage URI where the file content is located. This URI must be of the form: gs://bucket_name/object_name. For more details, see https://cloud.google.com/storage/docs/reference-uris. NOTE: Cloud Storage object versioning is not supported.
language
The language of the document (if not specified, the language is automatically detected). Both ISO and
BCP-47 language codes are accepted. Language Support lists currently supported languages for each API
method. If the language (either specified by the caller or automatically detected) is not supported by the
called API method, an INVALID_ARGUMENT error is returned.
class google.cloud.language_v1.types.Entity
Represents a phrase in the text that is a known entity, such as a person, an organization, or location. The API
associates information, such as salience and mentions, with entities.
name
The representative name for the entity.
type
The entity type.
metadata
Metadata associated with the entity. Currently, Wikipedia URLs and Knowledge Graph MIDs are pro-
vided, if available. The associated keys are wikipedia_url and mid, respectively.
salience
The salience score associated with the entity in the [0, 1.0] range. The salience score for an entity provides
information about the importance or centrality of that entity to the entire document text. Scores closer to 0
are less salient, while scores closer to 1.0 are highly salient.
mentions
The mentions of this entity in the input document. The API currently supports proper noun mentions.
sentiment
For calls to [AnalyzeEntitySentiment][] or if [AnnotateTextRequest.Features.extract_entity_sentiment][google.cloud.language.v1.AnnotateTextRequest.Features.extract_entity_sentiment] is set to true, this field will contain the aggregate sentiment expressed for this entity in the provided document.
class google.cloud.language_v1.types.EntityMention
Represents a mention for an entity in the text. Currently, proper noun mentions are supported.
text
The mention text.
type
The type of the entity mention.
sentiment
For calls to [AnalyzeEntitySentiment][] or if [AnnotateTextRequest.Features.extract_entity_sentiment][google.cloud.language.v1.AnnotateTextRequest.Features.extract_entity_sentiment] is set to true, this field will contain the sentiment expressed for this mention of the entity in the provided document.
class google.cloud.language_v1.types.PartOfSpeech
Represents part of speech information for a token. Parts of speech are as defined in http://www.lrec-conf.org/proceedings/lrec2012/pdf/274_Paper.pdf.
tag
The part of speech tag.
aspect
The grammatical aspect.
case
The grammatical case.
form
The grammatical form.
gender
The grammatical gender.
mood
The grammatical mood.
number
The grammatical number.
person
The grammatical person.
proper
The grammatical properness.
reciprocity
The grammatical reciprocity.
tense
The grammatical tense.
voice
The grammatical voice.
class google.cloud.language_v1.types.Sentence
Represents a sentence in the input document.
text
The sentence text.
sentiment
For calls to [AnalyzeSentiment][] or if [AnnotateTextRequest.Features.extract_document_sentiment][google.cloud.language.v1.AnnotateTextRequest.Features.extract_document_sentiment] is set to true, this field will contain the sentiment for the sentence.
class google.cloud.language_v1.types.Sentiment
Represents the feeling associated with the entire text or entities in the text.
magnitude
A non-negative number in the [0, +inf) range, which represents the absolute magnitude of sentiment re-
gardless of score (positive or negative).
score
Sentiment score between -1.0 (negative sentiment) and 1.0 (positive sentiment).
class google.cloud.language_v1.types.TextSpan
Represents an output piece of text.
content
The content of the output text.
begin_offset
The API calculates the beginning offset of the content in the original document according to the [EncodingType][google.cloud.language.v1.EncodingType] specified in the API request.
class google.cloud.language_v1.types.Token
Represents the smallest syntactic building block of the text.
text
The token text.
part_of_speech
Parts of speech tag for this token.
dependency_edge
Dependency tree parse for this token.
lemma
Lemma of the token.
If you are interested in beta features ahead of the latest GA release, you may opt in to the v1.1 beta, which is spelled v1beta2. In order to do this, you will want to import from google.cloud.language_v1beta2 in lieu of google.cloud.language.
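As a minimal sketch of the switch (the sample text here is illustrative, not from the official samples):

from google.cloud import language_v1beta2
from google.cloud.language_v1beta2 import enums, types

client = language_v1beta2.LanguageServiceClient()
document = types.Document(
    content='Python is a great language.',
    type=enums.Document.Type.PLAIN_TEXT,
)
# classify_text() is only available on the v1beta2 surface in this release.
categories = client.classify_text(document).categories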
An API and type reference is provided for the v1.1 beta also:
Example
Parameters
Example
Parameters
document (Union[dict, Document]) Input document. If a dict is provided, it
must be of the same form as the protobuf message Document
encoding_type (EncodingType) The encoding type used by the API to calculate
offsets.
options (CallOptions) Overrides the default settings for this call, e.g, timeout,
retries etc.
Returns An AnalyzeEntitySentimentResponse instance.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
Example
Parameters
document (Union[dict, Document]) Input document. If a dict is provided, it
must be of the same form as the protobuf message Document
encoding_type (EncodingType) The encoding type used by the API to calculate
sentence offsets for the sentence sentiment.
options (CallOptions) Overrides the default settings for this call, e.g, timeout,
retries etc.
Returns An AnalyzeSentimentResponse instance.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
Example
Parameters
document (Union[dict, Document]) Input document. If a dict is provided, it
must be of the same form as the protobuf message Document
encoding_type (EncodingType) The encoding type used by the API to calculate
offsets.
options (CallOptions) Overrides the default settings for this call, e.g, timeout,
retries etc.
Returns An AnalyzeSyntaxResponse instance.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
Example
Parameters
document (Union[dict, Document]) Input document. If a dict is provided, it
must be of the same form as the protobuf message Document
features (Union[dict, Features]) The enabled features. If a dict is provided,
it must be of the same form as the protobuf message Features
encoding_type (EncodingType) The encoding type used by the API to calculate
offsets.
options (CallOptions) Overrides the default settings for this call, e.g, timeout,
retries etc.
Returns An AnnotateTextResponse instance.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
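For instance, a hedged sketch of an annotate_text call using the Features message described above (the document content is a placeholder):

from google.cloud import language_v1beta2
from google.cloud.language_v1beta2 import enums, types

client = language_v1beta2.LanguageServiceClient()
document = types.Document(
    content='Hello, world!',
    type=enums.Document.Type.PLAIN_TEXT,
)
# Request syntax and document-level sentiment in a single call.
features = types.AnnotateTextRequest.Features(
    extract_syntax=True,
    extract_document_sentiment=True,
)
response = client.annotate_text(document, features)
for sentence in response.sentences:
    print(sentence.text.content, sentence.sentiment.score)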
classify_text(document, options=None)
Classifies a document into categories.
Example
Parameters
document (Union[dict, Document]) Input document. If a dict is provided, it
must be of the same form as the protobuf message Document
options (CallOptions) Overrides the default settings for this call, e.g, timeout,
retries etc.
class google.cloud.language_v1beta2.types.AnalyzeEntitiesRequest
The entity analysis request message.
document
Input document.
encoding_type
The encoding type used by the API to calculate offsets.
class google.cloud.language_v1beta2.types.AnalyzeEntitiesResponse
The entity analysis response message.
entities
The recognized entities in the input document.
language
The language of the text, which will be the same as the language specified in the request or, if not specified, the automatically-detected language. See the [Document.language][google.cloud.language.v1beta2.Document.language] field for more details.
class google.cloud.language_v1beta2.types.AnalyzeEntitySentimentRequest
The entity-level sentiment analysis request message.
document
Input document.
encoding_type
The encoding type used by the API to calculate offsets.
class google.cloud.language_v1beta2.types.AnalyzeEntitySentimentResponse
The entity-level sentiment analysis response message.
entities
The recognized entities in the input document with associated sentiments.
language
The language of the text, which will be the same as the language specified in the request or, if not specified, the automatically-detected language. See the [Document.language][google.cloud.language.v1beta2.Document.language] field for more details.
class google.cloud.language_v1beta2.types.AnalyzeSentimentRequest
The sentiment analysis request message.
document
Input document.
encoding_type
The encoding type used by the API to calculate sentence offsets for the sentence sentiment.
class google.cloud.language_v1beta2.types.AnalyzeSentimentResponse
The sentiment analysis response message.
document_sentiment
The overall sentiment of the input document.
language
The language of the text, which will be the same as the language specified in the request or, if not specified, the automatically-detected language. See the [Document.language][google.cloud.language.v1beta2.Document.language] field for more details.
sentences
The sentiment for all the sentences in the document.
class google.cloud.language_v1beta2.types.AnalyzeSyntaxRequest
The syntax analysis request message.
document
Input document.
encoding_type
The encoding type used by the API to calculate offsets.
class google.cloud.language_v1beta2.types.AnalyzeSyntaxResponse
The syntax analysis response message.
sentences
Sentences in the input document.
tokens
Tokens, along with their syntactic information, in the input document.
language
The language of the text, which will be the same as the language specified in the request or, if not specified, the automatically-detected language. See the [Document.language][google.cloud.language.v1beta2.Document.language] field for more details.
class google.cloud.language_v1beta2.types.AnnotateTextRequest
The request message for the text annotation API, which can perform multiple analysis types (sentiment, entities,
and syntax) in one call.
extract_syntax
Extract syntax information.
extract_entities
Extract entities.
extract_document_sentiment
Extract document-level sentiment.
extract_entity_sentiment
Extract entities and their associated sentiment.
classify_text
Classify the full document into categories.
document
Input document.
features
The enabled features.
encoding_type
The encoding type used by the API to calculate offsets.
class Features
All available features for sentiment, syntax, and semantic analysis. Setting each one to true will enable
that specific analysis for the input.
class google.cloud.language_v1beta2.types.AnnotateTextResponse
The text annotations response message.
sentences
Sentences in the input document. Populated if the user enables [AnnotateTextRequest.Features.extract_syntax][google.cloud.language.v1beta2.AnnotateTextRequest.Features.extract_syntax].
tokens
Tokens, along with their syntactic information, in the input document. Populated if the user enables [AnnotateTextRequest.Features.extract_syntax][google.cloud.language.v1beta2.AnnotateTextRequest.Features.extract_syntax].
entities
Entities, along with their semantic information, in the input document. Populated if the user enables [AnnotateTextRequest.Features.extract_entities][google.cloud.language.v1beta2.AnnotateTextRequest.Features.extract_entities].
document_sentiment
The overall sentiment for the document. Populated if the user enables [AnnotateTextRequest.Features.extract_document_sentiment][google.cloud.language.v1beta2.AnnotateTextRequest.Features.extract_document_sentiment].
language
The language of the text, which will be the same as the language specified in the request or, if not specified, the automatically-detected language. See the [Document.language][google.cloud.language.v1beta2.Document.language] field for more details.
categories
Categories identified in the input document.
class google.cloud.language_v1beta2.types.ClassificationCategory
Represents a category returned from the text classifier.
name
The name of the category representing the document.
confidence
The classifier's confidence of the category. This number represents how certain the classifier is that this category represents the given text.
class google.cloud.language_v1beta2.types.ClassifyTextRequest
The document classification request message.
document
Input document.
class google.cloud.language_v1beta2.types.ClassifyTextResponse
The document classification response message.
categories
Categories representing the input document.
class google.cloud.language_v1beta2.types.DependencyEdge
Represents dependency parse tree information for a token.
head_token_index
Represents the head of this token in the dependency tree. This is the index of the token which has an
arc going to this token. The index is the position of the token in the array of tokens returned by the API
method. If this token is a root token, then the head_token_index is its own index.
label
The parse label for the token.
class google.cloud.language_v1beta2.types.Document
Represents the input to API methods.
type
Required. If the type is not set or is TYPE_UNSPECIFIED, returns an INVALID_ARGUMENT error.
source
The source of the document: a string containing the content or a Google Cloud Storage URI.
content
The content of the input in string format.
gcs_content_uri
The Google Cloud Storage URI where the file content is located. This URI must be of the form: gs://bucket_name/object_name. For more details, see https://cloud.google.com/storage/docs/reference-uris. NOTE: Cloud Storage object versioning is not supported.
language
The language of the document (if not specified, the language is automatically detected). Both ISO and
BCP-47 language codes are accepted. Language Support lists currently supported languages for each API
method. If the language (either specified by the caller or automatically detected) is not supported by the
called API method, an INVALID_ARGUMENT error is returned.
class google.cloud.language_v1beta2.types.Entity
Represents a phrase in the text that is a known entity, such as a person, an organization, or location. The API
associates information, such as salience and mentions, with entities.
name
The representative name for the entity.
type
The entity type.
metadata
Metadata associated with the entity. Currently, Wikipedia URLs and Knowledge Graph MIDs are pro-
vided, if available. The associated keys are wikipedia_url and mid, respectively.
salience
The salience score associated with the entity in the [0, 1.0] range. The salience score for an entity provides
information about the importance or centrality of that entity to the entire document text. Scores closer to 0
are less salient, while scores closer to 1.0 are highly salient.
mentions
The mentions of this entity in the input document. The API currently supports proper noun mentions.
sentiment
For calls to [AnalyzeEntitySentiment][] or if [AnnotateTextRequest.Features.extract_entity_sentiment][google.cloud.language.v1beta2.AnnotateTextRequest.Features.extract_entity_sentiment] is set to true, this field will contain the aggregate sentiment expressed for this entity in the provided document.
class google.cloud.language_v1beta2.types.EntityMention
Represents a mention for an entity in the text. Currently, proper noun mentions are supported.
text
The mention text.
type
The type of the entity mention.
sentiment
For calls to [AnalyzeEntitySentiment][] or if [AnnotateTextRequest.Features.extract_entity_sentiment][google.cloud.language.v1beta2.AnnotateTextRequest.Features.extract_entity_sentiment] is set to true, this field will contain the sentiment expressed for this mention of the entity in the provided document.
class google.cloud.language_v1beta2.types.PartOfSpeech
Represents part of speech information for a token.
tag
The part of speech tag.
aspect
The grammatical aspect.
case
The grammatical case.
form
The grammatical form.
gender
The grammatical gender.
mood
The grammatical mood.
number
The grammatical number.
person
The grammatical person.
proper
The grammatical properness.
reciprocity
The grammatical reciprocity.
tense
The grammatical tense.
voice
The grammatical voice.
class google.cloud.language_v1beta2.types.Sentence
Represents a sentence in the input document.
text
The sentence text.
sentiment
For calls to [AnalyzeSentiment][] or if [AnnotateTextRequest.Features.extract_document_sentiment][google.cloud.language.v1beta2.AnnotateTextRequest.Features.extract_document_sentiment] is set to true, this field will contain the sentiment for the sentence.
class google.cloud.language_v1beta2.types.Sentiment
Represents the feeling associated with the entire text or entities in the text.
magnitude
A non-negative number in the [0, +inf) range, which represents the absolute magnitude of sentiment re-
gardless of score (positive or negative).
score
Sentiment score between -1.0 (negative sentiment) and 1.0 (positive sentiment).
class google.cloud.language_v1beta2.types.TextSpan
Represents an output piece of text.
content
The content of the output text.
begin_offset
The API calculates the beginning offset of the content in the original document according to the [EncodingType][google.cloud.language.v1beta2.EncodingType] specified in the API request.
class google.cloud.language_v1beta2.types.Token
Represents the smallest syntactic building block of the text.
text
The token text.
part_of_speech
Parts of speech tag for this token.
dependency_edge
Dependency tree parse for this token.
lemma
Lemma of the token.
Note: The client for the beta API is provided on a provisional basis. The API surface is subject to change, and it is
possible that this client will be deprecated or removed after its features become GA.
Pub/Sub
Google Cloud Pub/Sub is a fully-managed real-time messaging service that allows you to send and receive messages between independent applications. You can leverage Cloud Pub/Sub's flexibility to decouple systems and components hosted on Google Cloud Platform or elsewhere on the Internet. By building on the same technology Google uses, Cloud Pub/Sub is designed to provide "at least once" delivery at low latency, with on-demand scalability to 1 million messages per second (and beyond).
10.2 Publishing
To publish data to Cloud Pub/Sub you must create a topic, and then publish messages to it:
>>> import os
>>> from google.cloud import pubsub
>>>
>>> publisher = pubsub.PublisherClient()
>>> topic = 'projects/{project_id}/topics/{topic}'.format(
... project_id=os.getenv('GOOGLE_CLOUD_PROJECT'),
... topic='MY_TOPIC_NAME', # Set this to something appropriate.
... )
>>> publisher.create_topic(topic)
>>> publisher.publish(topic, b'My first message!', spam='eggs')
10.3 Subscribing
To subscribe to data in Cloud Pub/Sub, you create a subscription based on the topic, and subscribe to that.
>>> import os
>>> from google.cloud import pubsub
>>>
>>> subscriber = pubsub.SubscriberClient()
>>> topic = 'projects/{project_id}/topics/{topic}'.format(
... project_id=os.getenv('GOOGLE_CLOUD_PROJECT'),
... topic='MY_TOPIC_NAME', # Set this to something appropriate.
... )
>>> subscription_name = 'projects/{project_id}/subscriptions/{sub}'.format(
... project_id=os.getenv('GOOGLE_CLOUD_PROJECT'),
... sub='MY_SUBSCRIPTION_NAME', # Set this to something appropriate.
... )
>>> subscription = subscriber.create_subscription(subscription_name, topic)
The subscription is opened asynchronously, and messages are processed by use of a callback.
>>> def callback(message):
... print(message.data)
... message.ack()
>>> subscription.open(callback)
Publish a Message
To publish a message, use the publish() method. This method accepts two positional arguments: the topic to
publish to, and the body of the message. It also accepts arbitrary keyword arguments, which are passed along as
attributes of the message.
The topic is passed along as a string; all topics have the canonical form projects/{project_name}/topics/{topic_name}.
Therefore, a very basic publishing call looks like:
topic = 'projects/{project}/topics/{topic}'
publish_client.publish(topic, b'This is my message.')
Note: The message data in Pub/Sub is an opaque blob of bytes, and as such, you must send a bytes object in Python
3 (str object in Python 2). If you send a text string (str in Python 3, unicode in Python 2), the method will raise
TypeError.
The reason it works this way is that there is no reasonable guarantee that the same language or environment is being used by the subscriber, and so it is the responsibility of the publisher to properly encode the payload.
To attach attributes, pass them as keyword arguments:
topic = 'projects/{project}/topics/{topic}'
publish_client.publish(topic, b'This is my message.', foo='bar')
Batching
Whenever you publish a message, a Batch is automatically created. This way, if you publish a large volume of
messages, it reduces the number of requests made to the server.
The way that this works is that on the first message that you send, a new Batch is created automatically. For every
subsequent message, if there is already a valid batch that is still accepting messages, then that batch is used. When
the batch is created, it begins a countdown that publishes the batch once sufficient time has elapsed (by default, this is
0.05 seconds).
If you need different batching settings, simply provide a BatchSettings object when you instantiate the Client:
from google.cloud.pubsub_v1.types import BatchSettings

client = pubsub.PublisherClient(
    batch_settings=BatchSettings(max_messages=500),
)
Pub/Sub accepts a maximum of 1,000 messages in a batch, and the size of a batch cannot exceed 10 megabytes.
Futures
Every call to publish() will return an object that conforms to the Future interface. You can use this to ensure that the publish succeeded:

def callback(future):
    # result() returns the message ID, or re-raises the publish error.
    message_id = future.result()

# The callback is added once you get the future. If you add a callback
# and the future is already done, it will simply be executed immediately.
future = client.publish(topic, b'My awesome message.')
future.add_done_callback(callback)
API Reference
class google.cloud.pubsub_v1.publisher.client.Client(batch_settings=(), batch_class=<class 'google.cloud.pubsub_v1.publisher.batch.thread.Batch'>, **kwargs)
A publisher client for Google Cloud Pub/Sub.
This creates an object that is capable of publishing messages. Generally, you can instantiate this client with no
arguments, and you get sensible defaults.
Parameters
batch_settings (BatchSettings) The settings for batch publishing.
batch_class (class) A class that describes how to handle batches. You may subclass
the pubsub_v1.publisher.batch.base.BaseBatch class in order to define your
own batcher. This is primarily provided to allow use of different concurrency models; the
default is based on threading.Thread.
kwargs (dict) Any additional arguments provided are sent as keyword arguments to the
underlying PublisherClient. Generally, you should not need to set additional keyword
arguments.
batch(topic, message, create=True, autocommit=True)
Return the current batch for the provided topic.
This will create a new batch only if no batch currently exists.
Parameters
topic (str) A string representing the topic.
message (PubsubMessage) The message that will be committed.
create (bool) Whether to create a new batch if no batch is found. Defaults to True.
autocommit (bool) Whether to autocommit this batch. This is primarily useful for
debugging.
Returns The batch object.
Return type Batch
create_topic(*a, **kw)
Creates the given topic with the given name.
Example
Parameters
name (string) The name of the topic. It must have the format "projects/{project}/topics/{topic}". {topic} must start with a letter, and contain only letters ([A-Za-z]), numbers ([0-9]), dashes (-), underscores (_), periods (.), tildes (~), plus (+) or percent signs (%). It must be between 3 and 255 characters in length, and it must not start with "goog".
options (google.gax.CallOptions) Overrides the default settings for this call,
e.g, timeout, retries etc.
Returns A google.cloud.proto.pubsub.v1.pubsub_pb2.Topic instance.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
delete_topic(*a, **kw)
Deletes the topic with the given name. Returns NOT_FOUND if the topic does not exist. After a topic is
deleted, a new topic may be created with the same name; this is an entirely new topic with none of the old
configuration or subscriptions. Existing subscriptions to this topic are not deleted, but their topic field is
set to _deleted-topic_.
Example
Parameters
topic (string) Name of the topic to delete. Format is projects/{project}/topics/{topic}.
get_iam_policy(*a, **kw)
Gets the access control policy for a resource. Returns an empty policy if the resource exists and does not
have a policy set.
Example
Parameters
resource (string) REQUIRED: The resource for which the policy is being re-
quested. resource is usually specified as a path. For example, a Project resource is
specified as projects/{project}.
options (google.gax.CallOptions) Overrides the default settings for this call,
e.g, timeout, retries etc.
Returns A google.iam.v1.policy_pb2.Policy instance.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
get_topic(*a, **kw)
Gets the configuration of a topic.
Example
Parameters
topic (string) The name of the topic to get. Format is projects/{project}/topics/{topic}.
options (google.gax.CallOptions) Overrides the default settings for this call,
e.g, timeout, retries etc.
Returns A google.cloud.proto.pubsub.v1.pubsub_pb2.Topic instance.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
list_topic_subscriptions(*a, **kw)
Lists the name of the subscriptions for this topic.
Example
Parameters
topic (string) The name of the topic that subscriptions are attached to. Format is
projects/{project}/topics/{topic}.
page_size (int) The maximum number of resources contained in the underlying API response. If page streaming is performed per-resource, this parameter does not affect the return value. If page streaming is performed per-page, this determines the maximum number of resources in a page.
options (google.gax.CallOptions) Overrides the default settings for this call,
e.g, timeout, retries etc.
Returns A google.gax.PageIterator instance. By default, this is an iterable of string
instances. This object can also be configured to iterate over the pages of the response through
the CallOptions parameter.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
list_topics(*a, **kw)
Lists matching topics.
Example
Parameters
project (string) The name of the cloud project that topics belong to. Format is
projects/{project}.
page_size (int) The maximum number of resources contained in the underlying API response. If page streaming is performed per-resource, this parameter does not affect the return value. If page streaming is performed per-page, this determines the maximum number of resources in a page.
options (google.gax.CallOptions) Overrides the default settings for this call,
e.g, timeout, retries etc.
Returns A google.gax.PageIterator instance. By default, this is an iterable of
google.cloud.proto.pubsub.v1.pubsub_pb2.Topic instances. This object
can also be configured to iterate over the pages of the response through the CallOptions
parameter.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
match_project_from_project_name(*a, **kw)
Parses the project from a project resource.
Parameters project_name (string) A fully-qualified path representing a project re-
source.
Returns A string representing the project.
match_project_from_topic_name(*a, **kw)
Parses the project from a topic resource.
Parameters topic_name (string) A fully-qualified path representing a topic resource.
Returns A string representing the project.
match_topic_from_topic_name(*a, **kw)
Parses the topic from a topic resource.
Parameters topic_name (string) A fully-qualified path representing a topic resource.
Returns A string representing the topic.
project_path(*a, **kw)
Returns a fully-qualified project resource name string.
publish(topic, data, **attrs)
Publish a single message.
Note: Messages in Pub/Sub are blobs of bytes. They are binary data, not text. You must send data as a
bytestring (bytes in Python 3; str in Python 2), and this library will raise an exception if you send a
text string.
The reason that this is so important (and why we do not try to coerce for you) is because Pub/Sub is also
platform independent and there is no way to know how to decode messages properly on the other side;
therefore, encoding and decoding is a required exercise for the developer.
Add the given message to this object; this will cause it to be published once the batch either has enough
messages or a sufficient period of time has elapsed.
Example
Parameters
topic (str) The topic to publish messages to.
data (bytes) A bytestring representing the message body. This must be a bytestring.
attrs (Mapping[str, str]) A dictionary of attributes to be sent as metadata.
(These may be text strings or byte strings.)
Returns An object conforming to the concurrent.futures.Future interface.
Return type Future
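As a brief hedged sketch (the topic name is a placeholder and is assumed to already exist), a publish with attributes whose result is awaited synchronously:

from google.cloud import pubsub

publisher = pubsub.PublisherClient()
topic = 'projects/my-project/topics/my-topic'
future = publisher.publish(topic, b'payload', origin='example')
# Blocks until the publish completes; raises if the publish failed.
message_id = future.result()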
set_iam_policy(*a, **kw)
Sets the access control policy on the specified resource. Replaces any existing policy.
Example
Parameters
resource (string) REQUIRED: The resource for which the policy is being speci-
fied. resource is usually specified as a path. For example, a Project resource is specified
as projects/{project}.
policy (google.iam.v1.policy_pb2.Policy) REQUIRED: The complete policy to be applied to the resource. The size of the policy is limited to a few tens of kilobytes. An empty policy is a valid policy, but certain Cloud Platform services (such as Projects) might reject it.
options (google.gax.CallOptions) Overrides the default settings for this call,
e.g, timeout, retries etc.
Returns A google.iam.v1.policy_pb2.Policy instance.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
test_iam_permissions(*a, **kw)
Returns permissions that a caller has on the specified resource. If the resource does not exist, this will
return an empty set of permissions, not a NOT_FOUND error.
Example
Parameters
resource (string) REQUIRED: The resource for which the policy detail is being
requested. resource is usually specified as a path. For example, a Project resource is
specified as projects/{project}.
permissions (list[string]) The set of permissions to check for the
resource. Permissions with wildcards (such as * or storage.*) are not allowed. For
more information see IAM Overview.
options (google.gax.CallOptions) Overrides the default settings for this call,
e.g, timeout, retries etc.
Returns A google.iam.v1.iam_policy_pb2.TestIamPermissionsResponse
instance.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
topic_path(*a, **kw)
Returns a fully-qualified topic resource name string.
Creating a Subscription
In Pub/Sub, a subscription is a discrete pull of messages from a topic. If multiple clients pull the same subscription,
then messages are split between them. If multiple clients create a subscription each, then each client will get every
message.
Note: Remember that Pub/Sub operates under the principle of everything at least once. Even in the case where
multiple clients pull the same subscription, some redundancy is likely.
Creating a subscription requires that you already know what topic you want to subscribe to, and it must already exist.
Once you have that, it is easy:
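A minimal sketch (the project, topic, and subscription names are placeholders):

from google.cloud import pubsub

subscriber = pubsub.SubscriberClient()
topic = 'projects/my-project/topics/my-topic'  # must already exist
subscription = subscriber.create_subscription(
    'projects/my-project/subscriptions/my-subscription', topic)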
Pulling a Subscription
Once you have created a subscription (or if you already had one), the next step is to pull data from it. This entails two steps: first, call subscribe(), passing in the subscription string; this returns an object with an open() method. Second, call open() to actually begin consumption of the subscription.
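A sketch, continuing from the subscription created above:

subscription = subscriber.subscribe(
    'projects/my-project/subscriptions/my-subscription')
# Nothing is consumed until open() is called with a callback (see below).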
Subscription Callbacks
Because subscriptions in this Pub/Sub client are opened asynchronously, processing the messages that are yielded by the subscription is handled through callbacks.
The basic idea: Define a function that takes one argument; this argument will be a Message instance. This function
should do whatever processing is necessary. At the end, the function should ack() the message.
When you call open(), you must pass the callback that will be used.
Here is a minimal sketch (the processing shown is illustrative):
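def callback(message):
    # Do whatever processing is necessary.
    print(message.data)
    # Acknowledge the message once processing has completed.
    message.ack()

subscription.open(callback)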
Explaining Ack
In Pub/Sub, the term ack stands for acknowledge. You should ack a message when your processing of that message
has completed. When you ack a message, you are telling Pub/Sub that you do not need to see it again.
It might be tempting to ack messages immediately on receipt. While there are valid use cases for this, in general it is
unwise. The reason why: If there is some error or edge case in your processing logic, and processing of the message
fails, you will have already told Pub/Sub that you successfully processed the message. By contrast, if you ack only
upon completion, then Pub/Sub will eventually re-deliver the unacknowledged message.
It is also possible to nack a message, which is the opposite. When you nack, it tells Pub/Sub that you are unable or
unwilling to deal with the message, and that the service should redeliver it.
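A sketch of this pattern, where process() stands in for your own (hypothetical) handler:

def callback(message):
    try:
        process(message.data)  # hypothetical processing function
    except Exception:
        # Processing failed; nack so the service redelivers the message.
        message.nack()
    else:
        # Processing succeeded; ack so it is not delivered again.
        message.ack()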
API Reference
class google.cloud.pubsub_v1.subscriber.client.Client(policy_class=<class 'google.cloud.pubsub_v1.subscriber.policy.thread.Policy'>, **kwargs)
A subscriber client for Google Cloud Pub/Sub.
This creates an object that is capable of subscribing to messages. Generally, you can instantiate this client with
no arguments, and you get sensible defaults.
Parameters
policy_class (class) A class that describes how to handle subscriptions. You may
subclass the pubsub_v1.subscriber.policy.base.BasePolicy class in order
to define your own consumer. This is primarily provided to allow use of different concur-
rency models; the default is based on threading.Thread.
kwargs (dict) Any additional arguments provided are sent as keyword arguments to the underlying SubscriberClient. Generally, you should not need to set additional keyword arguments.
acknowledge(*a, **kw)
Acknowledges the messages associated with the ack_ids in the AcknowledgeRequest. The
Pub/Sub system can remove the relevant messages from the subscription.
Acknowledging a message whose ack deadline has expired may succeed, but such a message may be
redelivered later. Acknowledging a message more than once will not result in an error.
Example
Parameters
subscription (string) The subscription whose message is being acknowledged.
Format is projects/{project}/subscriptions/{sub}.
ack_ids (list[string]) The acknowledgment ID for the messages being ac-
knowledged that was returned by the Pub/Sub system in the Pull response. Must not
be empty.
options (google.gax.CallOptions) Overrides the default settings for this call,
e.g, timeout, retries etc.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
create_snapshot(*a, **kw)
Creates a snapshot from the requested subscription. If the snapshot already exists, returns ALREADY_EXISTS. If the requested subscription doesn't exist, returns NOT_FOUND.
If the name is not provided in the request, the server will assign a random name for this snapshot on
the same project as the subscription, conforming to the resource name format. The generated name is
populated in the returned Snapshot object. Note that for REST API requests, you must specify a name in
the request.
Example
Parameters
name (string) Optional user-provided name for this snapshot. If the name is not
provided in the request, the server will assign a random name for this snapshot on the
same project as the subscription. Note that for REST API requests, you must specify a
name. Format is projects/{project}/snapshots/{snap}.
subscription (string) The subscription whose backlog the snapshot retains. Specifically, the created snapshot is guaranteed to retain:
The existing backlog on the subscription. More precisely, this is defined as the messages in the subscription's backlog that are unacknowledged upon the successful completion of the CreateSnapshot request; as well as:
Any messages published to the subscription's topic following the successful completion of the CreateSnapshot request.
Format is projects/{project}/subscriptions/{sub}.
options (google.gax.CallOptions) Overrides the default settings for this call,
e.g, timeout, retries etc.
Returns A google.cloud.proto.pubsub.v1.pubsub_pb2.Snapshot instance.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
create_subscription(*a, **kw)
Creates a subscription to a given topic. If the subscription already exists, returns ALREADY_EXISTS. If the corresponding topic doesn't exist, returns NOT_FOUND.
If the name is not provided in the request, the server will assign a random name for this subscription on the
same project as the topic, conforming to the resource name format. The generated name is populated in the
returned Subscription object. Note that for REST API requests, you must specify a name in the request.
Example
Parameters
name (string) The name of the subscription. It must have the format "projects/{project}/subscriptions/{subscription}". {subscription} must start with a letter, and contain only letters ([A-Za-z]), numbers ([0-9]), dashes (-), underscores (_), periods (.), tildes (~), plus (+) or percent signs (%). It must be between 3 and 255 characters in length, and it must not start with "goog".
topic (string) The name of the topic from which this subscription is receiving
messages. Format is projects/{project}/topics/{topic}. The value of this
field will be _deleted-topic_ if the topic has been deleted.
push_config (google.cloud.proto.pubsub.v1.pubsub_pb2.PushConfig) If push delivery is used with this subscription, this field is used to configure it. An empty pushConfig signifies that the subscriber will pull and ack messages using API methods.
ack_deadline_seconds (int) This value is the maximum time after a subscriber
receives a message before the subscriber should acknowledge the message. After message
delivery but before the ack deadline expires and before the message is acknowledged, it is
an outstanding message and will not be delivered again during that time (on a best-effort
basis).
For pull subscriptions, this value is used as the initial value for the ack deadline. To over-
ride this value for a given message, call ModifyAckDeadline with the corresponding
ack_id if using pull. The minimum custom deadline you can specify is 10 seconds. The
maximum custom deadline you can specify is 600 seconds (10 minutes). If this parameter
is 0, a default value of 10 seconds is used.
For push delivery, this value is also used to set the request timeout for the call to the push
endpoint.
If the subscriber never acknowledges the message, the Pub/Sub system will eventually
redeliver the message.
retain_acked_messages (bool) Indicates whether to retain acknowledged messages. If true, then messages are not expunged from the subscription's backlog, even if they are acknowledged, until they fall out of the message_retention_duration window.
message_retention_duration (google.protobuf.duration_pb2.Duration) How long to retain unacknowledged messages in the subscription's backlog, from the moment a message is published. If retain_acked_messages is true, then this also configures the retention of acknowledged messages, and thus configures how far back in time a Seek can be done. Defaults to 7 days. Cannot be more than 7 days or less than 10 minutes.
options (google.gax.CallOptions) Overrides the default settings for this call,
e.g, timeout, retries etc.
Returns A google.cloud.proto.pubsub.v1.pubsub_pb2.Subscription in-
stance.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
delete_snapshot(*a, **kw)
Removes an existing snapshot. All messages retained in the snapshot are immediately dropped. After a
snapshot is deleted, a new one may be created with the same name, but the new one has no association
with the old snapshot or its subscription, unless the same subscription is specified.
Example
Parameters
snapshot (string) The name of the snapshot to delete. Format is projects/{project}/snapshots/{snap}.
options (google.gax.CallOptions) Overrides the default settings for this call,
e.g, timeout, retries etc.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
delete_subscription(*a, **kw)
Deletes an existing subscription. All messages retained in the subscription are immediately dropped. Calls
to Pull after deletion will return NOT_FOUND. After a subscription is deleted, a new one may be created
with the same name, but the new one has no association with the old subscription or its topic unless the
same topic is specified.
Example
Parameters
subscription (string) The subscription to delete. Format is projects/{project}/subscriptions/{sub}.
options (google.gax.CallOptions) Overrides the default settings for this call,
e.g, timeout, retries etc.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
get_iam_policy(*a, **kw)
Gets the access control policy for a resource. Returns an empty policy if the resource exists and does not
have a policy set.
Example
Parameters
resource (string) REQUIRED: The resource for which the policy is being re-
quested. resource is usually specified as a path. For example, a Project resource is
specified as projects/{project}.
options (google.gax.CallOptions) Overrides the default settings for this call,
e.g, timeout, retries etc.
Returns A google.iam.v1.policy_pb2.Policy instance.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
get_subscription(*a, **kw)
Gets the configuration details of a subscription.
Example
Parameters
subscription (string) The name of the subscription to get. Format is
projects/{project}/subscriptions/{sub}.
options (google.gax.CallOptions) Overrides the default settings for this call,
e.g, timeout, retries etc.
Returns A google.cloud.proto.pubsub.v1.pubsub_pb2.Subscription in-
stance.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
list_snapshots(*a, **kw)
Lists the existing snapshots.
Example
Parameters
project (string) The name of the cloud project that snapshots belong to. Format is
projects/{project}.
page_size (int) The maximum number of resources contained in the underlying API response. If page streaming is performed per-resource, this parameter does not affect the return value. If page streaming is performed per-page, this determines the maximum number of resources in a page.
options (google.gax.CallOptions) Overrides the default settings for this call, e.g, timeout, retries etc.
Returns A google.gax.PageIterator instance. By default, this is an iterable of google.cloud.proto.pubsub.v1.pubsub_pb2.Snapshot instances. This object can also be configured to iterate over the pages of the response through the CallOptions parameter.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
list_subscriptions(*a, **kw)
Lists matching subscriptions.
Example
Parameters
project (string) The name of the cloud project that subscriptions belong to. Format
is projects/{project}.
page_size (int) The maximum number of resources contained in the underlying API response. If page streaming is performed per-resource, this parameter does not affect the return value. If page streaming is performed per-page, this determines the maximum number of resources in a page.
options (google.gax.CallOptions) Overrides the default settings for this call,
e.g, timeout, retries etc.
Returns A google.gax.PageIterator instance. By default, this is an iterable
of google.cloud.proto.pubsub.v1.pubsub_pb2.Subscription instances.
This object can also be configured to iterate over the pages of the response through the
CallOptions parameter.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
match_project_from_project_name(*a, **kw)
Parses the project from a project resource.
Parameters project_name (string) A fully-qualified path representing a project re-
source.
Returns A string representing the project.
match_project_from_snapshot_name(*a, **kw)
Parses the project from a snapshot resource.
Parameters snapshot_name (string) A fully-qualified path representing a snapshot re-
source.
Returns A string representing the project.
match_project_from_subscription_name(*a, **kw)
Parses the project from a subscription resource.
Parameters subscription_name (string) A fully-qualified path representing a sub-
scription resource.
Returns A string representing the project.
match_project_from_topic_name(*a, **kw)
Parses the project from a topic resource.
Parameters topic_name (string) A fully-qualified path representing a topic resource.
Returns A string representing the project.
match_snapshot_from_snapshot_name(*a, **kw)
Parses the snapshot from a snapshot resource.
Parameters snapshot_name (string) A fully-qualified path representing a snapshot re-
source.
Returns A string representing the snapshot.
match_subscription_from_subscription_name(*a, **kw)
Parses the subscription from a subscription resource.
Parameters subscription_name (string) A fully-qualified path representing a sub-
scription resource.
Returns A string representing the subscription.
match_topic_from_topic_name(*a, **kw)
Parses the topic from a topic resource.
Parameters topic_name (string) A fully-qualified path representing a topic resource.
Returns A string representing the topic.
modify_ack_deadline(*a, **kw)
Modifies the ack deadline for a specific message. This method is useful to indicate that more time is needed
to process a message by the subscriber, or to make the message available for redelivery if the processing
was interrupted. Note that this does not modify the subscription-level ackDeadlineSeconds used for
subsequent messages.
Example
Parameters
subscription (string) The name of the subscription. Format is projects/{project}/subscriptions/{sub}.
ack_ids (list[string]) List of acknowledgment IDs.
ack_deadline_seconds (int) The new ack deadline with respect to the time this
request was sent to the Pub/Sub system. For example, if the value is 10, the new ack dead-
line will expire 10 seconds after the ModifyAckDeadline call was made. Specifying
zero may immediately make the message available for another pull request. The minimum
deadline you can specify is 0 seconds. The maximum deadline you can specify is 600
seconds (10 minutes).
options (google.gax.CallOptions) Overrides the default settings for this call,
e.g, timeout, retries etc.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
modify_push_config(*a, **kw)
Modifies the PushConfig for a specified subscription.
This may be used to change a push subscription to a pull one (signified by an empty PushConfig) or vice
versa, or change the endpoint URL and other attributes of a push subscription. Messages will accumulate
for delivery continuously through the call regardless of changes to the PushConfig.
Example
Parameters
subscription (string) The name of the subscription. Format is projects/{project}/subscriptions/{sub}.
push_config (google.cloud.proto.pubsub.v1.pubsub_pb2.PushConfig) The push configuration for future deliveries.
An empty pushConfig indicates that the Pub/Sub system should stop pushing messages
from the given subscription and allow messages to be pulled and acknowledged - effec-
tively pausing the subscription if Pull is not called.
options (google.gax.CallOptions) Overrides the default settings for this call,
e.g, timeout, retries etc.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
project_path(*a, **kw)
Returns a fully-qualified project resource name string.
seek(*a, **kw)
Seeks an existing subscription to a point in time or to a given snapshot, whichever is provided in the
request.
Example
Parameters
subscription (string) The subscription to affect.
time (google.protobuf.timestamp_pb2.Timestamp) The time to seek to. Messages retained in the subscription that were published before this time are marked as acknowledged, and messages retained in the subscription that were published after this time are marked as unacknowledged. Note that this operation affects only those messages retained in the subscription (configured by the combination of message_retention_duration and retain_acked_messages). For example, if time corresponds to a point before the message retention window (or to a point before the system's notion of the subscription creation time), only retained messages will be marked as unacknowledged, and already-expunged messages will not be restored.
snapshot (string) The snapshot to seek to. The snapshot's topic must be the same as that of the provided subscription. Format is projects/{project}/snapshots/{snap}.
options (google.gax.CallOptions) Overrides the default settings for this call,
e.g, timeout, retries etc.
Returns A google.cloud.proto.pubsub.v1.pubsub_pb2.SeekResponse in-
stance.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
set_iam_policy(*a, **kw)
Sets the access control policy on the specified resource. Replaces any existing policy.
Example
Parameters
resource (string) REQUIRED: The resource for which the policy is being speci-
fied. resource is usually specified as a path. For example, a Project resource is specified
as projects/{project}.
policy (google.iam.v1.policy_pb2.Policy) REQUIRED: The complete policy to be applied to the resource. The size of the policy is limited to a few tens of kilobytes. An empty policy is a valid policy, but certain Cloud Platform services (such as Projects) might reject it.
options (google.gax.CallOptions) Overrides the default settings for this call,
e.g, timeout, retries etc.
Returns A google.iam.v1.policy_pb2.Policy instance.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
snapshot_path(*a, **kw)
Returns a fully-qualified snapshot resource name string.
subscribe(subscription, callback=None, flow_control=())
Return a representation of an individual subscription.
This method creates and returns a Consumer object (that is, a BaseConsumer subclass) bound to the topic. It does not create the subscription on the backend (or do any API call at all); it simply returns an object capable of doing these things.
If the callback argument is provided, then the open() method is automatically called on the returned
object. If callback is not provided, the subscription is returned unopened.
Note: It only makes sense to provide callback here if you have already created the subscription
manually in the API.
Parameters
subscription (str) The name of the subscription. The subscription should have
already been created (for example, by using create_subscription()).
callback (function) The callback function. This function receives the
PubsubMessage as its only argument.
flow_control (FlowControl) The flow control settings. Use this to prevent situ-
ations where you are inundated with too many messages at once.
Returns A consumer object bound to the subscription (opened if callback was provided).
subscription_path(*a, **kw)
Returns a fully-qualified subscription resource name string.
test_iam_permissions(*a, **kw)
Returns permissions that a caller has on the specified resource. If the resource does not exist, this will
return an empty set of permissions, not a NOT_FOUND error.
Example
Parameters
resource (string) REQUIRED: The resource for which the policy detail is being
requested. resource is usually specified as a path. For example, a Project resource is
specified as projects/{project}.
permissions (list[string]) The set of permissions to check for the
resource. Permissions with wildcards (such as * or storage.*) are not allowed. For
more information see IAM Overview.
options (google.gax.CallOptions) Overrides the default settings for this call,
e.g, timeout, retries etc.
Returns A google.iam.v1.iam_policy_pb2.TestIamPermissionsResponse
instance.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
topic_path(*a, **kw)
Returns a fully-qualified topic resource name string.
update_subscription(*a, **kw)
Updates an existing subscription. Note that certain properties of a subscription, such as its topic, are not
modifiable.
Example
Parameters
subscription (google.cloud.proto.pubsub.v1.pubsub_pb2.
Subscription) The updated subscription object.
update_mask (google.protobuf.field_mask_pb2.FieldMask) Indi-
cates which fields in the provided subscription to update. Must be specified and non-
empty.
options (google.gax.CallOptions) Overrides the default settings for this call,
e.g, timeout, retries etc.
Returns A google.cloud.proto.pubsub.v1.pubsub_pb2.Subscription in-
stance.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
Subscriptions
Messages
class google.cloud.pubsub_v1.subscriber.message.Message(message, ack_id, request_queue)
A representation of a single Pub/Sub message.
Note: This class should not be constructed directly; it is the responsibility of BasePolicy subclasses to do so.
Parameters
message (PubsubMessage) The message received from Pub/Sub.
ack_id (str) The ack_id received from Pub/Sub.
request_queue (queue.Queue) A queue provided by the policy that can accept
requests; the policy is responsible for handling those requests.
ack()
Acknowledge the given message.
Acknowledging a message in Pub/Sub means that you are done with it, and it will not be delivered to this
subscription again. You should avoid acknowledging messages until you have finished processing them,
so that in the event of a failure, you receive the message again.
Warning: Acks in Pub/Sub are best effort. You should always ensure that your processing code is
idempotent, as you may receive any given message more than once.
attributes
Return the attributes of the underlying Pub/Sub Message.
Returns The message's attributes.
Return type dict
data
Return the data for the underlying Pub/Sub Message.
Returns
The message data. This is always a bytestring; if you want a text string, call bytes.decode().
Return type bytes
nack()
Decline to acknowledge the given message.
This will cause the message to be re-delivered to the subscription.
publish_time
Return the time that the message was originally published.
Returns The date and time that the message was published.
Return type datetime
class google.cloud.pubsub_v1.types.AcknowledgeRequest
Request for the Acknowledge method.
subscription
The subscription whose message is being acknowledged. Format is projects/{project}/subscriptions/{sub}.
ack_ids
The acknowledgment ID for the messages being acknowledged that was returned by the Pub/Sub system
in the Pull response. Must not be empty.
class google.cloud.pubsub_v1.types.BatchSettings(max_bytes, max_latency,
max_messages)
Create new instance of BatchSettings(max_bytes, max_latency, max_messages)
max_bytes
Alias for field number 0
max_latency
Alias for field number 1
max_messages
Alias for field number 2
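For example, a sketch of settings that flush a batch after 1 megabyte, 0.05 seconds, or 500 messages, whichever comes first (the values shown are illustrative):

from google.cloud.pubsub_v1.types import BatchSettings

settings = BatchSettings(
    max_bytes=1024 * 1024,  # field number 0
    max_latency=0.05,       # field number 1
    max_messages=500,       # field number 2
)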
class google.cloud.pubsub_v1.types.CreateSnapshotRequest
Request for the CreateSnapshot method.
name
Optional user-provided name for this snapshot. If the name is not provided in the request, the server will
assign a random name for this snapshot on the same project as the subscription. Note that for REST API
requests, you must specify a name. Format is projects/{project}/snapshots/{snap}.
subscription
The subscription whose backlog the snapshot retains. Specifically, the created snapshot is guaranteed to retain: (a) The existing backlog on the subscription. More precisely, this is defined as the messages in the subscription's backlog that are unacknowledged upon the successful completion of the CreateSnapshot request; as well as: (b) Any messages published to the subscription's topic following the successful completion of the CreateSnapshot request. Format is projects/{project}/subscriptions/{sub}.
class google.cloud.pubsub_v1.types.DeleteSnapshotRequest
Request for the DeleteSnapshot method.
snapshot
The name of the snapshot to delete. Format is projects/{project}/snapshots/{snap}.
class google.cloud.pubsub_v1.types.DeleteSubscriptionRequest
Request for the DeleteSubscription method.
subscription
The subscription to delete. Format is projects/{project}/subscriptions/{sub}.
class google.cloud.pubsub_v1.types.DeleteTopicRequest
Request for the DeleteTopic method.
topic
Name of the topic to delete. Format is projects/{project}/topics/{topic}.
class google.cloud.pubsub_v1.types.FlowControl(max_bytes, max_messages, re-
sume_threshold)
Create new instance of FlowControl(max_bytes, max_messages, resume_threshold)
max_bytes
Alias for field number 0
max_messages
Alias for field number 1
resume_threshold
Alias for field number 2
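FlowControl is likewise a named tuple; a sketch of constructing one (the values are illustrative, and how it is wired into a subscriber depends on the policy in use):

from google.cloud import pubsub_v1

flow_control = pubsub_v1.types.FlowControl(
    max_bytes=10 * 1024 * 1024,  # total bytes of leased messages
    max_messages=100,            # total count of leased messages
    resume_threshold=0.8,        # resume pulling below 80% of a limit
)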
class google.cloud.pubsub_v1.types.GetSubscriptionRequest
Request for the GetSubscription method.
subscription
The name of the subscription to get. Format is projects/{project}/subscriptions/{sub}.
class google.cloud.pubsub_v1.types.GetTopicRequest
Request for the GetTopic method.
topic
The name of the topic to get. Format is projects/{project}/topics/{topic}.
class google.cloud.pubsub_v1.types.ListSnapshotsRequest
Request for the ListSnapshots method.
project
The name of the cloud project that snapshots belong to. Format is projects/{project}.
page_size
Maximum number of snapshots to return.
page_token
The value returned by the last ListSnapshotsResponse; indicates that this is a continuation of a
prior ListSnapshots call, and that the system should return the next page of data.
class google.cloud.pubsub_v1.types.ListSnapshotsResponse
Response for the ListSnapshots method.
snapshots
The resulting snapshots.
next_page_token
If not empty, indicates that there may be more snapshots that match the request; this value should be passed
in a new ListSnapshotsRequest.
class google.cloud.pubsub_v1.types.ListSubscriptionsRequest
Request for the ListSubscriptions method.
project
The name of the cloud project that subscriptions belong to. Format is projects/{project}.
page_size
Maximum number of subscriptions to return.
page_token
The value returned by the last ListSubscriptionsResponse; indicates that this is a continuation
of a prior ListSubscriptions call, and that the system should return the next page of data.
class google.cloud.pubsub_v1.types.ListSubscriptionsResponse
Response for the ListSubscriptions method.
subscriptions
The subscriptions that match the request.
next_page_token
If not empty, indicates that there may be more subscriptions that match the request; this value should be
passed in a new ListSubscriptionsRequest to get more subscriptions.
class google.cloud.pubsub_v1.types.ListTopicSubscriptionsRequest
Request for the ListTopicSubscriptions method.
topic
The name of the topic that subscriptions are attached to. Format is projects/{project}/topics/
{topic}.
page_size
Maximum number of subscription names to return.
page_token
The value returned by the last ListTopicSubscriptionsResponse; indicates that this is a contin-
uation of a prior ListTopicSubscriptions call, and that the system should return the next page of
data.
class google.cloud.pubsub_v1.types.ListTopicSubscriptionsResponse
Response for the ListTopicSubscriptions method.
subscriptions
The names of the subscriptions that match the request.
next_page_token
If not empty, indicates that there may be more subscriptions that match the request; this value should be
passed in a new ListTopicSubscriptionsRequest to get more subscriptions.
class google.cloud.pubsub_v1.types.ListTopicsRequest
Request for the ListTopics method.
project
The name of the cloud project that topics belong to. Format is projects/{project}.
page_size
Maximum number of topics to return.
page_token
The value returned by the last ListTopicsResponse; indicates that this is a continuation of a prior
ListTopics call, and that the system should return the next page of data.
class google.cloud.pubsub_v1.types.ListTopicsResponse
Response for the ListTopics method.
topics
The resulting topics.
next_page_token
If not empty, indicates that there may be more topics that match the request; this value should be passed in
a new ListTopicsRequest.
class google.cloud.pubsub_v1.types.ModifyAckDeadlineRequest
Request for the ModifyAckDeadline method.
subscription
The name of the subscription. Format is projects/{project}/subscriptions/{sub}.
ack_ids
List of acknowledgment IDs.
ack_deadline_seconds
The new ack deadline with respect to the time this request was sent to the Pub/Sub system. For example,
if the value is 10, the new ack deadline will expire 10 seconds after the ModifyAckDeadline call
was made. Specifying zero may immediately make the message available for another pull request. The
minimum deadline you can specify is 0 seconds. The maximum deadline you can specify is 600 seconds
(10 minutes).
class google.cloud.pubsub_v1.types.ModifyPushConfigRequest
Request for the ModifyPushConfig method.
subscription
The name of the subscription. Format is projects/{project}/subscriptions/{sub}.
push_config
The push configuration for future deliveries. An empty pushConfig indicates that the Pub/Sub system
should stop pushing messages from the given subscription and allow messages to be pulled and acknowl-
edged - effectively pausing the subscription if Pull is not called.
class google.cloud.pubsub_v1.types.PublishRequest
Request for the Publish method.
topic
The messages in the request will be published on this topic. Format is projects/{project}/
topics/{topic}.
messages
The messages to publish.
class google.cloud.pubsub_v1.types.PublishResponse
Response for the Publish method.
message_ids
The server-assigned ID of each published message, in the same order as the messages in the request. IDs
are guaranteed to be unique within the topic.
class google.cloud.pubsub_v1.types.PubsubMessage
A message's data and its attributes. The message payload must not be empty; it must contain either a non-empty
data field, or at least one attribute.
data
The message payload.
attributes
Optional attributes for this message.
message_id
ID of this message, assigned by the server when the message is published. Guaranteed to be unique within
the topic. This value may be read by a subscriber that receives a PubsubMessage via a Pull call or a
push delivery. It must not be populated by the publisher in a Publish call.
publish_time
The time at which the message was published, populated by the server when it receives the Publish call.
It must not be populated by the publisher in a Publish call.
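For illustration, the publisher client builds PubsubMessage objects for you: data is passed as a bytestring, and any extra keyword arguments become attributes. The project and topic names below are illustrative:

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path('my-project', 'my-topic')

future = publisher.publish(topic_path, b'payload', origin='example')
message_id = future.result()  # blocks until the server assigns an ID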
class google.cloud.pubsub_v1.types.PullRequest
Request for the Pull method.
subscription
The subscription from which messages should be pulled. Format is projects/{project}/
subscriptions/{sub}.
return_immediately
If this field is set to true, the system will respond immediately even if there are no messages available to
return in the Pull response. Otherwise, the system may wait (for a bounded amount of time) until at least
one message is available, rather than returning no messages. The client may cancel the request if it does
not wish to wait any longer for the response.
max_messages
The maximum number of messages returned for this request. The Pub/Sub system may return fewer than
the number specified.
class google.cloud.pubsub_v1.types.PullResponse
Response for the Pull method.
received_messages
Received Pub/Sub messages. The Pub/Sub system will return zero messages if there are no more available
in the backlog. The Pub/Sub system may return fewer than the maxMessages requested even if there are
more messages available in the backlog.
class google.cloud.pubsub_v1.types.PushConfig
Configuration for a push delivery endpoint.
push_endpoint
A URL locating the endpoint to which messages should be pushed. For example, a Webhook endpoint
might use https://example.com/push.
attributes
Endpoint configuration attributes. Every endpoint has a set of API supported attributes that can be used to
control different aspects of the message delivery. The currently supported attribute is x-goog-version,
which you can use to change the format of the pushed message. This attribute indicates the version of
the data expected by the endpoint. This controls the shape of the pushed message (i.e., its fields and
metadata). The endpoint version is based on the version of the Pub/Sub API. If not present during the
CreateSubscription call, it will default to the version of the API used to make such a call. If not
present during a ModifyPushConfig call, its value will not be changed. GetSubscription calls
will always return a valid version, even if the subscription was created without this attribute. The possible
values for this attribute are: - v1beta1: uses the push format defined in the v1beta1 Pub/Sub API. - v1
or v1beta2: uses the push format defined in the v1 Pub/Sub API.
class google.cloud.pubsub_v1.types.ReceivedMessage
A message and its corresponding acknowledgment ID.
ack_id
This ID can be used to acknowledge the received message.
message
The message.
class google.cloud.pubsub_v1.types.SeekRequest
Request for the Seek method.
subscription
The subscription to affect.
time
The time to seek to. Messages retained in the subscription that were published before this time
are marked as acknowledged, and messages retained in the subscription that were published after
this time are marked as unacknowledged. Note that this operation affects only those messages re-
tained in the subscription (configured by the combination of message_retention_duration and
retain_acked_messages). For example, if time corresponds to a point before the message reten-
tion window (or to a point before the system's notion of the subscription creation time), only retained
messages will be marked as unacknowledged, and already-expunged messages will not be restored.
snapshot
The snapshot to seek to. The snapshot's topic must be the same as that of the provided subscription. Format
is projects/{project}/snapshots/{snap}.
class google.cloud.pubsub_v1.types.Snapshot
A snapshot resource.
name
The name of the snapshot.
topic
The name of the topic from which this snapshot is retaining messages.
expire_time
The snapshot is guaranteed to exist up until this time. A newly-created snapshot expires no later than
7 days from the time of its creation. Its exact lifetime is determined at creation by the existing backlog
in the source subscription. Specifically, the lifetime of the snapshot is 7 days - (age of oldest
unacked message in the subscription). For example, consider a subscription whose oldest
unacked message is 3 days old. If a snapshot is created from this subscription, the snapshot, which will
always capture this 3-day-old backlog as long as the snapshot exists, will expire in 4 days.
labels
User labels.
class google.cloud.pubsub_v1.types.StreamingPullRequest
Request for the StreamingPull streaming RPC method. This request is used to establish the initial stream
as well as to stream acknowledgements and ack deadline modifications from the client to the server.
subscription
The subscription for which to initialize the new stream. This must be provided in the first request on
the stream, and must not be set in subsequent requests from client to server. Format is projects/
{project}/subscriptions/{sub}.
ack_ids
List of acknowledgement IDs for acknowledging previously received messages (received on this stream
or a different stream). If an ack ID has expired, the corresponding message may be redelivered later.
Acknowledging a message more than once will not result in an error. If the acknowledgement ID is
malformed, the stream will be aborted with status INVALID_ARGUMENT.
modify_deadline_seconds
The list of new ack deadlines for the IDs listed in modify_deadline_ack_ids. The size of this list
must be the same as the size of modify_deadline_ack_ids. If it differs the stream will be aborted
with INVALID_ARGUMENT. Each element in this list is applied to the element in the same position in
modify_deadline_ack_ids. The new ack deadline is with respect to the time this request was sent
to the Pub/Sub system. Must be >= 0. For example, if the value is 10, the new ack deadline will expire
10 seconds after this request is received. If the value is 0, the message is immediately made available for
another streaming or non-streaming pull request. If the value is < 0 (an error), the stream will be aborted
with status INVALID_ARGUMENT.
modify_deadline_ack_ids
List of acknowledgement IDs whose deadline will be modified based on the corresponding element in
modify_deadline_seconds. This field can be used to indicate that more time is needed to pro-
cess a message by the subscriber, or to make the message available for redelivery if the processing was
interrupted.
stream_ack_deadline_seconds
The ack deadline to use for the stream. This must be provided in the first request on the stream, but it can
also be updated on subsequent requests from client to server. The minimum deadline you can specify is 10
seconds. The maximum deadline you can specify is 600 seconds (10 minutes).
class google.cloud.pubsub_v1.types.StreamingPullResponse
Response for the StreamingPull method. This response is used to stream messages from the server to the
client.
received_messages
Received Pub/Sub messages. This will not be empty.
class google.cloud.pubsub_v1.types.Subscription
A subscription resource.
name
The name of the subscription. It must have the format "projects/{project}/subscriptions/
{subscription}". {subscription} must start with a letter, and contain only letters
([A-Za-z]), numbers ([0-9]), dashes (-), underscores (_), periods (.), tildes (~), plus (+) or percent
signs (%). It must be between 3 and 255 characters in length, and it must not start with "goog".
topic
The name of the topic from which this subscription is receiving messages. Format is projects/
{project}/topics/{topic}. The value of this field will be _deleted-topic_ if the topic
has been deleted.
push_config
If push delivery is used with this subscription, this field is used to configure it. An empty pushConfig
signifies that the subscriber will pull and ack messages using API methods.
ack_deadline_seconds
This value is the maximum time after a subscriber receives a message before the subscriber should ac-
knowledge the message. After message delivery but before the ack deadline expires and before the mes-
sage is acknowledged, it is an outstanding message and will not be delivered again during that time (on
a best-effort basis). For pull subscriptions, this value is used as the initial value for the ack deadline. To
override this value for a given message, call ModifyAckDeadline with the corresponding ack_id if
using pull. The minimum custom deadline you can specify is 10 seconds. The maximum custom deadline
you can specify is 600 seconds (10 minutes). If this parameter is 0, a default value of 10 seconds is used.
For push delivery, this value is also used to set the request timeout for the call to the push endpoint. If the
subscriber never acknowledges the message, the Pub/Sub system will eventually redeliver the message.
retain_acked_messages
Indicates whether to retain acknowledged messages. If true, then messages are not expunged
from the subscription's backlog, even if they are acknowledged, until they fall out of the
message_retention_duration window.
message_retention_duration
How long to retain unacknowledged messages in the subscription's backlog, from the moment a message
is published. If retain_acked_messages is true, then this also configures the retention of acknowl-
edged messages, and thus configures how far back in time a Seek can be done. Defaults to 7 days. Cannot
be more than 7 days or less than 10 minutes.
labels
User labels.
class google.cloud.pubsub_v1.types.Topic
A topic resource.
name
The name of the topic. It must have the format "projects/{project}/topics/{topic}".
{topic} must start with a letter, and contain only letters ([A-Za-z]), numbers ([0-9]), dashes (-),
underscores (_), periods (.), tildes (~), plus (+) or percent signs (%). It must be between 3 and 255
characters in length, and it must not start with "goog".
labels
User labels.
class google.cloud.pubsub_v1.types.UpdateSnapshotRequest
Request for the UpdateSnapshot method.
snapshot
The updated snapshot object.
update_mask
Indicates which fields in the provided snapshot to update. Must be specified and non-empty.
class google.cloud.pubsub_v1.types.UpdateSubscriptionRequest
Request for the UpdateSubscription method.
subscription
The updated subscription object.
update_mask
Indicates which fields in the provided subscription to update. Must be specified and non-empty.
class google.cloud.pubsub_v1.types.UpdateTopicRequest
Request for the UpdateTopic method.
topic
The topic to update.
update_mask
Indicates which fields in the provided topic to update. Must be specified and non-empty.
Resource Manager
11.1 Client
Parameters
credentials (Credentials) (Optional) The OAuth2 Credentials to use for this
client. If not passed (and if no _http object is passed), falls back to the default inferred
from the environment.
_http (Session) (Optional) HTTP object to make requests. Can be any object that
defines request() with the same interface as requests.Session.request(). If
not passed, an _http object is created that is bound to the credentials for the current
object. This parameter should be considered private, and could change in the future.
SCOPE = ('https://www.googleapis.com/auth/cloud-platform',)
The scopes required for authenticating as a Resource Manager consumer.
fetch_project(project_id)
Fetch an existing project and its relevant metadata by ID.
Note: If the project does not exist, this will raise a NotFound error.
list_projects(filter_params=None, page_size=None)
List the projects visible to this client.
Example: list all projects with the label 'environment' set to 'prod' (filtering by labels); a sketch
appears after this method's documentation. For the full filter syntax, see
https://cloud.google.com/resource-manager/reference/rest/v1beta1/projects/list
Parameters
filter_params (dict) (Optional) A dictionary of filter options where each key is a
property to filter on, and each value is the (case-insensitive) value to check (or the glob *
to check for existence of the property). See the example above for more details.
page_size (int) (Optional) Maximum number of projects to return in a single page.
If not passed, defaults to a value set by the API.
Return type Iterator
Returns Iterator of all Project instances that the current user has access to.
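A sketch of filtering by labels, as in the example above (the label key and value are illustrative):

from google.cloud import resource_manager

client = resource_manager.Client()

for project in client.list_projects({'labels.environment': 'prod'}):
    print(project.project_id)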
11.2 Projects
Utility for managing projects via the Cloud Resource Manager API.
class google.cloud.resource_manager.project.Project(project_id, client, name=None,
labels=None)
Bases: object
Projects are containers for your work on Google Cloud Platform.
See https://cloud.google.com/resource-manager/reference/rest/v1beta1/projects
Parameters
project_id (str) The globally unique ID of the project.
client (google.cloud.resource_manager.client.Client) The Client
used with this project.
name (str) The display name of the project.
labels (dict) A dictionary of labels associated with the project.
create(client=None)
API call: create the project via a POST request.
See https://cloud.google.com/resource-manager/reference/rest/v1beta1/projects/create
Parameters client (google.cloud.resource_manager.client.Client or
NoneType) the client to use. If not passed, falls back to the client stored on the current
project.
delete(client=None, reload_data=False)
API call: delete the project via a DELETE request.
See https://cloud.google.com/resource-manager/reference/rest/v1beta1/projects/delete
This actually changes the status (lifecycleState) from ACTIVE to DELETE_REQUESTED. Later
(it's not specified when), the project will move into the DELETE_IN_PROGRESS state, which means the
deletion has actually begun.
Parameters
client (google.cloud.resource_manager.client.Client or
NoneType) the client to use. If not passed, falls back to the client stored on the
current project.
reload_data (bool) Whether to reload the project with the latest state. If you want
to get the updated status, you'll want this set to True, as the DELETE method doesn't
send back the updated project. Default: False.
exists(client=None)
API call: test the existence of a project via a GET request.
See https://cloud.google.com/resource-manager/reference/rest/v1beta1/projects/get
Parameters client (google.cloud.resource_manager.client.Client or
NoneType) the client to use. If not passed, falls back to the client stored on the current
project.
Return type bool
Returns Boolean indicating existence of the project.
classmethod from_api_repr(resource, client)
Factory: construct a project given its API representation.
Parameters
resource (dict) project resource representation returned from the API
client (google.cloud.resource_manager.client.Client) The Client
used with this project.
Return type google.cloud.resource_manager.project.Project
Returns The project created.
full_name
Fully-qualified name (i.e., 'projects/purple-spaceship-123').
path
URL for the project (i.e., '/projects/purple-spaceship-123').
reload(client=None)
API call: reload the project via a GET request.
This method will reload the newest metadata for the project. If you've created a new Project instance
via Client.new_project(), this method will retrieve project metadata.
Warning: This will overwrite any local changes you've made and not saved via update().
See https://cloud.google.com/resource-manager/reference/rest/v1beta1/projects/get
Parameters client (google.cloud.resource_manager.client.Client or
NoneType) the client to use. If not passed, falls back to the client stored on the current
project.
set_properties_from_api_repr(resource)
Update specific properties from its API representation.
undelete(client=None, reload_data=False)
API call: undelete the project via a POST request.
See https://cloud.google.com/resource-manager/reference/rest/v1beta1/projects/undelete
This actually changes the project status (lifecycleState) from DELETE_REQUESTED to ACTIVE.
If the project has already reached a status of DELETE_IN_PROGRESS, this request will fail and the
project cannot be restored.
Parameters
client (google.cloud.resource_manager.client.Client or
NoneType) the client to use. If not passed, falls back to the client stored on the
current project.
reload_data (bool) Whether to reload the project with the latest state. If you want
to get the updated status, you'll want this set to True, as the DELETE method doesn't
send back the updated project. Default: False.
update(client=None)
API call: update the project via a PUT request.
See https://cloud.google.com/resource-manager/reference/rest/v1beta1/projects/update
Parameters client (google.cloud.resource_manager.client.Client or
NoneType) the client to use. If not passed, falls back to the client stored on the current
project.
The Cloud Resource Manager API provides methods that you can use to programmatically manage your projects in
the Google Cloud Platform. With this API, you can do the following:
Get a list of all projects associated with an account
Create new projects
Update existing projects
Delete projects
Undelete, or recover, projects that you don't want to delete
Note: Don't forget to look at the Authentication section below. It's slightly different from the rest of this library.
Warning: Alpha
The projects.create() API method is in the Alpha stage. It might be changed in backward-incompatible ways and
is not recommended for production use. It is not subject to any SLA or deprecation policy. Access to this feature
is currently invite-only. For an invitation, contact our sales team at https://cloud.google.com/contact.
11.3 Authentication
Unlike the other APIs, the Resource Manager API is focused on managing your various projects inside Google Cloud
Platform. What this means (currently, as of August 2015) is that you can't use a Service Account to work with some
parts of this API (for example, creating projects).
The reason is actually pretty simple: if your API call is trying to do something like create a project, what project's
Service Account can you use? Currently none.
This means that for this API you should always use the credentials provided by the Google Cloud SDK, which you
can get by running gcloud auth login.
Once you run that command, google-cloud-python will automatically pick up the credentials, and you can use
the automatic discovery feature of the library.
Start by authenticating:
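A minimal sketch, assuming gcloud auth login has already been run so that the credentials can be discovered automatically:

from google.cloud import resource_manager

client = resource_manager.Client()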
Runtimeconfig
Note: This will not make an HTTP request; it simply instantiates a config object owned by this client.
12.2 Configuration
Parameters
variable_name (str) The name of the variable to retrieve.
client (Client) (Optional) The client to use. If not passed, falls back to the client
stored on the current config.
Return type google.cloud.runtimeconfig.variable.Variable or None
Returns The variable object if it exists, otherwise None.
Note: This will not make an HTTP request; it simply instantiates a variable object owned by this config.
12.3 Variables
value
Value of the variable, as bytes.
See https://cloud.google.com/deployment-manager/runtime-configurator/reference/rest/v1beta1/projects.
configs.variables
Return type bytes or NoneType
Returns The value of the variable or None if the property is not set locally.
12.4 Modules
Note: This will not make an HTTP request; it simply instantiates a config object owned by this client.
Spanner
13.1 Client
To use the API, the Client class defines a high-level interface which handles authorization and creating other objects:
When creating a Client, the user_agent and timeout_seconds arguments have sensible defaults
(DEFAULT_USER_AGENT and DEFAULT_TIMEOUT_SECONDS). However, you may override them, and these will
be used throughout all API requests made with the client you create.
13.1.2 Configuration
Tip: Be sure to use the Project ID, not the Project Number.
After a Client, the next highest-level object is an Instance. You'll need one before you can interact with
databases.
Next, learn about the Instance Admin API.
After creating a Client, you can interact with individual instances for a project.
Each instance within a project maps to a named instance configuration, specifying the location and other parameters
for a set of instances. These configurations are defined by the server, and cannot be changed.
To list all instance configurations available to your project, use the list_instance_configs() method of
the client:
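A minimal sketch:

configs = list(client.list_instance_configs())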
To fetch a single instance configuration, use the get_instance_configuration() method of the client:
config = client.get_instance_configuration('config-name')
If you want a comprehensive list of all existing instances, use the list_instances() method of the client:
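A minimal sketch:

instances = list(client.list_instances())

To create a new instance wrapper bound to one of the configurations, use the instance() factory: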
config = configs[0]
instance = client.instance(instance_id,
                           configuration_name=config.name,
                           node_count=10,
                           display_name='My Instance')
configuration_name is the name of the instance configuration to which the instance will be bound. It must
be one of the names configured for your project, discoverable via google.cloud.spanner.client.
Client.list_instance_configs().
node_count is a positive integer count of the number of nodes used by the instance. More nodes allow for
higher performance, but at a higher billing cost.
display_name is optional. When not provided, display_name defaults to the instance_id value.
You can also use Client.instance() to create a local wrapper for an instance that has already been created:
instance = client.instance(existing_instance_id)
instance.reload()
After creating the instance object, use its create() method to trigger its creation on the server:
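A minimal sketch (create() returns a long-running Operation, as described below):

operation = instance.create()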
After creating the instance object, reload its server-side configuration using its reload() method:
instance.reload()
This will load display_name, config_name, and node_count for the existing instance object from the
back-end.
After creating the instance object, you can update its metadata via its update() method:

instance.update()

To delete the instance, use its delete() method:

instance.delete()
The create() and update() methods of the instance object trigger long-running operations on the server, and
return instances of the Operation class.
You can check if a long-running operation has finished by using its finished() method:
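A minimal sketch:

operation = instance.create()
if operation.finished():
    instance.reload()  # load the instance's server-side configuration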
Note: Once an Operation object has returned True from its finished() method, the object should not be
re-used. Subsequent calls to finished() will result in a ValueError being raised.
After creating an Instance, you can interact with individual databases for that instance.
To list all existing databases for an instance, use its list_databases() method:
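A minimal sketch:

databases = list(instance.list_databases())

To create a wrapper for an existing database, use the database() factory: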
database = instance.database(existing_database_id)
After creating the database object, use its create() method to trigger its creation on the server:
operation = database.create()
After creating the database object, you can apply additional DDL statements via its update_ddl() method:

operation = database.update_ddl(ddl_statements)

To drop the database, use its drop() method:

database.drop()
The create() and update_ddl() methods of the database object trigger long-running operations on the server,
and return instances of the Operation class.
You can check if a long-running operation has finished by using its finished() method:
Note: Once an Operation object has returned True from its finished() method, the object should not be
re-used. Subsequent calls to finished() will result in a ValueError being raised.
Note: snapshot() returns an object intended to be used as a Python context manager (i.e., as the target of a with
statement). Use the instance, and any result sets returned by its read or execute_sql methods, only inside the
block created by the with statement.
See Read-only Transactions via Snapshots for more complete examples of snapshot usage.
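A minimal sketch of that pattern (the query is illustrative):

with database.snapshot() as snapshot:
    result = snapshot.execute_sql('SELECT 1')
    for row in result.rows:
        print(row)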
A batch represents a bundled set of insert/upsert/update/delete operations on the rows of tables in the database.
Note: batch() returns an object intended to be used as a Python context manager (i.e., as the target of a with
statement). It applies any changes made inside the block of its with statement when exiting the block, unless an
exception is raised within the block. Use the batch only inside the block created by the with statement.
A transaction represents the union of a strong snapshot and a batch: it allows read and execute_sql operations,
and accumulates insert/upsert/update/delete operations.
Because other applications may be performing concurrent updates which would invalidate the reads / queries, the
work done by a transaction needs to be bundled as a retryable unit of work function, which takes the transaction as
a required argument:
def unit_of_work(transaction):
    result = transaction.execute_sql(QUERY)

database.run_in_transaction(unit_of_work)
Note: run_in_transaction() commits the transaction automatically if the unit of work function returns
without raising an exception.
Note: run_in_transaction() retries the unit of work function if the read / query operations or the commit
are aborted due to concurrent updates.
Under the covers, the snapshot, batch, and run_in_transaction methods use a pool of Session objects
to manage their communication with the back-end. You can configure one of the pools manually to control the number
of sessions, timeouts, etc., and then pass it to the Database constructor:
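For example, with the FixedSizePool described later in this chapter (the pool parameters are illustrative):

from google.cloud.spanner.pool import FixedSizePool

pool = FixedSizePool(size=10, default_timeout=5)
database = instance.database(DATABASE_NAME, pool=pool)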
Note that creating a Database object with a pool may presume that the database itself already exists, as the pool
may need to pre-create sessions (rather than creating them on demand, as the default implementation does).
You can supply your own pool implementation, which must satisfy the contract laid out in
AbstractSessionPool:
class MyCustomPool(AbstractSessionPool):
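    # A sketch of the required contract; the method names come from
    # AbstractSessionPool (documented later in this chapter), and the
    # bodies here are illustrative placeholders.

    def bind(self, database):
        self._database = database

    def get(self):
        return self._database.session()

    def put(self, session):
        pass  # e.g., discard or recycle the session

    def clear(self):
        pass  # delete any sessions held by the pool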
See Advanced Session Pool Topics for more advanced coverage of session pools.
A Batch represents a set of data modification operations to be performed on tables in a dataset. Use of a Batch does
not require creating an explicit Snapshot or Transaction. Until commit() is called on a Batch, no changes
are propagated to the back-end.
batch = session.batch()
Batch.insert() adds one or more new records to a table. Fails if any of the records already exists.
batch.insert(
    'citizens', columns=['email', 'first_name', 'last_name', 'age'],
    values=[
        ['[email protected]', 'Phred', 'Phlyntstone', 32],
        ['[email protected]', 'Bharney', 'Rhubble', 31],
    ])
Note: Ensure that data being sent for STRING columns uses a text string (str in Python 3; unicode in Python 2).
Additionally, if you are writing data intended for a BYTES column, you must base64 encode it.
Batch.update() updates one or more existing records in a table. Fails if any of the records does not already exist.
batch.update(
    'citizens', columns=['email', 'age'],
    values=[
        ['[email protected]', 33],
        ['[email protected]', 32],
    ])
Note: Ensure that data being sent for STRING columns uses a text string (str in Python 3; unicode in Python 2).
Additionally, if you are writing data intended for a BYTES column, you must base64 encode it.
Batch.insert_or_update() inserts or updates one or more records in a table. Existing rows have values for
the supplied columns overwritten; other column values are preserved.
batch.insert_or_update(
    'citizens', columns=['email', 'first_name', 'last_name', 'age'],
    values=[
        ['[email protected]', 'Phred', 'Phlyntstone', 31],
        ['[email protected]', 'Wylma', 'Phlyntstone', 29],
    ])
Note: Ensure that data being sent for STRING columns uses a text string (str in Python 3; unicode in Python 2).
Additionally, if you are writing data intended for a BYTES column, you must base64 encode it.
Batch.replace() inserts or updates one or more records in a table. Existing rows have values for the supplied
columns overwritten; other column values are set to null.
batch.replace(
    'citizens', columns=['email', 'first_name', 'last_name', 'age'],
    values=[
        ['[email protected]', 'Bharney', 'Rhubble', 30],
        ['[email protected]', 'Bhettye', 'Rhubble', 30],
    ])
Note: Ensure that data being sent for STRING columns uses a text string (str in Python 3; unicode in Python 2).
Additionally, if you are writing data intended for a BYTES column, you must base64 encode it.
Batch.delete() removes one or more records from a table. Non-existent rows do not cause errors.
to_delete = KeySet(keys=[
    ('[email protected]',),
    ('[email protected]',),
])

batch.delete('citizens', to_delete)
After describing the modifications to be made to table data via the Batch.insert(), Batch.update(),
Batch.insert_or_update(), Batch.replace(), and Batch.delete() methods above, send them to
the back-end by calling Batch.commit(), which makes the Commit API call.
batch.commit()
Rather than calling Batch.commit() manually, you can use the Batch instance as a context manager, and have it
called automatically if the with block exits without raising an exception.
to_delete = KeySet(keys=[
    ('[email protected]',),
    ('[email protected]',),
])

with session.batch() as batch:
    batch.insert(
        'citizens', columns=['email', 'first_name', 'last_name', 'age'],
        values=[
            ['[email protected]', 'Phred', 'Phlyntstone', 32],
            ['[email protected]', 'Bharney', 'Rhubble', 31],
        ])

    batch.update(
        'citizens', columns=['email', 'age'],
        values=[
            ['[email protected]', 33],
            ['[email protected]', 32],
        ])

    ...

    batch.delete('citizens', to_delete)
A Snapshot represents a read-only transaction: when multiple read operations are performed via a Snapshot, the
results are consistent as of a particular point in time.
To begin using a snapshot with the default bound (which is strong), meaning all reads are performed at a timestamp
where all previously-committed transactions are visible:
snapshot = session.snapshot()
You can also specify a weaker bound, which can either be to perform all reads as of a given timestamp:
import datetime
from pytz import UTC

TIMESTAMP = datetime.datetime.utcnow().replace(tzinfo=UTC)
snapshot = session.snapshot(read_timestamp=TIMESTAMP)
Read data for selected rows from a table in the session's database. Calls the Read API, which returns all rows specified
in key_set, or else fails if the result set is too large.
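A minimal sketch, mirroring the transaction example later in this section:

result = snapshot.read(
    table='table-name', columns=['first_name', 'last_name', 'age'],
    key_set=['[email protected]', '[email protected]'])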
Note: The result set returned by execute_sql() must not be iterated after the snapshot's session has been
returned to the database's session pool. Therefore, unless your application creates sessions manually, perform all
iteration within the context of the with database.snapshot() block.
Note: If streaming a chunk raises an exception, the application can retry the read, passing the resume_token
from the StreamingResultSet which raised the error, as in the analogous execute_sql() example below.
Read data from a query against tables in the session's database. Calls the ExecuteSql API, which returns all rows
matching the query, or else fails if the result set is too large.
Note: The result set returned by execute_sql() must not be iterated after the snapshot's session has been
returned to the database's session pool. Therefore, unless your application creates sessions manually, perform all
iteration within the context of the with database.snapshot() block.
Note: If streaming a chunk raises an exception, the application can retry the query, passing the resume_token
from StreamingResultSet which raised the error. E.g.:
result = snapshot.execute_sql(QUERY)

while True:
    try:
        for row in result.rows:
            print(row)
    except Exception:
        result = snapshot.execute_sql(
            QUERY, resume_token=result.resume_token)
        continue
    else:
        break
A Transaction represents a transaction: when the transaction commits, it will send any accumulated mutations to
the server.
transaction = session.transaction()
Read data for selected rows from a table in the session's database. Calls the Read API, which returns all rows specified
in key_set, or else fails if the result set is too large.
result = transaction.read(
    table='table-name', columns=['first_name', 'last_name', 'age'],
    key_set=['[email protected]', '[email protected]'])
Note: If streaming a chunk fails due to a resumable error, Session.read() retries the StreamingRead API
request, passing the resume_token from the last partial result streamed.
Read data from a query against tables in the session's database. Calls the ExecuteSql API, which returns all rows
matching the query, or else fails if the result set is too large.
QUERY = (
    'SELECT e.first_name, e.last_name, p.telephone '
    'FROM employees as e, phones as p '
    'WHERE p.employee_id = e.employee_id')
result = transaction.execute_sql(QUERY)
Transaction.insert() adds one or more new records to a table. Fails if any of the records already exists.
transaction.insert(
    'citizens', columns=['email', 'first_name', 'last_name', 'age'],
    values=[
        ['[email protected]', 'Phred', 'Phlyntstone', 32],
        ['[email protected]', 'Bharney', 'Rhubble', 31],
    ])
Note: Ensure that data being sent for STRING columns uses a text string (str in Python 3; unicode in Python 2).
Additionally, if you are writing data intended for a BYTES column, you must base64 encode it.
Transaction.update() updates one or more existing records in a table. Fails if any of the records does not
already exist.
transaction.update(
    'citizens', columns=['email', 'age'],
    values=[
        ['[email protected]', 33],
        ['[email protected]', 32],
    ])
Note: Ensure that data being sent for STRING columns uses a text string (str in Python 3; unicode in Python 2).
Additionally, if you are writing data intended for a BYTES column, you must base64 encode it.
Transaction.insert_or_update() inserts or updates one or more records in a table. Existing rows have
values for the supplied columns overwritten; other column values are preserved.
transaction.insert_or_update(
    'citizens', columns=['email', 'first_name', 'last_name', 'age'],
    values=[
        ['[email protected]', 'Phred', 'Phlyntstone', 31],
    ])
Note: Ensure that data being sent for STRING columns uses a text string (str in Python 3; unicode in Python 2).
Additionally, if you are writing data intended for a BYTES column, you must base64 encode it.
Transaction.replace() inserts or updates one or more records in a table. Existing rows have values for the
supplied columns overwritten; other column values are set to null.
transaction.replace(
    'citizens', columns=['email', 'first_name', 'last_name', 'age'],
    values=[
        ['[email protected]', 'Bharney', 'Rhubble', 30],
        ['[email protected]', 'Bhettye', 'Rhubble', 30],
    ])
Note: Ensure that data being sent for STRING columns uses a text string (str in Python 3; unicode in Python 2).
Additionally, if you are writing data intended for a BYTES column, you must base64 encode it.
Transaction.delete() removes one or more records from a table. Non-existent rows do not cause errors.
transaction.delete(
    'citizens', keyset=['[email protected]', '[email protected]'])
After describing the modifications to be made to table data via the Transaction.insert(), Transaction.
update(), Transaction.insert_or_update(), Transaction.replace(), and Transaction.
delete() methods above, send them to the back-end by calling Transaction.commit(), which makes the
Commit API call.
transaction.commit()
After describing the modifications to be made to table data via the Transaction.insert(), Transaction.
update(), Transaction.insert_or_update(), Transaction.replace(), and Transaction.
delete() methods above, cancel the transaction on the back-end by calling Transaction.rollback(),
which makes the Rollback API call.
transaction.rollback()
Rather than calling Transaction.commit() or Transaction.rollback() manually, you can use the
Transaction instance as a context manager: in that case, the transaction's commit() method will be called
automatically if the with block exits without raising an exception.
If an exception is raised inside the with block, the transaction's rollback() method will be called instead.
with session.transaction() as transaction:
    transaction.insert(
        'citizens', columns=['email', 'first_name', 'last_name', 'age'],
        values=[
            ['[email protected]', 'Phred', 'Phlyntstone', 32],
            ['[email protected]', 'Bharney', 'Rhubble', 31],
        ])

    transaction.update(
        'citizens', columns=['email', 'age'],
        values=[
            ['[email protected]', 33],
            ['[email protected]', 32],
        ])

    ...

    transaction.delete(
        'citizens', keyset=['[email protected]', '[email protected]'])
You can supply your own pool implementation, which must satisfy the contract laid out in
AbstractSessionPool:
class MyCustomPool(AbstractSessionPool):
pool = MyCustomPool(custom_param=42)
database = instance.database(DATABASE_NAME, pool=pool)
Some applications may need to minimize latency for read operations, including particularly the overhead of making an
API request to create or refresh a session. PingingPool is designed for such applications, which need to configure
a background thread to do the work of keeping the sessions fresh.
Create an instance of PingingPool:
client = Client()
instance = client.instance(INSTANCE_NAME)
pool = PingingPool(size=10, default_timeout=5, ping_interval=300)
database = instance.database(DATABASE_NAME, pool=pool)
Set up a background thread to ping the pool's sessions, keeping them from becoming stale:
import threading
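# A sketch of the rest of the setup; the thread name is illustrative,
# and ping() is the pool method documented below for refreshing sessions.
background = threading.Thread(target=pool.ping, name='ping-pool')
background.daemon = True
background.start()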
Some applications may need to minimize latency for read/write operations, including particularly the overhead of mak-
ing an API request to create or refresh a session or to begin a session's transaction. TransactionPingingPool
is designed for such applications, which need to configure a background thread to do the work of keeping the sessions
fresh and starting their transactions after use.
Create an instance of TransactionPingingPool:
client = Client()
instance = client.instance(INSTANCE_NAME)
pool = TransactionPingingPool(size=10, default_timeout=5, ping_interval=300)
database = instance.database(DATABASE_NAME, pool=pool)
Set up a background thread to ping the pool's sessions, keeping them from becoming stale, and ensuring that each
session has a new transaction started before it is used:
import threading
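# A sketch, as above (the thread name is illustrative); ping() keeps
# this pool's sessions fresh as well.
background = threading.Thread(target=pool.ping, name='ping-pool')
background.daemon = True
background.start()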
Note: Since the Cloud Spanner API requires the gRPC transport, no _http argument is accepted by this class.
Parameters
project (str or unicode) (Optional) The ID of the project which owns the instances,
tables and data. If not provided, will attempt to determine from the environment.
credentials (OAuth2Credentials or NoneType) (Optional) The OAuth2 Cre-
dentials to use for this client. If not provided, defaults to the Google Application Default
Credentials.
user_agent (str) (Optional) The user agent to be used with API request. Defaults to
DEFAULT_USER_AGENT.
Raises ValueError if both read_only and admin are True
SCOPE = ('https://www.googleapis.com/auth/spanner.admin',)
The scopes required for Google Cloud Spanner.
copy()
Make a copy of this client.
Copies the local data stored as simple types but does not copy the current state of any open connections
with the Cloud Spanner API.
Return type Client
Returns A copy of the current client.
credentials
Getter for client's credentials.
Return type OAuth2Credentials
Returns The credentials stored on the client.
database_admin_api
Helper for database-related API calls.
instance(instance_id, configuration_name=None, display_name=None, node_count=1)
Factory to create an instance associated with this client.
Parameters
instance_id (str) The ID of the instance.
configuration_name (string) (Optional) Name of the instance configura-
tion used to set up the instances cluster, in the form: projects/<project>/
instanceConfigs/<config>. Required for instances which do not yet exist.
display_name (str) (Optional) The display name for the instance in the Cloud Con-
sole UI. (Must be between 4 and 30 characters.) If this value is not set in the constructor,
will fall back to the instance ID.
node_count (int) (Optional) The number of nodes in the instances cluster; used to
set up the instances cluster.
Return type Instance
Returns an instance owned by this client.
instance_admin_api
Helper for instance-related API calls.
list_instance_configs(page_size=None, page_token=None)
List available instance configurations for the clients project.
See RPC docs.
Parameters
page_size (int) (Optional) Maximum number of results to return.
page_token (str) (Optional) Token for fetching next page of results.
Return type Iterator
Returns Iterator of InstanceConfig resources within the clients project.
list_instances(filter_='', page_size=None, page_token=None)
List instances for the clients project.
See https://cloud.google.com/spanner/reference/rpc/google.spanner.admin.database.v1#google.spanner.
admin.database.v1.InstanceAdmin.ListInstances
Parameters
filter_ (str) (Optional) Filter to select instances listed. See the
ListInstancesRequest docs above for examples.
page_size (int) (Optional) Maximum number of results to return.
page_token (str) (Optional) Token for fetching next page of results.
Return type Iterator
Returns Iterator of Instance resources within the clients project.
project_name
Project name to be used with Spanner APIs.
Note: This property will not change if project does not, but the return value is not cached.
Parameters
instance_id (str) The ID of the instance.
client (Client) The client that owns the instance. Provides authorization and a project
ID.
configuration_name (str) Name of the instance configuration defining how the
instance will be created. Required for instances which do not yet exist.
node_count (int) (Optional) Number of nodes allocated to the instance.
display_name (str) (Optional) The display name for the instance in the Cloud Con-
sole UI. (Must be between 4 and 30 characters.) If this value is not set in the constructor,
will fall back to the instance ID.
copy()
Make a copy of this instance.
Copies the local data stored as simple types and copies the client attached to this instance.
Return type Instance
Returns A copy of the current instance.
create()
Create this instance.
See https://cloud.google.com/spanner/reference/rpc/google.spanner.admin.instance.v1#google.spanner.
admin.instance.v1.InstanceAdmin.CreateInstance
Note: Uses the project and instance_id on the current Instance in addition to the
display_name. To change them before creating, reset the corresponding attributes on the instance first.
Note: This property will not change if instance_id does not, but the return value is not cached.
reload()
Reload the metadata for this instance.
See https://cloud.google.com/spanner/reference/rpc/google.spanner.admin.instance.v1#google.spanner.
admin.instance.v1.InstanceAdmin.GetInstance
Raises
NotFound if the instance does not exist
GaxError for other errors returned from the call
update()
Update this instance.
See https://cloud.google.com/spanner/reference/rpc/google.spanner.admin.instance.v1#google.spanner.
admin.instance.v1.InstanceAdmin.UpdateInstance
Note: Updates the display_name and node_count. To change those values before updating, set the
corresponding attributes on the instance first.
Parameters
database_id (str) The ID of the database.
batch()
Return an object which wraps a batch.
The wrapper must be used as a context manager, with the batch as the value returned by the wrapper.
Return type BatchCheckout
Returns new wrapper
create()
Create this database within its instance.
Includes any configured schema assigned to ddl_statements.
See https://cloud.google.com/spanner/reference/rpc/google.spanner.admin.database.v1#google.spanner.
admin.database.v1.DatabaseAdmin.CreateDatabase
Return type Operation
Returns a future used to poll the status of the create request
Raises
Conflict if the database already exists
NotFound if the instance owning the database does not exist
GaxError for errors other than ALREADY_EXISTS returned from the call
ddl_statements
DDL Statements used to define database schema.
See cloud.google.com/spanner/docs/data-definition-language
Return type sequence of string
Returns the statements
drop()
Drop this database.
See https://cloud.google.com/spanner/reference/rpc/google.spanner.admin.database.v1#google.spanner.
admin.database.v1.DatabaseAdmin.DropDatabase
exists()
Test whether this database exists.
See https://cloud.google.com/spanner/reference/rpc/google.spanner.admin.database.v1#google.spanner.
admin.database.v1.DatabaseAdmin.GetDatabaseDDL
Return type bool
Returns True if the database exists, else False.
Raises GaxError for errors other than NOT_FOUND returned from the call
classmethod from_pb(database_pb, instance, pool=None)
Creates an instance of this class from a protobuf.
Parameters
database_pb (google.spanner.v2.spanner_instance_admin_pb2.Instance) An instance protobuf object.
instance (Instance) The instance that owns the database.
pool (concrete subclass of AbstractSessionPool.) (Optional) session pool to be
used by database.
Return type Database
Returns The database parsed from the protobuf response.
Raises ValueError if the instance name does not match the expected format, or if the parsed
project ID does not match the project ID on the instance's client, or if the parsed instance ID
does not match the instance's ID.
name
Database name used in requests.
Note: This property will not change if database_id does not, but the return value is not cached.
reload()
Reload this database.
Refresh any configured schema into ddl_statements.
See https://cloud.google.com/spanner/reference/rpc/google.spanner.admin.database.v1#google.spanner.
admin.database.v1.DatabaseAdmin.GetDatabaseDDL
Raises
NotFound if the database does not exist
GaxError for errors other than NOT_FOUND returned from the call
run_in_transaction(func, *args, **kw)
Perform a unit of work in a transaction, retrying on abort.
Parameters
func (callable) takes a required positional argument, the transaction, and additional
positional / keyword arguments as supplied by the caller.
args (tuple) additional positional arguments to be passed to func.
kw (dict) optional keyword arguments to be passed to func. If passed, timeout_secs
will be removed and used to override the default timeout.
Return type datetime.datetime
Returns timestamp of committed transaction
session()
Factory to create a session for this database.
class google.cloud.spanner.session.Session(database)
Bases: object
Representation of a Cloud Spanner Session.
We can use a Session to:
create() the session
check the existence of the session via exists()
drop() the session
batch()
Factory to create a batch for this session.
Return type Batch
Returns a batch bound to this session
Raises ValueError if the session has not yet been created.
create()
Create this session, bound to its database.
See https://cloud.google.com/spanner/reference/rpc/google.spanner.v1#google.spanner.v1.Spanner.
CreateSession
Raises ValueError if session_id is already set.
delete()
Delete this session.
See https://cloud.google.com/spanner/reference/rpc/google.spanner.v1#google.spanner.v1.Spanner.
DeleteSession
Raises
ValueError if session_id is not already set.
NotFound if the session does not exist
GaxError for errors other than NOT_FOUND returned from the call
execute_sql(sql, params=None, param_types=None, query_mode=None, resume_token=b'')
Perform an ExecuteStreamingSql API request.
Parameters
sql (str) SQL query statement
params (dict, {str -> column value}) values for parameter replacement.
Keys must match the names used in sql.
param_types (dict, {str -> google.spanner.v1.type_pb2.TypeCode})
(Optional) explicit types for one or more param values; overrides default type detection on
the back-end.
query_mode (google.spanner.v1.spanner_pb2.ExecuteSqlRequest.QueryMode)
Mode governing return of results / query plan. See https://cloud.google.com/spanner/reference/rpc/
google.spanner.v1#google.spanner.v1.ExecuteSqlRequest.QueryMode
resume_token (bytes) token for resuming previously-interrupted query
Note: This property will not change if session_id does not, but the return value is not cached.
bind(database)
Associate the pool with a database.
Parameters database (Database) database used by the pool: used to create sessions
when needed.
clear()
Delete all sessions in the pool.
get()
Check a session out from the pool.
Return type Session
Returns an existing session from the pool, or a newly-created session.
put(session)
Return a session to the pool.
Never blocks: if the pool is full, the returned session is discarded.
Parameters session (Session) the session being returned.
class google.cloud.spanner.pool.FixedSizePool(size=10, default_timeout=10)
Bases: google.cloud.spanner.pool.AbstractSessionPool
Concrete session pool implementation:
Pre-allocates / creates a fixed number of sessions.
Pings existing sessions via session.exists() before returning them, and replaces expired sessions.
Blocks, with a timeout, when get() is called on an empty pool. Raises after timing out.
Raises when put() is called on a full pool. That error is never expected in normal practice, as users
should be calling get() followed by put() whenever in need of a session.
Parameters
size (int) fixed pool size
default_timeout (int) default timeout, in seconds, to wait for a returned session.
bind(database)
Associate the pool with a database.
Parameters database (Database) database used by the pool: used to create sessions
when needed.
clear()
Delete all sessions in the pool.
get(timeout=None)
Check a session out from the pool.
Parameters timeout (int) seconds to block waiting for an available session
Return type Session
Returns an existing session from the pool, or a newly-created session.
Raises six.moves.queue.Empty if the queue is empty.
put(session)
Return a session to the pool.
Never blocks: if the pool is full, raises.
Parameters session (Session) the session being returned.
Raises six.moves.queue.Full if the queue is full.
class google.cloud.spanner.pool.PingingPool(size=10, default_timeout=10,
ping_interval=3000)
Bases: google.cloud.spanner.pool.AbstractSessionPool
Concrete session pool implementation:
Pre-allocates / creates a fixed number of sessions.
Sessions are used in round-robin order (LRU first).
Pings existing sessions in the background after a specified interval via an API call (session.
exists()).
Blocks, with a timeout, when get() is called on an empty pool. Raises after timing out.
Raises when put() is called on a full pool. That error is never expected in normal practice, as users
should be calling get() followed by put() whenever in need of a session.
The application is responsible for calling ping() at appropriate times, e.g. from a background thread.
Parameters
size (int) fixed pool size
default_timeout (int) default timeout, in seconds, to wait for a returned session.
ping_interval (int) interval at which to ping sessions.
commit()
Commit mutations to the database.
Return type datetime
Returns timestamp of the committed changes.
committed = None
Timestamp at which the batch was successfully committed.
Parameters
lhs (google.protobuf.struct_pb2.Value) pending value to be merged
rhs (google.protobuf.struct_pb2.Value) remaining value to be merged
type (google.cloud.proto.spanner.v1.type_pb2.Type) field type of val-
ues being merged
API requests are sent to the Cloud Spanner API via RPC over HTTP/2. In order to support this, we'll rely on gRPC.
Get started by learning about the Client on the Client page.
In the hierarchy of API concepts:
a Client owns an Instance
an Instance owns a Database
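For example, the hierarchy is traversed as follows (IDs illustrative):
>>> from google.cloud import spanner
>>> client = spanner.Client()
>>> instance = client.instance('my-instance')
>>> database = instance.database('my-database')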
Speech
The Google Speech API enables developers to convert audio to text. The API recognizes over 80 languages and
variants, to support your global user base.
SpeechClient objects provide a means to configure your application. Each instance holds an authenticated con-
nection to the Cloud Speech Service.
For an overview of authentication in google-cloud-python, see Authentication.
Assuming your environment is set up as described in that document, create an instance of SpeechClient.
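For example:
>>> from google.cloud import speech
>>> client = speech.SpeechClient()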
The long_running_recognize() method sends audio data to the Speech API and initiates a Long Running
Operation.
Using this operation, you can periodically poll for recognition results. Use asynchronous requests for audio data of
any duration up to 80 minutes.
See: Speech Asynchronous Recognize
>>> import time
>>> from google.cloud import speech
>>> client = speech.SpeechClient()
>>> operation = client.long_running_recognize(
...     audio=speech.types.RecognitionAudio(
...         uri='gs://my-bucket/recording.flac',  # illustrative URI
...     ),
...     config=speech.types.RecognitionConfig(
...         encoding='LINEAR16',
...         language_code='en-US',
...         sample_rate_hertz=44100,
...     ),
... )
>>> retry_count = 100
>>> while retry_count > 0 and not operation.complete:
... retry_count -= 1
... time.sleep(10)
... operation.poll() # API call
>>> operation.complete
True
>>> for result in operation.results:
... for alternative in result.alternatives:
... print('=' * 20)
... print(alternative.transcript)
... print(alternative.confidence)
====================
'how old is the Brooklyn Bridge'
0.98267895
The recognize() method converts speech data to text and returns alternative text transcriptions.
This example uses language_code='en-GB' to better recognize a dialect from Great Britain.
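A sketch of such a call (file name and encoding illustrative):
>>> import io
>>> from google.cloud import speech
>>> client = speech.SpeechClient()
>>> config = speech.types.RecognitionConfig(
...     encoding='LINEAR16',
...     language_code='en-GB',
...     sample_rate_hertz=44100,
... )
>>> with io.open('./hello.wav', 'rb') as stream:  # illustrative file
...     audio = speech.types.RecognitionAudio(content=stream.read())
>>> response = client.recognize(config=config, audio=audio)
>>> for result in response.results:
...     for alternative in result.alternatives:
...         print('transcript: ' + alternative.transcript)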
Speech context hints can be used to get better results. These improve the accuracy for specific words and phrases, and can also be used to add new words to the vocabulary of the recognizer, as in the sketch below.
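A sketch of passing phrase hints via the speech_contexts field of RecognitionConfig (phrases illustrative):
>>> from google.cloud import speech
>>> config = speech.types.RecognitionConfig(
...     encoding='LINEAR16',
...     language_code='en-US',
...     sample_rate_hertz=44100,
...     speech_contexts=[speech.types.SpeechContext(
...         phrases=['Google Cloud Platform'],
...     )],
... )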
The streaming_recognize() method converts speech data to possible text alternatives on the fly.
See: https://cloud.google.com/speech/limits#content
>>> import io
>>> from google.cloud import speech
>>> client = speech.SpeechClient()
>>> config = speech.types.RecognitionConfig(
... encoding='LINEAR16',
... language_code='en-US',
... sample_rate_hertz=44100,
... )
>>> with io.open('./hello.wav', 'rb') as stream:
... requests = [speech.types.StreamingRecognizeRequest(
... audio_content=stream.read(),
... )]
>>> responses = client.streaming_recognize(
...     speech.types.StreamingRecognitionConfig(config=config),
...     requests,
... )
>>> for response in responses:
...     for result in response.results:
...         for alternative in result.alternatives:
...             print('=' * 20)
...             print('transcript: ' + alternative.transcript)
...             print('confidence: ' + str(alternative.confidence))
====================
transcript: hello thank you for using Google Cloud platform
confidence: 0.927983105183
By default the API will perform continuous recognition (continuing to process audio even if the speaker in the audio
pauses speaking) until the client closes the input stream or until the maximum time limit has been reached.
If you only want to recognize a single utterance you can set single_utterance to True and only one result will
be returned.
See: Single Utterance
>>> import io
>>> from google.cloud import speech
>>> client = speech.SpeechClient()
>>> config = speech.types.RecognitionConfig(
... encoding='LINEAR16',
... language_code='en-US',
... sample_rate_hertz=44100,
... )
>>> with io.open('./hello-pause-goodbye.wav', 'rb') as stream:
... requests = [speech.types.StreamingRecognizeRequest(
... audio_content=stream.read(),
... )]
>>> responses = client.streaming_recognize(
...     speech.types.StreamingRecognitionConfig(
...         config=config,
...         single_utterance=True,
...     ),
...     requests,
... )
>>> for response in responses:
...     for result in response.results:
...         for alternative in result.alternatives:
...             print('=' * 20)
...             print('transcript: ' + alternative.transcript)
If interim_results is set to True, interim results (tentative hypotheses) may be returned as they become avail-
able.
>>> import io
>>> from google.cloud import speech
>>> client = speech.SpeechClient()
>>> config = speech.types.RecognitionConfig(
... encoding='LINEAR16',
... language_code='en-US',
... sample_rate_hertz=44100,
... )
>>> with io.open('./hello.wav', 'rb') as stream:
... requests = [speech.types.StreamingRecognizeRequest(
... audio_content=stream.read(),
... )]
>>> config = speech.types.StreamingRecognitionConfig(
...     config=config,
...     interim_results=True,
... )
>>> responses = client.streaming_recognize(config, requests)
>>> for response in responses:
...     for result in response.results:
...         for alternative in result.alternatives:
...             print('=' * 20)
...             print('transcript: ' + alternative.transcript)
...             print('confidence: ' + str(alternative.confidence))
...             print('is_final: ' + str(result.is_final))
====================
'he'
None
False
====================
'hell'
None
False
====================
'hello'
0.973458576
True
class google.cloud.speech_v1.SpeechClient(service_path='speech.googleapis.com', port=443, channel=None, credentials=None, ssl_credentials=None, scopes=None, client_config=None, app_name=None, app_version='', lib_name=None, lib_version='', metrics_headers=())
Service that implements Google Cloud Speech API.
Constructor.
Parameters
service_path (string) The domain name of the API remote host.
port (int) The port on which to connect to the remote host.
channel (grpc.Channel) A Channel instance through which to make calls.
credentials (object) The authorization credentials to attach to requests. These
credentials identify this application to the service.
ssl_credentials (grpc.ChannelCredentials) A ChannelCredentials
instance for use with an SSL-enabled channel.
scopes (list[string]) A list of OAuth2 scopes to attach to requests.
client_config (dict) A dictionary for call options for each method. See google.
gax.construct_settings() for the structure of this data. Falls back to the default
config if not specified or the specified config is missing data points.
app_name (string) The name of the application calling the service. Recommended
for analytics purposes.
app_version (string) The version of the application calling the service. Recom-
mended for analytics purposes.
lib_name (string) The API library software used for calling the service. (Unless you
are writing an API client itself, leave this as default.)
lib_version (string) The API library software version used for calling the service.
(Unless you are writing an API client itself, leave this as default.)
metrics_headers (dict) A dictionary of values for tracking client library metrics.
Ultimately serializes to a string (e.g. foo/1.2.3 bar/3.14.1). This argument should be con-
sidered private.
Returns A SpeechClient object.
enums = <module 'google.cloud.gapic.speech.v1.enums'>
long_running_recognize(config, audio, options=None)
Performs asynchronous speech recognition: receive results via the google.longrunning.Operations in-
terface. Returns either an Operation.error or an Operation.response which contains a
LongRunningRecognizeResponse message.
Example
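A sketch of a call (bucket URI and encoding illustrative):
>>> from google.cloud import speech
>>> client = speech.SpeechClient()
>>> audio = speech.types.RecognitionAudio(
...     uri='gs://my-bucket/recording.flac')  # illustrative URI
>>> config = speech.types.RecognitionConfig(
...     encoding='FLAC',
...     language_code='en-US',
...     sample_rate_hertz=44100)
>>> operation = client.long_running_recognize(config, audio)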
Parameters
config (google.cloud.proto.speech.v1.cloud_speech_pb2.
RecognitionConfig) Required Provides information to the recognizer that
specifies how to process the request.
audio (google.cloud.proto.speech.v1.cloud_speech_pb2.
RecognitionAudio) Required The audio data to be recognized.
options (google.gax.CallOptions) Overrides the default settings for this call,
e.g, timeout, retries etc.
Returns A google.gax._OperationFuture instance.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
recognize(config, audio, options=None)
Performs synchronous speech recognition: receive results after all audio has been sent and processed.
Example
Parameters
config (google.cloud.proto.speech.v1.cloud_speech_pb2.
RecognitionConfig) Required Provides information to the recognizer that
specifies how to process the request.
audio (google.cloud.proto.speech.v1.cloud_speech_pb2.
RecognitionAudio) Required The audio data to be recognized.
options (google.gax.CallOptions) Overrides the default settings for this call,
e.g, timeout, retries etc.
Returns A google.cloud.proto.speech.v1.cloud_speech_pb2.
RecognizeResponse instance.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
streaming_recognize(config, requests)
Perform streaming speech recognition: receive results while sending audio.
Warning: This method is EXPERIMENTAL. Its interface might change in the future.
Example
Parameters
config (StreamingRecognitionConfig) The configuration to use for the
stream.
requests (Iterable[StreamingRecognizeRequest]) The input objects.
class google.cloud.speech_v1.types.LongRunningRecognizeMetadata
Describes the progress of a long-running LongRunningRecognize call. It is included
in the metadata field of the Operation returned by the GetOperation call of the
google::longrunning::Operations service.
progress_percent
Approximate percentage of audio processed thus far. Guaranteed to be 100 when the audio is fully pro-
cessed and the results are available.
start_time
Time when the request was received.
last_update_time
Time of the most recent processing update.
class google.cloud.speech_v1.types.LongRunningRecognizeRequest
The top-level message sent by the client for the LongRunningRecognize method.
config
Required Provides information to the recognizer that specifies how to process the request.
audio
Required The audio data to be recognized.
class google.cloud.speech_v1.types.LongRunningRecognizeResponse
The only message returned to the client by the LongRunningRecognize method. It contains
the result as zero or more sequential SpeechRecognitionResult messages. It is included
in the result.response field of the Operation returned by the GetOperation call of the
google::longrunning::Operations service.
results
Output-only Sequential list of transcription results corresponding to sequential portions of audio.
class google.cloud.speech_v1.types.RecognitionAudio
Contains audio data in the encoding specified in the RecognitionConfig. Either content or uri must be supplied. Supplying both or neither returns google.rpc.Code.INVALID_ARGUMENT. See audio limits.
audio_source
The audio source, which is either inline content or a Google Cloud Storage uri.
content
The audio data bytes encoded as specified in RecognitionConfig. Note: as with all bytes fields,
protobuffers use a pure binary representation, whereas JSON representations use base64.
uri
URI that points to a file that contains audio data bytes as specified in RecognitionConfig.
Currently, only Google Cloud Storage URIs are supported, which must be specified in the following format: gs://bucket_name/object_name (other URI formats return google.rpc.Code.INVALID_ARGUMENT). For more information, see Request URIs.
class google.cloud.speech_v1.types.RecognitionConfig
Provides information to the recognizer that specifies how to process the request.
encoding
Required Encoding of audio data sent in all RecognitionAudio messages.
sample_rate_hertz
Required Sample rate in Hertz of the audio data sent in all RecognitionAudio messages. Valid values are: 8000-48000. 16000 is optimal. For best results, set the sampling rate of the audio source to 16000 Hz. If that's not possible, use the native sample rate of the audio source (instead of re-sampling).
language_code
Required The language of the supplied audio as a BCP-47 language tag. Example: en-US. See Language
Support for a list of the currently supported language codes.
max_alternatives
Optional Maximum number of recognition hypotheses to be returned. Specifically, the maximum number of SpeechRecognitionAlternative messages within each SpeechRecognitionResult. The server may return fewer than max_alternatives. Valid values are 0-30. A value of 0 or 1, or omitting the field, returns a maximum of one.
profanity_filter
Optional If set to true, the server will attempt to filter out profanities, replacing all but the initial character in each filtered word with asterisks, e.g. "f***". If set to false or omitted, profanities won't be filtered out.
speech_contexts
Optional A means to provide context to assist the speech recognition.
enable_word_time_offsets
Optional If true, the top result includes a list of words and the start and end time offsets (timestamps) for
those words. If false, no word-level time offset information is returned. The default is false.
class google.cloud.speech_v1.types.RecognizeRequest
The top-level message sent by the client for the Recognize method.
config
Required Provides information to the recognizer that specifies how to process the request.
audio
Required The audio data to be recognized.
class google.cloud.speech_v1.types.RecognizeResponse
The only message returned to the client by the Recognize method. It contains the result as zero or more
sequential SpeechRecognitionResult messages.
results
Output-only Sequential list of transcription results corresponding to sequential portions of audio.
class google.cloud.speech_v1.types.SpeechContext
Provides hints to the speech recognizer to favor specific words and phrases in the results.
phrases
Optional A list of strings containing words and phrases hints so that the speech recognition is more
likely to recognize them. This can be used to improve the accuracy for specific words and phrases, for
example, if specific commands are typically spoken by the user. This can also be used to add additional
words to the vocabulary of the recognizer. See usage limits.
class google.cloud.speech_v1.types.SpeechRecognitionAlternative
Alternative hypotheses (a.k.a. n-best list).
transcript
Output-only Transcript text representing the words that the user spoke.
confidence
Output-only The confidence estimate between 0.0 and 1.0. A higher number indicates an estimated greater
likelihood that the recognized words are correct. This field is typically provided only for the top hypothesis,
and only for is_final=true results. Clients should not rely on the confidence field as it is not
guaranteed to be accurate or consistent. The default of 0.0 is a sentinel value indicating confidence
was not set.
words
Output-only A list of word-specific information for each recognized word.
class google.cloud.speech_v1.types.SpeechRecognitionResult
A speech recognition result corresponding to a portion of the audio.
alternatives
Output-only May contain one or more recognition hypotheses (up to the maximum specified in
max_alternatives). These alternatives are ordered in terms of accuracy, with the top (first) alter-
native being the most probable, as ranked by the recognizer.
class google.cloud.speech_v1.types.StreamingRecognitionConfig
Provides information to the recognizer that specifies how to process the request.
config
Required Provides information to the recognizer that specifies how to process the request.
single_utterance
Optional If false or omitted, the recognizer will perform continuous recognition (continuing to
wait for and process audio even if the user pauses speaking) until the client closes the input
stream (gRPC API) or until the maximum time limit has been reached. May return multiple
StreamingRecognitionResults with the is_final flag set to true. If true, the recognizer
will detect a single spoken utterance. When it detects that the user has paused or stopped speaking, it will
return an END_OF_SINGLE_UTTERANCE event and cease recognition. It will return no more than one
StreamingRecognitionResult with the is_final flag set to true.
interim_results
Optional If true, interim results (tentative hypotheses) may be returned as they become available
(these interim results are indicated with the is_final=false flag). If false or omitted, only
is_final=true result(s) are returned.
class google.cloud.speech_v1.types.StreamingRecognitionResult
A streaming speech recognition result corresponding to a portion of the audio that is currently being processed.
alternatives
Output-only May contain one or more recognition hypotheses (up to the maximum specified in
max_alternatives).
is_final
Output-only If false, this StreamingRecognitionResult represents an interim result that
may change. If true, this is the final time the speech service will return this particular
StreamingRecognitionResult, the recognizer will not return any further hypotheses for this por-
tion of the transcript and corresponding audio.
stability
Output-only An estimate of the likelihood that the recognizer will not change its guess about this interim
result. Values range from 0.0 (completely unstable) to 1.0 (completely stable). This field is only provided
for interim results (is_final=false). The default of 0.0 is a sentinel value indicating stability
was not set.
class google.cloud.speech_v1.types.StreamingRecognizeRequest
The top-level message sent by the client for the StreamingRecognize method. Multi-
ple StreamingRecognizeRequest messages are sent. The first message must contain a
streaming_config message and must not contain audio data. All subsequent messages must contain
audio data and must not contain a streaming_config message.
streaming_request
The streaming request, which is either a streaming config or audio content.
streaming_config
Provides information to the recognizer that specifies how to process the request. The first
StreamingRecognizeRequest message must contain a streaming_config message.
audio_content
The audio data to be recognized. Sequential chunks of audio data are sent in sequential
StreamingRecognizeRequest messages. The first StreamingRecognizeRequest mes-
sage must not contain audio_content data and all subsequent StreamingRecognizeRequest
messages must contain audio_content data. The audio bytes must be encoded as specified in
RecognitionConfig. Note: as with all bytes fields, protobuffers use a pure binary representation
(not base64). See audio limits.
class google.cloud.speech_v1.types.StreamingRecognizeResponse
StreamingRecognizeResponse is the only message returned to the client by StreamingRecognize.
A series of one or more StreamingRecognizeResponse messages are streamed back to the client.
Here's an example of a series of StreamingRecognizeResponses that might be returned while processing audio:
1. results { alternatives { transcript: "tube" } stability: 0.01 }
2. results { alternatives { transcript: "to be a" } stability: 0.01 }
3. results { alternatives { transcript: "to be" } stability: 0.9 } results { alternatives { transcript: " or not to be" } stability: 0.01 }
4. results { alternatives { transcript: "to be or not to be" confidence: 0.92 } alternatives { transcript: "to bee or not to bee" } is_final: true }
5. results { alternatives { transcript: "that's" } stability: 0.01 }
6. results { alternatives { transcript: "that is" } stability: 0.9 } results { alternatives { transcript: " the question" } stability: 0.01 }
7. results { alternatives { transcript: "that is the question" confidence: 0.98 } alternatives { transcript: "that was the question" } is_final: true }
Notes:
Only two of the above responses (#4 and #7) contain final results; they are indicated by is_final: true. Concatenating these together generates the full transcript: "to be or not to be that is the question".
The others contain interim results. #3 and #6 contain two interim results: the first portion has a
high stability and is less likely to change; the second portion has a low stability and is very likely to change.
A UI designer might choose to show only high stability results.
The specific stability and confidence values shown above are only for illustrative purposes. Ac-
tual values may vary.
In each response, only one of these fields will be set: error, speech_event_type, or one or more
(repeated) results.
error
Output-only If set, returns a [google.rpc.Status][google.rpc.Status] message that specifies the error for the
operation.
results
Output-only This repeated list contains zero or more results that correspond to consecutive portions of
the audio currently being processed. It contains zero or one is_final=true result (the newly settled
portion), followed by zero or more is_final=false results.
speech_event_type
Output-only Indicates the type of speech event.
class google.cloud.speech_v1.types.WordInfo
Word-specific information for recognized words. Word information is only included in the response when
certain request parameters are set, such as enable_word_time_offsets.
start_time
Output-only Time offset relative to the beginning of the audio, and corresponding to the start of the spoken
word. This field is only set if enable_word_time_offsets=true and only in the top hypothesis.
This is an experimental feature and the accuracy of the time offset can vary.
end_time
Output-only Time offset relative to the beginning of the audio, and corresponding to the end of the spoken
word. This field is only set if enable_word_time_offsets=true and only in the top hypothesis.
This is an experimental feature and the accuracy of the time offset can vary.
word
Output-only The word corresponding to this set of information.
environment variable. This parameter should be considered private, and could change in the future.
Raises ValueError if the project is neither passed in nor set in the environment.
SCOPE = ('https://www.googleapis.com/auth/cloud-platform',)
The scopes required for authenticating as an API consumer.
report(message, http_context=None, user=None)
Reports a message to Stackdriver Error Reporting
https://cloud.google.com/error-reporting/docs/formatting-error-messages
Parameters
message (str) A user-supplied message to report
http_context (google.cloud.error_reporting.HTTPContext) The HTTP request which was processed when the error was triggered.
user (str) The user who caused or was affected by the crash. This can be a user ID,
an email address, or an arbitrary token that uniquely identifies the user. When sending an
error report, leave this field empty if the user was not logged in. In this case the Error
Reporting system will use other data, such as remote IP address, to distinguish affected
users.
Example:
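A minimal sketch (message text illustrative):
>>> client.report("Something went wrong!")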
report_errors_api
Helper for logging-related API calls.
See https://cloud.google.com/logging/docs/reference/v2/rest/v2/entries https://cloud.google.com/logging/
docs/reference/v2/rest/v2/projects.logs
Return type _gax._ErrorReportingGaxApi or _logging._ErrorReportingLoggingAPI
Returns A class that implements the report errors API.
report_exception(http_context=None, user=None)
Reports the details of the latest exception to Stackdriver Error Reporting.
Parameters
http_context (google.cloud.error_reporting.HTTPContext) The HTTP request which was processed when the error was triggered.
user (str)
The user who caused or was affected by the crash. This can be a user ID, an email ad-
dress, or an arbitrary token that uniquely identifies the user. When sending an error
report, leave this field empty if the user was not logged in. In this case the Error Re-
porting system will use other data, such as remote IP address, to distinguish affected
users.
Example:
>>> try:
...     raise NameError
... except Exception:
...     client.report_exception()
>>> @app.errorhandler(HTTPException)
... def handle_error(exc):
... client.report_exception(
... http_context=build_flask_context(request))
... # rest of error response code here
In addition to any authentication configuration, you should also set the GOOGLE_CLOUD_PROJECT environment variable for the project you'd like to interact with. If you are running on Google App Engine or Google Compute Engine, this will be detected automatically.
After configuring your environment, create a Client:
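A minimal sketch, assuming default credentials and project are configured:
>>> from google.cloud import error_reporting
>>> client = error_reporting.Client()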
Error Reporting associates errors with a service, which is an identifier for an executable, App Engine service, or job. The default service is "python", but a different default can be specified for the client at construction time. You can also optionally specify a version for that service, which defaults to "default".
By default, the client will report the error using the service specified in the client's constructor, or the default service of "python".
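A sketch of overriding the defaults (service and version names illustrative):
>>> client = error_reporting.Client(
...     service='my_service', version='1.0.0')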
The user and HTTP context can also be included in the exception. The HTTP context can be constructed using
google.cloud.error_reporting.HTTPContext. This will be used by Stackdriver Error Reporting to help
group exceptions.
An automatic helper to build the HTTP Context from a Flask (Werkzeug) request object is provided.
Errors can also be reported to Stackdriver Error Reporting outside the context of an exception. The library will include
the file path, function name, and line number of the location where the error was reported.
Similarly to reporting an exception, the user and HTTP context can be provided:
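A sketch (field values illustrative):
>>> from google.cloud.error_reporting import HTTPContext
>>> http_context = HTTPContext(
...     method='GET', url='/', user_agent='test agent',
...     referrer='example.com', response_status_code=500,
...     remote_ip='1.2.3.4')
>>> client.report("Found an error!",
...               http_context=http_context,
...               user='[email protected]')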
Stackdriver Monitoring
Client for interacting with the Google Stackdriver Monitoring API (V3).
Example:
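A minimal construction, assuming default credentials:
>>> from google.cloud import monitoring
>>> client = monitoring.Client()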
At present, the client supports querying of time series, metric descriptors, and monitored resource descriptors.
class google.cloud.monitoring.client.Client(project=None, credentials=None,
_http=None)
Bases: google.cloud.client.ClientWithProject
Client to bundle configuration needed for API requests.
Parameters
project (str) The target project. If not passed, falls back to the default inferred from
the environment.
credentials (Credentials) (Optional) The OAuth2 Credentials to use for this
client. If not passed (and if no _http object is passed), falls back to the default inferred
from the environment.
_http (Session) (Optional) HTTP object to make requests. Can be any object that
defines request() with the same interface as requests.Session.request(). If
not passed, an _http object is created that is bound to the credentials for the current
object. This parameter should be considered private, and could change in the future.
SCOPE = ('https://www.googleapis.com/auth/monitoring.read', 'https://www.googleapis.com/auth/monitoring', 'https://www.googleapis.com/auth/cloud-platform')
The scopes required for authenticating as a Monitoring consumer.
fetch_group(group_id)
Fetch a group from the API based on its ID.
Example:
>>> try:
...     group = client.fetch_group('1234')
... except google.cloud.exceptions.NotFound:
...     print('That group does not exist!')
fetch_metric_descriptor(metric_type)
Look up a metric descriptor by type.
Example:
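A sketch using a well-known metric type:
>>> METRIC = 'compute.googleapis.com/instance/cpu/utilization'
>>> print(client.fetch_metric_descriptor(METRIC))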
fetch_resource_descriptor(resource_type)
Look up a monitored resource descriptor by type.
Example:
>>> print(client.fetch_resource_descriptor('gce_instance'))
group(group_id=None, display_name=None, parent_id=None, filter_string=None, is_cluster=False)
Factory constructor for a group object.
Note: This will not make an HTTP request; it simply instantiates a group object owned by this client.
Parameters
group_id (str) (Optional) The ID of the group.
display_name (str) (Optional) A user-assigned name for this group, used only for
display purposes.
parent_id (str) (Optional) The ID of the group's parent, if it has one.
filter_string (str) (Optional) The filter string used to determine which moni-
tored resources belong to this group.
is_cluster (bool) If true, the members of this group are considered to be a cluster.
The system can perform additional analysis on groups that are clusters.
Return type Group
Returns The group created with the passed-in arguments.
Raises ValueError if both group_id and name are specified.
list_groups()
List all groups for the project.
Example:
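A sketch of iterating over the result:
>>> for group in client.list_groups():
...     print(group.id)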
list_metric_descriptors(filter_string=None, type_prefix=None)
List all metric descriptors for the project.
Examples:
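Sketches of listing all descriptors, and of restricting the listing by prefix (prefix illustrative):
>>> for descriptor in client.list_metric_descriptors():
...     print(descriptor.type)
>>> for descriptor in client.list_metric_descriptors(
...         type_prefix='custom.'):
...     print(descriptor.type)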
Parameters
filter_string (str) (Optional) An optional filter expression describing the metric
descriptors to be returned. See the filter documentation.
type_prefix (str) (Optional) An optional prefix constraining the selected metric
types. This adds metric.type = starts_with("<prefix>") to the filter.
Return type list of MetricDescriptor
Returns A list of metric descriptor instances.
list_resource_descriptors(filter_string=None)
List all monitored resource descriptors for the project.
Example:
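A sketch of printing each descriptor's type:
>>> for descriptor in client.list_resource_descriptors():
...     print(descriptor.type)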
metric(type_, labels)
Factory for constructing metric objects.
Parameters
type (str) The metric type name.
labels (dict) A mapping from label names to values for all labels enumerated in the
associated MetricDescriptor.
Return type Metric
Returns The metric object.
metric_descriptor(type_, metric_kind=METRIC_KIND_UNSPECIFIED, value_type=VALUE_TYPE_UNSPECIFIED, labels=(), unit='', description='', display_name='')
Construct a metric descriptor object.
Metric descriptors specify the schema for a particular metric type.
This factory method is used most often in conjunction with the metric descriptor create() method to
define custom metrics:
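A sketch of defining a custom metric (metric name and description illustrative); the same pattern appears in the Custom Metrics discussion later in this chapter:
>>> from google.cloud.monitoring import MetricKind, ValueType
>>> descriptor = client.metric_descriptor(
...     'custom.googleapis.com/my_metric',
...     metric_kind=MetricKind.GAUGE,
...     value_type=ValueType.DOUBLE,
...     description='This is a simple example of a custom metric.')
>>> descriptor.create()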
Parameters
type (str) The metric type including a DNS name prefix. For example: "custom.
googleapis.com/my_metric"
metric_kind (str) The kind of measurement. It must be one of MetricKind.
GAUGE, MetricKind.DELTA, or MetricKind.CUMULATIVE. See MetricKind.
value_type (str) The value type of the metric. It must be one of
ValueType.BOOL, ValueType.INT64, ValueType.DOUBLE, ValueType.
STRING, or ValueType.DISTRIBUTION. See ValueType.
labels (list of LabelDescriptor) A sequence of zero or more label descriptors
specifying the labels used to identify a specific instance of this metric.
unit (str) An optional unit in which the metric value is reported.
description (str) An optional detailed description of the metric.
display_name (str) An optional concise name for the metric.
Return type MetricDescriptor
Returns The metric descriptor created with the passed-in arguments.
query(metric_type=Query.DEFAULT_METRIC_TYPE, end_time=None, days=0, hours=0, minutes=0)
Construct a query object for retrieving metric data.
Parameters
metric_type (str) The metric type name. The default value is Query.
DEFAULT_METRIC_TYPE, but please note that this default value is provided only for
demonstration purposes and is subject to change. See the supported metrics.
end_time (datetime.datetime) (Optional) The end time (inclusive) of the time
interval for which results should be returned, as a datetime object. The default is the start
of the current minute.
The start time (exclusive) is determined by combining the values of days, hours, and
minutes, and subtracting the resulting duration from the end time.
It is also allowed to omit the end time and duration here, in which case
select_interval() must be called before the query is executed.
days (int) The number of days in the time interval.
hours (int) The number of hours in the time interval.
minutes (int) The number of minutes in the time interval.
Return type Query
Returns The query object.
Raises ValueError if end_time is specified but days, hours, and minutes are all zero.
If you really want to specify a point in time, use select_interval().
time_series(metric, resource, value, end_time=None, start_time=None)
Construct a time series object for a single data point.
Note: While TimeSeries objects returned by the API typically have multiple data points, TimeSeries objects sent to the API must have at most one point.
For example:
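A sketch, assuming metric and resource were built with the client's metric() and resource() factories (value illustrative):
>>> series = client.time_series(metric, resource, 3.14)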
Note: The Python type of the value will determine the ValueType sent to the API,
which must match the value type specified in the metric descriptor. For example, a Python
float will be sent to the API as a ValueType.DOUBLE.
end_time (datetime) The end time for the point to be included in the time series.
Assumed to be UTC if no time zone information is present. Defaults to the current time,
as obtained by calling datetime.datetime.utcnow().
start_time (datetime) The start time for the point to be included in the time
series. Assumed to be UTC if no time zone information is present. Defaults to None. If
the start time is unspecified, the API interprets the start time to be the same as the end
time.
Return type TimeSeries
Returns A time series object.
write_point(metric, resource, value, end_time=None, start_time=None)
Write a single point for a metric to the API.
Parameters
metric (Metric) A Metric object.
resource (Resource) A Resource object.
value (bool, int, string, or float) The value of the data point to create
for the TimeSeries.
Note: The Python type of the value will determine the ValueType sent to the API,
which must match the value type specified in the metric descriptor. For example, a Python
float will be sent to the API as a ValueType.DOUBLE.
end_time (datetime) The end time for the point to be included in the time series.
Assumed to be UTC if no time zone information is present. Defaults to the current time,
as obtained by calling datetime.datetime.utcnow().
start_time (datetime) The start time for the point to be included in the time
series. Assumed to be UTC if no time zone information is present. Defaults to None. If
the start time is unspecified, the API interprets the start time to be the same as the end
time.
write_time_series(timeseries_list)
Write a list of time series objects to the API.
The recommended approach to creating time series objects is using the time_series() factory method.
Example:
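A sketch, assuming metric and resource as above:
>>> series = client.time_series(metric, resource, 3.14)
>>> client.write_time_series([series])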
If you only need to write a single time series object, consider using the write_point() method instead.
Parameters timeseries_list (list of TimeSeries) A list of time series objects to be written to the API. Each time series must contain exactly one point.
class google.cloud.monitoring.metric.Metric
Bases: google.cloud.monitoring.metric.Metric
A specific metric identified by specifying values for all labels. The preferred way to construct a metric object is using the metric() factory method of the Client class.
Parameters
type (str) The metric type name.
labels (dict) A mapping from label names to values for all labels enumerated in the
associated MetricDescriptor.
Create new instance of Metric(type, labels)
class google.cloud.monitoring.metric.MetricDescriptor(client, type_, metric_kind=METRIC_KIND_UNSPECIFIED, value_type=VALUE_TYPE_UNSPECIFIED, labels=(), unit='', description='', display_name='', name=None)
Bases: object
Specification of a metric type and its schema.
The preferred way to construct a metric descriptor object is using the metric_descriptor() factory
method of the Client class.
Parameters
client (google.cloud.monitoring.client.Client) A client for operating
on the metric descriptor.
type (str) The metric type including a DNS name prefix. For example: "compute.
googleapis.com/instance/cpu/utilization"
metric_kind (str) The kind of measurement. It must be one of MetricKind.
GAUGE, MetricKind.DELTA, or MetricKind.CUMULATIVE. See MetricKind.
value_type (str) The value type of the metric. It must be one of
ValueType.BOOL, ValueType.INT64, ValueType.DOUBLE, ValueType.
STRING, or ValueType.DISTRIBUTION. See ValueType.
labels (list of LabelDescriptor) A sequence of zero or more label descriptors
specifying the labels used to identify a specific instance of this metric.
unit (str) An optional unit in which the metric value is reported.
description (str) An optional detailed description of the metric.
display_name (str) An optional concise name for the metric.
name (str) (Optional) The resource name of the metric descriptor. For example:
"projects/<project_id>/metricDescriptors/<type>". As retrieved from
the service, this will always be specified. You can and should omit it when constructing an
instance for the purpose of creating a new metric descriptor.
create()
Create a new metric descriptor based on this object.
Example:
The metric kind must not be MetricKind.METRIC_KIND_UNSPECIFIED, and the value type must
not be ValueType.VALUE_TYPE_UNSPECIFIED.
The name attribute is ignored in preparing the creation request. All attributes are overwritten by the values
received in the response (normally affecting only name).
delete()
Delete the metric descriptor identified by this object.
Example:
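A sketch (metric type illustrative):
>>> descriptor = client.metric_descriptor(
...     'custom.googleapis.com/my_metric')
>>> descriptor.delete()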
class google.cloud.monitoring.metric.ValueType
Bases: object
Choices for the metric value type.
VALUE_TYPE_UNSPECIFIED = 'VALUE_TYPE_UNSPECIFIED'
Monitored Resource Descriptors for the Google Stackdriver Monitoring API (V3).
class google.cloud.monitoring.resource.Resource
Bases: google.cloud.monitoring.resource.Resource
A monitored resource identified by specifying values for all labels.
The preferred way to construct a resource object is using the resource() factory method of the Client
class.
Parameters
type (str) The resource type name.
labels (dict) A mapping from label names to values for all labels enumerated in the
associated ResourceDescriptor.
Create new instance of Resource(type, labels)
16.4 Groups
create()
Create a new group based on this object via a POST request.
The name attribute is ignored in preparing the creation request. All attributes are overwritten by the values
received in the response (normally affecting only name).
received in the response (normally affecting only name).
delete()
Delete the group via a DELETE request.
Example:
Warning: This method will fail for groups that have one or more children groups.
exists()
Test for the existence of the group via a GET request.
Return type bool
Returns Boolean indicating existence of the group.
fetch_parent()
Returns the parent group of this group via a GET request.
Return type Group or None
Returns The parent of the group.
id
Returns the group ID.
Return type str or None
Returns the ID of the group based on its name.
list_ancestors()
Lists all ancestors of this group via a GET request.
The groups are returned in order, starting with the immediate parent and ending with the most distant
ancestor. If the specified group has no immediate parent, the results are empty.
Return type list of Group
Returns A list of group instances.
list_children()
Lists all children of this group via a GET request.
Returns groups whose parent_name field contains the group name. If no groups have this parent, the results
are empty.
Return type list of Group
Returns A list of group instances.
list_descendants()
Lists all descendants of this group via a GET request.
This returns a superset of the results returned by the children() method, and includes children-of-
children, and so forth.
Return type list of Group
Returns A list of group instances.
list_members(filter_string=None, end_time=None, start_time=None)
Lists the monitored resources that are members of this group via a GET request.
Parameters
filter_string (str) (Optional) An optional list filter describing the members to
be returned. The filter may reference the type, labels, and metadata of monitored resources
that comprise the group. See the filter documentation.
end_time (datetime.datetime) (Optional) The end time (inclusive) of the time
interval for which results should be returned, as a datetime object. If start_time is
specified, then this must also be specified.
start_time (datetime.datetime) (Optional) The start time (exclusive) of the
time interval for which results should be returned, as a datetime object.
Return type list of Resource
Returns A list of resource instances.
Raises ValueError if the start_time is specified, but the end_time is missing.
name
Returns the fully qualified name of the group.
Return type str or None
Returns The fully qualified name of the group in the format projects/<project>/groups/<id>.
parent_name
Returns the fully qualified name of the parent group.
Return type str or None
Returns The fully qualified name of the parent group.
path
URL path to this group.
Return type str
Warning: This will overwrite any local changes you've made and not saved via update().
update()
Update the group via a PUT request.
Time series query for the Google Stackdriver Monitoring API (V3).
class google.cloud.monitoring.query.Aligner
Bases: object
Allowed values for the supported aligners.
class google.cloud.monitoring.query.Query(client, metric_type='compute.googleapis.com/instance/cpu/utilization', end_time=None, days=0, hours=0, minutes=0)
Bases: object
Query object for retrieving metric data.
The preferred way to construct a query object is using the query() method of the Client class.
Parameters
client (google.cloud.monitoring.client.Client) The client to use.
metric_type (str) The metric type name. The default value is Query.
DEFAULT_METRIC_TYPE, but please note that this default value is provided only for
demonstration purposes and is subject to change. See the supported metrics.
end_time (datetime.datetime) (Optional) The end time (inclusive) of the time
interval for which results should be returned, as a datetime object. The default is the start of
the current minute.
The start time (exclusive) is determined by combining the values of days, hours, and
minutes, and subtracting the resulting duration from the end time.
It is also allowed to omit the end time and duration here, in which case
select_interval() must be called before the query is executed.
days (int) The number of days in the time interval.
hours (int) The number of hours in the time interval.
minutes (int) The number of minutes in the time interval.
Raises ValueError if end_time is specified but days, hours, and minutes are all zero. If
you really want to specify a point in time, use select_interval().
align(per_series_aligner, seconds=0, minutes=0, hours=0)
Copy the query and add temporal alignment.
If per_series_aligner is not Aligner.ALIGN_NONE, each time series will contain data points
only on the period boundaries.
Example:
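A sketch of five-minute mean alignment:
>>> from google.cloud.monitoring import Aligner
>>> query = query.align(Aligner.ALIGN_MEAN, minutes=5)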
Parameters
per_series_aligner (str) The approach to be used to align individual time se-
ries. For example: Aligner.ALIGN_MEAN. See Aligner and the descriptions of the
supported aligners.
seconds (int) The number of seconds in the alignment period.
minutes (int) The number of minutes in the alignment period.
hours (int) The number of hours in the alignment period.
Return type Query
Returns The new query object.
as_dataframe(label=None, labels=None)
Return all the selected time series as a pandas dataframe.
Note: Use of this method requires that you have pandas installed.
Examples:
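Sketches with and without a header label (label name illustrative):
>>> dataframe = query.as_dataframe()
>>> dataframe = query.as_dataframe(label='instance_name')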
Parameters
label (str) (Optional) The label name to use for the dataframe header. This can be
the name of a resource label or metric label (e.g., "instance_name"), or the string
"resource_type".
labels (list of strings, or None) A list or tuple of label names to use for
the dataframe header. If more than one label name is provided, the resulting dataframe
will have a multi-level column header. Providing values for both label and labels is
an error.
copy()
Copy the query object.
Return type Query
Returns The new query object.
filter
The filter string.
This is constructed from the metric type, the resource type, and selectors for the group ID, monitored
projects, resource labels, and metric labels.
iter(headers_only=False, page_size=None)
Yield all time series objects selected by the query.
The generator returned iterates over TimeSeries objects containing points ordered from oldest to
newest.
Note that the Query object itself is an iterable, such that the following are equivalent:
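for timeseries in query:
    print(timeseries.metric)

for timeseries in query.iter():
    print(timeseries.metric)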
Parameters
headers_only (bool) Whether to omit the point data from the time series objects.
page_size (int) (Optional) Positive number specifying the maximum number of
points to return per page. This can be used to control how far the iterator reads ahead.
Raises ValueError if the query time interval has not been specified.
metric_type
The metric type name.
reduce(cross_series_reducer, *group_by_fields)
Copy the query and add cross-series reduction.
Cross-series reduction combines time series by aggregating their data points.
For example, you could request an aggregated time series for each combination of project and zone as
follows:
query = query.reduce(Reducer.REDUCE_MEAN,
'resource.project_id', 'resource.zone')
Parameters
cross_series_reducer (str) The approach to be used to combine time series. For example: Reducer.REDUCE_MEAN. See Reducer and the descriptions of the supported reducers.
group_by_fields (strs) Fields to be preserved by the reduction. For example, specifying just "resource.zone" will result in one time series per zone. The default is to aggregate all of the time series into just one.
Return type Query
Returns The new query object.
select_group(group_id)
Copy the query and add filtering by group.
Example:
query = query.select_group('1234567')
select_interval(end_time, start_time=None)
Copy the query and set the query time interval.
Example:
import datetime
now = datetime.datetime.utcnow()
query = query.select_interval(
end_time=now,
start_time=now - datetime.timedelta(minutes=5))
As a convenience, you can alternatively specify the end time and an interval duration when you create the
query initially.
Parameters
end_time (datetime.datetime) The end time (inclusive) of the time interval for
which results should be returned, as a datetime object.
start_time (datetime.datetime) (Optional) The start time (exclusive) of the
time interval for which results should be returned, as a datetime object. If not specified,
the interval is a point in time.
Return type Query
Returns The new query object.
select_metrics(*args, **kwargs)
Copy the query and add filtering by metric labels.
Examples:
query = query.select_metrics(instance_name='myinstance')
query = query.select_metrics(instance_name_prefix='mycluster-')
A keyword argument <label>=<value> ordinarily generates a filter expression of the form:
metric.label.<label> = "<value>"
However, by adding "_prefix" or "_suffix" to the keyword, you can specify a partial match.
<label>_prefix=<value> generates:
metric.label.<label> = starts_with("<value>")
<label>_suffix=<value> generates:
metric.label.<label> = ends_with("<value>")
If the label's value type is INT64, a similar notation can be used to express inequalities:
<label>_less=<value> generates:
metric.label.<label> < <value>
<label>_lessequal=<value> generates:
metric.label.<label> <= <value>
<label>_greater=<value> generates:
metric.label.<label> > <value>
<label>_greaterequal=<value> generates:
metric.label.<label> >= <value>
Parameters
args (tuple) Raw filter expression strings to include in the conjunction. If just one is
provided and no keyword arguments are provided, it can be a disjunction.
kwargs (dict) Label filters to include in the conjunction as described above.
Return type Query
Returns The new query object.
select_projects(*args)
Copy the query and add filtering by monitored projects.
This is only useful if the target project represents a Stackdriver account containing the specified monitored
projects.
Examples:
query = query.select_projects('project-1')
query = query.select_projects('project-1', 'project-2')
Parameters args (tuple) Project IDs limiting the resources to be included in the query.
Return type Query
Returns The new query object.
select_resources(*args, **kwargs)
Copy the query and add filtering by resource labels.
Examples:
query = query.select_resources(zone='us-central1-a')
query = query.select_resources(zone_prefix='europe-')
query = query.select_resources(resource_type='gce_instance')
A keyword argument <label>=<value> ordinarily generates a filter expression of the form:
resource.label.<label> = "<value>"
However, by adding "_prefix" or "_suffix" to the keyword, you can specify a partial match.
<label>_prefix=<value> generates:
resource.label.<label> = starts_with("<value>")
<label>_suffix=<value> generates:
resource.label.<label> = ends_with("<value>")
The keyword argument resource_type is a special case; resource_type=<value> generates:
resource.type = "<value>"
Note: The label "instance_name" is a metric label, not a resource label. You would filter on it using
select_metrics(instance_name=...).
Parameters
args (tuple) Raw filter expression strings to include in the conjunction. If just one is
provided and no keyword arguments are provided, it can be a disjunction.
kwargs (dict) Label filters to include in the conjunction as described above.
Return type Query
Returns The new query object.
class google.cloud.monitoring.query.Reducer
Bases: object
Allowed values for the supported reducers.
class google.cloud.monitoring.timeseries.Point
Bases: google.cloud.monitoring.timeseries.Point
A single point in a time series.
Parameters
end_time (str) The end time in RFC3339 UTC Zulu format.
start_time (str) (Optional) The start time in RFC3339 UTC Zulu format.
value (object) The metric value. This can be a scalar or a distribution.
Create new instance of Point(end_time, start_time, value)
class google.cloud.monitoring.timeseries.TimeSeries
Bases: google.cloud.monitoring.timeseries.TimeSeries
A single time series of metric values.
The preferred way to construct a TimeSeries object is using the time_series() factory method of the
Client class.
Parameters
metric (Metric) A metric object.
resource (Resource) A resource object.
metric_kind (str) The kind of measurement: MetricKind.GAUGE,
MetricKind.DELTA, or MetricKind.CUMULATIVE. See MetricKind.
value_type (str) The value type of the metric: ValueType.BOOL, ValueType.
INT64, ValueType.DOUBLE, ValueType.STRING, or ValueType.
DISTRIBUTION. See ValueType.
points (list of Point) A list of point objects.
Create new instance of TimeSeries(metric, resource, metric_kind, value_type, points)
header(points=None)
Copy everything but the point data.
Parameters points (list of Point, or None) An optional point list.
Return type TimeSeries
Returns The new time series object.
labels
A single dictionary with values for all the labels.
This combines resource.labels and metric.labels and also adds "resource_type".
16.8 Introduction
With the Stackdriver Monitoring API, you can work with Stackdriver metric data pertaining to monitored resources in
Google Cloud Platform (GCP) or elsewhere.
Essential concepts:
Metric data is associated with a monitored resource. A monitored resource has a resource type and a set of resource labels (key-value pairs) that identify the particular resource.
A metric further identifies the particular kind of data that is being collected. It has a metric type and a set of
metric labels that, when combined with the resource labels, identify a particular time series.
A time series is a collection of data points associated with points or intervals in time.
Please refer to the documentation for the Stackdriver Monitoring API for more information.
At present, this client library supports the following features of the API:
Querying of time series.
Querying of metric descriptors and monitored resource descriptors.
Creation and deletion of metric descriptors for custom metrics.
Writing of custom metric data.
The Stackdriver Monitoring client library generally makes its functionality available as methods of the monitoring
Client class. A Client instance holds authentication credentials and the ID of the target project with which the
metric data of interest is associated. This project ID will often refer to a Stackdriver account binding multiple GCP
projects and AWS accounts. It can also simply be the ID of a monitored project.
Most often the authentication credentials will be determined implicitly from your environment. See Authentication for
more information.
It is thus typical to create a client object as follows:
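>>> from google.cloud import monitoring
>>> client = monitoring.Client(project='target-project')  # project ID illustrative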
If you are running in Google Compute Engine or Google App Engine, the current project is the default target project.
This default can be further overridden with the GOOGLE_CLOUD_PROJECT environment variable. Using the default
target project is even easier:
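>>> from google.cloud import monitoring
>>> client = monitoring.Client()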
The available monitored resource types are defined by monitored resource descriptors. You can fetch a list of these
with the list_resource_descriptors() method:
>>> for descriptor in client.list_resource_descriptors():
... print(descriptor.type)
Each ResourceDescriptor has a type, a display name, a description, and a list of LabelDescriptor in-
stances. See the documentation about Monitored Resources for more information.
The available metric types are defined by metric descriptors. They include platform metrics, agent metrics, and custom
metrics. You can list all of these with the list_metric_descriptors() method:
>>> for descriptor in client.list_metric_descriptors():
... print(descriptor.type)
See MetricDescriptor and the Metric Descriptors API documentation for more information.
You can create new metric descriptors to define custom metrics in the custom.googleapis.com namespace. You
do this by creating a MetricDescriptor object using the client's metric_descriptor() factory and then
calling the object's create() method:
>>> from google.cloud.monitoring import MetricKind, ValueType
>>> descriptor = client.metric_descriptor(
... 'custom.googleapis.com/my_metric',
... metric_kind=MetricKind.GAUGE,
... value_type=ValueType.DOUBLE,
... description='This is a simple example of a custom metric.')
>>> descriptor.create()
To define a custom metric parameterized by one or more labels, you must build the appropriate LabelDescriptor
objects and include them in the MetricDescriptor object before you call create():
>>> from google.cloud.monitoring import LabelDescriptor, LabelValueType
>>> label = LabelDescriptor('response_code', LabelValueType.INT64,
... description='HTTP status code')
>>> descriptor = client.metric_descriptor(
... 'custom.googleapis.com/my_app/response_count',
... metric_kind=MetricKind.CUMULATIVE,
... value_type=ValueType.INT64,
... labels=[label],
... description='Cumulative count of HTTP responses.')
>>> descriptor.create()
16.12 Groups
A group is a dynamic collection of monitored resources whose membership is defined by a filter. These groups are
usually created via the Stackdriver dashboard. You can list all the groups in a project with the list_groups()
method:
See Group and the API documentation for Groups and Group members for more information.
You can get a specific group based on its ID as follows:
You can get the current members of this group using the list_members() method:
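A sketch covering both steps (group ID illustrative):
>>> group = client.fetch_group('1234')
>>> for member in group.list_members():
...     print(member)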
Passing in end_time and start_time to the above method will return historical members based on the current
filter of the group. The group membership changes over time, as monitored resources come and go, and as they change
properties.
You can create new groups to define new collections of monitored resources. You do this by creating a Group object
using the client's group() factory and then calling the object's create() method:
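A sketch (display name, filter, and parent ID illustrative):
>>> group = client.group(
...     display_name='My group',
...     filter_string='resource.zone = "us-central1-a"',
...     parent_id='1234')
>>> group.create()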
You can further manipulate an existing group by first initializing a Group object with its ID or name, and then calling
various methods on it.
Delete a group:
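A sketch (group ID illustrative):
>>> group = client.group('1234')
>>> group.delete()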
Update a group:
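A sketch (group ID and new name illustrative):
>>> group = client.fetch_group('1234')
>>> group.display_name = 'New display name'
>>> group.update()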
A time series includes a collection of data points and a set of resource and metric label values. See TimeSeries and
the Time Series API documentation for more information.
While you can obtain time series objects by iterating over a Query object, usually it is more useful to retrieve time
series data in the form of a pandas.DataFrame, where each column corresponds to a single time series. For this,
you must have pandas installed; it is not a required dependency of google-cloud-python.
You can display CPU utilization across your GCE instances over a five minute duration ending at the start of the
current minute as follows:
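A sketch, assuming the client from above and the default metric type:
>>> query = client.query(minutes=5)
>>> dataframe = query.as_dataframe(label='instance_name')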
Query objects provide a variety of methods for refining the query. You can request temporal alignment and cross-
series reduction, and you can filter by label values. See the client query() method and the Query class for more
information.
For example, you can display CPU utilization during the last hour across GCE instances with names beginning with
"mycluster-", averaged over five-minute intervals and aggregated per zone, as follows:
The Stackdriver Monitoring API can be used to write data points to custom metrics. Please refer to the documentation
on Custom Metrics for more information.
To write a data point to a custom metric, you must provide an instance of Metric specifying the metric type as well
as the values for the metric labels. You will need to have either created the metric descriptor earlier (see the Metric
Descriptors section) or rely on metric type auto-creation (see Auto-creation of custom metrics).
You will also need to provide a Resource instance specifying a monitored resource type as well as values for all of
the monitored resource labels, except for project_id, which is ignored when it is included in writes to the API. A
good choice is to use the underlying physical resource where your application code runs, e.g. a monitored resource
type of gce_instance or aws_ec2_instance. In some limited circumstances, such as when only a single
process writes to the custom metric, you may choose to use the global monitored resource type.
See Monitored resource types for more information about particular monitored resource types.
With a Metric and Resource in hand, the Client can be used to write Point values.
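A sketch of constructing the pieces (metric type, label values, and resource values illustrative):
>>> from google.cloud import monitoring
>>> client = monitoring.Client()
>>> metric = client.metric(
...     type_='custom.googleapis.com/my_metric',
...     labels={'status': 'successful'})
>>> resource = client.resource(
...     'gce_instance',
...     labels={
...         'instance_id': '1234567890123456789',
...         'zone': 'us-central1-f'})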
When writing points, the Python type of the value must match the value type of the metric descriptor associated with
the metric. For example, a Python float will map to ValueType.DOUBLE.
Stackdriver Monitoring supports several metric kinds: GAUGE, CUMULATIVE, and DELTA. However, DELTA is not
supported for custom metrics.
GAUGE metrics represent only a single point in time, so only the end_time should be specified:
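A sketch (value illustrative):
>>> import datetime
>>> end = datetime.datetime.utcnow()
>>> client.write_point(metric, resource, 3.14, end_time=end)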
By default, end_time defaults to utcnow(), so metrics can be written to the current time as follows:
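A sketch (value illustrative):
>>> client.write_point(metric, resource, 3.14)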
CUMULATIVE metrics enable the monitoring system to compute rates of increase on metrics that sometimes reset,
such as after a process restart. Without cumulative metrics, this reset would otherwise show up as a huge negative
spike. For cumulative metrics, the same start time should be re-used repeatedly as more points are written to the time
series.
In the examples below, the end_time again defaults to the current time:
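A sketch of re-using the same start time for successive points (values illustrative):
>>> import datetime
>>> start = datetime.datetime.utcnow()
>>> client.write_point(metric, resource, 3, start_time=start)
>>> # later, write another point reusing the same start time:
>>> client.write_point(metric, resource, 6, start_time=start)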
While multiple time series can be written in a single batch, each TimeSeries object sent to the API must only
include a single point.
All timezone-naive Python datetime objects are assumed to be UTC.
Stackdriver Logging
sinks_api
Helper for log sink-related API calls.
See https://cloud.google.com/logging/docs/reference/v2/rest/v2/projects.sinks
17.2 Logger
delete(client=None)
API call: delete all entries in a logger via a DELETE request
See https://cloud.google.com/logging/docs/reference/v2/rest/v2/projects.logs/delete
Parameters client (Client or NoneType) the client to use. If not passed, falls back to
the client stored on the current logger.
full_name
Fully-qualified name used in logging APIs
list_entries(projects=None, filter_=None, order_by=None, page_size=None, page_token=None)
Return a page of log entries.
See https://cloud.google.com/logging/docs/reference/v2/rest/v2/entries/list
Parameters
projects (list of strings) project IDs to include. If not passed, defaults to
the project bound to the client.
filter_ (str) a filter expression. See https://cloud.google.com/logging/docs/view/advanced_filters
order_by (str) One of ASCENDING or DESCENDING.
page_size (int) maximum number of entries to return. If not passed, defaults to a
value set by the API.
page_token (str) opaque marker for the next page of entries. If not passed, the
API will return the first page of entries.
Return type Iterator
Returns Iterator of _BaseEntry accessible to the current logger.
log_proto(message, client=None, labels=None, insert_id=None, severity=None, http_request=None, timestamp=None, resource=Resource(type='global', labels={}))
API call: log a protobuf message via a POST request
See https://cloud.google.com/logging/docs/reference/v2/rest/v2/entries/write
Parameters
message (Message) The protobuf message to be logged.
client (Client or NoneType) the client to use. If not passed, falls back to the
client stored on the current logger.
labels (dict) (optional) mapping of labels for the entry.
insert_id (str) (optional) unique ID for log entry.
severity (str) (optional) severity of event being logged.
http_request (dict) (optional) info about HTTP request associated with the entry.
resource (Resource) Monitored resource of the entry, defaults to the global resource type.
timestamp (datetime.datetime) (optional) timestamp of event being logged.
log_struct(info, client=None, labels=None, insert_id=None, severity=None, http_request=None, timestamp=None, resource=Resource(type='global', labels={}))
API call: log a structured message via a POST request
See https://cloud.google.com/logging/docs/reference/v2/rest/v2/entries/write
Parameters
info (dict) the log entry information
client (Client or NoneType) the client to use. If not passed, falls back to the
client stored on the current logger.
labels (dict) (optional) mapping of labels for the entry.
insert_id (str) (optional) unique ID for log entry.
severity (str) (optional) severity of event being logged.
http_request (dict) (optional) info about HTTP request associated with the entry.
resource (Resource) Monitored resource of the entry, defaults to the global resource type.
timestamp (datetime.datetime) (optional) timestamp of event being logged.
log_text(text, client=None, labels=None, insert_id=None, severity=None, http_request=None, timestamp=None, resource=Resource(type='global', labels={}))
API call: log a text message via a POST request
See https://cloud.google.com/logging/docs/reference/v2/rest/v2/entries/write
Parameters
text (str) the log message.
client (Client or NoneType) the client to use. If not passed, falls back to the
client stored on the current logger.
labels (dict) (optional) mapping of labels for the entry.
insert_id (str) (optional) unique ID for log entry.
severity (str) (optional) severity of event being logged.
http_request (dict) (optional) info about HTTP request associated with the entry
resource (Resource) Monitored resource of the entry, defaults to the global resource type.
timestamp (datetime.datetime) (optional) timestamp of event being logged.
path
URI path for use in logging APIs
project
Project bound to the logger.
17.3 Entries
class google.cloud.logging.entries.ProtobufEntry(payload, logger, insert_id=None, timestamp=None, labels=None, severity=None, http_request=None, resource=None)
Bases: google.cloud.logging.entries._BaseEntry
Entry created with protoPayload.
Parameters
payload (str, dict or any_pb2.Any) The payload passed as textPayload,
jsonPayload, or protoPayload. This also may be passed as a raw any_pb2.Any
if the protoPayload could not be deserialized.
logger (Logger) the logger used to write the entry.
insert_id (str) (optional) the ID used to identify an entry uniquely.
timestamp (datetime.datetime) (optional) timestamp for the entry
labels (dict) (optional) mapping of labels for the entry
severity (str) (optional) severity of event being logged.
http_request (dict) (optional) info about HTTP request associated with the entry
resource (Resource) (Optional) Monitored resource of the entry
parse_message(message)
Parse payload into a protobuf message.
Mutates the passed-in message in place.
Parameters message (Protobuf message) the message to be logged
class google.cloud.logging.entries.StructEntry(payload, logger, insert_id=None, timestamp=None, labels=None, severity=None, http_request=None, resource=None)
Bases: google.cloud.logging.entries._BaseEntry
Entry created with jsonPayload.
See https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry
class google.cloud.logging.entries.TextEntry(payload, logger, insert_id=None, timestamp=None, labels=None, severity=None, http_request=None, resource=None)
Bases: google.cloud.logging.entries._BaseEntry
Entry created with textPayload.
See https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry
google.cloud.logging.entries.logger_name_from_path(path)
Validate a logger URI path and get the logger name.
Parameters path (str) URI path for a logger API request.
Return type str
Returns Logger name parsed from path.
Raises ValueError if the path is ill-formed or if the project from the path does not agree with
the project passed in.
17.4 Metrics
path
URL path for the metrics APIs
project
Project bound to the logger.
reload(client=None)
API call: sync local metric configuration via a GET request
See https://cloud.google.com/logging/docs/reference/v2/rest/v2/projects.metrics/get
Parameters client (Client or NoneType) the client to use. If not passed, falls back to
the client stored on the current metric.
update(client=None)
API call: update metric configuration via a PUT request
See https://cloud.google.com/logging/docs/reference/v2/rest/v2/projects.metrics/update
Parameters client (Client or NoneType) the client to use. If not passed, falls back to
the client stored on the current metric.
17.5 Sinks
Parameters client (Client or NoneType) the client to use. If not passed, falls back to
the client stored on the current sink.
exists(client=None)
API call: test for the existence of the sink via a GET request
See https://cloud.google.com/logging/docs/reference/v2/rest/v2/projects.sinks/get
Parameters client (Client or NoneType) the client to use. If not passed, falls back to
the client stored on the current sink.
Return type bool
Returns Boolean indicating existence of the sink.
classmethod from_api_repr(resource, client)
Factory: construct a sink given its API representation
Parameters
resource (dict) sink resource representation returned from the API
client (google.cloud.logging.client.Client) Client which holds cre-
dentials and project configuration for the sink.
Return type google.cloud.logging.sink.Sink
Returns Sink parsed from resource.
Raises ValueError if client is not None and the project from the resource does not agree
with the project from the client.
full_name
Fully-qualified name used in sink APIs
path
URL path for the sinks APIs
project
Project bound to the sink.
reload(client=None)
API call: sync local sink configuration via a GET request
See https://cloud.google.com/logging/docs/reference/v2/rest/v2/projects.sinks/get
Parameters client (Client or NoneType) the client to use. If not passed, falls back to
the client stored on the current sink.
update(client=None)
API call: update sink configuration via a PUT request
See https://cloud.google.com/logging/docs/reference/v2/rest/v2/projects.sinks/update
Parameters client (Client or NoneType) the client to use. If not passed, falls back to
the client stored on the current sink.
It's possible to tie the Python logging module directly into Google Cloud Logging. To use it, create a
CloudLoggingHandler instance from your Logging client.
Note:
This handler by default uses an asynchronous transport that sends log entries on a background thread. How-
ever, the API call will still be made in the same process. For other transport options, see the transports section.
All logs will go to a single custom log, which defaults to python. The name of the Python logger will be included in
the structured log entry under the python_logger field. You can change it by providing a name to the handler:
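For example (a sketch; 'mylog' is a placeholder log name):

handler = CloudLoggingHandler(client, name='mylog')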
It is also possible to attach the handler to the root Python logger, so that for example a plain logging.warn call would
be sent to Cloud Logging, as well as any other loggers created. However, you must avoid infinite recursion from the
logging calls the client itself makes. A helper method setup_logging is provided to configure this automatically:
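A minimal sketch, assuming the client has been created as above:

handler = CloudLoggingHandler(client)
google.cloud.logging.handlers.setup_logging(handler)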
The Python logging handler can use different transports. The default is google.cloud.logging.handlers.
BackgroundThreadTransport.
1. google.cloud.logging.handlers.BackgroundThreadTransport: this is the default.
It writes entries on a background threading.Thread.
2. google.cloud.logging.handlers.SyncTransport: this transport makes a direct API call on
each logging statement to write the entry.
class google.cloud.logging.handlers.handlers.CloudLoggingHandler(client, name='python', transport=BackgroundThreadTransport, resource=Resource(type='global', labels={}), labels=None)
Bases: logging.StreamHandler
Handler that directly makes Stackdriver logging API calls.
This is a Python standard logging handler that can be used to route Python standard logging messages
directly to the Stackdriver Logging API.
This handler supports both an asynchronous and synchronous transport.
Parameters
client (google.cloud.logging.client) the authenticated Google Cloud Log-
ging client for this handler to use
name (str) the name of the custom log in Stackdriver Logging. Defaults to python.
The name of the Python logger will be represented in the python_logger field.
transport (type) Class for creating new transport objects. It should extend
from the base Transport type and implement Transport.send(). Defaults to
BackgroundThreadTransport. The other option is SyncTransport.
resource (Resource) (Optional) Monitored resource of the entry, defaults to the
global resource type.
labels (dict) (Optional) Mapping of labels for the entry.
Example:
import logging
import google.cloud.logging
from google.cloud.logging.handlers import CloudLoggingHandler
client = google.cloud.logging.Client()
handler = CloudLoggingHandler(client)
cloud_logger = logging.getLogger('cloudLogger')
cloud_logger.setLevel(logging.INFO)
cloud_logger.addHandler(handler)
emit(record)
Actually log the specified logging record.
Overrides the default emit behavior of StreamHandler.
See https://docs.python.org/2/library/logging.html#handler-objects
Parameters record (logging.LogRecord) The record to be logged.
google.cloud.logging.handlers.handlers.setup_logging(handler, excluded_loggers=('google.cloud', 'google.auth', 'google_auth_httplib2'), log_level=20)
Attach a logging handler to the Python root logger
Excludes loggers that this library itself uses to avoid infinite recursion.
Parameters
handler (logging.Handler) the handler to attach to the Python root logger
excluded_loggers (tuple) (Optional) The loggers to not attach the handler to. This
will always include the loggers in the path of the logging client itself.
log_level (int) (Optional) Python logging log level. Defaults to logging.INFO.
Example:
import logging
import google.cloud.logging
from google.cloud.logging.handlers import CloudLoggingHandler
client = google.cloud.logging.Client()
handler = CloudLoggingHandler(client)
google.cloud.logging.handlers.setup_logging(handler)
logging.getLogger().setLevel(logging.DEBUG)
get_gae_resource()
Return the GAE resource using the environment variables.
Return type Resource
Returns Monitored resource for GAE.
class google.cloud.logging.handlers.transports.BackgroundThreadTransport(client, name, grace_period=5.0, batch_size=10)
Bases: google.cloud.logging.handlers.transports.base.Transport
Asynchronous transport that uses a background thread.
Parameters
client (Client) The Logging client.
name (str) the name of the logger.
grace_period (float) The amount of time to wait for pending logs to be submitted
when the process is shutting down.
batch_size (int) The maximum number of items to send at a time in the background
thread.
flush()
Submit any pending log records.
send(record, message, resource=None, labels=None)
Overrides Transport.send().
Parameters
record (logging.LogRecord) Python log record that the handler was called with.
message (str) The message from the LogRecord after being formatted by the as-
sociated log formatters.
resource (Resource) (Optional) Monitored resource of the entry.
labels (dict) (Optional) Mapping of labels for the entry.
class google.cloud.logging.handlers.transports.SyncTransport(client, name)
Bases: google.cloud.logging.handlers.transports.base.Transport
Transport that does a direct API call on each record.
send(record, message, resource=None, labels=None)
Overrides Transport.send().
Parameters
record (logging.LogRecord) Python log record that the handler was called with.
message (str) The message from the LogRecord after being formatted by the as-
sociated log formatters.
resource (Resource) (Optional) Monitored resource of the entry.
labels (dict) (Optional) Mapping of labels for the entry.
To write log entries, first create a Logger, passing the log name with which to associate the entries:
logger = client.logger(LOG_NAME)
logger.log_struct({
'message': 'My second entry',
'weather': 'partly cloudy',
}) # API call
Fetch the entries for the default project, iterating over pages:
iterator = client.list_entries()
pages = iterator.pages
Metrics are counters of entries which match a given filter. They can be used within Stackdriver Monitoring to create
charts and alerts.
List all metrics for a project:
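For example (a sketch, assuming client is an authenticated logging Client):

for metric in client.list_metrics():  # API call(s)
    print(metric.name)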
Create a metric:
metric = client.metric(
METRIC_NAME, filter_=FILTER, description=DESCRIPTION)
assert not metric.exists() # API call
metric.create() # API call
assert metric.exists() # API call
existing_metric = client.metric(METRIC_NAME)
existing_metric.reload() # API call
Update a metric:
existing_metric.filter_ = UPDATED_FILTER
existing_metric.description = UPDATED_DESCRIPTION
existing_metric.update() # API call
Delete a metric:
metric.delete()
Sinks allow exporting entries which match a given filter to Cloud Storage buckets, BigQuery datasets, or Cloud
Pub/Sub topics.
Make sure that the storage bucket you want to export logs to has cloud-logs@google.com as the owner. See
Setting permissions for Cloud Storage.
Add cloud-logs@google.com as the owner of the bucket:
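A sketch using the storage ACL API described in the Storage chapter (bucket is assumed to be an existing google.cloud.storage bucket):

bucket.acl.reload()  # API call
logs_group = bucket.acl.group('cloud-logs@google.com')
logs_group.grant_owner()
bucket.acl.add_entity(logs_group)
bucket.acl.save()  # API call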
To export logs to BigQuery you must log into the Cloud Platform Console and add cloud-logs@google.com to
a dataset.
See: Setting permissions for BigQuery
To export logs to Cloud Pub/Sub you must log into the Cloud Platform Console and add cloud-logs@google.com to
a topic.
See: Setting permissions for Pub/Sub
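With permissions in place, a sink can be created. A sketch, where SINK_NAME, FILTER, and DESTINATION are placeholders (DESTINATION is, e.g., 'storage.googleapis.com/BUCKET_NAME'):

sink = client.sink(SINK_NAME, filter_=FILTER, destination=DESTINATION)
assert not sink.exists()  # API call
sink.create()  # API call
assert sink.exists()  # API call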
existing_sink = client.sink(SINK_NAME)
existing_sink.reload()
Update a sink:
existing_sink.filter_ = UPDATED_FILTER
existing_sink.update()
Delete a sink:
sink.delete()
It's possible to tie the Python logging module directly into Google Stackdriver Logging. There are different
handler options to accomplish this. To automatically pick the default for your current environment, use
get_default_handler().
import logging
handler = client.get_default_handler()
cloud_logger = logging.getLogger('cloudLogger')
cloud_logger.setLevel(logging.INFO)
cloud_logger.addHandler(handler)
cloud_logger.error('bad news')
It is also possible to attach the handler to the root Python logger, so that for example a plain logging.warn call
would be sent to Stackdriver Logging, as well as any other loggers created. A helper method setup_logging() is
provided to configure this automatically.
client.setup_logging(log_level=logging.INFO)
Note: To reduce cost and quota usage, do not enable Stackdriver logging handlers while testing locally.
If you prefer not to use get_default_handler(), you can directly create a CloudLoggingHandler instance
which will write directly to the API.
from google.cloud.logging.handlers import CloudLoggingHandler
handler = CloudLoggingHandler(client)
cloud_logger = logging.getLogger('cloudLogger')
cloud_logger.setLevel(logging.INFO)
cloud_logger.addHandler(handler)
cloud_logger.error('bad news')
Note: This handler by default uses an asynchronous transport that sends log entries on a background thread. However,
the API call will still be made in the same process. For other transport options, see the transports section.
All logs will go to a single custom log, which defaults to python. The name of the Python logger will be included in
the structured log entry under the python_logger field. You can change it by providing a name to the handler:
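For example (a sketch; 'mycustomlog' is a placeholder log name):

handler = CloudLoggingHandler(client, name='mycustomlog')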
The CloudLoggingHandler logging handler can use different transports. The default is
BackgroundThreadTransport.
1. BackgroundThreadTransport: this is the default. It writes entries on a background
threading.Thread.
2. SyncTransport: this transport makes a direct API call on each logging statement to write the entry.
Besides CloudLoggingHandler, which writes directly to the API, two other handlers are provided:
AppEngineHandler, which is recommended when running on the Google App Engine Flexible vanilla runtimes
(i.e. your app.yaml contains runtime: python), and ContainerEngineHandler, which is recommended
when running on Google Container Engine with the Stackdriver Logging plugin enabled.
get_default_handler() and setup_logging() will attempt to use the environment to automatically detect
whether the code is running in these platforms and use the appropriate handler.
In both cases, the fluentd agent is configured to automatically parse log files in an expected format and forward them to
Stackdriver logging. The handlers provided help set the correct metadata such as log level so that logs can be filtered
accordingly.
Storage
chunk_size
Get the blob's default chunk size.
Return type int or NoneType
Returns The current blob's chunk size, if it is set.
client
The client bound to this blob.
component_count
Number of underlying components that make up this object.
See https://cloud.google.com/storage/docs/json_api/v1/objects
Return type int or NoneType
Returns The component count (in case of a composed object) or None if the property is not set
locally. This property will not be set on objects not created via compose.
compose(sources, client=None)
Concatenate source blobs into this one.
Parameters
sources (list of Blob) blobs whose contents will be composed into this blob.
client (Client or NoneType) Optional. The client to use. If not passed, falls back
to the client stored on the blob's bucket.
Raises ValueError if this blob does not have its content_type set.
content_disposition
HTTP Content-Disposition header for this object.
See RFC 6266 and API reference docs.
If the property is not set locally, returns None.
Return type str or NoneType
content_encoding
HTTP Content-Encoding header for this object.
See RFC 7231 and API reference docs.
If the property is not set locally, returns None.
Return type str or NoneType
content_language
HTTP Content-Language header for this object.
See BCP47 and API reference docs.
If the property is not set locally, returns None.
Return type str or NoneType
content_type
HTTP Content-Type header for this object.
See RFC 2616 and API reference docs.
If the property is not set locally, returns None.
Return type str or NoneType
crc32c
CRC32C checksum for this object.
See RFC 4960 and API reference docs.
If the property is not set locally, returns None.
Return type str or NoneType
create_resumable_upload_session(content_type=None, size=None, origin=None,
client=None)
Create a resumable upload session.
Resumable upload sessions allow you to start an upload session from one client and complete the session
in another. This method is called by the initiator to set the metadata and limits. The initiator then passes
the session URL to the client that will upload the binary data. The client performs a PUT request on
the session URL to complete the upload. This process allows untrusted clients to upload to an access-
controlled bucket. For more details, see the documentation on signed URLs.
The content type of the upload will be determined in order of precedence:
1. The value passed in to this method (if not None)
2. The value stored on the current blob
3. The default value (application/octet-stream)
Note: The effect of uploading to an existing blob depends on the versioning and lifecycle policies
defined on the blob's bucket. In the absence of those policies, upload will overwrite any existing contents.
See the object versioning and lifecycle API documents for details.
If encryption_key is set, the blob will be encrypted with a customer-supplied encryption key.
Parameters
size (int) (Optional). The maximum number of bytes that can be uploaded using this
session. If the size is not known when creating the session, this should be left blank.
content_type (str) (Optional) Type of content being uploaded.
origin (str) (Optional) If set, the upload can only be completed by a user-agent that
uploads from the given origin. This can be useful when passing the session to a web client.
client (Client) (Optional) The client to use. If not passed, falls back to the client
stored on the blob's bucket.
Return type str
Returns The resumable upload session URL. The upload can be completed by making an HTTP
PUT request with the file's contents.
Raises google.cloud.exceptions.GoogleCloudError if the session creation re-
sponse returns an error status.
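A minimal sketch of starting a session (blob as created elsewhere; the content type and size are illustrative):

session_url = blob.create_resumable_upload_session(
    content_type='text/plain', size=1024)  # API call
# Hand session_url to the client that will upload the binary data via a PUT request.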
delete(client=None)
Deletes a blob from Cloud Storage.
Parameters client (Client or NoneType) Optional. The client to use. If not passed,
falls back to the client stored on the blob's bucket.
Return type Blob
Returns The blob that was just deleted.
Note: If the server-set property, media_link, is not yet initialized, makes an additional API request to
load it.
Downloading a file that has been encrypted with a customer-supplied encryption key:
client = storage.Client(project='my-project')
bucket = client.get_bucket('my-bucket')
encryption_key = 'c7f32af42e45e85b9848a6a14dd2a8f6'
blob = Blob('secure-data', bucket, encryption_key=encryption_key)
blob.upload_from_string('my secret message.')
with open('/tmp/my-secure-file', 'wb') as file_obj:
blob.download_to_file(file_obj)
generate_signed_url(expiration, method='GET', content_type=None, generation=None, response_disposition=None, response_type=None, client=None, credentials=None)
Note: If you are on Google Compute Engine, you can't generate a signed URL. Follow Issue 50 for
updates on this. If you'd like to be able to generate a signed URL from GCE, you can use a standard
service account from a JSON file rather than a GCE service account.
If you have a blob that you want to allow access to for a set amount of time, you can use this method to
generate a URL that is only valid within a certain time period.
This is particularly useful if you don't want publicly accessible blobs, but don't want to require users to
explicitly log in.
Parameters
expiration (int, long, datetime.datetime, datetime.timedelta)
When the signed URL should expire.
method (str) The HTTP verb that will be used when requesting the URL.
content_type (str) (Optional) The content type of the object referenced by
resource.
generation (str) (Optional) A value that indicates which generation of the resource
to fetch.
response_disposition (str) (Optional) Content disposition of responses to re-
quests for the signed URL. For example, to enable the signed URL to initiate a download
of blob.png, use the value 'attachment; filename=blob.png'.
response_type (str) (Optional) Content type of responses to requests for the
signed URL. Used to override the content type of the underlying blob/object.
client (Client or NoneType) (Optional) The client to use. If not passed, falls back
to the client stored on the blob's bucket.
credentials (oauth2client.client.OAuth2Credentials or NoneType)
(Optional) The OAuth2 credentials to use to sign the URL. Defaults to the credentials
stored on the client used.
Return type str
Returns A signed URL you can use to access the resource until expiration.
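For example (a sketch; the one-hour expiration is an arbitrary choice):

import datetime

url = blob.generate_signed_url(
    expiration=datetime.timedelta(hours=1), method='GET')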
generation
Retrieve the generation for the object.
See https://cloud.google.com/storage/docs/json_api/v1/objects
Return type int or NoneType
Returns The generation of the blob or None if the property is not set locally.
get_iam_policy(client=None)
Retrieve the IAM policy for the object.
See https://cloud.google.com/storage/docs/json_api/v1/objects/getIamPolicy
Parameters client (Client or NoneType) Optional. The client to use. If not passed,
falls back to the client stored on the current object's bucket.
Return type google.cloud.iam.Policy
Returns the policy instance, based on the resource returned from the getIamPolicy API
request.
id
Retrieve the ID for the object.
See https://cloud.google.com/storage/docs/json_api/v1/objects
Return type str or NoneType
Returns The ID of the blob or None if the property is not set locally.
make_public(client=None)
Make this blob public, giving all users read access.
Parameters client (Client or NoneType) Optional. The client to use. If not passed,
falls back to the client stored on the blob's bucket.
md5_hash
MD5 hash for this object.
See RFC 1321 and API reference docs.
If the property is not set locally, returns None.
Return type str or NoneType
media_link
Retrieve the media download URI for the object.
See https://cloud.google.com/storage/docs/json_api/v1/objects
Return type str or NoneType
Returns The media link for the blob or None if the property is not set locally.
metadata
Retrieve arbitrary/application specific metadata for the object.
See https://cloud.google.com/storage/docs/json_api/v1/objects
Setter Update arbitrary/application specific metadata for the object.
Getter Retrieve arbitrary/application specific metadata for the object.
Return type dict or NoneType
Returns The metadata associated with the blob or None if the property is not set locally.
metageneration
Retrieve the metageneration for the object.
See https://cloud.google.com/storage/docs/json_api/v1/objects
Return type int or NoneType
Returns The metageneration of the blob or None if the property is not set locally.
owner
Retrieve info about the owner of the object.
See https://cloud.google.com/storage/docs/json_api/v1/objects
Return type dict or NoneType
Returns Mapping of owner's role/ID. If the property is not set locally, returns None.
patch(client=None)
Sends all changed properties in a PATCH request.
Updates the _properties with the response from the backend.
Parameters client (Client or NoneType) the client to use. If not passed, falls back to
the client stored on the current object.
path
Getter property for the URL path to this Blob.
Return type str
Returns The URL path to this Blob.
static path_helper(bucket_path, blob_name)
Relative URL path for a blob.
Parameters
bucket_path (str) The URL path for a bucket.
blob_name (str) The name of the blob.
Return type str
Returns The relative URL path for blob_name.
public_url
The public URL for this blob's object.
Return type string
Returns The public URL for this blob.
reload(client=None)
Reload properties from Cloud Storage.
Parameters client (Client or NoneType) the client to use. If not passed, falls back to
the client stored on the current object.
rewrite(source, token=None, client=None)
Rewrite source blob into this one.
Parameters
source (Blob) blob whose contents will be rewritten into this blob.
token (str) Optional. Token returned from an earlier, not-completed call to rewrite
the same source blob. If passed, result will include updated status, total bytes written.
client (Client or NoneType) Optional. The client to use. If not passed, falls back
to the client stored on the blob's bucket.
upload_from_file(file_obj, rewind=False, size=None, content_type=None, num_retries=None, client=None)
Upload the contents of this blob from a file-like object.
Note: The effect of uploading to an existing blob depends on the versioning and lifecycle policies
defined on the blob's bucket. In the absence of those policies, upload will overwrite any existing contents.
See the object versioning and lifecycle API documents for details.
For example, uploading a file encrypted with a customer-supplied key:
client = storage.Client(project='my-project')
bucket = client.get_bucket('my-bucket')
encryption_key = 'aa426195405adee2c8081bb9e7e74b19'
blob = Blob('secure-data', bucket, encryption_key=encryption_key)
with open('my-file', 'rb') as my_file:
blob.upload_from_file(my_file)
upload_from_filename(filename, content_type=None, client=None)
Upload this blob's contents from the content of a named file.
Note: The effect of uploading to an existing blob depends on the versioning and lifecycle policies
defined on the blob's bucket. In the absence of those policies, upload will overwrite any existing contents.
See the object versioning and lifecycle API documents for details.
Parameters
filename (str) The path to the file.
upload_from_string(data, content_type='text/plain', client=None)
Upload contents of this blob from the provided string.
Note: The effect of uploading to an existing blob depends on the versioning and lifecycle policies
defined on the blob's bucket. In the absence of those policies, upload will overwrite any existing contents.
See the object versioning and lifecycle API documents for details.
Parameters
data (bytes or str) The data to store in this blob. If the value is text, it will be
encoded as UTF-8.
content_type (str) Optional type of content being uploaded. Defaults to 'text/plain'.
client (Client or NoneType) Optional. The client to use. If not passed, falls back
to the client stored on the blob's bucket.
18.2 Buckets
blob(blob_name, chunk_size=None, encryption_key=None)
Factory constructor for blob object.
Note: This will not make an HTTP request; it simply instantiates a blob object owned by this bucket.
Parameters
blob_name (str) The name of the blob to be instantiated.
chunk_size (int) The size of a chunk of data whenever iterating (1 MB). This must
be a multiple of 256 KB per the API specification.
client
The client bound to this bucket.
configure_website(main_page_suffix=None, not_found_page=None)
Configure website-related properties.
See https://cloud.google.com/storage/docs/hosting-static-website
Note: This (apparently) only works if your bucket name is a domain name (and to do that, you need to
get approved somehow...).
If you want this bucket to host a website, just provide the name of an index page and a page to use when a
blob isn't found:
client = storage.Client()
bucket = client.get_bucket(bucket_name)
bucket.configure_website('index.html', '404.html')
bucket.make_public(recursive=True, future=True)
This says: "Make the bucket public, and all the stuff already in the bucket, and anything else I add to the
bucket. Just make it all public."
Parameters
main_page_suffix (str) The page to use as the main page of a directory. Typically
something like index.html.
not_found_page (str) The file to use when a page isn't found.
copy_blob(blob, destination_bucket, new_name=None, client=None, preserve_acl=True)
Copy the given blob to the given bucket, optionally with a new name.
Parameters
blob (google.cloud.storage.blob.Blob) The blob to be copied.
destination_bucket (google.cloud.storage.bucket.Bucket) The
bucket into which the blob should be copied.
new_name (str) (optional) the new name for the copied file.
client (Client or NoneType) Optional. The client to use. If not passed, falls back
to the client stored on the current bucket.
preserve_acl (bool) Optional. Copies ACL from old blob to new blob. Default:
True.
Return type google.cloud.storage.blob.Blob
Returns The new Blob.
cors
Retrieve or set CORS policies configured for this bucket.
See http://www.w3.org/TR/cors/ and https://cloud.google.com/storage/docs/json_api/v1/buckets
Note: The getter for this property returns a list which contains copies of the bucket's CORS policy
mappings. Mutating the list or one of its dicts has no effect unless you then re-assign the dict via the setter.
E.g.:
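A sketch of the re-assign pattern (the maxAgeSeconds value is illustrative):

policies = bucket.cors
policies[0]['maxAgeSeconds'] = 3600
bucket.cors = policies
bucket.update()  # API call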
create(client=None)
Creates current bucket.
If the bucket already exists, will raise google.cloud.exceptions.Conflict.
This implements storage.buckets.insert.
Parameters client (Client or NoneType) Optional. The client to use. If not passed,
falls back to the client stored on the current bucket.
default_object_acl
Create our defaultObjectACL on demand.
delete(force=False, client=None)
Delete this bucket.
The bucket must be empty in order to submit a delete request. If force=True is passed, this will first
attempt to delete all the objects / blobs in the bucket (i.e. try to empty the bucket).
If the bucket doesn't exist, this will raise google.cloud.exceptions.NotFound. If the bucket is
not empty (and force=False), will raise google.cloud.exceptions.Conflict.
If force=True and the bucket contains more than 256 objects / blobs this will cowardly refuse to delete
the objects (or the bucket). This is to prevent accidental bucket deletion and to prevent extremely long
runtime of this method.
Parameters
force (bool) If True, empties the bucket's objects then deletes it.
client (Client or NoneType) Optional. The client to use. If not passed, falls back
to the client stored on the current bucket.
Raises ValueError if force is True and the bucket contains more than 256 objects / blobs.
delete_blob(blob_name, client=None)
Deletes a blob from the current bucket.
Parameters
blob_name (str) A blob name to delete.
client (Client or NoneType) Optional. The client to use. If not passed, falls back
to the client stored on the current bucket.
Raises google.cloud.exceptions.NotFound (to suppress the exception, call
delete_blobs, passing a no-op on_error callback, e.g.:
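A sketch of that suggested call:

bucket.delete_blobs([blob_name], on_error=lambda blob: None)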
exists(client=None)
Determines whether or not this bucket exists.
Parameters client (Client or NoneType) Optional. The client to use. If not passed,
falls back to the client stored on the current bucket.
Return type bool
Returns True if the bucket exists in Cloud Storage.
generate_upload_policy(conditions, expiration=None, client=None)
Create a signed upload policy for uploading objects.
This method generates and signs a policy document. You can use policy documents to allow visitors to a
website to upload files to Google Cloud Storage without giving them direct write access.
For example:
bucket = client.bucket('my-bucket')
conditions = [
['starts-with', '$key', ''],
{'acl': 'public-read'}]
policy = bucket.generate_upload_policy(conditions)
policy_fields = ''.join(
    '<input type="hidden" name="{key}" value="{value}">'.format(
        key=key, value=value)
    for key, value in policy.items()
)
upload_form = (
    '<form action="http://{bucket_name}.storage.googleapis.com"'
    ' method="post" enctype="multipart/form-data">'
    '<input type="text" name="key" value="my-test-key">'
    '<input type="hidden" name="bucket" value="{bucket_name}">'
    '<input type="hidden" name="acl" value="public-read">'
    '<input name="file" type="file">'
    '<input type="submit" value="Upload">'
    '{policy_fields}'
    '</form>').format(bucket_name=bucket.name, policy_fields=policy_fields)
print(upload_form)
get_blob(blob_name, client=None, encryption_key=None, **kwargs)
Get a blob object by name. This will return None if the blob doesn't exist:
client = storage.Client()
bucket = client.get_bucket('my-bucket')
assert isinstance(bucket.get_blob('/path/to/blob.txt'), Blob)
# <Blob: my-bucket, /path/to/blob.txt>
assert not bucket.get_blob('/does-not-exist.txt')
# None
Parameters
blob_name (str) The name of the blob to retrieve.
client (Client or NoneType) Optional. The client to use. If not passed, falls back
to the client stored on the current bucket.
encryption_key (bytes) Optional 32 byte encryption key for customer-supplied
encryption. See https://cloud.google.com/storage/docs/encryption#customer-supplied.
kwargs (dict) Keyword arguments to pass to the Blob constructor.
Return type google.cloud.storage.blob.Blob or None
Returns The blob object if it exists, otherwise None.
get_iam_policy(client=None)
Retrieve the IAM policy for the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets/getIamPolicy
Parameters client (Client or NoneType) Optional. The client to use. If not passed,
falls back to the client stored on the current bucket.
Return type google.cloud.iam.Policy
Returns the policy instance, based on the resource returned from the getIamPolicy API
request.
get_logging()
Return info about access logging for this bucket.
See https://cloud.google.com/storage/docs/access-logs#status
Return type dict or None
Returns a dict with keys logBucket and logObjectPrefix (if logging is enabled), or None
(if not).
id
Retrieve the ID for the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
Return type str or NoneType
Returns The ID of the bucket or None if the property is not set locally.
labels
Retrieve or set labels assigned to this bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets#labels
Note: The getter for this property returns a dict which is a copy of the bucket's labels. Mutating that dict
has no effect unless you then re-assign the dict via the setter. E.g.:
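A sketch of the re-assign pattern (the label key and value are placeholders):

labels = bucket.labels
labels['colour'] = 'red'
bucket.labels = labels
bucket.update()  # API call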
lifecycle_rules
Retrieve or set lifecycle rules configured for this bucket.
See https://cloud.google.com/storage/docs/lifecycle and https://cloud.google.com/storage/docs/json_api/v1/buckets
Note: The getter for this property returns a list which contains copies of the bucket's lifecycle rules
mappings. Mutating the list or one of its dicts has no effect unless you then re-assign the dict via the setter.
E.g.:
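A sketch of the re-assign pattern (the rule mutation shown is illustrative; rule structure follows the JSON API):

rules = bucket.lifecycle_rules
rules[0]['condition']['age'] = 30
bucket.lifecycle_rules = rules
bucket.update()  # API call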
owner
Retrieve info about the owner of the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
Return type dict or NoneType
Returns Mapping of owner's role/ID. If the property is not set locally, returns None.
patch(client=None)
Sends all changed properties in a PATCH request.
Updates the _properties with the response from the backend.
Parameters client (Client or NoneType) the client to use. If not passed, falls back to
the client stored on the current object.
path
The URL path to this bucket.
static path_helper(bucket_name)
Relative URL path for a bucket.
Parameters bucket_name (str) The bucket name in the path.
Return type str
Returns The relative URL path for bucket_name.
project_number
Retrieve the number of the project to which the bucket is assigned.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
Return type int or NoneType
Returns The project number that owns the bucket or None if the property is not set locally.
reload(client=None)
Reload properties from Cloud Storage.
Parameters client (Client or NoneType) the client to use. If not passed, falls back to
the client stored on the current object.
rename_blob(blob, new_name, client=None)
Rename the given blob using copy and delete operations.
Effectively, copies blob to the same bucket with a new name, then deletes the blob.
Warning: This method will first duplicate the data and then delete the old blob. This means that with
very large objects renaming could be a (temporarily) very costly or a very slow operation.
Parameters
blob (google.cloud.storage.blob.Blob) The blob to be renamed.
new_name (str) The new name for this blob.
client (Client or NoneType) Optional. The client to use. If not passed, falls back
to the client stored on the current bucket.
Return type Blob
Returns The newly-renamed blob.
self_link
Retrieve the URI for the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
Return type str or NoneType
Returns The self link for the bucket or None if the property is not set locally.
set_iam_policy(policy, client=None)
Update the IAM policy for the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets/setIamPolicy
Parameters
policy (google.cloud.iam.Policy) policy instance used to update the bucket's
IAM policy.
client (Client or NoneType) Optional. The client to use. If not passed, falls back
to the client stored on the current bucket.
Return type google.cloud.iam.Policy
Returns the policy instance, based on the resource returned from the setIamPolicy API
request.
storage_class
Retrieve or set the storage class for the bucket.
See https://cloud.google.com/storage/docs/storage-classes
Setter Set the storage class for this bucket.
Getter Gets the storage class for this bucket.
Return type str or NoneType
Returns If set, one of MULTI_REGIONAL, REGIONAL, NEARLINE, COLDLINE,
STANDARD, or DURABLE_REDUCED_AVAILABILITY, else None.
test_iam_permissions(permissions, client=None)
API call: test permissions
See https://cloud.google.com/storage/docs/json_api/v1/buckets/testIamPermissions
Parameters
permissions (list of string) the permissions to check
client (Client or NoneType) Optional. The client to use. If not passed, falls back
to the client stored on the current bucket.
Return type list of string
Returns the permissions returned by the testIamPermissions API request.
time_created
Retrieve the timestamp at which the bucket was created.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
Return type datetime.datetime or NoneType
Returns Datetime object parsed from RFC3339 valid timestamp, or None if the property is not
set locally.
update(client=None)
Sends all properties in a PUT request.
Updates the _properties with the response from the backend.
Parameters client (Client or NoneType) the client to use. If not passed, falls back to
the client stored on the current object.
versioning_enabled
Is versioning enabled for this bucket?
See https://cloud.google.com/storage/docs/object-versioning for details.
Setter Update whether versioning is enabled for this bucket.
Getter Query whether versioning is enabled for this bucket.
Return type bool
Returns True if enabled, else False.
18.3 ACL
Adding and removing permissions can be done with the following methods (in increasing order of granularity):
ACL.all() corresponds to access for all users.
ACL.all_authenticated() corresponds to access for all users that are signed into a Google account.
ACL.domain() corresponds to access on a per Google Apps domain (i.e., example.com).
ACL.group() corresponds to access on a per group basis (either by ID or e-mail address).
ACL.user() corresponds to access on a per user basis (either by ID or e-mail address).
And you are able to grant and revoke the following roles:
Reading: _ACLEntity.grant_read() and _ACLEntity.revoke_read()
Writing: _ACLEntity.grant_write() and _ACLEntity.revoke_write()
Owning: _ACLEntity.grant_owner() and _ACLEntity.revoke_owner()
You can use any of these like any other factory method (these happen to be _ACLEntity factories):
acl.user('[email protected]').grant_read()
acl.all_authenticated().grant_write()
After that, you can save any changes you make with the google.cloud.storage.acl.ACL.save() method:
acl.save()
You can alternatively save any existing google.cloud.storage.acl.ACL object (whether it was created by a
factory method or not) from a google.cloud.storage.bucket.Bucket:
bucket.acl.save(acl=acl)
To get the list of entity and role for each unique pair, the ACL class is iterable:
print(list(acl))
# [{'role': 'OWNER', 'entity': 'allUsers'}, ...]
This list of mappings can be used as the entity and role fields when sending metadata for ACLs to the API.
class google.cloud.storage.acl.ACL
Bases: object
Container class representing a list of access controls.
PREDEFINED_JSON_ACLS = frozenset(['publicRead', 'bucketOwnerFullControl', 'bucketOwnerR
See https://cloud.google.com/storage/docs/access-control/lists#predefined-acl
add_entity(entity)
Add an entity to the ACL.
Parameters entity (_ACLEntity) The entity to add to this ACL.
all()
Factory method for an Entity representing all users.
Return type _ACLEntity
Returns An entity representing all users.
all_authenticated()
Factory method for an Entity representing all authenticated users.
Return type _ACLEntity
Returns An entity representing all authenticated users.
clear(client=None)
Remove all ACL entries.
Note that this won't actually remove ALL the rules, but it will remove all the non-default rules. In short,
you'll still have access to a bucket that you created even after you clear ACL rules with this method.
Parameters client (Client or NoneType) Optional. The client to use. If not passed,
falls back to the client stored on the ACL's parent.
client
Abstract getter for the object client.
domain(domain)
Factory method for a domain Entity.
Parameters domain (str) The domain for this entity.
Return type _ACLEntity
Returns An entity corresponding to this domain.
entity(entity_type, identifier=None)
Factory method for creating an Entity.
If an entity with the same type and identifier already exists, this will return a reference to that entity. If not,
it will create a new one and add it to the list of known entities for this ACL.
Parameters
entity_type (str) The type of entity to create (i.e., user, group, etc.)
identifier (str) The ID of the entity (if applicable). This can be either an ID or an
e-mail address.
Return type _ACLEntity
Returns A new Entity or a reference to an existing identical entity.
entity_from_dict(entity_dict)
Build an _ACLEntity object from a dictionary of data.
An entity is a mutable object that represents a list of roles belonging to either a user or group or the special
types for all users and all authenticated users.
Parameters entity_dict (dict) Dictionary full of data from an ACL lookup.
Return type _ACLEntity
Returns An Entity constructed from the dictionary.
get_entities()
Get a list of all Entity objects.
Return type list of _ACLEntity objects
Returns A list of all Entity objects.
get_entity(entity, default=None)
Gets an entity object from the ACL.
Parameters
entity (_ACLEntity or string) The entity to look up in the ACL.
default (anything) This value will be returned if the entity doesn't exist.
Return type _ACLEntity
Returns The corresponding entity or the value provided to default.
group(identifier)
Factory method for a group Entity.
Parameters identifier (str) An id or e-mail for this particular group.
Return type _ACLEntity
Returns An Entity corresponding to this group.
has_entity(entity)
Returns whether or not this ACL has any entries for an entity.
Parameters entity (_ACLEntity) The entity to check for existence in this ACL.
Return type bool
Returns True if the entity exists in the ACL.
reload(client=None)
Reload the ACL data from Cloud Storage.
Parameters client (Client or NoneType) Optional. The client to use. If not passed,
falls back to the client stored on the ACL's parent.
reset()
Remove all entities from the ACL, and clear the loaded flag.
save(acl=None, client=None)
Save this ACL for the current bucket.
Parameters
acl (google.cloud.storage.acl.ACL, or a compatible list.) The ACL object
to save. If left blank, this will save current entries.
client (Client or NoneType) Optional. The client to use. If not passed, falls back
to the client stored on the ACL's parent.
save_predefined(predefined, client=None)
Save this ACL for the current bucket using a predefined ACL.
Parameters
predefined (str) An identifier for a predefined ACL. Must be one of the keys in
PREDEFINED_JSON_ACLS or PREDEFINED_XML_ACLS (which will be aliased to
the corresponding JSON name). If passed, acl must be None.
client (Client or NoneType) Optional. The client to use. If not passed, falls back
to the client stored on the ACL's parent.
user(identifier)
Factory method for a user Entity.
Parameters identifier (str) An id or e-mail for this particular user.
Return type _ACLEntity
Returns An Entity corresponding to this user.
class google.cloud.storage.acl.BucketACL(bucket)
Bases: google.cloud.storage.acl.ACL
An ACL specifically for a bucket.
Parameters bucket (google.cloud.storage.bucket.Bucket) The bucket to which
this ACL relates.
client
The client bound to this ACL's bucket.
reload_path
Compute the path for GET API requests for this ACL.
save_path
Compute the path for PATCH API requests for this ACL.
class google.cloud.storage.acl.DefaultObjectACL(bucket)
Bases: google.cloud.storage.acl.BucketACL
A class representing the default object ACL for a bucket.
class google.cloud.storage.acl.ObjectACL(blob)
Bases: google.cloud.storage.acl.ACL
An ACL specifically for a Cloud Storage object / blob.
Parameters blob (google.cloud.storage.blob.Blob) The blob that this ACL corre-
sponds to.
client
The client bound to this ACL's blob.
reload_path
Compute the path for GET API requests for this ACL.
save_path
Compute the path for PATCH API requests for this ACL.
18.4 Batches
class google.cloud.storage.client.Client(project=None, credentials=None, _http=None)
Client to bundle configuration needed for API requests.
Parameters
_http (Session) (Optional) HTTP object to make requests. Can be any object that
defines request() with the same interface as requests.Session.request(). If
not passed, an _http object is created that is bound to the credentials for the current
object. This parameter should be considered private, and could change in the future.
SCOPE = ('https://www.googleapis.com/auth/devstorage.full_control', 'https://www.google
The scopes required for authenticating as a Cloud Storage consumer.
batch()
Factory constructor for batch object.
Note: This will not make an HTTP request; it simply instantiates a batch object owned by this client.
bucket(bucket_name)
Factory constructor for bucket object.
Note: This will not make an HTTP request; it simply instantiates a bucket object owned by this client.
create_bucket(bucket_name)
Create a new bucket.
For example:
bucket = client.create_bucket('my-bucket')
assert isinstance(bucket, Bucket)
# <Bucket: my-bucket>
get_bucket(bucket_name)
Get a bucket by name, raising google.cloud.exceptions.NotFound if it does not exist. For example:
try:
bucket = client.get_bucket('my-bucket')
except google.cloud.exceptions.NotFound:
print('Sorry, that bucket does not exist!')
lookup_bucket(bucket_name)
Get a bucket by name, returning None if it does not exist. For example:
bucket = client.lookup_bucket('doesnt-exist')
assert not bucket
# None
bucket = client.lookup_bucket('my-bucket')
assert isinstance(bucket, Bucket)
# <Bucket: my-bucket>
Returns The bucket matching the name provided or None if not found.
Translation
19.3 Methods
To create a client:
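A minimal sketch:

from google.cloud import translate

client = translate.Client()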
By default, the client targets English when doing detections and translations, but a non-default value can be used as
well:
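For example (a sketch; 'es' is a placeholder target language):

client = translate.Client(target_language='es')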
The Google Cloud Translation API has three supported methods, and they map to three methods on a client:
get_languages(), detect_language() and translate().
To get a list of languages supported by the Google Cloud Translation API:
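For example (a sketch, using the client created above):

for language in client.get_languages():  # API call
    print(language['language'], language['name'])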
The confidence value is an optional floating point value between 0 and 1. The closer this value is to 1, the higher the
confidence level for the language detection. This member is not always available.
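A sketch of detection (the sample strings are placeholders; confidence may be absent from a result):

results = client.detect_language(['Me llamo', 'I am'])  # API call
for result in results:
    print(result['language'], result.get('confidence'))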
To translate text:
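A sketch (the source text is a placeholder):

translation = client.translate('koszula')  # API call
print(translation['translatedText'])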
Vision
The Google Cloud Vision API (Vision API docs) enables developers to understand the content of an image by encap-
sulating powerful machine learning models in an easy-to-use REST API. It quickly classifies images into thousands
of categories (e.g., "sailboat", "lion", "Eiffel Tower"), detects individual objects and faces within images, and finds
and reads printed words contained within images. You can build metadata on your image catalog, moderate offensive
content, or enable new marketing scenarios through image sentiment analysis. Analyze images uploaded in the request
or integrate with your image storage on Google Cloud Storage.
If you are only requesting a single feature, you may find it easier to ask for it using our direct methods:
>>> from google.cloud import vision
>>> client = vision.ImageAnnotatorClient()
>>> response = client.face_detection({
... 'source': {'image_uri': 'gs://my-test-bucket/image.jpg'},
... })
>>> len(response.annotations)
1
>>> for face in response.annotations[0].faces:
... print(face.joy)
Likelihood.VERY_LIKELY
Likelihood.VERY_LIKELY
Likelihood.VERY_LIKELY
If no results for the detection performed can be extracted from the image, then an empty list is returned. This behavior
is the same across all detection types.
Example with logo_detection():
>>> from google.cloud import vision
>>> client = vision.ImageAnnotatorClient()
>>> with open('./image.jpg', 'rb') as image_file:
... content = image_file.read()
>>> response = client.logo_detection({
... 'content': content,
... })
>>> len(response.annotations)
0
annotate_image(request, options=None)
Run image detection and annotation for an image.
Example
>>> from google.cloud import vision
>>> client = vision.ImageAnnotatorClient()
>>> request = {'image': {'source': {'image_uri': 'gs://my-test-bucket/image.jpg'}}}
>>> response = client.annotate_image(request)
Parameters
request (AnnotateImageRequest)
options (google.gax.CallOptions) Overrides the default settings for this call,
e.g., timeout, retries, etc.
Returns AnnotateImageResponse The API response.
batch_annotate_images(requests, options=None)
Run image detection and annotation for a batch of images.
Example
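A sketch of a batch request (the URI is a placeholder; the feature type is taken from the enums module exposed on the client):

>>> from google.cloud import vision
>>> client = vision.ImageAnnotatorClient()
>>> requests = [{
...     'image': {'source': {'image_uri': 'gs://my-test-bucket/image.jpg'}},
...     'features': [{'type': client.enums.Feature.Type.FACE_DETECTION}],
... }]
>>> response = client.batch_annotate_images(requests)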
Parameters
requests (list[Union[dict, AnnotateImageRequest]]) Individual image
annotation requests for this batch. If a dict is provided, it must be of the same form as
the protobuf message AnnotateImageRequest
options (CallOptions) Overrides the default settings for this call, e.g., timeout,
retries, etc.
Returns A BatchAnnotateImagesResponse instance.
Raises
google.gax.errors.GaxError if the RPC is aborted.
ValueError if the parameters are invalid.
Parameters
image (Image) The image to analyze.
options (google.gax.CallOptions) Overrides the default settings for this call,
e.g., timeout, retries, etc.
kwargs (dict) Additional properties to be set on the AnnotateImageRequest.
Returns The API response.
Return type AnnotateImageResponse
enums = <module 'google.cloud.vision_v1.gapic.enums'>
face_detection(image, options=None, **kwargs)
Perform face detection.
Parameters
image (Image) The image to analyze.
options (google.gax.CallOptions) Overrides the default settings for this call,
e.g., timeout, retries, etc.
kwargs (dict) Additional properties to be set on the AnnotateImageRequest.
Returns The API response.
Return type AnnotateImageResponse
image_properties(image, options=None, **kwargs)
Return image properties information.
Parameters
image (Image) The image to analyze.
options (google.gax.CallOptions) Overrides the default settings for this call,
e.g., timeout, retries, etc.
kwargs (dict) Additional properties to be set on the AnnotateImageRequest.
Returns The API response.
Return type AnnotateImageResponse
label_detection(image, options=None, **kwargs)
Perform label detection.
Parameters
image (Image) The image to analyze.
options (google.gax.CallOptions) Overrides the default settings for this call,
e.g., timeout, retries, etc.
kwargs (dict) Additional properties to be set on the AnnotateImageRequest.
Returns The API response.
Return type AnnotateImageResponse
landmark_detection(image, options=None, **kwargs)
Perform landmark detection.
Parameters
image (Image) The image to analyze.
options (google.gax.CallOptions) Overrides the default settings for this call,
e.g., timeout, retries, etc.
kwargs (dict) Additional properties to be set on the AnnotateImageRequest.
Returns The API response.
Return type AnnotateImageResponse
class google.cloud.vision_v1.types.AnnotateImageRequest
Request for performing Google Cloud Vision API tasks over a user-provided image, with user-requested fea-
tures.
image
The image to be processed.
features
Requested features.
image_context
Additional context that may accompany the image.
class google.cloud.vision_v1.types.AnnotateImageResponse
Response to an image annotation request.
face_annotations
If present, face detection has completed successfully.
landmark_annotations
If present, landmark detection has completed successfully.
logo_annotations
If present, logo detection has completed successfully.
label_annotations
If present, label detection has completed successfully.
text_annotations
If present, text (OCR) detection or document (OCR) text detection has completed successfully.
full_text_annotation
If present, text (OCR) detection or document (OCR) text detection has completed successfully. This
annotation provides the structural hierarchy for the OCR detected text.
safe_search_annotation
If present, safe-search annotation has completed successfully.
image_properties_annotation
If present, image properties were extracted successfully.
crop_hints_annotation
If present, crop hints have completed successfully.
web_detection
If present, web detection has completed successfully.
error
If set, represents the error message for the operation. Note that filled-in image annotations are guaranteed
to be correct, even when error is set.
class google.cloud.vision_v1.types.BatchAnnotateImagesRequest
Multiple image annotation requests are batched into a single service call.
requests
Individual image annotation requests for this batch.
class google.cloud.vision_v1.types.BatchAnnotateImagesResponse
Response to a batch image annotation request.
responses
Individual responses to image annotation requests within the batch.
class google.cloud.vision_v1.types.Block
Logical element on the page.
property
Additional information detected for the block.
bounding_box
The bounding box for the block. The vertices are in the order of top-left, top-right, bottom-right, bottom-
left. When a rotation of the bounding box is detected, the rotation is represented as around the top-left
corner as defined when the text is read in the "natural" orientation. For example:
* when the text is horizontal it might look like:
    0----1
    |    |
    3----2
* when it's rotated 180 degrees around the top-left corner it becomes:
    2----3
    |    |
    1----0
and the vertex order will still be (0, 1, 2, 3).
paragraphs
List of paragraphs in this block (if this block is of type text).
block_type
Detected block type (text, image etc) for this block.
class google.cloud.vision_v1.types.BoundingPoly
A bounding polygon for the detected image annotation.
vertices
The bounding polygon vertices.
class google.cloud.vision_v1.types.ColorInfo
Color information consists of RGB channels, score, and the fraction of the image that the color occupies in the
image.
color
RGB components of the color.
score
Image-specific score for this color. Value in range [0, 1].
pixel_fraction
The fraction of pixels the color occupies in the image. Value in range [0, 1].
class google.cloud.vision_v1.types.CropHint
Single crop hint that is used to generate a new crop when serving an image.
bounding_poly
The bounding polygon for the crop region. The coordinates of the bounding box are in the original image's scale, as returned in ImageParams.
confidence
Confidence of this being a salient region. Range [0, 1].
importance_fraction
Fraction of importance of this salient region with respect to the original image.
class google.cloud.vision_v1.types.CropHintsAnnotation
Set of crop hints that are used to generate new crops when serving images.
class google.cloud.vision_v1.types.CropHintsParams
Parameters for crop hints annotation request.
aspect_ratios
Aspect ratios in floats, representing the ratio of the width to the height of the image. For example, if
the desired aspect ratio is 4/3, the corresponding float value should be 1.33333. If not specified, the best
possible crop is returned. The number of provided aspect ratios is limited to a maximum of 16; any aspect
ratios provided after the 16th are ignored.
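For instance, asking for crops close to 16:9 might look like the following sketch, assuming the crop_hints helper and passing the context through to the underlying AnnotateImageRequest (the image location is a placeholder):

from google.cloud import vision_v1

client = vision_v1.ImageAnnotatorClient()
image = vision_v1.types.Image(
    source=vision_v1.types.ImageSource(image_uri='gs://your-bucket/photo.jpg'))
# Aspect ratios are width/height floats; 16:9 is roughly 1.77.
context = vision_v1.types.ImageContext(
    crop_hints_params=vision_v1.types.CropHintsParams(aspect_ratios=[1.77]))

response = client.crop_hints(image=image, image_context=context)
for hint in response.crop_hints_annotation.crop_hints:
    print(hint.confidence, hint.bounding_poly.vertices)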
class google.cloud.vision_v1.types.DominantColorsAnnotation
Set of dominant colors and their corresponding scores.
colors
RGB color values with their score and pixel fraction.
class google.cloud.vision_v1.types.EntityAnnotation
Set of detected entity features.
mid
Opaque entity ID. Some IDs may be available in Google Knowledge Graph Search API.
locale
The language code for the locale in which the entity textual description is expressed.
description
Entity textual description, expressed in its locale language.
score
Overall score of the result. Range [0, 1].
confidence
The accuracy of the entity detection in an image. For example, for an image in which the Eiffel Tower
entity is detected, this field represents the confidence that there is a tower in the query image. Range [0,
1].
topicality
The relevancy of the ICA (Image Content Annotation) label to the image. For example, the relevancy of
tower is likely higher to an image containing the detected Eiffel Tower than to an image containing a
detected distant towering building, even though the confidence that there is a tower in each image may be
the same. Range [0, 1].
bounding_poly
Image region to which this entity belongs. Currently not produced for LABEL_DETECTION features.
For TEXT_DETECTION (OCR), boundingPolys are produced for the entire text detected in an image
region, followed by boundingPolys for each word within the detected text.
locations
The location information for the detected entity. Multiple LocationInfo elements can be present because one location may indicate the location of the scene in the image, and another location may indicate
the location of the place where the image was taken. Location information is usually present for landmarks.
properties
Some entities may have optional user-supplied Property (name/value) fields, such as a score or string that qualifies the entity.
class google.cloud.vision_v1.types.FaceAnnotation
A face annotation object contains the results of face detection.
bounding_poly
The bounding polygon around the face. The coordinates of the bounding box are in the original image's scale, as returned in ImageParams. The bounding box is computed to frame the face in accordance with human expectations. It is based on the landmarker results. Note that one or more x and/or y coordinates may not be generated in the BoundingPoly (the polygon will be unbounded) if only a partial face appears in the image to be annotated.
fd_bounding_poly
The fd_bounding_poly bounding polygon is tighter than the boundingPoly, and encloses only
the skin part of the face. Typically, it is used to eliminate the face from any image analysis that detects
the amount of skin visible in an image. It is not based on the landmarker results, only on the initial face
detection, hence the fd (face detection) prefix.
landmarks
Detected face landmarks.
roll_angle
Roll angle, which indicates the amount of clockwise/anti-clockwise rotation of the face relative to the
image vertical about the axis perpendicular to the face. Range [-180,180].
pan_angle
Yaw angle, which indicates the leftward/rightward angle that the face is pointing relative to the vertical
plane perpendicular to the image. Range [-180,180].
tilt_angle
Pitch angle, which indicates the upwards/downwards angle that the face is pointing relative to the image's
horizontal plane. Range [-180,180].
detection_confidence
Detection confidence. Range [0, 1].
landmarking_confidence
Face landmarking confidence. Range [0, 1].
joy_likelihood
Joy likelihood.
sorrow_likelihood
Sorrow likelihood.
anger_likelihood
Anger likelihood.
surprise_likelihood
Surprise likelihood.
under_exposed_likelihood
Under-exposed likelihood.
blurred_likelihood
Blurred likelihood.
headwear_likelihood
Headwear likelihood.
class Landmark
A face-specific landmark (for example, a face feature). Landmark positions may fall outside the bounds of the image if the face is near one or more edges of the image. Therefore it is NOT guaranteed that 0 <= x < width or 0 <= y < height.
type
Face landmark type.
position
Face landmark position.
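A sketch of reading face annotations and their landmarks via the face_detection helper (the image location is a placeholder):

from google.cloud import vision_v1

client = vision_v1.ImageAnnotatorClient()
image = vision_v1.types.Image(
    source=vision_v1.types.ImageSource(image_uri='gs://your-bucket/faces.jpg'))

response = client.face_detection(image=image)
for face in response.face_annotations:
    # Likelihood fields are enum values (VERY_UNLIKELY through VERY_LIKELY).
    print(face.detection_confidence, face.joy_likelihood)
    for landmark in face.landmarks:
        print(landmark.type, landmark.position.x, landmark.position.y)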
class google.cloud.vision_v1.types.Feature
Users describe the type of Google Cloud Vision API tasks to perform over images by using Features. Each
Feature indicates a type of image detection task to perform. Features encode the Cloud Vision API vertical to
operate on and the number of top-scoring results to return.
type
The feature type.
max_results
Maximum number of results of this type.
class google.cloud.vision_v1.types.Image
Client image to perform Google Cloud Vision API tasks over.
content
Image content, represented as a stream of bytes. Note: as with all bytes fields, protobuffers use a pure
binary representation, whereas JSON representations use base64.
source
Google Cloud Storage image location. If both content and source are provided for an image,
content takes precedence and is used to perform the image annotation request.
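The two ways of supplying an image, as a sketch (the file and object names are placeholders):

from google.cloud import vision_v1

# From raw bytes, e.g. a local file ...
with open('photo.jpg', 'rb') as image_file:
    image_from_bytes = vision_v1.types.Image(content=image_file.read())

# ... or from a Cloud Storage location. If both fields were set on one
# Image, content would take precedence, per the note above.
image_from_uri = vision_v1.types.Image(
    source=vision_v1.types.ImageSource(image_uri='gs://your-bucket/photo.jpg'))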
class google.cloud.vision_v1.types.ImageContext
Image context and/or feature-specific parameters.
lat_long_rect
lat/long rectangle that specifies the location of the image.
language_hints
List of languages to use for TEXT_DETECTION. In most cases, an empty value yields the best results since it enables automatic language detection. For languages based on the Latin alphabet, setting language_hints is not needed. In rare cases, when the language of the text in the image is known, setting a hint will help get better results (although it will be a significant hindrance if the hint is wrong). Text detection returns an error if one or more of the specified languages is not one of the supported languages.
crop_hints_params
Parameters for crop hints annotation request.
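A sketch of passing a context with language hints to the text_detection helper, which forwards extra keyword arguments onto the AnnotateImageRequest (the image location is a placeholder):

from google.cloud import vision_v1

client = vision_v1.ImageAnnotatorClient()
image = vision_v1.types.Image(
    source=vision_v1.types.ImageSource(image_uri='gs://your-bucket/sign.jpg'))

# Only set language_hints when the language is known in advance.
context = vision_v1.types.ImageContext(language_hints=['en'])
response = client.text_detection(image=image, image_context=context)
for annotation in response.text_annotations:
    print(annotation.description)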
class google.cloud.vision_v1.types.ImageProperties
Stores image properties, such as dominant colors.
dominant_colors
If present, dominant colors completed successfully.
class google.cloud.vision_v1.types.ImageSource
External image source (Google Cloud Storage image location).
gcs_image_uri
NOTE: For new code, image_uri below is preferred. Google Cloud Storage image URI, which must be
in the following form: gs://bucket_name/object_name (for details, see Google Cloud Storage
Request URIs). NOTE: Cloud Storage object versioning is not supported.
image_uri
Image URI which supports: 1) Google Cloud Storage image URI, which must be in the following form:
gs://bucket_name/object_name (for details, see Google Cloud Storage Request URIs). NOTE:
Cloud Storage object versioning is not supported. 2) Publicly accessible image HTTP/HTTPS URL. This
is preferred over the legacy gcs_image_uri above. When both gcs_image_uri and image_uri
are specified, image_uri takes precedence.
class google.cloud.vision_v1.types.LatLongRect
Rectangle determined by min and max LatLng pairs.
min_lat_lng
Min lat/long pair.
max_lat_lng
Max lat/long pair.
class google.cloud.vision_v1.types.LocationInfo
Detected entity location information.
lat_lng
lat/long location coordinates.
class google.cloud.vision_v1.types.Page
Detected page from OCR.
property
Additional information detected on the page.
width
Page width in pixels.
height
Page height in pixels.
blocks
List of blocks of text, images etc on this page.
class google.cloud.vision_v1.types.Paragraph
Structural unit of text representing a number of words in a certain order.
property
Additional information detected for the paragraph.
bounding_box
The bounding box for the paragraph. The vertices are in the order of top-left, top-right, bottom-right,
bottom-left. When a rotation of the bounding box is detected the rotation is represented as around the
top-left corner as defined when the text is read in the natural orientation. For example: * when the text is
horizontal it might look like: 0-1 | | 3 -2 * when its rotated 180 degrees around the top-left corner it
becomes: 2-3 | | 1-0 and the vertice order will still be (0, 1, 2, 3).
words
List of words in this paragraph.
class google.cloud.vision_v1.types.Position
A 3D position in the image, used primarily for Face detection landmarks. A valid Position must have both x and
y coordinates. The position coordinates are in the same scale as the original image.
x
X coordinate.
y
Y coordinate.
z
Z coordinate (or depth).
class google.cloud.vision_v1.types.Property
A Property consists of a user-supplied name/value pair.
name
Name of the property.
value
Value of the property.
class google.cloud.vision_v1.types.SafeSearchAnnotation
Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for
example, adult, spoof, medical, violence).
adult
Represents the adult content likelihood for the image.
spoof
Spoof likelihood. The likelihood that a modification was made to the image's canonical version to make
it appear funny or offensive.
medical
Likelihood that this is a medical image.
violence
Violence likelihood.
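A sketch of reading these fields via the safe_search_detection helper (the image location is a placeholder):

from google.cloud import vision_v1

client = vision_v1.ImageAnnotatorClient()
image = vision_v1.types.Image(
    source=vision_v1.types.ImageSource(image_uri='gs://your-bucket/photo.jpg'))

response = client.safe_search_detection(image=image)
annotation = response.safe_search_annotation
# Each field is a Likelihood enum value (VERY_UNLIKELY through VERY_LIKELY).
print(annotation.adult, annotation.spoof, annotation.medical, annotation.violence)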
class google.cloud.vision_v1.types.Symbol
A single symbol representation.
property
Additional information detected for the symbol.
bounding_box
The bounding box for the symbol. The vertices are in the order of top-left, top-right, bottom-right, bottom-
left. When a rotation of the bounding box is detected the rotation is represented as around the top-left
corner as defined when the text is read in the natural orientation. For example: * when the text is
horizontal it might look like: 0-1 | | 3 -2 * when its rotated 180 degrees around the top-left corner it
becomes: 2-3 | | 1-0 and the vertice order will still be (0, 1, 2, 3).
text
The actual UTF-8 representation of the symbol.
class google.cloud.vision_v1.types.TextAnnotation
TextAnnotation contains a structured representation of OCR-extracted text. The hierarchy of an OCR-extracted text structure is: TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol. Each structural component, starting from Page, may further have its own properties. Properties describe detected languages, breaks, etc. Please refer to the TextProperty message definition below for more detail.
pages
List of pages detected by OCR.
text
UTF-8 text detected on the pages.
class DetectedBreak
Detected start or end of a structural component.
is_prefix
True if break prepends the element.
class DetectedLanguage
Detected language for a structural component.
language_code
The BCP-47 language code, such as en-US or sr-Latn. For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
confidence
Confidence of detected language. Range [0, 1].
class TextProperty
Additional information detected on the structural component.
detected_languages
A list of detected languages together with confidence.
detected_break
Detected start or end of a text segment.
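A sketch of walking this hierarchy via the document_text_detection helper (the image location is a placeholder):

from google.cloud import vision_v1

client = vision_v1.ImageAnnotatorClient()
image = vision_v1.types.Image(
    source=vision_v1.types.ImageSource(image_uri='gs://your-bucket/scan.png'))

response = client.document_text_detection(image=image)
document = response.full_text_annotation
print(document.text)  # the full UTF-8 text

# Walk TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol.
for page in document.pages:
    for block in page.blocks:
        for paragraph in block.paragraphs:
            for word in paragraph.words:
                word_text = ''.join(symbol.text for symbol in word.symbols)
                print(word_text)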
class google.cloud.vision_v1.types.Vertex
A vertex represents a 2D point in the image. The vertex coordinates are in the same scale as the original image.
x
X coordinate.
y
Y coordinate.
class google.cloud.vision_v1.types.WebDetection
Relevant information for the image from the Internet.
web_entities
Deduced entities from similar images on the Internet.
full_matching_images
Fully matching images from the Internet. They are definite near-duplicates, and most often a copy of the query image with merely a size change.
partial_matching_images
Partial matching images from the Internet. Those images are similar enough to share some key-point features. For example, an original image will likely have partial matching for its crops.
pages_with_matching_images
Web pages containing the matching images from the Internet.
class WebEntity
Entity deduced from similar images on the Internet.
entity_id
Opaque entity ID.
description
Canonical description of the entity, in English.
class WebImage
Metadata for online images.
class WebPage
Metadata for web pages.
url
The result web page URL.
score
Overall relevancy score for the web page. Not normalized and not comparable across different image queries.
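A sketch of reading these fields via the web_detection helper (the image location is a placeholder):

from google.cloud import vision_v1

client = vision_v1.ImageAnnotatorClient()
image = vision_v1.types.Image(
    source=vision_v1.types.ImageSource(image_uri='gs://your-bucket/photo.jpg'))

response = client.web_detection(image=image)
detection = response.web_detection
for entity in detection.web_entities:
    print(entity.entity_id, entity.description)
for page in detection.pages_with_matching_images:
    print(page.url)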
class google.cloud.vision_v1.types.Word
A word representation.
property
Additional information detected for the word.
bounding_box
The bounding box for the word. The vertices are in the order of top-left, top-right, bottom-right, bottom-
left. When a rotation of the bounding box is detected the rotation is represented as around the top-left
corner as defined when the text is read in the natural orientation. For example: * when the text is
horizontal it might look like: 0-1 | | 3 -2 * when its rotated 180 degrees around the top-left corner it
becomes: 2-3 | | 1-0 and the vertice order will still be (0, 1, 2, 3).
symbols
List of symbols in the word. The order of the symbols follows the natural reading order.
Google Cloud Datastore is a fully managed, schemaless database for storing non-relational data.
from google.cloud import datastore

# Create, populate, and persist an entity with an incomplete key.
client = datastore.Client()
key = client.key('Person')
entity = datastore.Entity(key=key)
entity['name'] = 'Your name'
entity['age'] = 25
client.put(entity)
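After put(), the partial key has been completed with a server-assigned ID, so the entity can be fetched back. A minimal sketch, reusing the client and entity from above:

fetched = client.get(entity.key)
print(fetched['name'], fetched['age'])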
Google Cloud Storage allows you to store data on Google infrastructure with very high reliability, performance and availability.

from google.cloud import storage

# Upload a string as a new object in an existing bucket.
client = storage.Client()
bucket = client.get_bucket('<your-bucket-name>')
blob = bucket.blob('my-test-file.txt')
blob.upload_from_string('this is test content!')
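Reading the object back is symmetric; a minimal sketch reusing the blob from above:

print(blob.download_as_string())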
21.1.3 Resources
GitHub
Issues
Stack Overflow
PyPI
Python Module Index

google.cloud.bigquery.client, 15
google.cloud.bigquery.dataset, 18
google.cloud.bigquery.job, 22
google.cloud.bigquery.query, 38
google.cloud.bigquery.schema, 41
google.cloud.bigquery.table, 42
google.cloud.bigtable.client, 60
google.cloud.bigtable.cluster, 62
google.cloud.bigtable.column_family, 74
google.cloud.bigtable.instance, 64
google.cloud.bigtable.row, 76
google.cloud.bigtable.row_data, 84
google.cloud.bigtable.row_filters, 87
google.cloud.bigtable.table, 69
google.cloud.client, 9
google.cloud.datastore, 115
google.cloud.datastore.batch, 112
google.cloud.datastore.client, 99
google.cloud.datastore.entity, 102
google.cloud.datastore.helpers, 114
google.cloud.datastore.key, 104
google.cloud.datastore.query, 107
google.cloud.datastore.transaction, 110
google.cloud.dns.changes, 136
google.cloud.dns.client, 131
google.cloud.dns.resource_record_set, 135
google.cloud.dns.zone, 132
google.cloud.environment_vars, 11
google.cloud.error_reporting.client, 259
google.cloud.error_reporting.util, 261
google.cloud.exceptions, 10
google.cloud.iam, 11
google.cloud.language_v1, 144
google.cloud.language_v1.types, 147
google.cloud.language_v1beta2, 153
google.cloud.language_v1beta2.types, 157
google.cloud.logging.client, 289
google.cloud.logging.entries, 295
google.cloud.logging.handlers.app_engine, 302
google.cloud.logging.handlers.container_engine, 303
google.cloud.logging.handlers.handlers, 300
google.cloud.logging.handlers.transports.background_thread, 304
google.cloud.logging.handlers.transports.base, 304
google.cloud.logging.handlers.transports.sync, 303
google.cloud.logging.logger, 292
google.cloud.logging.metric, 296
google.cloud.logging.sink, 298
google.cloud.monitoring.client, 265
google.cloud.monitoring.group, 274
google.cloud.monitoring.label, 283
google.cloud.monitoring.metric, 271
google.cloud.monitoring.query, 277
google.cloud.monitoring.resource, 273
google.cloud.monitoring.timeseries, 282
google.cloud.operation, 7
google.cloud.pubsub_v1.publisher.client, 166
google.cloud.pubsub_v1.subscriber.client, 174
google.cloud.pubsub_v1.types, 188
google.cloud.resource_manager.client, 197
google.cloud.resource_manager.project, 199
google.cloud.runtimeconfig, 208
google.cloud.runtimeconfig.client, 203
google.cloud.runtimeconfig.config, 204
google.cloud.runtimeconfig.variable, 206
google.cloud.spanner.batch, 241
google.cloud.spanner.client, 224
google.cloud.spanner.database, 230
google.cloud.spanner.instance, 227
google.cloud.spanner.keyset, 240
google.cloud.spanner.pool, 236
google.cloud.spanner.session, 233
google.cloud.spanner.snapshot, 241
google.cloud.spanner.streamed, 242
google.cloud.spanner.transaction, 242
google.cloud.speech_v1, 250
google.cloud.speech_v1.types, 253
google.cloud.storage.acl, 331
google.cloud.storage.batch, 335
google.cloud.storage.blob, 311
google.cloud.storage.bucket, 321
google.cloud.storage.client, 335
google.cloud.translate_v2.client, 339
google.cloud.vision_v1, 345
google.cloud.vision_v1.types, 349