NVIDIA and Google Cloud deliver accelerator-optimized solutions for your most demanding workloads, including machine learning, high performance computing, data analytics, graphics, and gaming.
Benefits
Increased performance for diverse workloads
With the latest NVIDIA GPUs on Google Cloud, you can easily provision Compute Engine instances with NVIDIA H100, A100, L4, T4, P100, P4, and V100 GPUs to accelerate a broad set of demanding workloads.
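For example, here is a minimal sketch of provisioning a GPU-attached instance with the google-cloud-compute Python client. The project, zone, machine type, boot image, and GPU model are illustrative placeholders, not recommendations:

```python
# Minimal sketch: create a Compute Engine VM with one attached NVIDIA GPU
# using the google-cloud-compute client (pip install google-cloud-compute).
# Project, zone, machine type, image, and GPU model are placeholders.
from google.cloud import compute_v1

def create_gpu_instance(project_id: str, zone: str, name: str) -> None:
    instance = compute_v1.Instance()
    instance.name = name
    instance.machine_type = f"zones/{zone}/machineTypes/n1-standard-8"

    # Attach one NVIDIA T4; swap the accelerator type for other GPU models.
    gpu = compute_v1.AcceleratorConfig()
    gpu.accelerator_type = f"zones/{zone}/acceleratorTypes/nvidia-tesla-t4"
    gpu.accelerator_count = 1
    instance.guest_accelerators = [gpu]

    # GPU instances cannot live-migrate during host maintenance.
    instance.scheduling = compute_v1.Scheduling(on_host_maintenance="TERMINATE")

    instance.disks = [
        compute_v1.AttachedDisk(
            boot=True,
            auto_delete=True,
            initialize_params=compute_v1.AttachedDiskInitializeParams(
                source_image="projects/debian-cloud/global/images/family/debian-12",
                disk_size_gb=100,
            ),
        )
    ]
    instance.network_interfaces = [
        compute_v1.NetworkInterface(network="global/networks/default")
    ]

    operation = compute_v1.InstancesClient().insert(
        project=project_id, zone=zone, instance_resource=instance
    )
    operation.result()  # block until the instance is created

create_gpu_instance("my-project", "us-central1-a", "gpu-demo-vm")
```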
Reduce costs with per-second billing
Google Cloud's per-second pricing means you pay only for what you use, and sustained use discounts of up to 30% per month are applied automatically to instances that run for a large share of the billing month. Save on upfront costs while getting the same uptime and scalable performance.
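As a rough illustration of how that automatic discount accrues, the sketch below assumes the tiered sustained use discount schedule documented for N1 general-purpose machine types (each successive quarter of the month billed at 100%, 80%, 60%, and 40% of the base rate) and a made-up hourly price:

```python
# Rough illustration of how per-second usage plus an automatic sustained use
# discount works out over a month. Tier multipliers follow the schedule
# documented for N1 general-purpose machine types; the hourly rate is a
# made-up placeholder, not a real price.
BASE_RATE_PER_HOUR = 1.00                # hypothetical on-demand price
HOURS_IN_MONTH = 730
TIER_MULTIPLIERS = [1.0, 0.8, 0.6, 0.4]  # one multiplier per quarter of the month

def monthly_cost(hours_used: float) -> float:
    cost = 0.0
    quarter = HOURS_IN_MONTH / 4
    for multiplier in TIER_MULTIPLIERS:
        hours_in_tier = min(max(hours_used, 0.0), quarter)
        cost += hours_in_tier * multiplier * BASE_RATE_PER_HOUR
        hours_used -= hours_in_tier
    return cost

full_month = monthly_cost(HOURS_IN_MONTH)
print(full_month / (HOURS_IN_MONTH * BASE_RATE_PER_HOUR))  # ~0.70, i.e. ~30% off
```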
Optimize workloads with custom machine configurations
Optimize your workloads by precisely configuring an instance with the exact ratio of processors, memory, and NVIDIA GPUs you need instead of modifying your workload to fit within limited system configurations.
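Continuing the hypothetical provisioning sketch above, a custom vCPU-to-memory ratio is expressed directly in the machine type URI; the figures here are placeholders:

```python
# Custom machine types follow the pattern <family>-custom-<vCPU count>-<memory in MiB>.
# Here: 8 vCPUs with 32 GiB of memory on the N1 family (which supports attached GPUs).
instance.machine_type = f"zones/{zone}/machineTypes/n1-custom-8-32768"
```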
Key features
A3 VMs, powered by NVIDIA H100 Tensor Core GPUs, are purpose-built to train and serve especially demanding generative AI workloads and LLMs. Combining NVIDIA H100 GPUs with Google Cloud’s leading infrastructure technologies delivers massive scale and performance, a significant leap forward in supercomputing capabilities.
The accelerator-optimized A2 VMs are based on the NVIDIA Ampere A100 Tensor Core GPU. Each A100 GPU offers up to 20x the compute performance of the previous generation. These VMs are designed to deliver acceleration at every scale for AI, data analytics, and high performance computing to tackle the toughest computing challenges.
G2 was the industry’s first cloud VM powered by the NVIDIA L4 Tensor Core GPU, and is purpose-built for large-scale AI inference workloads such as generative AI. G2 delivers cutting-edge performance per dollar for AI inference. As a universal GPU offering, G2 also provides significant performance improvements for HPC, graphics, and video transcoding workloads.
Using Google Kubernetes Engine (GKE), you can seamlessly create clusters with NVIDIA GPUs on demand, load balance, and minimize operational costs by automatically scaling GPU resources up or down. With support for Multi-Instance GPU (MIG) partitioning on NVIDIA A100 GPUs, GKE can provision right-sized GPU acceleration with finer granularity for multi-user, multi-model AI inference workloads.
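A minimal sketch of what that can look like with the google-cloud-container Python client: adding an autoscaled A2 node pool whose A100 GPUs are partitioned into MIG slices. The project, location, cluster name, pool sizing, and partition size are illustrative assumptions, not a verified recipe:

```python
# Minimal sketch: add a GPU node pool with A100 Multi-Instance GPU (MIG)
# partitions and autoscaling to an existing GKE cluster, using the
# google-cloud-container client (pip install google-cloud-container).
from google.cloud import container_v1

def create_mig_node_pool(project_id: str, location: str, cluster: str) -> None:
    node_pool = container_v1.NodePool(
        name="a100-mig-pool",
        initial_node_count=1,
        config=container_v1.NodeConfig(
            machine_type="a2-highgpu-1g",  # A2 VM with one A100 GPU
            accelerators=[
                container_v1.AcceleratorConfig(
                    accelerator_type="nvidia-tesla-a100",
                    accelerator_count=1,
                    gpu_partition_size="1g.5gb",  # split each A100 into 1g.5gb MIG slices
                )
            ],
        ),
        # Let GKE scale the GPU pool down to zero when no inference work is queued.
        autoscaling=container_v1.NodePoolAutoscaling(
            enabled=True, min_node_count=0, max_node_count=4
        ),
    )
    client = container_v1.ClusterManagerClient()
    client.create_node_pool(
        parent=f"projects/{project_id}/locations/{location}/clusters/{cluster}",
        node_pool=node_pool,
    )

create_mig_node_pool("my-project", "us-central1", "my-cluster")
```

Workloads can then request a MIG slice on those nodes through the standard nvidia.com/gpu resource in their pod specs.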
NVIDIA CloudXR, a groundbreaking innovation built on NVIDIA RTX™ technology, makes high-quality XR accessible through Google Cloud Marketplace with NVIDIA RTX Virtual Workstation as a virtual machine image (VMI). Users can easily set up, scale, and consume high-quality immersive experiences and stream XR workflows from the cloud.
Ready to get started? Contact us
Learn more about how Google Cloud and NVIDIA can help you transform your business.
Documentation
Compute Engine provides GPUs that you can add to your virtual machine instances. Learn what you can do with GPUs and what types of GPU hardware are available.
Accelerate the training process for many deep learning models, such as those for image classification, video analysis, and natural language processing.
Learn how to use GPU hardware accelerators in your Google Kubernetes Engine clusters’ nodes.
Attach GPUs to the master and worker Compute Engine nodes in a Dataproc cluster to accelerate specific workloads, such as machine learning and data processing.
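For instance, here is a minimal sketch with the google-cloud-dataproc Python client that attaches one NVIDIA T4 to the master node and to each worker; the project, region, machine types, and instance counts are illustrative placeholders:

```python
# Minimal sketch: create a Dataproc cluster whose master and worker VMs each
# have an NVIDIA T4 attached, using the google-cloud-dataproc client
# (pip install google-cloud-dataproc). All identifiers below are placeholders.
from google.cloud import dataproc_v1

def create_gpu_cluster(project_id: str, region: str, name: str) -> None:
    gpu = dataproc_v1.AcceleratorConfig(
        accelerator_type_uri="nvidia-tesla-t4", accelerator_count=1
    )
    cluster = dataproc_v1.Cluster(
        project_id=project_id,
        cluster_name=name,
        config=dataproc_v1.ClusterConfig(
            master_config=dataproc_v1.InstanceGroupConfig(
                num_instances=1, machine_type_uri="n1-standard-8", accelerators=[gpu]
            ),
            worker_config=dataproc_v1.InstanceGroupConfig(
                num_instances=2, machine_type_uri="n1-standard-8", accelerators=[gpu]
            ),
        ),
    )
    # Dataproc clients must target the regional endpoint.
    client = dataproc_v1.ClusterControllerClient(
        client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
    )
    operation = client.create_cluster(
        project_id=project_id, region=region, cluster=cluster
    )
    operation.result()  # wait until the cluster is running

create_gpu_cluster("my-project", "us-central1", "gpu-dataproc")
```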
Tell us what you’re solving for. A Google Cloud expert will help you find the best solution.