By John Barrus, Product Manager
Google Cloud Platform gets a performance boost today with the much-anticipated public beta of NVIDIA Tesla K80 GPUs. You can now spin up NVIDIA GPU-based VMs in three GCP regions — us-east1, asia-east1 and europe-west1 — using the gcloud command-line tool. Support for creating GPU VMs through the Cloud Console is coming later this week.
If you need extra computational power for deep learning, you can attach up to eight GPUs (four K80 boards) to any custom Google Compute Engine virtual machine. GPUs can accelerate many types of computing and analysis, including video and image transcoding, seismic analysis, molecular modeling, genomics, computational finance, simulations, high performance data analysis, computational chemistry, fluid dynamics and visualization.
|NVIDIA K80 GPU Accelerator Board|
Rather than constructing a GPU cluster in your own datacenter, just add GPUs to virtual machines running in our cloud. GPUs on Google Compute Engine are attached directly to the VM, providing bare-metal performance. Each NVIDIA GPU in a K80 has 2,496 stream processors with 12 GB of GDDR5 memory. You can shape your instances for optimal performance by flexibly attaching 1, 2, 4 or 8 NVIDIA GPUs to custom machine shapes.
|Google Cloud supports as many as 8 GPUs attached to custom VMs, allowing you to optimize the performance of your applications.|
These instances support popular machine learning and deep learning frameworks such as TensorFlow, Theano, Torch, MXNet and Caffe, as well as NVIDIA’s popular CUDA software for building GPU-accelerated applications.
Like the rest of our infrastructure, the GPUs are priced competitively and are billed per minute (10-minute minimum). In the US, each K80 GPU attached to a VM is priced at $0.700 per hour per GPU; in Asia and Europe, $0.770 per hour per GPU. As always, you only pay for what you use. This frees you to spin up a large cluster of GPU machines for rapid deep learning and machine learning training with zero capital investment.
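To make the billing model concrete, here is a small back-of-the-envelope cost estimator. The function name and structure are illustrative, not any official API; it simply encodes the per-minute billing and 10-minute minimum described above, using the US rate as the default:

```python
def estimate_gpu_cost(minutes, num_gpus, hourly_rate_per_gpu=0.70):
    """Rough cost estimate for attached K80 GPUs.

    Billing is per minute with a 10-minute minimum, so short runs
    are charged as if they lasted 10 minutes. hourly_rate_per_gpu
    defaults to the US price ($0.700/hour); pass 0.77 for Asia/Europe.
    """
    billed_minutes = max(minutes, 10)
    return billed_minutes * (hourly_rate_per_gpu / 60.0) * num_gpus

# One GPU for an hour in the US costs roughly $0.70;
# an 8-GPU machine for two hours costs roughly $11.20.
print(round(estimate_gpu_cost(60, 1), 2))
print(round(estimate_gpu_cost(120, 8), 2))
```

Note that because of the minimum, a 5-minute experiment is billed the same as a 10-minute one, which matters only for very short jobs.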
The new Google Cloud GPUs are tightly integrated with Google Cloud Machine Learning (Cloud ML), helping you slash the time it takes to train machine learning models at scale using the TensorFlow framework. Now, instead of taking several days to train an image classifier on a large image dataset on a single machine, you can run distributed training with multiple GPU workers on Cloud ML, dramatically shortening your development cycle and iterating quickly on the model.
Cloud ML is a fully managed service that provides an end-to-end training and prediction workflow, integrated with cloud computing tools such as Google Cloud Dataflow, Google BigQuery, Google Cloud Storage and Google Cloud Datalab.
Start small and train a TensorFlow model locally on a small dataset. Then, kick off a larger Cloud ML training job against a full dataset in the cloud to take advantage of the scale and performance of Google Cloud GPUs. For more on Cloud ML, see the Quickstart guide to get started, or this document to dive into using GPUs.
Register for Cloud NEXT, sign up for the Cloud ML Bootcamp and learn how to supercharge performance using GPUs in the cloud. You can use the gcloud command-line tool to create a VM today and start experimenting with TensorFlow-accelerated machine learning. Detailed documentation is available on our website.