In the past several years, Jupyter notebooks have become a convenient way of experimenting with machine learning datasets and models, and of sharing training processes with colleagues and collaborators. Often, your notebook will take a long time to finish executing, and an extended training session may leave you incurring charges for Compute Engine resources you are no longer actively using.
This post will explain how to execute a Jupyter Notebook in a simple and cost-efficient way.
We’ll explain how to deploy a Deep Learning VM image with TensorFlow, and use it to execute a Jupyter notebook via the Nteract Papermill open source project. Once the notebook has finished executing, the Compute Engine instance that hosts your Deep Learning VM image will automatically terminate.
The components of our system:
First, Jupyter Notebooks
The Jupyter Notebook is an open-source, web-based, interactive environment for creating and sharing IPython notebook (.ipynb) documents that contain live code, equations, visualizations, and narrative text. This platform supports data cleaning and transformation, numerical simulation, statistical modeling, data visualization, machine learning, and much more.
Next, Deep Learning Virtual Machine (VM) images
The Deep Learning Virtual Machine images are a set of Debian 9-based Compute Engine virtual machine disk images that are optimized for data science and machine learning tasks. All images include common ML frameworks and tools installed from first boot, and can be used out of the box on instances with GPUs to accelerate your data processing tasks. You can launch Compute Engine instances pre-installed with popular ML frameworks like TensorFlow, PyTorch, or scikit-learn, and even add Cloud TPU and GPU support with a single click.
And now, Papermill
Papermill is a library for parametrizing, executing, and analyzing Jupyter Notebooks. It lets you spawn multiple notebooks with different parameter sets and execute them concurrently. Papermill can also help collect and summarize metrics from a collection of notebooks.
Papermill also permits you to read or write data from many different locations. Thus, you can store your output notebook on a different storage system that provides higher durability and easy access in order to establish a reliable pipeline. Papermill recently added support for Google Cloud Storage buckets, and in this post we will show you how to put this new functionality to use.
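To illustrate, here is a hypothetical invocation (the bucket and notebook names are placeholders, not from the original post) that reads an input notebook from Cloud Storage and writes the executed copy back to Cloud Storage:

```shell
# Hypothetical bucket and notebook names; requires papermill installed
# with its Cloud Storage extra (papermill[gcs]).
papermill gs://my-bucket/input.ipynb gs://my-bucket/output.ipynb \
  -p batch_size 128 -p epochs 40
```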
Submit a Jupyter notebook for execution
The following command starts execution of a Jupyter notebook stored in a Cloud Storage bucket:
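The command itself is missing from this copy of the post; below is a hedged reconstruction using standard gcloud flags. The instance name, zone, image family, bucket paths, metadata attribute names, and startup-script filename are all placeholders.

```shell
# All names and paths below are placeholders; adjust for your project.
export IMAGE_FAMILY="tf-latest-cpu"   # a TensorFlow Deep Learning VM image family
export ZONE="us-west1-b"
export INSTANCE_NAME="notebook-executor"
export INPUT_NOTEBOOK_PATH="gs://my-bucket/input.ipynb"
export OUTPUT_NOTEBOOK_PATH="gs://my-bucket/output.ipynb"

gcloud compute instances create $INSTANCE_NAME \
  --zone=$ZONE \
  --image-family=$IMAGE_FAMILY \
  --image-project=deeplearning-platform-release \
  --scopes=https://www.googleapis.com/auth/cloud-platform \
  --metadata="input_notebook_path=$INPUT_NOTEBOOK_PATH,output_notebook_path=$OUTPUT_NOTEBOOK_PATH" \
  --metadata-from-file=startup-script=notebook-executor.sh
```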
The above command does the following: it creates a Compute Engine instance from a Deep Learning VM image, passes the notebook’s input and output Cloud Storage paths to the instance as metadata, and attaches a startup script that executes the notebook and then deletes the instance when it is done.
And there you have it! You’ll no longer pay for resources you don’t use: after execution completes, your notebook, with its cells populated, is uploaded to the specified Cloud Storage bucket. You can read more about it in the Cloud Storage documentation.
Note: If you are not using a Deep Learning VM and want to install the Papermill library with Cloud Storage support, you only need to run:
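The install command is missing from this copy; based on Papermill’s packaging, installing with the Cloud Storage extra is likely:

```shell
# Quoted so the brackets survive in shells like zsh.
pip3 install 'papermill[gcs]'
```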
Note: Papermill supports Cloud Storage as of version 0.18.2.
And here is an even simpler set of commands:
Execute a notebook using GPU resources
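The GPU command is not preserved here; a sketch follows. The accelerator type and count, zone, and image family are assumptions — pick ones available in your zone. GPU images need the NVIDIA driver, which the Deep Learning VM can install via the `install-nvidia-driver` metadata key.

```shell
# Placeholders throughout; adjust accelerator, zone, and names for your project.
gcloud compute instances create notebook-executor-gpu \
  --zone=us-west1-b \
  --image-family=tf-latest-gpu \
  --image-project=deeplearning-platform-release \
  --maintenance-policy=TERMINATE \
  --accelerator="type=nvidia-tesla-k80,count=1" \
  --metadata="install-nvidia-driver=True" \
  --scopes=https://www.googleapis.com/auth/cloud-platform
```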
Execute a notebook using CPU resources
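The CPU variant is likewise missing; it is presumably the same command without the accelerator flags (names and zone below are placeholders):

```shell
# Placeholders throughout; no accelerator or driver metadata needed on CPU.
gcloud compute instances create notebook-executor-cpu \
  --zone=us-west1-b \
  --image-family=tf-latest-cpu \
  --image-project=deeplearning-platform-release \
  --scopes=https://www.googleapis.com/auth/cloud-platform
```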
The Deep Learning VM instance requires several permissions: the ability to read from and write to Cloud Storage, and the ability to delete instances on Compute Engine. That is why our original command defines the scope “https://www.googleapis.com/auth/cloud-platform”.
Your submission process will look like this:
Note: Verify that you have enough CPU or GPU resources available by checking your quota in the zone where your instance will be deployed.
Executing a Jupyter notebook
Let’s look into the following code:
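The code block is missing from this copy; the “standard way to create a Deep Learning VM” it refers to is presumably along these lines (instance name, zone, and image family are placeholders):

```shell
# Minimal Deep Learning VM creation; names are placeholders.
gcloud compute instances create my-dlvm \
  --zone=us-west1-b \
  --image-family=tf-latest-cpu \
  --image-project=deeplearning-platform-release
```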
This command is the standard way to create a Deep Learning VM. But keep in mind that you’ll need to pick a VM image that includes the core dependencies your notebook needs. Do not try to use a TensorFlow image if your notebook needs PyTorch, or vice versa.
Note: if you do not see a dependency that your notebook requires and you think it should be in the image, please let us know on the forum (or in a comment on this article).
The secret sauce here consists of two things: the Papermill library, and the startup shell script.
As described above, Papermill is a tool for parameterizing, executing, and analyzing Jupyter Notebooks. In our case, we are simply using its ability to execute a notebook and to pass parameters to it if needed.
Behind the scenes
Let’s start with the startup shell script parameters:
INPUT_NOTEBOOK_PATH: The input notebook, located in a Cloud Storage bucket.
OUTPUT_NOTEBOOK_PATH: The output notebook, located in a Cloud Storage bucket.
PARAMETERS_FILE: A YAML file from which notebook parameter values should be read.
PARAMETERS: Parameters to pass to the notebook, in the form -p key value, for example: -p batch_size 128 -p epochs 40.
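A hedged sketch of what such a startup script might look like: the metadata attribute names mirror the parameters above, but everything else (the self-delete step, the helper function) is an assumption about the script’s shape, not the original code.

```shell
#!/bin/bash
# Sketch only: reads the attributes above from instance metadata,
# executes the notebook with Papermill, then deletes this instance.
MD="http://metadata.google.internal/computeMetadata/v1/instance"

attr() { curl -s -H "Metadata-Flavor: Google" "$MD/attributes/$1"; }

INPUT_NOTEBOOK_PATH=$(attr input_notebook_path)
OUTPUT_NOTEBOOK_PATH=$(attr output_notebook_path)
PARAMETERS=$(attr parameters)   # e.g. "-p batch_size 128 -p epochs 40"

# papermill[gcs] reads and writes gs:// paths directly.
papermill "$INPUT_NOTEBOOK_PATH" "$OUTPUT_NOTEBOOK_PATH" $PARAMETERS

# Self-delete so billing stops once execution finishes.
NAME=$(curl -s -H "Metadata-Flavor: Google" "$MD/name")
ZONE=$(curl -s -H "Metadata-Flavor: Google" "$MD/zone" | awk -F/ '{print $NF}')
gcloud --quiet compute instances delete "$NAME" --zone="$ZONE"
```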
The two ways to execute a notebook with parameters are (1) through the Python API and (2) through the command line interface. This sample script supports these two ways of passing parameters to a Jupyter notebook, although Papermill supports other formats as well, so please consult Papermill’s documentation.
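The script’s dispatch between a parameters file and inline parameters can be sketched in pure bash. The variable names are assumptions; `-f` is Papermill’s command-line flag for a YAML parameters file, and `-p` passes a single key/value pair.

```shell
# Sketch of the parameter-dispatch logic; variable names are assumptions.
PARAMETERS_FILE=""                             # e.g. params.yaml, or empty
PARAMETERS="-p batch_size 128 -p epochs 40"    # inline -p key value pairs

if [ -n "$PARAMETERS_FILE" ]; then
  PM_ARGS="-f $PARAMETERS_FILE"                # read parameters from a YAML file
else
  PM_ARGS="$PARAMETERS"                        # pass parameters on the command line
fi

echo "papermill input.ipynb output.ipynb $PM_ARGS"
```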
The above script performs the following steps: it reads the input notebook from Cloud Storage, executes it with Papermill (passing any parameters you provided), writes the resulting notebook with populated cells to the output Cloud Storage path, and finally deletes the Compute Engine instance so you stop paying for it.
By using the Deep Learning VM images, you can automate your notebook training, so that you no longer need to pay extra or manually manage your Cloud infrastructure. Take advantage of all the pre-installed ML software and Nteract’s Papermill project to help you solve your ML problems more quickly! Papermill will help you automate the execution of your Jupyter notebooks, and in combination with Cloud Storage and Deep Learning VM images you can now set up this process in a very simple and cost-efficient way.