Posted by Alex Wiltschko, Research Scientist, Google Brain Team

Tangent is a new, free, and open-source Python library for automatic differentiation. In contrast to existing machine learning libraries, Tangent is a source-to-source system, consuming a Python function f and emitting a new Python function that computes the gradient of f. This allows much better user visibility into gradient computations, as well as easy user-level editing and debugging of gradients. Tangent comes with many more features for debugging and designing machine learning models:

- Easily debug your backward pass
- Fast gradient surgery
- Forward mode automatic differentiation
- Efficient Hessian-vector products
- Code optimizations

This post gives an overview of the Tangent API. It covers how to use Tangent to generate gradient code in Python that is easy to interpret, debug and modify.

Neural networks (NNs) have led to great advances in machine learning models for images, video, audio, and text. The fundamental abstraction that lets us train NNs to perform well at these tasks is a 30-year-old idea called reverse-mode automatic differentiation (also known as backpropagation), which comprises two passes through the NN. First, we run a “forward pass” to calculate the output value of each node. Then we run a “backward pass” to calculate a series of derivatives to determine how to update the weights to increase the model’s accuracy.
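The two passes can be sketched numerically for a single weight (a hand-written illustration, not Tangent output; the one-neuron model and variable names are invented for this example):

```python
# Two-pass sketch for a one-weight "network": out = w * x, loss = (out - t)^2
def forward(w, x, t):
    out = w * x                  # forward pass: compute each node's value
    loss = (out - t) ** 2
    return out, loss

def backward(w, x, t):
    out, _ = forward(w, x, t)
    dloss_dout = 2 * (out - t)   # backward pass: derivative at the output
    dloss_dw = dloss_dout * x    # chain rule back to the weight
    return dloss_dw

# One gradient step moves the weight in the direction that reduces the loss
w = 1.0
w -= 0.1 * backward(w, x=2.0, t=6.0)
```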

Training NNs, and doing research on novel architectures, requires us to compute these derivatives correctly, efficiently, and easily. We also need to be able to debug these derivatives when our model isn’t training well, or when we’re trying to build something new that we do not yet understand. Automatic differentiation, or just “autodiff,” is a technique to calculate the derivatives of computer programs that denote some mathematical function, and nearly every machine learning library implements it.

Existing libraries implement automatic differentiation either by tracing a program’s execution at runtime (like TF Eager, PyTorch, and Autograd) or by building a data-flow graph ahead of time and then differentiating that graph (like TensorFlow). In contrast, Tangent performs ahead-of-time autodiff on the Python source code itself, and produces Python source code as its output.

As a result, you can finally read your automatic derivative code just like the rest of your program. Tangent is useful to researchers and students who not only want to write their models in Python, but also read and debug automatically-generated derivative code without sacrificing speed and flexibility.

You can easily inspect and debug your models written in Tangent, without special tools or indirection. Tangent works on a large and growing subset of Python, provides extra autodiff features other Python ML libraries don’t have, is high-performance, and is compatible with TensorFlow and NumPy.

**Automatic differentiation of Python code**

How do we automatically generate derivatives of plain Python code? Math functions like tf.exp or tf.log have derivatives, which we can compose to build the backward pass. Similarly, pieces of syntax, such as subroutines, conditionals, and loops, also have backward-pass versions. Tangent contains recipes for generating derivative code for each piece of Python syntax, along with many NumPy and TensorFlow function calls.
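For instance, the recipe for a multiplication statement emits backward-pass code along these lines (hand-written for illustration; Tangent's actual generated code differs in detail, and the naming convention of prefixing gradients with "b" is borrowed from its output style):

```python
# Forward-pass statement:   y = a * b
# Backward-pass statements a recipe would emit, where `by` is the gradient
# flowing into y:
def mul_backward(a, b, by):
    ba = by * b   # dy/da = b
    bb = by * a   # dy/db = a
    return ba, bb
```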

Tangent has a one-function API:
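The call looks like df = tangent.grad(f), and the function it returns is ordinary, readable Python. A hand-written sketch of what such an emitted function looks like for a simple f (the real output differs in detail):

```python
def f(x):
    return x * x

# A function like the one tangent.grad(f) would emit: plain Python that
# computes df/dx, written by hand here for illustration. `bf` is the
# incoming gradient (seeded to 1.0 when differentiating f itself).
def df(x, bf=1.0):
    bx = bf * 2 * x   # d(x*x)/dx = 2x, scaled by the incoming gradient
    return bx
```

Because the result is just Python source, you can step through it in a debugger or edit it by hand, exactly as the post describes.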

The original post includes an animated graphic showing what happens when we call tangent.grad on a Python function.

If you want to print out the generated derivative code, pass verbose=1 to tangent.grad.

Under the hood, tangent.grad first grabs the source code of the Python function you pass it. Tangent has a large library of recipes for the derivatives of Python syntax, as well as TensorFlow Eager functions. The function tangent.grad then walks your code in reverse order, looks up the matching backward-pass recipe, and adds it to the end of the derivative function. This reverse-order processing gives the technique its name: reverse-mode automatic differentiation.
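The reverse walk can be sketched with a toy numeric model that chains hypothetical per-op recipes over a straight-line program (the real Tangent walks and rewrites source code, not values; the RECIPES table and program encoding here are invented for illustration):

```python
import math

# Toy recipe table: each primitive maps to its local derivative,
# given the op's input x (a drastic simplification of Tangent's recipes).
RECIPES = {
    "exp": lambda x: math.exp(x),   # d(exp x)/dx = exp(x)
    "log": lambda x: 1.0 / x,       # d(log x)/dx = 1/x
    "square": lambda x: 2.0 * x,    # d(x^2)/dx  = 2x
}
FORWARD = {"exp": math.exp, "log": math.log, "square": lambda v: v * v}

def grad(program, x):
    """Forward pass records intermediate values; the backward pass walks
    the program in REVERSE order, looks up each op's recipe, and chains
    the local derivatives (reverse-mode autodiff)."""
    vals = [x]
    for op in program:                      # forward pass
        vals.append(FORWARD[op](vals[-1]))
    bar = 1.0
    for op, inp in zip(reversed(program), reversed(vals[:-1])):
        bar *= RECIPES[op](inp)             # backward pass, in reverse order
    return bar
```

For example, grad(["exp", "log"], 3.0) differentiates log(exp(x)), whose derivative is 1 everywhere.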

The function df above only works for scalar (non-array) inputs. Tangent also supports:

- TensorFlow Eager functions, for processing arrays of numbers
- Subroutines
- Control flow
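For control flow, the generated backward pass mirrors the branch structure of the original function. A hand-written sketch of the kind of derivative code a source-to-source tool produces for a conditional (not Tangent's actual output):

```python
def f(x):
    if x > 0:
        y = x * x
    else:
        y = -x
    return y

# Illustrative generated derivative: the backward pass takes the same
# branch the forward pass took, and applies that branch's recipe.
def df(x, by=1.0):
    if x > 0:
        bx = by * 2 * x   # branch 1: d(x*x)/dx = 2x
    else:
        bx = -by          # branch 2: d(-x)/dx = -1
    return bx
```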

Although we started with TensorFlow Eager support, Tangent isn’t tied to one numeric library or another—we would gladly welcome pull requests adding PyTorch or MXNet derivative recipes.

**Next Steps**

Tangent is open source now at github.com/google/tangent. Go check it out for download and installation instructions. Tangent is still an experiment, so expect some bugs. If you report them to us on GitHub, we will do our best to fix them quickly.

We are working to add support in Tangent for more aspects of the Python language (e.g., closures, inline function definitions, classes, more NumPy and TensorFlow functions). We also hope to add more advanced automatic differentiation and compiler functionality in the future, such as automatic trade-off between memory and compute (Griewank and Walther 2000; Gruslys et al., 2016), more aggressive optimizations, and lambda lifting.

We intend to develop Tangent together as a community. We welcome pull requests with fixes and features. Happy deriving!

**Acknowledgments**

Bart van Merriënboer contributed immensely to all aspects of Tangent during his internship, and Dan Moldovan led TF Eager integration, infrastructure and benchmarking. Also, thanks to the Google Brain team for their support of this post, and special thanks to Sanders Kleinfeld and Aleks Haecky for their valuable contributions to the technical aspects of the post.

Source: Tangent: Source-to-Source Debuggable Derivatives

Unless otherwise noted, the content of this page is licensed under the Creative Commons Attribution 3.0 License, and code samples are licensed under the Apache 2.0 License. For details, see our Terms of Service.


© 2018 China Google Developers Community (ChinaGDG)