Source: Introducing TensorNetwork, an Open Source Library for Efficient Tensor Calculations from Google Research

Posted by Chase Roberts, Research Engineer, Google AI and Stefan Leichenauer, Research Scientist, X

Many of the world’s toughest scientific challenges, like developing high-temperature superconductors and understanding the true nature of space and time, involve dealing with the complexity of quantum systems. What makes these challenges difficult is that the number of quantum states in these systems is exponentially large, making brute-force computation infeasible. To deal with this, data structures called tensor networks are used. Tensor networks let one focus on the quantum states that are most relevant for real-world problems—the states of low energy, say—while ignoring the rest. Tensor networks are also increasingly finding applications in machine learning (ML). However, difficulties remain that have prevented their widespread adoption in the ML community: 1) a production-level tensor network library for accelerated hardware has not been available to run tensor network algorithms at scale, and 2) most of the tensor network literature is geared toward physics applications and creates the false impression that expertise in quantum mechanics is required to understand the algorithms.

In order to address these issues, we are releasing TensorNetwork, a brand new open source library to improve the efficiency of tensor calculations, developed in collaboration with the Perimeter Institute for Theoretical Physics and X. TensorNetwork uses TensorFlow as a backend and is optimized for GPU processing, which can enable speedups of up to 100x when compared to work on a CPU. We introduce TensorNetwork in a series of papers, the first of which presents the new library and its API, and provides an overview of tensor networks for a non-physics audience. In our second paper we focus on a particular use case in physics, demonstrating the speedup that one gets using GPUs.

**How are Tensor Networks Useful?**

Tensors are multidimensional arrays, categorized in a hierarchy according to their *order*: e.g., an ordinary number is a tensor of order zero (also known as a scalar), a vector is an order-one tensor, a matrix is an order-two tensor, and so on. While low-order tensors can easily be represented by an explicit array of numbers or with a mathematical symbol such as T_{ijnklm} (where the number of indices represents the order of the tensor), that notation becomes very cumbersome once we start talking about high-order tensors. At that point it’s useful to start using diagrammatic notation, where one simply draws a circle (or some other shape) with a number of lines, or legs, coming out of it—the number of legs being the same as the *order* of the tensor. In this notation, a scalar is just a circle, a vector has a single leg, a matrix has two legs, etc. Each leg of the tensor also has a *dimension*, which is the size of that leg. For example, a vector representing an object’s velocity through space would be a three-dimensional, order-one tensor.
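As a rough illustration in plain NumPy (not the TensorNetwork API itself), the order of a tensor corresponds directly to the number of array axes, and each leg's dimension is the size along that axis:

```python
import numpy as np

# A tensor's order is its number of indices (legs in the diagrammatic
# notation); each leg's dimension is the size along that axis.
scalar = np.array(3.14)                # order 0: no legs
velocity = np.array([1.0, 2.0, 3.0])   # order 1, dimension 3: one leg
matrix = np.ones((4, 5))               # order 2: two legs, dims 4 and 5
t3 = np.zeros((2, 3, 4))               # order 3: three legs

for t in (scalar, velocity, matrix, t3):
    print(t.ndim, t.shape)
```

Here `ndim` plays the role of the tensor's order, and `shape` lists the dimension of each leg.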

*Diagrammatic notation for tensors.*

The benefit of representing tensors in this way is to succinctly encode mathematical operations, e.g., multiplying a matrix by a vector to produce another vector, or multiplying two vectors to make a scalar. These are all examples of a more general concept called *tensor contraction*.
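A minimal sketch of these three contractions, written with NumPy's `einsum` (an assumption for illustration; the TensorNetwork library provides its own node-and-edge API for the same operations). Repeated indices in the subscript string are summed over, i.e., contracted:

```python
import numpy as np

M = np.arange(6.0).reshape(2, 3)   # order-2 tensor: [[0,1,2],[3,4,5]]
v = np.array([1.0, 2.0, 3.0])      # order-1 tensors
w = np.array([4.0, 5.0, 6.0])

# Matrix-vector product: contract M's second leg with v -> order-1 result.
mv = np.einsum('ij,j->i', M, v)    # [8.0, 26.0]
# Inner product of two vectors: contract both legs -> order-0 (scalar).
dot = np.einsum('i,i->', v, w)     # 32.0
# Matrix trace: contract a matrix's two legs with each other -> scalar.
tr = np.einsum('ii->', np.eye(3) * 2.0)  # 6.0

print(mv, dot, tr)
```

In each case the number of dangling (uncontracted) legs on the left of the `->` gives the order of the result.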

*Diagrammatic notation for tensor contraction. Vector and matrix multiplication, as well as the matrix trace (i.e., the sum of the diagonal elements of a matrix), are all examples.*

These are also simple examples of *tensor networks*, which are graphical ways of encoding the pattern of tensor contractions of several constituent tensors to form a new one. Each constituent tensor has an order determined by its own number of legs. Legs that are connected, forming an edge in the diagram, represent contraction, while the number of remaining dangling legs determines the order of the resultant tensor.
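For instance, a small hypothetical network of three constituent tensors might be contracted like this (again sketched with `einsum` rather than the library's API; the tensor shapes are arbitrary):

```python
import numpy as np

# Three constituent tensors contracted in a chain.
A = np.random.rand(4, 5)       # legs i, j
B = np.random.rand(5, 6, 7)    # legs j, k, l
C = np.random.rand(6, 3)       # legs k, m

# Legs j and k are connected (edges in the diagram) and get contracted;
# the dangling legs i, l, m determine the order of the result.
T = np.einsum('ij,jkl,km->ilm', A, B, C)
print(T.shape)  # (4, 7, 3): three dangling legs -> an order-3 tensor
```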

While these examples are very simple, the tensor networks of interest often represent hundreds of tensors contracted in a variety of ways. Describing such a thing would be very obscure using traditional notation, which is why the diagrammatic notation was invented by Roger Penrose in 1971.

**Tensor Networks in Practice**

Consider a collection of black-and-white images, each of which can be thought of as a list of *N* pixel values. A single pixel of a single image can be one-hot-encoded into a two-dimensional vector, and by combining these pixel encodings together we can make a 2^{N}-dimensional one-hot encoding of the entire image. We can reshape that high-dimensional vector into an order-*N* tensor, and then add up all of the tensors in our collection of images to get a total tensor *T_{i1,i2,…,iN}* encapsulating the collection.
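The encoding of a single image can be sketched in a few lines of NumPy (a toy illustration, not the paper's implementation): each pixel becomes a 2-d one-hot vector, and outer products stitch them into one order-*N* tensor.

```python
import numpy as np

def image_to_tensor(pixels):
    """One-hot encode each black/white pixel as a 2-d vector, then combine
    the pixel encodings with outer products into an order-N one-hot tensor."""
    t = np.array(1.0)
    for p in pixels:
        onehot = np.array([1.0, 0.0]) if p == 0 else np.array([0.0, 1.0])
        t = np.tensordot(t, onehot, axes=0)   # outer product adds one leg
    return t

img = [0, 1, 1, 0]    # a tiny 4-pixel "image"
T = image_to_tensor(img)
print(T.shape)        # (2, 2, 2, 2): order N=4, with 2**N = 16 entries
print(T[0, 1, 1, 0])  # 1.0 -- the single nonzero entry of the one-hot tensor
```

The exponential blow-up is visible already here: the tensor has 2^N entries but only one of them is nonzero, which is exactly the redundancy tensor networks exploit.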

This sounds like a very wasteful thing to do: encoding images with about 50 pixels in this way would already take petabytes of memory. That’s where tensor networks come in. Rather than storing or manipulating the tensor *T* directly, we instead represent *T* as the contraction of many smaller constituent tensors in the shape of a tensor network. That turns out to be much more efficient. For instance, the popular matrix product state (MPS) network would write *T* in terms of *N* much smaller tensors, so that the total number of parameters is only linear in *N*, rather than exponential.
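A minimal MPS sketch in NumPy, under the common convention of one core per site with a "bond" leg shared between neighbors (the site count, physical dimension, and bond dimension below are arbitrary choices for illustration):

```python
import numpy as np

N, d, chi = 8, 2, 3   # sites, physical dimension, bond dimension

# An MPS stores T as N small cores of shape (bond, d, bond), with dummy
# bonds of size 1 at the boundaries, so the parameter count grows
# linearly in N instead of exponentially (d**N).
cores = [np.random.rand(1 if n == 0 else chi, d, 1 if n == N - 1 else chi)
         for n in range(N)]

mps_params = sum(c.size for c in cores)
dense_params = d ** N
print(mps_params, dense_params)   # 120 vs 256; the gap explodes as N grows

# Reconstruct the full tensor by contracting the shared bond legs in order.
T = cores[0]
for c in cores[1:]:
    T = np.tensordot(T, c, axes=([-1], [0]))
T = T.reshape((d,) * N)   # drop the two size-1 boundary bonds
print(T.shape)            # (2, 2, 2, 2, 2, 2, 2, 2)
```

At N = 8 the savings look modest, but the dense count is d^N while the MPS count is roughly N·d·chi², so for, say, N = 50 pixels the dense tensor is already astronomically larger than its MPS form.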

*The high-order tensor T is represented in terms of many low-order tensors in a matrix product state tensor network.*

It’s not obvious that large tensor networks can be efficiently created or manipulated while consistently avoiding the need for a huge amount of memory. But it turns out that this is possible in many cases, which is why tensor networks have been used extensively in quantum physics and, now, in machine learning. Stoudenmire and Schwab used the encoding just described to make an image classification model, demonstrating a new use for tensor networks. The TensorNetwork library is designed to facilitate exactly that kind of work, and our first paper describes how the library functions for general tensor network manipulations.

**Performance in Physics Use-Cases**

TensorNetwork is a general-purpose library for tensor network algorithms, and so it should prove useful for physicists as well. Approximating quantum states is a typical use-case for tensor networks in physics, and is well-suited to illustrate the capabilities of the TensorNetwork library. In our second paper, we describe a tree tensor network (TTN) algorithm for approximating the ground state of either a periodic quantum spin chain (1D) or a lattice model on a thin torus (2D), and implement the algorithm using TensorNetwork. We compare the use of CPUs with GPUs and observe significant computational speed-ups, up to a factor of 100, when using a GPU and the TensorNetwork library.

**Conclusion and Future Work**

These are the first in a series of planned papers to illustrate the power of TensorNetwork in real-world applications. In our next paper we will use TensorNetwork to classify images in the MNIST and Fashion-MNIST datasets. Future plans include time series analysis on the ML side, and quantum circuit simulation on the physics side. With the open source community, we are also always adding new features to TensorNetwork itself. We hope that TensorNetwork will become a valuable tool for physicists and machine learning practitioners.

**Acknowledgements**

*The TensorNetwork library was developed by Chase Roberts, Adam Zalcman, and Bruce Fontaine of Google AI; Ashley Milsted, Martin Ganahl, and Guifre Vidal of the Perimeter Institute; and Jack Hidary and Stefan Leichenauer of X. We’d also like to thank Stavros Efthymiou at X for valuable contributions.*

Unless otherwise stated, the content of this article is licensed under the Creative Commons Attribution 3.0 License, and code samples are licensed under the Apache 2.0 License. For more details, please see our Terms of Service.
