Posted by Maya Gupta, Research Scientist, Jan Pfeifer, Software Engineer, and Seungil You, Software Engineer

*(Cross-posted on the Google Open Source Blog)*

Machine learning has made huge advances in many applications including natural language processing, computer vision and recommendation systems by capturing complex input/output relationships using highly flexible models. However, a remaining challenge is problems with semantically meaningful inputs that obey known global relationships, like “the estimated time to drive a road goes up if traffic is heavier, and all else is the same.” Flexible models like DNNs and random forests may not learn these relationships, and then may fail to generalize well to examples drawn from a different sampling distribution than the examples the model was trained on.

Today we present TensorFlow Lattice, a set of prebuilt TensorFlow Estimators that are easy to use, along with TensorFlow operators to build your own lattice models. Lattices are multi-dimensional interpolated look-up tables (for more details, see [1–5]), similar to the look-up tables in the back of a geometry textbook that approximate a sine function. We take advantage of the look-up table's structure, which can be keyed by multiple inputs to approximate an arbitrarily flexible relationship, to satisfy monotonic relationships that you specify in order to generalize better. That is, the look-up table values are trained to minimize the loss on the training examples, but in addition, adjacent values in the look-up table are constrained to increase along given directions of the input space, which makes the model outputs increase in those directions. Importantly, because they interpolate between the look-up table values, the lattice models are smooth and the predictions are bounded, which helps to avoid spurious large or small predictions at test time.
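To make the look-up table idea concrete, here is a minimal NumPy sketch (not the library's actual implementation) of a 2×2 lattice: the four corner values are the trained parameters, and predictions come from bilinear interpolation between them, so outputs are always bounded by the table's minimum and maximum values.

```python
import numpy as np

# Hypothetical trained 2x2 lattice (look-up table) over two inputs
# scaled to [0, 1]; rows index feature x0, columns index feature x1.
# Values increase along both axes, i.e. the function is monotonic
# in both inputs.
lattice = np.array([[0.0, 0.4],
                    [0.5, 1.0]])

def lattice_interpolate(x0, x1, table):
    """Bilinear interpolation between the four corner values of a 2-d lattice."""
    v00, v01 = table[0]
    v10, v11 = table[1]
    return ((1 - x0) * (1 - x1) * v00 + (1 - x0) * x1 * v01
            + x0 * (1 - x1) * v10 + x0 * x1 * v11)

# Interpolation keeps the prediction between min and max corner values,
# and monotonic corner values make the output monotonic in each input.
print(lattice_interpolate(0.5, 0.5, lattice))  # 0.475
```

Deeper or higher-dimensional lattices generalize this to multilinear interpolation over a grid with one dimension per input feature.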

Suppose you are designing a system to recommend nearby coffee shops to a user. You would like the model to learn, “if two cafes are the same, prefer the closer one.” Below we show a flexible model (pink) that accurately fits some training data for users in Tokyo (purple), where there are many coffee shops nearby. The pink flexible model overfits the noisy training examples, and misses the overall trend that a closer cafe is better. If you used this pink model to rank test examples from Texas (blue), where businesses are spread farther out, you would find it acted strangely, sometimes preferring farther cafes!

*A monotonic flexible function (green) is accurate on the training examples and generalizes well to the Texas examples, unlike the non-monotonic flexible function (pink) from the previous figure.*

In contrast, a lattice model trained on the same examples from Tokyo can be constrained to satisfy such a monotonic relationship, producing a monotonic flexible function (green). The green line accurately fits the Tokyo training examples, and also generalizes well to Texas, never preferring farther cafes.

In general, you might have many inputs about each cafe, e.g., coffee quality, price, etc. Flexible models have a hard time capturing global relationships of the form, “if all other inputs are equal, nearer is better,” especially in parts of the feature space where your training data is sparse and noisy. Machine learning models that capture prior knowledge (e.g., how inputs should impact the prediction) work better in practice, and are easier to debug and more interpretable.

**Pre-built Estimators**

We provide a range of lattice model architectures as TensorFlow Estimators. The simplest estimator we provide is the *calibrated linear model*, which learns the best 1-d transformation of each feature (using 1-d lattices), and then combines all the calibrated features linearly. This works well if the training dataset is very small, or there are no complex nonlinear input interactions. Another estimator is a *calibrated lattice model*. This model combines the calibrated features nonlinearly using a two-layer single lattice model, which can represent complex nonlinear interactions in your dataset. The calibrated lattice model is usually a good choice if you have 2–10 features, but for 10 or more features, we expect you will get the best results with an ensemble of calibrated lattices, which you can train using the pre-built ensemble architectures. Monotonic lattice ensembles can achieve a 0.3%–0.5% accuracy gain compared to random forests [4], and these new TensorFlow Lattice estimators can achieve a 0.1%–0.4% accuracy gain compared to the prior state of the art in learning models with monotonicity [5].
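To illustrate what a calibrated linear model computes, here is a conceptual NumPy sketch (the estimator's real API and learned parameters are not shown; the keypoints and weights below are invented for illustration). Each feature passes through a learned piecewise-linear 1-d calibration, and the calibrated values are combined linearly:

```python
import numpy as np

# Hypothetical learned 1-d calibrators: keypoint inputs and outputs
# per feature. np.interp applies the piecewise-linear calibration.
distance_kp_in  = np.array([0.0, 1.0, 5.0, 10.0])  # distance in km
distance_kp_out = np.array([1.0, 0.8, 0.2, 0.0])   # decreasing: farther is worse
quality_kp_in   = np.array([1.0, 3.0, 5.0])        # star rating
quality_kp_out  = np.array([0.0, 0.5, 1.0])        # increasing: better is better

def calibrated_linear(distance_km, stars, weights=(0.6, 0.4)):
    """Calibrate each feature with its 1-d look-up table, then combine linearly."""
    c_dist = np.interp(distance_km, distance_kp_in, distance_kp_out)
    c_qual = np.interp(stars, quality_kp_in, quality_kp_out)
    return weights[0] * c_dist + weights[1] * c_qual

# A nearby 4-star cafe scores higher than a distant one with the same rating.
print(calibrated_linear(0.5, 4.0), calibrated_linear(8.0, 4.0))
```

The calibrated lattice model replaces the final linear combination with a multi-dimensional lattice, which lets the calibrated features interact nonlinearly.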

**Build Your Own**

You may want to experiment with deeper lattice networks or research using partial monotonic functions as part of a deep neural network or other TensorFlow architecture. We provide the building blocks: TensorFlow operators for calibrators, lattice interpolation, and monotonicity projections. For example, the figure below shows a 9-layer deep lattice network [5].
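As a sketch of what a monotonicity projection does (this is a standard pool-adjacent-violators projection for a 1-d sequence of parameters, not the library's operator, whose exact form may differ): after a gradient step, the parameters are projected back onto the set of non-decreasing sequences, so the learned calibrator stays monotonic throughout training.

```python
import numpy as np

def project_nondecreasing(values):
    """Euclidean projection of a sequence onto non-decreasing sequences,
    via the pool-adjacent-violators algorithm."""
    # Each block holds (sum, count); merge blocks while their means
    # violate the non-decreasing order, then expand means back out.
    blocks = []
    for x in map(float, values):
        blocks.append([x, 1])
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)
    return np.array(out)

# Parameters after a gradient step that broke monotonicity at index 2:
params = np.array([0.1, 0.5, 0.3, 0.9])
print(project_nondecreasing(params))  # [0.1, 0.4, 0.4, 0.9]
```

Alternating gradient steps with such projections is one common way to enforce monotonicity constraints during training.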

In addition to the choice of model flexibility and standard L1 and L2 regularization, we offer new regularizers with TensorFlow Lattice:

- Monotonicity constraints [3] on your choice of inputs as described above.
- Laplacian regularization [3] on the lattices to make the learned function flatter.
- Torsion regularization [3] to suppress unnecessary nonlinear feature interactions.
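To give a rough sense of what these regularizers penalize, here is a conceptual NumPy sketch on a 2-d lattice (the library's exact formulations may differ): Laplacian regularization penalizes differences between adjacent look-up table values, and torsion regularization penalizes mixed second differences, which measure how much the two features interact.

```python
import numpy as np

def laplacian_reg(table):
    """Sum of squared differences between adjacent lattice values:
    small when the learned function is flat."""
    d_rows = np.diff(table, axis=0)
    d_cols = np.diff(table, axis=1)
    return float(np.sum(d_rows ** 2) + np.sum(d_cols ** 2))

def torsion_reg(table):
    """Sum of squared mixed second differences: small when the function
    is additive, i.e. the features do not interact nonlinearly."""
    mixed = np.diff(np.diff(table, axis=0), axis=1)
    return float(np.sum(mixed ** 2))

additive = np.array([[0.0, 1.0],   # depends only on the second feature:
                     [0.0, 1.0]])  # zero torsion
twisted  = np.array([[0.0, 0.0],   # output rises only when both features
                     [0.0, 1.0]])  # are high: strong interaction
print(torsion_reg(additive), torsion_reg(twisted))  # 0.0 1.0
```

Adding such penalties to the training loss trades a little training accuracy for a flatter, more nearly additive function.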

We hope TensorFlow Lattice will be useful to the larger community working with meaningful semantic inputs. This is part of a larger research effort on interpretability and controlling machine learning models to satisfy policy goals, and enable practitioners to take advantage of their prior knowledge. We’re excited to share this with all of you. To get started, please check out our GitHub repository and our tutorials, and let us know what you think!

**Acknowledgements**

*Developing and open sourcing TensorFlow Lattice was a huge team effort. We’d like to thank all the people involved: Andrew Cotter, Kevin Canini, David Ding, Mahdi Milani Fard, Yifei Feng, Josh Gordon, Kiril Gorovoy, Clemens Mewald, Taman Narayan, Alexandre Passos, Christine Robson, Serena Wang, Martin Wicke, Jarek Wilkiewicz, Sen Zhao, Tao Zhu*

**References**

[1] Lattice Regression, *Eric Garcia, Maya Gupta, Advances in Neural Information Processing Systems (NIPS), 2009*

[2] Optimized Regression for Efficient Function Evaluation, *Eric Garcia, Raman Arora, Maya R. Gupta, IEEE Transactions on Image Processing, 2012*

[3] Monotonic Calibrated Interpolated Look-Up Tables, *Maya Gupta, Andrew Cotter, Jan Pfeifer, Konstantin Voevodski, Kevin Canini, Alexander Mangylov, Wojciech Moczydlowski, Alexander van Esbroeck, Journal of Machine Learning Research (JMLR), 2016*

[4] Fast and Flexible Monotonic Functions with Ensembles of Lattices, *Mahdi Milani Fard, Kevin Canini, Andrew Cotter, Jan Pfeifer, Maya Gupta, Advances in Neural Information Processing Systems (NIPS), 2016*

[5] Deep Lattice Networks and Partial Monotonic Functions, *Seungil You, David Ding, Kevin Canini, Jan Pfeifer, Maya R. Gupta, Advances in Neural Information Processing Systems (NIPS), 2017*

Source: TensorFlow Lattice: Flexibility Empowered by Prior Knowledge
