Source: The NeurIPS 2018 Test of Time Award: The Trade-Offs of Large Scale Learning from Google Research

Posted by Anna Ukhanova, Program Manager, Google AI Zürich

Progress in machine learning (ML) is happening so rapidly that it can sometimes feel like any idea or algorithm more than 2 years old is already outdated or superseded by something better. However, old ideas sometimes remain relevant even when a large fraction of the scientific community has turned away from them. This is often a question of context: an idea which may seem to be a dead end in a particular context may become wildly successful in a different one. In the specific case of deep learning (DL), the growth of both the availability of data and computing power renewed interest in the area and significantly influenced research directions.

The NIPS 2008 paper “The Trade-Offs of Large Scale Learning” by Léon Bottou (then at NEC Labs, now at Facebook AI Research) and Olivier Bousquet (Google AI, Zürich) is a good example of this phenomenon. Recipient of the NeurIPS 2018 Test of Time Award, this seminal work investigated the interplay between data and computation in ML, showing that if one is limited by computing power but can make use of a large dataset, it is more efficient to perform a small amount of computation on many individual training examples rather than to perform extensive computation on a subset of the data. This demonstrated the power of an old algorithm, stochastic gradient descent, which is nowadays used in nearly all applications of DL.

**Optimization and the Challenge of Scale**

Many ML algorithms can be thought of as the combination of two main ingredients:

- A model, which is a set of possible functions that will be used to fit the data.
- An optimization algorithm, which specifies how to find the best function in that set.

Back in the 1990s the datasets used in ML were much smaller than the ones in use today, and while artificial neural networks had already led to some successes, they were considered hard to train. In the early 2000s, with the introduction of Kernel Machines (SVMs in particular), neural networks went out of fashion. Simultaneously, the attention shifted away from the optimization algorithms that had been used to train neural networks (stochastic gradient descent) to focus on those used for kernel machines (quadratic programming). One important difference is that in the former case, training examples are used one at a time to perform gradient steps (this is called “stochastic”), while in the latter case, all training examples are used at each iteration (this is called “batch”).
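The stochastic/batch distinction can be made concrete on a toy problem. The sketch below (a hypothetical illustration, not code from the paper) contrasts the two kinds of gradient step on least-squares linear regression; all names and the synthetic data are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear regression data: y = X @ w_true + noise (illustrative only).
n, d = 1000, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def batch_gradient_step(w, lr=0.1):
    # "Batch": the gradient is computed over all n training examples,
    # so each step touches the entire dataset.
    grad = X.T @ (X @ w - y) / n
    return w - lr * grad

def stochastic_gradient_step(w, i, lr=0.01):
    # "Stochastic": the gradient is computed from a single example i,
    # so each step is n times cheaper than a batch step.
    grad = X[i] * (X[i] @ w - y[i])
    return w - lr * grad

w = np.zeros(d)
for i in rng.integers(0, n, size=2000):  # 2000 cheap single-example steps
    w = stochastic_gradient_step(w, i)
```

Each stochastic step is a noisy estimate of the batch gradient, but for the cost of two batch steps the loop above performs 2000 updates and ends up close to `w_true`.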

As the size of the training sets increased, the efficiency of optimization algorithms to handle large amounts of data became a bottleneck. For example, in the case of quadratic programming, running time scales at least quadratically in the number of examples. In other words, if you double your training set size, your training will take at least 4 times longer. Hence, lots of effort went into trying to make these algorithms scale to larger training sets (see for example *Large Scale Kernel Machines*).

People who had experience with training neural networks knew that stochastic gradient descent was comparatively easy to scale to large datasets, but unfortunately its convergence is very slow (it takes lots of iterations to reach an accuracy comparable to that of a batch algorithm), so it wasn’t clear that this would be a solution to the scaling problem.

**Stochastic Algorithms Scale Better**

In the context of ML, the number of iterations needed to optimize the cost function is actually not the main concern: there is no point in optimizing your model to high precision on the training data, since you would essentially “overfit” to it. So why not reduce the computational effort that you put into tuning the model and instead spend the effort processing more data?

The work of Léon and Olivier provided a formal study of this phenomenon: by considering access to a large amount of data and assuming the limiting factor is computation, they showed that it is better to perform a minimal amount of computation on each individual training example (thus processing more of them) rather than performing extensive computation on a smaller amount of data.

In doing so, they also demonstrated that among various possible optimization algorithms, stochastic gradient descent is the best. This was confirmed by many experiments and led to a renewed interest in online optimization algorithms which are now in extensive use in ML.

**Mysteries Remain**

In the following years, many variants of stochastic gradient descent were developed both in the convex case and in the non-convex one (particularly relevant for DL). The most common variant now is the so-called “mini-batch” SGD where one considers a small number (~10-100) of training examples at each iteration, and performs several passes over the training set, with a couple of clever tricks to scale the gradient appropriately. Most ML libraries provide a default implementation of such an algorithm and it is arguably one of the pillars of DL.
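A minimal sketch of that default mini-batch SGD loop, again on a toy least-squares problem (the data, hyperparameters, and variable names are assumptions for illustration, not any particular library's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear regression data, as a stand-in for a real training set.
n, d, batch_size = 1000, 5, 32
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(d)
lr = 0.1
for epoch in range(5):                 # several passes over the training set
    perm = rng.permutation(n)          # reshuffle the examples each pass
    for start in range(0, n, batch_size):
        idx = perm[start:start + batch_size]
        Xb, yb = X[idx], y[idx]
        # Average the gradient over the mini-batch so the effective
        # step size does not depend on the batch size.
        grad = Xb.T @ (Xb @ w - yb) / len(idx)
        w -= lr * grad
```

Averaging over the mini-batch is one of the "clever tricks to scale the gradient" mentioned above: it keeps the update magnitude comparable whether the batch holds 10 examples or 100.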

While this analysis provided a solid foundation for understanding the properties of this algorithm, the amazing and sometimes surprising successes of DL continue to raise many more questions for the scientific community. In particular, the role of this algorithm in the generalization properties of deep networks has been repeatedly demonstrated but is still poorly understood. This means that a lot of fascinating questions are yet to be explored which could lead to a better understanding of the algorithms currently in use and the development of even more efficient algorithms in the future.

The perspective proposed by Léon and Olivier in their collaboration 10 years ago provided a significant boost to the development of the algorithm that is nowadays the workhorse of ML systems that benefit our lives daily, and we offer our sincere congratulations to both authors on this well-deserved award.

Unless otherwise noted, the content of this article is licensed under the Creative Commons Attribution 3.0 License, and code samples are licensed under the Apache 2.0 License. For details, see our Terms of Service.
