Source: Learning Better Simulation Methods for Partial Differential Equations from Google Research

Posted by Stephan Hoyer, Software Engineer, Google Research

The world’s fastest supercomputers were designed for modeling physical phenomena, yet they still are not fast enough to robustly predict the impacts of climate change, to design controls for airplanes based on airflow or to accurately simulate a fusion reactor. All of these phenomena are modeled by partial differential equations (PDEs), the class of equations that describe everything smooth and continuous in the physical world, and the most common class of simulation problems in science and engineering. To solve these equations, we need faster simulations, but in recent years, Moore’s law has been slowing. At the same time, we’ve seen huge breakthroughs in machine learning (ML) along with faster hardware optimized for it. What does this new paradigm offer for scientific computing?

In “Learning Data Driven Discretizations for Partial Differential Equations”, published in Proceedings of the National Academy of Sciences, we explore a potential path for how ML can offer continued improvements in high-performance computing, both for solving PDEs and, more broadly, for solving hard computational problems in every area of science.

For most real-world problems, closed-form solutions to PDEs don’t exist. Instead, one must find discrete equations (“discretizations”) that a computer can solve to approximate the continuous PDE. Typical approaches to solve PDEs represent equations on a grid, e.g., using finite differences. To achieve convergence, the mesh spacing of the grid needs to be smaller than the smallest feature size of the solutions. This often isn’t feasible because of an unfortunate scaling law: achieving 10x higher resolution requires 10,000x more compute, because the grid must be scaled in four dimensions—three spatial dimensions and time. Instead, in our paper we show that ML can be used to learn better representations for PDEs on coarser grids.
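To make the discretization and scaling argument concrete, here is a minimal sketch (not code from the paper) of an explicit finite-difference step for the 1D heat equation. Its stability constraint ties the time step to the square of the grid spacing, which is one reason refining the grid is so expensive:

```python
import numpy as np

def heat_equation_step(u, dx, dt, nu=1.0):
    """One explicit finite-difference step of u_t = nu * u_xx on a periodic grid."""
    u_xx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    return u + dt * nu * u_xx

# Stability requires roughly dt <= dx**2 / (2 * nu), so halving dx forces
# quartering dt: 10x finer resolution in 1D already costs ~1000x more work,
# and in three spatial dimensions plus time the cost grows ~10,000x.
nx = 100
x = np.linspace(0, 1, nx, endpoint=False)
dx = x[1] - x[0]
dt = 0.4 * dx**2  # stability-limited time step
u = np.sin(2 * np.pi * x)
u = heat_equation_step(u, dx, dt)
```

The function and grid here are illustrative choices; any standard stencil exhibits the same resolution-versus-cost trade-off.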

Satellite photo of a hurricane, at both full resolution and simulated resolution in a state-of-the-art weather model. Cumulus clouds (e.g., in the red circle) are responsible for heavy rainfall, but in the weather model the details are entirely blurred out. Instead, models rely on crude approximations for sub-grid physics, a key source of uncertainty in climate models. Image credit: NOAA

The challenge is to retain the accuracy of high-resolution simulations while still using the coarsest grid possible. In our work we’re able to improve upon existing schemes by replacing heuristics based on deep human insight (e.g., “solutions to a PDE should always be smooth away from discontinuities”) with optimized rules based on machine learning. The rules our ML models recover are complex, and we don’t entirely understand them, but they incorporate sophisticated physical principles like the idea of “upwinding”—to accurately model what’s coming towards you in a fluid flow, you should look upstream in the direction the wind is coming from. An example of our results on a simple model of fluid dynamics is shown below:

Simulations of Burgers’ equation, a model for shock waves in fluids, solved with either a standard finite volume method (left) or our neural network based method (right). The orange squares represent simulations with each method on low resolution grids. These points are fed back into the model at each time step, which then predicts how they should change. Blue lines show the exact simulations used for training. The neural network solution is much better, even on a 4x coarser grid, as indicated by the orange squares smoothly tracing the blue line.
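The “upwinding” idea mentioned above can be sketched in a few lines. This is a hypothetical first-order scheme for linear advection, not the paper’s learned method:

```python
import numpy as np

def upwind_advection_step(u, c, dx, dt):
    """One first-order upwind step for u_t + c * u_x = 0 on a periodic grid.

    Information travels with the flow, so the spatial difference looks
    upstream: backward when the velocity c is positive, forward when negative.
    """
    if c > 0:
        du = u - np.roll(u, 1)   # upstream is to the left
    else:
        du = np.roll(u, -1) - u  # upstream is to the right
    return u - c * dt / dx * du

# Advect a Gaussian bump to the right with c = 1.
x = np.linspace(0, 1, 200, endpoint=False)
u = np.exp(-((x - 0.3) ** 2) / 0.002)
u_next = upwind_advection_step(u, c=1.0, dx=x[1] - x[0], dt=0.002)
```

The time step respects the CFL condition dt ≤ dx/|c|; looking downstream instead (the wrong direction) would make this scheme unstable, which is exactly the kind of physical structure the learned rules rediscover.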

Our research also illustrates a broader lesson about how to effectively combine machine learning and physics. Rather than attempting to learn physics from scratch, we combined neural networks with components from traditional simulation methods, including the known form of the equations we’re solving and finite volume methods. This means that laws such as conservation of momentum are exactly satisfied, by construction, and allows our machine learning models to focus on what they do best, learning optimal rules for interpolation in complex, high-dimensional spaces.
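To make the “conservation by construction” point concrete, here is a hypothetical sketch (the names and the toy flux are ours, not the paper’s code) of a flux-form finite-volume update. Because the same interface flux leaves one cell and enters its neighbor, the total integral of the field is conserved exactly, no matter how the fluxes are produced, whether by a classical scheme or a neural network:

```python
import numpy as np

def flux_form_step(u, flux_at_interfaces, dx, dt):
    """Finite-volume update u_i -= dt/dx * (F_{i+1/2} - F_{i-1/2}) on a periodic grid.

    F[i] is the flux at the right interface of cell i, so F shifted by one
    gives each cell's left-interface flux. The telescoping differences sum
    to zero, so the total of u is conserved by construction.
    """
    F = flux_at_interfaces(u)
    return u - dt / dx * (F - np.roll(F, 1))

def toy_flux(u):
    """Stand-in for a learned interpolation: Burgers' flux at cell values."""
    return 0.5 * u**2

x = np.linspace(-1, 1, 64, endpoint=False)
u = np.exp(-(x**2) / 0.1)
u_new = flux_form_step(u, toy_flux, dx=2 / 64, dt=0.01)
```

Only the interpolation inside `flux_at_interfaces` needs to be learned; the conservative update rule around it stays fixed.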

**Next Steps**

We are focused on scaling up the techniques outlined in our paper to solve larger scale simulation problems with real-world impacts, such as weather and climate prediction. We’re excited about the broad potential of blending machine learning into the complex algorithms of scientific computing.

**Acknowledgments**

*Thanks to co-authors Yohai Bar-Sinai, Jason Hickey and Michael Brenner; and Google collaborators Peyman Milanfar, Pascal Getreuer, Ignacio Garcia Dorado, Dmitrii Kochkov, Jiawei Zhuang and Anton Geraschenko.*

Unless otherwise noted, the content of this article is licensed under the Creative Commons Attribution 3.0 License, and code samples are licensed under the Apache 2.0 License. For details, see our Terms of Service.
