An Augmented Reality Microscope for Cancer Detection

2018-04-17 · admin · GoogleDevFeeds

Source: An Augmented Reality Microscope for Cancer Detection from Google Research

Posted by Martin Stumpe, Technical Lead and Craig Mermel, Product Manager, Google Brain Team

Applications of deep learning to medical disciplines including ophthalmology, dermatology, radiology, and pathology have recently shown great promise to increase both the accuracy and availability of high-quality healthcare to patients around the world. At Google, we have also published results showing that a convolutional neural network is able to detect breast cancer metastases in lymph nodes at a level of accuracy comparable to a trained pathologist. However, because direct tissue visualization using a compound light microscope remains the predominant means by which a pathologist diagnoses illness, a critical barrier to the widespread adoption of deep learning in pathology is the dependence on having a digital representation of the microscopic tissue.

Today, in a talk delivered at the Annual Meeting of the American Association for Cancer Research (AACR), with an accompanying paper “An Augmented Reality Microscope for Real-time Automated Detection of Cancer” (under review), we describe a prototype Augmented Reality Microscope (ARM) platform that we believe could help accelerate and democratize the adoption of deep learning tools for pathologists around the world. The platform consists of a modified light microscope that enables real-time image analysis and presentation of the results of machine learning algorithms directly into the field of view. Importantly, the ARM can be retrofitted into existing light microscopes found in hospitals and clinics around the world using low-cost, readily available components, and without the need for whole slide digital versions of the tissue being analyzed.

Modern computational components and deep learning frameworks such as TensorFlow allow a wide range of pre-trained models to run on this platform. As in a traditional analog microscope, the user views the sample through the eyepiece. A machine learning algorithm projects its output back into the optical path in real time. This digital projection is visually superimposed on the original (analog) image of the specimen to assist the viewer in localizing or quantifying features of interest. Importantly, the computation and visual feedback update quickly: our present implementation runs at approximately 10 frames per second, so the model output updates seamlessly as the user scans the tissue by moving the slide and/or changing magnification.
Left: Schematic overview of the ARM. A digital camera captures the same field of view (FoV) as the user and passes the image to an attached compute unit capable of running real-time inference of a machine learning model. The results are fed back into a custom AR display which is inline with the ocular lens and projects the model output on the same plane as the slide. Right: A picture of our prototype which has been retrofitted into a typical clinical-grade light microscope.
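
To make the real-time loop concrete, here is a minimal sketch in Python of the capture-and-inference cycle described above. It assumes a hypothetical Keras-format segmentation model ("tumor_model.h5") that maps an RGB field-of-view image to a per-pixel tumor probability map, and uses OpenCV for camera capture; the model file name, input size, and camera index are illustrative assumptions, not details of the published ARM system.

    import time
    import cv2
    import numpy as np
    import tensorflow as tf

    # Hypothetical segmentation model: RGB field of view -> per-pixel tumor probability.
    model = tf.keras.models.load_model("tumor_model.h5")
    camera = cv2.VideoCapture(0)  # digital camera sharing the user's field of view

    while True:
        ok, frame = camera.read()  # grab the current field of view
        if not ok:
            break
        start = time.time()
        inp = cv2.resize(frame, (512, 512)).astype(np.float32) / 255.0
        heatmap = model.predict(inp[np.newaxis])[0, ..., 0]  # tumor probability map
        fps = 1.0 / (time.time() - start)
        # In the actual device the model output is projected back into the optical
        # path; here we only report the loop rate (the prototype runs at ~10 fps).
        print(f"inference rate: {fps:.1f} fps")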

In principle, the ARM can provide a wide variety of visual feedback, including text, arrows, contours, heatmaps, or animations, and is capable of running many types of machine learning algorithms aimed at solving different problems such as object detection, quantification, or classification.
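
As an illustration of how such feedback might be rendered in software, the following sketch shows two of the feedback styles mentioned above: a false-color heatmap blended over the analog image, and a simple text label for a classification result. The helper functions are hypothetical and assume OpenCV and a probability map with values in [0, 1].

    import cv2
    import numpy as np

    def overlay_heatmap(frame, heatmap, alpha=0.4):
        # Blend a [0, 1] probability map over the analog image as a false-color heatmap.
        hm = cv2.resize((heatmap * 255).astype(np.uint8),
                        (frame.shape[1], frame.shape[0]))
        colored = cv2.applyColorMap(hm, cv2.COLORMAP_JET)
        return cv2.addWeighted(frame, 1 - alpha, colored, alpha, 0)

    def overlay_text(frame, label):
        # Render a classification result (e.g. a tumor probability) as text in the view.
        out = frame.copy()
        cv2.putText(out, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
        return out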

As a demonstration of the potential utility of the ARM, we configured it to run two different cancer detection algorithms: one that detects breast cancer metastases in lymph node specimens, and another that detects prostate cancer in prostatectomy specimens. These models can run at magnifications from 4x to 40x, and the result of a given model is displayed by outlining detected tumor regions with a green contour. These contours help draw the pathologist’s attention to areas of interest without obscuring the appearance of the underlying tumor cells.

Example view through the lens of the ARM. These images show examples of the lymph node metastasis model with 4x, 10x, 20x, and 40x microscope objectives.
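
A minimal sketch of how the green contours described above could be produced from a model’s probability map, assuming OpenCV 4 and a 0.5 threshold (both illustrative choices, not details from the paper):

    import cv2
    import numpy as np

    def outline_tumor(frame, heatmap, threshold=0.5):
        # Threshold the tumor probability map and trace the resulting regions.
        mask = (heatmap > threshold).astype(np.uint8)
        mask = cv2.resize(mask, (frame.shape[1], frame.shape[0]),
                          interpolation=cv2.INTER_NEAREST)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        out = frame.copy()
        cv2.drawContours(out, contours, -1, (0, 255, 0), 2)  # thin green outlines
        return out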

While both cancer models were originally trained on images from a whole slide scanner with a significantly different optical configuration, the models performed remarkably well on the ARM with no additional re-training. For example, when run on the ARM, the lymph node metastasis model had an area under the curve (AUC) of 0.98 and the prostate cancer model had an AUC of 0.96 for cancer detection in the field of view (FoV), only slightly lower than the performance obtained on whole slide images (WSI). We believe the performance of these models can likely be further improved by additional training on digital images captured directly from the ARM itself.
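
For context, the AUC figures above are field-of-view-level metrics: each FoV receives a binary reference label (tumor present or absent) and a model score, and the area under the ROC curve is computed over those pairs. A small illustration with purely made-up values, using scikit-learn (the actual evaluation pipeline is not described here):

    from sklearn.metrics import roc_auc_score

    # Illustrative values only: 1 = tumor present in the FoV, 0 = absent,
    # paired with the model's tumor score for that FoV.
    y_true = [1, 0, 1, 1, 0, 0, 1, 0]
    y_score = [0.92, 0.10, 0.81, 0.67, 0.30, 0.05, 0.88, 0.45]
    print(f"FoV-level AUC: {roc_auc_score(y_true, y_score):.2f}")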

We believe the ARM has the potential for a large impact on global health, particularly for the diagnosis of infectious diseases, including tuberculosis and malaria, in developing countries. Furthermore, even in hospitals that will adopt a digital pathology workflow in the near future, the ARM could be used in combination with that workflow in cases where scanners still face major challenges or where rapid turnaround is required (e.g., cytology, fluorescent imaging, or intra-operative frozen sections). Of course, light microscopes have proven useful in many industries other than pathology, and we believe the ARM can be adapted for a broad range of applications across healthcare, life sciences research, and materials science. We’re excited to continue exploring how the ARM can help accelerate the adoption of machine learning for positive impact around the world.

Unless otherwise noted, the content of this article is licensed under the Creative Commons Attribution 3.0 License, and code samples are licensed under the Apache 2.0 License. For more details, see our Terms of Service.

Tags: Develop
