For nearly 200 years, the Zoological Society of London (ZSL) has pioneered new methods of discovery and conservation. We opened the world’s first reptile house and public aquarium, we discovered the now endangered okapi, and we founded laboratories for zoological research. Our vision is to create a world where wildlife thrives. Today, however, our planet’s biodiversity remains under threat.
Climate change, habitat loss, and exploitation are just some of the challenges wildlife faces. To combat these and other threats, conservation organizations are increasingly partnering with the private sector and international organizations to build new tools that protect at-risk species and foster healthy ecosystems. Modern technology, including satellites, can help monitor the health of animal populations and stop crimes against wildlife in real time. Another tool that supports animal conservation is the camera trap.
Camera traps consist of two key parts: a small camera and a motion sensor. The challenge is that their greatest strength is also a weakness: any motion can trigger the sensor. One camera takes an average of 60 images a day, so a deployment of 30 camera traps in a particular region can produce more than 300,000 captured images over a six-month period. Until recently, we processed each of these images individually, labeling them with the animal pictured and filtering out any false positives (such as an empty frame triggered by a wayward branch). This process is extremely time-consuming, often requiring many months of work. We knew there had to be a better way, so we began to investigate whether machine learning could help us process these images at scale.
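To make the triage step concrete, here is a minimal sketch of how model output could replace that manual filtering. Everything in it is illustrative: the `Prediction` record, the `"blank"` label, and the 0.7 threshold are assumptions, not part of ZSL's actual pipeline.

```python
from dataclasses import dataclass

# Hypothetical prediction record: one label and one confidence score per
# camera trap image, as an image classifier might return them.
@dataclass
class Prediction:
    image_id: str
    label: str
    confidence: float

def triage(predictions, threshold=0.7):
    """Split predictions into confident species detections and likely
    false positives (blank frames or low-confidence guesses that still
    need a human reviewer)."""
    keep, discard = [], []
    for p in predictions:
        if p.label == "blank" or p.confidence < threshold:
            discard.append(p)
        else:
            keep.append(p)
    return keep, discard

preds = [
    Prediction("img_001.jpg", "okapi", 0.94),
    Prediction("img_002.jpg", "blank", 0.88),    # branch-triggered empty frame
    Prediction("img_003.jpg", "leopard", 0.31),  # too uncertain to auto-label
]
keep, discard = triage(preds)
```

Even a simple confidence cutoff like this moves most of the work from "label every image" to "review the uncertain ones", which is where the months of manual effort were going.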
In 2017, we partnered with Google Cloud to test the new AutoML Vision functionality. Since then, we’ve developed custom machine learning models that can identify animal species from camera trap data and dramatically speed up large-scale analysis. In the future, this could help us better track species health and animal behavior in threatened regions around the world. For us, AutoML Vision has the potential to be game-changing. It requires no data-science training and automates time-consuming manual work, so we can focus on our core conservation efforts.
Working with Datatonic, a member of Google Cloud’s partner program, we’re creating a “model factory” that allows conservationists to use existing camera trap data to create image recognition models in Cloud AutoML. These models could then be shared with other conservation-focused organizations and applied to unlabeled datasets, saving hundreds of hours that would otherwise be spent on manual identification. We hope to make this model factory free and open source, so that the conservation community can collectively refine the models and improve their accuracy.
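The first step in a model factory like this is turning existing labeled images into a training manifest. AutoML Vision accepts a CSV of `gs://path,label` rows for training data; the sketch below builds one, assuming (hypothetically) that images are grouped into per-species folders and that the bucket name is a placeholder.

```python
import csv
import pathlib

# Assumed layout: labeled camera trap images grouped by species folder,
# e.g. data/okapi/img_001.jpg. The bucket name is a made-up placeholder.
BUCKET = "gs://example-camera-trap-data"

def build_manifest(root, out):
    """Write one 'gs://path,label' CSV row per image, using the folder
    name as the species label. Returns the number of rows written."""
    writer = csv.writer(out)
    rows = 0
    for path in sorted(pathlib.Path(root).glob("*/*.jpg")):
        label = path.parent.name  # folder name doubles as the label
        writer.writerow([f"{BUCKET}/{path.relative_to(root)}", label])
        rows += 1
    return rows
```

Because the manifest is just a CSV pointing at shared storage, the same labeled dataset can be reused by any organization that wants to train or refine its own model.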
We also plan to make the models available via APIs to be incorporated into applications such as Instant Detect, a proprietary camera trap monitoring system that won the Google Impact Award in 2014.
With the resources we’ve received through Google Cloud’s Data Solutions for Change program, we’re now able to explore more ways to scale our data analysis with BigQuery, automate our work with AutoML, and uncover meaningful insights that we hope will ensure wildlife health, save species from extinction, and foster relationships between wildlife and people.
In the words of Charles Darwin, ZSL Fellow: “In the long history of humankind (and animal kind, too) those who learned to collaborate and improvise most effectively have prevailed.” This collaboration marks a new chapter in conservation, one where private and public sector organizations come together to apply modern technology to one of the world’s greatest challenges: preserving biodiversity in the present day.
To learn more about Data Solutions for Change, or to apply if you’re an eligible nonprofit organization, visit the program website.