Can artificial intelligence aid conservation mapping?

Microsoft is betting it can speed up this labour-intensive process with modern tools

Geospatial analyst Chigo Ibeh reviews a land-cover map of Baltimore at the office of the Chesapeake Conservancy in Annapolis, Maryland.

Thomson Reuters Foundation – In December 2016, environmental group Chesapeake Conservancy unveiled one of the largest high-resolution land-cover maps ever made in the United States.

It drew on satellite data to classify every square metre of the 207 cities and counties that touch the watershed of the Chesapeake Bay on the U.S. eastern seaboard.

The bay, North America’s biggest estuary, has struggled to recover from overfishing and pollution, and the conservancy hopes the map will guide environmental restoration decisions, such as where to plant stormwater-absorbing trees.

Creating a 100,000-square-mile (259,000-square-kilometre) digital map that defined land use — water, vegetation or concrete — at such a fine scale was “gruelling,” said project director Jeff Allenby.

“(It was) day after day of having staff process and correct the tiles,” he said.

First a computer analyzed almost 80,000 image tiles, each digitally recording about 13 square miles of landscape. Then a team of 30 people spent 10 months correcting the errors.

For land managers, better and more detailed maps help to improve how resources are conserved, Allenby said.

That painstaking process is set to get much quicker. U.S. software giant Microsoft hopes using artificial intelligence and machine learning will speed up the labour-intensive approach used by Chesapeake Conservancy.

“Your brain has an algorithm that’s been trained to (identify images): ‘That’s a tree, that’s a car, that’s a boat,’” said Microsoft chief environmental officer Lucas Joppa.

But, he said, the volume of such data flowing in on a daily basis from around the world precludes such human analysis.

“What we need to be able to do is train computers to do that.”

Warp-speed processing

Chesapeake Conservancy, which specializes in using high tech to make environmental restoration quicker and more efficient, said more detailed land-cover maps that were updated more frequently would help conservationists.

Before it started this map-making project, the best available map dated from 2011 and had a 30-metre resolution. The new map, based on 2016 satellite imagery, has a one-metre resolution, making it 900 times more detailed.

At a 30-metre resolution, a single pixel covers about 0.25 acre (0.1 hectare), which could identify a farmer’s field, for example.

But at a one-metre resolution, a single pixel can identify an individual crop, said Jim Levitt, co-founder of the International Land Conservation Network.
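The arithmetic behind those resolution figures is easy to verify. A quick back-of-the-envelope sketch, using only the pixel sizes quoted above:

```python
# Back-of-the-envelope check of the resolution figures quoted above.
coarse = 30  # pixel edge in metres (the 2011 dataset)
fine = 1     # pixel edge in metres (the 2016 dataset)

coarse_area_m2 = coarse ** 2                  # 900 square metres per pixel
coarse_area_acres = coarse_area_m2 / 4046.86  # roughly the quoted 0.25 acre
detail_factor = (coarse // fine) ** 2         # 900 new pixels per old pixel
```

Each 30-metre pixel of the old map is replaced by a 30-by-30 grid of one-metre pixels, which is where the factor of 900 comes from.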

“If you have a one-square-metre resolution you can deploy vegetative buffers along streams at exactly the place where the water will flow in a storm,” he said, describing a scenario of rainfall causing polluted run-off from a field that drains into a stream.

“If you are able to do precision conservation and that allows you to measurably reduce the amount of nutrients that flow into a larger river system, then you have a much better chance of taking care of these very large — and to date intractable — problems.”

Chesapeake Conservancy’s Allenby said being able to measure performance in that way “is a game changer, because it allows you to make more effective decisions about where to invest your resources.”

It is, he added, “a fundamentally different way of thinking about land management.”

Since Chesapeake Conservancy produced its map, Microsoft has been processing land-cover images at much higher speeds as part of its AI for Earth initiative.

Earlier this year, its deep-learning initiative, Project Brainwave, began using a specialized computer chip called a field-programmable gate array (FPGA).

Working with Microsoft’s cloud service, it processed nearly 200 million satellite images in only 10 minutes, producing a draft land-cover map of the entire United States at a cost of $42.

For Allenby, such speeds could scale the Chesapeake project nationally — something that is impossible with its current approach.

“We would need a warehouse of workers going 24-7 to turn out a national dataset in any reasonable amount of time,” he told the Thomson Reuters Foundation via phone from Maryland.

Getting smarter

Being able to generate high-resolution land-cover maps fast means conservation groups can measure change more frequently, and home in on places where, for example, forests are quickly becoming housing.

Since the Chesapeake Conservancy started working with Microsoft in February 2016, it has used the company’s processing power to produce similar land-cover maps for decision makers in Iowa and Arizona. It is now planning to map the Great Lakes region.

However, accuracy remains a challenge, said Allenby.

For example, the machine-learning algorithm, trained on the blue and green tones of the Chesapeake watershed, could not handle the beige and sandstone colours that dominate the arid landscape of Pima County, Arizona.
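The failure Allenby describes is a classic case of domain shift: a model fit to one region's colour palette has no sensible class for another's. A toy nearest-centroid sketch (all colours and class centroids invented for illustration, not the conservancy's actual model) shows how a desert pixel gets forced into a Chesapeake class:

```python
# Toy illustration of the domain-shift problem: a classifier fit to
# blue/green Chesapeake tones has no good answer for desert beige.
# All RGB values below are invented for illustration.

centroids = {                    # assumed mean colours of training classes
    "water":      (40, 90, 160),
    "vegetation": (60, 140, 70),
}

def classify(pixel):
    # Nearest-centroid rule: pick the class whose mean colour is closest
    # in squared RGB distance to the input pixel.
    return min(
        centroids,
        key=lambda c: sum((p - q) ** 2 for p, q in zip(pixel, centroids[c])),
    )

beige_desert = (205, 180, 140)
label = classify(beige_desert)  # forced into a Chesapeake class, likely wrong
```

Because the classifier can only answer with classes it was trained on, the desert pixel is mislabelled rather than flagged as unfamiliar, which is why retraining on local imagery (and ground truthing) remains necessary.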

“Remote sensing is never a direct replacement for actual boots on the ground. What it can do is help direct you to areas you need to focus on,” Allenby said.

Despite its shortcomings, the approach has helped the Trust for Public Land, a San Francisco-based non-profit that advocates for parks.

It uses the technology to prioritize where to build parks among 98,000 locations in the country that the trust has determined are underserved by green space.

The organization cross-referenced the areas it had identified as lacking parks with data measuring the “urban heat island effect”, whereby cities are often several degrees warmer than nearby rural areas due to heat trapped by dark-coloured roads and buildings.

The purpose was to see where parks could not only fill an open space need, but also reduce temperatures, said Pete Aniello, the group’s geographic information specialist.

“To process a heat island attribute from 100 gigabytes of data and provide that attribute to 98,000 points is a huge data-processing problem that would be barely possible — or impossible — without the Microsoft data science machine,” said Aniello.
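Conceptually, attaching a heat-island attribute to each candidate site is a point-in-raster lookup repeated 98,000 times, followed by a ranking. A minimal sketch, with an invented raster and invented site coordinates (a real workflow would use a geospatial library and projected coordinates):

```python
# Hypothetical sketch of the point-in-raster lookup described above:
# attach an urban-heat value to each candidate park site, then rank sites.
# The raster values and sites are invented for illustration.

CELL = 30  # metres per raster cell, an assumed resolution

heat = [   # toy "urban heat island" raster: degrees C above rural baseline
    [1.2, 3.4, 2.1],
    [0.5, 4.0, 2.8],
]

sites = {  # candidate park sites, (x, y) in metres from the raster origin
    "site_a": (10, 40),
    "site_b": (70, 20),
    "site_c": (40, 55),
}

def heat_at(x, y):
    # Map a ground coordinate to its raster cell and read the value.
    col, row = int(x // CELL), int(y // CELL)
    return heat[row][col]

# Hottest candidate sites first: these would be prioritized for new parks.
ranked = sorted(sites, key=lambda s: heat_at(*sites[s]), reverse=True)
```

The lookup itself is cheap; the scale Aniello describes (100 gigabytes of raster data against 98,000 points) is what pushes the job onto cloud infrastructure.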

Key questions

Microsoft’s Joppa said he was hopeful artificial intelligence could serve a greater good. After all, he said, the technology industry had expended vast efforts training computers to recognize celebrities, faces and the spoken word.

“We just haven’t focused our skill and our focus on some of these bigger societal challenges,” he said at Microsoft’s Redmond, Washington headquarters.

He said accurate land maps would prove vital in answering key questions for land managers.

“We don’t know what is where, we don’t know how much is there, and we don’t know how fast it’s changing,” he said.

“Yet those three questions are at the heart of some of our most contentious land use and economic development conversations in the country,” he said.

Land experts like George McCarthy, head of the Lincoln Institute of Land Policy, welcomed the technological advances behind high-resolution land-cover maps.

But he cautioned that such tools, created from public data, should not serve private interests in the way that companies have curated and sold information about publicly available land records.

“There is a giant interest among the tech companies to figure out how they can package information and then monetize it,” he said.

“Sooner or later, we should be ready and concerned about any efforts to privatize the data commons.”

About the author

Gregory Scruggs


