Deep learning to map concentrated animal feeding operations

We discuss how our research team developed an algorithm to map intensive livestock farms.

Our research idea was conceived after one of us (Ho) taught a module on livestock production and environmental sustainability for a Food Law and Policy class at Stanford. Agriculture is one of the leading sources of water pollution, and one of the most contested regulatory areas involves intensive livestock farms, known in the U.S. context as Concentrated Animal Feeding Operations (CAFOs). Dramatic evidence of environmental harm emerged when Hurricane Florence flooded much of North Carolina in September 2018, as seen in the photograph in Figure 1.

Figure 1: Flooding of poultry CAFO after Hurricane Florence in 2018.  Source: Rick Dove, Waterkeeper Alliance.

The stark fact is that the U.S. Environmental Protection Agency (EPA) lacks basic information about CAFOs, due in part to heavy litigation about EPA’s authority to permit CAFOs. Yet location information is critical to any effort by environmental interest groups and regulators to monitor and enforce clean water laws.

With Zoe Ashwood, then a research fellow at Stanford, we brainstormed ways to bridge the informational gap by leveraging rapid advances in “deep learning.” The key to any deep learning model is rich training data. Yet we quickly realized that, likely in part because of litigation, EPA’s CAFO data was outdated and would have required substantial novel data collection. Fortunately, the Environmental Working Group (EWG), in conjunction with Waterkeeper Alliance, had recently conducted an intensive manual enumeration of North Carolina. EWG generously shared their CAFO location data with us, and Descartes Labs’ geospatial platform made it easy to systematically generate image tiles of NAIP data to compare to the locations.
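To make the tiling step concrete, here is a minimal sketch, not the actual pipeline, of turning a facility's point coordinates into a square bounding box that an imagery platform could use to cut a tile. The tile size and the meters-to-degrees conversion are illustrative assumptions.

```python
import math

def tile_extent(lat, lon, half_side_m=250.0):
    """Return (min_lon, min_lat, max_lon, max_lat) for a square tile
    centered on a facility location. half_side_m is an assumed tile
    half-width in meters, chosen only for illustration."""
    dlat = half_side_m / 111_320.0  # approx. meters per degree of latitude
    # Longitude degrees shrink with latitude, so scale by cos(lat).
    dlon = half_side_m / (111_320.0 * math.cos(math.radians(lat)))
    return (lon - dlon, lat - dlat, lon + dlon, lat + dlat)

# Example: a hypothetical point in eastern North Carolina
bbox = tile_extent(35.0, -77.5)
```

A platform request for each known facility location would then fetch the NAIP imagery inside each such extent.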

Because EWG collected the location data about a year before our satellite images were taken, we assembled a team of undergraduate research assistants to help us validate the locations against the images. We built out a Shiny app that allowed multiple coders to tag imagery with relevant features (Figure 2).


Figure 2: Screenshot from customized CAFO image tagging web app.  Undergraduate team members were assigned batches of image tiles to code. 

In the early weeks, we double-coded images to develop a consensus protocol. After developing the protocol, one of the more rewarding moments came when we confirmed with EWG that we had independently arrived at a substantially similar protocol.

With a large training dataset in hand, we began to develop the model within Google’s TensorFlow framework. We worked with a team of students enrolled in an innovative course in Earth Sciences and Computer Science to prototype improved methods of searching for facilities. To reduce false positives, we fed the model images that look visually similar to CAFOs (e.g., stadium bleachers).
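The hard-negative idea above can be sketched as a simple dataset-assembly step, assuming tiles are referenced by file path and labels are binary (1 = CAFO). The function name and the tile lists are hypothetical, not from the paper's codebase.

```python
import random

def build_labeled_set(cafo_tiles, background_tiles, hard_negative_tiles, seed=0):
    """Return a shuffled list of (tile_path, label) pairs; label 1 = CAFO."""
    examples = [(t, 1) for t in cafo_tiles]
    # Hard negatives (e.g., stadium bleachers) get label 0 just like ordinary
    # background, but including them explicitly pushes the classifier to
    # separate CAFOs from visually confusing look-alike structures.
    examples += [(t, 0) for t in background_tiles + hard_negative_tiles]
    random.Random(seed).shuffle(examples)  # fixed seed for reproducibility
    return examples

# Tiny illustrative example with made-up tile identifiers
dataset = build_labeled_set(["cafo_01", "cafo_02"], ["field_01"], ["bleachers_01"])
```

In practice the resulting pairs would feed a TensorFlow input pipeline for training the classifier.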

Our results are very promising. Figure 3 depicts a swine CAFO and a poultry CAFO in the top row. The prototypical distinguishing feature of a swine CAFO is the red-colored “lagoon” storing manure. The bottom panels depict a heat map of how our model detects CAFOs, with higher red intensity corresponding to an area that “activates” the CAFO classification.
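Heat maps of this kind are commonly computed in the style of a class activation map: the final convolutional feature maps are weighted by the classifier weights for the "CAFO" output and summed. The sketch below shows that computation in numpy with synthetic stand-in shapes and weights; it is an assumption about the visualization technique, not the authors' exact code.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """feature_maps: (H, W, K) conv activations; class_weights: (K,) weights
    for the CAFO output unit. Returns an (H, W) heat map scaled to [0, 1]."""
    # Weighted sum over the K channels -> one spatial map of class evidence.
    cam = np.tensordot(feature_maps, class_weights, axes=([2], [0]))
    cam = np.maximum(cam, 0.0)  # keep only positive (activating) evidence
    if cam.max() > 0:
        cam = cam / cam.max()   # normalize for display as an intensity map
    return cam

# Synthetic activations and weights, just to show the shapes involved
rng = np.random.default_rng(0)
heat = class_activation_map(rng.random((8, 8, 16)), rng.random(16))
```

Overlaying such a map on the input tile highlights which image regions, such as a manure lagoon, drive the CAFO classification.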

Figure 3: Depiction of swine and poultry CAFOs in top row and illustration of what parts of the images activate model classification of a CAFO in bottom row. 

We believe many refinements and improvements can still be made, especially given rapid developments in image learning, and we are continuing work to scale the approach to other jurisdictions. Our paper takes a first step in demonstrating concretely how machine learning can lower the cost of environmental monitoring and enforcement.

The deep learning revolution has much to offer to environmental law and policy.  

By: Daniel E. Ho & Cassandra Handan-Nader
