Ethical Machine Learning for Disaster Relief: Rage for Machine Learning

You may not know it, but you’re living in the middle of a revolution. Supervised learning allows engineers to develop models that, given the right training data, effectively teach themselves. In turn, these models are helping solve crisis management problems before disaster strikes. Autodidactic algorithms are remarkable, as confounding as they are astounding. How are disaster relief providers balancing machine learning’s inscrutability with its capacity to provide novel solutions to ancient problems? Can we translate our enthusiasm into ethical machine learning? If so, how?

An infographic showing the relationship between Artificial Intelligence, Machine Learning, and Deep Learning as three increasingly small nested circles.
Welcome to the revolution! (Source: Wikipedia)

Part one of this blog series addressed the ethical implications of using machine learning in the immediate aftermath of an emergency. In this post, we discuss principled approaches to exploiting its prescience. How can machine learning best inform policies and decision-making before the deluge? 

Forecasting need

Technologists have long modeled data to harness machine learning for disaster relief. After the Chernobyl crisis, scientists analyzed satellite imagery and weather data to track the flow of radiation from the reactor. Today’s algorithms far outpace their predecessors in analytic and predictive powers. Machine learning models are able to deliver ever more granular predictions. They augur a future in which we manage crises before they occur.

NASA has developed the Landslide Hazard Assessment for Situational Awareness (LHASA) model. Data from the Global Precipitation Measurement (GPM) mission is fed into LHASA in three-hour intervals. If a landslide-prone area is experiencing heavy rain, LHASA issues a warning. Analysts then channel that information to the appropriate agencies, providing near-real-time risk assessments.

LHASA can predict landslide risk in near-real-time. (Source: NASA Goddard)
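To make that decision rule concrete, here is a minimal sketch of a precipitation-triggered warning: overlay a landslide susceptibility map with recent rainfall and flag the grid cells that exceed both thresholds. The arrays and thresholds below are synthetic placeholders, not LHASA’s actual data or criteria, which are considerably more involved.

```python
import numpy as np

# Hypothetical, synthetic stand-ins for the real inputs: a landslide
# susceptibility map and a rainfall accumulation grid on the same raster.
rng = np.random.default_rng(0)
susceptibility = rng.random((180, 360))         # 0-1, higher = more landslide-prone
rainfall_mm = rng.gamma(2.0, 10.0, (180, 360))  # accumulated precipitation (mm)

# Illustrative thresholds only; LHASA's real decision rules differ.
SUSCEPTIBILITY_THRESHOLD = 0.7
RAINFALL_THRESHOLD_MM = 50.0

# Flag cells that are both landslide-prone and experiencing heavy rain.
warning_mask = (susceptibility >= SUSCEPTIBILITY_THRESHOLD) & (
    rainfall_mm >= RAINFALL_THRESHOLD_MM
)

print(f"{int(warning_mask.sum())} grid cells under a landslide warning")
```

In practice, the flagged cells would be passed to analysts rather than published automatically, which is exactly where the human judgment discussed below comes in.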

But machine learning can draw the timeline for aid even further back. In Guatemala, models are identifying “soft-story” buildings, those most likely to collapse during an earthquake. “Forecast funding” can mitigate damage by providing the most vulnerable with cash assistance to prepare before a disaster strikes. Bangladesh and Nepal are already implementing this strategy.

Mapping risk in the Caribbean

DrivenData Labs is among the avant-garde in developing predictive solutions. To leverage the power of crowdsourcing, they host online challenges to solve difficult deep learning problems. Like Azavea, they endeavor to use machine learning for social impact. One current competition aims to map disaster risk in the Caribbean. Competitors are building machine learning models that can predict the roofing material of buildings in St. Lucia, Guatemala, and Colombia.

Aerial drone imagery of buildings across the Caribbean for a DrivenData competition. Like Azavea, DrivenData Labs strives to produce ethical machine learning.
Aerial drone imagery of buildings across the Caribbean.

Roofing material is a major factor in a building’s resilience to natural disasters. So, a model that can predict it is also one that can predict which buildings are most at risk during an emergency. Partnering with the World Bank and WeRobotics, Azavea created a dataset of annotated aerial drone imagery for the competition. Users can submit their machine learning models through December 23, 2019. A successful model will “enable experts to quickly and effectively target resources for disaster preparation.”
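For a sense of what competitors are building, here is a minimal sketch of a roof-material classifier: a standard image backbone with its final layer swapped for the competition’s labels. The class names are one plausible label set, the model is untrained, and the sketch assumes a recent version of PyTorch and torchvision; it is a starting point, not a competitive entry.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical roofing-material classes for illustration.
CLASSES = ["concrete_cement", "healthy_metal", "incomplete", "irregular_metal", "other"]

# Start from a standard ImageNet-style backbone and replace the final layer
# so it predicts one of the roofing-material classes.
model = models.resnet18(weights=None)  # fine-tuning from pretrained weights is typical
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))

# A single forward pass on a batch of 224x224 RGB chips cropped around
# building footprints from the drone imagery.
dummy_batch = torch.randn(4, 3, 224, 224)
logits = model(dummy_batch)
predicted = logits.argmax(dim=1)
print([CLASSES[i] for i in predicted.tolist()])
```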

Ethical machine learning for ethical decision-making 

As machine learning models grow more sophisticated, they also become more inscrutable. Deep learning algorithms and neural networks are complex enough that even the computer scientists who build them often cannot fully explain how they reach their decisions. This has become known as AI’s “black box” problem. Added to the documented ways in which bias (intentional or otherwise) infiltrates datasets, this creates a conundrum. Can we produce ethical machine learning with these imperfect tools?

Supporting documentation

Before an algorithm’s prediction can be treated as actionable knowledge, corroborating evidence is necessary. The AMA Journal of Ethics (AMAJE) makes this case. During its prosecution of war crimes in Sudan, the International Criminal Court cited analysis by the Satellite Sentinel Project (SSP). Supported by the Harvard Humanitarian Initiative as well as actor George Clooney, SSP analyzed satellite imagery to document war crimes during the South Sudanese War. AMAJE points out that SSP “has developed its own protocol for what constitutes an adequate level of certainty for analytic conclusions.” It further questions whether our understanding of these protocols is “sufficient to justify…decision making at the policy level.”

Satellite imagery of an ammunition depot in Taji, Iraq. Used deceptively, such images illustrate the need for ethical machine learning.
Satellite imagery was used to justify the 2003 invasion of Iraq. (These materials are reproduced from www.nsarchive.org with the permission of the National Security Archive)

Recent history illustrates the wisdom of that question. In 2003, the George W. Bush administration used satellite imagery of Iraq to bolster claims that Saddam Hussein was building weapons of mass destruction. This analysis convinced U.S. policymakers to authorize the use of military force in the country. The disastrous ramifications of this decision continue to impact the Near and Middle East.

The World Bank points out that “disasters impact vulnerable groups disproportionately.” The weight of machine learning’s mistakes will fall on the shoulders of those least able to bear it. What can organizations do to ensure that all stakeholders understand the fallibility of these models? The World Bank suggests that algorithmic accountability and transparency can address this problem.

Algorithmic accountability and transparency for ethical machine learning

Nick Diakopoulos, a professor at Northwestern University, articulated these two related but distinct concepts. Algorithmic transparency is the principle that the factors and inputs influencing an algorithm’s decision-making should be open to scrutiny. It can lay bare the biases embedded in the data as well as biases that shaped the engineering of the machine learning model. Algorithmic accountability is the idea that organizations should be held accountable for the decisions their models make. This is especially important in the arena of crisis management, where decisions carry life-and-death consequences.
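Transparency can start with something as simple as publishing which inputs drive a model’s predictions. The sketch below uses permutation importance on a synthetic risk model as one generic illustration of that idea; it is not Diakopoulos’s framework, and the feature names are hypothetical.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a disaster-risk model's inputs.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = ["rainfall", "slope", "roof_material", "population", "elevation"]

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each input degrade the model?
# Publishing this kind of summary is one concrete step toward transparency.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```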

These ideas are gaining mainstream acceptance. In April, Representative Yvette D. Clarke (D-NY), Senator Cory Booker (D-NJ), and Senator Ron Wyden (D-OR) introduced the Algorithmic Accountability Act of 2019. Transparency and accountability may soon be legal imperatives as well as ethical ones. 

Ethical engagement

There is one clear imperative to ethical machine learning: consider people and context. This includes following the lead of local populations. It also includes involving them in the decision-making process as relief work continues.

More than a show of respect, this is fundamental to the analytic process. Local actors are necessary to accurately interpret data.

Satellite imagery is fantastic at revealing what is visible, less so at revealing the unseen. This includes concepts such as national, ethnic, religious, or linguistic boundaries. Without that crucial context, maps “tell incomplete stories.”

Cultivating community relationships 

The lack of local involvement plagued 2010 relief efforts in Haiti. There was a “massive mismatch and duplication of services, expertise, and resources.” The well-intentioned became inadvertent “partners in death.” Unsuitable supplies and volunteers flooded the island, diverting resources from the earthquake’s victims. Coordination with Haitian aid groups would have ameliorated this problem. Local communities are more than victims or partners; they are (or should be) leaders.

Consent is fundamental to a community-centered approach to ethical machine learning. This is particularly critical when remote sensing is one of the tools you intend to use. Expectations of privacy are culturally and contextually varied. Assuming otherwise can have harmful consequences.

This was clear following the 2015 Nepal earthquake. In the immediate aftermath, aid and news organizations sent drones to document the damage. The Nepalese Civil Aviation Authority then ordered those drones grounded, banning the use of drones in aid work without prior clearance. Nepal had grave concerns about the potential misuse of images of cherished cultural sites. Had groups asked in advance, they could have avoided sacred areas while still delivering valuable imagery. Instead, the government had to divert its attention from relief efforts to the unauthorized use of its airspace.

Picture shows a collapsed building surrounded by rubble.
Issues of consent affected relief efforts following the 2015 earthquake in Nepal. (Source: Nirmal Dulal)

Reining in the rage 

The application of machine learning techniques to satellite imagery is revolutionizing disaster relief. Crisis maps and image comparisons are helping relief organizations target aid with precision. Deep learning is driving efforts to predict need before crises occur. But it also has the potential to perpetuate inequalities and put the vulnerable at further risk. This technology demands deliberate and mindful ethical guidelines and policies. At Azavea, part of the way we navigate these murky waters is through our process for selecting projects. But the work of navigating these ethical complexities is ongoing. As we ramp up our machine learning work, we’ll continue learning as we aim to do good without doing harm.