Is The Internet Misleading Us About Conflict Casualty Figures?

A research paper by Mary Grace Flaherty and Leslie F. Roberts, published in Conflict and Health, looks at the accuracy of internet search results relating to death rates during crises. Data from the research shows that Googling may not return accurate results for “violence specific mortality rate” – an issue that becomes crucial when donors and agencies have to pick which countries to prioritise for aid response.

By Carolina Are

Picture via Unsplash

Incorrect Representation of Mortality and Violence Rates

Flaherty and Roberts write that incorrect information about mortality and violence rates can arise from various causes, such as political leaders denying evidence or quoting inaccurate statistics to influence or confuse public perception. They also argue that certain deaths are more visible than others – for example, deaths of migrants crossing the Mediterranean from Northern Africa have received a lot of attention in recent years, even though far more migrants likely die crossing the Sahel on the way there.

More often than not, however, it is people outside the international community – those who may be illiterate, or living in rural and undeveloped settings – whose deaths go unnoticed. One prominent example is the high number of Rohingya deaths in Myanmar, which were rarely noted until a Médecins Sans Frontières survey. This may have a profound impact on humanitarian response, support, and resource provision.

Methodology

Flaherty and Roberts conducted their explorations with graduate students at three separate universities in the United States. They chose two public health classes and one information science class, with 60 students in total – a sample of educated, internet-savvy students familiar with statistics. The students were asked to search for the death rates related to five specific crises.

Flaherty and Roberts chose to focus on five areas where conflicts have existed for many years: Venezuela, Syria, Yemen, the Central African Republic (CAR) and Mali. Over a four-month period in 2017–2018, they asked students to conduct an internet search to determine which of the five countries had the highest and lowest “violence specific mortality rate”.

Students were divided into groups of three. They were given approximately 20 minutes to search the internet and assess the relative rates of violent deaths, and each group explored the question with three different approaches:

  1. A general search through search engines like Bing or Google;
  2. A “constrained criteria search mechanism” like Medline or Google Scholar;
  3. A search addressing the question without starting a specific search, “but instead by going to the internet source they deemed most credible for the query (e.g. the World Health Organization, the US Central Intelligence Agency Factbook, or Ministries of Health for specific countries)”.

Results and Possible Explanations

Across all three search approaches, many graduate students could not determine the relative death rates caused by these crises. Thirty-four searches identified a crisis that participants believed had the highest violent death rate: 81% concluded it was Venezuela, followed by Syria (13%), Mali (3%) and CAR (3%). Of the 26 searches that identified a lowest violent death rate, 83% reported either CAR or Mali, followed by Yemen (10%) and Syria (8%).

Aside from the lack of data on CAR and Mali, Flaherty and Roberts’ students were perplexed about whether to include suicides or executions in the measure. As a result, almost half of all inquiries were unable to estimate the highest and lowest rates among the five countries.

Yet, out of the five countries, Venezuela probably has the lowest violence specific death rate, at about 50 per 100,000 population per year. This compares with 435 per 100,000 for Syria over the first five years of war (2011–2016), or 81 per 100,000 in CAR in 2010 – a period far less violent than more recent years. Likewise, while mortality data is scarce, Mali has experienced the highest rate of United Nations peacekeepers killed, as well as the highest rate of aid workers killed, in recent years.
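For context, the per-100,000 figures above are computed by dividing violent deaths by population (and, where relevant, by the number of years covered) and scaling up. A minimal sketch of that arithmetic, using assumed illustrative figures rather than the paper’s underlying data:

```python
def mortality_rate_per_100k(deaths: int, population: int, years: float = 1.0) -> float:
    """Violence-specific mortality rate per 100,000 population per year."""
    return deaths / population / years * 100_000

# Illustrative only: an assumed ~15,000 violent deaths in a population of
# ~30 million yields a rate in the region of Venezuela's ~50 per 100,000.
rate = mortality_rate_per_100k(deaths=15_000, population=30_000_000)
print(round(rate, 1))  # 50.0
```

The same formula explains why multi-year figures, like Syria’s 435 per 100,000 over 2011–2016, must state the period covered to be comparable.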

There are several reasons for this discordance, linked to confusion between different causes of death. The World Health Organization has five categories of violent deaths:

  1. Interpersonal violence
  2. War
  3. Suicide
  4. Legal executions
  5. Collective violence

In the categories of homicide or interpersonal violence, Venezuela does have the highest rates among these five countries within the WHO database. However, hundreds of killings of protesters and others by the Venezuelan government are not reflected in WHO’s violence data, which only covers interpersonal violence. Some of the recorded discordance is therefore related to how violence-specific mortality is defined, and how it is reported by individual governments.

Definitions vs Algorithms

Online searches may not distinguish between these different, nuanced categories. For example, in August 2018, a Google search using the phrases “murder rates by country” and “homicide rates by country” produced practically identical results. In other words, the Google search algorithm treated the two terms as synonyms, despite their different meanings in terms of intention and motivation. In another example, searches on the phrases “violent death rates by country” and “violence specific mortality rate by country” returned the same top two results as the “homicide” and “murder” searches.

Picture via Unsplash

Among the top 10 results, one other result was the same, and two more referenced WHO’s interpersonal violence data. In other words, five of the top 10 sources were the same whether the phrase was “violent mortality” or “violence specific mortality” versus “homicide” or “murder”, suggesting that the Google algorithm does not distinguish war deaths from interpersonal violence deaths in the way WHO does.
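The comparison the authors describe amounts to measuring overlap between the top-N result lists of two queries, which can be sketched as simple set arithmetic. The result lists below are hypothetical placeholders, not the actual 2018 search results:

```python
def top_n_overlap(results_a, results_b, n=10):
    """Count how many of the top-n results two queries share."""
    return len(set(results_a[:n]) & set(results_b[:n]))

# Placeholder result lists standing in for two query result pages;
# sites 1-5 appear in both, mirroring the 5-of-10 overlap reported.
homicide_query = [f"site{i}" for i in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]]
violence_query = [f"site{i}" for i in [1, 2, 11, 3, 12, 13, 4, 5, 14, 15]]
print(top_n_overlap(homicide_query, violence_query))  # 5
```

A high overlap between semantically distinct queries is one simple signal that a search engine is collapsing the distinction between them.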

Digital Divide and North/South Representation

However, Flaherty and Roberts argue that the digital divide, and the Global North’s representation of faraway countries, might also have played a role in the misleading results of their exploration. This lack of representation might also influence how humanitarian assistance is directed. For example, in 2000 during the Kosovo crisis, the relief community spent 18 times more per affected beneficiary in Southeastern Europe than in Somalia, and at least 1,000 times more per death than in the Eastern Democratic Republic of Congo. The researchers write that:

“Being close to Europe, access to international communication, and being of political and military interest to major donors are factors likely associated with both donor motivation to spend, and media motivation to cover a crisis. The influence of these factors existing is likely greater in vicinity of wealthiest nations, such as with crises in Kosovo in 2000 and Syria in 2018.”

Conclusions and Recommendations

The researchers concluded that the internet drew students to the opposite conclusion from reality. Search engine algorithms, for instance, do not always align with the categories of violent deaths defined by the World Health Organization. Additionally, the digital divide means the world’s poorest and most remote locations are often under-reported online, highlighting the importance of primary data collection and reporting in such settings.

Find the open access paper here.