Tom Felle, City, University of London
Fake news has become an important focus for news foundations, democratic interest groups, and journalism academics and researchers, following claims that the 2016 US presidential election may have been influenced by anti-Clinton propaganda created by Russia and shared on social networks.
In recent weeks there has been a concerted effort by news organisations and social networks to combat the proliferation of so-called “fake news” online.
Facebook recently announced a three-day campaign to warn users in 14 countries about sharing content without knowing its origin. The notice, “Tips to Spot False News”, advised users to be wary of headlines, to look closely at URLs and to investigate the source of material before sharing it.
However, Adam Mosseri, Facebook’s vice president of news feed, said there was “no silver bullet” to solve the problem.
The German government has proposed fining social media platforms and other digital publishers up to €50m if they fail to promptly remove fake news, hate speech and other illegal content. Google also recently expanded a fact-checking tag that labels fact-checked content in its search results. Meanwhile, the threat of far-right extremists trying to hijack the French election via social networks is being taken very seriously by media companies and social networks in France, which are working together to fact-check news.
Seven degrees of fakery
But the problem with fake news is that the term has become a catch-all for everything from satire, to information shared out of context, to malicious content created with the intention of deceiving and potentially influencing public opinion.
It is even used by some – including the US president, Donald Trump – to describe news we just don’t like or disagree with.
Claire Wardle at First Draft News has created a “misinformation matrix” of seven types of mis- and disinformation, ranking stories from poor journalism to content produced for profit, political influence or propaganda.
Some of the “fake” news shared on social networks is funny. But worryingly, there is a suspicion, even allegations, that malicious false news has been, and is being, created to “game” social media and search algorithms and influence public opinion, as is alleged to have happened in the run-up to the US presidential election.
News companies must bear some blame for being duped into believing their future success lay at the end of a social media rainbow. News companies desperate to survive have been consumed with metrics, producing increasingly generic content optimised for social sharing to reach ever greater audiences. But in doing so they have lost a vital connection with their audience. On social media, users are promiscuous and no longer connect the news they consume and share with the news companies that produced it in the first place.
Social networks and other algorithm-based aggregators have turned a blind eye to the problem, their algorithms rewarding “content” that drives engagement rather than ranking for trust and truth. Investigations by BuzzFeed’s media editor Craig Silverman found that fake news companies were making significant revenue from traffic generated by shares on social platforms and other aggregators.
More worryingly, though, the same “gaming” of the algorithms may be used for propaganda or to influence public opinion. And therein lies the real threat of fake news. In a post-truth society, trust in institutions, including the media, has reached its nadir. But we trust our friends and families – and we trust what they share on social media, and are all the more influenced by it.
Whatever their editorial leaning, traditional news organisations – and the journalists who staff them – have an overriding public interest function. Tech companies have no such lofty ideals. Even if Facebook itself would never attempt to influence public opinion via its timeline, the very fact that the news appearing in timelines is selected by algorithms is already a cause for concern, because algorithms can be gamed.
Algorithms rule, OK?
Now that they have been publicly embarrassed into taking action, the next step for tech companies and social networks in combating fake news will be based on algorithms pre-filtering content, rather than any overriding public interest or editorial decision-making. But what if the Guardian runs an exposé on a major shareholder in Facebook, or The New York Times uncovers uncomfortable truths about a past or future relationship between a major social network and national security services? How will social media algorithms deal with stories like that? In an era where news is increasingly viewed via social networks, these networks are the new gatekeepers – and they hold all the keys.
If the public only ever gets to see content that has been pre-filtered, this will have serious implications for free speech and quality journalism – especially journalism that argues against prevailing viewpoints. Social networks will disintegrate into echo chambers where dissenting voices are drowned out in favour of news and opinion that chimes with a pre-filtered view based on the algorithm’s data about an individual user’s views and preferences.
No news organisation has ever commanded as much power to shape public opinion as Facebook – and yet its founder Mark Zuckerberg clings steadfastly to the mantra that the social network is not a media company, and so has no editorial function.
Digital literacy is part of the answer, and Facebook’s latest foray was in part an attempt to encourage audiences to be more sceptical about what they see and what they share. But in doing so, Facebook is abdicating its own responsibility to address the problem head-on.
Zuckerberg’s claims are wearing thin. Until Facebook faces up to the reality that it has an editorial responsibility to its audience and to news organisations that help produce quality, trusted news, the problem will never be solved.
Tom Felle, Lecturer in News and Digital Journalism, City, University of London
This article was originally published on The Conversation. Read the original article.