Immunizing European democracy against disinformation: Why fake news is about more than truth
The events of the last three years have taught us one key lesson: democracy can be hacked.
The upcoming European Parliament elections will be a bellwether for the severity of disinformation in society today. They are uniquely vulnerable, and any attempt to manipulate them could put the entire process in jeopardy. Indeed, events in the UK and the U.S. demonstrate how fake news can play a role in throwing entire societies into disarray. But despite the gravity of the threat, as a society we remain digitally naïve. Profiling firms and social media behemoths continue to monetize and abuse personal data. As a result, the disinformation we are targeted with is more prevalent than ever.
On 26 March, ICF Next, in collaboration with its academic partner, the think-tank Protagoras, welcomed Paul-Olivier Dehaye, a mathematician, data specialist, and one of the first people to uncover the Cambridge Analytica scandal, and Janak Kalaria, a technology and analytics specialist at ICF, to share their thoughts on the resilience of European democracy in the age of disinformation.
Content is not king
The public narrative surrounding fake news focuses on the truth value of the content itself—is what we’re being told true or false? But Dehaye argues the issue is not as binary as this. “Content is only a very narrow part of the problem,” he says. “The problem goes much further and goes into targeting and personalization. Why? Because the targeting helps amplify the speed at which this information circulates.” By focusing less on the content itself, and more on how the content is delivered to specific users, we can kill the problem at its root.
The issue of profiling, targeting and virality is dangerous within a political context, and one of Facebook’s profiling services—Facebook Lookalike Audiences—is questionable in this respect. It delivers thousands of relevant leads to businesses by analyzing demographic trends within the profiles of those who have bought or searched for their products. Although useful from a business perspective, the practice becomes legally and ethically dubious when used for political purposes. “Lookalike Audiences is what propels a lot of disinformation,” says Dehaye. “It’s using outsourced profiling for political purposes without the explicit consent of the individuals.” In theory this isn’t legal, but due to the absence of enforced regulation in the field, the tool continues to be used in these contexts.
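To make the mechanism concrete, here is a minimal sketch of how audience expansion of this kind works in principle: represent each user profile as a numeric feature vector, then rank all other users by similarity to a "seed" audience of known buyers or engagers. Everything in it (the feature encoding, the similarity measure, the cut-off) is an illustrative assumption, not a description of Facebook’s actual system.

```python
# A hypothetical sketch of lookalike-audience expansion, assuming user
# profiles are already encoded as numeric feature vectors.
import numpy as np

def expand_audience(seed_profiles: np.ndarray,
                    candidate_profiles: np.ndarray,
                    top_k: int = 1000) -> np.ndarray:
    """Return indices of the top_k candidates most similar to the seed audience."""
    # Collapse the seed audience into a single "typical member" vector.
    centroid = seed_profiles.mean(axis=0)
    # Cosine similarity between every candidate profile and that centroid.
    norms = np.linalg.norm(candidate_profiles, axis=1) * np.linalg.norm(centroid)
    sims = candidate_profiles @ centroid / np.where(norms == 0.0, 1.0, norms)
    # The highest-scoring candidates become the "lookalike" audience.
    return np.argsort(sims)[::-1][:top_k]

# Hypothetical usage: rows are users, columns are profile features
# (age band, interests, engagement counts, ...).
seed = np.array([[1.0, 0.0, 3.0], [0.8, 0.1, 2.5]])
candidates = np.random.rand(10_000, 3)
lookalikes = expand_audience(seed, candidates, top_k=100)
```

Averaging the seed into a single centroid keeps the sketch simple; the point is that no individual in the expanded audience ever consented to being profiled this way.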
These tools are why the debate needs to be broader than just the truth value of content. “Information is not just true or false but is aimed at particular people to try to answer needs or weaknesses they may have with a view towards virality,” comments Dehaye.
Technical problem, technical solution
Developing a solution is difficult because fake news is constantly evolving, becoming more sophisticated and harder to identify. Fraudsters and trolls continue to devise new methods of deception, such as deepfakes – videos that, although seemingly real, are in fact fabricated. Kalaria, an expert in the technological landscape surrounding fake news, believes that the solution to the problem is almost paradoxical. “The irony of fake news is that the technology that caused this crisis is the same technology that can solve it,” he says. “We need to trust the algorithm to be one step ahead of those trying to misuse it.”
According to Kalaria, we already possess the technology to successfully combat fake news. Artificial intelligence can detect suspicious articles by applying techniques in computational linguistics, machine learning, keyword analytics and link analysis. If a piece of content raises red flags when viewed through each technological lens, then there is a high chance of it being false.
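As a rough illustration of the computational-linguistics angle, the sketch below trains a toy classifier to score headlines for lexical red flags. The miniature corpus, the labels and the choice of model are all illustrative assumptions; a production system would combine this signal with the keyword analytics and link analysis mentioned above rather than trusting any single lens.

```python
# A minimal sketch of lexical fake-news scoring with scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled corpus: 1 = flagged as disinformation, 0 = legitimate.
articles = [
    "SHOCKING: secret cure they don't want you to know",
    "Parliament approves budget after lengthy negotiations",
    "You won't BELIEVE what this politician did next",
    "Central bank holds interest rates steady, citing inflation data",
]
labels = [1, 0, 1, 0]

# TF-IDF turns text into weighted word/bigram features; the classifier
# learns which lexical patterns separate the two classes.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(articles, labels)

# A high score is one red flag among several, not a verdict on its own.
prob_fake = model.predict_proba(["Miracle pill BANNED by elites, share now!"])[0][1]
print(f"Estimated probability of disinformation: {prob_fake:.2f}")
```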
Virality is also a key indicator from a technological standpoint. According to Kalaria, artificial intelligence tracks the speed at which content is circulated. “Anything that is fake is 70 percent more likely to be retweeted than a true story.” Working in partnership with a rapid alert system, the artificial intelligence can detect and remove fake news in real time, before the disinformation has a chance to mislead public opinion. However, despite the efficiency of the algorithm, Kalaria stresses that we still have a part to play. Human expertise, he says, is still needed to verify complex and intricate cases of fake news.
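In its simplest form, a rapid alert of this kind could watch spread velocity and escalate anomalies to human reviewers, reflecting Kalaria’s point that people remain part of the loop. The sliding window and threshold below are illustrative assumptions, not values from any production system.

```python
# A hypothetical sketch of a virality-based rapid alert: flag content whose
# share velocity within a sliding time window looks anomalous.
from collections import deque
from typing import Optional
import time

class ViralityMonitor:
    def __init__(self, window_seconds: float = 600.0, alert_threshold: int = 500):
        self.window = window_seconds      # look-back window for share events
        self.threshold = alert_threshold  # shares per window that trigger review
        self.share_times = deque()        # timestamps of recent shares

    def record_share(self, timestamp: Optional[float] = None) -> bool:
        """Log one share event; return True if the item should be escalated."""
        now = time.time() if timestamp is None else timestamp
        self.share_times.append(now)
        # Drop events that have fallen out of the sliding window.
        while self.share_times and now - self.share_times[0] > self.window:
            self.share_times.popleft()
        # Escalate to human fact-checkers once spread velocity is anomalous.
        return len(self.share_times) >= self.threshold

# Hypothetical usage: feed in share events as they arrive from a stream.
monitor = ViralityMonitor(window_seconds=600.0, alert_threshold=500)
if monitor.record_share():
    print("Spread velocity anomalous: queue for human review")
```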
Educate, equip, empower
Both Kalaria and Dehaye agree that, however sophisticated the technological solutions are, the public still needs to adopt robust intellectual self-defense methods to mitigate fake news. This starts with educating the public to read news laterally and to verify sensationalist headlines against other sources. Training even a small percentage of the population to be suspicious and skeptical of unverified news can help spread this culture of doubt, in turn de-escalating the problem.
Reversing the trust dynamics of the internet can also help, says Kalaria. Ten years ago, we believed everything on the internet to be white-listed unless we had proof otherwise. Now, given the ‘Wild West’ nature of today’s internet, the opposite should apply—we should treat what we read as false until proven true.
The burden of responsibility
Although these are long-term strategies, there are steps that can be taken in the coming weeks to shield the European elections from disinformation. Dehaye believes that the role of national Data Protection Authorities (DPAs) in each Member State is critically important. They have a responsibility to ensure each social media platform is transparent about how data is used and why users are targeted with particular adverts.
But DPAs can only do so much, and social media titans also have a responsibility to respond to these calls for transparency. Calls which, so far, have been ignored—perhaps best illustrated by Mark Zuckerberg’s evasive appearance at a pan-European hearing following the Cambridge Analytica scandal.
Dehaye says that the attitude of Facebook needs to change. He gives the example of the forced disclosure of targeting information – a tool used by Facebook to allow users to understand why they are targeted with certain adverts. But use it more than a handful of times and Facebook will block you from using it again—citing ‘misuse’ of the tool. “We’re supposed to trust these platforms,” he says. “But they’re showing bad faith.”
While fake news presents a risk in the context of the European elections, the solution relies on a unified approach: collaboration between governments and content platforms, between technological algorithms and the wider public. As a society, there are opportunities for all of us to contribute towards purging our newsfeeds of disinformation. Although this will not be an instant process, it can be initiated by the public making a small but important declaration: that our data is valuable, and we expect full control of it.