As social animals, it’s in our nature to embellish, bend the truth, and contrive falsehoods to advance our agendas. These tendencies, however, can have untold consequences in the digital age. Indeed, the network effects of technology can be misused to disseminate falsehoods at exponential scale. With the click of a button, lies can spread like a virus, reinforcing existing biases, infecting the news cycle, and advancing propaganda aimed at sabotaging democratic processes.
Untruths can be amplified by bots, influencers and social media campaigns to skew public opinion, lionize or deride public figures, undermine the fourth estate, and create mass hysteria around important issues affecting the citizenry. They can affect individual lives deeply. They can create and amplify trends. They can affect elections, policies and almost every stratum of life.
In the absence of regulatory policies and processes to counter these dangerous trends, we’ll end up unable to trust any information at all. For media and social media companies, the risk is not merely reputational; there are hard costs associated with monitoring and filtering content, as well as legal costs. There’s also a growing risk of fines, as governments around the world take notice and put regulations in place to limit the damage. To protect their businesses, and to stay on the right side of societal ethics, these companies need to deploy technologies that help build trust while protecting the freedoms that people hold dear. Indeed, the era of trust technologies is upon us.
Tools of deception
Digital tools that can easily manipulate and alter data have been around since the dawn of the internet. Consider “photoshopped”, which entered the lexicon many years ago to describe the deliberate alteration of photographs.
Now, with AI and the vast repository of content online, deception has reached a whole new level. Deepfakes produce fake content that is almost indistinguishable from the real thing. Take a look at some of the people pictured on ThisPersonDoesNotExist.com; if you didn’t know, could you tell that these are pictures of non-existent people – that they are total fabrications? Tools that similarly fake audio and video content have also been developed. As these tools become more sophisticated and more widely available, they will open the floodgates to a glut of maliciously designed falsehoods.
As the lines between what is true and what is false begin to blur, the consequences grow more confounding. What happens when footage of a real event is dismissed as fake? What happens to legal evidence if there are multiple versions that don’t cohere with each other? What happens when inflammatory fake events or speeches are timed for release at critical moments, such as right before an election? What happens when such content leads to an outbreak of violence?
As for distribution, the campaigns that spread these deceitful messages are becoming more high-tech, better funded, and better organized. At first, there were individual trolls spreading their animus one message at a time. Then came bots – unsophisticated, but highly scalable. Even as bots acquire intelligence (using “intelligence” loosely), they don’t yet have the creativity and social skills to deliver intricate deceptions.
The most alarming danger, however, comes from “troll farms” – large armies of professional trolls who handle multiple “sock puppet” accounts. Twitter, in one report, indicated that over nine million tweets were distributed by one Russia-backed organization from 3,841 accounts. Of course, Twitter is not the only channel misused to disseminate propaganda – the entire social media landscape presents multifarious avenues to propagate falsehoods and create echo chambers.
As Indiana University researchers Giovanni Luca Ciampaglia and Filippo Menczer stated, “these personalization technologies are designed to select only the most engaging and relevant content for each individual user … But in doing so, it may end up reinforcing the cognitive and social biases of users, thus making them even more vulnerable to manipulation.”
Tools of trust
Fortunately, trolls and propagandists are not the only ones who can use technology: an increasingly powerful array of tools is becoming available to detect fake news.
One of the first lines of defense is a reverse image lookup – searching for an image online to see where it has appeared before. Often, the results will surface the original image, making it clear how it was altered. More sophisticated photo manipulations leave artifacts and other tell-tale signs, though these may require a specialist to detect. There are even AI-powered image detection tools that can deeply analyze photos at scale.
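A core technique behind reverse image lookup is perceptual hashing: reducing an image to a compact fingerprint that stays similar when the image is lightly edited. The sketch below is a minimal, illustrative average-hash in plain Python; images are represented as bare 2D grids of grayscale values, whereas a real pipeline would first decode and downscale an actual image file.

```python
# Illustrative sketch of perceptual (average) hashing, a common building
# block of reverse-image-lookup services. Pixel grids here are toy data.

def average_hash(pixels):
    """Return a bit list: 1 where a pixel is brighter than the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests a near-duplicate."""
    return sum(a != b for a, b in zip(h1, h2))

# A 4x4 grayscale "thumbnail" and a lightly edited copy of it.
original = [[10, 200, 10, 200],
            [10, 200, 10, 200],
            [10, 200, 10, 200],
            [10, 200, 10, 200]]
edited = [row[:] for row in original]
edited[0][0] = 250  # simulate a small manipulation

d = hamming_distance(average_hash(original), average_hash(edited))
print(d)  # prints 1: only one bit differs, so the images are near-duplicates
```

Because the hash tolerates small changes, an edited copy still lands close to the original in hash space – which is exactly how a lookup service can say "this image appeared before, in a slightly different form."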
Companies that build photo tools can build fraud detection into the tools themselves. For example, metadata in Photoshop files records how an image was created, which can help trace its history. Adding watermarks to fake images could do the same.
Bots can be fought with bots: AI tools can, fairly accurately for now, determine whether an account is a bot. This portends an arms race of bots versus bot detectors, with increasing AI capabilities on both sides. Hopefully, since social media companies have a deep incentive to stop the intrusion, their wealth and technical know-how will produce effective solutions.
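To make the bot-detection idea concrete, here is a hedged sketch of feature-based scoring. The account fields and thresholds are invented for illustration, not any platform's real model; production detectors combine hundreds of features with trained classifiers, but the shape of the idea is the same.

```python
# Hypothetical feature-based bot scoring. All field names and thresholds
# below are illustrative assumptions, not a real platform's detector.

def bot_score(account):
    """Return a 0-1 score; higher means more bot-like."""
    signals = [
        account["posts_per_day"] > 100,                    # superhuman posting rate
        account["followers"] < account["following"] / 10,  # follow-spam pattern
        account["account_age_days"] < 30,                  # newly created account
        not account["has_profile_photo"],                  # default avatar
    ]
    return sum(signals) / len(signals)

suspect = {"posts_per_day": 400, "followers": 12, "following": 5000,
           "account_age_days": 7, "has_profile_photo": False}
human = {"posts_per_day": 3, "followers": 250, "following": 300,
         "account_age_days": 1500, "has_profile_photo": True}

print(bot_score(suspect), bot_score(human))  # prints 1.0 0.0
```

The arms-race dynamic follows directly: once bot operators learn the signals, they adjust their accounts to evade them, which is why real systems retrain on fresh behavioral data rather than relying on fixed rules like these.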
Social media companies might also implement identity verification procedures to better ensure that account users are real people. Requiring proof of identity would help protect users from propaganda and misinformation, but it’s an open question whether users would be comfortable proving their identity to use their accounts. However, as social media platforms begin to offer more services on top of their primary product, such as payments, identity verification becomes a reasonable ask.
For advertising accounts, identity verification is a natural step toward legitimization: the Honest Ads Act would prohibit the purchase of US political advertising by foreign nationals and require social media advertisers to reveal the true source of funding for political ads.
Not taking action against fake news would probably not go over well with the public or regulators. Margot James, the UK's Minister of Digital and Creative Industries, made some prescient comments about the need for safety on the internet:
“There will be a powerful sanction regime and it's inconceivable that it won't include financial penalties. And they will have to be of a size to act as a deterrent. If you look at the ICO's [Information Commissioner's Office*] fining powers, that might be a useful guide to what we're thinking about.”
*The ICO is the UK agency responsible for enforcing the GDPR and other data protection laws.
The fight against fake news must not be underestimated: the battle for the minds of the world, waged over online networks, is the future of information warfare. Trust in the press, trust in leaders, trust in democracy itself is at stake. Creating regulations and processes to ensure that trust is not broken is one of the most important issues of our time.