Combatting Political Disinformation and Manipulation.
In the turbulent landscape of current politics, the lines between truth and fiction have become increasingly blurred. The rise of artificial intelligence (AI) in political campaigns has exacerbated this phenomenon, amplifying the spread of disinformation and manipulation to unprecedented levels. As we navigate the convergence of the largest election year in history with the proliferation of AI technology, it is crucial to confront the existential threat posed by the fabrication of truth to democratic discourse and social justice endeavours.
Close to three billion people are expected to head to the polls across several economies – including Bangladesh, India, Indonesia, Mexico, Pakistan, the United Kingdom and the United States – in the next year. The interconnectedness of historical events and the influence of large-scale trends, echoed in Asimov’s concept of ‘psychohistory’ from the novel ‘Foundation’, underscore the urgency of addressing this pervasive threat. From Pakistan to the United States, there are examples of AI being strategically wielded to sway public opinion and, in some cases, to undermine the integrity of candidates and of the elections themselves.
In Pakistan, the Tehreek-e-Insaf (PTI) party embraced the power of artificial intelligence to circumvent traditional media barriers. Despite their leader, Imran Khan, being incarcerated, AI technology enabled the party to disseminate his message directly to millions of voters by generating content in Khan’s own voice. In this way, Khan was able to address millions of Pakistanis virtually both before and after the polls.
In Indonesia, the rebranding of a controversial figure as a ‘cuddly grandpa’ through AI avatars underscores technology’s ability to reshape public perception and electoral outcomes. Prabowo Subianto, a former army general whose reputation was once tarnished by alleged human rights violations, successfully moved past that history to secure victory in the recent Indonesian election.
In the United States, the proliferation of generative AI has facilitated the spread of disinformation, with manipulated videos and hyperrealistic robocalls targeting candidates such as President Joe Biden. Political campaigns are also leveraging AI to target voters with tailored messages, exploiting social divisions and amplifying partisan tensions.
The implications of this democratisation of disinformation are profound. As individuals gain access to sophisticated AI tools, the barrier to entry for spreading false narratives falls, threatening the very foundation of democratic discourse. In the high-stakes arena of electoral politics, the ability to manipulate public opinion can tip the scales of power and undermine the legitimacy of democratic institutions.
If we are to confront the challenges of this potential ‘post-truth era’, it is imperative that governing bodies take decisive action to safeguard democratic integrity. Principles and policies can be implemented through self-enforcement mechanisms to combat the spread of disinformation, but who holds bad actors accountable, especially when interference originates outside a country’s jurisdiction? Transparency in political advertising, robust cybersecurity measures and public awareness campaigns are essential tools in the fight against misinformation – but who sets these policies, and who polices the truth and mitigates attacks?
Moreover, does the global community recognise the interconnected nature of this threat? There are opportunities to fortify democratic institutions against the corrosive influence of disinformation and manipulation, including the use of technology itself. However, it is an unprecedented challenge.
What does our future hold if the integrity of our elections can’t be preserved?
Some key insights:
- According to a survey conducted in December 2020, 18.4 percent of people aged 16 and above in Europe had encountered manipulated content, such as photos and videos that had been doctored, Photoshopped or turned into deepfakes, and 15.5 percent had come across fake news sites.
- In the European Union, fake news is perceived as a significant problem for democracy, with 81% of respondents expressing concerns about its impact.
- Bulgaria ranks highest among EU states where fake news is seen most frequently, highlighting the widespread nature of the issue.
- In India, misinformation and disinformation were ranked as the highest risks by experts, indicating the gravity of the threat posed by false narratives in the world’s largest democracy.
References & further reading:
World Economic Forum: Global Risks Report 2024
Statista: Fake News in Europe Report
Brookings: Data misuse and disinformation: Technology and the 2022 elections
Politico: Imran Khan AI Victory Speech
BBC: Prabowo Subianto ‘Cuddly Grandpa’
Wired: Biden Robocall