In the turbulent landscape of current politics, the lines between truth and fiction have become increasingly blurred. The rise of artificial intelligence (AI) in political campaigns has exacerbated this phenomenon, amplifying the spread of disinformation and manipulation to unprecedented levels. As we reflect on the convergence of the largest election year in history with the proliferation of AI technology, it is crucial to confront the existential threat that the fabrication of truth poses to democratic discourse and social justice endeavours.
Close to three billion people were expected to head to the electoral polls across several economies in 2024 – including Bangladesh, India, Indonesia, Mexico, Pakistan, the United Kingdom and the United States. The interconnectedness of historical events and the influence of large-scale trends, as reflected in Asimov’s concept of ‘psychohistory’ from the book Foundation, point to the urgency of addressing this pervasive threat. From Pakistan to the United States, there were examples of AI being strategically wielded to sway public opinion and, perhaps, to undermine the integrity of candidates and the elections themselves.
In Pakistan, the Pakistan Tehreek-e-Insaf (PTI) party embraced the power of artificial intelligence to circumvent traditional media barriers. Despite its leader, Imran Khan, being incarcerated, AI technology enabled the party to disseminate his message directly to millions of voters by generating content in Khan’s own voice. In this way, Khan managed to speak virtually to millions of Pakistanis both before and after the polls.
In Indonesia, the rebranding of a controversial figure as a ‘cuddly grandpa’ through AI avatars underscores technology’s ability to reshape public perception and electoral outcomes. Prabowo Subianto, a former army general, successfully navigated past a historical reputation tarnished by alleged human rights violations to secure victory in the Indonesian election.
In the United States, the proliferation of generative AI facilitated the spread of disinformation, with manipulated videos and hyperrealistic robocalls impersonating then-President Joe Biden. Political campaigns further leveraged AI to target voters with tailored messages, exploiting social divisions and amplifying partisan tensions.
The implications of this democratisation of disinformation are profound. As individuals gain access to sophisticated AI tools, the barrier to entry for spreading false narratives diminishes, threatening the very foundation of democratic discourse. In the high-stakes arena of electoral politics, the ability to manipulate public opinion can tip the scales of power and undermine the legitimacy of democratic institutions.
If we are to confront the challenges of this potential ‘post-truth era’, governing bodies must take decisive action to safeguard democratic integrity. Principles and policies can be implemented through self-enforcement mechanisms to combat the spread of disinformation, but who holds bad actors accountable, especially when interference originates outside a country’s jurisdiction? Transparency in political advertising, robust cybersecurity measures and public awareness campaigns are essential tools in the fight against misinformation, but who sets out these policies, and who polices the truth and mitigates attacks?
Moreover, does the global community recognise the interconnected nature of this threat? There are opportunities to fortify democratic institutions against the corrosive influence of disinformation and manipulation, including the use of technology itself. However, it is an unprecedented challenge.
What does our future hold if the integrity of our elections can’t be preserved?
References & further reading:
World Economic Forum: Global Risks Report 2024
Statista: Fake News in Europe Report
Brookings: Data misuse and disinformation: Technology and the 2022 elections
Politico: Imran Khan AI Victory Speech
BBC: Prabowo Subianto ‘Cuddly Grandpa’
Wired: Biden Robocall