In the digital age, mass protests across the globe have been significantly influenced by two powerful forces: Artificial Intelligence (AI) and social media. While these tools have transformed how people organise, communicate, and mobilise, they have also brought about numerous challenges, including the spread of misinformation, heightened tensions, and manipulated narratives that can escalate violence. This article explores the harmful impact of AI and social media on mass protests, with a specific focus on recent events in Pakistan and Kenya, as well as broader global implications.
The Case of Pakistan: AI-Generated Misinformation in Protests
The recent protests in Pakistan, triggered by the arrest of former Prime Minister Imran Khan, offer a stark example of how AI and social media can exacerbate violence. Khan’s Pakistan Tehreek-e-Insaf (PTI) party organised large demonstrations demanding his release. However, the protests quickly spiralled into violence, with protesters clashing with police and military forces in the capital, Islamabad. During the chaos, a range of AI-generated images circulated online, purporting to show the horrific aftermath of the protests, including streets allegedly covered in blood.
One of the most widely shared AI-generated images depicted Jinnah Avenue in Islamabad, supposedly covered in bloodstains after violent clashes. Fact-checkers quickly debunked it, showing that it had been generated by AI: the abnormal positioning of shadows, misplaced buildings, and artificial lighting all pointed to a computer-generated image. Similar AI-manipulated images appeared across various platforms, accompanied by claims that up to 300 people had been killed during the protests, though official reports indicated far lower figures. Presented without context or verification, these images fuelled the spread of disinformation, inciting further violence and inflaming tensions between the protesters and the government.
AI-generated visuals, often shared rapidly via platforms like X (formerly Twitter), Instagram, and Facebook, have the potential to create a distorted sense of reality. In this case, they contributed to a narrative of widespread carnage and governmental oppression that was not entirely grounded in truth. This manipulation of visual media has become a key tool in modern protest movements, where the digital landscape is as crucial as the physical one in shaping public perception.
Kenya: AI and Social Media as Tools for Mobilization and Misinformation
In Kenya, mass protests erupted in response to the Finance Bill 2024, which included controversial tax hikes. The protests were largely youth-led, organised through social media platforms such as TikTok and X. These protesters, many from the Gen Z and millennial demographics, utilised AI tools to amplify their cause, creating chatbots and databases exposing corruption among politicians and dissecting the economic impact of the bill. These tools helped spread awareness, mobilise protests, and even fund medical bills and funeral costs for those injured or killed during the demonstrations.
However, as with Pakistan, the rapid spread of misinformation and manipulated content has played a damaging role in the Kenyan protests. AI-generated bots, used to disseminate targeted disinformation, created confusion and heightened political polarisation. In a country with a history of ethnic tensions, this digital chaos can have grave consequences. AI tools that aggregate and disseminate content without proper verification have amplified political divisions, making it harder for protesters to communicate their messages effectively.
The Kenyan government has responded by expressing concern over the use of AI and its potential for spreading misinformation, even citing the World Economic Forum’s Global Risks Report 2024, which identifies AI-driven misinformation as a leading global risk. In Kenya’s case, the use of AI for protest mobilisation has shown both its empowering and harmful potential. While it provides a powerful platform for youth activism, it also opens the door to manipulation by bad actors seeking to escalate unrest.
Global Implications: Social Media as a Double-Edged Sword
The influence of AI and social media on mass protests is not confined to Pakistan or Kenya. Across the world, social media platforms like Facebook, Twitter, and Instagram have become integral to political movements, both for organising protests and for spreading propaganda. The role of social media in fuelling violence and division has been observed in numerous incidents, from the Arab Spring to the recent riots in the United Kingdom.
One notable case was the 2024 UK riots, where misinformation spread through social media platforms significantly exacerbated the violence. Users on platforms like X shared graphic videos of the unrest, inflaming tensions and fuelling aggressive confrontations between police and rioters. One viral post included a video of a machete fight in Southend taken from an entirely unrelated incident, yet it was misattributed to the riots, further misleading the public and stoking fears of a widespread breakdown of order.

Social media’s role in normalising violent conduct, particularly among younger generations, cannot be overstated. Many individuals who participated in the riots, like 18-year-old Bobby Shirbon, were influenced by inflammatory content online. Shirbon’s participation was motivated by what he had seen on social media, reinforcing the idea that these platforms can rapidly shift a person’s perception of reality and lead to dangerous actions.
In these instances, social media platforms amplify sensationalist content that stirs strong emotional reactions. Algorithms that prioritise engagement often boost provocative and violent material over balanced, factual reporting. This algorithmic bias makes it difficult to contain the spread of misinformation and hate, particularly in high-stress situations like mass protests.
The Power of Algorithms: Amplification of Hate and Division
Algorithms designed to increase user engagement are a fundamental part of the problem. These algorithms prioritise content that generates strong emotional reactions—whether those are anger, fear, or outrage. Consequently, disinformation, false narratives, and harmful content are more likely to appear in users’ feeds. This creates an environment where falsehoods can spread faster than truths, contributing to a cycle of misinformation that exacerbates real-world consequences.
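The dynamic described above can be illustrated with a deliberately simplified sketch. Everything here is hypothetical: the post fields, the weights, and the scoring function are invented for demonstration, and real platform ranking systems are proprietary and vastly more complex. The point is only to show how weighting emotionally charged signals (shares, angry reactions) more heavily than neutral ones makes provocative content rise to the top of a feed.

```python
# Toy illustration of engagement-based feed ranking.
# All weights and data are invented for demonstration purposes.
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    shares: int        # reshares spread content to new audiences
    angry_reacts: int  # strong emotional reactions
    likes: int         # milder, lower-signal engagement


def engagement_score(post: Post) -> float:
    # Hypothetical weights: strong emotional signals count for more
    # than likes, so outrage-driven posts tend to score higher.
    return 3.0 * post.shares + 2.0 * post.angry_reacts + 1.0 * post.likes


def rank_feed(posts: list[Post]) -> list[Post]:
    # Order the feed purely by engagement, with no accuracy signal at all.
    return sorted(posts, key=engagement_score, reverse=True)


feed = rank_feed([
    Post("Calm, verified report on the protest", shares=10, angry_reacts=2, likes=120),
    Post("Unverified claim of mass casualties", shares=80, angry_reacts=150, likes=40),
])
# The unverified, outrage-driven post outranks the factual one.
print(feed[0].text)
```

Because nothing in the scoring function rewards accuracy, the unverified post (score 580) beats the verified report (score 154) purely on emotional pull, which is the cycle the paragraph above describes.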
In countries with diverse populations, such as Kenya, where ethnic tensions are high, the amplification of divisive content can be particularly dangerous. In such contexts, misinformation can quickly escalate into violence, as groups perceive one another as enemies based on fabricated or manipulated content. This is particularly true when AI-generated content is used to create seemingly credible but false images or videos that sway public opinion.
In the UK riots, for instance, algorithmic amplification of harmful content contributed to a sense of instability and division. Videos that depicted violent acts were often framed with provocative captions, inciting further violence and making it more difficult to distinguish between fact and fiction. The unchecked spread of these narratives through social media platforms only intensified the chaos.
The Path Forward: Regulation and Accountability
Given the harmful impact of AI and social media on mass protests, it is essential to explore regulatory measures to mitigate these risks. Governments and tech companies must collaborate to develop frameworks that promote accountability and transparency in the digital space. Social media platforms need to implement more effective content moderation systems that prioritise accuracy and contextualisation over sensationalism.
Additionally, the development of AI technologies must be done with ethical considerations in mind, ensuring that they are not used to manipulate public opinion or fuel conflict. Education on digital literacy and critical thinking should be promoted to help users recognise and reject misinformation.
While social media and AI have the power to amplify voices and mobilise political action, their potential for harm cannot be ignored. By addressing the spread of disinformation, fostering accountability, and ensuring that these technologies are used responsibly, it is possible to mitigate their negative impact on mass protests and global political stability.
Source: The Perilous Role of Artificial Intelligence and Social Media in Mass Protests