In the ever-evolving social media landscape, the advent of AI has been nothing short of disruptive. It’s changed how we communicate, consume information, and perceive reality. However, as with any tool of great power, AI can be used for both good and harmful purposes, especially in politics. Social media platforms like X (formerly Twitter) are filled with instances where AI is being weaponised to amplify narratives that lean to one political side, often at the expense of minorities and the truth itself.
One of the most insidious examples of this manipulation is the use of 'verified' bot accounts, often controlled by real individuals using large language models (LLMs) to manage several accounts simultaneously. These accounts, masquerading as real users, create an echo chamber that amplifies right-wing content, creating the illusion of widespread agreement with harmful narratives. Their sole purpose is to give a platform to posts from accounts spreading vitriol, often of a racist, xenophobic, or misogynistic nature, thereby granting these voices an undue influence in the online discourse.
Casual racism is nothing new. It’s been a recurring theme on platforms like X. But recently, we’ve seen how AI amplifies these attacks on a massive scale. Take the case of Imane Khelif, the Algerian woman boxer who won gold at the Olympics. From the moment she triumphed over her Italian opponent, a torrent of abuse was directed at her, questioning her identity and womanhood, and AI bots piled on, signalling agreement with hateful posts and keeping the narrative alive for much longer than it would have naturally persisted.
And this is only one example. Recently, right-wing have shifted their focus to Haiti, amplifying a flood of racist misinformation. The target this time? Migrants who are moving into Ohio, a key swing state in the upcoming U.S. elections. The narrative is that these Haitians are criminals, with unfounded and grotesque claims such as they eat pets or hunt wildlife. AI-powered accounts amplify and reinforce these lies, stoking fear and hostility in a population that has already started to violently respond to these narratives.
The purpose is clear: influence the swing state of Ohio ahead of the elections. These narratives are designed to sow fear and anger in local communities, encouraging a climate of distrust and even violence against Haitians. And these lies? They have real-world consequences. Already, people are absorbing this vitriol and acting on these false narratives, consciously or subconsciously.
One of the few silver linings is that users are beginning to identify and expose AI bots. Prompt injection, a technique where users trick AI models into revealing their artificial nature through clever prompts, has become a valuable tool in detecting these malicious actors. But while prompt injections can help root out some bots, many still slip through undetected, spreading their poison unchecked.
The fact that we need such methods to identify fake accounts points to a more significant issue: social media platforms fail to control how AI is being misused to manipulate public discourse. And the stakes couldn’t be higher. As of the time of writing (mid-September 2024), U.S. elections are just around the corner. A former president, deeply aligned with the conservative right, is leveraging AI-powered disinformation to sway public opinion. Swing states like Ohio are becoming battlegrounds, not just for votes but for the very soul of public discourse.
The racist attacks against the people of Haiti aren’t isolated incidents. They’re part of a broader campaign to push fear-based narratives that turn entire communities against one another. Stories about Haitian migrants in Ohio are being twisted beyond recognition: A man legally cleaning roadkill becomes a "Haitian immigrant hunting geese," or a neighbourhood barbecue becomes a rumour of "Haitian families eating dogs."
These may sound like absurd exaggerations, but they are dangerous because of their cumulative effect. Even if someone dismisses a single post as ridiculous, mimetic theory shows us that people begin to internalise these ideas once exposed to repeated messaging. It takes just one piece of confirmation bias for that vitriol to take root. Take this recent article, for example: "Haitian Driver Makes Illegal Turn in Springfield, OH, Smashes Into Mom's Truck with Autistic Daughter in Back." Just from the headline, assumptions and fears are already being seeded, regardless of whether anyone reads the full article.
How many people stop to fact-check? How many simply scroll past headlines and form impressions based on half-truths or outright lies? And how many of those headlines have been carefully designed to spread fear and bolster political agendas?
It’s easy to think this problem only affects "other people" and that the victims of these AI-driven campaigns are distant from your reality. But make no mistake: the same technology used to target Haitians, Algerians, and other minorities can just as quickly be turned on you. AI is a tool, and like any tool, its purpose depends on the hands that wield it. Today, it’s being used to spread racism and fear. Tomorrow, it could be used to distort facts about your community, your beliefs, or your actions.
We live in a time when fewer people are critical of the platforms providing them with information. AI has made it easier than ever to manufacture consensus, create outrage, and ultimately manipulate society. If alarm bells aren’t already ringing, they should be.
AI brings incredible potential, but without oversight, it is becoming a weapon of disinformation and political manipulation. We must recognise this danger and act before it’s too late. Today, it may be Haitians. Tomorrow, it may be anyone who stands in the way of the political agendas these AI bots are programmed to serve. It’s time to be critical of the content we consume and the platforms we trust. If we don’t, we risk allowing these tools to erode the very foundations of our society.