AI propaganda has become a major concern in global politics because of its increasingly direct impact on election integrity. The surge of artificial intelligence in the production of manipulative content has pushed society into a new information era marked by uncertainty. Many countries are entering the largest election cycle in modern history just as generative AI widens the openings for manipulating public opinion.
For the first time, technology not only influences how campaigns are run but also touches the core of political trust. Amid an explosion of elections involving more than 40 percent of the world's population, analysts warn that AI's ability to generate realistic text, images, and audio has left democracy in a fragile position. The looming information crisis becomes more apparent as the line between fact and fabrication blurs.
Modern Political Microtargeting Revolution
The new wave of AI propaganda applies microtargeting with far greater precision than the social media campaigns of a decade ago. AI systems enable psychological analysis at scale, identifying emotional vulnerabilities, issue preferences, and individual patterns of information consumption. Politically, this amounts to a new kind of power: the ability to shape voters' opinions without their awareness.
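To make the mechanism concrete, here is a minimal sketch of the segmentation step such targeting relies on: clustering voters by per-person feature vectors. The feature names, data, and parameters below are illustrative assumptions, not a description of any real operation.

```python
# Toy illustration of audience segmentation for microtargeting.
# Assumes hypothetical per-voter features; real systems use far
# richer behavioral data. Values here are random placeholders.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical columns: economy, immigration, climate, outrage_engagement
voters = rng.random((1000, 4))

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(voters)
for label in range(5):
    profile = voters[kmeans.labels_ == label].mean(axis=0)
    print(f"segment {label}: mean feature profile {np.round(profile, 2)}")
# In a targeting pipeline, each segment would then receive a message
# variant tuned to its dominant issues and emotional triggers.
```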
In practice, large language models can mimic human speech styles convincingly enough that once-rigid propaganda becomes a personal message that feels intimate. According to an analysis by the Brookings Institution (https://www.brookings.edu/articles/the-impact-of-generative-ai-in-a-global-election-year/), AI-generated political content now approaches human writing in its persuasiveness. The combination of linguistic fluency and targeting algorithms lets messages be steered subtly toward specific groups.
This phenomenon endangers the mechanism of democratic accountability. When every voter receives a different version of reality, the collective will an election is meant to express splinters into inconsistent pieces of information. A campaign is no longer an exchange of ideas but a one-way manipulation operation.
Increased Polarization Through Customized Content
Voters who receive content that validates their beliefs tend to become trapped in an information bubble. Algorithms reinforce existing biases, heating up the political space and making common ground harder to find. The effect is worse when AI propaganda is used to trigger negative emotions such as anger or fear: psychologically, messages that invoke threat spread more easily and are more likely to shape opinion.
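The dynamic can be illustrated with a deliberately simplified simulation. The sketch below assumes an engagement model in which the feed serves content slightly more extreme than the user's current position and exposure pulls the position toward what was served; every parameter is invented for illustration.

```python
# Minimal feedback-loop sketch: a feed that serves slightly more
# extreme content than the user's stance, which in turn shifts the
# stance. Positions live in [-1, 1]; all constants are illustrative.
def simulate(leaning=0.05, pull=0.3, extremity_bias=0.15, steps=40):
    trace = [leaning]
    for _ in range(steps):
        direction = 1.0 if leaning >= 0 else -1.0
        served = max(-1.0, min(1.0, leaning + extremity_bias * direction))
        leaning += pull * (served - leaning)  # opinion drifts toward the feed
        trace.append(leaning)
    return trace

trace = simulate()
print([round(x, 2) for x in trace[::10]])  # mild start, saturates near +1.0
```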
Unconscious Manipulation of Society
Most voters are unaware that the messages they see have been personalized to an extreme degree. They take the information to be common opinion rather than the product of algorithmic design. This is what makes AI-based microtargeting so dangerous: the system not only disseminates messages but also shapes how society understands political reality.
Implications for Democratic Countries
When political content is shaped by systems that prioritize persuasive efficiency over truth, the entire democratic ecosystem is threatened. Voters no longer stand on an equal informational footing; they become subjects of a massive, ongoing narrative experiment.
Deepfake and Autonomous Influence Operations
AI propaganda has also entered a new chapter through deepfakes and political bots. The most striking case occurred in New Hampshire in January 2024, when thousands of voters received fake robocalls imitating President Joe Biden's voice and urging them not to participate in the primary. The incident shows how easily a malicious actor can attack voter participation with a single digital operation.
Nor is it an isolated case. Deepfake images and videos have surged sharply across many countries, and generative AI systems make the fakes realistic enough to be hard to distinguish from genuine recordings. Researchers at Georgia Tech have reported that the number of deepfakes rises drastically in each election cycle, especially as voting day approaches.
Generative Bots and Astroturfing Operations
Furthermore, generative bots can mimic human communication patterns, making fake accounts appear more authentic than ever. They can debate, respond to comments, and produce opinions that look organic. Astroturfing, the manufacture of an illusion of public support, becomes an effective tool for spreading particular political narratives.
AI Text: A More Subtle Threat
While visual deepfake detection has begun to mature, AI text detection still lags far behind. Language models that can mimic human conversational style open the door to textual disinformation at an unprecedented scale, and many platforms still lack adequate systems for identifying AI-generated messages.
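As a concrete example of why text detection is hard, one common screening heuristic scores how "predictable" a passage looks to a reference language model, since machine-generated text often has unusually low perplexity. The sketch below uses GPT-2 via the Hugging Face transformers library purely as an illustration; this heuristic is well known to be weak and easily defeated.

```python
# Perplexity-based screening heuristic: machine-generated text tends
# to score as more "predictable" to a language model than human prose.
# The signal is weak and unreliable; it is shown for illustration only.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()  # mean per-token perplexity

sample = "The quick brown fox jumps over the lazy dog."
print(f"perplexity: {perplexity(sample):.1f}")  # lower = more model-like
```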
Democratization of Disinformation
The ease of creating manipulative content has lowered the barrier to entry for state and non-state actors alike. Anyone can launch an influence operation with minimal capital, which is why experts have dubbed this era the "democratization of disinformation."
Impact on the 2024–2026 Election Cycle
Although AI's impact on the 2024 elections did not reach the apocalyptic levels predicted a few years earlier, emerging trends point to a real threat. Experts warn that this lull can breed a false sense of security: AI technology is evolving much faster than the regulation meant to govern it.
Three major trends bear watching: first, AI content is becoming more persuasive and more emotionally charged; second, the volume of AI-generated content is rising sharply and circulating across platforms; third, the public is beginning to suffer information fatigue. When people feel unable to tell what is true, they tend to withdraw from politics altogether, an effect more damaging than the disinformation itself.
Global Rivalry and the Battle Over AI Standards
At the geopolitical level, US-China competition in AI development deepens global fragmentation. The United States tightens export controls to preserve technological supremacy, while China expands investment in strategic industries. Powers such as the EU, Japan, the UAE, and India act as swing states in setting global standards.
This fragmentation of global AI standards hampers efforts to build a harmonized governance framework; many countries must choose between the strict EU model and the looser US approach.
Ongoing Fragmentation of International AI Regulations
The European Union, through the EU AI Act, leads in establishing comprehensive rules for AI use; the Act prohibits profiling based on political beliefs and bans systems that manipulate free will. The United States, by contrast, remains stuck with a patchwork of state-level rules, while China focuses on content censorship through its Deep Synthesis Regulations.
The absence of a uniform global framework makes the international information space increasingly vulnerable to disinformation.
Public Literacy as Systemic Defense
Experts agree the solution is not purely technical. Watermarking, deepfake detection, and model restrictions cannot be the only lines of defense; public resilience must be strengthened through media literacy and digital skepticism. At the same time, media literacy must not substitute for the obligation of technology platforms and regulators to provide structural protection.
Society must be equipped to distinguish credible sources, understand algorithmic bias, and recognize narrative manipulation.
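Of the technical measures, watermarking at least has a concrete statistical shape. The sketch below illustrates the detection side of a "green list" text watermark in the style proposed by Kirchenbauer et al. (2023): it assumes the generator biased its sampling toward tokens whose hash, seeded by the previous token, falls in a favored set, so watermarked text over-represents those tokens. The hash scheme, function names, and parameters are illustrative assumptions, not any vendor's actual implementation.

```python
# Toy detector for a green-list text watermark. Assumes the generator
# favored tokens t where hash(prev_token, t) lands in a "green" set of
# fraction gamma; unwatermarked text hits green at rate ~gamma, so a
# high z-score suggests the watermark is present.
import hashlib
import math

def is_green(prev: str, tok: str, gamma: float = 0.5) -> bool:
    h = hashlib.sha256(f"{prev}|{tok}".encode()).digest()
    return h[0] < 256 * gamma  # token is "green" with probability ~gamma

def watermark_z_score(tokens: list[str], gamma: float = 0.5) -> float:
    n = len(tokens) - 1  # number of (prev, tok) pairs tested
    greens = sum(is_green(p, t, gamma) for p, t in zip(tokens, tokens[1:]))
    # z-score of the observed green count against Binomial(n, gamma)
    return (greens - gamma * n) / math.sqrt(n * gamma * (1 - gamma))

tokens = "the committee voted to approve the measure today".split()
print(f"z = {watermark_z_score(tokens):.2f}")  # near 0 for ordinary text
```

A real deployment would depend on the model vendor embedding the matching bias at generation time; the detector alone cannot watermark anything, which is one reason experts treat it as only a partial defense.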