THE ROLE OF GENERATIVE AI IN UNDERMINING ELECTORAL INTEGRITY: A STUDY ON AI-DRIVEN ELECTION INTERFERENCE

DOI: http://dx.doi.org/10.31703/gpr.2025(X-I).08      Published: Mar 2025
Authored by: Ali Imran, Muhammad Irfan Ali, Abdur Rehman

Article 08 | Pages: 89-106

    Abstract

    The rapid advancement of generative artificial intelligence (AI) technologies presents a transformative yet perilous frontier in the domain of electoral integrity. As tools like deepfakes and large language models (LLMs), including ChatGPT, become increasingly accessible, they offer new avenues for information manipulation, narrative distortion, and psychological influence at unprecedented scale and sophistication. This paper investigates the multifaceted impact of generative AI on democratic processes, focusing on three core dimensions: the erosion of voter trust through deepfake-driven disinformation campaigns; the weaponization of LLMs to manufacture and amplify persuasive electoral narratives; and the pressing need for international governance mechanisms to regulate and mitigate AI-fueled election interference. Drawing from recent electoral events, cross-national case studies, and emerging empirical research, the study reveals how deepfakes are reshaping public perception by blurring the line between reality and fabrication. Simultaneously, it explores how LLMs are being deployed to automate propaganda, target voter subgroups with hyper-personalized messaging, and exploit linguistic and cognitive biases.

    Key Words

    Generative AI, Deepfakes, Large Language Models (LLMs), Electoral Integrity, International Governance, Disinformation Campaigns, Voter Trust, Election Interference

    Introduction

    The intersection of artificial intelligence (AI) and democratic processes represents one of the most consequential technological shifts in modern political history. As democratic societies become increasingly dependent on digital communication and information-sharing, AI is playing a larger role in political conversations, campaign strategy, and how news spreads. This brings new possibilities, but many of them also raise serious concerns (Allen Lab for Democracy Renovation Fellow, 2025).

    AI's role in elections has changed dramatically. No longer confined to simple tasks, it is now central to crafting political narratives and shaping public opinion; alarmingly, it can even damage the fairness of democratic elections. AI is no longer on the sidelines; it sits at the center of the process (Stanford Graduate School of Business, 2024).

    Generative AI is a class of technology that can create content on its own, such as text, images, and videos, that looks and feels like it was made by humans. Two notable examples are deepfakes, which use complex algorithms to create realistic synthetic media, and LLMs, which can generate text that is both persuasive and contextually relevant. The capabilities of these tools extend beyond simple automation; they allow for sophisticated, scalable, and personalized manipulation of political information, often without the audience realizing the content is machine-generated (Brennan Center for Justice, 2023).

    The convergence of AI and elections has accelerated over the last five years, a period of rising global concern about election meddling, online warfare, and democratic backsliding. Political actors of all kinds, from regular campaign teams to state-backed hackers and extremist groups, are using AI in new ways: to manufacture the appearance of consensus, to spread false information, and to flood the internet with fabricated content designed to confuse, deceive, or play on voters' feelings (International IDEA, 2024).

    Initially, concerns about digital threats to elections focused on cybersecurity and foreign hacking, particularly during landmark events like the 2016 U.S. presidential election or the Brexit referendum. Early responses were largely defensive, centered on protecting infrastructure such as voting machines from attack. Generative AI marks a fundamental shift: instead of hacking into systems, an actor can actively create and spread narratives, controlling the information voters use to make choices. The focus moves from attacking the system to attacking the voter's perception of reality (Regaining Power Over AI, 2025).

    Political communication is becoming more personalized, more targeted, and more tech-savvy. This transformation is changing the way politicians connect with voters and the way campaigns are run. Generative AI technologies are uniquely positioned to exploit this transformation because they excel at content generation, contextual adaptation, and mass personalization. As AI becomes more integrated into electoral ecosystems, it is no longer just a tool of convenience; it has become a strategic asset and in some cases, a digital weapon (ISPI, 2024).

    The study of generative AI and its implications for electoral integrity is not just timely, it is vital. Democracies work best when people have good information, trust how they get that information and can see how political decisions are made. But generative AI can mess with all of this. It can change what people think is true. It can twist facts (Center for Democracy and Technology, 2024).

    Deepfakes, for instance, exploit the human brain’s instinctive trust in visual stimuli, while AI-generated text can simulate authority and credibility (Huschens et al., 2023).

    From a legal and regulatory perspective, the rapid proliferation of generative AI also presents a governance dilemma. Current laws and regulations surrounding elections, media, and online platforms are struggling to keep up with the rapid spread of AI-driven disinformation. Generative AI is also inherently borderless: an actor can interfere with another country's election remotely and covertly, making attribution and punishment extremely difficult. This global reach makes it much harder to stop election interference and hold those responsible accountable (Taeihagh, 2025).

    The threat posed by generative AI in elections can be understood through two core technologies: deepfakes and LLM-driven text generation. Though distinct in their outputs, both operate on the same underlying principle: synthesizing content convincing enough to pass as authentic (Riedl, 2024).

    Deepfakes are AI-generated videos, images, or audio recordings created using techniques like Generative Adversarial Networks (GANs). When it comes to elections, deepfakes are a real threat. They can create fake videos that look like a candidate saying or doing something they didn't. They can also make it seem like someone important is supporting a candidate when they aren't. Because these fakes look so real, people tend to believe them. Our brains are naturally wired to trust what we see with our own eyes. This makes deepfakes especially harmful in the political arena. In high-stakes moments—such as just before election day—a strategically released deepfake can shift public opinion, cause reputational damage, or incite unrest before it can be fact-checked (Insikt Group, 2024).

    The rise of deepfakes has a profound impact on how we perceive visual evidence. It's not just about fake videos themselves, but also about the doubt they cast on everything else. This phenomenon is known as the "liar's dividend." Essentially, as deepfakes become more prevalent, trust in all video and photographic evidence starts to erode. The consequences are far-reaching. Political leaders, for instance, might try to discredit genuine footage by labeling it as "fake." By doing so, they can avoid being held accountable and fuel public skepticism, making it increasingly difficult to discern fact from fiction (Schiff et al., 2024).

    LLMs like ChatGPT present a parallel threat through textual manipulation. These AI systems learn from huge amounts of text written by people. Because of this, they can create content that sounds very natural and can easily change its style to fit different situations (Urman & Makhortykh, 2024).


    Research Questions

    1. How do deepfakes and AI-generated disinformation campaigns reshape voter perceptions and trust in democratic electoral processes?

    2. How can Large Language Models (LLMs), such as ChatGPT, be weaponized to influence electoral narratives?

    3. What role should international governance mechanisms play in regulating the use of generative AI in elections, and how can global cooperation be fostered to prevent cross-border AI-driven electoral manipulation?

    Literature Review

    With the rapid advancement of generative AI, the information warfare landscape has entered a new era. Deepfakes, AI-created videos of startling realism, can show people doing or saying things that never actually happened, directly threatening our ability to trust the video and audio we see and hear. Westerlund (2019) warned of deepfakes’ destabilizing potential in democratic societies, especially during election cycles when timing and virality are crucial. Maras and Alexandrou (2019) further examined the forensic limitations in detecting such content, emphasizing how the growing realism of deepfakes is outpacing the development of detection technologies.

    The landscape of misinformation is evolving, and a new player has entered the scene. Large Language Models (LLMs), such as OpenAI's ChatGPT, GPT-4, and Anthropic's Claude, are shifting the focus from visual misinformation, like fake images and videos, to the realm of text and language. Hao and Wu (2023) demonstrated that LLMs can be manipulated through prompt engineering to generate politically biased content, simulate ideological debates, and automate mass commentary campaigns. These systems can converse in ways that sound convincingly human (Hao & Wu, 2023; Zhou et al., 2024; OpenReview, 2024).


    Nature and Evolution of Deepfakes in Politics

    Deepfakes refer to synthetic media—most often videos, audio, or images—that use advanced deep learning models, particularly Generative Adversarial Networks (GANs), to create hyper-realistic but fabricated content. What began as a niche technology for entertainment and experimental research has rapidly evolved into a political threat vector, enabling malign actors to manipulate public discourse at unprecedented scale and realism (Babaei et al., 2025; Batista, 2025; Ben Aissa et al., 2024).
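    To make the GAN mechanism concrete, the sketch below shows the adversarial setup in PyTorch at its most minimal: a generator mapping random noise to images and a discriminator trained to tell real from fake. It is an illustrative toy under our own assumptions (64x64 grayscale images, fully connected layers), not a reconstruction of any actual deepfake pipeline, which typically uses face-specific encoders and far larger models.

```python
# A minimal, illustrative GAN sketch in PyTorch; dimensions and names are
# ours, chosen for brevity rather than realism.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a random latent vector to a synthetic 64x64 image."""
    def __init__(self, latent_dim: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 64 * 64), nn.Tanh(),  # pixel values in [-1, 1]
        )
    def forward(self, z):
        return self.net(z).view(-1, 1, 64, 64)

class Discriminator(nn.Module):
    """Scores how 'real' an image looks; the generator learns to fool it."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),  # raw logit: real vs. fake
        )
    def forward(self, x):
        return self.net(x)

# Adversarial objective: D is trained to separate real from generated
# images, while G is trained to make D label its outputs as real.
G, D = Generator(), Discriminator()
z = torch.randn(8, 100)   # batch of latent vectors
fake = G(z)               # synthetic images
g_loss = nn.BCEWithLogitsLoss()(D(fake), torch.ones(8, 1))  # G's objective
```

    The political danger stems from exactly this loop: the generator is optimized until the discriminator, standing in for a human observer, can no longer distinguish its output from authentic footage.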

    In the political domain, deepfakes are distinct in their intentionality and precision. They are designed not just to mislead, but to amplify polarization, undermine political candidates, and destabilize electoral processes. Their persuasive power lies in their realism—visual and auditory mimicking that can circumvent critical thinking mechanisms and provoke emotional reactions (Brennan Center for Justice, 2023; Insikt Group, 2024; Appel & Prietzel, 2022).

    Early iterations of political deepfakes included dubbed videos or simple visual edits. However, current versions are highly sophisticated, capable of generating entire speeches, facial expressions, and contextually relevant dialogue that mimic the style, tone, and voice of real individuals. The increasing availability of open-source tools and low entry barriers further exacerbate the threat, allowing even non-experts to deploy deepfakes for political interference (Insikt Group, 2024; Appel & Prietzel, 2022; Brennan Center for Justice, 2023).


    Case Studies of Verified Deepfake Incidents During Elections

    Several high-profile incidents across different electoral contexts illustrate the potency of deepfakes in real-world political environments:


    India, 2020 Delhi Elections

    • A video circulated widely featured BJP politician Manoj Tiwari, who seemingly spoke multiple languages to appeal to different voter segments.

    • An investigation by Vice and AltNews revealed the videos were AI-manipulated using deepfake software. Though framed as a legitimate outreach innovation, it raised ethical and electoral transparency concerns (Vice, 2020; Straits Times, 2023).


    United States, 2020 Presidential Election (Threat Scenarios)

    • While no viral deepfake was confirmed during the election itself, intelligence reports from the U.S. Department of Homeland Security and Stanford Internet Observatory warned of ongoing testing of deepfake technology by foreign adversaries, including Russia and China.

    • Deepfake-style manipulated media were also used to distort statements made by President Biden and Donald Trump, though these were technically "shallow fakes" (manipulated real footage) (Center for Informed Public, 2020; Malwarebytes Labs, 2020).


    Ukraine, 2022 Conflict Disinformation Campaign

    • A deepfake video surfaced showing President Volodymyr Zelensky allegedly urging Ukrainians to surrender to Russian forces.

    • Meta and other platforms removed the video, confirming it was a foreign-sponsored influence operation.

    • Though not tied to an electoral event, this case revealed how deepfakes can disrupt political legitimacy and morale during times of democratic stress (DW, 2022; Bronovytska, 2024).

    These incidents underscore a growing trend: synthetic media is shifting from a novelty to a tactical political tool, often eluding traditional fact-checking mechanisms and regulatory frameworks.


    Psychological and Social Effects on Voter Trust

    Deepfakes exploit the cognitive vulnerabilities of voters by presenting falsified content that appears viscerally convincing. This triggers affective polarization—a condition in which emotional responses override factual reasoning. The consequences include:

    • Epistemic confusion: Voters struggle to distinguish fact from fiction, leading to skepticism about all media.

    • “Truth decay”: Repeated exposure to manipulated content causes desensitization and loss of confidence in democratic information systems.

    • “Liar’s dividend”: Politicians can now dismiss real, damaging evidence as “deepfakes,” undermining journalistic accountability.

    A 2021 study published in the Harvard Kennedy School's Misinformation Review found that even when participants were told a video was synthetic, up to 27% still perceived it as credible, and over 40% were unsure, highlighting the residual impact of manipulated visuals on memory and perception. Moreover, an MIT Media Lab experiment showed that false videos were shared six times faster than real ones, compounding their viral potential and making real-time correction mechanisms less effective.


    Survey and Data-Backed Insights on Public Perception

    Empirical data shows growing public awareness of deepfakes—alongside rising fear and distrust in political communications.

    According to a Pew Research Center 2022 survey:

    • 63% of Americans were aware of deepfakes.

    • 77% believed deepfakes posed a major threat to the integrity of elections.

    • 53% expressed doubt about their ability to identify real from fake political content.

    In the EU DisinfoLab's 2023 report, nearly 48% of surveyed citizens across France, Germany, and Italy reported seeing or suspecting manipulated content related to political candidates during national elections.

    A Microsoft Deepfake Perception Index (2021) noted that:

    • Young voters (18–35) were more vulnerable to persuasive deepfake content, especially when presented in a meme or short-video format (e.g., TikTok).

    • Deepfakes often trigger outrage or humor, both of which lead to increased sharing regardless of truthfulness.

    These insights collectively indicate that deepfakes erode the social contract between voters and democratic institutions, even in cases where the manipulation is later debunked.


    Response Frameworks from Election Commissions and Digital Platforms

    Given the escalating threat, various national and institutional actors have begun to develop response frameworks—though these remain inconsistent and often reactive:


    Election Commissions and Governments

    • India’s Election Commission issued guidelines requiring political parties to disclose AI-manipulated content and watermark deepfakes (Election Commission of India, 2025); a toy watermarking sketch follows this list.

    • The U.S. Federal Election Commission (FEC) is deliberating proposals to ban “materially deceptive synthetic content” under campaign laws (FEC, 2023).

    • The EU Digital Services Act (DSA) mandates platforms to remove manipulated media but lacks specificity on generative AI (European Commission, 2024).
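    As an illustration of the watermarking idea behind such guidelines, the minimal sketch below embeds a provenance bit-string into an image's least-significant bits. It is our own toy example: real disclosure schemes rely on robust, often cryptographic provenance standards, since naive LSB marks do not survive compression or re-encoding.

```python
# A toy least-significant-bit (LSB) watermark, illustrating the concept of
# a machine-readable "AI-generated" mark; not a production scheme.
import numpy as np

def embed_mark(img: np.ndarray, bits: list[int]) -> np.ndarray:
    """Write provenance bits into the LSB of the first pixels."""
    out = img.copy().ravel()
    for i, b in enumerate(bits):
        out[i] = (out[i] & 0xFE) | b  # clear the LSB, then set it to b
    return out.reshape(img.shape)

def read_mark(img: np.ndarray, n: int) -> list[int]:
    """Recover the first n embedded bits."""
    return [int(v & 1) for v in img.ravel()[:n]]

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
marked = embed_mark(img, [1, 0, 1, 1, 0, 1, 0, 1])  # hypothetical tag
print(read_mark(marked, 8))  # [1, 0, 1, 1, 0, 1, 0, 1]
```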


    Tech Platforms

    • Meta (Facebook, Instagram) and YouTube have policies to remove misleading deepfakes, but enforcement is uneven (Meta, 2024).

    • Twitter/X labels manipulated content, but not always reliably (Twitter, 2020).

    • TikTok banned synthetic political content outright, but moderation gaps remain (TikTok, 2022).

    Some platforms have invested in deepfake detection tools, often AI-based (e.g., Microsoft’s Video Authenticator), but these tools are far from foolproof, and the arms race between creators and detectors is ongoing (Microsoft, 2020).
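    The following sketch hints at what one small ingredient of such detection tools might look like: a simple inter-frame consistency check. It is illustrative only; the internals of proprietary systems like Video Authenticator are not public, and production detectors combine many stronger, learned signals.

```python
# Illustrative only: a toy frame-consistency check that flags abrupt
# inter-frame changes, one weak signal (splices, frame-level tampering,
# or just ordinary scene cuts) among many that real detectors combine.
import cv2
import numpy as np

def frame_inconsistency_scores(video_path: str) -> list[float]:
    """Return a per-frame mean absolute difference score."""
    cap = cv2.VideoCapture(video_path)
    scores, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if prev is not None:
            scores.append(float(np.mean(np.abs(gray - prev))))
        prev = gray
    cap.release()
    return scores

def flag_suspect_frames(scores: list[float], k: float = 3.0) -> list[int]:
    """Flag frames whose score exceeds mean + k standard deviations."""
    arr = np.asarray(scores)
    thresh = arr.mean() + k * arr.std()
    return [i for i, s in enumerate(scores) if s > thresh]
```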


    Weaponizing LLMs: Manipulating Narratives at Scale

    Capabilities of ChatGPT-like Models in Generating Persuasive, Plausible Content

    Large Language Models (LLMs) like OpenAI’s ChatGPT, Meta’s LLaMA, and Google’s Gemini represent a seismic leap in natural language processing, capable of generating coherent, contextually relevant, and persuasive text that mirrors human discourse. Trained on massive datasets comprising books, articles, web content, and social media interactions, these models can generate or summarize political arguments, simulate ideological positions, and even emulate regional dialects or cultural references with alarming fluency (Bansal et al., 2024; AIContentfy, 2024; Horsey, 2025).

    Key attributes of LLMs that lend themselves to political manipulation include:

    • Contextual Sensitivity: LLMs can tailor outputs based on prompts, making them ideal for crafting targeted misinformation aimed at specific voter segments (e.g., age, region, political leaning).

    • Emotion Engineering: Prompt tuning can be used to manipulate sentiments—such as anger, fear, or nationalism—thereby intensifying polarization and identity-based politics.


    Examples of Coordinated Manipulation: Comment Flooding and Fake News Automation

    Several instances and theoretical models illustrate how LLMs can be co-opted into coordinated disinformation architectures. These include:


    Comment Flooding & Forum Hijacking

    • LLMs can be used to generate high-volume, low-effort comments that flood online forums, news comment sections, or public consultations.

    • During the U.S. net neutrality debate, millions of fake comments were submitted to the FCC, some later revealed to be bot-generated (BuzzFeed News, 2019).


    Fake News Automation

    • GPT-based models have demonstrated the ability to generate false news stories that are indistinguishable from real journalism.

    • A 2023 study by the University of Amsterdam showed that GPT-3 could generate highly convincing disinformation narratives related to COVID-19, climate policy, and migration.

    • These articles passed linguistic authenticity tests and triggered comparable trust ratings to legitimate news in blind tests (Vykopal et al., 2023).


    Echo Chambers and Narrative Hardening

    Voters are increasingly exposed to one-sided, emotionally charged information, deterring reasoned discourse and increasing mistrust of opposing views (Cinelli et al., 2021).


    Chatbot Armies and Social Engineering

    • Political actors or foreign adversaries can deploy LLM-powered chatbots across messaging apps, forums, and comment threads.

    • The phenomenon of “astroturfing”—artificially manufacturing grassroots support—is significantly enhanced by generative AI (Marcellino et al., 2023).


    Algorithmic Reinforcement

    Generative narratives, once seeded, are boosted by platform recommender systems based on user engagement (Bontridder & Poullet, 2021).
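    A toy example makes this reinforcement loop concrete: if a feed ranks purely by engagement, a well-seeded synthetic post outranks sober, accurate material. The weights and fields below are hypothetical, chosen only to illustrate the dynamic.

```python
# A toy engagement-ranked feed: once a seeded narrative attracts clicks
# and shares, an engagement-weighted ranker surfaces it more, which in
# turn earns it more engagement. All weights here are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    is_synthetic: bool  # unknown to the ranker in practice

def engagement_score(p: Post) -> float:
    # Shares weighted above likes: resharing propagates content further.
    return p.likes + 3.0 * p.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # The ranker optimizes engagement, not veracity: a well-seeded
    # synthetic post outranks an accurate but less provocative one.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Measured policy analysis", likes=40, shares=2, is_synthetic=False),
    Post("Outrage-bait fabricated quote", likes=90, shares=55, is_synthetic=True),
])
print([p.text for p in feed])  # the synthetic post leads the feed
```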


    Risks of Language Manipulation in Multi-lingual Democracies:

    Localized Disinformation

    Political actors may deploy localized AI-generated disinformation to undermine minority or opposition narratives, especially in linguistically fragmented constituencies (Quelle et al., 2023).


    Targeted PsyOps

    LLM-generated content in minority languages often escapes mainstream moderation and fact-checking protocols, allowing unregulated narrative manipulation in vulnerable communities (Dubinsky & Starr, 2022; Muirhead, 2001).


    Inconsistent Platform Policies

    Many social platforms lack moderation capacity for non-English content, making regional-language deepfakes or fake news easier to propagate (Global Witness, 2023; Election Integrity Partnership, 2020).


    India’s 2024 Elections

    Observers noted the spread of AI-generated narratives tailored in Bengali, Kannada, and Marathi, many of which targeted religious or caste tensions—highlighting how multi-lingual architecture becomes a surface of attack in information warfare (Dhanuraj et al., 2024; Gupta & Mathews, 2024).


    Defending Democracy in the Age of Generative AI

    The convergence of generative artificial intelligence and electoral systems has ushered in a new frontier of democratic vulnerability. This research has critically explored how emerging technologies such as deepfakes and large language models (LLMs) like ChatGPT are not merely tools of innovation, but increasingly, instruments of manipulation capable of influencing voter perception, undermining electoral integrity, and circumventing existing regulatory and ethical safeguards. As democracies around the world navigate this complex terrain, this study provides both a conceptual and empirical foundation to understand, anticipate, and respond to the weaponization of AI in electoral processes.

    Key Themes and Gaps in Literature

    The reviewed literature reveals several overarching themes:

    • AI as Amplifier and Innovator: Generative AI extends both the scale and innovation of political manipulation. It introduces new content forms (e.g., deepfake videos, LLM-generated essays, synthetic memes) that can mimic human creativity and intent.

    • Governance Gap: There is a widening gap between technological capability and legal or ethical oversight. Most governance frameworks remain reactive and fragmented.

    • Psychological Subtlety: Unlike earlier propaganda, AI-generated disinformation often works through subtle framing and emotional priming rather than blatant falsification—making it harder to detect, resist, or fact-check.

    • Comparative Complexity: Strategies of AI weaponization vary based on platform design, regional regulatory environments, and political culture, demanding localized analysis within a global framework.

    • Detection and Resilience: While emerging tools for deepfake and LLM detection exist, the adversarial evolution of generative AI continues to outpace them.

    Despite these insights, key gaps remain—particularly in empirically measuring the real-world influence of AI-generated content on electoral outcomes. Additionally, the ethical dimensions of AI-assisted political messaging, especially when voluntarily deployed by campaign teams, are underexplored.


    Research Objectives

    1. To examine the impact of deepfakes and AI-generated disinformation campaigns on voter perceptions and public trust in democratic electoral processes.

    2. To analyze the potential misuse of Large Language Models (LLMs), such as ChatGPT, in shaping and manipulating electoral narratives.

    3. To explore the role of international governance mechanisms in regulating generative AI technologies during elections and to identify strategies for fostering global cooperation against cross-border AI-driven electoral manipulation.


    Theoretical Framework

    Selected Theoretical Framework: Framing Theory


    Introduction to Framing Theory

    Framing Theory, originally developed in media studies and political communication (Goffman, 1974; Entman, 1993), posits that how information is presented—its frame—shapes how audiences interpret it. In political contexts, frames influence voters’ perceptions of legitimacy, urgency, causality, and morality.

    Framing is not merely about presenting facts, but about selecting certain aspects of reality to make them more salient in a communicating text. As Entman (1993) defined, framing involves “selection and salience”—highlighting some pieces of information while obscuring others to promote a particular interpretation.


    Why Framing Theory for Generative AI and Electoral Integrity?

    Framing Theory is especially suited for analyzing AI-generated content (LLMs, deepfakes, synthetic personas) because:

    • AI tools can generate or reinforce specific political frames—portraying candidates as saviors or villains, issues as urgent or trivial, and institutions as trustworthy or corrupt.

    • Deepfakes visually frame political actors, shaping how viewers emotionally and cognitively process them.

    • ChatGPT-style bots can flood online spaces with consistent framing, creating an illusion of widespread consensus or dissent.

    In electoral interference, the frame is often weaponized—designed to erode trust, promote polarization, or fabricate scandals. Unlike traditional political advertising, these AI-generated frames can appear user-generated, authentic, or even neutral, making them more effective and insidious.


    Core Concepts of Framing Theory in This Context:

    Diagnostic, Prognostic, and Motivational Frames (Snow & Benford, 1988)

    Framing operates through three key processes:

    • Diagnostic Framing: Identifies a problem and assigns blame. E.g., an AI-generated video claims that a politician was involved in fraud, framing them as corrupt and responsible for national decline.

    • Prognostic Framing: Suggests solutions or strategies. E.g., AI-driven propaganda may frame "electing outsiders" or "draining the swamp" as the solution to fabricated problems.

    • Motivational Framing: Offers rationales for taking action. E.g., LLMs flooding online spaces with emotional appeals to boycott elections or "take back the country."

    Together, these reinforce narrative legitimacy, a critical factor in democratic opinion formation.


    Emotional vs. Rational Framing

    AI-generated content often favors emotional framing (fear, anger, pride) over rational arguments. Deepfakes, by their visual nature, are uniquely suited to evoke strong emotional reactions, often overriding logical deliberation.

    For instance:

    • A deepfake video showing a candidate expressing racist views—even if false—invokes anger or fear and shifts the election narrative entirely.

    • ChatGPT-generated stories can simulate grassroots testimonials to reinforce partisan worldviews and distrust of institutions.

    Such emotionally charged frames are more likely to spread (virality) and influence low-information voters.


    Cultural Resonance and Identity Framing

    Framing effectiveness depends on how well the message aligns with existing cultural narratives and identity markers. Generative AI can tailor messages to micro-audiences, using regional, ethnic, or ideological cues to reinforce in-group/out-group dynamics. For example:

    • Generative models targeting U.S. evangelical voters with content suggesting divine endorsement of a candidate.

    • Deepfakes portraying opposition leaders mocking local religious or ethnic groups to incite resentment.

    By framing political actors and issues through cultural resonance, AI-generated disinformation can manipulate not just beliefs, but identity.


    Implications for Democratic Integrity

    Framing Theory shows that truth is not always about facts—but how facts are framed. In the context of AI-driven election interference:

    • The legitimacy of electoral outcomes can be undermined not by vote-rigging, but by narrative manipulation.

    • Public trust collapses when consistent frames suggest elite conspiracy, foreign meddling, or systemic bias.

    • Democratic institutions are vulnerable to disinformation frames that present them as opaque, illegitimate, or irrelevant.

    Thus, the weaponization of generative AI through framing becomes a critical threat to deliberative democracy.


    The Need for Counter-Framing and Regulation

    To counter these threats, framing theory suggests:

    • Pre-bunking and inoculation strategies: Public education to recognize manipulative frames.

    • Narrative audits: Algorithms to detect repeated, coordinated framing patterns (a minimal detection sketch follows this list).

    • Platform accountability: Mandates for labeling AI-generated content and demoting manipulative frames.
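    The "narrative audit" idea can be illustrated with a minimal sketch, assuming that coordinated campaigns often reuse near-identical wording: pairwise TF-IDF cosine similarity flags clusters of suspiciously similar messages. Real audits would add timing, account-network, and cross-platform signals.

```python
# A minimal narrative-audit sketch: flag near-duplicate messages via
# TF-IDF cosine similarity. The threshold is an illustrative choice.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def coordinated_pairs(messages: list[str], threshold: float = 0.8):
    """Return index pairs of messages that are near-duplicates."""
    tfidf = TfidfVectorizer().fit_transform(messages)
    sim = cosine_similarity(tfidf)
    return [
        (i, j)
        for i in range(len(messages))
        for j in range(i + 1, len(messages))
        if sim[i, j] >= threshold
    ]

msgs = [
    "Candidate X rigged the vote, share before they delete this!",
    "Candidate X rigged the vote - share before they delete this!!",
    "Turnout was high in district 4 today.",
]
print(coordinated_pairs(msgs))  # [(0, 1)]
```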

    At the international level, a governance framework must:

    • Establish global norms on election-related framing.

    • Coordinate intelligence sharing on narrative manipulation.

    • Impose sanctions or countermeasures on cross-border electoral framing operations.

    Research Methodology

    Methodological Overview

    This study employs a qualitative research approach, centered on qualitative content analysis, to thoroughly examine the impact of generative AI technologies—including deepfakes, Large Language Models (LLMs) like ChatGPT, and synthetic media—on electoral integrity across national contexts.

    Qualitative Component: Content and Discourse Analysis

    The qualitative strand uses thematic content analysis and, optionally, critical discourse analysis (CDA) to examine how generative AI technologies construct and propagate political narratives. This component draws on Framing Theory as outlined in the theoretical framework.


    Data Sources for Qualitative Analysis:

    • AI-generated content samples (deepfakes, ChatGPT outputs, manipulated memes).

    • Social media content (public Facebook posts, YouTube videos, Telegram channels).

    • Election-related narratives tracked by digital watchdogs (e.g., EUvsDisinfo, Graphika, AltNews).


    Limitations and Delimitations

    • Access restrictions on certain platform data (e.g., Facebook post-level data) may limit the depth of network analysis.

    • The study is focused on elections in democratic or semi-democratic contexts and does not address autocratic uses of AI in elections.

    • Deepfake detection technologies are still evolving, which may impact identification accuracy in historical cases.


    Expected Contributions

    This methodology enables the study to:

    • Map how AI technologies shift the narrative landscape of elections.

    • Measure the actual spread and influence of disinformation in real time.

    • Propose actionable regulatory, technological, and communicative responses.

    • Offer comparative insight into the interplay between technology, politics, and transnational governance.


    International Governance and Regulation: Confronting AI-Driven Electoral Interference

    The Global Legal Vacuum: Inadequate Frameworks for AI in Elections

    As generative AI technologies rapidly permeate the political and electoral domain, international legal frameworks remain conspicuously underdeveloped, leaving democratic systems exposed to sophisticated digital manipulation. While there is growing recognition of the dangers posed by AI-generated disinformation, deepfakes, and algorithmic influence, no binding global legal instrument currently addresses the specific threat of AI-driven electoral interference (Council of Europe, 2024; RAND Corporation, 2022).

    This governance vacuum manifests in several key areas:

    • Lack of treaty-based obligations: Unlike cybercrime or terrorism, AI misuse in electoral systems has no existing treaty under the UN, Council of Europe, or OECD frameworks (Council of Europe, 2024; OECD, 2024).

    • Absence of international norms: Although norms like “cyber-peace” or “responsible state behavior in cyberspace” are emerging, there is no consensus on the definition or accountability for AI-mediated election interference (ASPI, 2021; UN, 2021).

    • Asymmetrical development: Advanced economies dominate the AI innovation landscape, while developing democracies remain defenseless or under-regulated, exacerbating geopolitical inequalities in disinformation resilience (UNCTAD, 2025; UNESCO, 2024).

    This absence of normative clarity enables both state and non-state actors to deploy generative AI in electoral contexts with near-complete impunity, exploiting the grey zones between freedom of speech, cyber operations, and electoral regulation.


    Analysis of Emerging Regulatory Models

    Several jurisdictions have begun experimenting with regulatory approaches aimed at controlling AI risks, though none directly resolve cross-border electoral manipulation.


    European Union – The EU AI Act

    • The EU Artificial Intelligence Act, adopted in 2024, is the world’s first major regulatory framework governing AI systems. It adopts a risk-based approach, categorizing AI applications into unacceptable, high, limited, and minimal risk tiers.

    • AI used for “subliminal manipulation” or political deception can be categorized as “high-risk.”

    • The Act requires transparency labeling for AI-generated content and disclosures for deepfakes.

    • However, the Act’s enforcement is limited to EU territory, and election-specific provisions remain vague, raising concerns about practical deterrence.

    United States – Transparency and Platform Responsibility

    U.S. initiatives have largely centered on voluntary frameworks:

    • The Blueprint for an AI Bill of Rights outlines principles of transparency, accountability, and fairness.

    • Federal Election Commission (FEC) debates on labeling AI-generated political ads are ongoing.

    Despite its technological leadership, the U.S. lacks coherent federal legislation on electoral AI manipulation, leaving gaps in enforcement, especially on social media platforms.


    Global South & Fragmented Norms

    Many emerging democracies have minimal AI regulation and lack the technical capacity to monitor or attribute AI-generated electoral content.

    In places like India, Brazil, and Nigeria, election commissions have relied on platform cooperation and judicial directives, rather than dedicated AI laws.

    Overall, current national efforts are disjointed, jurisdiction-bound, and reactive, underscoring the need for multilateral regulatory architecture.


    The Role of International Institutions:

    United Nations

    The UN Secretary-General's Global Digital Compact (2023) proposes international principles for responsible AI, including transparency and non-interference in democratic processes.

    However, enforcement is non-binding, and major powers remain divided on key provisions, especially around surveillance and state use of AI.


    OECD and G7

    The OECD AI Principles and the G7 Hiroshima Process have recommended multi-stakeholder oversight, algorithmic transparency, and responsible innovation.

    Yet, implementation remains voluntary, and electoral interference is treated more as a data protection issue than a democratic threat.


    Interpol and Transnational Policing

    Interpol has begun integrating AI threat detection in its cybercrime operations, including tools to monitor malicious AI-generated content.

    However, its role in electoral contexts is constrained by national sovereignty, and few countries report AI disinformation as a cross-border criminal offense.

    Despite growing awareness, there is still no institution with a clear mandate to address generative AI in elections as a transnational governance issue.

    Cross-Border Enforcement and the Challenge of Cyber-Sovereignty

    The enforcement of electoral safeguards in a digital world confronts fundamental jurisdictional dilemmas:

    State Sovereignty vs. Transnational Platforms: While states regulate elections domestically, platforms like Meta, Google, and TikTok operate globally, often outside local jurisdiction or enforcement reach (PolSci Institute, 2023; Chapdelaine & Rogers, 2021).

    Attribution Complexity: AI-generated disinformation is hard to trace, allowing malicious actors (including foreign governments) to operate behind layers of anonymity and proxies (Bontridder & Poullet, 2021; Saeidnia et al., 2025).

    Geopolitical Contestation: Countries like China and Russia have opposed international regulation that limits their information sovereignty or domestic AI applications (Zürn, 2020; Mishra, 2025).

    Lack of Extraterritorial Laws: Most national electoral laws are territorially bound, meaning that actors in other jurisdictions cannot be held accountable for interference unless extradition or diplomatic pressure is viable (Criddle, 2024; Kamminga, 2020).

    These constraints leave democracies vulnerable to foreign influence operations that deploy AI tools from outside national borders, while legal instruments remain too weak or siloed to respond effectively.

    Policy Proposals: Towards Treaty-Level AI Governance

    To address the global vacuum, scholars and policy experts have proposed various pathways for treaty-based international cooperation:


    A UN Convention on AI and Democracy

    Modeled on conventions against cybercrime or human trafficking, a new “AI and Electoral Integrity Convention” could establish:

    • Prohibited uses of AI in electoral contexts.

    • Minimum standards for transparency, content labeling, and auditability.

    • Cross-border cooperation mechanisms for investigation and response (Council of Europe, 2024; UN News, 2024).


    Election AI Protocols within Existing Treaties

    Add specific AI clauses to existing frameworks like the Budapest Convention on Cybercrime or the International Covenant on Civil and Political Rights (ICCPR) to prohibit electoral manipulation via synthetic content (Council of Europe, 2024; OHCHR, 2024).


    Platform-Government Compacts

    Develop binding agreements between states and platforms obligating tech companies to:

    • Share data on coordinated AI influence operations.

    • Implement global AI transparency tags for election-related content.

    • Support independent election monitoring with AI audit tools (Partnership on AI, 2024; OECD, 2025).


    Global Digital Peace Council

    Proposed by civil society groups, this body would oversee AI norms, crisis response protocols, and intervention in digital election crises, modeled on peacekeeping in the information space (UN Office for Digital and Emerging Technologies, 2024; UNSDG, 2024).

    Discussion

    Interrelation Between Deepfakes, LLMs, and Governance Gaps

    The convergence of deepfakes, large language models (LLMs) like ChatGPT, and the global governance vacuum presents a multi-dimensional threat to democratic electoral systems. Deepfakes are compromising the authenticity of visual content, allowing for fake videos and audio recordings that can impersonate public figures. Meanwhile, LLMs are taking aim at the written word, generating narratives, speeches, and false information that are eerily convincing. This symbiotic relationship between audiovisual deception and textual persuasion creates a highly potent disinformation ecosystem. The lack of synchronized international governance frameworks enables actors to exploit these tools across borders with minimal accountability.


    Long-Term Democratic Implications

    Allowing generative AI to operate unchecked in elections is a serious long-term concern: it could fundamentally reshape democratic politics and erode the public's faith in democracy. The significant risks include:

    • Erosion of Trust in Democratic Institutions: As manipulated content becomes indistinguishable from authentic media, voters may begin to mistrust not only fake content but real communications from candidates and institutions.

    • Voter Apathy and Skepticism: The consequences of manipulated narratives can be far-reaching. When people are bombarded with false information, they can become disillusioned and disconnected from the democratic process. In extreme cases, citizens may choose not to participate in elections altogether, feeling their votes won't count or that no trustworthy candidates are available.

    • Weaponization of Electoral Discourse: Political players, including foreign actors, may increasingly use advanced AI language models to generate large volumes of online content that pushes extreme viewpoints, spreads false information, or stirs up ethnic tensions. This could erode social cohesion and magnify cultural differences, especially in countries with many ethnic groups or languages.

    • Normative Changes in Campaign Strategy: The increasing use of AI tools in political campaigns poses significant risks to the democratic process. These tools can be used not only for targeting and outreach but also for crafting messages that manipulate public sentiment, often blurring the lines of ethical behavior. This shift towards sentiment-driven manipulation could undermine ideologically grounded platforms, leading to a degradation of democratic decision-making.


    Ethical Concerns in Restricting Generative AI

    The intersection of freedom of speech, censorship, and algorithmic governance raises complex ethical dilemmas. One major concern is that restrictive regulations may inadvertently silence dissenting voices, satire, or marginalized perspectives. This can be particularly problematic in polarized or authoritarian environments.

    Another issue is algorithmic discrimination and bias. AI detection systems may embed biases, disproportionately flagging content from non-Western languages or minority communities due to gaps in training data. This can exacerbate epistemic injustice and perpetuate existing social inequalities.

    The current lack of public input into AI content policies also raises concerns about the private governance of political speech. Meaningful public participation is crucial to prevent corporate interests from dictating the rules of AI governance.

    Over-reliance on AI-based censorship may undermine citizen agency, creating passive publics rather than resilient ones.

    Recommendations

    Policy-Level Recommendations

    AI Content Transparency Mandates

    Developers of large language models and visual synthesis systems (e.g., ChatGPT, DALL·E, MidJourney) should be legally required to disclose usage patterns, particularly when their tools are used at scale in electoral contexts. Such disclosures might include:

    • Disclosure of high-volume generation for political domains.

    • API-based logging of campaign-related content generation (a toy sketch follows this list).

    • Transparency dashboards for watchdogs and electoral bodies.
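    A minimal sketch of what such API-based logging might look like is given below. Every field name is hypothetical; no current provider exposes exactly this schema. The idea is simply that high-volume, politically classified generation leaves an auditable trail for regulators and watchdogs without retaining raw user content.

```python
# Hypothetical API-side usage log: an auditable record is appended when
# a generation request is classified as political. Schema is ours.
import json
import hashlib
from datetime import datetime, timezone

def log_generation_event(client_id: str, prompt: str,
                         political: bool, path: str = "audit.log") -> None:
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "client_id": client_id,
        # Hash rather than store the prompt: auditability without
        # retaining raw user content.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "political_content": political,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

log_generation_event("campaign-org-42", "Draft a post about candidate X", True)
```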


    Electoral-Specific Legislation on Generative AI Use

    Electoral commissions and legislative bodies should enact tailored policies that:

    • Prohibit unauthorized impersonation of political figures via AI.

    • Require AI-generated political ads or campaign content to be clearly labeled.

    • Mandate timely takedown of malicious generative content during the campaign silence period.


    Cross-sector Oversight Committees

    Establish multi-stakeholder AI & Elections Councils comprising regulators, civil society, political parties, tech firms, and academic experts to conduct pre-election AI threat assessments, review complaints, and ensure proportional and balanced regulatory interventions.


    Technological Countermeasures:

    AI-Driven Content Detection and Fact-Checking

    Governments and platforms should invest in advanced detection systems capable of flagging:

    • Deepfake videos and audio clips (using frame analysis and biometric inconsistency checks).

    • LLM-generated disinformation (via linguistic anomaly detection and metadata tracing).

    These tools should be integrated into content moderation workflows, enabling rapid response to coordinated disinformation campaigns.
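    As one deliberately simple illustration of linguistic anomaly detection, the toy below scores "burstiness" (variation in sentence length), which tends to be lower in machine-generated text than in human prose. This heuristic is our own and is weak in isolation; deployed detectors rely on model-based perplexity, watermark tests, and classifier ensembles.

```python
# A toy linguistic-anomaly heuristic: coefficient of variation of
# sentence lengths. Low burstiness is weak evidence of machine
# generation, with a high false-positive rate. Illustrative only.
import re
import statistics

def burstiness(text: str) -> float:
    """Return stdev/mean of sentence lengths; NaN if too few sentences."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2 or statistics.mean(lengths) == 0:
        return float("nan")
    return statistics.stdev(lengths) / statistics.mean(lengths)

score = burstiness("Sentence one is here. Sentence two is here. "
                   "Sentence three is here. Sentence four is here.")
print(f"burstiness = {score:.2f}")  # uniform lengths -> score near 0
```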


    Open-Source Detection Collaboration

    Facilitate global collaboration between AI firms and independent researchers to develop and maintain open-source tools for deepfake and LLM-text detection, especially for use in low-resource electoral contexts where proprietary tools may be unaffordable.



    Educational Campaigns and Voter Resilience Strategies:

    Public Awareness of AI-Driven Disinformation

    Public awareness campaigns should focus on:

    • Understanding how deepfakes and AI-generated narratives work.

    • Distinguishing credible content from manipulated media.

    • Encouraging healthy skepticism and fact-checking habits.

    Curriculum Integration

    Integrate AI media literacy into school and university curricula, covering topics such as synthetic media ethics, algorithmic bias, and digital source verification.

    Voter Preparedness Simulations

    Develop interactive tools and games that simulate common disinformation strategies using AI. These tools would train citizens to identify cognitive biases, detect manipulation patterns, and become more resilient to deceptive content.

    Community Fact-Checking Networks

    Support grassroots organizations and local fact-checking groups to serve as first-line monitors of AI-driven electoral manipulation, especially in rural or under-connected regions.


    International Collaboration Frameworks:

    AI Ethics Councils with Electoral Oversight

    The United Nations, OECD, and regional bodies (e.g., African Union, ASEAN, EU) should convene transnational AI Ethics Councils empowered to:

    • Create norms and principles for the responsible use of generative AI in elections.

    • Monitor and report on AI-related electoral interference globally.

    • Serve as consultative bodies during high-risk electoral cycles.


    Digital Peace Treaties

    Promote treaty-level agreements among democratic nations to prohibit:

    • Cross-border use of AI for political interference.

    • Export or sale of generative AI models to known propagandists or hostile foreign actors.

    These treaties should also include confidence-building measures, such as shared threat intelligence and joint election-monitoring missions.

    Conclusion

    Recapitulation of Key Insights

    At the heart of this inquiry lies a central concern: how is generative AI reshaping the democratic process, and what must be done to prevent its misuse? To that end, the research has illuminated several key insights:

    Deepfakes have evolved from niche novelty to strategic political weapons. As evidenced in recent elections across the U.S., India, and Europe, AI-generated videos and audio clips have been deployed to impersonate political candidates, spread false narratives, and distort public debates with alarming speed and realism.

    LLMs like ChatGPT have demonstrated unprecedented capacity for large-scale narrative generation, often used in the form of automated misinformation campaigns, social media flooding, and linguistic manipulation—especially in multilingual societies. These capabilities, when deployed maliciously, can reinforce political echo chambers and exacerbate polarization.

    There is a growing gap between technological capabilities and regulatory safeguards. While frameworks like the EU AI Act represent initial progress, most jurisdictions remain unprepared to address the global, cross-platform, and multilingual nature of AI-driven electoral interference.

    Public trust in democratic institutions is at risk. The psychological and social effects of synthetic disinformation contribute to a climate of uncertainty, cynicism, and disillusionment—conditions ripe for voter disengagement and democratic backsliding.

    International cooperation and governance mechanisms are underdeveloped. The lack of treaty-level coordination or AI-specific norms in international law leaves a regulatory vacuum that adversaries can exploit, especially in the realm of cross-border information warfare.

    The proliferation of deepfakes and generative disinformation undermines voter trust by injecting uncertainty into what is real or fake. Psychological studies show that even debunked content can leave lasting impressions, contributing to "truth decay" and fostering suspicion toward all political messaging. This environment creates fertile ground for delegitimization of electoral outcomes, regardless of factual accuracy.

    LLMs can be weaponized through coordinated content creation that saturates online discourse with synthetic narratives, influencing sentiment through scale, speed, and stylistic mimicry. From generating fake news articles to amplifying political bias through social bots, LLMs blur the line between organic discourse and algorithmic manipulation, complicating efforts at moderation and detection.

    International institutions must establish AI ethics norms, digital non-aggression pacts, and cross-border enforcement protocols. Given the global reach of generative AI tools and the borderless nature of digital information, cooperation among democracies is imperative. Mechanisms such as global AI ethics councils, Interpol coordination, and digital sovereignty frameworks can offer coordinated responses while respecting national jurisdiction.

    Closing Thought

    Democracy thrives on informed consent, deliberation, and trust. In an age where AI can simulate truth and manipulate reality, the defense of democracy must be as innovative and resilient as the technologies that threaten it. Let this research serve as a starting point for deeper inquiry, global action, and a renewed commitment to ethical technological progress in service of democratic ideals.

References

  • AIContentfy. (2024). The impact of ChatGPT on content creation and marketing. https://aicontentfy.com/en/blog/impact-of-chatgpt-on-content-creation-and-marketing

  • Aissa, F. B., Hamdi, M., Zaied, M., & Mejdoub, M. (2023). An overview of GAN-DeepFakes detection: proposal, improvement, and evaluation. Multimedia Tools and Applications, 83(11), 32343–32365. https://doi.org/10.1007/s11042-023-16761-4
  • Allen Lab for Democracy Renovation Fellow. (2025). Weaponized AI: The urgent need for global AI security standards. 
  • Appel, M., & Prietzel, F. (2022). The detection of political deepfakes. Journal of Computer-Mediated Communication, 27(4). https://doi.org/10.1093/jcmc/zmac008
  • ASPI. (2021). The UN norms of responsible state behaviour in cyberspace. 
  • Babaei, R., Cheng, S., Duan, R., & Zhao, S. (2025). Generative Artificial intelligence and the evolving challenge of deepfake Detection: A Systematic analysis. Journal of Sensor and Actuator Networks, 14(1), 17. https://doi.org/10.3390/jsan14010017
  • Bansal, G., Chamola, V., Hussain, A., Guizani, M., & Niyato, D. (2024). Transforming Conversations with AI—A Comprehensive Study of ChatGPT. Cognitive Computation, 16(5), 2487–2510. https://doi.org/10.1007/s12559-023-10236-2
  • Batista, M. M. (2025). Comparative analysis of deepfake detection models: New approaches and perspectives [Preprint]. arXiv. http://dx.doi.org/10.48550/arXiv.2504.02900
  • Bontridder, N., & Poullet, Y. (2021). The role of artificial intelligence in disinformation. Data & Policy. http://dx.doi.org/10.13140/RG.2.2.28805.27365
  • Brennan Center for Justice. (2023). Regulating AI deepfakes and synthetic media in the political arena. https://www.brennancenter.org/our-work/research-reports/regulating-ai-deepfakes-and-synthetic-media-political-arena
  • Bronovytska, Y. (2024). Deepfakes as digital propaganda: The Russian case in the war in Ukraine. Conflict, Justice, Decolonization. https://cjdproject.web.nycu.edu.tw/2024/12/04/deepfakes-as-digital-propaganda-the-russian-case-in-the-war-in-ukraine/
  • BuzzFeed News. (2019). Net neutrality fake comments: How political operatives duped Ajit Pai's FCC. https://www.buzzfeednews.com/article/jsvine/net-neutrality-fcc-fake-comments-impersonation
  • Center for Democracy and Technology. (2024). Election integrity recommendations for generative AI developers. https://cdt.org/insights/brief-election-integrity-recommendations-for-generative-ai-developers/
  • Center for Informed Public. (2020). Deepfakes and the U.S. elections: Lessons from the 2020 workshops. https://www.cip.uw.edu/deepfakes-and-the-u-s-elections-lessons-from-the-2020-workshops/
  • Chapdelaine, P., & Rogers, J. M. (2021). Contested sovereignties: states, media platforms, peoples, and the regulation of media content and big data in the networked society. Laws, 10(3), 66. https://doi.org/10.3390/laws10030066
  • Cinelli, M., De Francisci Morales, G., Galeazzi, A., Quattrociocchi, W., & Starnini, M. (2021). The echo chamber effect on social media. PNAS Proceedings of the National Academy of Sciences of the United States of America, 118(9), 1–8. https://doi.org/10.1073/pnas.2023301118
  • Council of Europe. (2024). The Framework Convention on Artificial Intelligence.
  • Criddle, E. J. (2024). Extraterritoriality’s Empire: How Self-Determination Limits Extraterritorial Lawmaking. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4939794
  • Dhanuraj, D., Harilal, S., & Solomon, N. (2024). Generative AI and its influence on India's 2024 elections. Policy Paper. https://www.freiheit.org/sites/default/files/2025-01/a4_policy-paper_ai-on-indias-2024-electons_en-4.pdf
  • Dubinsky, S., & Starr, H. (2022). Weaponizing Language: Linguistic vectors of Ethnic Oppression. Global Studies Quarterly, 2(2). https://doi.org/10.1093/isagsq/ksab051
  • DW. (2022). The deepfakes in the disinformation war. https://www.dw.com/en/fact-check-the-deepfakes-in-the-disinformation-war-between-russia-and-ukraine/a-61166433
  • Election Commission of India. (2025). Label AI-generated content in political campaigns. https://www.eci.gov.in/eci-backend/public/api/download?url=LMAhAK6sOPBp%2FNFF0iRfXbEB1EVSLT41NNLRjYNJJP1KivrUxbfqkDatmHy12e%2FzGjJMI0%2FjETs7fjrM8lYn4ipTqYtDEvVosG8Bae5QB8%2Fj5TBF9Esc2hlzORgYtkmzyKzGsKzKlbBW8rJeM%2FfYFA%3D%3D
  • Election Integrity Partnership. (2020). Platforms of Babel: Inconsistent misinformation support in non-English languages. https://www.eipartnership.net/2020/inconsistent-efforts-against-us-election-misinformation-in-non-english
  • European Commission. (2024). The EU’s Digital Services Act.
  • FEC. (2023). Comments sought on amending regulation to include deliberately deceptive Artificial Intelligence in campaign ads. https://www.fec.gov/updates/comments-sought-on-amending-regulation-to-include-deliberately-deceptive-artificial-intelligence-in-campaign-ads/
  • Global Witness. (2023). How Big Tech platforms are neglecting their non-English language users. https://globalwitness.org/en/campaigns/digital-threats/how-big-tech-platforms-are-neglecting-their-non-english-language-users/
  • Gupta, N., & Mathews, N. (2024, September 25). India’s experiments with AI in the 2024 elections: The good, the bad & the in-between. TechPolicy.Press.
  • Harvard Kennedy School’s Misinformation Review. (2021). Study on deepfakes and their credibility.
  • Horsey, J. (2025, March 9). ChatGPT-4.5: The AI redefining emotional intelligence and creativity. Geeky Gadgets. https://www.geeky-gadgets.com/chatgpt-4-5-emotional-intelligence-creativity/
  • Huschens, M., Briesch, M., Sobania, D., & Rothlauf, F. (2023). Do you trust ChatGPT? -- Perceived credibility of Human and AI-Generated content. arXiv (Cornell University). https://doi.org/10.48550/arxiv.2309.02524
  • Insikt Group. (2024, September 24). Targets, objectives, and emerging tactics of political deepfakes. Recorded Future.
  • International IDEA. (2024, September 17). Credibility of elections under threat worldwide. https://www.idea.int/news/credibility-elections-under-threat-worldwide
  • ISPI. (2024, February 28). An overview of the impact of GenAI and deepfakes on global electoral processes. https://www.ispionline.it/en/publication/an-overview-of-the-impact-of-genai-and-deepfakes-on-global-electoral-processes-167584
  • Kamminga, M. T. (2020). Extraterritoriality. Max Planck Encyclopedia of Public International Law.
  • Malwarebytes Labs. (2020, October 16). Deepfakes and the 2020 United States election: Missing in action? Malwarebytes. https://www.malwarebytes.com/blog/news/2020/10/deepfakes-and-the-2020-united-states-election-missing-in-action
  • Maras, M., & Alexandrou, A. (2018). Determining authenticity of video evidence in the age of artificial intelligence and in the wake of Deepfake videos. The International Journal of Evidence & Proof, 23(3), 255–262. https://doi.org/10.1177/1365712718807226
  • Marcellino, W., Beauchamp-Mustafaga, N., Kerrigan, A., Navarre Chao, L., & Smith, J. (2023). The rise of generative AI and the coming era of social media manipulation 3.0: Next-generation Chinese astroturfing and coping with ubiquitous AI (PE-A2679-1). RAND Corporation. https://www.rand.org/pubs/perspectives/PEA2679-1.html
  • Meta. (2024, April 5). Our approach to labeling AI-generated content and manipulated media. https://about.fb.com/news/2024/04/metas-approach-to-labeling-ai-generated-content-and-manipulated-media/
  • Microsoft. (2020, September 1). New steps to combat disinformation. 
  • Mishra, S. (2025, March 25). The governance of geopolitical risk in 2025. Harvard Law School Forum on Corporate Governance. https://corpgov.law.harvard.edu/2025/03/25/the-governance-of-geopolitical-risk-in-2025/
  • Muirhead, J. (2001). The mind as a target: Psychological operations and data fusion technology. Air Intelligence Agency, Psychological Operations Division.
  • OECD. (2024, December 17). Recommendation of the Council on Information Integrity. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0505
  • OECD. (2025). Government automated decision-making: Transparency and responsibility in the public sector.
  • OHCHR. (2024). International Covenant on Civil and Political Rights. https://www.ohchr.org/en/instruments-mechanisms/instruments/international-covenant-civil-and-political-rights
  • Partnership on AI. (2024). Policy alignment on AI transparency. https://partnershiponai.org/policy-alignment-on-ai-transparency/
  • PolSci Institute. (2023). International organisations and state sovereignty: Balancing power and authority. https://polsci.institute/political-theory/globalisation-impact-state-sovereignty/
  • Quelle, D., Cheng, C. Y., Bovet, A., & Hale, S. A. (2025). Lost in translation: using global fact-checks to measure multilingual misinformation prevalence, spread, and evolution. EPJ Data Science, 14(1). https://doi.org/10.1140/epjds/s13688-025-00520-6
  • RAND Corporation. (2022). Artificial intelligence, deepfakes, and disinformation: A primer. https://www.rand.org/pubs/perspectives/PEA1043-1.html
  • Regaining Power Over AI. (2025). Creative work in generative AI narratives. https://regainingpoweroverai.org/docs/research/ai-narratives-creative-work/
  • Riedl, M. J. (2024, October). Political deepfakes and misleading chatbots: Understanding the use of GenAI in recent European elections. Center for Media Engagement. https://mediaengagement.org/wp-content/uploads/2024/10/Political-Deepfakes-and-Misleading-Chatbots-Understanding-the-Use-of-GenAI-in-Recent-European-Elections.pdf
  • Saeidnia, H. R., Hosseini, E., Lund, B., Tehrani, M. A., Zaker, S., & Molaei, S. (2025). Artificial intelligence in the battle against disinformation and misinformation: a systematic review of challenges and approaches. Knowledge and Information Systems. https://doi.org/10.1007/s10115-024-02337-7
  • Schiff, D. S., Jackson, K., & Bueno, N. (2024, May 30). Watch out for false claims of deepfakes, and actual deepfakes, this election year. Brookings Institution.
  • Stanford School of Humanities and Sciences. (2024). New study shows that partisanship trumps truth. https://humsci.stanford.edu/feature/new-study-shows-partisanship-trumps-truth
  • Straits Times. (2023). Deepfake videos raise concern in India ahead of general elections. https://www.straitstimes.com/asia/south-asia/deepfake-videos-raise-concern-in-india-ahead-of-general-elections
  • Taeihagh, A. (2025). Governance of Generative AI. Policy and Society. https://doi.org/10.1093/polsoc/puaf001
  • TikTok. (2022). Updating our policies for political accounts. https://newsroom.tiktok.com/en-us/updating-our-policies-for-political-accounts
  • Twitter. (2020). Building rules in public: Our approach to synthetic & manipulated media. https://blog.x.com/en_us/topics/company/2020/new-approach-to-synthetic-and-manipulated-media
  • UN News. (2024). General Assembly adopts landmark resolution on artificial intelligence.
  • UN Office for Digital and Emerging Technologies. (2024). Global Digital Compact.
  • UN. (2021). Eleven norms of responsible state behaviour in cyberspace.
  • UNCTAD. (2025). AI’s $4.8 trillion future: UN warns of widening digital divide without urgent action. https://news.un.org/en/story/2025/04/1161826
  • UNESCO. (2024). Artificial intelligence and democracy. https://www.unesco.org/en/articles/artificial-intelligence-and-democracy
  • UNSDG. (2024). Pact of the Future, Global Digital Compact and Declaration on Future Generations. https://unsdg.un.org/download/14969/127580
  • Urman, A., & Makhortykh, M. (2023). The silence of the LLMs: Cross-lingual analysis of political bias and false information prevalence in ChatGPT, Google Bard, and Bing Chat. OSF Preprints. https://doi.org/10.31219/osf.io/q9v8f
  • Vice. (2020). We’ve just seen the first use of deepfakes in an Indian election campaign. https://www.vice.com/en/article/the-first-use-of-deepfakes-in-indian-election-by-bjp/
  • Vykopal, I., Pikuliak, M., Srba, I., Moro, R., Macko, D., & Bielikova, M. (2024). Disinformation capabilities of large language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 14830–14847. https://doi.org/10.18653/v1/2024.acl-long.793
  • Westerlund, M. (2019). The emergence of deepfake technology: A review. Technology Innovation Management Review, 9(11), 39–52.
  • Zürn, M. (2020). On the role of contestations, the power of reflexive authority, and legitimation problems in the global political system. International Theory, 13(1), 192–204. https://doi.org/10.1017/s1752971920000391
