ARTICLE


Article 08 | Pages: 89-106
DOI: http://dx.doi.org/10.31703/gpr.2025(X-I).08
Published: Mar 2025

The Role of Generative AI in Undermining Electoral Integrity: A Study on AI-Driven Election Interference

    The rapid advancement of generative artificial intelligence (AI) technologies presents a transformative yet perilous frontier in the domain of electoral integrity. As tools like deepfakes and large language models (LLMs), including ChatGPT, become increasingly accessible, they offer new avenues for information manipulation, narrative distortion, and psychological influence at unprecedented scale and sophistication. This paper investigates the multifaceted impact of generative AI on democratic processes, focusing on three core dimensions: the erosion of voter trust through deepfake-driven disinformation campaigns; the weaponization of LLMs to manufacture and amplify persuasive electoral narratives; and the pressing need for international governance mechanisms to regulate and mitigate AI-fueled election interference. Drawing from recent electoral events, cross-national case studies, and emerging empirical research, the study reveals how deepfakes are reshaping public perception by blurring the line between reality and fabrication. Simultaneously, it explores how LLMs are being deployed to automate propaganda, target voter subgroups with hyper-personalized messaging, and exploit linguistic and cognitive biases.

    Generative AI, Deepfakes, Large Language Models (LLMs), Electoral Integrity, International Governance, Disinformation Campaigns, Voter Trust, Election Interference
    (1) Ali Imran
    MPhil Scholar, Department of Political Science and International Relations, University of Central Punjab, Lahore, Punjab, Pakistan.
    (2) Muhammad Irfan Ali
    Assistant Professor, Department of Political Science and International Relations, University of Central Punjab, Lahore, Punjab, Pakistan.
    (3) Abdur Rehman
    Lecturer, Department of Political Science and International Relations, University of Central Punjab, Lahore, Punjab, Pakistan.

Cite this article

    APA : Imran, A., Ali, M. I., & Rehman, A. (2025). The Role of Generative AI in Undermining Electoral Integrity: A Study on AI-Driven Election Interference. Global Political Review, X(I), 89-106. https://doi.org/10.31703/gpr.2025(X-I).08
    CHICAGO : Imran, Ali, Muhammad Irfan Ali, and Abdur Rehman. 2025. "The Role of Generative AI in Undermining Electoral Integrity: A Study on AI-Driven Election Interference." Global Political Review, X (I): 89-106. doi: 10.31703/gpr.2025(X-I).08
    HARVARD : IMRAN, A., ALI, M. I. & REHMAN, A. 2025. The Role of Generative AI in Undermining Electoral Integrity: A Study on AI-Driven Election Interference. Global Political Review, X, 89-106.
    MHRA : Imran, Ali, Muhammad Irfan Ali, and Abdur Rehman. 2025. "The Role of Generative AI in Undermining Electoral Integrity: A Study on AI-Driven Election Interference." Global Political Review, X: 89-106
    MLA : Imran, Ali, Muhammad Irfan Ali, and Abdur Rehman. "The Role of Generative AI in Undermining Electoral Integrity: A Study on AI-Driven Election Interference." Global Political Review X.I (2025): 89-106. Print.
    OXFORD : Imran, Ali, Ali, Muhammad Irfan, and Rehman, Abdur (2025), "The Role of Generative AI in Undermining Electoral Integrity: A Study on AI-Driven Election Interference", Global Political Review, X (I), 89-106
    TURABIAN : Imran, Ali, Muhammad Irfan Ali, and Abdur Rehman. "The Role of Generative AI in Undermining Electoral Integrity: A Study on AI-Driven Election Interference." Global Political Review X, no. I (2025): 89-106. https://doi.org/10.31703/gpr.2025(X-I).08