
The Role of Artificial Intelligence in Modern U.S. Electoral Systems: Enhancements and Risks

  • valentin2156
  • Aug 14, 2024

Abstract

One area that could be profoundly transformed by the adoption of AI is the electoral system. Artificial intelligence (AI) holds tremendous potential for improving polling, campaign strategy, and voter registration, but it also poses substantial challenges to the integrity of elections around the globe. This article examines the political landscape of 2024 and AI's role in it, weighing the pros and cons of AI deployment. Case studies of nations that have implemented AI for voting purposes are presented, along with an analysis of the strengths and weaknesses of these systems. The article then warns of the perils of AI in elections and proposes solutions to these concerns. Ensuring both the security and the legitimacy of voting procedures is of utmost importance as the use of artificial intelligence in elections becomes more prevalent. This paper calls on politicians, election authorities, and the public to address the challenges AI brings to elections.

Keywords: Artificial Intelligence (AI); Elections; Vote counting; Cyber security; Threats; Case studies


Introduction

As artificial intelligence (AI) becomes increasingly integrated into various aspects of modern society, concerns about its potential impact on the security and integrity of election processes worldwide are growing. Analysing the implications of using AI in elections is becoming more critical as the political landscape of 2024 takes shape. This paper examines the intricate web of connections between AI and the voting process, exploring the potential advantages and significant risks this technological advancement presents. While AI has the potential to enhance election reliability, streamline voter registration, and aid in detecting and preventing fraud, these advancements come with substantial risks.

This article reviews case studies from countries that have implemented AI in their voting processes to better understand the pros and cons. Among the most pressing issues highlighted are the potential for AI algorithms to perpetuate existing biases and the increased vulnerability of electoral infrastructure to cyberattacks. The article emphasises the need for a multi-pronged strategy involving lawmakers, election officials, and the public to ensure free and fair elections in the AI era. It stresses the importance of developing and using AI technologies within the voting system with transparency, accountability, and ethical considerations. By proactively addressing these concerns, the authors aim to lay out a plan to protect democracy and ensure fair elections as AI continues to advance.


Threats of Generative AI on the 2024 US Elections


Key Threats

Deceptive Content Creation

Deepfakes: One of the most prominent threats is the creation of highly convincing deepfakes. These AI-generated videos, audio, or images can mislead the public, create false narratives, and manipulate voter perceptions. For instance, deepfake videos of candidates making false statements could go viral, influencing voter behaviour based on fabricated events.


Disinformation Campaigns

Foreign Influence Operations: Generative AI (GenAI) can amplify the scale and effectiveness of foreign influence operations. By generating vast amounts of false information quickly and cheaply, malicious actors can create convincing disinformation campaigns targeting election processes and officials.


Manipulation and Micro-targeting

AI-Powered Micro-targeting: AI can be used to micro-target voters with personalised messages designed to manipulate emotions and influence voting behaviour. While the direct impact on electoral outcomes might be limited, the perceived manipulation can undermine trust in the electoral process.


Phishing and Social Engineering

Enhanced Phishing Attacks: GenAI can generate sophisticated phishing schemes and social engineering attacks that are harder to detect and more effective. For example, AI-generated voice cloning can be used to impersonate election officials, gaining unauthorised access to sensitive information.


Centralisation of Information

Monopoly of AI Tools: If AI tools are concentrated within a few tech firms, it could lead to the centralisation of information, raising concerns about content moderation and bias. This centralisation can create distrust among voters who may perceive these firms as having undue influence over political discourse.


Minimising Risks


Multi-Source AI Development

Diverse AI Providers: Encouraging the development and availability of generative AI tools across multiple providers can prevent any single entity from having excessive control over political information. This decentralisation helps mitigate the risks associated with content moderation and bias.


Content Authentication and Labelling

Watermarks and Labels: Implementing watermarks and labels on AI-generated content can help identify the origin of the content and distinguish between real and fake media. This practice aids in maintaining the integrity of information disseminated during the election cycle.
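As an illustration of this idea, a provenance label can be sketched as a signed manifest that binds a content hash to declared origin metadata; a verifier then detects tampering or a forged label. The key name, field names, and "example-genai-tool" origin below are hypothetical, and production standards such as C2PA are considerably more elaborate than this sketch.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"publisher-signing-key"  # hypothetical key held by the content publisher

def make_provenance_label(content: bytes, origin: str) -> dict:
    """Build a label binding a content hash to its declared origin."""
    digest = hashlib.sha256(content).hexdigest()
    manifest = {"sha256": digest, "origin": origin, "ai_generated": True}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_label(content: bytes, label: dict) -> bool:
    """Check that the content matches the label and the signature is intact."""
    unsigned = {k: v for k, v in label.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(label.get("signature", ""), expected)
            and unsigned["sha256"] == hashlib.sha256(content).hexdigest())

media = b"...synthetic campaign image bytes..."
label = make_provenance_label(media, origin="example-genai-tool")
print(verify_label(media, label))        # unmodified content verifies
print(verify_label(media + b"x", label)) # tampered content fails
```

The design choice here is that the label travels with the media: anyone holding the verification key can confirm both that the content is unchanged and that the "ai_generated" flag was set by the publisher rather than added or stripped afterwards.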


Proactive Public Communication

Transparent Communication: Election officials should engage in proactive and transparent communication with the public about the measures in place to secure the elections. This includes educating voters on how to identify and respond to deceptive content.


Robust Cybersecurity Practices

Enhanced Cyber Defenses: Strengthening cybersecurity measures to protect against AI-enhanced phishing and malware attacks is critical. Election offices should implement advanced authentication techniques and train staff to recognise and respond to sophisticated social engineering tactics.
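One widely deployed authentication technique of the kind mentioned above is the time-based one-time password (TOTP, RFC 6238), which makes a phished password alone insufficient for access. A minimal sketch using only the standard library, with a hypothetical seed:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, for_time=None, step=30, digits=6) -> str:
    """RFC 6238 time-based one-time password from a base32 shared secret."""
    key = base64.b32decode(secret_b32)
    # Count 30-second intervals since the Unix epoch.
    counter = int((for_time if for_time is not None else time.time()) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken from the MAC.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

secret = base64.b32encode(b"election-office-seed").decode()  # invented seed
print(totp(secret, for_time=59))  # fixed timestamp so the output is reproducible
```

Because the code changes every 30 seconds and is derived from a secret never typed by the user, an AI-generated phishing page that captures a password still cannot replay it later.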


Self-Governance and External Oversight

User Involvement in AI Governance: Platforms providing generative AI tools should involve users and external stakeholders in setting and enforcing content guardrails. Efforts like Meta’s “community forum” for AI policy recommendations and Anthropic’s development of a “constitution” for AI use are steps in the right direction.


Legal and Policy Frameworks

Regulatory Measures: Governments should develop and enforce regulations to ensure that generative AI is used responsibly. This includes setting standards for the ethical use of AI in elections and ensuring that AI tools do not exacerbate existing risks.


Advantages of AI in Electoral Processes


Improving the Precision and Effectiveness of Polls

AI algorithms can sort through electoral records, demographic information, and social media postings to better forecast voter behaviour and election results. By analysing and interpreting this data, AI can improve the accuracy of predictions, allowing political campaigns to better plan and focus their strategies, resulting in more efficient and targeted campaigning.
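As a toy illustration of the kind of aggregation such forecasting rests on (not a method described in this article), a simple model can average polls while discounting older surveys and giving diminishing returns to sample size; all poll numbers below are invented:

```python
import math

# Hypothetical polls: (days_before_election, sample_size, candidate_share)
polls = [
    (30, 800, 0.51),
    (14, 1200, 0.49),
    (5, 1000, 0.52),
]

def weighted_poll_average(polls, half_life_days=14):
    """Average poll results, down-weighting older and smaller polls."""
    num = den = 0.0
    for days_out, n, share in polls:
        recency = 0.5 ** (days_out / half_life_days)  # exponential decay with age
        weight = recency * math.sqrt(n)               # sqrt(n): diminishing returns
        num += weight * share
        den += weight
    return num / den

print(round(weighted_poll_average(polls), 3))
```

Real campaign models add many more signals (demographics, turnout history, house effects), but the core idea of recency- and precision-weighted averaging is the same.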


Efficient Methods for Registering to Vote

Voter registration processes can be made more accessible and effective by automating and streamlining them. AI-powered systems can manage voter registration, minimising human error and ensuring the accurate registration of eligible voters. This can lead to a more inclusive voting system and higher voter turnout.


Voter Fraud Detection and Prevention

The use of AI to identify and prevent election fraud is becoming increasingly important. Machine learning algorithms can analyse large volumes of data to detect anomalies, such as voting fraud or duplicate registrations. This helps ensure an accurate vote count and safeguards the integrity of the voting process.
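A minimal sketch of one such check, duplicate-registration detection, can be built on normalised matching keys; the records below are invented and real systems use far richer probabilistic matching:

```python
import unicodedata
from collections import defaultdict

# Hypothetical registration records: (record_id, name, date_of_birth, zip_code)
records = [
    ("r1", "María López", "1980-04-12", "33101"),
    ("r2", "Maria  Lopez", "1980-04-12", "33101"),  # likely duplicate of r1
    ("r3", "John Smith", "1975-09-30", "60614"),
]

def normalize(name: str) -> str:
    """Strip accents, case, and extra whitespace so near-identical names match."""
    decomposed = unicodedata.normalize("NFKD", name)
    ascii_name = "".join(c for c in decomposed if not unicodedata.combining(c))
    return " ".join(ascii_name.lower().split())

def find_duplicate_groups(records):
    """Group records that share a normalised name + birth-date key."""
    groups = defaultdict(list)
    for rec_id, name, dob, _zip in records:
        groups[(normalize(name), dob)].append(rec_id)
    return [ids for ids in groups.values() if len(ids) > 1]

print(find_duplicate_groups(records))  # → [['r1', 'r2']]
```

Flagged groups would go to a human reviewer rather than being removed automatically, since exact-key matching can also catch distinct people who genuinely share a name and birth date.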


Case Studies


South Korea

In the 2020 parliamentary elections, South Korea used an AI-powered method to tally votes. This technology reduced counting time and human error by accurately classifying paper ballots using machine-learning techniques, thereby improving the overall efficiency of the election process.


Estonia

Estonia, known for its cutting-edge e-governance projects, introduced its i-Voting system in 2005, allowing citizens to vote securely online. AI algorithms now play a supporting role in safeguarding the integrity of the process by authenticating voter identities and helping to detect fraud. A significant share of Estonians uses this well-established electronic voting channel during elections.


Switzerland

Switzerland has been at the forefront of technological transformation, experimenting with new voting techniques based on blockchain and AI. In 2018, the city of Zug piloted a blockchain-based municipal election system that allowed citizens to vote using their mobile devices. AI algorithms increased voter identification accuracy and reduced fraud, while blockchain technology ensured a transparent, secure, and immutable voting process.


Brazil

In its 2020 municipal elections, Brazil piloted an AI-powered facial recognition system for voter verification. The system compared voters' ID images to their registration photos to reduce the possibility of impersonation, protecting the integrity of the election and simplifying the verification process.


United States

The US has been exploring the use of AI in voting systems to make voting more manageable and accessible. For example, in 2018, West Virginia launched a pilot program allowing service members abroad to vote via Voatz, a mobile voting app. The Voatz app employs AI algorithms and blockchain technology to ensure secure and legitimate votes. The system uses biometric data and face recognition technology to provide accurate voter identification and prevent fraud.


Challenges and Perils of Artificial Intelligence in Elections


Data Privacy and Security

The reliance of AI algorithms on vast quantities of voter records raises serious concerns about data security. It is critical to prevent the disclosure, misuse, or unlawful access to voter records to maintain public trust. If voting systems dependent on AI are susceptible to hacking, it could jeopardise the legitimacy of elections. Encryption, intrusion detection, and secure network design are essential cybersecurity techniques to mitigate these threats.


Algorithmic Bias and Fairness

AI systems trained on biased data can reinforce preexisting biases, skewing their outputs. This could disproportionately affect specific populations, undermining the principles of fair representation and voting rights. The opacity of advanced AI systems exacerbates this issue, making it difficult to understand decision-making processes and identify biases. Transparency and accountability in AI decision-making are crucial to addressing these concerns.


Accessibility and Inclusion

Technologically driven programs may exclude individuals due to a lack of digital literacy or unequal access to technology. Ensuring that everyone has an equal opportunity to vote requires accommodating individuals who choose not to or are unable to use AI technologies. Efforts to bridge the digital divide and promote digital literacy are essential for inclusive electoral participation.


Conclusion

As AI continues to integrate into electoral processes, its potential to enhance and disrupt elections becomes more evident. While AI can improve efficiency, accuracy, and security in various aspects of elections, it also introduces significant risks that must be addressed. Ensuring the integrity and security of elections in the AI era requires a comprehensive and collaborative approach involving lawmakers, election officials, technology providers, and the public. By proactively addressing the challenges and leveraging the benefits of AI, we can protect democracy and ensure fair and transparent elections worldwide.


