Senior Experience: Generative AI and Elections

By Marley Berk | June 11, 2024

[Photo: AI for Good]

Editor's Note: This blog was written by Audrey Shiplett, with an introduction by Marley Berk.

One of the principles NDI holds dear is finding new and emerging voices and elevating them. We were approached with the opportunity to have Audrey Shiplett join us for a few weeks as part of her Senior Experience. Her high school grants graduating seniors the opportunity to explore a specific topic area further, and Audrey chose to take the time to learn more about Generative AI and its impact on elections. Audrey read numerous papers, including those from the European Parliament, International IDEA, and the Global Investigative Journalism Network. Additionally, she virtually attended sessions from the AI for Good Summit in Geneva. In addition to sharing summaries of the papers and sessions with our team, we asked her to write a blog post about her experience over the past few weeks.

With half of the world holding elections this year, concern over Generative AI and deepfakes, and how they may be used to spread misinformation that harms the integrity of elections, is higher than ever. I’ve seen deepfakes online and was shocked at how realistic some of them are, but I didn’t know much more about Generative AI and the impact it has on elections. For my Senior Experience, I was lucky enough to get the chance to learn more about AI and elections with the DemTech team at NDI.

While AI has been used to cast doubt on the integrity of elections, it also has the potential to increase efficiency and fight back against misinformation and disinformation campaigns by malicious actors. Organizations like Faktisk Verifiserbar, a fact-checking organization, have used AI to verify videos, photos, and audio to ensure that people aren’t being misinformed about the source of the content. They have used AI to detect which languages people are speaking, identify where certain tanks come from, and determine where clips were taken based on the surrounding areas. This can help prevent misinformation from spreading, but there are still limitations.

One primary issue with using AI in elections is that it often exhibits bias and discrimination against certain marginalized groups. With AI there is also a lack of transparency in how data is analyzed to produce a specific result and in how data is collected, such as how biometric scans are used and stored. Using this technology without human supervision can also lead to inaccurate results, which can erode trust in electoral integrity and lower voter turnout. All of this raises questions about the ethics of AI and means that, in the development of AI, ethics has to be one of the main focuses.

I think that one of the largest issues stems from using AI to detect social media posts with malicious intent, specifically in the Global Majority. Countries like Ghana and Georgia have many local languages, and AI isn’t trained on all of them, which means harmful content in these countries is missed. The technology also lacks nuance regarding the political and cultural situations of these countries.

This issue reflects the digital divide, and it could widen if companies don’t invest more in diversifying large language models, which would mean further work in natural language processing. Technology companies still need to focus more of their efforts on the Global Majority market: many of these countries have only recently established their democracies, and it is important that malicious actors aren’t able to spread doubt.

To protect the integrity of elections and maintain stability, many countries have already taken action against AI-related mis- and disinformation in ways that other countries can adopt. To me, the most interesting examples were in Sweden, Estonia, and Ukraine. Sweden focused on strengthening society against misinformation and disinformation and gave election authorities tools and education to help counter threats; preparing society can affect how widely misinformation spreads in the long term. Ukraine, to protect against Russian disinformation that aimed to destabilize the country, has focused on collaboration across all parts of society, including the government, private companies, academia, and the media. This has already proved beneficial, as shown by eSMI, which connects journalists with experts and gives journalists and the public trustworthy information. Estonia has done something similar for its Russian-speaking population by investing in Russian-language news outlets and other media, to protect against Russian disinformation that threatened its stability.

Through collaboration and by working to prepare society, the negative impact that misinformation and AI technologies like deepfakes can have on elections and democracy can be minimized, and eventually AI can be used to make elections more efficient and accurate.
