
Three months to save democracy? New report reveals risks AI poses to elections

10 April 2024

Millions of voters are heading to the polls this year, making the ability to identify deepfakes and disinformation critical to safeguarding democratic processes.

Image courtesy of University of Surrey

That's according to a new report by the University of Surrey's Institute for People-Centred AI.

The report calls for public awareness campaigns to teach people how to spot AI-generated content, along with greater funding for research into detecting deepfakes.

“Misinformation at election time is nothing new. Yet, AI makes it easier than ever before to sow false information among voters,” Dr Bahareh Heravi, Reader in AI and Media, commented.

“That’s why we must give voters the tools to tell fact from fiction. Greater media literacy can only strengthen our democracy.”

Among the report’s other key recommendations:
• Wider use of content verification – including clear labelling for AI-generated material
• A ‘fact-checkers code’ to encourage media companies to investigate and report misinformation
• New legislation making social media companies responsible for content on their platforms
• Funding for UK-based research into AI tools which could help detect misinformation and disinformation
The report also calls for greater leadership from politicians on all sides.

“This is a crucial year for the world’s democracies, with AI set to play a critical role whether we like it or not. Yet, so far, politicians have taken a back seat, letting academics and tech firms lead the conversation,” said Dr Andrew Rogoyski, Director of Innovation and Partnerships at the Surrey Institute for People-Centred AI.

“With so much opportunity arising from AI, it’s unhelpful to let the negative applications like fakery and disinformation grow in use.

“We need our leaders to show up in this debate. They should demand action to help their constituents navigate democracy in the age of digital media and AI.

“They should also show personal leadership, perhaps by pledging not to use AI to mislead voters in this crucial election year,” Rogoyski concluded.


