Time running out for regulators to tackle AI threat ahead of general election, researchers warn

Time is running out for regulators to tackle the threats posed by artificial intelligence (AI) to the democratic process ahead of July’s election, researchers have warned.

However, they said the threat of deepfakes swinging the election should not be overstated, as there is no clear evidence of any election result having been affected by AI.

In a new report published by the Alan Turing Institute, the national institute for data science and artificial intelligence, researchers argued there are already early signs of damage to the democratic system from AI-generated content.

A wave of AI-generated material purporting to depict politicians has appeared in recent months.

Sadiq Khan, the mayor of London, said deepfake audio that was generated to appear as though he was making inflammatory remarks before Armistice Day almost caused “serious disorder”.

AI-generated audio imitating US President Joe Biden appeared to encourage supporters not to vote in the New Hampshire primary at the start of this year.

Sam Stockwell, research associate at the Alan Turing Institute and lead author of the report, said online harassment of public figures who are subject to deepfake attacks could push some to avoid engaging in online forums.

He said: “The challenge in discerning between AI-generated and authentic content poses all sorts of issues down the line… It allows bad actors to exploit that uncertainty by dismissing deepfake content as allegations, there’s fake news, it poses problems with fact-checking, and all of these things are detrimental to the fundamental principles of democracy.”

The report called for Ofcom, the media regulator, and the Electoral Commission to issue joint guidance and to seek voluntary agreements on the fair use of AI by political parties in election campaigning.

It also recommended guidance for the media on reporting AI-generated fake content, and ensuring that voter information explains how to spot such content and where to go for advice.

Dr Christian Schroeder de Witt, a postdoctoral researcher in AI and security at the University of Oxford, said solutions to tackle the impact of deepfakes include adding watermarks to AI-generated content and provenance-based approaches. The latter tie an image and its metadata together in a way that reveals whether the original information has been tampered with, because only the owner can modify the provenance record.
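As a rough illustration of the provenance approach Dr Schroeder de Witt describes, the sketch below shows one way an image and its metadata could be bound together so that tampering with either is detectable. It is a hypothetical, simplified example: the key, field names and data are invented, and a symmetric keyed digest stands in for the public-key signature a real provenance scheme would use to ensure only the owner can produce a valid record.

```python
import hashlib
import hmac
import json

# Hypothetical signing key. In a real provenance system the owner would hold a
# private signing key and publish the matching public key for verification.
OWNER_KEY = b"owner-signing-key"

def make_provenance_record(image_bytes: bytes, metadata: dict) -> dict:
    """Bind the image and its metadata together with a keyed digest."""
    payload = hashlib.sha256(image_bytes).hexdigest() + json.dumps(metadata, sort_keys=True)
    tag = hmac.new(OWNER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"metadata": metadata, "tag": tag}

def verify(image_bytes: bytes, record: dict) -> bool:
    """Return False if the image or its metadata was altered after the record was made."""
    payload = hashlib.sha256(image_bytes).hexdigest() + json.dumps(record["metadata"], sort_keys=True)
    expected = hmac.new(OWNER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["tag"])

image = b"original image bytes"
record = make_provenance_record(image, {"creator": "Example News", "captured": "2024-05-01"})

print(verify(image, record))         # True: image and metadata are intact
print(verify(image + b"x", record))  # False: the image has been modified
```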

Mr Stockwell added that, just weeks away from a general election, there was “no clear guidance or expectations for preventing AI being used to create false or misleading electoral information”.

“That’s why it’s so important for regulators to act quickly before it’s too late,” he said.

Dr Alexander Babuta, director of the Centre for Emerging Technology and Security at the Alan Turing Institute, said: “While we shouldn’t overplay the idea that our elections are no longer secure, particularly as worldwide evidence demonstrates no clear evidence of a result being changed by AI, we nevertheless must use this moment to act and make our elections resilient to the threats we face.

“Regulators can do more to help the public distinguish fact from fiction and ensure voters don’t lose faith in the democratic process.”

A spokesperson for the Electoral Commission said: “We are well-placed to combat false information about the voting process and the administration of elections.

“During this election period, we will provide voters with factual and accurate information, and work to correct misleading information that we see about how elections are run.”

They added that the commission would create a hub on its website to support voters and link to relevant media literacy resources, and encouraged campaigners to be responsible and transparent.

“We recognise the challenges posed by AI but regulating the content of campaign material would require a new legal framework,” they said.

“We will be monitoring the impact at the general election and stand ready to be part of conversations about its impact on voter trust and confidence going forward.”

A spokesperson for Ofcom said: “We note the report’s recommendations which we’ll consider carefully.”
