EU reaches deal on how to regulate artificial intelligence

European Union negotiators have agreed a deal on the world’s first comprehensive artificial intelligence rules.

The agreement paves the way for legal oversight of technology used in popular generative AI services such as ChatGPT.

Negotiators from the European Parliament and the bloc’s 27 member countries overcame big differences on generative AI and police use of facial recognition to sign a tentative political agreement for the Artificial Intelligence Act.

“Deal!” tweeted European commissioner Thierry Breton.

The European Parliament and member states “have finally reached a political agreement on the Artificial Intelligence Act!”, the parliamentary committee co-leading the body’s negotiating efforts tweeted.

Officials provided few details on what will make it into the eventual law, which will not take effect until 2025 at the earliest.

The EU took an early lead in the global race to draw up AI guardrails when it unveiled the first draft of its rulebook in 2021.

The recent boom in generative AI, however, sent European officials scrambling to update a proposal poised to serve as a blueprint for the world.

Generative AI systems such as OpenAI’s ChatGPT have become increasingly ubiquitous in recent months – wowing users with their ability to create text, photos and songs, but also raising concerns about jobs, privacy and copyright protection.

Now, the US, UK, China and global groups such as the G7 have jumped in with their own proposals to regulate AI, though they are still catching up with Europe.

Once the final version of the EU’s AI Act is worked out, the text needs approval from the European Parliament’s 705 lawmakers before they break up for EU-wide elections next year. That vote is expected to be a formality.

The AI Act was originally designed to mitigate the dangers from specific AI functions based on their level of risk, from low to unacceptable.

But politicians pushed to expand it to foundation models, the advanced systems that underpin general-purpose AI services such as ChatGPT and Google’s Bard chatbot.

The thorniest topic proved to be AI-powered facial recognition surveillance systems, on which negotiators found a compromise only after intensive bargaining.

European lawmakers wanted a full ban on public use of facial scanning and other “remote biometric identification” systems because of privacy concerns, while member countries’ governments wanted exemptions so law enforcement could use them to tackle serious crimes such as child sexual exploitation or terrorist attacks.
