EU Passes AI Act to Regulate ‘General Purpose’ AI Models

The European Union has passed the EU AI Act, which introduces additional scrutiny of ‘general purpose’ AI models based on their computational power, with potential penalties for non-compliance, while the United States lacks a similar regulatory structure.

At a glance

  • EU AI Act introduces scrutiny of ‘general purpose’ AI models based on computational power
  • Models like GPT-4, Gemini, and Claude are specifically regulated
  • Models trained with 10²⁵ FLOPs or more are deemed to pose systemic risk; compliance is required within three years
  • Exemptions for open-source models unless considered ‘general purpose AI’ or high-risk
  • The US lacks a comparable regulatory framework, though AI regulation draws bipartisan support in Congress

The details

The European Union (EU) has passed the EU AI Act, which introduces additional scrutiny of ‘general purpose’ AI models based on their computational power.

This regulation specifically includes models such as OpenAI’s GPT-4, Google’s Gemini, and Anthropic’s Claude.

Models trained with 10²⁵ floating-point operations (FLOPs) of compute or more are deemed to pose systemic risk, and companies with such models on the market must ensure compliance within three years.
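The threshold refers to total training compute. As a rough illustration only (the Act does not prescribe an estimation method, and the figures below are hypothetical), the sketch uses the common 6 × parameters × tokens approximation for dense transformer training to check a model against the 10²⁵ FLOP mark.

```python
# Illustrative sketch only: the EU AI Act sets the threshold in total training
# compute (FLOPs); it does not prescribe how to estimate it. The 6 * N * D rule
# of thumb below is a common approximation for dense transformer training, and
# the model sizes here are hypothetical.

EU_SYSTEMIC_RISK_THRESHOLD = 10**25  # FLOPs

def estimate_training_flops(parameters: float, tokens: float) -> float:
    """Back-of-the-envelope training compute: ~6 FLOPs per parameter per token."""
    return 6 * parameters * tokens

# Hypothetical model: 400 billion parameters trained on 5 trillion tokens.
flops = estimate_training_flops(400e9, 5e12)
print(f"Estimated training compute: {flops:.1e} FLOPs")
print("Over the EU systemic-risk threshold:", flops >= EU_SYSTEMIC_RISK_THRESHOLD)
```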

Regulation Exemptions

Open-source models are exempt from these regulations unless they are considered ‘general purpose AI’ or high-risk models.

The EU is establishing an AI office dedicated to governing ‘general purpose AI’ models to oversee compliance with these regulations.

Companies found to be in violation of the EU AI Act could face penalties of up to 7% of their global annual revenue or €35 million.

Comparison with the United States

In contrast to the EU’s comprehensive regulatory framework on AI, the United States currently lacks a similar regulatory structure.

However, President Biden has signed an executive order with directives for federal agencies.

Developers of AI models trained at 10²⁶ FLOPs or higher are required to report to the Department of Commerce.

AI regulation is one of the few issues in Congress that garners bipartisan support.

Legal concerns surrounding ‘general purpose AI’ models include data privacy, security, algorithmic bias, intellectual property rights, and liability.

To mitigate algorithmic bias, diversity in training data is crucial.
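Neither the Act nor the article spells out what such a check looks like in practice. As a minimal, hypothetical sketch, the snippet below tallies how often each demographic group appears in a labelled training sample and flags groups that fall below a chosen share; the group names and thresholds are invented for illustration.

```python
from collections import Counter

def representation_report(group_labels: list[str], min_share: float = 0.10) -> dict:
    """Share of each group in the sample, flagging those below min_share."""
    counts = Counter(group_labels)
    total = sum(counts.values())
    return {group: {"share": n / total, "under_represented": n / total < min_share}
            for group, n in counts.items()}

# Hypothetical group labels attached to training examples.
sample = ["group_a"] * 700 + ["group_b"] * 250 + ["group_c"] * 50
for group, stats in representation_report(sample).items():
    flag = "LOW" if stats["under_represented"] else "ok"
    print(f"{group}: {stats['share']:.0%} ({flag})")
```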

Transparency of AI models is also highlighted as important for ensuring liability and accountability.

The EU AI Act imposes sanctions on AI systems that could impact citizens’ rights.

Companies found to be non-compliant with the regulations could face fines of up to 7% of their annual revenue.

Nitish Mittal, a partner at Everest Group, believes that the EU regulation will set a precedent for how other regions and governments approach AI in the future.

The EU AI Act is expected to have implications beyond European borders. Ashley Casovan, managing director of the AI Governance Center at the IAPP, says it is crucial for companies using AI to understand the Act and prepare for compliance.

The legislation could also affect web crawling and search engine optimization practices.

Ben Maling, a managing associate at the law firm EIP, has called for a standardized method for opting web content out of scraping used to train ‘general purpose AI’ models.

The EU AI Act also mandates that companies training ‘general purpose AI’ models for the EU market must respect machine-readable opt-outs from text and data mining.
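The Act does not prescribe a single opt-out format. One machine-readable signal commonly used in practice is a robots.txt rule aimed at AI crawlers; the sketch below shows how a crawler operator might honour such a rule with Python’s standard urllib.robotparser. The user-agent string and site are hypothetical, and real deployments would also need to handle other reservation mechanisms.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical crawler identity and target site.
CRAWLER_USER_AGENT = "ExampleAIDataBot"
SITE = "https://example.com"

robots = RobotFileParser()
robots.set_url(f"{SITE}/robots.txt")
robots.read()  # fetch and parse the site's robots.txt

url = f"{SITE}/articles/some-page.html"
if robots.can_fetch(CRAWLER_USER_AGENT, url):
    print("No machine-readable reservation found; fetching:", url)
else:
    print("Opt-out respected; skipping:", url)
```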

The Act may also impact rival jurisdictions such as the U.K. Nikolaz Foucaud, the EMEA managing director at Coursera, believes that more needs to be done to retrain the workforce to deploy AI technologies.

Article X-ray


Facts attribution

This section links each of the article’s facts back to its original source.

If you suspect false information in the article, you can use this section to investigate where it came from.

aibusiness.com
– The EU AI Act brings extra scrutiny of ‘general purpose’ AI models based on computational power
– Models like OpenAI’s GPT-4, Google’s Gemini, and Anthropic’s Claude are included
– Models trained at a computational power of 10²⁵ FLOPs or greater are considered to have systemic risk
– Companies with GPAI models in the market must comply within three years
– Open-source models are exempt unless they are GPAI models or high-risk
– The EU is setting up an AI office to govern GPAI models
– Penalties for violations can be up to 7% of global annual revenue or €35 million
– The US does not have a comprehensive regulatory framework on AI
– President Biden signed an executive order with directives for federal agencies
– Developers of models trained at 10²⁶ FLOPs or higher must report to the Department of Commerce
– AI is one of the few issues with bipartisan support in Congress
– Legal concerns for GPAI models include data privacy, security, algorithmic bias, IP rights, and liability
– Diversity in training data can help mitigate algorithmic bias
– Transparency of models is important for liability and accountability
aibusiness.com
– Lawmakers have passed the EU AI Act
– Businesses must ensure their AI applications comply with the rules
– The rules impose sanctions on AI systems that could impact citizens’ rights
– Companies that fall foul of the rules face potential fines of up to 7% of their annual revenue
– Nitish Mittal, partner at Everest Group, said the regulation is likely to set a precedent for how other regions and governments might approach AI in the future
– AI systems are used around the world, with companies everywhere vying for potential integrations or tools to improve their workflows
– The EU rules will likely have an impact far beyond European borders
– Ashley Casovan, AI governance center managing director at IAPP, said it is crucial for companies using AI to understand the Act and prepare for compliance
– The EU AI Act could impact web crawling and search engine optimization
– Ben Maling, managing associate at law firm EIP, called for a standardized method to opt web content out from scraping for Gen AI training
– The EU AI Act requires companies training Gen AI models for the EU market to respect machine-readable opt-outs from text and data mining
– The Act could have an impact on rival jurisdictions like the U.K.
– Nikolaz Foucaud, EMEA managing director at Coursera, said there is a need for more to be done around retraining the workforce for AI deployment
