EU Adopts Groundbreaking AI Act to Regulate Artificial Intelligence

The European Union has adopted the EU AI Act, a comprehensive regulation aimed at protecting rights, democracy, and the rule of law from high-risk artificial intelligence systems. Fines for violations range from $8 million to $38 million, and the legislation is set to enter into force in May.

At a glance

  • The EU has adopted the EU AI Act after three years of political debates and voting.
  • The legislation aims to protect rights, democracy, and the rule of law from high-risk AI systems.
  • Organizations violating the act could face fines ranging from $8 million to $38 million.
  • The EU is now considered a global standard-setter in AI following the passage of this legislation.
  • The EU AI Act introduces restrictions on practices such as emotion recognition, social scoring, and predictive policing, making it the world’s first comprehensive regulation governing AI.

The details

The European Union (EU) has recently adopted the EU AI Act after three years of political debates and voting.

This legislation aims to protect rights, democracy, and the rule of law from high-risk artificial intelligence (AI) systems.

Organizations that violate the act could face fines ranging from $8 million to $38 million.

The EU is now considered a global standard-setter in AI following the passage of this legislation.

The EU AI Act will undergo final checks and needs to be endorsed by the Council before entering into force.

It will be fully applicable 24 months after publication, with bans on certain practices taking effect six months after entry into force.

The EU’s dedicated AI Office will support businesses in complying with the rules outlined in the Act.

This groundbreaking legislation is the world’s first comprehensive regulation governing AI. It introduces restrictions on practices such as emotion recognition, social scoring, and predictive policing.

Law enforcement agencies will be allowed to use biometric identification systems under specific safeguards, while high-risk AI systems will be subject to assessments, logs, and human oversight.

Transparency requirements will be imposed on developers of general-purpose AI systems, and AI-generated audio and visual content will need to be clearly labeled as such.

The Act also includes provisions for testing and sandbox capabilities to support small and medium-sized enterprises (SMEs) and startups in the AI sector.

The adoption of the EU AI Act marks the beginning of a new era in AI regulation.

Businesses will need to understand how this legislation affects their products and services; compliance concerns have already been raised over how AI models are defined and over potential negative impacts.

The EU must strike a balance between regulatory controls and investments in positive AI use cases.

In related news, European Parliament lawmakers have approved the AI Act, making it the first comprehensive regulation covering high-risk AI systems, transparency for AI that interacts with humans, and AI systems embedded in regulated products.

The act is expected to enter into force this May, with Italian lawmaker Brando Benifei calling it “a historic day.”

US companies will need to comply with the EU AI Act while continuing with their AI adoption plans.

The EU is setting the standard for trustworthy, risk-mitigated, and responsible AI, with rules that have extraterritorial effects and hefty fines.

Organizations will need to assemble an ‘AI compliance team’ to effectively meet the requirements outlined in the Act.

Tech leaders like IBM and Salesforce have voiced support for the EU AI Act, with IBM offering to help clients comply with the legislation.

In a separate development, discussion has turned to artificial general intelligence (AGI), which some believe could be achieved in foundation models in the near future.

Shane Legg, co-founder of DeepMind, believes that practical applications of AGI are nonetheless decades away, with deployment depending on factors such as cost reduction and the maturity of robotics.

Legg defined AGI as a system that can perform cognitive tasks like humans, with the potential for even more capabilities.

He predicts a 50-50 probability of achieving AGI by 2028, noting that current models sit at level 3 on Google DeepMind’s six-level AGI scale.

The development of AGI could have transformative effects on society but also poses risks such as bias, toxicity, and long-term consequences from superintelligence.

The line between immediate and long-term risks in AI safety is blurring as foundation models advance.

Multimodality in these models allows for a deeper understanding of human culture, making them more powerful.

AGI systems are already mission-critical for many large companies and intelligence agencies; Legg suggests it may be too late to halt AGI development given its importance to these entities.

Article X-ray

Facts attribution

This section links each of the article’s facts back to its original source.

If you suspect false information in the article, you can use this section to investigate where it came from.

aibusiness.com
– The EU AI Act has been adopted after three years of political debates and voting
– The legislation aims to protect rights, democracy, and the rule of law from high-risk AI
– Organizations that violate the act face fines ranging from $8 million to $38 million
– The EU is now considered a global standard-setter in AI after the legislation passed
– The EU AI Act will go through final checks and needs to be endorsed by the Council
– The Act will enter into force twenty days after publication and be fully applicable 24 months after
– Bans on certain practices will apply six months after entry into force
– The EU’s dedicated AI Office will support businesses in complying with the rules
– The EU AI Act is the world’s first comprehensive regulation governing AI
– The Act introduces restrictions on emotion recognition, social scoring, and predictive policing
– Law enforcement agencies can use biometric identification systems under certain safeguards
– High-risk AI systems will be subject to assessments, logs, and human oversight
– Transparency requirements will be imposed on developers of general-purpose AI systems
– AI-generated audio and visual content will need to be labeled as such
– The Act includes provisions for testing and sandbox capabilities for SMEs and startups
– The adoption of the AI Act marks the beginning of a new AI era
– Businesses need to understand how the AI Act will impact their products and services
– The AI Act will have a similar impact to the EU’s General Data Protection Regulation
– Compliance concerns have been raised regarding the definition of AI models and potential negative impacts
– The EU needs to balance regulatory controls with investments in positive AI use cases
venturebeat.com
– European Parliament lawmakers approved the AI Act today
– The AI Act is the first comprehensive regulation around high-risk AI systems, transparency for AI that interacts with humans, and AI systems in regulated products
– The act will most likely enter into force this May
– Italian lawmaker Brando Benifei described it as “a historic day”
– US companies must comply with the EU AI Act while still moving forward with AI adoption plans
– The EU establishes the ‘de facto’ standard for trustworthy AI, AI risk mitigation, and responsible AI
– The rules have extraterritorial effect, hefty fines, and pervasive requirements across the AI value chain
– Organizations must assemble an ‘AI compliance team’ to meet the requirements effectively
– Foundation model companies like OpenAI, Google, and Anthropic have not released comment on the EU AI Act approval
– Tech leaders like IBM and Salesforce have voiced support for the EU AI Act
– IBM stands ready to help clients and stakeholders comply with the EU AI Act
– Salesforce applauds EU institutions for taking leadership in the AI domain
aibusiness.com
– Artificial general intelligence might be achieved in foundation models soon, but practical applications are decades away
– Shane Legg, co-founder of DeepMind, stated that AGI must align with factors such as cost reduction and maturity in robotics for practical deployment
– Near-term applications of AGI include AI-powered scientific research assistants
– Legg suggested the term artificial general intelligence years ago after meeting an author who needed a title for his book
– He defined AGI as a system that can do cognitive things people can do and possibly more
– Legg predicted a 50-50 probability of AGI by 2028
– Foundation models have become increasingly able, hinting at AGI capability
– Current models are at level 3 of AGI, according to Google DeepMind’s six levels
– Models need to progress from System 1 to System 2 thinking
– Legg believes AI models will reach AGI soon and be transformational to society
– AI safety risks include bias, toxicity, and long-term risks from superintelligence
– The line between immediate and long-term risks is blurring with advancements in foundation models
– Multimodality in foundation models can absorb the richness of human culture, making them more powerful
– General AI systems can help narrow AI solve a range of related problems
– AGI development is already mission critical for many big companies and intelligence agencies
– It may be too late to stop AGI development due to its importance to various entities
