EU AI Act Hub

Welcome to the EU AI Act Hub, where we cover AI regulation and governance through blogs, videos and podcasts. This hub aims to help businesses understand and comply with the EU AI Act, highlighting key terms, identifying high-risk AI systems, and providing resources for navigating AI regulation.
 
Blogs
In-depth articles on AI regulation.

Videos
Expert insights and discussions.

Podcasts
Conversations with industry leaders.

Understanding the EU AI Act for Business

Assisting companies in complying with existing laws to facilitate ethical and effective AI deployment.

Business Concerns

The forthcoming EU AI Act is a crucial piece of legislation that businesses must understand and navigate. It aims to set ethical and safety guidelines for AI applications.

Whilst the Act imposes certain data restrictions and new requirements like watermarking for AI content, these are hurdles that can be surmounted. 

Alongside strict data privacy standards akin to the GDPR, these rules are intended to encourage companies to adapt and innovate rather than to hinder progress. Leaders in the field must stay informed about and engaged with these regulatory frameworks.

The EU AI Act will significantly influence AI and industry, and we plan to delve into this emerging topic with our industry peers in the coming years.

Key Terms (see the sketch after this list):

  • Domain: The specific sector where the AI system operates, such as Healthcare or Education.
  • Purpose: The intended objective of the AI system, like Patient Diagnosis or Exam Scoring.
  • Capability: The function that the AI system offers, such as Facial Recognition or Sentiment Analysis.
  • User: The individual using the AI system, e.g., Doctor, Teacher.
  • Subject: The targeted individual or group for the AI system, like Patients or Students.
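
As a rough illustration, these five terms can be captured in a simple data model. The Python sketch below is illustrative only; the class name, field names and example values are assumptions made for this page, not terminology mandated by the Act.

    from dataclasses import dataclass

    @dataclass
    class AISystemProfile:
        """Illustrative profile built from the key terms above (names are assumptions)."""
        domain: str      # Sector where the system operates, e.g. "Healthcare", "Education"
        purpose: str     # Intended objective, e.g. "Patient Diagnosis", "Exam Scoring"
        capability: str  # Function the system offers, e.g. "Facial Recognition", "Sentiment Analysis"
        user: str        # Individual using the system, e.g. "Doctor", "Teacher"
        subject: str     # Targeted individual or group, e.g. "Patients", "Students"

    # Hypothetical example: a diagnostic support tool used in a hospital
    diagnosis_tool = AISystemProfile(
        domain="Healthcare",
        purpose="Patient Diagnosis",
        capability="Medical Image Analysis",  # hypothetical capability for illustration
        user="Doctor",
        subject="Patients",
    )

Describing a system along these five dimensions can be a useful starting point when assessing where it might sit under the Act.
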
Identifying High-Risk AI

To gauge the risk associated with an AI system, its intended purpose must be scrutinised. 

Risk is determined by two criteria:

  • AI systems are tagged as high-risk if they serve as safety components in products subject to CE standards, as outlined in Annex II of the AI Regulation.
  • Standalone AI systems listed in Annex III of the AI Regulation are also considered high-risk. For instance, AI applications in biometric identification, education, credit services, and critical infrastructure management fall under this category.

It’s important to note that responsibility for accurate classification lies with the provider. Research at AI Ireland into the impact of the EU AI Act is ongoing.
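
As a rough sketch of the two criteria above, and not a substitute for a proper legal assessment, a provider's initial screening could look something like the Python snippet below. The function name, parameters and the set of Annex III areas are illustrative assumptions based only on the examples given on this page.

    from typing import Optional

    # Illustrative, non-exhaustive set of Annex III areas mentioned above
    ANNEX_III_AREAS = {
        "biometric identification",
        "education",
        "credit services",
        "critical infrastructure management",
    }

    def is_high_risk(safety_component_of_ce_product: bool,
                     annex_iii_area: Optional[str] = None) -> bool:
        """Apply the two high-risk criteria described above.

        Criterion 1: the system is a safety component in a product subject to
        CE standards (Annex II). Criterion 2: the system is a standalone system
        operating in an area listed in Annex III.
        """
        if safety_component_of_ce_product:
            return True
        return annex_iii_area is not None and annex_iii_area.lower() in ANNEX_III_AREAS

    # Example: a standalone credit-scoring application
    print(is_high_risk(False, "Credit Services"))  # True

Whatever tooling is used for screening, the provider remains responsible for the final classification.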

To find out more about the AI Act, check out the website www.artificialintelligenceact.eu.

Latest EU AI Act Podcasts

Latest EU AI Act Videos

Latest EU AI Act News

Key Dates and Call to Action

 
The EU AI Act has a detailed timeline for its implementation (see the sketch after this list):
  • August 1, 2024: The AI Act will come into force.
  • 6 months later: Provisions on prohibited AI systems will be enforced.
  • 12 months later: Requirements for general-purpose AI (GPAI) system providers will apply.
  • 24 months later: Most other parts of the AI Act will be enforced.
  • 36 months later: Specific obligations for certain AI systems will come into effect.
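
To make these offsets easier to track, the sketch below turns them into approximate calendar dates using simple month arithmetic in Python. The milestone labels and the helper function are assumptions for illustration; the Act itself fixes the precise application dates, which may differ slightly from the computed values.

    from datetime import date

    # Entry into force, as listed above
    ENTRY_INTO_FORCE = date(2024, 8, 1)

    def add_months(d: date, months: int) -> date:
        """Shift a date forward by whole months (the day of the month is kept)."""
        total = d.month - 1 + months
        return d.replace(year=d.year + total // 12, month=total % 12 + 1)

    # Offsets in months from entry into force, per the timeline above
    MILESTONES = {
        "Provisions on prohibited AI systems": 6,
        "Requirements for general-purpose AI (GPAI) providers": 12,
        "Most other parts of the AI Act": 24,
        "Obligations for certain remaining AI systems": 36,
    }

    for label, months in MILESTONES.items():
        print(f"~{add_months(ENTRY_INTO_FORCE, months).isoformat()}: {label}")
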
Employers must ensure their employees receive appropriate AI training to comply with these regulations.
 
Contact us for training resources and more information.