The European Union AI Act is no longer something organisations can treat as a future compliance problem. In 2026, the regulation is actively reshaping how businesses develop, deploy, procure and govern artificial intelligence across Europe and beyond.
While many organisations focused on the initial headlines around banned AI systems and generative AI regulation, the real shift now is operational. Companies are moving from awareness to implementation as enforcement deadlines approach and regulators begin clarifying expectations around governance, transparency, risk management and accountability.
What Is the EU AI Act?
The EU AI Act is the world’s first comprehensive legal framework specifically regulating artificial intelligence. It entered into force in August 2024 and applies directly across all EU member states.
The regulation uses a risk-based model, meaning obligations depend on how much potential harm an AI system could create. Systems are broadly grouped into four categories:
- Unacceptable risk: banned AI practices.
- High risk: heavily regulated systems.
- Limited risk: transparency obligations apply.
- Minimal risk: largely unrestricted.
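To make the tiering concrete, here is a hypothetical internal triage sketch in Python. The mapping of use cases to tiers below is purely illustrative (the Act's actual classification depends on legal analysis of Annex III and related provisions), and all names are assumptions for the example:

```python
# Illustrative sketch only: a hypothetical triage helper mapping example AI
# use cases to the AI Act's four risk tiers. Real classification requires
# legal review; this mapping is an assumption, not legal advice.
RISK_TIERS = ("unacceptable", "high", "limited", "minimal")

# Hypothetical example register; a real one would be maintained with
# compliance and legal input.
USE_CASE_TIERS = {
    "social scoring of citizens": "unacceptable",
    "cv screening for recruitment": "high",
    "credit scoring": "high",
    "customer-facing chatbot": "limited",
    "spam filtering": "minimal",
}

def triage(use_case: str) -> str:
    """Return the provisional risk tier for a known use case, else flag for review."""
    return USE_CASE_TIERS.get(use_case.lower(), "needs-review")

print(triage("Credit scoring"))         # high
print(triage("inventory forecasting"))  # needs-review
```

The key design point the sketch captures is that anything not yet assessed defaults to "needs-review" rather than silently landing in the lowest tier.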
The legislation also applies extraterritorially, meaning non-EU companies may still fall within scope if their AI systems or outputs are used within the EU.
Key EU AI Act Dates
Last week, the European Parliament and Council reached a provisional agreement on the “Digital Omnibus on AI,” which amends the EU AI Act to simplify rules, reduce administrative burdens, and push back key implementation deadlines.
Several important milestones are already in effect or approaching quickly:
| Date | Requirement |
| --- | --- |
| February 2025 | Prohibited AI practices banned and AI literacy obligations began |
| August 2025 | General-purpose AI (GPAI) obligations became applicable |
| August 2026 | High-risk AI obligations and transparency rules begin enforcement |
| August 2027 | Additional obligations for AI embedded in regulated products apply |
What AI Systems Are Considered High Risk?
High-risk AI systems are those that could significantly impact people’s safety, rights or opportunities.
Examples include AI used in:
- Recruitment and employment decisions
- Credit scoring
- Critical infrastructure
- Education and exams
- Medical devices
- Law enforcement
- Biometric identification
- Border control
- Essential public services
These systems face extensive compliance obligations, including:
- Risk management systems
- Technical documentation
- Human oversight
- Logging and traceability
- Data governance controls
- Accuracy and cybersecurity standards
- Post-market monitoring
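The logging and traceability obligation, for instance, implies recording enough about each automated decision to reconstruct it later. A minimal, hypothetical Python sketch is shown below; the record fields, file name and function are assumptions for illustration, not anything prescribed by the Act:

```python
import json
import time
import uuid
from dataclasses import dataclass, asdict
from typing import Optional

# Hypothetical audit record supporting the logging-and-traceability idea;
# field names are assumptions, not a regulatory standard.
@dataclass
class DecisionRecord:
    record_id: str
    timestamp: float
    model_version: str
    inputs: dict
    output: str
    human_reviewer: Optional[str]  # who exercised oversight, if anyone

def log_decision(model_version: str, inputs: dict, output: str,
                 human_reviewer: Optional[str] = None) -> str:
    """Append a JSON line so the decision can be reconstructed later."""
    record = DecisionRecord(
        record_id=str(uuid.uuid4()),
        timestamp=time.time(),
        model_version=model_version,
        inputs=inputs,
        output=output,
        human_reviewer=human_reviewer,
    )
    # Append-only JSON Lines file as a stand-in for a proper audit store.
    with open("decision_audit.jsonl", "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return record.record_id

record_id = log_decision("credit-model-1.4",
                         {"income": 52000, "score": 710},
                         "approved",
                         human_reviewer="analyst_42")
```

In practice an append-only store with versioned model identifiers is what lets a decision be reconstructed months later; the sketch only shows the shape of the record, not a production-grade audit system.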
For many businesses, the challenge is not simply determining whether AI exists inside the organisation, but identifying whether it materially influences important decisions affecting individuals.
The Growing Focus on AI Governance
One of the biggest developments in 2026 is the shift from theoretical compliance to practical evidence.
Regulators increasingly expect organisations to demonstrate:
- How AI systems behave in production
- How decisions can be reconstructed
- Where human oversight exists
- How risks are monitored continuously over time
This is particularly relevant for AI agents and autonomous systems that can take actions, interact with tools, or make multi-step decisions with limited human involvement.
Many organisations are discovering that traditional governance approaches designed for static software systems are not sufficient for modern AI deployment.
Generative AI and GPAI Rules
The AI Act also introduced specific obligations for General-Purpose AI (GPAI) models, including large language models and foundation models.
These obligations include:
- Technical documentation
- Copyright compliance
- Transparency obligations
- Safety and security assessments
- Systemic risk mitigation for the most advanced models
The GPAI Code of Practice is becoming increasingly influential as regulators clarify what “good compliance” looks like in practice. Major technology providers have taken different approaches, with some signing the voluntary framework and others criticising aspects of the rules.
For enterprises using tools like chatbots, copilots, internal AI assistants or AI-generated content systems, transparency obligations are becoming especially important.
AI Literacy Is Now Mandatory
One requirement many businesses initially overlooked is AI literacy.
Since February 2025, organisations using AI systems must ensure staff have an appropriate level of AI understanding based on their role and exposure to AI systems.
This does not necessarily mean every employee needs technical AI expertise. However, businesses are expected to educate teams on AI risks, explain acceptable usage, provide governance guidance and ensure decision-makers understand the limitations of AI systems.
For many organisations, this is driving investment in internal AI governance training and policy development.
Why Businesses Should Pay Attention Now
The EU AI Act is increasingly being treated similarly to GDPR in its early years: initially viewed as complex and distant, but rapidly becoming a standard expectation in procurement, enterprise governance, and customer trust.
Many companies are already reporting that clients and partners are asking about AI Act readiness during procurement and contract discussions.
Potential penalties are significant: for the most serious violations, fines can reach up to €35 million or 7% of global annual turnover, whichever is higher.
However, beyond fines, the bigger commercial risk may be reputational damage, procurement exclusion or losing enterprise trust.
Final Thoughts
The EU AI Act is quickly becoming one of the most influential AI regulations globally. Whether organisations agree with every aspect of the legislation or not, the direction of travel is clear: AI governance is moving from optional best practice to regulated operational requirement.
For businesses using AI internally or externally, the key priority in 2026 is no longer simply experimentation. It is demonstrating responsible deployment, accountability, transparency and control.
The organisations that begin building governance structures early are likely to adapt far more successfully than those waiting for regulators or customers to force action later.
Frequently Asked Questions
Q: Does the EU AI Act apply to companies outside Europe?
A: Yes. The AI Act can apply to organisations outside the EU if their AI systems or outputs are used within the European Union.
Q: When does the EU AI Act fully come into force?
A: The regulation is being phased in gradually between 2025 and 2027, with most major enforcement obligations beginning from August 2026 onward.
Q: What are prohibited AI practices?
A: Examples include social scoring, manipulative AI systems, exploitative AI targeting vulnerable groups, certain biometric surveillance practices and emotion recognition in workplace or educational settings.
Q: What is considered a high-risk AI system?
A: High-risk systems are AI applications that could significantly affect people’s rights, safety, employment, healthcare, education or access to services.
Q: Are ChatGPT-style systems regulated under the AI Act?
A: Yes. General-purpose AI models and generative AI systems fall under dedicated GPAI obligations, especially regarding transparency, copyright, safety and documentation requirements.
Q: What is AI literacy under the AI Act?
A: AI literacy means organisations must ensure employees using or overseeing AI systems understand the relevant risks, capabilities, and governance expectations associated with those systems.
Q: What are the penalties for non-compliance?
A: The highest penalties can reach €35 million or 7% of global annual turnover, depending on the type of violation.
Q: Is the EU AI Act similar to GDPR?
A: Many experts compare the AI Act to GDPR because of its broad scope, extraterritorial reach, and likely global influence on governance standards.
Take the Next Step with AI Ireland
If your board or executive team is grappling with how to restructure for AI, you are not alone – but you do need a clear plan.
AI Ireland offers Executive AI Leadership Sessions designed to help boards and C-Suite leaders assess their current organisational structure, identify where autonomous AI is already operating, and build governance-ready operating models for the future.
These sessions focus on practical, commercially grounded guidance, helping leadership teams understand accountability, AI risk, governance, and strategic adoption in the context of the EU AI Act.
AI Ireland also delivers AI Leadership Briefings to strengthen AI literacy at leadership level, helping executive teams understand what autonomous AI means for their sector and build confidence around strategic AI adoption.
Contact us to learn more and book your session.