The EU AI Act is now partially in effect, with Article 5’s prohibitions on specific AI practices among the first provisions to apply. The European Commission has released clarifying guidelines, a crucial resource for anyone working in the field. This post highlights key takeaways from the 140-page document, offering a practical understanding of these restrictions. (This overview is for informational purposes only and does not constitute legal advice.)
In the EU AI Act course run by AI Ireland, we will delve deeper into AI governance, including these prohibitions. The course begins in March; learn more by signing up for our newsletter.
Key Considerations
Guidelines vs. Law: While helpful, these guidelines are non-binding. The Court of Justice of the EU has the final authority to interpret the AI Act.
High-Risk Implications: AI systems that fall under the exceptions to Article 5 often still qualify as “high-risk” under Article 6(2) and Annex III. This includes, for example, certain emotion recognition systems and AI-based scoring systems used in credit or insurance contexts.
Other Laws Still Apply: Even if an AI system isn’t explicitly prohibited by the AI Act, it must still comply with other EU regulations, such as GDPR’s data processing rules.
Prohibited Practices: A Breakdown
1. Subliminal, Manipulative or Deceptive AI (Article 5(1)(a)): Prohibits AI that uses subliminal techniques or intentionally manipulative/deceptive tactics to distort behaviour and cause significant harm.
- Subliminal Techniques: Includes visual, auditory and other methods that operate below conscious awareness.
- Manipulative Techniques: Personalised manipulation that uses individual data or vulnerabilities to influence choices and cause harm. Intent is irrelevant; manipulation the system learns on its own is still prohibited.
- Deceptive Techniques: AI chatbots or AI-generated content that present false information to deceive and distort behaviour. Intent is irrelevant here as well. Example: a chatbot impersonating a friend to run a scam.
- Significant Harm: Encompasses physical, psychological, financial and economic harm. Assessing “significance” involves considering severity, context, scale, vulnerability and reversibility.
2. Exploiting Vulnerabilities (Article 5(1)(b)): Forbids AI that exploits vulnerabilities due to age, disability, or social/economic situation to distort behaviour and cause significant harm. Examples include AI targeting young users with addictive designs or anthropomorphic AI exploiting children’s emotional attachments.
3. Social Scoring (Article 5(1)(c)): Prohibits AI used to evaluate or classify people based on social behaviour or personal characteristics where this leads to detrimental treatment that is unrelated to the context in which the data was originally collected, or disproportionate to the gravity of the behaviour. This applies to both public and private entities. Example: a credit agency using unrelated personal data to deny housing loans.
4. Crime Prediction Based Solely on Profiling (Article 5(1)(d)): Prohibits AI risk assessments that predict criminal offences based solely on profiling or personality traits. Location-based (place-based) crime prediction and administrative offences fall outside the prohibition. AI used to support a human assessment of criminal involvement based on objective, verifiable facts remains permitted.
5. Facial Recognition Database Creation (Article 5(1)(e)): Forbids AI systems that create or expand facial recognition databases through untargeted scraping of images from the internet or CCTV footage. Targeted scraping (aimed at specific individuals or groups) is not prohibited, and reverse image search counts as targeted scraping. Scraping other biometric data (e.g. voice samples) and generating fictitious facial images also fall outside this prohibition.
6. Emotion Recognition in Workplace/Education (Article 5(1)(f)): Bans AI systems that infer emotions in workplaces and educational institutions, except for medical or safety reasons. Physical states such as pain or fatigue are excluded, and tracking customers’ emotions in call centres falls outside the prohibition. “Safety reasons” are strictly limited to protecting life and health.
7. Biometric Categorisation (Article 5(1)(g)): Prohibits biometric categorisation systems that infer race, political opinions, trade union membership, religion, sex life or sexual orientation. Labelling or filtering lawfully acquired biometric datasets (e.g. for bias mitigation or medical diagnosis) is permitted.
8. Real-Time Remote Biometric Identification (Article 5(1)(h)): Restricts the use of real-time remote biometric identification in publicly accessible spaces for law enforcement, unless strictly necessary for: (1) searching for specific victims or missing persons; (2) preventing an imminent threat to life or safety, or a genuine threat of a terrorist attack; (3) locating suspects of serious criminal offences (listed in Annex II and punishable by at least four years’ imprisonment). Online spaces and controlled-access areas are excluded. Biometric authentication for access control also falls outside this scope.
Final Thoughts
These guidelines offer invaluable clarity, dissecting each prohibition’s legal conditions and providing practical examples. Understanding these restrictions is crucial for responsible AI development and deployment.
AI Literacy Training
Has your company introduced AI literacy training for its employees?
AI Ireland provides short, interactive AI literacy training, delivered virtually or in person.
Contact us today to find out more.