
Ethics in the Annual Report: What Stakeholders Really Want to See on AI

Bottom line: Your stakeholders now expect to see how you govern AI in your annual report. In 2025, 72% of S&P 500 companies disclosed at least one material AI risk in their filings – up from just 12% in 2023. 

The EU AI Act is due to enter full enforcement in August 2026, with fines reaching 7% of global turnover. If your annual report says nothing meaningful about AI ethics, you are signalling a governance gap to investors, regulators and the market.

This is a board-level issue. AI and leadership are now inseparable in the eyes of every major stakeholder group. This article sets out exactly what your annual report should cover, as well as where most organisations are falling short.

Why AI Ethics Disclosure Is Now a Fiduciary Concern

Annual reports have always been about trust and accountability. Stakeholders use them to judge whether a company is well-managed and prepared for what comes next. AI has now joined climate, cybersecurity and governance as a topic that investors actively seek out.

The pressure is coming from multiple directions:

1. Investor scrutiny is intensifying

PwC’s 2025 Global Investor Survey found that over 60% of investors say clear, consistent disclosures improve their confidence in governance. AI-related shareholder proposals rose significantly in 2024 and continue to climb.

2. Regulation is tightening

The EU AI Act’s transparency rules are expected to take effect in August 2026. The SEC has flagged AI-washing as an enforcement priority and is scrutinising whether company disclosures match actual AI practices.

3. Reputational risk tops the list

Among S&P 500 companies, reputational risk is the most commonly disclosed AI risk. Bias, privacy failures and botched implementations erode trust fast.

4. Employees are already ahead of you

A 2025 survey found that 78% of employees are using AI tools at work, with 58% admitting they have shared sensitive company data with AI systems. Your workforce needs guardrails, and stakeholders want to see them.

What Your Annual Report Must Cover on AI Ethics

Research from FTI Consulting and Trinity Business School shows companies are disclosing far more on AI governance than even a year ago. However, a gap persists between high-level strategy statements and meaningful governance detail. Saying “AI is on our agenda” is not sufficient. Stakeholders want specifics.

1. Board-Level AI Oversight

Who on the board owns AI governance? According to PwC’s 2025 Annual Corporate Directors Survey, only 35% of boards have formally integrated AI into their oversight responsibilities. Stakeholders want to know which committee is accountable, how frequently AI risk reaches the board agenda and what director education is in place. This is the foundation of responsible AI and leadership at the highest level.

2. AI Risk Management Framework

Investors expect a structured approach to identifying and mitigating AI risk. The NIST AI Risk Management Framework – with its four functions of govern, map, measure and manage – provides a credible foundation. Your report should describe how you classify AI systems by risk level, assess them before deployment, and monitor them afterwards.

3. Ethical Principles and Policies

Over 120 countries have now adopted ethical AI guidelines. Stakeholders expect companies to show how internal policies align with established frameworks such as the UNESCO Recommendation on the Ethics of AI or the EU’s risk-based approach. Publish your AI ethics policy or principles and reference the standards you follow. This builds credibility and demonstrates fiduciary seriousness.

4. Transparency and Explainability

How are AI decisions explained to the people they affect? This matters most in hiring, lending, insurance and healthcare. The EU AI Act requires that high-risk AI systems provide sufficient transparency for users to interpret outputs. Your report should explain how you meet, or plan to meet, this standard. Responsible AI disclosure is rapidly becoming a competitive differentiator.

5. Bias Detection and Fairness

AI bias is not theoretical; it is documented. Stakeholders now look for evidence that you test for bias, monitor for drift, and take corrective action. Companies that disclose their bias-testing methodology and audit results build far more credibility than those that simply state they “value fairness.”

6. Workforce Impact and Upskilling

How is AI changing roles across your organisation? Stakeholders – especially employees and unions – want to see what retraining and AI literacy programmes exist. McKinsey’s 2025 State of AI survey found nine out of ten organisations now use AI. The human side of that story matters deeply. Leadership workshops and structured upskilling signal that your board takes this seriously.

The Cost of Getting It Wrong

1. Financial penalties

The EU AI Act allows fines of up to €35 million or 7% of global turnover. GDPR-related AI failures can add another €20 million or 4% on top.

2. SEC enforcement

The SEC (Securities and Exchange Commission) and DOJ (Department of Justice) have already taken action against companies for AI-washing: exaggerating AI capabilities to attract investment.

3. Lost investor confidence

Board-level AI oversight disclosure increased by over 84% year-on-year in 2024. Companies that lag behind signal a governance gap.

4. Reputational damage

45 S&P 500 companies now specifically cite AI implementation failure as a material reputational risk.

Mark Kelly, Founder of AI Ireland, states: “Stakeholders don’t need a 50-page AI strategy. They need honest answers to three questions: What AI are you using? What could go wrong? And who is responsible when it does? If your annual report can’t answer those clearly, you have work to do.”

Transparency Is the New Competitive Advantage

AI ethics disclosure in the annual report is no longer optional. Investors, regulators, employees and customers expect transparency about how you govern AI, manage risk and ensure fairness. The organisations that get ahead of this build stronger trust, attract more capital and face fewer regulatory surprises.

If your board has not yet had a structured conversation about AI governance and disclosure, now is the time. An Executive AI Leadership Session gives your board and senior team a clear, practical understanding of AI risks, governance obligations, and what stakeholders expect to see in your next annual report. These are not technical training sessions; they are designed to help directors ask the right questions and make commercially sound decisions.

Book an Executive AI Leadership Session to help your board get ahead of this challenge. Contact AI Ireland to arrange a session tailored to your sector and governance structure.

You can also attend an AI Leadership Presentation or Briefing with AI Ireland to upskill your leadership team in AI, strengthen AI literacy at board level, and support better strategic decision-making across your organisation. These sessions are designed specifically for leaders who want to move from awareness to action.

Frequently Asked Questions

Q: Do boards legally have to disclose AI ethics in their annual report?

A: In many jurisdictions, not yet as a standalone requirement. However, the EU AI Act requires transparency disclosures from August 2026. The SEC is actively scrutinising AI claims. Sustainability reporting frameworks such as the European Sustainability Reporting Standards (ESRS) and the IFRS Sustainability Disclosure Standards (ISSB) now provide structured avenues for AI governance disclosure. Even where it is not mandatory, leading companies disclose voluntarily because stakeholders demand it.

Q: What is the biggest mistake boards make with AI disclosure?

A: Saying too much about strategy and too little about governance. A statement like “AI is central to our growth” is meaningless without evidence of oversight, risk management, and accountability. The SEC has already taken enforcement action against companies that exaggerated their AI capabilities, a practice known as AI-washing.

Q: What frameworks should we align our AI ethics disclosure with?

A: The most widely referenced include the NIST AI Risk Management Framework, the EU AI Act’s risk-based classification, and UNESCO’s Recommendation on the Ethics of AI. For corporate reporting, the European Sustainability Reporting Standards and IFRS Sustainability Disclosure Standards offer structured approaches to responsible AI disclosure.

Q: How can a leadership workshop help our board prepare for AI disclosure?

A: An Executive AI Leadership Session gives directors a practical understanding of AI risks, governance obligations and stakeholder expectations. It is not technical training. It is about helping leaders ask the right questions and make informed decisions about disclosure, oversight and accountability: exactly what stakeholders are looking for.

Q: What is AI-washing and why should boards be concerned?

A: AI-washing is when a company makes exaggerated or inaccurate claims about its use of AI, often to attract investment or boost its market position. The SEC has made this an enforcement priority. For boards, the risk is both legal and reputational. Accurate, honest disclosure protects the organisation and builds long-term stakeholder trust.

Want to understand how AI is really shaping business in Ireland in 2026?

The AI Ireland 2026: The State of AI in Irish Business report reveals that most Irish organisations have moved beyond experimentation into real-world AI use — improving efficiency, boosting engineering productivity, and shifting from reactive to predictive operations — while also facing challenges around integration, skills and governance.

Download the full report to see how companies are turning AI from curiosity into measurable impact, and get strategic insights to inform your own AI roadmap.



By AI Ireland

AI Ireland's mission is to increase the use of AI for the benefit of our society, our competitiveness, and for everyone living in Ireland.
