October 15, 2025

AI Governance – Lessons from Australia

Deloitte’s AI scandal in Australia highlights why strong governance, not technology itself, determines whether artificial intelligence becomes a strategic asset or a reputational risk.

The Deloitte AI Scandal: A Costly Lesson

Deloitte’s recent AI misstep in Australia has made headlines around the world. The firm was found to have used artificial intelligence to draft a government report that contained numerous errors and outright fabrications, including citations to non-existent academic papers and a fabricated quote attributed to a Federal Court judge.

The report, which cost Australian taxpayers AU$440,000, has led Deloitte to refund part of the fee. Yet with the story reported internationally, the reputational damage may prove far more costly.

Importantly, this incident is not a cautionary tale against using AI itself, but rather an example of what happens when AI is deployed without appropriate governance and oversight.

The Real Issue: Governance, Not AI

AI has been used across industries for years, but the rapid rise of generative AI has accelerated adoption at an unprecedented pace. In the rush to seize new opportunities, many organisations have focused more on what AI can do than on the risks it introduces.

These risks span multiple dimensions:

  • Commercial: financial losses from flawed or biased AI systems, or cyberattacks targeting AI models and data.
  • Reputational: damage to brand and trust due to misuse of AI or poor treatment of customers or employees.
  • Regulatory: breaches of legal or compliance obligations, particularly in data handling and transparency.

To manage these risks effectively, boards must ensure strong oversight of AI use, backed by robust governance structures and a clear understanding of where AI is deployed across the organisation’s value chain.

Building an Effective AI Governance Framework

1. Roles and Responsibilities

Boards must first clarify who is accountable for AI usage across the organisation.

  • Identify the Board-level owner responsible for AI oversight.
  • Maintain an AI inventory or register that tracks where and how AI systems are used.

Without a clear picture of AI deployment, it’s impossible for the Board to gain assurance that usage aligns with the organisation’s risk appetite.
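To make the idea of an AI register concrete, here is a minimal sketch in Python. The schema is purely illustrative: the field names (`owner`, `risk_tier`, `approved`) and the escalation rule are assumptions for this example, not a standard or a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class AIRegisterEntry:
    """One row in a hypothetical AI inventory register (illustrative schema)."""
    system: str          # name of the AI system or tool
    business_unit: str   # where in the value chain it is used
    owner: str           # accountable individual
    use_case: str        # what the system is used for
    risk_tier: str       # e.g. "low", "medium", "high"
    approved: bool       # whether the use falls within the AI Usage Policy

def needs_escalation(register: list[AIRegisterEntry]) -> list[AIRegisterEntry]:
    """Flag entries a board would want surfaced: high-risk or unapproved use."""
    return [e for e in register if e.risk_tier == "high" or not e.approved]

register = [
    AIRegisterEntry("DraftAssist", "Consulting", "A. Director",
                    "report drafting", "high", True),
    AIRegisterEntry("ChatHelper", "HR", "B. Manager",
                    "policy Q&A", "low", False),
]
for entry in needs_escalation(register):
    print(f"Escalate: {entry.system} ({entry.business_unit}), "
          f"tier={entry.risk_tier}, approved={entry.approved}")
```

Even a simple structured register like this lets the board ask a precise question ("which AI uses are high-risk or outside policy?") rather than relying on an ad hoc picture of deployment.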

2. Governance Structures

Decide where AI risk and opportunity oversight sits within the governance framework. For most organisations, it won’t require a new committee; existing structures such as the Risk Committee can often take ownership.

Review Board and committee charters to ensure AI is explicitly referenced within their terms of reference. Given the pace of technological change, consider external expert input where appropriate, particularly to help directors interpret technical AI issues and their strategic implications.

3. AI Framework, Policies and Processes

A comprehensive AI governance framework, aligned to corporate strategy and objectives, is central to effective oversight.

Key components include:

  • A firm-wide AI Usage Policy, clearly outlining approved use cases and internal guardrails.
  • A review of existing policy suites, including Data Protection, Cybersecurity, and Procurement, to ensure AI risks are integrated.
  • Oversight of vendors and third parties: understand which suppliers use AI on your behalf, conduct due diligence on their AI governance, and include contractual requirements around responsible AI use.

4. People and Training

Governance is only as strong as the people implementing it. Board members, management, and staff all need a baseline understanding of AI technologies, their benefits, and their risks.

Equally important is awareness of internal AI policies: everyone should know what AI use is permitted and in which contexts it is prohibited. Without this knowledge, the risk of unapproved or inappropriate AI use remains high.

5. Oversight and Reporting

Establish a risk-based monitoring and reporting system for mission-critical AI systems. Boards should receive clear, measurable information on AI performance, compliance, and emerging risks.
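One way to make such reporting measurable is to compare monitoring metrics against board-approved risk-appetite thresholds. The sketch below illustrates this; the metric names and limits are assumptions invented for the example, not a recognised reporting standard.

```python
# Hypothetical board-approved risk-appetite thresholds for a
# mission-critical AI system (illustrative values only).
APPETITE = {
    "hallucination_rate": 0.02,  # max share of outputs flagged as fabricated
    "policy_violations": 0,      # AI uses detected outside the approved policy
    "days_since_review": 90,     # max days since the last independent review
}

def breaches(metrics: dict[str, float]) -> list[str]:
    """Return the names of metrics that exceed the board's risk appetite."""
    return [name for name, limit in APPETITE.items()
            if metrics.get(name, 0) > limit]

# Example quarterly monitoring figures (invented for illustration).
quarterly = {"hallucination_rate": 0.05,
             "policy_violations": 3,
             "days_since_review": 45}

for metric in breaches(quarterly):
    print(f"Outside risk appetite, escalate to board: {metric}")
```

The value of framing reports this way is that "emerging risk" becomes a defined event (a threshold breach) rather than a judgment buried in narrative, which is exactly the kind of clear, measurable information boards need.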

Periodic independent reviews or internal audits of AI systems can provide valuable assurance that governance arrangements are effective and evolving in line with the technology.

Lessons for Boards Worldwide

The Australian Institute of Company Directors (AICD) has provided a strong foundation for boards through its Director’s Guide to AI Governance, an excellent resource outlining key questions directors should ask about AI oversight.

However, the lessons extend far beyond Australia. As AI becomes embedded in business operations globally, boards everywhere must take proactive ownership of AI governance. Waiting for regulation or external scrutiny is no longer an option.

Conclusion

The Deloitte case is a stark reminder that AI itself isn’t inherently risky; poor governance is. Organisations that embed clear accountability, transparent policies, and continuous oversight will be best placed to harness AI’s transformative potential responsibly.

AI can drive extraordinary value, but only when guided by strong ethics, sound governance, and informed leadership.