Deloitte’s recent AI misstep in Australia has made headlines around the world. The firm was found to have used artificial intelligence to draft a government report containing numerous errors and outright fabrications, including references to non-existent academic papers and an invented quote attributed to a federal court judge.
The report, which cost Australian taxpayers $440,000, has led Deloitte to refund part of its fee. Yet the reputational damage may prove far more costly than the refund.
Importantly, this incident is not a cautionary tale against using AI itself, but rather an example of what happens when AI is deployed without appropriate governance and oversight.
AI has been used across industries for years, but the rapid rise of generative AI has accelerated adoption at an unprecedented pace. In the rush to seize new opportunities, many organisations have focused more on what AI can do than on the risks it introduces.
These risks span multiple dimensions: operational, such as errors and fabricated content reaching clients or the public; reputational; legal and regulatory; and financial.
To manage these risks effectively, boards must ensure strong oversight of AI use, backed by robust governance structures and a clear understanding of where AI is deployed across the organisation’s value chain.
Boards must first clarify who is accountable for AI use across the organisation, and build a clear picture of where AI is actually deployed. Without that visibility, the board cannot gain assurance that usage aligns with the organisation’s risk appetite.
Decide where oversight of AI risk and opportunity sits within the governance framework. For most organisations this won’t require a new committee; existing structures, such as the Risk Committee, can often take ownership.
Review Board and committee charters to ensure AI is explicitly referenced within their terms of reference. Given the pace of technological change, consider external expert input where appropriate, particularly to help directors interpret technical AI issues and their strategic implications.
A comprehensive AI governance framework, aligned to corporate strategy and objectives, is central to effective oversight.
Key components include clear lines of accountability, explicit policies on permitted and prohibited AI use, training and awareness for directors and staff, and risk-based monitoring, reporting, and independent assurance.
Governance is only as strong as the people implementing it. Board members, management, and staff all need a baseline understanding of AI technologies, their benefits, and their risks.
Equally important is awareness of internal AI policies: everyone should know which uses of AI are permitted, which are prohibited, and in what contexts. Without this knowledge, the risk of unapproved or inappropriate AI use remains high.
Establish a risk-based monitoring and reporting system for mission-critical AI systems. Boards should receive clear, measurable information on AI performance, compliance, and emerging risks.
Periodic independent reviews or internal audits of AI systems can provide valuable assurance that governance arrangements are effective and evolving in line with the technology.
The Australian Institute of Company Directors (AICD) has given boards a strong foundation with its Director’s Guide to AI Governance, which outlines the key questions directors should ask about AI oversight.
However, the lessons extend far beyond Australia. As AI becomes embedded in business operations globally, boards everywhere must take proactive ownership of AI governance. Waiting for regulation or external scrutiny is no longer an option.
The Deloitte case is a stark reminder that AI itself isn’t inherently risky; poor governance is. Organisations that embed clear accountability, transparent policies, and continuous oversight will be best placed to harness AI’s transformative potential responsibly.
AI can drive extraordinary value, but only when guided by strong ethics, sound governance, and informed leadership.