Artificial intelligence has become mainstream in Australian professional services firms. The shift has not happened overnight but gradually, through growing use of ChatGPT and the adoption of AI-based tools for legal research, client reporting, and automated tax workflows. For many firms, adoption happened organically, driven by individual practitioners looking to save time. Governance, in most cases, has not kept pace.
This gap is now a liability. Regulators have noticed, and so have clients and the wider public. With hard legislative deadlines approaching, including significant Privacy Act obligations that take effect later this year, on 10 December 2026, the window to get ahead of this is closing.
This article sets out what AI governance actually means in practice, why it has become urgent for firms in accounting, tax, legal, wealth management, and insurance broking, and what the first concrete steps look like for a firm that is starting AI governance from scratch.
What Is AI Governance, and What Is It Actually For?
AI governance is the set of policies, processes, and controls that determine how an organisation uses artificial intelligence — who can use it, for what purposes, under what conditions, and with what oversight. It is not a technology project: it is a risk management and accountability framework.
In a professional services context, AI governance answers questions such as:
- Which AI tools are approved for use in client work, and which aren’t?
- What client data (if any) may be entered into these tools?
- Who reviews AI-generated outputs before they are relied upon or provided to clients?
- What happens if an AI tool produces incorrect or misleading output?
- How is the firm’s use of AI disclosed to clients, as required by law?
Without clear, documented, and enforced answers to these questions, firms are exposed. They may be in breach of existing professional obligations without realising it. As Australian regulators become more active and better resourced, “we weren’t aware” is no longer a defence that carries weight.
To be clear, AI governance is not about preventing AI adoption. Most firms that are serious about governance are also serious about using AI well. The goal is to create a structure within which AI can be adopted confidently, with known risks, clear accountability, and documented evidence of due diligence.
The Factors Driving Urgency: Data, Incidents, and Regulatory Signals
Australia’s data breach crisis is escalating
The volume of data breaches reported in Australia has reached record levels. In 2024, the Office of the Australian Information Commissioner (OAIC) was notified of 1,113 data breaches — the highest annual total since the Notifiable Data Breaches (NDB) scheme began in 2018. That figure represents a significant year-on-year acceleration.
- 1,113: notifiable data breaches reported in Australia in 2024, a record high
- +25%: increase in breach notifications compared with 2023
- $4.26M: average cost of a data breach to Australian businesses in 2024 (IBM)

Malicious and criminal attacks were the dominant cause of breaches, accounting for 69% of all notifications in the second half of 2024, with 61% of those being cyber security incidents. The finance sector — which includes wealth managers, financial advisers, and superannuation funds — was among the top reporting sectors.
For professional services firms, this matters acutely. Tax files, financial statements, estate plans, legal correspondence, insurance applications: these are among the most sensitive categories of personal information in existence. A breach in any of these contexts carries serious potential for harm to clients and serious legal exposure for the firm.
IBM calculates that in 2024 the average cost of a data breach to Australian businesses was $4.26 million. For a mid-size accounting or advisory firm, the reputational damage alone could be existential.
AI hallucinations: the professional liability risk hiding in plain sight
Beyond data breaches, there is a second category of risk that professional services firms have been slower to recognise: the risk that AI produces incorrect, fabricated, or misleading output — and that this output is passed on to a client or relied upon in professional work.
In 2025, Australia had its most prominent public example of this risk at an institutional level. Deloitte Australia agreed to partially refund the AU$440,000 paid by the Australian government for a report found to contain apparent AI-generated errors, including a fabricated quote from a Federal Court judgment and references to nonexistent academic research papers.
“The responsibility still sits with the professional using it. Accountants have to own the work, check the output, and apply their judgment rather than copy and paste whatever the system produces.”
— Nikki MacKenzie, Georgia Institute of Technology’s Scheller College of Business, following the Deloitte AI incident, CFO Dive, October 2025
The Deloitte case is not an isolated outlier: it is a prominent data point in a rapidly growing body of evidence. By late 2025, aggregated datasets had recorded nearly 800 documented cases of AI-related citation errors across at least 25 jurisdictions, with a sharp increase through 2025. Australian courts have not been immune: multiple matters have been recorded in which false or fabricated AI-generated legal content was filed with the court.
For tax agents, accountants, lawyers, and financial advisers, the implications are serious. Professional indemnity insurance does not automatically cover liability arising from negligent use of AI tools. Regulatory bodies are paying attention, and clients who receive advice based on fabricated AI outputs have grounds to pursue professional negligence claims.
The core principle that regulators keep returning to is clear: the technology does not bear the responsibility. The professional does.
Regulators are sending consistent, escalating signals
ASIC was one of the first Australian regulators to formally address the AI governance gap in financial services. In October 2024, ASIC published REP 798 Beware the Gap: Governance Arrangements in the Face of AI Innovation, warning that licensees are adopting AI technologies faster than they are updating their risk and compliance frameworks — a lag that creates significant risks, including potential harm to consumers.
ASIC emphasised that the existing regulatory framework for financial services is technology-neutral, meaning it applies equally to AI and non-AI systems. The use of AI must comply with the general obligation to provide financial or credit services “efficiently, honestly and fairly,” and representations regarding AI use must be factual and accurate.
For wealth managers, financial planners, and insurance brokers holding Australian Financial Services Licences (AFSLs), this is not guidance to file away for later. It is a live compliance expectation.
The Tax Practitioners Board has also moved. In March 2026, the TPB opened public consultation on draft guidance specifically addressing the use of AI by tax practitioners, aimed at helping them understand their obligations under the Code of Professional Conduct when using AI tools in the delivery of tax agent services. Consultation closes in April 2026, with finalised guidance to follow, meaning formal TPB expectations around AI use are imminent.

Why Act Now? The Deadlines That Matter
10 December 2026: Automated decision-making transparency obligations
This is the most consequential near-term deadline for professional services firms using AI in client work.
On 10 December 2026, automated decision-making transparency obligations under the Privacy Act 1988 (Cth) will come into effect. These require APP entities to include additional information in their privacy policies relating to any automated decisions, specifically disclosing the types of personal information used, and the nature of decisions made by or substantially assisted by computer programs, where those decisions could significantly affect an individual’s rights or interests.
If your firm uses AI to assist with tax advice, financial planning recommendations, credit assessments, or insurance underwriting decisions — and that output materially affects a client — you will need to have mapped, documented, and disclosed those processes before December 2026.
What this means in practice
To comply, firms need to do four things (a minimal register sketch follows this list):
- Identify every AI-assisted decision process that touches personal information;
- Assess whether those decisions “significantly affect” individual rights or interests;
- Update privacy policies with clear, accessible disclosures; and
- Establish internal procedures for managing AI-related queries or complaints.
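To make this mapping concrete, here is a minimal sketch, in Python, of what such a decision register might look like and how it could generate the disclosure points the new obligations call for. Every name and field below is an illustrative assumption, not a format prescribed by the Privacy Act or the OAIC.

```python
from dataclasses import dataclass

@dataclass
class AutomatedDecisionProcess:
    """One AI-assisted decision process that touches personal information.

    Illustrative structure only -- field names are assumptions,
    not a format prescribed by the Privacy Act or the OAIC.
    """
    name: str                       # e.g. "Credit assessment triage"
    personal_info_types: list[str]  # types of personal information used
    decision_nature: str            # what the program decides or substantially assists
    significantly_affects: bool     # firm's assessment against the Privacy Act threshold
    human_review: str               # who signs off before the output is relied upon

def privacy_policy_disclosures(register: list[AutomatedDecisionProcess]) -> list[str]:
    """Draft disclosure lines for processes meeting the 'significantly affect' threshold."""
    return [
        f"{p.name}: uses {', '.join(p.personal_info_types)}; {p.decision_nature}"
        for p in register
        if p.significantly_affects
    ]

register = [
    AutomatedDecisionProcess(
        name="Credit assessment triage",
        personal_info_types=["income details", "credit history"],
        decision_nature="a computer program substantially assists lending recommendations",
        significantly_affects=True,
        human_review="Senior broker sign-off before any recommendation is issued",
    ),
]

for line in privacy_policy_disclosures(register):
    print("-", line)
```

A register like this also doubles as the audit trail described later in this article: it records not just what must be disclosed, but who reviews each process and why it was assessed as significant.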
Non-compliance will expose organisations to the Privacy Act’s civil penalty regime. Fines can reach $62,600 per offence, and significantly more for serious interference with privacy: up to the greater of $50 million, three times the benefit obtained, or 30% of adjusted turnover.
The OAIC is already sweeping — and watching
In January 2026, the OAIC commenced its first-ever privacy compliance sweep, targeting approximately 60 organisations across six sectors where personal information is commonly collected. The OAIC is assessing privacy policies for compliance with existing Australian Privacy Principles.
The message is clear: the regulator is not waiting for the December 2026 deadline before taking action.
The TPB Code Determination: obligations that already apply
The TPB’s Code of Professional Conduct Determination 2024 introduced eight additional Code obligations for registered tax practitioners.
For tax practitioners with 100 or fewer employees, these obligations applied from 1 July 2025. These include heightened requirements around quality management, supervision, and client disclosure — all of which are directly implicated by AI use. Tax agents using AI to prepare or review returns, draft correspondence, or conduct research without documented oversight processes may already be in breach.
Shadow AI: the risk you may not know you have
One of the most common governance failures in professional services firms is not a failure of policy, but a failure of visibility. Employees adopt AI tools individually without approval or oversight, and without any disclosure to clients whose data may be processed by those tools.
According to a KPMG study cited in coverage of the Deloitte incident, nearly six out of ten employees admit to making mistakes at work due to AI errors, approximately half use AI in the workplace without knowing whether it is allowed, and more than four in ten acknowledge using it improperly.
For a professional services firm, each of these instances is a potential compliance breach, a potential professional indemnity exposure, and a potential privacy violation, all occurring without management’s knowledge.

What Should Professional Services Firms Do Next?
Governance does not need to be complex to be effective. For most mid-size professional services firms, a structured, staged approach is far more practical than attempting to build a comprehensive framework in one effort. Here is where to start.
1. Conduct an AI usage audit
Before you can govern something, you need to know it exists. Survey your team — honestly and without blame — to understand which AI tools are currently in use, for what purposes, and whether any client data is being entered. This audit often surfaces significant surprises: many firms discover staff are routinely using tools that were never formally approved.
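As a concrete illustration, the audit itself can start as something very simple: tally survey responses against the firm’s approved-tool list and flag anything unapproved. The sketch below assumes a flat list of survey rows; the tool names and the approved list are hypothetical.

```python
from collections import Counter

# Hypothetical survey rows: (staff member, tool used, client data entered?)
survey = [
    ("A. Nguyen", "ChatGPT (free tier)", True),
    ("B. Singh",  "Microsoft Copilot",   False),
    ("C. Walker", "ChatGPT (free tier)", False),
]

APPROVED_TOOLS = {"Microsoft Copilot"}  # the firm's approved list -- illustrative

# Count how often each tool appears, then isolate unapproved ("shadow AI") use.
tool_counts = Counter(tool for _, tool, _ in survey)
shadow = [(who, tool, data) for who, tool, data in survey if tool not in APPROVED_TOOLS]

print("Tools in use:", dict(tool_counts))
for who, tool, client_data in shadow:
    flag = "CLIENT DATA ENTERED" if client_data else "no client data reported"
    print(f"Unapproved: {tool} used by {who} ({flag})")
```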
2. Map your automated decision processes
Identify every workflow where AI output influences a decision that affects a client — tax lodgements, financial advice documents, legal drafts, insurance assessments. Assess whether these decisions “significantly affect” individual rights or interests under the Privacy Act definition. This mapping exercise is the foundation for your December 2026 compliance obligations.
3. Develop and communicate an AI use policy
Establish clear, written guidelines covering: which tools are approved for use; what client data may and may not be entered; what review and sign-off is required before AI-assisted work is delivered to a client; and how AI use is to be disclosed. A policy sitting in a drawer is not a governance framework: staff need to be trained on it, and it needs to be enforced and documented.
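One way to keep the policy enforceable rather than shelved is to encode its core rules as data that can be checked before a tool is used. The sketch below is a hypothetical policy-as-data example; the tools, data categories, and rules are assumptions, not a recommended policy.

```python
# Hypothetical AI use policy encoded as data, checked before use.
POLICY = {
    "Microsoft Copilot": {"allowed_data": {"internal drafts", "public information"}},
    "Karbon AI":         {"allowed_data": {"internal drafts"}},
    # Tools absent from POLICY are not approved for client work.
}

def check_use(tool: str, data_category: str) -> str:
    """Return a plain-English ruling for a proposed tool/data combination."""
    rules = POLICY.get(tool)
    if rules is None:
        return f"BLOCK: {tool} is not on the approved list."
    if data_category not in rules["allowed_data"]:
        return f"BLOCK: {data_category!r} may not be entered into {tool}."
    return f"OK: {tool} may be used with {data_category!r}; human review still required."

print(check_use("ChatGPT (free tier)", "client tax file"))
print(check_use("Microsoft Copilot", "internal drafts"))
```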
4. Update your privacy policy and client disclosures
If you are using AI in any way that touches personal information, your privacy policy needs to reflect this. Your APP 5 collection notices may also need updating. This is not optional from December 2026; regulators are already actively checking for compliance with existing privacy policy requirements.
5. Establish human oversight checkpoints
Every AI-assisted output that is delivered to a client or relied upon in professional work should pass through a human review step. This is not just good practice — it is a professional obligation under the TPB Code, ASIC’s AFSL framework, and the duty of care that professional practitioners owe their clients. Document these checkpoints in your quality management procedures.
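Documenting these checkpoints can be as lightweight as a structured review record attached to each AI-assisted deliverable. The sketch below shows one hypothetical shape for such a record; the fields are not drawn from any regulator’s template.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ReviewCheckpoint:
    """A human sign-off on one AI-assisted output -- illustrative fields only."""
    deliverable: str      # e.g. "FY25 tax return, client #1042"
    ai_tool: str          # which approved tool assisted
    reviewer: str         # qualified person who checked the output
    checks_done: list[str] = field(default_factory=list)
    signed_off: bool = False
    review_date: date = field(default_factory=date.today)

record = ReviewCheckpoint(
    deliverable="Statement of advice draft",
    ai_tool="Microsoft Copilot",
    reviewer="J. Patel (senior adviser)",
    checks_done=["citations verified", "figures recalculated", "suitability reviewed"],
    signed_off=True,
)
print(record)
```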
6. Assess your vendor and third-party risk
If you are using third-party AI tools — Harvey, Microsoft Copilot, Karbon AI, or any other platform — you need to understand what happens to client data within those platforms. Does the vendor train on your data? Who has access? Is client data processed offshore? The OAIC has confirmed that if an AI system developer has access to personal information processed through their system, this constitutes a disclosure that must be included in your APP 5 notices. Your vendor contracts need to be reviewed accordingly.
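These due diligence questions can likewise be captured in a simple vendor register so that unresolved answers are visible at a glance. The sketch below uses a placeholder vendor; the answers are illustrative, not claims about any real product’s data handling.

```python
# Hypothetical vendor risk register -- answers are placeholders, not claims
# about any real product's data handling.
VENDORS = {
    "Example Research Tool": {
        "trains_on_client_data": None,  # unknown -> must be resolved before approval
        "offshore_processing": True,
        "vendor_staff_access": True,    # triggers an APP 5 disclosure per the OAIC
    },
}

for name, answers in VENDORS.items():
    gaps = [q for q, a in answers.items() if a is None]
    risks = [q for q, a in answers.items() if a is True]
    print(f"{name}: unresolved questions: {gaps or 'none'}; flagged risks: {risks or 'none'}")
```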
A note on timing
The December 2026 deadline for automated decision-making disclosure obligations feels distant. It is not. Building a governance framework, conducting an audit, updating privacy policies, training staff, and reviewing vendor contracts all take time, particularly in a professional services environment where any diversion from billable time is the primary constraint on operational projects.
Firms that begin this process in the second half of 2026 will find themselves under significant pressure. Those that begin now, before the compliance sweep reaches their sector, before a data breach forces the issue, before a client complaint triggers regulatory scrutiny, will be in a materially stronger position.
The AI governance gap that ASIC identified in financial services in 2024 exists across the professional services sector. Closing it is not merely a compliance exercise. It is a professional obligation and a commercial differentiator, particularly as clients become more sophisticated in their questions about how their data is handled.