Strategy · 10 min read

OAIC Published New AI Guidance in January. Most Australian Boards Still Haven't Acted. Here's the Q2 Checkpoint.

The Office of the Australian Information Commissioner released detailed AI guidance in January 2026 — covering how the Australian Privacy Principles apply to AI development and use. With the OAIC's compliance sweep active and the December 10 deadline 252 days away, the time for reading is over. This is the Q2 action checklist for Australian boards and executive teams.

Kishore Reddy Pagidi

AI PM at SOLIDWORKS. Founder, Akira Data.

*Published 2 April 2026.*

In January 2026, the Office of the Australian Information Commissioner published two sets of guidance on how the Australian Privacy Principles apply to AI — one for organisations *developing* AI models, one for organisations *using* commercially available AI tools. In the same month, the OAIC launched its first-ever proactive compliance sweep, targeting 60 organisations across six sectors.

It is now Q2 2026. The December 10 automated decision-making deadline is 252 days away.

Most Australian boards have not acted on the January guidance. Many have not read it. This article is for the executive teams and boards that need a practical action list — not more analysis — to close the gap before the regulator closes it for them.

What the OAIC Actually Said

The OAIC's January guidance is the clearest statement the regulator has made on AI and privacy. It covers two scenarios:

For organisations developing AI: APP entities building AI models must ensure that personal information used in training is collected with appropriate notice and consent, is used only for disclosed purposes, and is protected to an appropriate standard throughout the development pipeline. The guidance specifically calls out that training data must not be retained beyond what is necessary and that individuals whose data was used have access and correction rights.

For organisations using commercially available AI tools: APP entities using tools such as Microsoft Copilot, Google Gemini, Salesforce Einstein, and similar platforms remain responsible for the Privacy Act compliance of any personal information those tools process. The vendor's terms of service do not discharge your obligations. If the tool processes Australian personal data, you need to have assessed whether that processing is disclosed, limited to necessary purposes, and covered by appropriate cross-border data transfer mechanisms.

The guidance also makes clear that the December 2026 automated decision-making transparency obligations will be enforced. From 10 December 2026, APP entities must be able to notify individuals when significant decisions affecting them are made using automated means, and must be able to provide meaningful explanations on request.

The OAIC's position is not ambiguous. The question is whether Australian organisations are acting on it.

Why Most Boards Haven't Moved

The pattern across Australian mid-market organisations is consistent. The OAIC guidance was noted — often by the GC or privacy team — distributed by email, and acknowledged. Then it sat.

This is not a compliance failure in the traditional sense. It is a prioritisation failure. The guidance specifies no compliance date other than December 10 for the automated decision-making requirements, and not acting in January or February carried no visible immediate consequence. So the guidance joined the queue.

What has changed in Q2 is that the OAIC compliance sweep is now generating results. The 60 organisations targeted in January are responding to formal notices. The findings will become public. The sectors targeted — financial services, health, retail, telecommunications, professional services, digital platforms — cover the majority of Australian mid-market businesses.

When the first enforcement action that references AI system non-compliance is published, the board conversation changes immediately. The Q2 checkpoint is about acting before that moment, not in response to it.

The Q2 Action Checklist

This checklist is structured around the four things the OAIC guidance and the December deadline require. Each item is binary: done, or not done.

1. AI System Inventory — Do You Know What You're Running?

Status: Done or Not Done

The OAIC guidance requires that you can describe the AI systems you operate, what personal data they process, and what decisions they make. You cannot produce explanations for automated decisions made by systems you have not inventoried.

The inventory question is not technical — it is organisational. It requires someone to ask every department: what AI tools are you using? Which of those tools process personal information about customers, employees, or members of the public? Which of those tools make or substantially assist in decisions that could significantly affect individuals?

The answer is almost always broader than the IT team's list. Procurement, HR, marketing, finance, and operations frequently use AI tools that were sourced outside the IT procurement process. The AI agent register needs to capture all of them.

Q2 action: Assign ownership. One named executive is accountable for the AI agent register. Set a 30-day deadline for the first version. It does not need to be perfect — it needs to exist.
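
For teams starting the register from scratch, the sketch below shows one way the first version could be structured. It is a minimal illustration only; the field names, categories, and the dataclass approach are assumptions, not a format prescribed by the OAIC guidance.

```python
from dataclasses import dataclass
from datetime import date

# Minimal sketch of one AI agent register entry. Field names and categories
# are illustrative assumptions, not a schema prescribed by the OAIC guidance.
@dataclass
class RegisterEntry:
    system_name: str                      # e.g. "Resume screening assistant"
    vendor: str                           # vendor name, or "built in-house"
    business_owner: str                   # the named person accountable for the entry
    departments: list[str]                # where the tool is actually used
    personal_data_categories: list[str]   # e.g. ["job applicant details", "work history"]
    makes_significant_decisions: bool     # in scope for the 10 December obligations?
    last_reviewed: date

# An example entry surfaced by asking HR directly, not by checking the IT asset list.
register = [
    RegisterEntry(
        system_name="Resume screening assistant",
        vendor="Hypothetical Vendor Pty Ltd",
        business_owner="Chief People Officer",
        departments=["HR"],
        personal_data_categories=["job applicant details", "work history"],
        makes_significant_decisions=True,
        last_reviewed=date(2026, 4, 30),
    ),
]
```

Even a spreadsheet with these columns serves the purpose; the point is that the register exists and has a named owner.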

2. Third-Party Tool Assessment — Have You Checked Your Vendors?

Status: Done or Not Done

The OAIC guidance is explicit: APP entities using commercially available AI tools are responsible for Privacy Act compliance. For each AI tool in your inventory that processes personal information, you need to be able to answer:

  • Is the processing disclosed in your privacy policy and collection notices?
  • Is the data being processed offshore? If so, are the cross-border transfer obligations met?
  • Does the vendor retain data for training purposes? If so, have individuals consented?
  • Can you switch the tool off without losing critical data about individuals?

For many Australian businesses, the honest answer to several of these questions is "we don't know." That is the finding the OAIC compliance sweep is looking for.

Q2 action: Run a third-party AI vendor review for the top 10 AI tools by volume of personal data processed. Document the findings. Update privacy policies where required. For tools that cannot pass the assessment, either remediate or replace.
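
One way to keep the review consistent across the top 10 tools is to record the answers to the four questions above in a fixed structure. The sketch below is illustrative; the field names and pass logic are assumptions, not an OAIC-mandated assessment format.

```python
from dataclasses import dataclass

# Minimal sketch for documenting vendor review findings per tool.
# Fields mirror the four questions above; None means "we don't know",
# which is itself a finding, not a pass.
@dataclass
class VendorAssessment:
    tool_name: str
    processing_disclosed: bool | None          # disclosed in privacy policy and notices?
    data_offshore: bool | None                 # processed outside Australia?
    cross_border_obligations_met: bool | None  # only relevant if data_offshore is True
    vendor_retains_for_training: bool | None
    individuals_consented_to_training: bool | None
    can_switch_off_without_data_loss: bool | None
    remediation: str = ""                      # e.g. "update collection notice" or "replace"

    def passes(self) -> bool:
        # Any unanswered question counts as a gap, not a pass.
        offshore_ok = (self.data_offshore is False
                       or self.cross_border_obligations_met is True)
        training_ok = (self.vendor_retains_for_training is False
                       or self.individuals_consented_to_training is True)
        return (self.processing_disclosed is True
                and offshore_ok
                and training_ok
                and self.can_switch_off_without_data_loss is True)
```

Tools that fail the assessment go onto the remediate-or-replace list the Q2 action calls for.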

3. Privacy Policy and Collection Notices — Do They Reflect Reality?

Status: Done or Not Done

The Privacy Act requires that your privacy policy accurately describes how you collect, use, and disclose personal information. For most Australian businesses, the privacy policy was written before the current generation of AI tools was deployed. It describes traditional data collection and processing. It does not describe automated decision-making.

The OAIC's January compliance sweep checked this specifically. Regulators reviewing a business's privacy compliance will look at the privacy policy first. If the policy does not mention AI-assisted processing, automated decision-making, or the use of third-party AI tools, the gap is visible before any technical investigation begins.

Q2 action: Privacy policy review against current AI tool inventory. Add disclosure of automated decision-making processes. Update collection notices where AI processing has not been disclosed. This is a legal drafting task — brief your lawyers with the output of the AI agent register.

4. Audit Trail Infrastructure — Can You Produce an Explanation?

Status: Done or Not Done

The December 10 obligation is functional, not documentary. It is not satisfied by having a policy that says "we use explainable AI." It requires that when an individual requests an explanation of a decision made about them by an automated system, you can retrieve:

  • The timestamp of the decision and the identity of the individual it concerns
  • The input data the system processed
  • The reasoning pathway or factors the system weighted
  • The decision taken

If your AI systems do not generate structured decision logs — if the reasoning is not captured at the time the decision is made — you cannot produce this explanation. The decision happens. The result is returned. There is no record.
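
What a compliant record has to capture can be made concrete. The sketch below shows the kind of structured decision log entry that would need to be written at the moment the decision is made; the field names, the append-only JSON Lines file, and the helper function are illustrative assumptions, not a prescribed format.

```python
import json
import uuid
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Minimal sketch of a structured decision log record, captured at decision time.
# Fields mirror the four items above; names and storage format are assumptions.
@dataclass
class DecisionRecord:
    decision_id: str
    timestamp_utc: str
    subject_id: str        # the individual the decision is about
    system_name: str
    inputs: dict           # the input data the system processed
    factors: list[dict]    # the factors the system weighted
    decision: str          # the outcome returned

def log_decision(subject_id: str, system_name: str,
                 inputs: dict, factors: list[dict], decision: str) -> DecisionRecord:
    record = DecisionRecord(
        decision_id=str(uuid.uuid4()),
        timestamp_utc=datetime.now(timezone.utc).isoformat(),
        subject_id=subject_id,
        system_name=system_name,
        inputs=inputs,
        factors=factors,
        decision=decision,
    )
    # Append-only log so an explanation can be retrieved months later on request.
    with open("decision_log.jsonl", "a") as fh:
        fh.write(json.dumps(asdict(record)) + "\n")
    return record

# Example: a hypothetical credit limit decision.
log_decision(
    subject_id="customer-1042",
    system_name="Credit limit model v3",
    inputs={"declared_income": 85000, "existing_limit": 5000},
    factors=[{"factor": "income-to-limit ratio", "weight": 0.6}],
    decision="limit increase declined",
)
```

If a system in scope cannot emit a record like this at decision time, that is the engineering gap the Q2 action below asks you to scope and resource.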

Building audit trail infrastructure after deployment is expensive and technically complex. The systems in scope for the December deadline need to be assessed now: can they produce a compliant explanation? If not, the engineering work to build that capability needs to start in Q2, not Q3.

Q2 action: Identify all Tier 1 AI systems — those making decisions that could significantly affect individuals. For each, assess whether structured decision logging exists. For those that do not have it, scope the engineering work and resource it.

The Sectors Under the Most Pressure

Based on the OAIC's six sectors targeted in the January sweep, and the pattern of AI adoption in the Australian market, the organisations with the highest current exposure are:

Financial services: Loan decisioning, risk scoring, fraud detection, and personalised advice tools are typically the highest-volume automated decision-making systems in any financial services business. APRA-regulated entities have additional obligations under CPS 230 and CPG 220. Many of these systems were deployed before the current Privacy Act guidance was published.

Healthcare: Clinical decision support, triage, and patient management AI systems process the most sensitive personal information in the Australian economy. The Privacy Act's enhanced protections for health information apply. Any health service using AI for clinical or administrative decision-making needs all four checklist items addressed before December.

Professional services: Law firms, accounting firms, and consultancies using AI for document review, client data analysis, or workflow automation are often the least prepared because they have historically not considered themselves high-risk for privacy enforcement. The OAIC's first-ever compliance sweep included professional services specifically.

Retail and eCommerce: Loyalty programme AI, personalised recommendation engines, and dynamic pricing tools all involve automated processing of personal information. The volume of decisions — millions per week for larger retailers — makes the audit trail requirement technically demanding.

The Board Conversation

At the next board meeting where AI is on the agenda, the right question is not "what AI are we deploying?" It is: "For the AI systems we have already deployed, are we in a defensible position if the OAIC requests information about our automated decision-making?"

The TrendAI research published last week found that 66% of Australian business decision-makers felt pressure to approve AI initiatives with known compliance risks. The OAIC guidance and the December deadline create a specific and time-bounded consequence for that pattern. The Q2 checkpoint is the last comfortable point at which to address it.

For organisations that have not yet started, the AI Readiness Sprint — completed in two weeks — provides the board with a current state inventory, gap assessment, and prioritised remediation plan that answers the regulator's likely questions directly.


*Akira Data builds the governance and technical infrastructure Australian businesses need to use AI compliantly — agent registers, audit trail systems, privacy-safe implementation, and December 2026 compliance builds. The AI Readiness Sprint (AUD $7,500) is the right starting point for boards that need a current state assessment before Q2 ends.*

*This article references OAIC AI Guidance published January 2026, the Privacy and Other Legislation Amendment Act 2024 (automated decision-making provisions effective 10 December 2026), the OAIC January 2026 proactive compliance sweep, and TrendAI AI Adoption and Governance Research 2026. It is general information and does not constitute legal advice.*
