What the OAIC Will Check When It Audits Your AI: The Complete 2026 Compliance Checklist
The OAIC's first-ever proactive compliance sweep is active right now — targeting 60 Australian organisations across six sectors. The sweep is not waiting for complaints. It is checking AI data practices, privacy policies, and automated decision-making disclosures. Here is the exact checklist the OAIC uses, what businesses are failing on, and how to pass before you receive a notice.

AI PM at SOLIDWORKS. Founder, Akira Data.
*Published 31 March 2026.*
In January 2026, the Office of the Australian Information Commissioner did something it had never done before.
It did not wait for a complaint. It did not wait for a notified data breach. It launched Australia's first-ever proactive privacy compliance sweep — sending formal notices to approximately 60 organisations across financial services, health, retail, telecommunications, professional services, and digital platforms.
The message was unambiguous: the era of reactive privacy enforcement in Australia is over. The OAIC is now actively auditing organisations whether or not anyone has complained about them.
For Australian businesses using AI, this changes everything about how to think about Privacy Act compliance. You cannot wait for someone to complain. You cannot assume that because nothing has gone wrong, nothing will be checked. The OAIC is building an evidence base, comparing AI data practices across sectors, and identifying the systemic gaps that individual complaints would never surface.
This article gives you the complete checklist of what the OAIC checks — the exact assessment methodology, the specific items businesses are failing on, and the actions that get you to pass before you receive a notice.
How the OAIC Compliance Sweep Actually Works
The OAIC's published compliance sweep methodology reveals a structured assessment across three phases.
Phase 1: Document review
The OAIC reviews publicly available documents — primarily your privacy policy — and cross-references them against your known business activities. For businesses that have been publicly identified as using AI (through press releases, LinkedIn announcements, product descriptions, or news coverage), the OAIC is specifically looking for whether AI use is disclosed in the privacy policy.
The gap most commonly identified in Phase 1: a privacy policy that was last updated in 2023 or earlier, describes data collection and storage in generic terms, and says nothing about AI processing, automated decision-making, or inferences derived from personal data.
Phase 2: Information request
For organisations selected for deeper review, the OAIC issues a formal information request. This typically covers:
- Specific AI systems deployed and the personal data they process
- The types of decisions made or substantially influenced by automated means
- Current audit trail and explainability capability
- Third-party AI vendors and data processing arrangements
- Incident response and breach notification history related to AI systems
Organisations that receive a Phase 2 information request and do not have complete records face enforcement proceedings as the next step.
Phase 3: Assessment and outcome
The OAIC produces a compliance assessment and notifies the organisation of its findings. Possible outcomes range from a formal compliance notice (requiring specific remediation within a stated timeframe) to referral for civil penalty proceedings for serious or repeated breaches. The most common outcome for the current sweep, based on OAIC communications, is a formal assessment with remediation requirements — not immediate proceedings. But organisations that received a Phase 1 notice and did nothing received Phase 2 information requests.
The Complete OAIC AI Compliance Checklist
The following checklist reflects the specific items the OAIC is assessing in its 2026 compliance sweep, mapped to the Australian Privacy Principles and the 2024 Privacy Act amendments.
CHECK 1: Privacy Policy — Automated Decision-Making Disclosure
What the OAIC checks: Does your privacy policy accurately and specifically disclose the use of automated decision-making?
The requirement: Under APP 1.3, APP 1.7, and APP 1.8 (which come into full mandatory effect on 10 December 2026 but are already being assessed for compliance posture), the privacy policy must disclose:
- Which types of decisions are made or substantially assisted by automated means
- What types of personal information are used in those decisions
- Whether decisions are made solely by automated means or with human review
- How individuals can request information about and explanation of automated decisions
What businesses are failing on: The most common failure is a privacy policy that describes AI in a generic corporate strategy section ("we use technology to improve our services") but does not list specific automated decision types. This does not satisfy the disclosure requirement.
What passes: Specific disclosure for each significant AI use case. "We use automated systems to assess loan applications, calculate insurance premiums, prioritise customer service requests, and screen employment applications. Each of these processes uses personal data including [list categories]. Decisions are [made by automated means / reviewed by a human before final action]. Individuals may request an explanation of any automated decision affecting them by contacting [privacy@yourdomain.com.au]."
CHECK 2: Privacy Policy — AI Inferences and Derived Attributes
What the OAIC checks: Does your privacy policy disclose that AI creates inferences or derived attributes from personal data?
The requirement: The Privacy and Other Legislation Amendment Act 2024 extended the definition of personal information to include inferences and derived attributes about individuals. If your AI creates a credit risk score, a health risk indicator, a churn probability, a sentiment classification, or any other derived value about an individual — that derived value is personal information, and the privacy policy must disclose it.
What businesses are failing on: Most businesses that have updated their privacy policies for AI disclose the categories of data they collect. Very few disclose the derived attributes and inferences their AI systems create from that data.
What passes: "Our AI systems may create derived attributes or inferences about you from the data we hold, including [specific examples: financial risk profiles, service propensity scores, communication preference inferences]. These derived attributes constitute personal information under the Privacy Act 1988 (Cth) and are handled in accordance with this policy."
CHECK 3: Collection — Minimal and Necessary
What the OAIC checks: Is personal data collected by AI systems limited to what is reasonably necessary?
The requirement: APP 3 requires that personal information be collected only if reasonably necessary for a function or activity. AI systems have a tendency toward data maximisation — collecting everything potentially relevant, "just in case it's useful for training." The OAIC is specifically assessing whether AI data collection is proportionate to the stated purpose.
What businesses are failing on: Training data practices. Many businesses that have deployed fine-tuned or custom AI models have used historical customer data for model training without assessing whether that use is reasonably necessary and within the scope of the original collection purpose. Customer service records collected to resolve enquiries are not automatically available for AI training.
What passes: A documented data minimisation policy for each AI system specifying which data categories are required, why they are necessary, and how the scope of collection was determined. For training data: documented assessment of whether historical data use for training is within the scope of the original collection purpose, and if not, what consent basis applies.
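One way to make a data minimisation policy enforceable in code is a per-system allow-list that strips any field not documented as necessary before a record reaches an AI system. The sketch below is illustrative only: the system names, field names, and example record are assumptions, not a prescribed schema.

```python
# Sketch: enforce a per-AI-system allow-list of personal data fields.
# System names and field names are illustrative assumptions.

ALLOWED_FIELDS = {
    # Each AI system lists only the categories documented as
    # reasonably necessary for its stated purpose (APP 3).
    "loan_assessment": {"income", "employment_status", "existing_debts"},
    "service_triage": {"enquiry_text", "product_id"},
}

def minimise(record: dict, system: str) -> dict:
    """Return only the fields the named AI system is authorised to receive."""
    allowed = ALLOWED_FIELDS[system]
    return {k: v for k, v in record.items() if k in allowed}

customer = {
    "income": 85_000,
    "employment_status": "full-time",
    "existing_debts": 12_000,
    "date_of_birth": "1990-01-01",  # not necessary for this purpose
    "browsing_history": [],         # "just in case" data, excluded by design
}

payload = minimise(customer, "loan_assessment")
# payload contains only income, employment_status, existing_debts
```

The allow-list itself doubles as the documentation the OAIC asks for: each entry records which categories a system receives, and anything not listed simply never leaves the production store.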
CHECK 4: Use and Disclosure — Purpose Limitation
What the OAIC checks: Is personal information used only for the purpose for which it was collected?
The requirement: APP 6 requires that personal information only be used for the primary purpose of collection, or a secondary purpose the individual would reasonably expect. The OAIC's Clearview AI determination (March 2026) established that this principle applies to AI specifically: collecting publicly available information for one purpose does not authorise using it for AI training or profiling.
What businesses are failing on: Three common failures. First: repurposing customer interaction data (support tickets, call recordings, chat logs) for AI model training without separate authorisation. Second: using AI to create behavioural profiles for marketing targeting from data collected for service delivery purposes. Third: providing customer data to third-party AI vendors for purposes beyond the contracted service.
What passes: A documented purpose limitation register for each AI use of personal data, specifying the collection purpose, the AI use purpose, and the authorisation basis for each. For uses that go beyond the collection purpose: either an updated collection notice, separate consent, or documented assessment that the use falls within a secondary purpose the individual would reasonably expect.
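A purpose limitation register can be a simple structured record rather than a prose document, which makes gaps machine-detectable. The entry fields and example values below are assumptions chosen to illustrate the pattern; a real register would reflect your own data categories and authorisation bases.

```python
# Sketch of a purpose limitation register (APP 6) as a data structure.
# Field names and example entries are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RegisterEntry:
    data_category: str
    collection_purpose: str
    ai_use_purpose: str
    authorisation_basis: str  # e.g. "primary purpose", "consent", "reasonable expectation"

REGISTER = [
    RegisterEntry("support tickets", "resolve enquiries",
                  "resolve enquiries", "primary purpose"),
    RegisterEntry("support tickets", "resolve enquiries",
                  "model training", ""),  # gap: use beyond purpose, no documented basis
]

def unauthorised_uses(register):
    """Flag AI uses beyond the collection purpose that lack a documented basis."""
    return [e for e in register
            if e.ai_use_purpose != e.collection_purpose
            and not e.authorisation_basis]

gaps = unauthorised_uses(REGISTER)
```

Running the check over the register surfaces exactly the failure pattern described above: historical support data repurposed for training with no recorded consent or reasonable-expectation assessment.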
CHECK 5: Cross-Border Data Transfers — AI API Providers
What the OAIC checks: Are cross-border transfers of Australian personal data to AI providers compliant with APP 8?
The requirement: APP 8 requires that personal information transferred to overseas recipients is subject to a comparable privacy regime, or that the Australian entity takes responsibility for ensuring overseas compliance. Most cloud AI API providers route through US infrastructure by default.
What businesses are failing on: Nearly every business that uses default cloud AI API endpoints is in technical breach. Calling api.openai.com, api.anthropic.com, or similar offshore endpoints with Australian personal data is an APP 8 transfer that requires one of the following:
- The overseas recipient being in a country with comparable privacy laws (the US does not satisfy this test without contractual protections), or
- An APP 8(e) contractual arrangement obligating the provider to comply with the APPs, or
- The individual's express consent to the transfer, or
- The transfer being necessary for a contract the individual has requested.
What passes: For businesses using major cloud AI providers: either reconfigure to Australian-region endpoints (AWS Sydney, Azure Australia East, Google Cloud Sydney) where the provider has committed to Australian data residency — which removes the APP 8 transfer question — or establish a documented APP 8 assessment and contractual arrangement with each offshore AI provider. The assessment must be renewed if the provider's data practices change.
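A simple technical control that supports the reconfiguration approach is an endpoint guard: refuse to send personal data unless the destination host is on an approved Australian-region allow-list. The hostnames below follow the providers' published regional-endpoint patterns at the time of writing, but they are assumptions for illustration and should be verified against current provider documentation before relying on them.

```python
# Sketch: refuse to send personal data to an AI endpoint unless the
# host is on an approved Australian-region allow-list.
# Hostnames are illustrative; verify against provider documentation.
from urllib.parse import urlparse

APPROVED_AU_HOSTS = {
    "bedrock-runtime.ap-southeast-2.amazonaws.com",    # AWS Sydney
    "australia-southeast1-aiplatform.googleapis.com",  # Google Cloud Sydney
}

def assert_au_region(endpoint_url: str) -> None:
    """Raise before any request whose host is not an approved AU endpoint."""
    host = urlparse(endpoint_url).hostname
    if host not in APPROVED_AU_HOSTS:
        raise ValueError(
            f"{host} is not an approved Australian-region endpoint; "
            "sending personal data would be an APP 8 transfer"
        )

assert_au_region("https://bedrock-runtime.ap-southeast-2.amazonaws.com/model/x/invoke")
# assert_au_region("https://api.openai.com/v1/chat/completions")  # would raise
```

Placing this guard in the one code path that calls out to AI providers turns the APP 8 assessment from a policy statement into an enforced configuration, and the allow-list itself is auditable evidence.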
CHECK 6: Security — AI Training Data and Output Data
What the OAIC checks: Are personal data stores used in AI systems — training data, inference data, output data — protected with appropriate security?
The requirement: APP 11 requires taking reasonable steps to protect personal information from misuse, interference, and loss, and from unauthorised access, modification, or disclosure.
What businesses are failing on: AI-specific security gaps appear in three places. First: training datasets often contain personal data extracted from production systems and stored in less-secured environments for model development. Second: AI output data — particularly the scores, classifications, and derived attributes created about individuals — is frequently stored without the same controls applied to the input personal data. Third: AI model artefacts (fine-tuned weights, embeddings) may encode personal data in ways that are not obvious and may not be protected with the same security controls.
What passes: An AI-specific security assessment that maps the security controls applied to each personal data store used in AI systems — training data, inference inputs, AI outputs, model artefacts — and verifies that controls are commensurate with the sensitivity of the data.
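The mapping exercise can be made concrete by listing the controls applied to each store and diffing them against a required baseline, so that weaker controls on training copies and output tables become visible at a glance. Store names, control labels, and the baseline below are illustrative assumptions, not a standard.

```python
# Sketch: compare security controls across AI data stores (APP 11).
# Store names and control labels are illustrative assumptions.

REQUIRED_CONTROLS = {"encryption_at_rest", "access_logging", "role_based_access"}

STORES = {
    "production_customer_db": {"encryption_at_rest", "access_logging",
                               "role_based_access"},
    "training_dataset_s3":    {"encryption_at_rest"},    # copied out with weaker controls
    "model_output_table":     {"encryption_at_rest",
                               "access_logging"},        # scores about individuals
}

def control_gaps(stores, required):
    """Return the missing controls per store, omitting compliant stores."""
    return {name: sorted(required - controls)
            for name, controls in stores.items()
            if required - controls}

gaps = control_gaps(STORES, REQUIRED_CONTROLS)
```

This reproduces the failure pattern described above: the production database passes, while the training copy and the output table, which hold the same or derived personal data, do not.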
CHECK 7: Access and Correction Rights
What the OAIC checks: Can individuals access and correct personal information processed by AI systems, including derived attributes and inferences?
The requirement: APPs 12 and 13 give individuals the right to access personal information held about them and to request correction of inaccurate information. The 2024 amendments extend this to include inferences and derived attributes.
What businesses are failing on: The infrastructure to respond to access requests for AI-processed data does not exist in most organisations. An individual requesting "all personal data your AI systems hold about me" should receive the underlying personal data, the derived attributes created about them, and the model outputs produced about them. Most organisations have no capability to produce this.
What passes: A documented access request process that specifically addresses AI-processed data — how a request is received, how AI data stores are searched, what derived attributes and model outputs are included in the response, and what timeframe is required. The OAIC expects this process to be operational before December 2026.
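The assembly step of such a process can be sketched as a single collation function that pulls from each class of store the OAIC expects in the response: source records, derived attributes, and model outputs. The store shapes and field names here are assumptions; real systems would query production, feature, and inference stores rather than in-memory dicts.

```python
# Sketch: assemble an access-request response (APPs 12-13) that covers
# AI-processed data. Store shapes and field names are illustrative.

def build_access_response(individual_id, source_records,
                          derived_attributes, model_outputs):
    """Collate everything held about one individual, including AI-derived data."""
    return {
        "individual_id": individual_id,
        "personal_data": source_records.get(individual_id, []),
        "derived_attributes": derived_attributes.get(individual_id, {}),  # e.g. scores
        "model_outputs": model_outputs.get(individual_id, []),  # decisions, classifications
    }

response = build_access_response(
    "cust-042",
    source_records={"cust-042": [{"field": "email", "value": "redacted-for-example"}]},
    derived_attributes={"cust-042": {"churn_probability": 0.31}},
    model_outputs={"cust-042": [{"decision": "loan_declined", "date": "2026-02-14"}]},
)
```

The value of writing the response shape down in code is that it forces the question the OAIC is asking: can each of those three sections actually be populated for an arbitrary individual today?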
CHECK 8: Notifiable Data Breaches — AI Incident Scope
What the OAIC checks: Does your notifiable data breach response framework cover AI-specific incidents?
The requirement: The Notifiable Data Breaches scheme (Part IIIC of the Privacy Act) requires notification to the OAIC and affected individuals when a breach is likely to result in serious harm. AI-specific incidents — model poisoning, adversarial attacks that extract training data, prompt injection attacks that cause an AI to disclose protected information — are covered.
What businesses are failing on: Breach response plans written before AI deployment typically cover database breaches, credential compromise, and malware. They do not cover AI-specific scenarios.
What passes: A breach response framework with explicit AI incident scenarios: what happens when an AI model is found to have extracted personal data from a production dataset without authorisation? What happens when a prompt injection attack causes an AI customer service agent to disclose another customer's personal information? What happens when it is discovered that a third-party AI model training included client personal data? Each scenario needs a documented response procedure.
CHECK 9: Explanation Capability — December 2026 Preparation
What the OAIC checks: Is the organisation demonstrably preparing for the December 2026 automated decision-making transparency obligations?
Note: The full mandatory transparency obligations take effect 10 December 2026. But the OAIC's current sweep is assessing compliance posture — whether organisations have started preparing, not whether they are fully compliant today. Organisations with no evidence of preparation are in a materially worse position.
What the OAIC expects to see: A current-state assessment of AI systems subject to the December 2026 obligations, a documented compliance gap analysis, and an implementation timeline. Not a completed build — a credible plan.
What passes: For each Tier 1 AI system (systems making or substantially assisting in decisions significantly affecting individuals): a completed Privacy Impact Assessment, a documented gap analysis against the December 2026 obligations, and an implementation roadmap with a completion date before 10 December 2026. The roadmap does not need to be complete. It needs to be credible.
CHECK 10: Children and Sensitive Information — Heightened AI Controls
What the OAIC checks: If AI systems process health information, biometric data, racial or ethnic origin, political opinions, or children's data, are heightened controls in place?
The requirement: Sensitive information categories under the Privacy Act attract stricter handling requirements — notably that collection requires express consent unless an exception applies. AI systems that infer sensitive attributes from non-sensitive data (inferring health status from purchase behaviour, inferring ethnic background from name patterns) are creating sensitive personal information in a way that is likely to breach APP 3.3 if not adequately managed.
What businesses are failing on: AI systems that make sensitive inferences without awareness that the inferences constitute sensitive information. A retail AI that infers dietary preferences related to religious observance is inferring religious beliefs — a sensitive category. An HR AI that infers pregnancy from absence patterns is inferring health information.
What passes: A documented assessment of whether any AI systems create inferences about sensitive categories of information, and if so, the consent basis and disclosure framework for those inferences.
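A first-pass screen for that assessment can flag derived attribute names that suggest a sensitive category. The keyword map below is an illustrative assumption, and keyword matching is only a triage aid: a real review needs human judgement over what each attribute actually encodes, not string matching alone.

```python
# Sketch: triage an AI system's derived attributes against sensitive
# information categories (APP 3.3). Keyword map is illustrative only;
# matches need human review, and non-matches are not cleared.

SENSITIVE_MARKERS = {
    "health": {"pregnancy", "medical", "health", "disability"},
    "religious beliefs": {"halal", "kosher", "religious"},
    "racial or ethnic origin": {"ethnicity", "ethnic"},
}

def flag_sensitive(attribute_names):
    """Return {attribute: category} for derived attributes that may be sensitive."""
    flags = {}
    for name in attribute_names:
        for category, markers in SENSITIVE_MARKERS.items():
            if any(marker in name.lower() for marker in markers):
                flags[name] = category
    return flags

flags = flag_sensitive([
    "churn_probability",
    "halal_diet_preference",   # dietary inference that reveals religious belief
    "pregnancy_likelihood",    # health inference from behavioural data
])
```

Run over a model's full output schema, this surfaces exactly the examples above: a dietary-preference attribute is flagged as a religious-belief inference, and a pregnancy inference as health information, before either reaches production.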
Prioritising Your Response
The ten-point checklist above describes the complete scope of the OAIC's assessment framework. Not every business has equal exposure on each point.
To prioritise remediation:
Immediate priority (before any OAIC notice arrives):
- Check 1: Privacy policy automated decision-making disclosure — the most commonly failed check and the easiest to fix
- Check 2: Inferences and derived attributes — usually requires only a policy update, not a technical build
- Check 5: Cross-border AI API transfers — reconfiguring to Australian-region endpoints is a configuration change, not a rebuild
Within 30 days:
- Check 3 and 4: Data minimisation and purpose limitation — requires a data use audit but not major technical work
- Check 8: Breach response AI scenarios — requires a policy update and tabletop exercise
Before 10 December 2026:
- Check 7: Access and correction capability for AI-processed data
- Check 9: December 2026 compliance preparation — the documentation demonstrating preparation needs to be in place before the OAIC follows up
- Check 6: AI security assessment — requires technical review but typically no new infrastructure
Ongoing:
- Check 10: Sensitive information inferences — requires AI system review and may require system changes for some deployments
The OAIC Is Not Your Only Concern Right Now
The Privacy Act compliance sweep is one of three concurrent pressures on Australian businesses using AI in March 2026.
The new statutory tort of serious invasions of privacy — which commenced in mid-2025 — means that the same privacy failures the OAIC is assessing can now also result in individual civil lawsuits. A business with a failing grade on Check 1 (no AI disclosure in privacy policy) is not just exposed to OAIC enforcement. It is exposed to civil claims from individuals who were subject to undisclosed automated decision-making — because the lack of disclosure demonstrates that reasonable privacy expectations were violated.
The 10 December 2026 automated decision-making transparency deadline means that the nine months remaining are not "runway" for delayed compliance — they are the implementation window. Businesses that start the technical build now (audit trails, explanation capability, access rights infrastructure) will be compliant. Businesses that start in September will be in an emergency retrofit under deadline pressure.
What Happens After You Get the Notice
The OAIC compliance sweep notices are landing with organisations now. If you receive one:
Do not panic. A compliance sweep notice is not an enforcement action. It is an information request or an invitation to demonstrate your compliance posture.
Respond substantively. The worst response to an OAIC notice is silence or a generic statement. Organisations that respond with specific evidence of their compliance framework — Privacy Impact Assessments, documented gap analyses, implementation timelines — are in a materially different position than organisations that cannot produce documentation.
Fix the easy things immediately. If your privacy policy does not disclose your AI use, update it before you respond. This is the single most common gap identified in the current sweep and the one most likely to result in a formal compliance requirement.
Engage legal counsel. For organisations that receive a Phase 2 information request (formal written questions about specific AI practices), legal advice specific to your circumstances is warranted.
Akira Data's OAIC-Ready Assessment
Akira Data's AI Readiness Sprint (AUD $7,500, 2 weeks) includes a full OAIC compliance posture assessment against the ten-point checklist above. The deliverables: a gap analysis identifying which checks you currently pass, which you fail, and which require technical work to address; a Privacy Impact Assessment for your highest-risk AI systems; and a prioritised remediation roadmap with timelines and cost estimates.
For businesses that have already received an OAIC compliance sweep notice, we can turn an OAIC-ready assessment around in 5 business days.
For the Privacy-Safe AI Implementation (from AUD $20,000, 4–6 weeks), we close the technical gaps: audit trail infrastructure, explanation capability, access rights for AI-processed data, and privacy policy update covering all ten checks.
The OAIC is checking. The question is whether your business is ready to pass.
*Akira Data builds Privacy Act-compliant AI systems for Australian mid-market businesses. Every engagement is designed to pass an OAIC audit — privacy policy disclosure, audit trail infrastructure, explanation capability, and data sovereignty by default. [Start with an AI Readiness Sprint →](/contact)*
*This article was published 31 March 2026. It references the OAIC January 2026 proactive compliance sweep, the Privacy and Other Legislation Amendment Act 2024, the OAIC Clearview AI determination (March 2026), the new statutory tort of serious invasions of privacy (commenced mid-2025), and the automated decision-making transparency obligations taking effect 10 December 2026. This article is general information only and does not constitute legal advice.*