40% of Australian AI Projects Will Miss Their Goals. Superannuation Shows Why — and How to Be in the 60%.
IDC's Asia/Pacific CIO Agenda 2026 predicts 40% of organisations will miss their AI goals by 2026. Australia's $3.7 trillion superannuation industry shows exactly why — and exactly how the funds that are getting it right are doing it differently. The lessons apply to every Australian industry.

AI PM at SOLIDWORKS. Founder, Akira Data.
*Published 1 April 2026.*
IDC's Asia/Pacific CIO Agenda 2026, published in February, contains a number that Australian technology and business leaders should be reading carefully: by 2026, 40% of organisations will miss their AI goals.
Not fail catastrophically. Not cancel projects. Miss their goals. Deliver less than promised, return less than invested, and enter the mid-2026 budget review explaining why the initiative that was going to transform the business has produced modest improvements to a workflow that was already functional.
IDC was equally clear about the organisations that will not miss: "Between 2026 and 2030, CIOs will be judged less on AI experimentation and more on their ability to operationalise AI securely, affordably, and in compliance with local regulations."
Securely. Affordably. In compliance with local regulations.
These three qualifiers define exactly why so many Australian AI projects are on track to miss — and why the ones that succeed look distinctly different from the ones that stall.
No Australian industry makes this contrast more visible than superannuation.
Why Superannuation Is the Defining Australian AI Case Study
Australia's superannuation system is unique in the world. At $3.7 trillion in assets under management across 23 million accounts, it is the world's fourth-largest pension pool relative to GDP. Every working Australian has a super account. For most Australians, their superannuation balance is one of the two or three largest financial assets they will ever hold.
It is also, as of 1 July 2025, subject to APRA's CPS 230 Operational Risk Management standard — which directly governs AI systems embedded in fund operations. And from 10 December 2026, it is subject to the Privacy Act's automated decision-making transparency obligations — which apply with particular force to the millions of decisions super funds make annually that significantly affect individual members.
Every Australian mid-market business has something in common with super funds: decisions made at scale, personal data processed about individuals, regulatory obligations that cannot be deferred, and a board that is increasingly impatient for AI results. The super fund lens clarifies what getting AI right actually requires — in a way that translates across every sector.
What the 40% Who Miss Have in Common
The IDC prediction is not a guess. It is based on the patterns IDC observed across Asia-Pacific in 2024 and 2025, the period in which AI went from experimental to expected.
The common characteristics of the organisations that miss:
They pilot without measuring. The most common failure mode: a proof-of-concept that generates executive enthusiasm but never defines the baseline against which production results will be measured. When the CFO asks "what did we achieve?", the answer is model accuracy metrics rather than business outcomes.
Consider a super fund that implements an AI system to prioritise member contact: identifying which members are most likely to need to consolidate accounts, switch investment options, or increase contributions. If the fund never measured call resolution rates before deployment, it cannot prove the system is working. The AI may be making excellent recommendations. Nobody can demonstrate it.
They treat compliance as a Phase 2 problem. The second most common failure: AI systems built without compliance infrastructure, so the compliance bill arrives later as a retrofit. For super funds, this means AI-assisted member advice systems deployed without the APRA CPS 230 governance requirements built in. For the December 2026 Privacy Act obligations, it means member-decision AI deployed without audit trails.
Retrofitting compliance is expensive. A system built without decision audit trails requires engineering work to add them — and that work competes with new feature development. The business case for the retrofit is weak because the system is already deployed. Leadership resistance is high because "it seemed to be working fine before."
They cannot explain what the AI is doing. IDC identified this as the emerging operational crisis: AI systems deployed in production that the organisation cannot describe clearly to its regulator, its board, or its members. For a super fund, this is not abstract. When an APRA examiner asks "what is your AI system doing when it recommends one product over another?", the fund needs a specific, documentable answer. "It uses machine learning" is not an answer.
The Privacy Act's December 2026 obligations make this even more concrete. A member who has received an AI-assisted recommendation about their investment option has the right to request a meaningful explanation of why that recommendation was made. The fund needs audit infrastructure to provide it.
They spread too thin. Multiple AI initiatives across multiple workstreams, none completed to production quality. The 54% of organisations still in pilot mode — documented by the CIO Playbook 2026 — are disproportionately in this category. For super funds specifically, this often means AI projects in member communications, claims processing, investment analytics, and regulatory reporting running simultaneously, none with a named business owner accountable for production results.
What the 60% Who Hit Their Goals Do Differently
The organisations that IDC expects to meet their AI goals in 2026 are not necessarily the ones with larger budgets or more sophisticated technology teams. They are the ones that approach AI differently at a structural level.
They define AI success in business terms before they build. A super fund that frames its AI investment as "reduce member contact time for routine balance enquiries from 6.5 minutes to under 2 minutes, as measured by call centre logs, for at least 70% of routine enquiries" has something a fund that frames it as "implement AI-powered member service" does not: a pass/fail criterion that is independent of the technology.
Business metric targets set before build — with a named measurement owner — are the single most reliable predictor of whether an AI project will be considered successful six months after go-live.
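To make the pass/fail test concrete, here is a minimal sketch of how such a criterion could be evaluated against call centre logs. It assumes a hypothetical log export with one record per call and an agreed definition of a routine enquiry; the field names, thresholds, and the `meets_target` helper are illustrative, not a prescribed method.

```python
# Hypothetical sketch: checking a pre-agreed success criterion against
# call centre logs. Field names and thresholds are illustrative only.
from dataclasses import dataclass
from typing import List

@dataclass
class CallRecord:
    enquiry_type: str           # e.g. "balance_enquiry"
    handle_time_minutes: float  # total member contact time for the call

def meets_target(calls: List[CallRecord],
                 enquiry_type: str = "balance_enquiry",
                 max_minutes: float = 2.0,
                 required_share: float = 0.70) -> bool:
    """True if at least `required_share` of the named enquiry type
    was resolved in under `max_minutes`."""
    routine = [c for c in calls if c.enquiry_type == enquiry_type]
    if not routine:
        return False
    under_target = sum(1 for c in routine if c.handle_time_minutes < max_minutes)
    return under_target / len(routine) >= required_share

# Example: 3 of 4 routine calls under 2 minutes -> 75%, which meets the 70% bar
sample = [CallRecord("balance_enquiry", 1.4),
          CallRecord("balance_enquiry", 1.9),
          CallRecord("balance_enquiry", 3.2),
          CallRecord("balance_enquiry", 1.1)]
print(meets_target(sample))  # True
```

Run against the pre-deployment baseline and then against post-deployment logs, the same check gives the board a yes/no answer that does not depend on model accuracy metrics.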
They treat compliance as an architecture requirement, not a governance exercise. The funds that are getting AI right in 2026 did not separate the AI build from the compliance build. They defined the compliance requirements — CPS 230, Privacy Act December 2026 obligations, member disclosure requirements — as part of the initial scope and built to meet them.
This means every AI system that touches member data runs on Australian-jurisdiction infrastructure. Every AI-assisted recommendation produces a decision record that can be retrieved, translated, and delivered to a member within 30 days if requested. Every automated action in a member account is logged with a trace ID linked to the inputs, the model version, and the business rules applied.
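As an illustration of what such a decision record might look like in practice, the sketch below assembles one per-recommendation audit entry. The field names, the `build_decision_record` helper, and the example values are assumptions for illustration, not a format prescribed by APRA or the Privacy Act.

```python
# Illustrative sketch of a per-recommendation decision record.
# Field names and structure are assumptions, not a prescribed format.
import json
import uuid
from datetime import datetime, timezone

def build_decision_record(member_inputs: dict,
                          model_version: str,
                          business_rules: list,
                          recommendation: str) -> dict:
    """Assemble an auditable record linking one recommendation to the
    inputs, model version, and business rules that produced it."""
    return {
        "trace_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": member_inputs,          # snapshot of the data the model saw
        "model_version": model_version,   # exact version used for this run
        "business_rules_applied": business_rules,
        "recommendation": recommendation,
    }

record = build_decision_record(
    member_inputs={"age_band": "55-59", "accounts_held": 2},
    model_version="consolidation-ranker-1.3.0",
    business_rules=["exclude_defined_benefit", "min_balance_1000"],
    recommendation="flag_for_consolidation_outreach",
)
print(json.dumps(record, indent=2))  # in practice, written to the audit store
```

Writing this record at decision time, rather than reconstructing it later, is what makes the 30-day retrieval obligation tractable.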
Building compliance in takes 20–30% more engineering time upfront. It costs a fraction of the retrofit, and it eliminates the regulatory exposure during the period between deployment and retrofit completion.
They scope tightly and ship completely. The most productive pattern for Australian super funds deploying AI in 2026: one workflow, scoped to production, with full compliance infrastructure, delivered completely before the next workflow begins.
A fund that has deployed a production member account consolidation identification system — one that flags members with multiple accounts across funds, generates outreach sequencing recommendations, logs every recommendation with full audit trail, and has been running for 90 days with documented outcomes — has achieved more in AI terms than a fund with six pilots running in parallel.
The production system generates real data about real member outcomes. That data drives the next iteration. That iteration drives the business case for the next workflow. The compound value is an order of magnitude greater than that of the parallel pilot portfolio.
They invest in explanation infrastructure from day one. The organisations IDC places in the "will hit their goals" category have, by 2026, accepted that explainability is not an optional feature — it is an operational requirement.
For super funds, the explanation infrastructure serves two masters simultaneously. The APRA master: being able to demonstrate to an APRA examiner exactly what the AI system did, what data it used, and what decision it produced. The Privacy Act master: being able to respond to a member explanation request in under 30 days with a human-readable account of why the AI made the recommendation it made.
The technical requirements for both are identical: run-level logging with input snapshots, step-level distributed tracing, and decision rationale generation at decision time. Building this infrastructure once satisfies both requirements. Failing to build it creates exposure on both.
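A minimal sketch of the second half of that infrastructure, generating the member-facing explanation from a stored decision record, is below. The template wording, the rule descriptions, and the record shape (matching the earlier sketch) are assumptions for illustration, not regulator-approved text.

```python
# Illustrative sketch: producing a plain-language explanation from a
# decision record captured at decision time. Template wording and field
# names are assumptions, not regulator-approved text.

RULE_DESCRIPTIONS = {
    "exclude_defined_benefit": "defined-benefit accounts were excluded",
    "min_balance_1000": "only accounts holding more than $1,000 were considered",
}

def explain_decision(record: dict) -> str:
    """Build a member-facing explanation from the stored audit record."""
    rules = "; ".join(RULE_DESCRIPTIONS.get(r, r)
                      for r in record["business_rules_applied"])
    data_used = ", ".join(record["inputs"].keys())
    return (
        f"On {record['timestamp'][:10]}, an automated system "
        f"(model {record['model_version']}) used the following information "
        f"about you: {data_used}. It recommended: {record['recommendation']}. "
        f"In making this recommendation, {rules}. "
        f"Reference number: {record['trace_id']}."
    )

# Example, reusing a record of the shape sketched earlier:
example = {
    "trace_id": "7f3a...",
    "timestamp": "2026-04-01T03:15:00+00:00",
    "inputs": {"age_band": "55-59", "accounts_held": 2},
    "model_version": "consolidation-ranker-1.3.0",
    "business_rules_applied": ["exclude_defined_benefit", "min_balance_1000"],
    "recommendation": "flag_for_consolidation_outreach",
}
print(explain_decision(example))
```

Because the explanation is generated from the same record an APRA examiner would inspect, the two obligations stay in sync by construction.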
The Australian Super Fund AI Gap: What Is Actually Being Deployed
Australia's superannuation funds span a wide range of AI maturity in 2026. At the leading edge are the large industry funds (AustralianSuper, Aware Super, Rest) that have invested in production AI systems for member engagement, investment analytics, and operational efficiency. At the lagging edge are regional and employer-sponsored funds that have not yet moved beyond generic SaaS features.
The most common deployed AI uses in Australian super in 2026:
Member contact and engagement AI: Identifying which members would benefit from proactive outreach — those approaching retirement, those with flagged financial stress indicators, those eligible for insurance or contribution optimisation. This is a Tier 1 use case under the December 2026 Privacy Act obligations: the AI is substantially assisting in decisions that significantly affect members.
Account consolidation identification: Identifying members who likely hold multiple superannuation accounts (an estimated 4.4 million Australians hold multiple super accounts, costing them approximately $2.4 billion in fee duplication annually). AI models that identify these members and sequence outreach are making recommendations that affect individual financial outcomes, which attracts explicit Privacy Act obligations.
Investment option recommendations: Suggesting investment strategy adjustments based on member age, contribution trajectory, and risk profile. Even "suggestions" that influence member behaviour are substantially assisting in decisions that significantly affect the member's retirement outcome. Full audit trails and explanation capability are required.
Operational claims processing: AI triage of insurance claims, disability support claims, and hardship withdrawal applications. High compliance sensitivity — the AI is making or substantially influencing decisions about financial access with significant member impact.
The common thread across all of these use cases: they involve personal data, they make decisions affecting individual financial outcomes, and from December 2026, they require the full Privacy Act transparency framework.
The Translation to Every Australian Industry
Superannuation makes the AI governance challenge concrete because the stakes are visible — these are Australians' retirement savings — and the regulatory framework is well-defined. But the same patterns apply across every Australian industry where AI is being deployed.
Financial services: Credit decisioning AI in lending, insurance pricing AI, fraud detection that affects account access. Same pattern: decisions affecting individuals at scale, Privacy Act obligations, APRA regulatory overlay, need for explainability and audit infrastructure.
Healthcare: Clinical decision support, triage prioritisation, patient communication AI. Same pattern: high sensitivity data, decisions affecting individual outcomes, December 2026 obligations, need for Australian-jurisdiction processing.
Professional services: Contract review AI, due diligence automation, client matter triage. Same pattern: confidential data, decisions affecting client interests, professional indemnity overlay to the Privacy Act obligations.
Mining and resources: The Privacy Act obligations are lower because most mining AI operates on operational rather than personal data. The operationalisation challenge is the same: measure the baseline, build with observability, define success in business terms before you start.
The IDC framework is not industry-specific. The 40% who will miss are missing for the same structural reasons across sectors. The 60% who will hit are hitting for the same structural reasons across sectors.
The Three Things Australian Boards Need to Decide Before July 2026
The H1 2026 budget accountability moment is real. IDC's APAC analysis is explicit: CIOs are being judged on whether AI investments are generating measurable business outcomes — not whether AI pilots are running.
Three decisions that boards and executive teams need to make before the mid-2026 review:
Decision 1: Which one AI workflow are we deploying to production in the next 90 days?
Not which category of AI to explore. Which specific workflow, with a named business owner, a baseline metric, and a success criterion in AUD. For super funds: member account consolidation identification, or claims triage automation, or member contact prioritisation. Pick one. Define it precisely. Deliver it completely.
Decision 2: Is our compliance infrastructure ready for the December 2026 Privacy Act obligations?
For any AI system currently in production that makes or substantially assists in decisions affecting individuals: does it have an audit trail? Can it produce a human-readable explanation of a specific decision? Is the practice disclosed in the privacy policy? If the answer to any of these is no, that is the compliance work to scope and budget immediately.
The eight months between now and 10 December sound like a comfortable runway. For organisations with complex systems requiring technical retrofits, they are not.
Decision 3: Are we building with or against the IDC prediction?
The 40% who miss will miss because they are treating AI as a technology project rather than a business delivery. The 60% who hit will hit because they have a production deployment with a measurable business outcome and the compliance infrastructure to defend it when the regulator checks.
The difference is not talent, budget, or technology. It is whether the organisation has decided to operationalise AI — securely, affordably, in compliance with local regulations — or is still running experiments.
What "Operationalise AI Securely and Compliantly" Actually Means for Australian Businesses
IDC's framing — "operationalise AI securely, affordably, and in compliance with local regulations" — has specific meaning in the Australian context:
Securely: AI systems on Australian-jurisdiction infrastructure (AWS Sydney, Azure Australia East), data not crossing borders without assessment, and security controls aligned with the ASD Essential Eight.
Affordably: Supervised single-agent systems deployed in weeks for AUD $25,000–$50,000, not months-long multi-agent orchestration projects. ROI measured in AUD before deployment begins. Production builds, not perpetual pilots.
In compliance with local regulations: Privacy Impact Assessments before deployment. Audit trail and explainability infrastructure built in. Privacy policy disclosures accurate. December 2026 obligations addressed in the architecture, not the retrofit queue.
Australian businesses that make these decisions in April 2026 will be presenting production results at the H1 budget review, entering December 2026 with compliant systems, and building on a foundation of operational AI capability into 2027.
The 40% that miss will be in the retrofit queue when the December deadline arrives.
*Akira Data helps Australian businesses operationalise AI securely and compliantly — Privacy Act compliant from day one, Australian-jurisdiction infrastructure by default, production deployments in weeks not months. The AI Readiness Sprint (AUD $7,500, 2 weeks) delivers the baseline measurement, compliance assessment, and production build plan that defines whether you are in the 60% or the 40%. [Start the conversation →](/contact)*
*This article was published 1 April 2026. It references IDC Asia/Pacific CIO Agenda 2026: Five Predictions Defining the Shift to Agentic AI (February 2026), APRA CPS 230 Operational Risk Management (effective 1 July 2025), the Privacy and Other Legislation Amendment Act 2024, and APRA data on Australian superannuation assets. It is general information only and does not constitute financial or legal advice.*