
ASD Essential Eight and AI: A Practical Security Guide for Australian Organisations

Australia's Essential Eight framework predates modern AI systems. Here is how to apply it to AI deployments — and where the framework's gaps are leaving organisations exposed.

The Australian Signals Directorate's Essential Eight Maturity Model is the de facto security framework for Australian government and regulated industry. Achieving Maturity Level 2 or 3 is now a requirement for many government contracts and is increasingly expected by boards and enterprise customers.

The problem: the Essential Eight was designed for traditional software systems. AI systems — particularly agentic AI that calls external APIs, processes unstructured data, and takes autonomous actions — introduce security considerations the framework does not fully address.

Here is how to apply Essential Eight to AI deployments, and where you need to go beyond it.

Mapping Essential Eight to AI Systems

1. Application Control

AI systems frequently call out to third-party APIs (model providers, data sources, tool APIs). Application control policies need to explicitly govern which AI systems can make external calls, to which endpoints, and under what conditions.

Practical action: Create an explicit allowlist of permitted external endpoints for each AI system. Log all outbound calls with payload summaries rather than full payloads, given PII considerations.
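As a minimal sketch of this control (not an official Essential Eight artefact), an egress check might combine a per-system endpoint allowlist with digest-only payload logging. The system names and endpoints below are hypothetical:

```python
import hashlib
import logging
from urllib.parse import urlparse

# Hypothetical per-system allowlist of permitted external endpoints.
ALLOWED_ENDPOINTS = {
    "summariser-agent": {"api.openai.com", "api.anthropic.com"},
}

logger = logging.getLogger("ai-egress")

def check_outbound_call(system: str, url: str, payload: bytes) -> bool:
    """Return True if the call is permitted; log a payload summary either way."""
    host = urlparse(url).hostname or ""
    allowed = host in ALLOWED_ENDPOINTS.get(system, set())
    # Log a digest and size, not the full payload (PII considerations).
    summary = f"{hashlib.sha256(payload).hexdigest()[:12]} ({len(payload)} bytes)"
    logger.info("system=%s host=%s allowed=%s payload=%s",
                system, host, allowed, summary)
    return allowed
```

Logging a hash and byte count keeps an audit trail for every outbound call without copying sensitive content into the log pipeline.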

2. Patch Applications

AI libraries (PyTorch, Transformers, LangChain, LlamaIndex) have rapid release cycles and frequent security CVEs. Model provider SDKs are equally fast-moving.

Practical action: Include AI libraries in your software asset inventory and patch management process. Use dependency scanning tools (Dependabot, Snyk) with AI-specific rule sets.
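Alongside scanning tools, a lightweight in-house check can flag installed AI libraries that fall below a minimum patched version. This is an illustrative sketch; the library names and version floors are placeholder assumptions, not current advisories:

```python
from importlib import metadata

# Hypothetical minimum patched versions for AI libraries in the asset inventory.
MIN_VERSIONS = {"torch": "2.2.0", "transformers": "4.38.0"}

def parse(version: str) -> tuple:
    """Convert '4.38.0' into a comparable tuple (4, 38, 0)."""
    return tuple(int(p) for p in version.split(".")[:3] if p.isdigit())

def outdated_ai_libraries() -> list[str]:
    """Return installed AI libraries below their minimum patched version."""
    stale = []
    for name, minimum in MIN_VERSIONS.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            continue  # library not installed on this host
        if parse(installed) < parse(minimum):
            stale.append(f"{name} {installed} < {minimum}")
    return stale
```

A check like this can run in CI as a backstop, but it complements rather than replaces CVE-aware scanners such as Dependabot or Snyk.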

3. Configure Microsoft Office Macro Settings

Less directly relevant to AI systems, but worth noting: AI-powered Office integrations (Copilot) operate within the macro and extension security model. Configure accordingly.

4. User Application Hardening

AI agents that operate via web browsers (computer-use agents) require specific hardening considerations — sandboxed browser environments, restricted credential access.

5. Restrict Administrative Privileges

AI agents should operate under least-privilege service accounts. Define exactly what data each agent can read, write, and delete. Review quarterly.

Practical action: Create a separate service account for each AI agent with explicit, minimal permissions. Do not use admin accounts for AI system authentication.
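One way to make those per-agent grants explicit and reviewable is to model them as data. This is a simplified sketch, assuming a hypothetical "invoice-agent" with read and write access to invoices only:

```python
from dataclasses import dataclass, field

@dataclass
class AgentAccount:
    """A least-privilege service account: explicit (resource, action) grants."""
    name: str
    grants: set = field(default_factory=set)

    def can(self, resource: str, action: str) -> bool:
        # Default-deny: anything not explicitly granted is refused.
        return (resource, action) in self.grants

# Hypothetical example: no delete permission, no admin scope.
invoice_agent = AgentAccount(
    name="invoice-agent",
    grants={("invoices", "read"), ("invoices", "write")},
)
```

Because the grants are plain data, a quarterly review can diff each agent's grant set against what its audit logs show it actually using.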

6. Patch Operating Systems

AI systems running on VMs or containers need the same OS patching discipline as any other workload.

7. Multi-Factor Authentication

AI systems that access sensitive data via APIs should use certificate-based authentication rather than passwords, with MFA-equivalent controls (such as mutual TLS or short-lived, scoped tokens) applied to service-to-service calls.

8. Regular Backups

Include AI system configurations, prompts, fine-tuned model weights, and agent tooling definitions in your backup and recovery scope. These are increasingly critical business assets.

Where Essential Eight Falls Short for AI

The framework does not address several AI-specific risks:

Prompt Injection

Malicious content in data processed by AI agents can alter the agent's behaviour — causing it to exfiltrate data, take unintended actions, or bypass controls. This is the AI equivalent of SQL injection and is not covered by Essential Eight.
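As a minimal illustration of input screening and output filtering (one layer of a defence, not a complete control), the injection phrases and the TFN-style redaction pattern below are hypothetical examples:

```python
import re

# Hypothetical injection phrases; real deployments need layered defences,
# not regexes alone.
SUSPECT_INPUT = re.compile(
    r"(ignore (all|previous) instructions|disregard your system prompt)", re.I
)
# Example output filter: redact strings shaped like Australian TFNs (9 digits).
TFN_LIKE = re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b")

def screen_input(text: str) -> bool:
    """Return True if external content shows no likely injection phrasing."""
    return SUSPECT_INPUT.search(text) is None

def filter_output(text: str) -> str:
    """Redact sensitive-looking values before model output leaves the system."""
    return TFN_LIKE.sub("[REDACTED]", text)
```

Pattern matching catches only crude attacks; it is best paired with structural controls such as separating instructions from data and constraining what actions an agent can take on untrusted input.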

Model Supply Chain

Open-source models and pre-trained model weights from Hugging Face and other repositories can contain malicious code or backdoors. No Essential Eight control addresses this.

Agent Action Scope Creep

Over time, AI agents with broad permissions tend to be directed at increasingly sensitive tasks. Without regular scope reviews, agents accumulate effective privileges well beyond their original design.

Data Exfiltration via AI Outputs

AI systems can inadvertently include sensitive data in their outputs (training data leakage, context window exposure). Traditional DLP tools are not built to inspect AI-generated outputs.

Building an AI Security Framework

For Australian organisations that want to go beyond Essential Eight for AI, we recommend:

  • Maintain an AI Asset Register — every AI system, what data it accesses, what actions it can take, who owns it
  • Implement Prompt Injection Controls — input validation and output filtering for all AI systems processing external data
  • Quarterly Agent Permission Reviews — audit what each agent is actually doing vs. what it is supposed to do
  • Model Provenance Documentation — where did each model come from? What was it trained on? Has it been validated?
  • AI Incident Response Playbook — what do you do if an AI agent takes an unintended action or exfiltrates data?
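The first of those recommendations, the AI asset register, can start as something very simple. A minimal sketch, with a hypothetical "invoice-agent" entry as the example:

```python
from dataclasses import dataclass, field

@dataclass
class AIAsset:
    """One row in a hypothetical AI asset register."""
    name: str
    owner: str
    data_accessed: list = field(default_factory=list)
    actions: list = field(default_factory=list)

REGISTER = [
    AIAsset(
        name="invoice-agent",
        owner="finance-team",
        data_accessed=["invoices", "vendor-master"],
        actions=["read", "draft-email"],
    ),
]

def assets_touching(data_class: str) -> list:
    """Answer 'which AI systems can see this data?' during an incident."""
    return [a.name for a in REGISTER if data_class in a.data_accessed]
```

Even a register this basic makes incident response faster: when a data class is compromised, you can immediately list every AI system with access to it.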

*Akira Data builds AI systems aligned to ASD Essential Eight and the Australian Privacy Act. Our Privacy-Safe AI Implementation service includes a full security architecture review.*
