WA Government Architecture Decision Records
Reusable architecture patterns for WA Government digital services, maintained by the Office of Digital Government (DGOV) Digital Transformation and Technology Unit (DTT).
For WA Public Sector Agencies
These patterns help you build secure, compliant digital services faster. Instead of starting from scratch, use proven approaches that align with WA Government security and compliance requirements.
Getting Started
- Review the Architecture Principles - Six guiding principles for all technology decisions
- Choose a Reference Architecture - Project kickoff templates combining multiple decisions:
- Content Management - Websites, intranets, and content portals
- Data Pipelines - Analytics, reporting, and data processing
- Identity Management - User authentication and single sign-on
- OpenAPI Backends - Backend services and integrations
- Check the Compliance Mapping - Find which ADRs apply to your security and compliance requirements
Compliance Alignment
These ADRs align with:
- WA Cyber Security Policy
- ACSC Information Security Manual (ISM)
- WA Government AI Policy and Assurance Framework
Supporting training: DGOV Technical - DevSecOps Induction
Contributing
New ADRs document the context (problem), decision (solution), and consequences (trade-offs). See the Contributing Guide for workflow and templates.
For AI agent assistance, see AGENTS.md.
Repository Structure
This project uses mdBook to generate documentation:
- `development/`, `operations/`, `security/` - ADRs by domain
- `reference-architectures/` - Project kickoff templates
- `SUMMARY.md` - Navigation structure
```sh
just setup   # One-time tool installation
just serve   # Preview locally (port 8080)
just build   # Build website and PDF
```
Architecture Principles
Status: Accepted | Date: 2025-03-07
These six principles guide all architecture decisions in this repository. Each ADR should align with one or more of these principles.
1. Establish secure foundations
Integrate security practices from the outset, and throughout the design, development and deployment of products and services, per the ACSC Foundations for modern defensible architecture.
2. Understand and govern data
Use authoritative data sources to ensure data consistency, integrity, and quality. Embed data management and governance practices, including information classification, records management, and Privacy and Responsible Information Sharing, throughout information lifecycles.
3. Prioritise user experience
Apply user-centred design principles to simplify tasks and establish intuitive mappings between user intentions and system responses. Involve users throughout design and development to iteratively evaluate and refine product goals and requirements.
4. Preference tried and tested approaches
Adopt sustainable open source software, and mature managed services where capabilities closely match business needs. When necessary, bespoke service development should be led by internal technical capabilities to ensure appropriate risk ownership. Bespoke software should prefer open standards and open code to avoid vendor lock-in.
5. Embrace change, release early, release often
Design services as loosely coupled modules with clear boundaries and responsibilities. Release often with tight feedback loops to test assumptions, learn, and iterate. Enable frequent and predictable high-impact changes (your service does not deliver or add value until it is in the hands of users) per the CNCF Cloud Native Definition.
6. Default to open
Encourage transparency, inclusivity, adaptability, collaboration, and community by defaulting to permissive licensing of code and artifacts developed with public funding.
ADR 001: Application Isolation
Status: Accepted | Date: 2025-02-17
Context
Not isolating applications and environments creates significant security risks. Without isolation, a vulnerability in a single application can enable lateral movement, compromising other applications or the entire environment, and can lead to the spread of malware, unauthorised access, and data breaches.
References:
- Open Web Application Security Project (OWASP) Application Security Verification Standard (ASVS)
- Australian Cyber Security Centre (ACSC) Guidelines for System Hardening
Decision
To mitigate the risks associated with shared environments, all applications and environments should be isolated by default.
This isolation can be achieved through the following approaches (strongest to weakest):
- Dedicated Accounts: Use separate cloud accounts / resource groups for different environments (for example, development, testing, production) to ensure complete isolation of resources and data. Strongest isolation - use for production and sensitive data.
- Kubernetes Clusters: Deploy separate Kubernetes clusters for different applications or environments to isolate workloads and manage resources independently. Strong isolation - use for distinct products or security domains.
- Kubernetes Namespaces: Within a Kubernetes cluster, use namespaces to logically separate different applications or environments, providing a level of isolation for network traffic, resource quotas, and access controls. Moderate isolation - use for related services within a product.
The preferred approach for isolation should be driven by data sensitivity and product boundaries.
Consequences
Benefits:
- Network microsegmentation preventing lateral movement
- Simplified incident containment and forensic analysis
- Compliance with regulatory isolation requirements
Risks if not implemented:
- Single vulnerability compromising multiple applications
- Difficult incident response across shared environments
- Data breaches through unauthorised cross-system access
ADR 005: Secrets Management
Status: Accepted | Date: 2025-02-25
Context
Per the Open Web Application Security Project (OWASP) Secrets Management Cheat Sheet:
Organisations face a growing need to centralise the storage, provisioning, auditing, rotation and management of secrets to control access to secrets and prevent them from leaking and compromising the organisation. Often, services share the same secrets, which makes identifying the source of compromise or leak challenging.
To address these challenges, we need a standardised, auditable approach to managing and rotating secrets within our environments. Secrets should be accessed at runtime by workloads and should never be hard-coded or stored in plain text.
References:
- Amazon Web Services (AWS) Secrets Manager Compliance validation
- Using Elastic Kubernetes Service (EKS) encryption provider support for defense-in-depth
Decision
Use AWS Secrets Manager to store and manage secrets.
- Secrets should be fetched and securely injected into AWS resources at runtime (a runtime fetch sketch follows this list).
- The secret rotation period must be captured in the system design.
Recommended rotation periods:
- Database credentials: 30-90 days (automate via Secrets Manager)
- API keys: 90 days or on suspected compromise
- Certificates: Before expiry (automate via ACM where possible)
- Rotate secrets automatically where possible, or ensure that a manual rotation process is documented and followed.
- Use Identity and Access Management (IAM) policy statements to enforce least-privilege access to secrets.
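As a minimal sketch only (the secret name, region, and helper function are hypothetical), a workload might fetch credentials at runtime with boto3 rather than reading them from configuration files:

```python
import json

import boto3

def get_database_credentials(secret_id: str) -> dict:
    """Fetch a secret at runtime; never hard-code or log the value."""
    client = boto3.client("secretsmanager", region_name="ap-southeast-2")
    response = client.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])

# Hypothetical secret name for illustration only
creds = get_database_credentials("app-a/prod/database")
```

The calling role needs `secretsmanager:GetSecretValue` on that specific secret only, in keeping with the least-privilege IAM requirement above.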
Kubernetes Integration:
- Kubernetes workloads (per ADR 002: AWS EKS for Cloud Workloads) should use EKS Key Management Service (KMS) secrets encryption with namespace-local secrets by default.
- If secrets need to be accessed by multiple clusters, use External Secrets Operator to synchronise them from AWS Secrets Manager.
Consequences
Benefits:
- Automated secret rotation reduces human error
- Meets compliance and auditing requirements
- Enhanced security through centralised management
Risks if not implemented:
- Security exposure from manual secret handling
- Operational overhead and error-prone processes
- Non-compliance with security requirements
Trade-offs:
- AWS vendor dependency may complicate future migrations
ADR 008: Email Authentication Protocols
Status: Accepted | Date: 2025-08-15
Context
Government email domains are prime targets for cybercriminals who exploit them for phishing attacks, business email compromise, and brand impersonation. Citizens and businesses expect government emails to be trustworthy, making email authentication critical for maintaining public confidence and preventing fraud.
Without proper email authentication, attackers can easily spoof government domains to conduct social engineering attacks, distribute malware, or harvest credentials from unsuspecting recipients.
Decision
Implement email authentication standards for all government domains:
Required Standards:
- SPF: Publish records defining authorised mail servers. Use "-all" (hard fail) for domains with well-defined mail infrastructure; use "~all" (soft fail) only during initial rollout or when third-party senders are being onboarded.
- DKIM: Sign all outbound email with minimum 2048-bit RSA keys; rotate keys annually.
- DMARC: Implement with a progression timeline (a DNS verification sketch follows this list):
- Start with "p=none" to collect reports (2-4 weeks)
- Move to "p=quarantine" once legitimate sources are aligned (4-8 weeks)
- Progress to "p=reject" when reports show minimal false positives
- Include "rua=" for aggregate reports and "ruf=" for forensic reports
- Apply the same policy to subdomains with "sp=reject"
- MTA-STS: Publish an MTA-STS policy to enforce TLS for inbound mail transport.
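As a hedged verification example (the domain is hypothetical and dnspython is an assumed third-party dependency), the published SPF and DMARC records can be spot-checked directly from DNS:

```python
import dns.resolver  # third-party: dnspython

def check_email_auth(domain: str) -> dict:
    """Return the published SPF and DMARC TXT records for a domain."""
    results = {}
    for label, name, prefix in [
        ("spf", domain, "v=spf1"),
        ("dmarc", f"_dmarc.{domain}", "v=DMARC1"),
    ]:
        try:
            answers = dns.resolver.resolve(name, "TXT")
            records = [b"".join(r.strings).decode() for r in answers]
            results[label] = [r for r in records if r.startswith(prefix)]
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            results[label] = []
    return results

print(check_email_auth("example.wa.gov.au"))  # hypothetical domain
```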
Recommended:
- BIMI: Implement verified brand logos with Verified Mark Certificates (VMCs) for high-profile citizen-facing domains.
Implementation:
- Monitor DNS records for tampering
- Regular authentication testing and effectiveness reviews
- Incident response procedures for authentication failures
- Integration with email security gateways
Consequences
Benefits:
- Automated email authentication blocking domain spoofing
- Enhanced brand protection and citizen trust
- Comprehensive threat visibility through DMARC reporting
Risks if not implemented:
- Phishing attacks exploiting government domain reputation
- Reduced email deliverability affecting citizen communications
- Non-compliance with government security requirements
ADR 011: AI Tool Governance
Status: Accepted | Date: 2025-08-15
Context
Generative AI tools used for development and operations can process sensitive data and make automated decisions affecting security, privacy, and compliance. Without governance ensuring human oversight, these tools pose significant risks including unauthorised data exposure, biased decision-making, and compliance violations.
High-risk scenarios include:
- Automated Decision-Making: AI tools making policy, approval, or resource allocation decisions without human review
- Government Data Processing: Sensitive organisational data processed by offshore AI services
- Uncontrolled AI Outputs: AI-generated content, code, or analysis used without human validation
- Privacy Violations: Personal information processed by AI without appropriate consent or controls
References:
- ACSC Information Security Manual (ISM)
- WA Cyber Security Policy
- Privacy Act 1988
- WA Government Artificial Intelligence Policy and Assurance Framework
- Linux Foundation Agentic AI Foundation
- Oxide RFD 576: Using LLMs at Oxide - values-based approach to AI tool governance
Decision
Implement mandatory human oversight for all AI tool usage with pre-approval for any AI tools that process organisational data or generate outputs used in official capacity.
Human Oversight Requirements:
Adopt a values-based approach to AI governance (per Oxide RFD 576):
- Responsibility: Humans bear responsibility for AI-generated artifacts - the tool acts at human behest
- Rigor: AI should promote and reinforce rigorous thinking, not replace it with generated content
- Output Validation: All AI-generated content must be reviewed by qualified humans before use
- Decision Accountability: Clear human responsibility for all AI-assisted decisions
Covered AI Tools:
This ADR applies to all AI tools including:
- Development and coding assistance tools
- Content generation and writing assistants
- Data analysis and business intelligence platforms
- Automated testing and code review tools
Requirements:
AI tools must not:
- Automatically perform actions in customer-facing environments or on production infrastructure
- Process sensitive data with third parties without a formal contractual arrangement
- Automatically merge or release changes without human review
AI tools must:
- Run in isolated or local environments (refer to ADR 001: Application Isolation) with minimal permissions
- Have no direct network access to internal systems or databases
- Include technical guardrails against data exfiltration
Implementation Examples:
- Endorsed: Developer tooling with human code review for all generated code before merge
- Rejected: Automated tools that merge pull requests or deploy without human approval
Strategic Research
The following AI-assisted security tools are under investigation for potential future adoption:
| Tool | Purpose | Status | Links |
|---|---|---|---|
| ZeroPath | AI-powered security code review and vulnerability detection | Under Investigation | Documentation |
These tools are being evaluated for alignment with the human oversight requirements outlined in this ADR. Any adoption will require demonstrated compliance with mandatory requirements above.
Consequences
Benefits:
- Ensures human accountability for all AI-assisted decisions
- Maintains compliance with Privacy Act and data sovereignty requirements
- Prevents automated actions in production environments without approval
- Establishes clear audit trail for responsible AI usage
Risks if not implemented:
- Unauthorised data exposure to offshore AI services
- AI making critical decisions without human oversight
- Compliance violations and regulatory breaches
- Operational errors from unchecked AI outputs
ADR 012: Privileged Remote Access
Status: Accepted | Date: 2025-08-15
Context
Traditional privileged access methods using jump boxes, bastion hosts, and shared credentials create security risks through persistent network connections and broad administrative access. Modern cloud-native alternatives provide better security controls and audit capabilities for administrative tasks.
Decision
Replace traditional bastion hosts and jump boxes with cloud-native privileged access solutions:
Session Manager provides MFA enforcement, session recording, and audit trails without persistent network access.
Prohibited Methods:
- Bastion hosts and jump boxes with persistent SSH access
- Direct SSH/RDP access to production systems
- Shared administrative credentials and keys
- VPN-based administrative access
Required Access Methods:
- Server Access: AWS Systems Manager Session Manager (replaces SSH to EC2)
- Infrastructure Management: AWS CLI with temporary credentials (replaces persistent VPN)
- Kubernetes Access: kubectl with IAM authentication (replaces cluster SSH)
- Infrastructure Deployment: Infrastructure as Code with audit trails per ADR 010: Infrastructure as Code (replaces manual deployment)
Access Controls:
- Multi-factor authentication for all access
- Time-limited sessions
- Identity-based access through cloud IAM
- Approval workflows for privileged access
- Session recording and audit logging per ADR 007: Centralised Security Logging
Implementation:
- All sessions initiated through APIs only (see the sketch after this list)
- Short-lived credentials
- Real-time monitoring and alerting
- Integration with SIEM systems
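Interactive shells are normally started through the AWS CLI with the Session Manager plugin; as an illustration of the API-only requirement, the sketch below shows the underlying boto3 call that creates an auditable session (the instance ID and reason are hypothetical):

```python
import boto3

ssm = boto3.client("ssm", region_name="ap-southeast-2")

# Hypothetical instance ID; IAM policies and MFA gate who may call this.
session = ssm.start_session(
    Target="i-0123456789abcdef0",
    Reason="CR-1234: inspect application logs",
)
print(session["SessionId"])  # session activity is recorded for audit
```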
Consequences
Benefits:
- Zero-trust network access with session recording
- Enhanced audit capabilities through centralised logging
- Short-lived credential security reducing persistent threats
Risks if not implemented:
- Unauthorised lateral movement across network systems
- Prolonged security breaches from persistent access
- Non-compliance with government zero-trust requirements
ADR 013: Identity Federation Standards
Status: Accepted | Date: 2025-08-15
Context
Applications need to integrate with multiple identity providers including jurisdiction citizen identity services, enterprise directories, and cloud identity platforms. Current approaches use inconsistent protocols (SAML, OIDC, proprietary), creating integration complexity and security inconsistencies.
Modern identity federation requires support for emerging standards like verifiable credentials while maintaining compatibility with legacy enterprise systems.
References:
- Digital ID Act 2024
- OpenID Connect Core 1.0
- OWASP Authentication Cheat Sheet
- EU Digital Identity Wallet Architecture and Reference Framework (ARF) - European digital wallet standards
- ISO/IEC 18013-5:2021 Mobile Driving Licence - mobile document (mDL) standard for verifiable credentials
Decision
Standardise on OpenID Connect (OIDC) as the primary federation protocol for all new identity integrations, with SAML 2.0 support only for legacy systems that cannot support OIDC.
Protocol Standards:
- Primary: OpenID Connect for modern identity providers and new integrations
- Legacy Support: SAML 2.0 only when upstream providers require it and OIDC is unavailable
- Security: Implement PKCE for OIDC public clients and proper token validation (see the sketch after this list)
- Compliance: Support Digital ID Act 2024 requirements for jurisdiction identity services
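For illustration only (a sketch of the PKCE requirement, not a complete OIDC flow), a public client can generate the RFC 7636 verifier and challenge with the Python standard library:

```python
import base64
import hashlib
import secrets

def pkce_pair() -> tuple[str, str]:
    """Generate an RFC 7636 code_verifier and S256 code_challenge."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = pkce_pair()
# Send the challenge (code_challenge_method=S256) on the authorisation
# request; present the verifier when exchanging the code for tokens.
```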
Architecture Requirements:
- Applications should integrate through managed identity platforms (AWS Cognito, Microsoft Entra ID), not directly with identity providers
- Separate privileged and standard user domains for administrative access isolation (see Reference Architecture: OpenAPI Backend)
- Support multiple upstream identity providers per application
- Maintain audit trails per ADR 007: Centralised Security Logging
Identity Federation Flow:
The managed platform handles protocol translation between OIDC and SAML providers, token validation, and audit logging.
Emerging Standards:
- Support W3C Verifiable Credentials for jurisdiction identity services as they mature
- Plan for OpenID4VC wallet-based authentication patterns
- Align with EU Digital Identity Wallet (EUDI) architecture for international interoperability
- Support ISO/IEC 18013-5 mobile document (mDL) credentials for government-issued identity
Implementation Requirements:
- Implement fallback authentication mechanisms for critical systems
- Choose identity platforms with high availability and data export capabilities
Consequences
Benefits:
- Consistent modern federation standard across all applications
- Better security through OIDC's improved token handling and PKCE support
- Simplified integration with jurisdiction citizen identity services
- Clear separation of administrative and standard user access
Risks if not implemented:
- Fragmented authentication systems across applications
- Legacy SAML limitations hindering citizen service integration
- Inconsistent security posture across identity touchpoints
ADR 016: Web Application Edge Protection
Status: Accepted | Date: 2025-08-15
Context
Government web applications face heightened security threats including state-sponsored attacks, DDoS campaigns by activist groups, and sophisticated application-layer exploits targeting public services. These attacks can disrupt critical citizen services and damage public trust.
Traditional perimeter security is insufficient for protecting modern web applications that serve millions of citizens. Edge protection through CDNs and WAFs provides the first line of defence, filtering malicious traffic before it reaches application infrastructure.
References:
- ACSC Information Security Manual (ISM)
- ACSC Guidelines for System Hardening
- OWASP Web Application Security Testing Guide
Decision
All public web applications and APIs must use CDN with integrated WAF protection:
The CDN edge handles SSL termination, caching, WAF filtering, and DDoS mitigation before traffic reaches application infrastructure.
CDN Requirements:
- Geographic distribution with SSL/TLS termination at edge
- Cache optimization and origin shielding
- IPv6 dual-stack support on edge (internal use of IPv4 allowed)
WAF Protection:
- OWASP Top 10 protection rules enabled
- Layer 7 DDoS protection and rate limiting
- Geo-blocking and bot management
- Custom rules for application-specific threats
DDoS Protection:
- AWS Shield Advanced or equivalent
- Real-time attack monitoring and alerting
- DDoS Response Team access
Implementation:
- WAF logs integrated with SIEM per ADR 007: Centralised Security Logging
- Fail-secure configuration (no fail-open)
- Regular penetration testing and rule tuning
- CI/CD integration for automated deployments
Consequences
Benefits:
- Automated threat detection and mitigation at network edge
- Global content delivery and caching capabilities
- Comprehensive attack surface reduction through filtering
- Real-time traffic analysis and bot management
Risks if not implemented:
- Critical citizen services disrupted by attacks
- Direct server exposure to malicious traffic
- Slow response times affecting user adoption
- No early warning of emerging attack patterns
ADR 002: AWS EKS for Cloud Workloads
Status: Accepted | Date: 2025-02-17
Context
Organisations want to efficiently manage and scale bespoke workloads in a secure and scalable manner. Traditional server management can be cumbersome and inefficient for dynamic workloads. Provider-specific control planes can result in lock-in and artificial constraints limiting technology options.
Decision
To address these challenges, use a CNCF Certified Kubernetes platform with automatically managed infrastructure resources. Due to hyperscaler availability and scale, AWS EKS (Elastic Kubernetes Service) in Auto Mode is the preferred option.
This leverages Kubernetes for orchestration, AWS EKS for managed Kubernetes services, AWS Elastic Block Store (EBS) for storage and AWS load balancers for traffic management.
- AWS EKS Auto Mode: Provides a managed Kubernetes service that automatically scales the infrastructure based on workload demands.
- Managed Storage and NodePools: Ensures that the underlying infrastructure is maintained and updated by AWS.
- Load Balancers: Standardises ingress and traffic management.
- Persistent Storage: Databases and object storage should use Database-as-a-Service (DBaaS) to enable higher resilience for point-in-time recovery (PITR) and backups with lower overheads, as per ADR 018: Database Patterns.
Consequences
Benefits:
- Efficient resource utilisation through managed scaling
- Clear boundaries for shared responsibilities with a small operational overhead
- Enhanced security through automatic updates and patches
- Improved availability with managed storage and node pools
Risks if not implemented:
- Resource inefficiency from manual scaling
- High operational overhead managing custom infrastructure
- Security vulnerabilities from delayed updates
- Service downtime during traffic spikes
Strategic Research
CNCF Kubernetes AI Conformance
The CNCF Kubernetes AI Conformance Program establishes standards for AI/ML workload portability across Kubernetes platforms. Only platforms meeting these standards should be supported, ensuring workloads can interoperate as flexible nodes within a broader state/federal ecosystem.
Current Platform Conformance:
- AWS EKS meets proposed standards: ai-conformance EKS, AI on EKS
HPC Requirements:
Physical infrastructure for HPC (High-Performance Computing) projects must meet CNCF Kubernetes AI Conformance capabilities. This ensures models developed on local compute can scale to centralised HPC facilities without environment mismatches. Target sovereign Australian platforms meeting security and privacy requirements (ASD IRAP assessed, PRIS compliant).
Digital Sovereignty
Analysis such as Mitchell, Andrew D. and Samlidis, Theodore, "Cloud services and government digital sovereignty in Australia and beyond", International Journal of Law and Information Technology, Vol. 29, No. 4, 2021, pp. 364-394, highlights the ongoing issues with depending on hyperscalers in a single foreign jurisdiction. Given this changing landscape, exploring simplified options for secure sovereign-owned hosting, such as Australian dedicated servers and local colocation in Tier 3+ datacentres (designed for 99.98% uptime), is warranted and touched on below.
Bare metal management
Use a platform like Proxmox VE to run standalone clusters at multiple facilities with multiple 2U servers per location. Example hardware (starting at approximately $15k AUD per server): Dell PowerEdge R7725, HPE ProLiant DL385 Gen11, Lenovo ThinkSystem SR665 V3.
Year 1 estimated costs:
- Hardware: ~$200k for 6x ~$33k servers
- Colo (2 sites, Tier 3+): ~$50k for 2x 5kw racks with 1 Gbit IP Transit
- Total: ~$250k for ~2-3 TB RAM, ~500 cores, 100 TB disk across 2 sites (reduce by a factor of 2-3 for redundancy)
ADR 006: Automated Policy Enforcement
Status: Proposed | Date: 2025-07-29
Context
Cloud infrastructure requires automated policy enforcement to prevent misconfigurations, ensure compliance, and provide secure network access patterns. Manual checking cannot scale effectively across multiple accounts and services.
Decision
Implement comprehensive automated policy enforcement using AWS native services for governance, network security, and access control.
Governance (Control Tower, Config) enforces policies on network security (Transit Gateway, Security Groups), which controls access to workloads.
Governance Foundation
- AWS Control Tower: Account factory, guardrails, and compliance monitoring across organisation
- Service Control Policies: Preventive controls blocking non-compliant resource creation
- AWS Config Rules: Detective controls for compliance monitoring and drift detection
Network Security & Access
- Transit Gateway: Central hub for intra-account resource exposure via security groups
- Security Group References: Use security group IDs instead of hardcoded IP addresses for dynamic, maintainable access policies (see the sketch after the note below)
- Shield Advanced: DDoS protection for public-facing resources per ADR 016: Web Application Edge Protection
- VPC Flow Logs: Complete egress traffic monitoring and analysis per WA SOC Cyber Network Management Guideline
Note: This approach creates dependency on AWS for traffic and network protection. Open-source equivalents include Security Onion for network security monitoring, OPNsense and pfSense for firewall and intrusion detection capabilities.
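A minimal boto3 sketch of a security-group-referenced rule (group IDs and port are hypothetical): the database tier admits traffic from whatever instances carry the application tier's security group, with no IP addresses to maintain.

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")

# Hypothetical group IDs: allow the app tier to reach PostgreSQL on the
# database tier by referencing the source security group, not an IP range.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # database tier security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "UserIdGroupPairs": [{"GroupId": "sg-0fedcba9876543210"}],  # app tier
    }],
)
```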
Core Policy Areas
- Encryption: Mandatory encryption for all data stores and communications
- Access Control: IAM least-privilege access and security group-based resource access
- Resource Tagging: Governance and cost allocation requirements
- Data Sovereignty: Geographic restrictions for jurisdiction compliance
- Network Segmentation: Security group-based micro-segmentation over IP-based rules
Implementation Requirements:
- Implement policy validation in CI/CD pipelines following ADR 010: Infrastructure as Code
- Use security group references over hardcoded IP addresses for maintainable policies
- Monitor VPC Flow Logs for egress traffic analysis and anomaly detection
Consequences
Benefits:
- Proactive security misconfiguration prevention through automated guardrails
- Comprehensive egress traffic visibility via ADR 007: Centralised Security Logging
- Centralised network access management reducing operational complexity
Risks if not implemented:
- Security misconfigurations deploying to production environments
- Unmonitored egress traffic enabling data exfiltration
- Fragmented access policies creating security gaps
ADR 007: Centralised Security Logging
Status: Accepted | Date: 2025-02-25
Context
Security logs should be centrally collected to support monitoring, detection, and response capabilities across workloads. Sensitive information logging must be minimised to follow data protection regulations and reduce the risk of data breaches. Audit and authentication logs are critical for security monitoring and should be collected by default.
References:
- Open Web Application Security Project (OWASP) Logging Cheat Sheet
- Australian Cyber Security Centre (ACSC) Guidelines for system monitoring
- DGOV Technical Baseline for Detection Coverage (MITRE ATT&CK)
Decision
Centralise logging using Microsoft Sentinel and Amazon CloudWatch.
Configuration:
- Configure default collection for audit and authentication logs to simplify security investigations.
- Container workloads should configure Container Insights with enhanced observability and EKS control plane logging for audit and authentication logs by default.
- Configure logging to avoid capturing and exposing Personally Identifiable Information (PII); a redaction sketch follows this list.
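As one possible approach (a sketch, not a mandated control), a standard-library logging filter can mask obvious PII such as email addresses before records are shipped to the central platform:

```python
import logging
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+")

class RedactEmails(logging.Filter):
    """Mask email addresses in log messages before they are emitted."""

    def filter(self, record: logging.LogRecord) -> bool:
        # A fuller implementation would also scrub record.args.
        record.msg = EMAIL.sub("[REDACTED]", str(record.msg))
        return True  # keep the record, just with the address masked

logger = logging.getLogger("app")
logger.addFilter(RedactEmails())
logger.warning("login failed for citizen@example.com")  # logs [REDACTED]
```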
Operations:
- Review and update logging configurations regularly to ensure coverage and privacy requirements are met.
- Extract and archive log information used during investigations to an appropriate location (in alignment with record keeping requirements).
Consequences
Benefits:
- Faster incident detection and response
- Simplified compliance with data protection regulations
- Centralised security log management reduces operational overhead
Risks if not implemented:
- Delayed security incident detection from decentralised logs
- Sensitive information exposure leading to data breaches
- Incomplete audit trails hindering forensic investigations
ADR 010: Infrastructure as Code
Status: Accepted | Date: 2025-03-10
Context
All environments must be reproducible from source to minimise drift and security risk. Manual changes and missing version control create deployment failures and vulnerabilities.
Decision
Golden Path
- Git Repository Structure: Single repo per application with `environments/{dev,staging,prod}` folders matching AWS account names (e.g., an `app-a-infra` repo maps to `app-a-dev`, `app-a-staging`, and `app-a-prod` accounts)
- State Management: Terraform remote state with locking, separate state per environment
- CI Pipeline:
  - Validate: Trivy scan + `terraform plan`/`kubectl diff` drift check
  - Plan: Show proposed changes on PR
  - Apply: Deploy on tagged release only
- Versioning: Git tags = semantic versions (x.y.z) deployable to any environment
- Disaster Recovery: Checkout tag + run `just deploy --env=prod` with static artifacts from ADR 004
Required Tools & Practices
| Tool | Purpose | Stage | Mandatory |
|---|---|---|---|
| Trivy | Vulnerability scanning | Validate | Yes |
| Terraform or kubectl/kustomize | Configuration management | Deploy | Yes |
| Justfiles | Task automation | All | Recommended |
| devcontainer-base | Dev environment | Local | Recommended |
| k3d | Local testing | Dev | Optional |
Infrastructure as Code Workflow:
Git tags represent deployable versions. Environment folders (`environments/{dev,staging,prod}`) map to separate AWS accounts with isolated state storage.
Consequences
Benefits:
- Reproducible infrastructure deployments with version control
- Automated drift detection and prevention mechanisms
- Reliable disaster recovery through infrastructure as code
Risks if not implemented:
- Configuration drift creating security vulnerabilities
- Failed rollbacks during critical incident recovery
- Inconsistent environments affecting application reliability
ADR 014: Object Storage Backups
Status: Proposed | Date: 2025-07-22
Context
Current backup approaches lack cross-region redundancy and automated lifecycle management, creating single points of failure and compliance risks for government data retention requirements. Traditional storage systems do not provide the durability and geographic distribution needed for critical government systems.
Key challenges:
- Single region backup storage creating vulnerability to regional outages
- Manual backup processes prone to human error
- Lack of automated recovery testing
- Insufficient geographic separation for disaster recovery
Decision
Implement standardised object storage backup solution with automated cross-region replication and lifecycle management for all critical systems and data.
All storage (primary, backup, and replica) uses S3 buckets with versioning and immutable retention policies. Primary S3 buckets use native versioning for point-in-time recovery. DBaaS exports to backup buckets. Both primary and backup buckets replicate cross-region for geographic redundancy.
Storage Requirements:
- Object storage with versioning and immutable storage capabilities
- Database, application data, and infrastructure configuration backups
- Encryption at rest and in transit per ADR 005: Secrets Management
- Access controls aligned with ADR 001: Application Isolation
Critical Systems Definition:
- Production databases containing citizen or business data
- Application source code and deployment configurations
- Security logs and audit trails
- Infrastructure as Code templates and state files
Geographic Distribution:
- Cross-region replication within Australia (e.g., ap-southeast-2 to ap-southeast-4)
- Monitoring integration per ADR 007: Centralised Security Logging
Lifecycle Management:
- Automated storage tiering based on age and access patterns (see the sketch after this list)
- Compliance-based retention policies
- Recovery testing and validation procedures
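A hedged boto3 sketch of automated tiering and expiry (the bucket name, transition ages, and retention period are illustrative and should follow the agreed records schedule):

```python
import boto3

s3 = boto3.client("s3", region_name="ap-southeast-2")

# Bucket name, transition ages, and expiry are illustrative only.
s3.put_bucket_lifecycle_configuration(
    Bucket="agency-backups-example",
    LifecycleConfiguration={"Rules": [{
        "ID": "tier-then-expire",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},
        "Transitions": [
            {"Days": 30, "StorageClass": "STANDARD_IA"},
            {"Days": 180, "StorageClass": "GLACIER"},
        ],
        "Expiration": {"Days": 2555},  # ~7 years; match the retention schedule
    }]},
)
```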
Recovery Objectives:
- Recovery Time Objective (RTO): 4 hours for critical systems, 24 hours for standard systems
- Recovery Point Objective (RPO): 1 hour for databases, 24 hours for static content
- Implementation Example: AWS S3 Cross-Region Replication to Australian regions
Consequences
Benefits:
- Automated disaster recovery meeting defined RTO/RPO objectives
- Geographic redundancy protecting against regional outages
- Compliance with government data retention requirements
Risks if not implemented:
- Permanent data loss from infrastructure failures
- Extended service recovery times affecting citizen services
- Regulatory violations from inadequate data protection
ADR 015: Data Governance Standards
Status: Proposed | Date: 2025-07-28
Context
Data pipelines require governance to ensure quality and compliance. Modern approaches use code-based validation and version control rather than separate governance tools.
Decision
Use code-based data governance with git workflows. Data transformations written in Ibis are version-controlled, testable, and provide implicit lineage through code dependencies. See Reference Architecture: Data Pipelines for full implementation patterns.
Priority Focus Areas
- Schema Contracts: Define expected schemas in code, validate in CI/CD pipeline
- Data Lineage: Track through transformation code history in git
- Quality Validation: Use Ibis expressions for data validation checks, run as automated tests
- Audit Integration: Follow ADR 007: Centralised Security Logging for transformation logs
Implementation
```python
# Example: Schema validation with Ibis
import ibis

def validate_customers(table: ibis.Table) -> ibis.Table:
    """Validate customer data before processing."""
    return table.filter(
        table.email.notnull() &
        table.created_at.notnull() &
        (table.status.isin(['active', 'inactive', 'pending']))
    )
```
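Such checks can run as ordinary automated tests in the CI/CD pipeline; the sketch below assumes Ibis's default DuckDB backend and a test runner such as pytest:

```python
import ibis

def test_validate_customers_filters_bad_rows():
    """validate_customers (defined above) should drop non-conforming rows."""
    t = ibis.memtable({
        "email": ["a@example.com", None],
        "created_at": ["2025-01-01", "2025-01-02"],
        "status": ["active", "unknown"],
    })
    assert validate_customers(t).count().execute() == 1
```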
Consequences
Benefits:
- Data quality validation as code, testable in CI/CD
- Lineage tracked through git history and code dependencies
- No separate governance infrastructure to maintain
Risks if not implemented:
- Data quality issues reaching downstream systems
- Unable to trace data issues back to source transformations
- Compliance gaps from undocumented data handling
ADR 017: Analytics Tooling Standards
Status: Proposed | Date: 2025-07-28
Context
Organisations need simple, secure reporting with reproducible outputs. Reports should be version-controlled alongside the data transformations that produce them.
Decision
Use Quarto for analytics and reporting.
Why Quarto
- Multi-format: Same source produces HTML, PDF, Word, presentations
- Version-controlled: Reports live alongside data transformation code in git
- Open source: Markdown-based, portable, no vendor lock-in
- Accessible: Built-in support for WCAG compliance
Capabilities
| Need | Quarto Feature |
|---|---|
| Static reports | Markdown + code blocks |
| PDF documents | PDF output with professional formatting |
| Interactive charts | Observable JS for client-side interactivity |
| Dashboards | Quarto Dashboards for layout and filtering |
| Parameterised reports | Parameters for automated report generation |
Integration
- Data Sources: Query via Ibis or DuckDB per ADR 018: Database Patterns and Reference Architecture: Data Pipelines
- Deployment: Static HTML hosted per ADR 016: Web Application Edge Protection
- CI/CD: Automated report generation per ADR 004: CI/CD Quality Assurance
Consequences
Benefits:
- Version-controlled, reproducible analytics outputs
- Static hosting with minimal operational overhead
- Consistent tooling across reports, dashboards, and documents
Risks if not implemented:
- Inconsistent reporting approaches across teams
- Reports not tracked in version control
- Difficulty reproducing historical analytics outputs
ADR 018: Database Patterns
Status: Proposed | Date: 2025-07-28
Context
Applications need managed persistent storage (databases, datalakes, files, objects) with automatic scaling and jurisdiction-compliant backup strategies.
References:
- AWS Aurora Serverless v2 Documentation
- Percona Everest Documentation and Pigsty Documentation for development/non-AWS environments
- s3proxy and rclone serve s3 for development/non-AWS object storage
Decision
Use Aurora Serverless v2 outside EKS clusters with automated scaling, multi-AZ deployment, and dual backup strategy.
Datalakes: Use SQL engines over object storage:
- DuckLake over AWS S3 for simpler analytical workloads
- Trino over S3 Tables for larger-scale data processing
See Reference Architecture: Data Pipelines for full datalake patterns.
Implementation
- Database: Aurora Serverless v2 (PostgreSQL/MySQL) with built-in connection pooling and automatic scaling
- Object Storage: Amazon S3 and Amazon S3 Tables for datalakes, files and objects
- Deployment: Outside EKS cluster (handles complexity automatically)
- Credentials: Follow ADR 005: Secrets Management for endpoint and credential management
- Backup: Follow ADR 014: Object Storage Backups plus AWS automated snapshots
- Security: Follow ADR 007: Centralised Security Logging and ADR 012: Privileged Remote Access
Consequences
Benefits:
- Serverless scaling reducing operational costs during low usage periods
- Automated high availability with managed backup strategies per ADR 014: Object Storage Backups
- Compliance with jurisdiction requirements through dual backup approach
Risks if not implemented:
- High operational overhead managing database infrastructure
- Inconsistent backup strategies across database systems
- Cost inefficiency from overprovisioned database resources
ADR 003: API Documentation Standards
Status: Accepted | Date: 2025-03-26
Context
Secure, maintainable APIs require mature frameworks with low complexity and industry standard compliance. Where existing standards exist, prefer them over bespoke REST APIs.
Decision
API Requirements
| Requirement | Standard | Mandatory |
|---|---|---|
| Documentation | OpenAPI Specification | Yes |
| Testing | Restish CLI scripts | Yes |
| Framework | Huma (Go), Litestar (Python), or equivalent | Recommended |
| Naming | Consistent convention | Yes |
| Security | OWASP API security coverage | Yes |
| Exposure | No admin APIs on Internet | Yes |
Development Guidelines
- Self-Documenting: Use frameworks that auto-generate OpenAPI specs
- Data Types: Prefer standard types over custom formats
- Segregation: Separate APIs by purpose (see Reference Architecture: OpenAPI Backend)
- Testing: Include security vulnerability checks in test scripts
API Development Flow:
Use self-documenting frameworks that generate OpenAPI specifications, then validate with automated security and behaviour tests.
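As a sketch of the self-documenting approach with Litestar, one of the recommended frameworks (the route and response model are illustrative):

```python
from dataclasses import dataclass

from litestar import Litestar, get

@dataclass
class HealthStatus:
    status: str
    version: str

@get("/v1/health")
async def health() -> HealthStatus:
    """Typed handlers drive the generated OpenAPI document."""
    return HealthStatus(status="ok", version="1.0.0")

# Litestar generates the OpenAPI specification from the typed handlers
# and serves it under /schema by default.
app = Litestar(route_handlers=[health])
```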
Consequences
Benefits:
- Standardised API documentation automatically generated from code
- Enhanced security through consistent validation patterns
- Reduced maintenance overhead via automated testing integration
Risks if not implemented:
- Documentation drift creating integration difficulties
- Security vulnerabilities from inconsistent API patterns
- Increased development time debugging undocumented APIs
ADR 004: CI/CD Quality Assurance
Status: Accepted | Date: 2025-03-10
Context
Ensure security and integrity of software artifacts that are consumed by infrastructure repositories per ADR 010. Threat actors exploit vulnerabilities in code, dependencies, container images, and exposed secrets.
Decision
CI/CD Pipeline Requirements
Pipeline Flow: Code Commit → Build & Test → Quality Assurance → Release
| Stage | Tools | Purpose | Mandatory |
|---|---|---|---|
| Build | Railpack and Docker Bake | Multi-platform builds with SBOM/provenance | Yes |
| Scan | scc and Trivy | Complexity and Vulnerability scanning | Yes |
| Analysis | GitHub CodeQL | Static code analysis | Yes |
| Test | Playwright | End-to-end testing | Recommended |
| Performance | Grafana K6 | Load testing | Optional |
| API | Restish | API validation per ADR 003 | Optional |
Development Environment
- Use devcontainer-base for standardised tooling
- Use Railpack and Docker Bake to define and standardise build processes
- Use Justfiles for task automation
- Use GitHub Actions for CI/CD automation
CI/CD Pipeline:
Build produces container images with SBOM/provenance. Scan runs vulnerability and static analysis. Release produces static artifacts consumed by ADR 010: Infrastructure as Code.
Consequences
Benefits:
- Automated security scanning and vulnerability remediation
- Standardised artifact integrity and compliance alignment
- Consistent deployment pipelines with audit trails
Risks if not implemented:
- Vulnerable containers deployed to production
- Exposed secrets in application artifacts
- Manual security processes prone to human error
- Compliance violations and audit failures
ADR 009: Release Documentation Standards
Status: Accepted | Date: 2025-03-04
Context
To ensure clear communication of changes and updates to security and infrastructure operations teams, release notes should be standardised. The release notes should succinctly capture key information, including new features, improvements, bug fixes, security updates, and infrastructure changes, with links to relevant changelogs.
Decision
Adopt a standardised release notes template in Markdown format. Brief descriptions should include the security implications and operational impacts of changes such as vulnerability fixes, compliance improvements, or changes to authentication and authorisation mechanisms. Descriptions should also detail operational aspects, including deployment processes, logging & monitoring considerations, and any modifications to Infrastructure as Code (IaC).
Git Tagging Requirements:
- Create a git tag for each release following semantic versioning (v1.0.0, v1.1.0, etc.)
- Tags must be annotated with release notes summary
- Tags should be created after all ADR acceptance and README updates
- Tag message should reference the release documentation
A template is provided below that can be tailored per project. A completed release notes Markdown document should be provided with all proposed changes.
## Release Notes
### Overview
- **Name:** Name
- **Version:** [Version Number](#)
- **Previous Version:** [Version Number](#)
### Changes and Testing
High level summary
**New Features & Improvements**:
- [Feature/Improvement 1]: Brief description including testing.
- [Feature/Improvement 2]: Brief description including testing.
**Bug Fixes & Security Updates**:
- [Bug Fix/Security Update 1]: Brief description with severity level and response timeline.
- [Bug Fix/Security Update 2]: Brief description with severity level and response timeline.
- **Response Timelines**: Critical (24h), High (7d), Medium (30d), Low (90d)
### Changelogs
*Only include list items changed by this release*
- **Code**: Brief description. [View Changes](#)
- **Infrastructure**: Brief description. [View Changes](#)
- **Configuration & Secrets**: Brief description.
### Known Issues
- [Known Issue 1]: Brief description.
- [Known Issue 2]: Brief description.
### Action Required
- [Action 1]: Brief description of any action required by users or stakeholders.
- [Action 2]: Brief description of any action required by users or stakeholders.
### Contact
For any questions or issues, please contact [Contact Information].
Consequences
Benefits:
- Standardised release communication improving cross-team coordination
- Comprehensive change tracking supporting ADR 007: Centralised Security Logging
- Enhanced collaboration through consistent documentation processes
Risks if not implemented:
- Critical release information lost between development teams
- Poor decision making from insufficient release context
- Security incidents from undocumented system changes
Reference Architecture: Content Management
Status: Proposed | Date: 2025-07-28
When to Use This Pattern
Use when building:
- Public websites and intranets
- Content portals with editorial workflows
- Headless CMS backends for mobile apps or multi-channel publishing
Overview
This template implements content management systems meeting WA Government compliance requirements, combining security isolation, managed infrastructure, and edge protection.
Core Components
Project Kickoff Steps
Foundation Setup
- Apply Isolation - Follow ADR 001: Application Isolation for CMS service network and runtime separation
- Deploy Infrastructure - Follow ADR 002: AWS EKS for Cloud Workloads for CMS container deployment
- Configure Infrastructure - Follow ADR 010: Infrastructure as Code for reproducible deployments
- Setup Database - Follow ADR 018: Database Patterns for Aurora Serverless v2 content storage
Security & Operations
- Configure Secrets Management - Follow ADR 005: Secrets Management for database credentials and API keys
- Setup Logging - Follow ADR 007: Centralised Security Logging for audit trails and editorial tracking
- Setup Backup Strategy - Follow ADR 014: Object Storage Backups for content and media backup
- Configure Edge Protection - Follow ADR 016: Web Application Edge Protection for CDN and WAF setup
- Identity Integration - Follow ADR 012: Privileged Remote Access for editorial authentication
Implementation Details
Content Workflows & Editorial:
- Configure content workflows and editorial approval processes
- Setup media asset management and CDN integration per ADR 016: Web Application Edge Protection
- Implement headless CMS API following ADR 003: API Documentation Standards
- Configure content moderation and approval workflows
Compliance & Accessibility:
- Configure WCAG 2.1 AA accessibility compliance and automated testing
- Setup jurisdiction-specific compliance requirements (e.g., privacy policies, cookie consent)
- Implement content governance and retention policies per ADR 015: Data Governance Standards
- Configure multilingual content management if required
Performance & SEO:
- Setup SEO metadata management and structured data (JSON-LD)
- Implement content performance monitoring per ADR 007: Centralised Security Logging
- Configure CDN caching strategies and cache invalidation
- Setup content analytics and user behaviour tracking
Reference Architecture: Data Pipelines
Status: Proposed | Date: 2025-01-28
When to Use This Pattern
Use when building:
- Analytics and business intelligence reporting
- Data integration between organisational systems
- Automated data processing and transformation workflows
Overview
This template implements scalable data pipelines using Ibis for portable dataframe operations and DuckLake for lakehouse storage. The approach prioritises simplicity and portability - write transformations once in Python, run them anywhere from laptops to cloud warehouses.
Core Components
Key Technologies:
| Component | Tool | Purpose |
|---|---|---|
| Transformation | Ibis | Portable Python dataframe API - same code runs on DuckDB, PostgreSQL, or cloud warehouses |
| Local Engine | DuckDB | Fast analytical queries, runs anywhere without infrastructure |
| Lakehouse | DuckLake | Open table format over S3 with ACID transactions |
| Reporting | Quarto | Static reports from notebooks, version-controlled |
Project Kickoff Steps
Foundation Setup
- Apply Isolation - Follow ADR 001: Application Isolation for data processing network separation
- Deploy Infrastructure - Follow ADR 002: AWS EKS for Cloud Workloads for container deployment
- Configure Infrastructure - Follow ADR 010: Infrastructure as Code for reproducible infrastructure
- Setup Storage - Follow ADR 018: Database Patterns for S3 and DuckLake configuration
Security & Operations
- Configure Secrets - Follow ADR 005: Secrets Management for data source credentials
- Setup Logging - Follow ADR 007: Centralised Security Logging for audit trails
- Setup Backups - Follow ADR 014: Object Storage Backups for data backup
- Data Governance - Follow ADR 015: Data Governance Standards for data quality
Development Process
- Configure CI/CD - Follow ADR 004: CI/CD Quality Assurance for automated testing
- Setup Releases - Follow ADR 009: Release Documentation Standards for versioning
- Analytics Tools - Follow ADR 017: Analytics Tooling Standards for Quarto integration
Implementation Details
Why Ibis + DuckDB:
- Portability: Write transformations in Python, run on any backend (DuckDB locally, PostgreSQL, BigQuery, Snowflake) - see the sketch after this list
- Simplicity: No complex orchestration infrastructure required for most workloads
- Performance: DuckDB handles analytical queries on datasets up to hundreds of gigabytes on a single machine
- Cost: Run development and small-medium workloads without cloud compute costs
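A brief sketch of that portability (file, table, and column names are hypothetical); the same expression runs unchanged whichever backend the connection points at:

```python
import ibis

con = ibis.duckdb.connect()  # local development engine
# con = ibis.postgres.connect(...)  # hypothetical production swap

orders = con.read_parquet("orders.parquet", table_name="orders")
monthly = (
    orders.group_by("order_month")
    .aggregate(total=orders.amount.sum())
    .order_by("order_month")
)
print(monthly.execute())
```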
When to Scale Beyond DuckDB:
- Data exceeds available memory/disk on a single node
- Need concurrent writes from multiple processes
- Require real-time streaming ingestion
Data Quality:
- Use Ibis expressions for data validation
- Implement schema checks in CI/CD pipeline
- Track data lineage through transformation code in git
Cost Optimisation:
- Run DuckDB locally for development and testing (zero infrastructure cost)
- Use S3 Intelligent Tiering for automatic archival
- Scale to cloud warehouses only when data volume requires it
Reference Architecture: Identity Management
Status: Proposed | Date: 2025-07-29
When to Use This Pattern
Use when building:
- Applications requiring user login via government or enterprise identity providers
- Single sign-on across multiple services
- Integration with Australian Government Digital ID or verifiable credentials
Overview
This template implements OIDC-based identity federation using a broker pattern. A central identity broker translates between upstream providers (Government Digital ID, enterprise directories) and downstream applications (your services), providing a single integration point with centralised policy enforcement.
Identity Federation Pattern
This pattern implements a broker-based identity federation that translates between upstream identity providers (Government Digital ID, verifiable credentials) and downstream identity consumers (AWS Cognito, Microsoft Entra ID).
Key Benefits:
- Single integration point for multiple upstream providers
- Standardised OIDC/SAML interface for downstream consumers
- Centralised policy enforcement and audit logging
- Support for both government and commercial identity ecosystems
Core Components
The architecture consists of three layers:
- Identity Providers: Government Digital ID, enterprise directories, verifiable credentials
- Identity Broker: Normalises claims, enforces policies, provides audit logging
- Applications: Consume standardised OIDC/SAML tokens via AWS Cognito or Entra ID
Project Kickoff Steps
- Infrastructure Foundation - Follow ADR 001: Application Isolation, ADR 002: AWS EKS for Cloud Workloads, and ADR 018: Database Patterns for identity service deployment and data separation
- Security & Secrets - Follow ADR 005: Secrets Management for OIDC client secrets and ADR 007: Centralised Security Logging for authentication audit trails
- Identity Federation - Follow ADR 013: Identity Federation Standards for upstream provider integration and downstream consumer configuration
- Privileged Administration - Follow ADR 012: Privileged Remote Access for identity service administration access
Implementation Considerations
Privacy & PII Protection (Digital ID Act 2024):
- Data minimisation: Prohibit collection beyond identity verification requirements
- No single identifiers: Prevent tracking across services using persistent identifiers
- Marketing restrictions: Prohibit disclosure of identity information for marketing purposes
- Voluntary participation: Users cannot be required to create Digital ID for service access
- Biometric safeguards: Restrict collection, use, and disclosure of biometric information
- Breach notification: Implement cyber security and fraud incident management processes
Identity Proofing Level Selection:
- IP1-IP2: Low-risk transactions with minimal personal information exposure
- IP2+: Higher-risk services requiring biometric verification and stronger assurance
- Risk assessment: Match proofing level to transaction risk and data sensitivity
- Credential binding: Ensure authentication levels align with proofing requirements
Standards Compliance:
- Verifiable credentials: ISO/IEC 18013-5:2021 and W3C Verifiable Credentials
- Government Digital ID: Digital ID Act 2024 privacy and security requirements
- International interoperability: eIDAS Regulation patterns
Reference Architecture: OpenAPI Backend
Status: Proposed | Date: 2025-07-28
When to Use This Pattern
Use when building:
- Backend services that other applications consume via API
- Systems requiring clear separation between public and administrative operations
- Services needing auto-generated API documentation
Overview
This template implements OpenAPI-first API services with complete separation between user-facing operations (api.domain) and administrative operations (admin.domain). The separation provides network and authentication isolation for privileged functions.
Core Components
Standard APIs (api.example.com/v1/*): Business operations for authenticated users
Admin APIs (admin.example.com/v1/*): System management for privileged users
The two endpoints use separate authentication realms per ADR 013: Identity Federation Standards, providing network and authentication isolation.
Project Kickoff Steps
- Infrastructure Foundation - Follow ADR 001: Application Isolation and ADR 002: AWS EKS for Cloud Workloads
- API Standards - Follow ADR 003: API Documentation Standards for OpenAPI specification
- Identity Federation - Follow ADR 013: Identity Federation Standards for domain separation
- Edge Protection - Follow ADR 016: Web Application Edge Protection for rate limiting and security
- Database & Secrets - Follow ADR 018: Database Patterns and ADR 005: Secrets Management
- Logging & Monitoring - Follow ADR 007: Centralised Security Logging for audit trails
ADR ###: Specific Decision Title
Status: Proposed | Date: YYYY-MM-DD
Context
What problem are we solving? Include background and constraints.
Decision
What we decided and how to implement it:
- Requirement 1: Specific implementation detail
- Requirement 2: Configuration specifics
- Requirement 3: Monitoring approach
Consequences
Positive:
- Benefit 1 with explanation
- Benefit 2 with explanation
Negative:
- Risk 1 with mitigation
- Risk 2 with mitigation
Reference Architecture: Pattern Name
Status: Proposed | Date: YYYY-MM-DD
When to Use This Pattern
Clear use case description for when to apply this architecture.
Overview
Brief template description focusing on practical implementation.
Core Components
Project Kickoff Steps
- Step Name - Follow relevant ADRs for implementation
- Next Step - ADR needed for missing standards
- Final Step - Reference to existing practices
Contributing Guide
When to Create ADRs
Create ADRs for foundational decisions only:
- High cost to change mid/late project
- Architectural patterns and technology standards
- Security frameworks and compliance requirements
- Infrastructure patterns that affect multiple teams
Do not create ADRs for:
- Implementation details (use documentation)
- Project-specific configurations
- Operational procedures that change frequently
- Tool-specific guidance that belongs in user manuals
Quick Workflow
- Open in Codespaces - Automatic tool setup
- Get number - `just next-number`
- Create file - `###-short-name.md` in correct directory (see content types)
- Write content - Follow template below
- Lint - `just lint` to fix formatting, check SUMMARY.md, and validate links
- Add to SUMMARY.md - Include new ADR in navigation (required for mdBook)
- Submit PR - Ready for review
Directory Structure
| Directory | Content |
|---|---|
| `development/` | API standards, CI/CD, releases |
| `operations/` | Infrastructure, logging, config |
| `security/` | Isolation, secrets, AI governance |
| `reference-architectures/` | Project kickoff templates |
Content Types: When to Use What
ADRs (Architecture Decision Records)
Purpose: Document foundational technology decisions that are expensive to change
Format: `###-decision-name.md` in `development/`, `operations/`, or `security/`
Examples: "AWS EKS for workloads", "Secrets management approach", "API standards"
Reference Architectures
Purpose: Project kickoff templates that combine multiple existing ADRs
Format: `descriptive-name.md` in `reference-architectures/`
Examples: "Content Management", "Data Pipelines", "Identity Management"
Rule: Reference architectures should only link to existing ADRs, not create new ones.
ADR Template
See templates/adr-template.md for the complete template.
Note: ADR numbers are globally unique across all directories (gaps from removed drafts are normal)
Reference Architecture Template
See templates/reference-architecture-template.md for the complete template.
Quality Standards
Before submitting:
- Title is concise (under 50 characters) and actionable
- All acronyms defined on first use
- Active voice (not passive)
- Passes `just lint` without errors
Title Examples:
- GOOD: "ADR 002: AWS EKS for Cloud Workloads" (concise, ~30 chars)
- GOOD: "ADR 008: Email Authentication Protocols" (specific, clear)
- BAD: "ADR 004: Enforce release quality with CI/CD prechecks and build attestation" (too long)
- BAD: "Container stuff" or "Security improvements" (too vague)
Status Guide
| Status | Meaning |
|---|---|
| Proposed | Under review |
| Accepted | Active decision |
| Superseded | Replaced by newer ADR |
ADR References
- Reference format: `[ADR 005: Secrets Management](../security/005-secrets-management.md)`
- Quick reference: `per ADR 005`
- Multiple refs: `aligned with ADR 001 and ADR 005`
Examples:
- "Encryption handled per ADR 005: Secrets Management"
- "Access controls aligned with ADR 001"
Writing Tips
- Be specific: "Use AWS EKS auto mode" not "Use containers"
- Include implementation: How, not just what
- Define scope: What's included and excluded
- Reference standards: Link to external docs
- Australian English: Use "organisation" not "organization", "jurisdiction" not "government"
- Character usage: Use plain-text safe Unicode - avoid emoji, smart quotes, and em-dashes for PDF compatibility
- D2 diagrams: Use D2 format for diagrams with clean syntax and universal compatibility
  - Use when text alone isn't sufficient (system relationships, data flows, workflows)
  - Keep simple: 5-7 components max, clear labels, logical flow, consistent colours
Compliance Mapping
This table maps ADRs to specific controls and requirements in Western Australian and Australian compliance frameworks.
ACSC Information Security Manual (ISM)
| ADR | Topic | ISM Guidelines & Control IDs | Key Controls |
|---|---|---|---|
| 001 Isolation | Application isolation | Guidelines for Networking (ISM-1182, ISM-0535, ISM-1277, ISM-1517) | Network segmentation, micro-segmentation, preventing bypass of controls |
| 002 Workloads | Cloud workloads | Cloud Computing Security (ISM-1588, ISM-1589, ISM-1452, ISM-0499) | Cloud security assessment, multi-tenant isolation, virtualisation hardening |
| 004 CI/CD | Build and release | Guidelines for Software Development (ISM-1256, ISM-0400, ISM-1419, ISM-2032) | Secure development lifecycle, environment segregation, automated testing |
| 005 Secrets | Secrets management | Guidelines for Cryptography (ISM-0507, ISM-0488, ISM-0518, ISM-1090) | Key management, secure storage of secrets, key rotation |
| 007 Logging | Security logging | Guidelines for System Monitoring (ISM-0580, ISM-1405, ISM-1985, ISM-0988) | Event logging policy, centralised logging, log protection, time synchronisation |
| 008 Email Auth | Email authentication | Guidelines for Email (ISM-0574, ISM-1151, ISM-1540, ISM-0259) | SPF, DKIM, DMARC, email encryption |
| 010 IaC | Infrastructure as code | Guidelines for System Hardening (ISM-1211, ISM-1409, ISM-1383) | Configuration management, automated deployment, drift detection |
| 011 AI Governance | AI tool governance | Guidelines for Software Development (ISM-2074, ISM-1755, ISM-0226) | AI usage policy, supply chain risk management, software assessment |
| 012 Privileged Access | Privileged access | Guidelines for System Management (ISM-1175, ISM-1507, ISM-1483, ISM-1173) | Restricting privileged access, JIT access, jump servers, MFA for admins |
| 013 Identity | Identity federation | Guidelines for Personnel Security (ISM-0418, ISM-1173, ISM-1420, ISM-1505) | Authentication, MFA, federated identity trust, credential management |
| 016 Edge Protection | WAF and CDN | Guidelines for Gateways (ISM-1192, ISM-1262, ISM-1460) | Web application firewalls, traffic inspection, DDoS protection |
WA Government Cyber Security Policy (WA CSP)
The 2024 WA Government Cyber Security Policy defines baseline cyber security requirements for WA Government entities.
| ADR | WA CSP Requirement | Section |
|---|---|---|
| 001 Isolation | Cyber security context & risk management | 2.1, 2.2 |
| 002 Workloads | Supply chain risk, data offshoring | 2.3, 1.5 |
| 005 Secrets | Information security (Cryptography) | 3.1 |
| 006 Policy Enforcement | Cyber security governance | 1.4 |
| 007 Logging | Continuous monitoring | 4.2 |
| 011 AI Governance | Supply chain risk management | 2.3 |
| 012 Privileged Access | Identity and access management | 3.6 |
| 013 Identity | Identity and access management | 3.6 |
Implementation Guidance:
- 1.1 Accountable Authority - See Policy Implementation section
- 1.3 Cyber Security Operations - WA SOC Guidelines
WA Government AI Policy
The WA Government AI Policy and Assurance Framework requires AI Accountable Officers and self-assessments for AI projects.
| ADR | WA AI Policy Requirement |
|---|---|
| 011 AI Governance | AI Accountable Officer, AI Assurance Framework self-assessment |
| 015 Data Governance | Data quality validation for AI systems |
Key Requirements:
- Nominate: AI Accountable Officer per entity
- Assess: Complete AI Assurance Framework self-assessment (downloadable template available on policy page)
- Submit: Refer high-risk projects (or >$5M) to the Office of Digital Government
Privacy and Responsible Information Sharing (PRIS)
The Privacy and Responsible Information Sharing (PRIS) framework governs personal information handling and upcoming statutory requirements.
| ADR | PRIS Alignment |
|---|---|
| 007 Logging | Minimise PII in logs (Data Minimisation) |
| 013 Identity | Data minimisation, consent protocols |
| 015 Data Governance | Information classification, retention schedules |
Digital ID Act 2024 (Commonwealth)
The Digital ID Act 2024 establishes privacy safeguards for the Australian Government Digital ID System (AGDIS).
| ADR | Digital ID Act Requirement |
|---|---|
| 013 Identity | Data minimisation (s15), no single identifiers (s16), voluntary participation (s18), biometric safeguards (Part 4) |
Key Privacy Safeguards:
- Prohibit collection beyond identity verification requirements
- Prevent tracking across services using persistent identifiers
- Users cannot be required to create a Digital ID for service access (voluntary)
- Strict restrictions on collection, use, and disclosure of biometric information
Additional Resources
- ACSC Essential Eight
- WA SOC Cyber Security Guidelines
- WA Government Cyber Security Policy - includes Data Offshoring Position
- National Framework for AI Assurance in Government
Glossary
Acronyms and Definitions
ACSC - Australian Cyber Security Centre
ADR - Architecture Decision Record
API - Application Programming Interface
ATT&CK - Adversarial Tactics, Techniques & Common Knowledge (MITRE)
AWS - Amazon Web Services
BIMI - Brand Indicators for Message Identification
CDN - Content Delivery Network
CI/CD - Continuous Integration/Continuous Deployment
CNCF - Cloud Native Computing Foundation
DBaaS - Database as a Service
DGOV - Office of Digital Government (Western Australia)
DKIM - DomainKeys Identified Mail
DMARC - Domain-based Message Authentication, Reporting and Conformance
DNS - Domain Name System
DTT - Digital Transformation and Technology Unit
EKS - Elastic Kubernetes Service (AWS)
ETL - Extract, Transform, Load
GCP - Google Cloud Platform
IAM - Identity and Access Management
IAP - Identity-Aware Proxy
ISM - Information Security Manual (ACSC)
JIT - Just-In-Time
OIDC - OpenID Connect
OWASP - Open Web Application Security Project
PII - Personally Identifiable Information
PITR - Point-in-Time Recovery
PKCE - Proof Key for Code Exchange
RDP - Remote Desktop Protocol
RPO - Recovery Point Objective
RTO - Recovery Time Objective
SAML - Security Assertion Markup Language
SBOM - Software Bill of Materials
SIEM - Security Information and Event Management
SPF - Sender Policy Framework
SSO - Single Sign-On
TLS - Transport Layer Security
VMC - Verified Mark Certificate
VPN - Virtual Private Network
WAF - Web Application Firewall
WCAG - Web Content Accessibility Guidelines
AI Agent Guide
This repository is designed for use by AI coding agents (primarily Goose) to assist with architecture decisions and implementation.
Philosophy
This guide aligns with pragmatic engineering principles:
- The Grug Brained Developer: Prefer simplicity over complexity. "Complexity very bad." Choose boring technology, avoid over-engineering, and make systems easy to understand and debug.
- CNCF / Linux Foundation: Preference for open-source, cloud-native technologies with strong community governance. Avoid vendor lock-in where practical.
Getting Started
- Review the Architecture Principles - Foundation for all decisions
- Browse Reference Architectures - Project kickoff templates combining multiple ADRs
- Search ADRs by domain - `security/`, `operations/`, `development/`
Essential Commands
```sh
mise tasks
build          Build the book (just build with tool prereqs for CI)

just --list
default        # Show all available commands

[build]
build          # Build documentation website (includes link checking)
clean          # Clean generated files

[development]
next-number    # Get next ADR number for new files
serve          # Preview documentation locally (port 8080)

[quality]
check-summary  # Verify SUMMARY.md includes all markdown files
lint           # Run all checks and fixes

[setup]
setup          # Install required tools and dependencies
```
Workflow
- Get ADR number - `just next-number`
- Create file - Use pattern `###-short-name.md` in correct directory
- Follow workflow - See CONTRIBUTING.md for complete workflow, templates, and writing guidelines
PDF Generation
- Build - `just build` creates both website and PDF at `book/pandoc/pdf/architecture-decision-records.pdf`
- Live PDF - Available at architecture-decision-records.pdf
- Format - A4 ISO standard with D2 diagrams
Troubleshooting
Validation Issues:
- Run `just lint` to check all issues
Automated Workflows
- Website deployment - Automatically builds and deploys to GitHub Pages on push to `main` (a hypothetical workflow sketch follows this list)
- PDF generation - Automatically creates and attaches PDFs to GitHub releases
- Chapter ordering - Files sorted numerically by ADR number
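For orientation only, a Pages deployment of this kind is often wired up roughly as follows; the file name, action versions, and the mise step are assumptions, not this repository's actual configuration:

```yaml
# .github/workflows/deploy.yml - hypothetical sketch, not the real workflow
name: deploy
on:
  push:
    branches: [main]
permissions:
  contents: read
  pages: write
  id-token: write
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    steps:
      - uses: actions/checkout@v4
      - uses: jdx/mise-action@v2        # installs mise-managed tool prereqs (assumption)
      - run: mise run build             # per 'mise tasks': build with tool prereqs for CI
      - uses: actions/upload-pages-artifact@v3
        with:
          path: book                    # assumed mdBook output directory
      - id: deployment
        uses: actions/deploy-pages@v4
```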
mdBook Project Notes
- Book Format - Project uses mdBook, navigation defined in `SUMMARY.md`
- Manual Updates - Add new ADR files to `SUMMARY.md` for navigation (required; see the sketch after this list)
- Diagram Support - Use D2 diagrams in code blocks for universal compatibility
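For example, registering a new ADR in navigation might look like this `SUMMARY.md` fragment, where the last line is the newly added entry (file paths and the ADR 019 name are illustrative):

```markdown
# Summary
- [Architecture Principles](principles.md)
- [ADR 005: Secrets Management](security/005-secrets-management.md)
- [ADR 019: Short Name](security/019-short-name.md)
```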
D2 Diagram Guidelines
When to use: System relationships, data flows, workflows, or architecture patterns where text alone isn't sufficient.
Keep simple: Maximum 5-7 components, clear labels, logical flow, consistent colours.
Example:
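The original example diagram is not reproduced here; a minimal D2 sketch in the spirit of these guidelines (component names are hypothetical) would be:

```d2
# Hypothetical request flow with four components and labelled connections
user: User
gateway: API Gateway
service: Backend Service
db: Database

user -> gateway: HTTPS request
gateway -> service: validated request
service -> db: query
```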
Quality check: Adds value, easy to follow, clear labels, logical connections.