The Public Sector’s AI Crossroads
Ethical AI is not a conceptual debate; it’s a live operational challenge for agencies under pressure to adopt AI, meet digital service expectations, and comply with rising standards around privacy, auditability, and public trust. For many, it can feel like an insurmountable task: deploying AI under executive mandate while safeguarding community trust and safety.
The question isn’t “Should we use AI?” but “How do we govern it?” Australia’s government has recognised this challenge at the highest levels. The prior national AI Action Plan set out a vision for Australia to lead in “trusted, secure and responsible AI” and now the focus has shifted to concrete safeguards.
A Safe and Responsible AI agenda is taking shape with new policies and standards. For example, the Policy for the Responsible Use of AI in Government took effect in late 2024, requiring every federal department to designate an AI accountable official and publish AI transparency statements within set deadlines. By mid-2025, the government had also released an AI Technical Standard to embed transparency, accountability and safety across an AI system’s lifecycle.
This push for governance is well-founded. Public trust in government use of AI is low, acting as a brake on adoption, and international moves toward AI regulation highlight the need for proactive risk-based guardrails. Over 60 Australian government agencies recently participated in a six-month trial of generative AI (Microsoft 365 Copilot), reporting productivity gains like saving an hour per day on administrative tasks.
Leaders are urging the Australian Public Service not to be left behind in the AI race, but to embrace AI wisely, with appropriate safeguards. In short, agencies are at a crossroads: the starting gun for AI adoption has fired, but steering the course requires robust governance.
That’s why the Drupal AI Initiative matters. It’s one of the few open-source efforts actively building ethics, transparency and regulatory alignment into the foundation of AI capability, not bolting it on after the fact. Launched in June 2025 as a coordinated effort by the Drupal community, the initiative channels Drupal’s AI modules into a unified product vision with dedicated leadership and funding. It emphasises AI–human partnership and responsible innovation: helping organisations create smarter digital experiences while maintaining control and avoiding vendor lock-in.
Backed by top Drupal agencies (including Australian contributors) and over $175,000 in funding as of August 2025, a dedicated team (15+ full-time staff) is now advancing Drupal’s AI capabilities with governance “baked in.” This makes Drupal’s initiative a practical testbed for ethical, auditable government AI. But does it go far enough?
What the Drupal AI Initiative Gets Right
The initiative’s governance-minded design is a strong starting point. Rather than relying on vendor black boxes or hard-wiring closed services into an open platform, the Drupal AI Initiative provides:
- Auditability by default: Full traceability of AI-generated content, model choices and change history. In Drupal’s vision, this is part of a “comprehensive trust infrastructure” – an advanced governance framework with approval workflows, audit trails and compliance tools built in.
From a public sector lens, this kind of auditability is not a luxury but a requirement. Any AI capability touching citizen data or influencing services must be inspectable, adjustable, and defensible (a minimal sketch of what such an audit trail might record follows this list).
- No vendor lock-in: Agencies can “bring their own LLM” – choosing or self-hosting the AI models that meet their security and compliance needs. Drupal’s open architecture supports integrations with 21+ major AI providers out of the box, meaning agencies aren’t forced into one proprietary solution.
This freedom to swap or combine AI services without lock-in is critical for government compliance (e.g., ensuring data stays in approved jurisdictions) and aligns with Australia’s emphasis on technological sovereignty.
- Community-driven governance: A diverse leadership and advisory team is shaping the initiative through open decision-making and transparent funding. In fact, Drupal’s approach explicitly prioritises “community-driven innovation” guided by real-world needs over vendor roadmaps.
For the public sector, having an open-source community that includes government voices is a strength. It helps ensure features like accessibility, security controls, and audit features are prioritised from the start. (Notably, Australian public-sector tech firms such as Salsa Digital are founding contributors, helping represent government requirements in the project’s governance.)
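Here is that sketch: a provider-agnostic gateway that writes an audit entry for every generation. It is a minimal illustration in Python rather than Drupal’s actual module code, and the names (`PROVIDERS`, `AuditRecord`, `generate`) are our assumptions, not the initiative’s API.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Callable

# Hypothetical provider registry: each provider is a callable that takes a
# prompt and returns text. Real deployments would wrap a vendor SDK or a
# self-hosted model endpoint behind this same signature.
PROVIDERS: dict[str, Callable[[str], str]] = {}

@dataclass
class AuditRecord:
    """One log entry per AI generation: who asked, which model, what came back."""
    timestamp: str
    user: str
    provider: str
    model: str
    prompt_sha256: str   # hashes keep the trail inspectable without storing raw citizen data
    output_sha256: str

def generate(user: str, provider: str, model: str, prompt: str,
             audit_log: list[AuditRecord]) -> str:
    """Route the request to the configured provider and append an audit entry."""
    output = PROVIDERS[provider](prompt)
    audit_log.append(AuditRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        user=user,
        provider=provider,
        model=model,
        prompt_sha256=hashlib.sha256(prompt.encode()).hexdigest(),
        output_sha256=hashlib.sha256(output.encode()).hexdigest(),
    ))
    return output

# Usage: register a stand-in model; swapping providers changes one line, not the app.
PROVIDERS["local-stub"] = lambda prompt: f"[draft] {prompt[:40]}"
log: list[AuditRecord] = []
generate("editor@agency.gov.au", "local-stub", "stub-v1", "Summarise this page", log)
print(json.dumps(asdict(log[0]), indent=2))
```

Two design points carry the governance weight here: the audit entry is written by the gateway itself, so no integration can skip it, and hashing prompts keeps the trail inspectable without duplicating potentially sensitive content. Swapping providers changes one registry line, which is what “no vendor lock-in” means at the code level.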
From a government perspective, these attributes of the Drupal AI Initiative map closely to what official frameworks demand. Audit logs and explainability, the ability to use approved secure models, and inclusive governance are explicitly called out in policies like Australia’s AI Ethics Principles (which stress privacy, accountability and transparency). By design, Drupal’s AI effort is aligning itself with these principles and providing a platform agencies could build on confidently.
Where Vigilance Is Still Needed
Despite the strong framing, execution remains the hardest part. There are three areas where digital leaders need to stay alert:
Governance ≠ governance documentation. It’s one thing to outline ethical AI principles, another to enforce them day-to-day. The Drupal AI strategy articulates governance ambitions, but sustaining them over time will require active policies, clearly defined decision rights, and mechanisms for ongoing compliance testing.
Without continuous oversight, even well-intended community efforts can drift or dilute under competing priorities. Government users will need to ensure the promised “trust infrastructure” isn’t just a document but is backed by real operational processes (e.g. periodic audits of AI outputs and fail-safes for bias or error). In short, an open-source initiative can provide the tools, but agency leadership must still wield them diligently.
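To show what an operational process (rather than a document) might look like, here is a minimal sketch of a periodic output audit: sample a fraction of logged AI outputs and flag any that trip agency-defined checks. The single check shown is a deliberately naive placeholder; real criteria would come from the agency’s own risk assessment.

```python
import random
from typing import Callable, Optional

def unsourced_statistic(text: str) -> Optional[str]:
    """Placeholder check: flag outputs quoting figures without a citation marker."""
    if "%" in text and "[source]" not in text:
        return "statistic without a source"
    return None

# Agencies would register their own bias, privacy and accuracy checks here.
CHECKS: list[Callable[[str], Optional[str]]] = [unsourced_statistic]

def periodic_audit(outputs: list[str], sample_rate: float = 0.1) -> list[tuple[str, str]]:
    """Sample stored AI outputs and return (output, reason) pairs for failures."""
    sample_size = max(1, int(len(outputs) * sample_rate))
    flagged = []
    for text in random.sample(outputs, sample_size):
        for check in CHECKS:
            reason = check(text)
            if reason:
                flagged.append((text, reason))
    return flagged

# Usage: run on a schedule; a non-empty result should trigger human review.
print(periodic_audit(["87% of users agreed.", "Service hours are on the website."]))
```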
Alignment with Australian AI obligations. The initiative is globally focused, but public sector clients in Australia must meet numerous home-grown rules and standards. These include the federal Protective Security Policy Framework (PSPF) and cybersecurity mandates, privacy law (the Privacy Act), and AI-specific guidelines such as Australia’s Voluntary AI Safety Standard (the “10 AI guardrails” published in 2024) and the government’s AI Ethics Principles.
Community ≠ consensus. An open model is a strength, but only if government voices actively show up. Open-source governance means anyone can contribute, but it does not automatically guarantee that public-sector needs will be front and centre unless public institutions engage.
If agencies want their requirements around privacy, accessibility, security, and accountability to be reflected, they must get involved early before decisions are locked in. Governance that’s open but passive will miss the mark. It’s encouraging that companies like Salsa Digital hold a seat at Drupal’s AI governance table and advocate for government needs.
Moving forward, more public-sector stakeholders (or their technology partners) should participate in the Drupal AI Initiative’s discussions to ensure that factors such as government accessibility standards and security accreditations are considered in design decisions. Engagement can take the form of contributing code, funding, or simply providing use-case feedback and requirements. The key point is that openness only helps if you participate.
Overall, the Drupal AI Initiative provides a viable framework for ethical AI, but it’s up to each agency to operationalise it in accordance with their own obligations and to ask the hard questions before any launch. Even within a solid framework, agencies need to verify compliance and risk at every step.
AI Governance Isn’t a Document – It’s a System
Too often, “AI ethics” gets treated as a box-checking exercise or a visionary preamble in strategy documents. But real governance requires infrastructure: practical policies and procedures, audit mechanisms, and an operational design that keeps humans meaningfully in the loop. Probabilistic systems like machine learning carry very real risks, from biased outcomes to inadvertent privacy breaches or even failures in critical services. Good governance is the most effective safeguard for citizen data, public safety, and trust. Governance has to be engineered into the entire AI lifecycle – much like security or reliability – rather than pasted on at the end.
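To ground “humans meaningfully in the loop” in engineering terms: AI output enters a review state, and nothing publishes without a named approver. A minimal sketch under those assumptions; the states and field names are illustrative, not any particular product’s workflow.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class ReviewState(Enum):
    DRAFT = "draft"        # AI-generated, not yet reviewed by a person
    APPROVED = "approved"  # a named officer has taken responsibility
    REJECTED = "rejected"

@dataclass
class AiDraft:
    content: str
    model: str
    state: ReviewState = ReviewState.DRAFT
    approver: Optional[str] = None

def approve(draft: AiDraft, officer: str) -> None:
    """Record who signed off, keeping accountability attached to a person."""
    draft.state = ReviewState.APPROVED
    draft.approver = officer

def publish(draft: AiDraft) -> str:
    """Refuse to publish anything a human has not explicitly approved."""
    if draft.state is not ReviewState.APPROVED or not draft.approver:
        raise PermissionError("AI-generated content requires a named human approver")
    return draft.content

# Usage: the gate fails closed; skipping approve() raises rather than publishes.
draft = AiDraft(content="Office hours update...", model="stub-v1")
approve(draft, "content.officer@agency.gov.au")
print(publish(draft))
```

The point of the pattern is that the system fails closed: the human step is enforced by the gate itself, not by a policy memo.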
Notably, Australian government agencies are beginning to build such governance systems around their AI efforts. For example, Services Australia, one of the largest federal agencies, released an Automation and AI Strategy that explicitly commits to “robust and responsive governance, assurance and decision-making” and the appointment of an AI Accountable Official, in line with the whole-of-government framework.
The Australian Taxation Office (ATO), after being the subject of a detailed audit of its use of AI, agreed to all of the Australian National Audit Office’s recommendations to strengthen AI governance. The ATO is now working to align its AI systems with ethical principles, create AI-specific policies and risk controls, and improve transparency around its AI-driven decisions. These examples show that while strategic vision is important, it must translate into concrete governance measures on the ground.
What Government Leaders Should Do Next
If you’re a CIO, digital director or policy owner considering AI adoption in the public sector, here are some practical next steps:
- Don’t assume “open” means “compliant.” Leverage open-source innovations like Drupal AI, but validate their claims around audit, explainability and security against your agency’s risk posture and legal obligations. Open technology can give you a head start, but you are responsible for ensuring it checks all the boxes.
- Push for participation. If a platform like Drupal is forming its governance now, that’s the time to get involved, not after implementation. By engaging early (through contributions, pilots or dialogue), agencies can influence features to better meet public sector needs. Don’t be shy about joining open-source initiatives or industry forums on AI. Your input can ensure government standards are embedded from the outset. Reach out to the communities or to partners like us if you have questions on how to effectively contribute.
- Treat AI like infrastructure, not just a feature. Build governance into every phase: procurement, onboarding, monitoring, and incident response, just as you would for any critical IT infrastructure. This means establishing clear accountability (who “owns” the outcomes of an AI service), putting in place monitoring and contingency plans (for when an AI system misbehaves or needs tuning), and baking compliance checkpoints into the AI development process.
In practice, this could involve updating procurement checklists to include AI risk criteria, requiring ethics reviews during project gating, and ensuring there’s an incident response plan for AI-related issues. By treating AI solutions as part of your long-term architecture, you move beyond the hype and manage AI as a sustainable capability; one way to make those checkpoints executable is sketched below.
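A minimal sketch of such a gate, encoding the checkpoints as data and blocking deployment until each has a named owner and linked evidence. The checkpoint names echo the paragraph above and are illustrative, not an official checklist.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Checkpoint:
    name: str
    owner: Optional[str] = None     # who is accountable for this outcome
    evidence: Optional[str] = None  # link to the review, plan or assessment

# Example gate; agencies would substitute their own procurement and assurance criteria.
GATE = [
    Checkpoint("AI risk criteria assessed during procurement"),
    Checkpoint("Ethics review completed at project gating"),
    Checkpoint("Monitoring and tuning plan in place"),
    Checkpoint("Incident response plan covers AI failure modes"),
]

def ready_to_deploy(gate: list[Checkpoint]) -> bool:
    """Deployment proceeds only when every checkpoint has an owner and evidence."""
    missing = [c.name for c in gate if not (c.owner and c.evidence)]
    if missing:
        print("Blocked; incomplete checkpoints:", "; ".join(missing))
        return False
    return True

# Usage: with nothing signed off yet, the gate reports what is outstanding.
ready_to_deploy(GATE)
```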
In conclusion, ethical AI isn’t a slogan – it’s a systems challenge. Australia’s public sector is moving quickly from high-level principles to on-the-ground enforcement: new government policies and standards are raising the bar for transparency, accountability and safety in AI.
Open-source initiatives like Drupal’s demonstrate that practical solutions can be built with ethics in mind, giving governments a viable path to innovate without compromising trust. The opportunity now is to marry these two threads, the top-down governance requirements and the bottom-up technology capabilities, into a coherent approach.
With vigilance, collaboration, and a willingness to invest in governance infrastructure, agencies can harness AI to improve services while maintaining the confidence of the people they serve.
