
LAB³ Insights: NZ Government’s Guidance for GenAI Provides Value For All Organisations
Beyond The Public Sector: The NZ Government’s GenAI Guidance is timely for all organisations aiming to achieve business value from AI Adoption
Given LAB³’s proven capabilities in implementing Azure’s AI services, we thought it would be valuable to highlight the New Zealand Government’s recently released Responsible AI Guidance for the Public Service: GenAI, which aims to support the public service in exploring generative AI systems “in ways that are safe, transparent and responsible”.
Whilst the Guidance was designed for New Zealand public sector agencies, it is broadly applicable to almost any organisation wishing to take advantage of the rapid advances in Generative AI. Based on our experience, we have further insights to share.
Our AI Credentials & Experience
The LAB³ experience across multiple industries tells us that succeeding with GenAI is not achieved by technical brilliance alone; successful AI adoption across an organisation results from a holistic approach that includes executive sponsorship, strong governance and ethics, clear communications, and a commitment to skills. This approach should be tailored to where the organisation is in its AI journey (as depicted in Figure 1).
Recently, LAB³ became one of the first Microsoft partners globally to attain two coveted AI specialisations, with Microsoft recognising that LAB³ has exceeded performance requirements in delivering innovative AI projects with multiple clients. We have developed an AI Adoption at Scale framework that helps your organisation achieve business value, whatever stage of the journey it is at.
For those wishing to learn more about our LAB³ AI Adoption at Scale framework, we recommend you read our AI White Paper.
The NZ Government’s Guidance & LAB³ Thoughts
Let’s turn to some of the key points arising from the Government’s Guidance:
- Appoint a senior official to oversee the safe and secure adoption of GenAI. This is very sound. Such a role is essential for maintaining public and employee trust in AI systems, as algorithms themselves cannot be held accountable for decisions.
- Conduct impact assessments: AI-related risks vary depending on the context and nature of the problem being addressed. As AI technologies become more common, their definitions and applications evolve. However, where risks are manageable, citizens should benefit from increased productivity within the public service.
- Ensure human oversight: This is particularly important as Agentic AI (as explained in our AI Primer) gains rapid adoption, promising greater autonomy. However, algorithms cannot be held accountable for their decisions. This is especially relevant in government settings, where GenAI systems process unstructured data such as human language, which is inherently ambiguous and uncertain. No AI system can be perfect in every scenario.
- Security: When integrating GenAI, whether through public tools or within existing enterprise software (including SaaS), it is crucial to assess security risks. Proper safeguards, such as data access controls, encryption, and compliance with security standards, mitigate the risk of inadvertently exposing confidential or sensitive information. Robust security measures support responsible and secure GenAI adoption while maintaining trust and compliance.
- Privacy: Government entities routinely handle personally identifiable information (PII), which can often appear in system logs. For example, when using public GenAI tools or government-provided chatbots, citizens may share their name, address, or other details, which could be recorded in conversation logs. Implementing automated PII detection can help minimise privacy risks.
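To make the automated PII detection point concrete, here is a minimal sketch of redacting PII from conversation logs before they are stored. The regular expressions below are illustrative assumptions only, not production patterns; real deployments would typically use a dedicated service (for example, the PII detection feature of Azure AI Language) rather than hand-rolled regexes.

```python
import re

# Illustrative patterns only; real PII detection needs far broader coverage
# (names, addresses, ID numbers) and is better handled by a managed service.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b(?:\+64|0)[2-9]\d{7,9}\b"),  # rough NZ-style numbers
}

def redact_pii(text: str) -> str:
    """Replace detected PII spans with a labelled placeholder before logging."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text
```

A chatbot could call `redact_pii` on each message before writing it to its conversation log, so that details a citizen shares in passing never reach persistent storage in the clear.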
How LAB³ Services Align with the NZ Government’s Guidance
Wherever your organisation may be on its AI adoption journey, the LAB³ AI Adoption at Scale framework aligns with the NZ Government’s Guidance and covers the whole journey. We start by helping identify a pilot use case and then scale this more broadly across your organisation, including designing and deploying AI Landing Zones, and ensuring appropriate AI governance.
LAB³ Business Envisioning Workshops
- If you are only just beginning your AI Adoption journey, our Business Envisioning Workshops help organisations take the right first step in AI adoption.
- Our experienced AI consultants collaborate with your business and IT stakeholders to deeply understand the problem domain, uncover innovative solutions, and assess AI opportunities from multiple perspectives—including risk identification and mitigation for Responsible AI.
- LAB³ Business Envisioning Workshops help you understand how to achieve desirable business goals while balancing feasibility and the guardrails required.
AI Landing Zones by LAB³
- AI Landing Zone: Designing AI architectures that support the NZ Government’s Guidance (including user security, API security, and model monitoring) can be complex and time-consuming for individual projects. LAB³ simplifies this with our AI Landing Zone Accelerator, which deploys a pre-configured, code-driven, secure foundation that enables you to deliver AI solutions while supporting compliance.
- Beyond security, the AI Landing Zone includes cost monitoring, enabling agencies to track and control expenditures, ensuring alignment with the value delivered.
- Deployed using pre-built automation in the Azure Cloud, LAB³’s AI Landing Zone Accelerator supports the scalability, governance, and security you need for sustainable and compliant AI adoption.
LAB³ Establishes Your AI Centre of Excellence (CoE)
- Once a pipeline of desirable, feasible, viable, and responsible AI initiatives is established, LAB³ ensures Responsible AI guidelines are actioned across design, implementation, deployment, and ongoing evolution in production by establishing an AI CoE within your organisation.
- This ensures the consistent application of Responsible AI principles, including bias monitoring, accuracy tracking, and governance. The CoE provides expertise, structure, and best practices for project teams to oversee AI delivery and production usage throughout its lifecycle.
Getting Started and Scaling AI with LAB³
The possibilities for achieving business value with AI are exciting but, as many organisations have discovered, AI adoption is challenging. You are welcome to connect with our AI experts for more information about how you can action a proven path forward.