
New ST4S module aims to earn your trust for safer use of AI

Artificial Intelligence (AI) is giving schools a whole new set of issues to think about. There are risks
and rewards – but how do you know what to trust? That’s where the Ministry of Education’s
new ST4S AI Module comes in, with the goal of ensuring that AI technologies deployed in schools
are safe, ethical, and transparent.

Artificial Intelligence is steadily moving from the periphery to the heart of many education platforms. From AI-driven chatbots to content recommendation engines and adaptive learning tools, this technology is helping schools personalise learning, streamline administration, and explore new ways to engage students.

However, alongside that promise comes growing concern – particularly around privacy, safety, and transparency. For example, AI can generate unexpected content, repurpose student data in opaque ways, or introduce new risks that teachers and schools aren’t equipped to spot.

Maybe your school is considering a new product that uses AI. Or perhaps AI has quietly been added to something you already use. How do you know if it’s safe? How would you tell which products are using AI responsibly?

These are just a few of the questions your school or kura needs to consider.

The good news: the ST4S initiative includes an AI module

To address this, the Safer Technologies for Schools (ST4S) initiative has taken a significant step forward. In late 2024, it released its most detailed update yet: a comprehensive AI module that introduces new expectations and controls for vendors offering AI-enabled services to schools in Australia and New Zealand. This gives schools and vendors a practical, nationally coordinated way to evaluate the risks and responsibilities that come with AI in education.

Why the AI Module Now?

The ST4S AI Module emerged in response to concerns from schools, government agencies, and education authorities about AI-related risks. It was piloted in July 2024 and finalised in December, aligning with the Australian Ministerial Framework for AI in Education and other emerging global standards.

Its goal was to ensure that AI technologies deployed in schools are safe, ethical, and transparent – particularly when used by or with children. Crucially, this is not about blanket bans or rigid restrictions. Instead, the ST4S AI Module introduces a clear set of criteria that vendors must meet to achieve a compliant assessment outcome.

Three risk areas the AI Module focuses on

While the full module spans 15 sub-domains, the framework centres on three interlocking areas of risk that every vendor should understand. Let’s briefly explore these:
1. Privacy and data use is one of the most urgent concerns in educational AI. Many large language models and generative AI tools rely on user interactions to refine or expand their capabilities, which raises red flags about how student and teacher data is handled. The ST4S module looks at whether a product collects, stores, or reuses any personal data – de-identified or otherwise – and whether that data is being used to train or optimise models. It also examines whether schools are given clear information about these practices and the ability to opt out where appropriate. Transparency about how AI learns and how long it retains information is no longer optional.

2. Functionality and safety are assessed to determine whether the AI behaves in a way that aligns with school expectations and safeguards. If a product includes features like chatbots, image or content generation, or decision-making capabilities, the module considers whether these could lead to unsafe, biased, or misleading outputs. ST4S reviewers examine whether the service has been tested under realistic school conditions, what guardrails are in place, and how developers have mitigated the risk of hallucinations or inappropriate content. The focus here is not on what AI can do in theory, but on what it does – and how reliably and safely it does it – in real classrooms.

3. Governance and control are perhaps the most complex but essential aspects of AI risk management. The AI module requires vendors to show that someone in the organisation is responsible for AI oversight and that clear processes are in place for incident management and accountability. It asks whether humans are involved in reviewing or moderating AI-generated outputs, and whether there’s a documented escalation path if something goes wrong. This ensures vendors are not only building powerful tools, but also maintaining the structures needed to use them responsibly in education settings.

That’s a very brief tour of the AI module – but what does it mean for education technology vendors and schools?

Setting a new bar for trust in school technology

This isn’t just a compliance exercise. The AI module sets a new bar for trust in school technology. It sends a clear message: if you’re embedding AI into your product and marketing it to schools, you need to demonstrate how you’re protecting student privacy, supporting educator control, and reducing harm.

For vendors, this may mean revisiting privacy policies, updating documentation, or improving technical safeguards. It may mean pausing to consider not just what the AI can do, but how and where it might go wrong.

For schools, the module means clear reports on the security standards of products they’re considering, helping to guide their decisions.

It’s also worth noting that not every AI tool or use case is currently in scope for ST4S. High-risk and emerging technologies – like facial recognition or biometric tracking – are excluded for now. But ST4S has indicated that the AI Module will be updated regularly as industry standards and best practices evolve, for example to align with the Office of the Privacy Commissioner’s biometric code of practice, once published.

A final word for vendors: Don’t wait to get ready

If your product uses AI, now is the time to engage with the ST4S initiative, even if you haven’t yet been asked to complete the assessment. Start by reviewing your hosting, testing, and privacy practices. Consider how you describe your AI functionality to schools and kura. And think beyond compliance: how can your approach to AI build long-term trust?

In education, trust matters. The new ST4S AI Module offers a framework to earn it.

Article by the Ministry of Education’s Digital Services Team.



About Safer Technologies 4 Schools

This is an initiative that aims to take the guesswork out of choosing the right technology for your school.

Led by the Ministry of Education and its Australian equivalent, ST4S gives you clear guidance on whether software products for schools meet strict privacy and security standards. The products that do meet the standard can display the trusted ST4S badge.

It makes your decision-making process simpler with detailed reports on how each product manages and protects data. These reports help you make informed choices, save you time and give you peace of mind.

Find out more at st4s.edu.au


INTERFACE June 2025
