Human-Led, AI-Enabled: Key Takeaways for Safety Leaders
Published: April 23, 2026
Safety leadership has always required the ability to manage complexity: balancing technical, human, and regulatory demands while keeping people out of harm's way. The arrival of AI in that space doesn't change the fundamental obligations, but it does change the tools and, in some respects, the pace at which risk can be surfaced and acted upon.

The Next Group's Human-Led, AI-Enabled event brought safety and HR leaders together in Melbourne to explore what responsible, practical AI adoption actually looks like in complex, high-risk environments. The insights from Mark Lipman, Head of Enterprise Risk at Qantas, were particularly instructive for safety professionals, while Kelly Brough from Accenture and Davin d'Silva from Coles added vital organisational context.
AI's Genuine Advantage in Safety: Pattern Recognition at Scale
The case for AI in safety is practical. Qantas receives 130,000 safety reports every year. No human team can reliably identify trends across that volume, particularly across time, geography, and operational domain simultaneously.
"AI is really good at looking at trends across an organisation, surfacing them,
and bringing them to people's attention so they can be actioned.
The move from 'people telling us things are going on' to 'action' becomes much faster."
AI’s most obvious and appropriate safety function is pattern recognition, signal surfacing, and the acceleration of human decision-making and process optimisation. And rather than becoming redundant, the analyst responsible gains amplified capacity for more meaningful work.
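To make that concrete, here is a minimal sketch of what trend surfacing over report data can look like. Everything here is an illustrative assumption: the report fields, categories, and thresholds are invented, and Qantas has not described its system at this level of detail.

```python
from collections import Counter
from datetime import date

# Hypothetical report shape: (date, category, free-text summary).
reports = [
    (date(2025, 11, 3), "ground_handling", "tug reversed without spotter"),
    (date(2025, 11, 4), "ground_handling", "baggage loader near miss"),
    (date(2025, 11, 5), "fatigue", "crew reported extended duty"),
    # ...a real system would hold tens of thousands of these
]

def surface_trends(reports, window_days=30, baseline_days=365, lift=2.0):
    """Flag categories whose recent reporting rate runs well above their
    long-run baseline rate - a crude stand-in for trend analysis."""
    latest = max(d for d, _, _ in reports)
    recent = Counter(c for d, c, _ in reports if (latest - d).days < window_days)
    baseline = Counter(c for d, c, _ in reports if (latest - d).days < baseline_days)
    flagged = []
    for category, count in recent.items():
        recent_rate = count / window_days
        baseline_rate = baseline[category] / baseline_days
        if baseline_rate and recent_rate / baseline_rate >= lift:
            flagged.append((category, recent_rate / baseline_rate))
    return sorted(flagged, key=lambda pair: -pair[1])

for category, ratio in surface_trends(reports):
    print(f"{category}: reporting at {ratio:.1f}x its baseline rate")
```

The point is not the arithmetic; it is that a machine can run this comparison continuously, across every category at once, which is exactly the move from "people telling us things are going on" to "action".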
At Qantas, a separate application uses a bespoke model to sit over safety manuals and make them conversationally accessible. A team member in the field can ask "when do I need to do a risk assessment?" and get an immediate, accurate answer, rather than navigating a regulatory document written primarily for an auditor. The productivity and engagement gains are significant.
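For the curious, the retrieval step can be sketched very simply. The section text below is invented, and the word-overlap scoring is a deliberate simplification; Qantas's bespoke model is not public, and a production system would pair embedding-based retrieval with a language model rather than counting shared words.

```python
import re

# Invented manual content, pre-split into titled sections.
MANUAL_SECTIONS = {
    "Risk assessments": "A risk assessment is required before any task "
                        "involving plant, working at height, or any change "
                        "to standard operating procedure.",
    "Incident reporting": "All incidents and near misses must be reported "
                          "within 24 hours via the safety reporting system.",
}

def tokens(text: str) -> set[str]:
    """Lowercased words only, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def best_section(question: str) -> str:
    """Return the section whose wording best overlaps the question."""
    q = tokens(question)
    title = max(MANUAL_SECTIONS,
                key=lambda t: len(q & tokens(t + " " + MANUAL_SECTIONS[t])))
    return f"{title}: {MANUAL_SECTIONS[title]}"

print(best_section("When do I need to do a risk assessment?"))
```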
Computer Vision and the Limits of AI in the Field
Mark described another application: computer vision deployed over security cameras in Qantas's freight shed, monitoring proximity between plant and pedestrians and flagging entries into designated no-go zones.
The system began recording a spike in no-go zone entries. Without further investigation, a safety leader might have concluded there was a behavioural compliance issue. The actual cause was a machine breakdown: maintenance personnel were legitimately accessing the zone more often.
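It is worth pausing on what such a system actually "knows". The sketch below covers only the detection step, assuming a hypothetical upstream vision model that already yields labelled ground positions per frame; the zone coordinates and labels are invented. The code can count entries. Nothing in it can explain them.

```python
# Invented shed coordinates for a rectangular no-go zone.
NO_GO_ZONE = [(0.0, 0.0), (10.0, 0.0), (10.0, 6.0), (0.0, 6.0)]

def in_zone(x, y, polygon=NO_GO_ZONE):
    """Standard ray-casting point-in-polygon test."""
    inside = False
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

# One frame's worth of detections from the hypothetical upstream model.
frame = [("pedestrian", 4.0, 3.0), ("forklift", 12.0, 5.0)]
for label, x, y in frame:
    if in_zone(x, y):
        print(f"ALERT: {label} in no-go zone")  # why? only a human can say
```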
"AI can identify the hazard. What it cannot do is add context. Humans always have to add that context.
AI supports judgment, it never replaces it."
This was a perfect illustration of where the human-AI boundary must be understood and respected. Computer vision can observe. It cannot interpret. It cannot understand the operational context that a supervisor with experience in that environment carries. The value of that knowledge, what Kelly called the ability to "walk on site and hear and smell what is happening with the machinery", is not replicable.
Where AI Should Not Be Used in Safety
As Mark attested, AI is not a suitable tool for safety-critical decision-making in real time. Citing research on AI in financial contexts, where AI has been shown to be least reliable precisely when it is needed most, he applied the same logic to safety controls. In a crisis, AI can amplify risk rather than reduce it.
This is a reason to be precise about where AI belongs in the safety management system. Its value lies in pattern-surfacing, information retrieval, and translation; it is not capable of automated real-time safety control.
Governance: What Good Looks Like
Mark shared Qantas's explicit governance position on AI: generative AI will not be used to make or assist in making employment decisions of any kind. This is a statement of principle, communicated clearly and reinforced through governance frameworks.
For safety leaders, the parallel conversation is about where the human remains in the loop for safety decision-making, and ensuring that position is clearly documented, not assumed. Culture frameworks, existing compliance obligations, and fatigue and risk management policies do not become irrelevant because an AI tool is introduced. They provide the context into which AI is deployed.
Key Takeaways for Safety Leaders
- Use AI where it adds genuine value: trend analysis, signal surfacing, and making technical information accessible are strong starting points.
- Invest in a supervised, bespoke model for sensitive safety data: commercial, public LLMs that use your data for training present governance and confidentiality risks.
- Never remove the human from contextual interpretation: AI identifies patterns; your experienced leaders interpret them. Protect that boundary.
- Document where AI will not be used, not just where it will: safety governance should explicitly define the no-go zones for AI application.
- Be cautious about AI in real-time, high-consequence decisions: AI is least reliable when the stakes are highest. Design your systems accordingly.
The Safe Step works with safety leaders across Australia to build capable, resilient HSE teams and functions. If you'd like to continue the conversation, we'd welcome the connection.