
How to integrate AI into health and safety


Bridget Leathley looks at how artificial intelligence is being used in health and safety, with practical examples and advice on how to integrate it into your activities.

In an article for RoSPA in September 2023, Louis Wustemann explained how AI was evolving to be more effective in occupational health and safety. This is a fast-moving area, so a year on, what’s happening?

In response to some of the fears about the use of AI, the EU AI Act came into force in August 2024, with some strict requirements on how organisations in the European Union – or working with organisations in the EU – use AI.

The UK's 2022 'pro-innovation' approach is less prescriptive, supported by lengthy reports setting out opportunities and loosely defined challenges. The AI Safety Institute was set up as part of the Department for Science, Innovation and Technology to address how best to control the risks of AI, but at the time of writing it has been silent since the General Election. After a failed Private Member's Bill on the topic in 2023, the TUC published a proposed AI (Regulation and Employment Rights) Bill in April 2024, which would focus on supporting the development of AI systems that are safe, secure and fair for workers.

This leaves organisations unsure of what to do in the short term. Avoiding AI because it’s too complicated isn’t the answer. Organisations that don’t make good use of AI will be replaced by those that do. This applies just as much to safety professionals as elsewhere, so we need to work with the technology, not let it work against us.

Computers are better at remembering facts than humans, but despite AI, we remain better than computers at creativity. We need imagination to perceive the possibility of a hazardous outcome from circumstances that haven’t previously occurred, or to find original solutions to old problems. AI can support human creativity by reminding us not just of facts (like a traditional database), but by identifying and explaining links and relationships between facts that we might not have considered. AI and people working together will be the most effective at solving the problems that face us.

We’ll look at two applications that aren’t futuristic visions, but are being used by companies to improve safety management now.

AI in computer vision

The previous article mentioned the use of computer vision, plugged into closed-circuit TV systems, to identify near misses and hazardous situations. Computer vision (CV) can look at still or moving images and classify shapes as people, vehicles or equipment, and importantly, classify activities as hazardous, such as when a person is too close to a vehicle.

While you won’t be watched by AI when you walk into an M&S shop, the retailer’s safety team has been working with CV specialist ProtexAI to drive safety improvements across its distribution network. Alice Conners, HSE Specialist at Marks and Spencer, has overseen the project since it started in 2022 and explains the range of events the technology can detect and report on.

“We can see where there are problems with vehicles going too fast, or driving the wrong way in a one-way system, where pedestrians get too close to vehicles, and even where a fire exit might be blocked.”

The safety team at M&S use this information to improve systems and training. The project has gone so well that they are adding more use cases to further improve safety.

While ProtexAI provides information after triggered events to inform workplace and process design, other CV systems such as Spark Cognition and Cognita AI provide real-time alerts. One of Cognita AI’s uses is to stop operations if it detects a human in a prohibited space.

Spark Cognition offers organisations a mix of data for tracking trends and real-time alerts. Real-time alerts can be fed to drivers to prevent an accident, or passed to supervisors so that they can detect fatigue in drivers and suggest a break.

Another AI CV solution, sensingfeeling.io, provides solutions for urban spaces and traffic routes. Information collected can be used to improve town planning (for example, re-siting crossing places) or to provide a rapid response to crowding.

AI for improving contractor risk assessments and method statements (RAMS) for a construction management company

Simon Cox is the Director of Sustainability and SHEQ for MWH Treatment, a leading solution provider with a 200-year legacy of design and delivery services for the UK water sector. One of the challenges faced by MWH Treatment – as with most client organisations – has been how much resource to put into reviewing RAMS. Too little, and an important control might be missed, resulting in an accident. But a rigorous approach can be very time-consuming and disempowers the supply chain, which should be responsible for planning safe work.

Simon has long been an advocate of using the best technology available, having improved reporting for a previous employer by introducing a cloud-based system.

“I don’t like inefficiency,” explains Simon. “People should be able to get on with the job they are trained to do, not spend time at a computer screen.”

He was very keen, therefore, to assess the benefits of using AI as a tool for improving RAMS. MWH Treatment use a product called Intuety, which pairs human judgement with AI suggestions. As Simon explains:

“A professional still writes the RAMS, and when the AI makes suggestions for improvements, it’s still a human being that responds to those suggestions and decides what improvements to make. The final acceptance of the RAMS is from a site manager, so there is plenty of human oversight to the process.”

Simon reports that the response from the contractors has been good. They find the feedback useful, and they can submit their RAMS directly to Intuety and act on feedback before submitting to MWH Treatment.

“It means our teams can spend less time booting up, and more time boots on,” explains Simon, referring to the ability to spend less time in front of a computer, and more time out on site.

How to do it

Simon and Alice have both learned lessons from what’s worked in these AI projects. For both organisations, keeping a human in the decision loop was essential from day one. At M&S, a safety team member decides how to respond to the data collected by computer vision, whether by getting new equipment, changing workplace layouts, or updating training. At MWH Treatment, professionals still write and review the RAMS.

However we use AI, this principle should be key. We must not allow AI to dismiss a worker or reject a contractor on the basis of an automated scoring system. If an individual or contractor is flagged up by the AI, we need to look at the reasons why. Perhaps a contractor uses a more visual RAMS which is easier for the workers to use, but harder for the AI to read. Perhaps a worker takes on the harder tasks that other workers refuse to do.

The ‘humans in the loop’ must be trustworthy. Alice offers this advice: “Make sure you have a good relationship with your workers before you start. Demonstrate before you introduce new tech that you are there for their safety. This builds trust.”

Simon acknowledges that for MWH Treatment there was not enough support in the early stages for people to understand what the product was. “For future developments, I’d plan a comms strategy around it, providing more support in understanding how to use it efficiently and how to drive behaviours in the supply chain.”

Simon warns people to manage their expectations: “Don’t expect AI to do more than it can do. I’d like it to do a lot more, and I’m always talking with Intuety about what else it could do!”

At M&S, Alice explains how they started small, with a single use-case. “Once you start using AI, you will still need to teach it. We had to tweak some of the rules in the first few weeks to avoid false alerts. Once we had results that made sense, we could increase our use of ProtexAI.”

End thought

Even the experts can’t agree on where AI will take us, and how quickly. Some predict rapid revolution. Maybe we’ll all be wearing augmented reality glasses by 2030, with hazards flagged up in our visual field, and a voice-controlled bot providing safety advice into a headset.

Others believe that legislation, public fears, energy demands and bottlenecks in AI chip production will restrict progress. Concerns about cyber security make people nervous about using a technology they don’t fully understand.

Nothing about the future of AI is inevitable. But we can make one prediction: organisations that don’t make the best use of available technology will fall behind, with increasing costs and lower productivity than their competitors, and possibly higher workplace accident rates.

Bridget Leathley

With a first degree in computer science and psychology, Bridget Leathley started her working life in human factors, initially in IT and later in high-hazard industries. After completing an MSc in Occupational Health and Safety Management, she moved full-time into occupational health and safety consultancy, training and writing.
