Robot Workers

AI’s role in workplace safety

A look at the possibilities (and limitations)

Ever since OpenAI released ChatGPT in November 2022, the buzz around artificial intelligence has built to a fever pitch. The possibilities of AI seem boundless, inspiring reactions ranging from awestruck optimism to apocalyptic nightmares (especially after hundreds of technology leaders signed a public statement in May 2023 warning of AI’s existential threat to humanity).

The truth is, AI isn’t a single tool – it’s a general-purpose technological advancement along the lines of electricity or the internet, with similarly world-changing potential, says Cam Stevens, CEO of the Pocketknife Group, a consultancy focused on the intersection between technology and workplace safety.

“AI is an umbrella term that describes a field of computing that’s dedicated to creating systems that are capable of performing tasks that would normally require some form of human intelligence,” Stevens explains. “It’s one of the technology mega-trends that are shaping the future of work.”

That includes the future of workplace safety.

How is AI being used now?

What separates AI from old-fashioned computer programs is its ability to learn, adapt and respond with some degree of autonomy.

Still, AI isn’t really new. For decades before ChatGPT blew up, AI was quietly helping us plan driving routes with GPS, guarding our smartphones with facial or fingerprint recognition, and cleaning up our spelling in texts and emails.

But in recent years, advances and investment in AI have led professionals in every field, including occupational safety and health, to explore how the technology could revolutionize their work.

The result has been a profusion of innovative health and safety applications – from robotic exoskeletons that help prevent musculoskeletal injuries and smart helmets that can monitor vital signs and working conditions, to virtual-reality safety training. But at the moment, most of these applications are still in the experimental stage or in small-scale use.

“There’s a lot of promise in the emerging technologies area, but generative AI is what everyone has access to and is the primary form used in the workplace,” says Jay Vietas, chief of the Emerging Technologies Branch at NIOSH. “You can ask it to write a health and safety plan for your area. You can ask it to tell you what the risks are with respect to electrical safety or how to design a lockout/tagout program, for example.

“You could argue that it’s just a Google search on steroids. However, we still need safety and health professionals to go back and evaluate what has been provided to you to ensure it’s appropriate and applicable to your particular area.”
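For the curious, here is what that might look like in practice: a minimal Python sketch, using the OpenAI software development kit, of the kind of lockout/tagout request Vietas describes. The model name and prompt wording are illustrative assumptions, and the output is exactly what he says it is: a draft for a qualified professional to verify.

from openai import OpenAI

# A minimal sketch, assuming the OpenAI Python SDK (pip install openai)
# and an OPENAI_API_KEY environment variable. The model name and prompt
# are illustrative, not a recommendation.
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any current chat model would do
    messages=[
        {"role": "system",
         "content": "You assist a certified safety professional. "
                    "Anything you draft will be reviewed before use."},
        {"role": "user",
         "content": "Draft an outline for a lockout/tagout program "
                    "for a small metal-fabrication shop."},
    ],
)

# The output is a starting point only. As Vietas notes, a safety and
# health professional still has to evaluate it.
print(response.choices[0].message.content)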

What’s on the horizon?

Other AI applications that Stevens sees coming into wider use are computer vision and natural language processing.

Computer vision can take advantage of existing closed-circuit TV cameras to monitor safe working practices and alert workers to hazards, such as potential human-forklift interactions on a factory floor.

“We train machine learning algorithms, which are basically identifying the same patterns that we as humans would be looking for, but without us needing to be there,” Stevens says. “Machine learning algorithms can be applied across thousands of hours of footage, identify patterns and then provide us with insights that we can then use to take action.”
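As an illustration of that pipeline, the Python sketch below runs an object detector over footage and flags frames where a person and a forklift come close. It assumes the ultralytics detection library and a model custom-trained with “person” and “forklift” classes (standard pretrained models don’t include forklifts); the weights file, video file and pixel threshold are hypothetical placeholders.

from ultralytics import YOLO

# A sketch only: assumes a custom-trained model, because off-the-shelf
# models have no "forklift" class. File names and threshold are made up.
model = YOLO("site_model.pt")

def center(xyxy):
    x1, y1, x2, y2 = xyxy
    return ((x1 + x2) / 2, (y1 + y2) / 2)

for result in model("loading_dock.mp4", stream=True):  # frame by frame
    people, forklifts = [], []
    for box in result.boxes:
        label = result.names[int(box.cls)]
        point = center(box.xyxy[0].tolist())
        if label == "person":
            people.append(point)
        elif label == "forklift":
            forklifts.append(point)
    # Flag any person within roughly 200 pixels of a forklift. In a real
    # deployment this distance would be calibrated per camera.
    for px, py in people:
        for fx, fy in forklifts:
            if ((px - fx) ** 2 + (py - fy) ** 2) ** 0.5 < 200:
                print("ALERT: possible human-forklift interaction")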

Natural language processing has a wide range of uses that could be helpful to safety pros, such as recording meetings or coaching conversations – with consent, of course – and summarizing, taking notes, interpreting the tone and dynamics of the interactions, or providing translation on the spot.

“The ability to have real-time language translation of health and safety or work-based information, typically using a smartphone, is critically important for organizations that have a multilingual workforce,” Stevens says.
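Here’s a rough sketch of that translation use case in Python, assuming the Hugging Face transformers library and one of its publicly available English-to-Spanish models. The notice text is invented for illustration.

from transformers import pipeline

# A minimal sketch, assuming the transformers library
# (pip install transformers) and a publicly available
# English-to-Spanish model.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")

notice = ("Hard hats and high-visibility vests are required "
          "beyond this point. Report all near misses to your supervisor.")

print(translator(notice)[0]["translation_text"])  # the Spanish version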

What’s in the future?

Potential uses of AI in safety are so wide ranging that they’re hard to predict. “It was very difficult to say exactly how you were going to use the internet when it first came out,” Stevens notes.

He likes to think future AI solutions will empower individual workers to receive “hyper-personalized” safety training in whatever form optimizes their learning (a Spanish-language comic book, for example) and to make well-informed safety decisions.

“I think the real power will be when we start getting artificial intelligence solutions in the hands of frontline workers,” Stevens says, “where we can support them to obtain the right information at the right time so that their decision-making is augmented with everything they need at their fingertips.”

What are some barriers and risks?

AI has dozens of safety applications, “ranging from quicker and better analysis of worksite conditions, ergonomics, hazards and so on – all the way to continuous monitoring of and adjustment to the work interface and daily decision-making,” says John Dony, vice president of workplace strategy at the National Safety Council.

So, what barriers and risks stand between safety pros and all of this potential?

Cost: One of the reasons generative AI is so commonly used among safety pros is that it’s typically either low cost or free, Vietas says. The issue isn’t only the monetary cost of other AI tools, but also the resources needed to program, customize and implement them, as well as train workers on their use.

“But as the computer power continues to increase, and then with the amount of investment happening in artificial intelligence systems, I believe the cost (of the other AI tools) will become more reasonable in the near future,” Vietas says.

Lack of high-quality data: As they say in computer science, “Garbage in, garbage out.” And, unfortunately, much of the health- and safety-related data currently available to train AI systems is low quality, Stevens notes.

“It’s typically incomplete,” he continues. “It may contain quite significant bias. It may not be robust, accessible, stored, secure, adequately protected, de-identified, private. No matter how sophisticated these AI technologies and tools get, if we’re providing and training these tools with poor-quality health- and safety-related data or work data, it’s ludicrous to think we’re going to get a good outcome.”
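What does screening for “garbage in” look like? Below is a minimal sketch using the pandas data-analysis library against a hypothetical incident-report export; the file and column names are placeholders, since real schemas vary by organization.

import pandas as pd

# A sketch of basic data-quality checks before incident records are used
# to train anything. "incident_reports.csv" and the column names are
# hypothetical placeholders.
df = pd.read_csv("incident_reports.csv")

# Completeness: share of missing values in each key field.
print(df[["date", "location", "injury_type", "description"]].isna().mean())

# Duplicates: identical reports inflate whatever patterns they contain.
print(df.duplicated().sum(), "exact duplicate rows")

# A rough bias signal: reporting volume by site. Wildly uneven counts may
# reflect reporting culture rather than actual risk.
print(df["location"].value_counts())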

Cybersecurity and privacy: “Many AI tools that are widely available for free use also open up security concerns for organizations with proprietary or personally identifying information,” Dony says. “Purchasing secure, internal-only versions is – at present – the purview of larger organizations and/or those who are on the early adoption side of technology.”

Possible bias and inequity: Because generative AI draws on existing data sets, it reflects the biases and stereotypes of the humans who created that content, raising the risk of inequitable outcomes. “An AI system can be designed in one environment for one group of people, and that could work out to be very successful,” Vietas notes. “If, then, you decide to try and put that into a new environment with entirely different demographics of workers, you shouldn’t expect to get the same outcome.”
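One rough way to act on that warning is to compare a system’s outcome rates across worker groups before redeploying it. The Python sketch below assumes a hypothetical log of the system’s decisions; any gap it surfaces is a prompt to investigate, not a verdict of bias.

import pandas as pd

# A screening sketch, not a fairness audit. "model_decisions.csv" and its
# columns ("primary_language", "flagged") are hypothetical placeholders.
df = pd.read_csv("model_decisions.csv")

# Rate at which the system flags workers, broken out by group.
rates = df.groupby("primary_language")["flagged"].mean()
print(rates)

# A large ratio between the most- and least-flagged groups suggests the
# system may not transfer to this workforce as-is.
print("max/min flag-rate ratio:", round(rates.max() / rates.min(), 2))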

Worker pushback: Discomfort with AI in the workplace can be traced to a number of causes:

  • Lack of familiarity with the technology and how it works
  • Fear among workers that they’ll be replaced with AI tech or pushed into less meaningful roles
  • Anxiety about learning to use the new tools and keeping up as the technology changes
  • Concerns about violations of privacy, “Big Brother”-style monitoring and how their data will be used

Transparency and worker engagement can soften resistance, Stevens says.

“An organization needs to simply articulate what AI means in their business: what applications (in simple terms) these solutions are being used for, how those tools have been trained, how workers are expected to interact with them and how their jobs are expected to change because of it,” he adds. “And workers need to have a say in designing the strategy for implementation and adoption of those tools.”

NSC and AI

Find more information from the National Safety Council on artificial intelligence, technology and the future of work.

Are humans still required? (Yes)

Ultimately, the real dangers of integrating AI into workplace health and safety lie not with the technologies but with the humans who (mis)use them, especially if they fail to recognize that AI still requires substantial human direction, training and supervision.

“Overreliance on any tool or system – no matter how strong – is dangerous, and the same is true with AI,” Dony says. “Organizations and people will need to find a place of mutual balance and comfort in which AI tools are viewed as reliable and effective – but not infallible – guidance, and are used to strategically and tactically act more quickly and thoroughly than before.

“Once this equilibrium is reached, the potential for AI to have a real and lasting effect on safety is massive – a true enabler to a future in which no one loses their life on the job.”

Getting up to speed with AI

If you feel like you’re falling behind when it comes to artificial intelligence, you’re in good company. As Cam Stevens, CEO of the Pocketknife Group – a consultancy focused on the intersection between technology and workplace safety – points out, the technology is moving so fast that journal articles on AI are often out of date before they’re published.

Jay Vietas, chief of NIOSH’s Emerging Technologies Branch, advises safety professionals to equip themselves with a working knowledge of AI systems’ capabilities and best practices for design, implementation and maintenance.

“The more we understand how these systems work and how they translate into improved workplace safety, the more effective they’re going to become,” Vietas says.

Books

“Co-intelligence: Living and Working with AI,” by Ethan Mollick

“Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI,” by Reid Blackman

“Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence,” by Kate Crawford

Government resources

National Institute of Standards and Technology AI Risk Management Framework

Blueprint for an AI Bill of Rights

EU AI Act: First Regulation on Artificial Intelligence

Reports from the European Agency for Safety and Health at Work

E-newsletters

Charter Work Tech

One Useful Thing

Superhuman AI
