
On the Safe Side podcast Episode 23: AI in safety and achieving a just culture


EDITOR’S NOTE: Each month, the Safety+Health editorial team discusses important safety topics and interviews leading voices in the profession.

In Episode 23, the S+H editorial team discusses the role of artificial intelligence in safety and its potential impacts. Also, safety leadership expert Rajni Walia of DEKRA answers questions on the safety benefits for organizations that achieve a “just culture.”


Don’t miss the next one: Sign up to be notified by email about each new “On the Safe Side” podcast episode.

Subscribe:

  • Search your favorite podcast app for “On the Safe Side.” Please let us know if your podcast app doesn’t show a listing.
  • Subscribe to the podcast using the RSS feed.




Catch up:

Listen to past episodes.


Simon Di Nucci
January 18, 2022
A.I. is a topic that seems to get people overheated. First, lots of things get called A.I. when they are not: machine learning or big data analytics don't necessarily allow a machine to do something unexpected or original, which is, arguably, the measure of true A.I.

Then there's safety. If we think of A.I. as a black box, i.e. we can't understand what makes it work, then it's just a tool. (Even if we can inspect the A.I., it might be so complex and unpredictable as to be untestable or unanalysable, but that's not unusual for complex software systems.) So, if it's a black box, does its behavior meet the specification? Does it do what it is supposed to do and, crucially, NOT do what it's NOT supposed to do? Has anyone done the work to know the difference? That's the real question.

If the specification is inadequate or absent, or the thing is truly capable of being unexpectedly original, is there a 'supervisor' or a sandbox that prevents it from doing anything unsafe? If so, good; if not, don't use it in an environment where it might do harm. Again, this is no different from many complex, but unintelligent, software systems. Just saying!
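
To make the "supervisor or sandbox" idea in the comment above concrete, here is a minimal sketch in Python, assuming a hypothetical black-box controller and a simple numeric safety envelope; the class names, limits and fallback action are illustrative assumptions rather than anything from the episode or the comment.

    # Minimal sketch: a runtime "supervisor" that wraps a black-box model
    # and vetoes outputs that fall outside an explicitly specified safe envelope.
    # All names, limits and the fallback action are illustrative assumptions.

    from dataclasses import dataclass


    @dataclass
    class SafetyEnvelope:
        """Explicit specification of what the system is NOT allowed to command."""
        max_speed: float = 2.0      # e.g. metres per second
        max_force: float = 50.0     # e.g. newtons

        def permits(self, speed: float, force: float) -> bool:
            return abs(speed) <= self.max_speed and abs(force) <= self.max_force


    class SupervisedController:
        """Passes the black box's output through only when the spec allows it."""

        def __init__(self, black_box, envelope: SafetyEnvelope):
            self.black_box = black_box        # any callable: sensor reading -> (speed, force)
            self.envelope = envelope
            self.safe_fallback = (0.0, 0.0)   # do nothing rather than risk harm

        def act(self, sensor_reading):
            speed, force = self.black_box(sensor_reading)
            if self.envelope.permits(speed, force):
                return speed, force
            # The model proposed something outside the envelope: refuse it.
            return self.safe_fallback


    if __name__ == "__main__":
        # Stand-in for an opaque, possibly surprising model.
        def opaque_model(reading):
            return reading * 10.0, reading * 100.0

        controller = SupervisedController(opaque_model, SafetyEnvelope())
        print(controller.act(0.1))   # within the envelope -> passed through
        print(controller.act(5.0))   # outside the envelope -> safe fallback

As in the comment, the safety argument here rests on the explicitly written envelope and fallback behaviour, not on being able to understand or analyse the black box itself.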