The human brain does plenty of amazing things, including keeping humans alive. It assesses threats, avoids dangers and maintains daily functions.
But it has weaknesses, one of which is accurately predicting risk. That’s where artificial intelligence (AI) comes into play—and why it is uniquely able to make substantial improvements in workplace safety.
Keith Bowers, president of Bowers Management Analytics, knows this all too well. He previously spent 20 years at Honeywell in diverse EHS and ISC roles and holds multiple patents in AI for workplace safety.
Bowers will be speaking with Geoff Walters, corporate director of enterprise safety at Owens Corning, about how AI is ushering in the future of workplace safety at the 2024 Safety Leadership Conference, taking place Aug. 26-28 in the greater Denver area. More information, including registration, can be found here. Below is a conversation with Bowers in anticipation of his presentation.
EHS Today: What do you think of the way Owens Corning is using AI to improve workplace safety?
Bowers: I think Owens Corning does a nice job of using AI and advanced analytics at the strategic level. Too often, we simply use AI, computer vision and other advanced tools on the floor to do smaller things. Owens Corning and a few others are using AI and advanced analytics to help determine strategy and focus where they need to work and assess how good their existing systems are.
The problem—and human and organizational performance (HOP) has done a really nice job of pointing this out to us—is how poor humans are at estimating risk, particularly of infrequent events like serious injuries and fatalities. We get the lessons from Thinking, Fast and Slow, from Kahneman and Tversky, but we have huge cognitive biases and deficits when it comes to estimating the risk of infrequent events.
An example I like to use is shark attacks. If you're at the beach and you ask someone what's the highest risk for fatality here on the beach, most people—including very sophisticated, well-educated people—will say shark attacks when, in fact, fatalities from shark attacks are very rare. In the U.S. alone, over 100 people drown in rip currents every year—none of them with any help from sharks.
It's a good example of how poor we are at estimating risk, particularly fatality risk. We think of sharks first because of a very normal fear of being eaten by larger predators and popular movies and such. We think of that first, and we think it's the biggest risk when, in fact, we're almost always very, very wrong.
So, if we use AI tools and advanced analytics, we can overcome that hurdle and invest our limited resources, our limited money and our limited people on the proper risks.
In your opinion, where do the possibilities for AI and safety overlap?
That's a very smart question! A big problem with most AI tools is that they're very poorly suited for understanding and preventing risk. Most AI tools are really good at predicting very frequent and very common events. However, they are poorly suited for very infrequent events like serious injuries and fatalities.
When a serious injury or fatality happens at a company, it's often the first time that scenario or situation has led to a serious injury or fatality. How do you predict an event, a very infrequent event, that’s never happened at your site before? I think the answer lies in advanced AI, natural language processing tools and other, external data to supplement company data.
It’s interesting because, like you were saying, it's an infrequent event, but the risk is always present. I think that makes it more difficult for our brains to understand the potential severity of a problem. Our brains weren’t built to process information that way.
Yeah, very nicely phrased. We evolved in an environment that required us to manage infrequent events successfully, such as getting bitten by a tiger.
Using the Campbell Institute data as an example: if you look at total recordable cases and days away from work, they have been declining steadily for the last couple of decades. However, deaths and death rates have not; they have plateaued or arguably even increased slightly.
What's something you wish safety professionals understood about AI?
I think AI and HOP together tell us that we need to be less arrogant, that we know less than we think we do, and that we need to gather data. We need to mine our incident data in a more effective way, and we need to go out and gather new data about possible hazards.
For example, we started putting up cameras in areas with forklifts, a well-known and potentially fatal hazard that many, many companies deal with. We put up cameras in areas we thought were safe and covered by rules.
Here's a video clip I’ll probably show in Denver. You can see that this is a worker trying to load this hopper with these pallets. He just turns off the safety device and crawls in there. His buddy comes in and pulls him out.
This factory has a spotless safety record, good morale, and healthy, long-tenured, well-paid employees. Nobody had imagined that things like this were happening until we went out and gathered some data.
Someone who works at this factory might feel they know exactly what's going on and exactly what the risks are. Then we go out and gather this kind of detailed data, looking at every second over two weeks, and we find things they had no idea were happening.
People are on their best behavior when we walk through the facility, but we don't see what happens at 2:00 a.m. on a Sunday when people are trying to make quota.
We need to gather more data in order to appreciate the risks and to mitigate them.
This video playback gives me the willies!
Exactly! A forklift is 3,000 pounds of rapidly moving steel that can turn on a dime and in unexpected ways. That's why forklifts are one of the more common ways people are seriously injured or killed in the workplace.
Do you have any words of advice or perhaps caution for safety professionals who are interested in working with (or doing more) with AI to improve workplace safety?
I would caution against getting too entranced by shiny new toys. You need to make sure you understand the risks, and you need to use data to figure out what your problems are.
AI and computer vision can be very helpful with that. Then you need to identify technologies and AI tools that are appropriate for your set of problems. It's easy to be distracted by fun, cool new stuff, but you need to make sure it's addressing your biggest risks.
What's one thing you hope attendees take away from your presentation at the Safety Leadership Conference?
I hope they realize that we humble humans are quite poor at understanding serious risks, and that we should get help and take a deliberate approach to understanding our biggest risks.
I hope they realize that we have better tools for gathering more data about these risks, and that we should focus on reducing those risks once we make sure we're working on the right problems. We can spend all of our money and all of our time working on the wrong problems, only to be blindsided by the real risks.
You’ve spent over 20 years as an EHS professional. Do you have any words of wisdom that you want to share with fellow EHS professionals?
I wish all safety folks understood and valued the scientific method. When we have a belief, a plan or a program, we must make sure that there's data behind it. We do that through testing and the scientific method.
A lot of times, we think that putting some safeguard or safety process in place is going to reduce a risk. But if you don't test it, if you don't gather data before and after, then there's a good chance that you're completely wrong. Some folks are driving by intuition or best professional judgment when they should be driving by data and rigorous, objective scientific testing.
It’s a good reminder that the scientific method is a solid foundation for how to approach all kinds of problems.
Exactly! Test your solutions. Make sure they are doing what you think they're doing.