The use of artificial intelligence (AI) for environment, health, and safety (EHS) management continues to gain strong interest from organizations everywhere. It’s no longer a matter of whether AI will take hold in EHS, but when.
The Global Corporate Survey 2024: EHS Budgets, Priorities and Tech Preferences report from independent research firm Verdantix reveals that many organizations have rolled out artificial intelligence for EHS, or intend to. Results show:
- 10% have already widely rolled out AI across the organization
- 27% have partially rolled it out and are looking to increase its use
- 15% have partially rolled it out and are not looking to further disseminate it
- 6% have partially rolled it out and are looking to reduce its use
- 12% are piloting AI technology
- 30% have no current plans to roll it out
While data quality concerns give some organizations pause about expanding the use of AI for safety, almost half of respondents say they already use AI widely, are looking to increase its use, or are piloting AI use cases.
The recent Enablon Sustainable Performance Forum (SPF) in Chicago featured a panel discussion with leaders and experts, including Stuart Neumann, vice president of advisory services at Verdantix; Mammad Alizada, advisor to the president on health and safety at SOCAR; and Francisco Mendoza, global safety innovation leader at Grupo Bimbo.
The panel offered practical advice on how to prepare workers for AI use in safety, especially for computer vision, where AI analyzes camera feeds and video footage to identify hazards, unsafe behaviors or conditions, near misses, incidents, or ergonomic risks. Below are some of the insights shared.
Address surveillance and privacy concerns early
When introducing AI technologies like computer vision in the workplace, it’s natural for workers to worry that these tools may be used to monitor productivity rather than enhance workplace safety. Concerns about surveillance and privacy may quickly lead to mistrust if not addressed proactively.
The best way to build trust is through transparency. If a company plans to deploy AI for safety—whether through computer vision or other applications—it’s essential to openly acknowledge and address potential concerns from the start. Be clear about the purpose: that AI will be used to protect workers, not to monitor them.
By addressing the proverbial “elephant in the room” early and communicating the intent behind the AI use case for safety, organizations foster a greater sense of trust and promote collaboration. Don’t wait until after deployment; engage workers in conversations from the very beginning.
It’s about worker needs, not corporate goals
When introducing a digital transformation initiative or an innovative technology—especially one involving AI—organizations often emphasize broader business benefits. But to gain employee trust and buy-in, messaging should prioritize the tangible, personal benefits for workers.
People are often resistant to change and may view AI-powered safety tools with suspicion, associating them with increased surveillance or a loss of privacy. It's critical to communicate internally and clearly explain how AI-based solutions will keep workers safe, reduce workplace risks, and create a more positive working environment.
Show workers the benefits for them, not what’s in it for business management or investors.
Use peer champions to build trust
People are often more likely to trust and be influenced by peers. The same holds true in the workplace where employees are far more receptive to new technology if it's introduced and endorsed by fellow workers rather than by management or outside consultants.
To build credibility and foster acceptance, organizations should identify and empower frontline workers to serve as evangelists and champions of AI safety tools.
For example, if an AI solution is piloted at a specific site, consider recruiting those workers involved to become trusted advocates across the broader organization. Their first-hand experiences and relatable perspectives may go a long way in helping others see the value of AI technology and feel confident in its use.
Tailor your approach to cultural contexts
When planning to introduce AI safety tools, it's important to recognize that cultural attitudes toward technology, authority, and workplace environments may vary significantly across regions. This is an especially important consideration for organizations that operate globally.
In some countries or company cultures, management and workers have a collaborative relationship, making it easier to gain acceptance for new initiatives. In other regions or organizations with strict hierarchies, workers may comply with directives without much pushback but not necessarily buy in. And in some environments, skepticism toward leadership runs deep, elevating the need for peer champions to help build trust and credibility.
There's no one-size-fits-all strategy. To succeed, organizations must localize messaging, rollout plans, and engagement tactics to align with the cultural norms of each workgroup.
Learn more about the impact of AI on EHS and the key trends and priorities that will shape EHS this year and beyond by watching the recording of our webinar “EHS in 2025: What's the Present and Future?”