
Proactive Safety Without the Hype: What AI Can (And Can't) Do for the Field

Justin Morrow | October 10, 2025


Everywhere we turn, new promises are being made about artificial intelligence (AI). Business leaders have even told their employees that adopting AI is no longer optional but inevitable. Safety has not been left out of this wave of promises: Predictive analytics, natural language processing, and computer vision are all being promoted as tools that will finally make the jobsite safer.

The truth is that AI can help. It can process information faster than a human ever could, highlight weak signals, and eliminate some of the burdens that bog down supervisors and field leaders. But AI is not a cure-all, and its value is often overstated. Without good data and leadership willing to act on the results, AI becomes little more than another line item in the budget.

AI has a role to play in proactive safety, but it is a supporting role. The real performance of an organization still depends on the quality of the data collected and the culture that leaders build around it.

What AI Brings to the Table

AI works best when there is a high volume of consistent data. Given enough material, it can quickly identify patterns that a human would miss. For safety teams, that might mean analyzing thousands of observations and finding recurring hazards across different projects. It might mean scanning images or video feeds to verify that standard operating procedures are being followed, or flagging when equipment appears to be out of place.

Another strength of AI is its ability to reduce repetitive work. Forms, checklists, and logs are part of everyday safety practice, but they consume enormous amounts of time when tracked manually. AI can automate much of this, from reading handwriting on a scanned document to prefilling digital records. The value here is not in replacing human judgment but in freeing people to spend more time engaging with crews and less time chasing signatures.

These capabilities matter because they allow leaders to recognize small patterns before they grow into bigger problems. They also help distribute this information more evenly across an organization, so those who may not visit every jobsite still have a clearer picture of what is happening on the ground. But it is crucial to remember that this visibility is only information; it is not action. AI can show what is happening, but it does not decide what to do about it.

The Data Problem

The greatest barrier to meaningful AI in safety is not the complexity of the tools themselves but the quality of the data they are asked to process. For decades, safety systems have been reduced to compliance checklists.

  • Did everyone sign the form?
  • Did the right boxes get checked?
  • Were the regulatory requirements satisfied?

The systems built for these questions generate plenty of data points, but they are shallow.

The weakness here is obvious: If the only information being collected is that a meeting took place and that the crew signed the form, then the system may have a complete record, but it does not provide much insight. AI cannot overcome that limitation. If the inputs are minimal, the outputs will be minimal.

This is why the phrase "garbage in, garbage out" is so persistent. Algorithms cannot create substance from nothing. If the field is only asked to provide surface-level detail, then any analysis will reflect only that surface.

To change this, organizations need to reconsider how data is collected in the first place. Instead of treating field reporting as a compliance burden, it should be designed as an opportunity to understand the work more deeply.

  • Is the field describing the work or just marking hazards from a drop-down list?
  • Are supervisors noting the changes that they made in real time or simply recording that they were present?
  • Are patterns of communication and culture being captured, or are they left invisible because the system was not designed to recognize them?

When data becomes richer, AI becomes more effective. With meaningful context from the field, algorithms can amplify insights that matter instead of recycling shallow compliance records.

From Compliance to Culture

One of the most significant opportunities in digital safety is shifting the focus from paperwork to culture. While well intentioned, traditional safety systems have failed to capture the real language of the field; they have evolved only to produce check marks and signatures. But real safety culture does not live in check marks. It lives in the way people talk about the work, anticipate risks, and prepare for changing conditions.

Capturing the content of these conversations changes the game. Instead of only recording who attended a meeting, these tools can capture what was said, how supervisors framed the day's work, and whether workers felt comfortable speaking up. Leaders reviewing this information can see if the crew was engaged, if hazards were discussed thoroughly, or if the meeting was rushed and barely scratched the surface.

This shift also answers a larger organizational need. In today's industries, leaders cannot be everywhere at once. Crews are distributed across a multitude of sites, projects, and regions. Collecting conversations gives leaders visibility into culture without having to physically attend every meeting. They can see, at scale, whether the behaviors that they want are actually present.

When AI is applied to these activities, its potential becomes clear. Instead of simply cataloging compliance, it can highlight patterns of communication, identify where conversations are consistently strong, and flag areas where they are weak. Leaders can act earlier, reinforce what is working, and provide help where it is needed. The conversation itself becomes both the data source and the measure of culture.

What AI Cannot Do

Even with these strengths, it is important to recognize the limits of AI. These new tools are powerful, but they are not leaders. They cannot build trust between supervisors and field workers. They cannot interpret cultural nuances. They cannot set the standard for what "good" looks like in a given organization's safety practices.

AI also cannot substitute for presence. Safety leadership has always relied on showing up, asking questions, and listening. Data can supplement that presence, but it cannot replace it. If leaders rely entirely on digital reports, they risk losing the human connection that gives those reports meaning.

At its best, AI organizes information, processes it at scale, and predicts potential risks. But it cannot mentor, coach, or inspire. It can only point to where leadership is needed. The responsibility to act will always remain with people.

Moving Past the Hype

The key to proactive safety is not adopting the flashiest algorithm but rethinking the fundamentals of how information is collected, shared, and acted upon. AI can play an important role, but it is only one part of the system.

The organizations that stand out will be those that turn field reporting into a meaningful practice, not just a compliance exercise. They will treat data as a reflection of culture, not just a record of activity, and they will use AI to amplify insights rather than mask weak inputs. When context from the field is strong, AI becomes a force multiplier: spotting patterns sooner, surfacing risks earlier, and extending the reach of leadership across the organization.

The future of safety will not be defined by who has the most advanced tools but by who uses them with purpose. The companies that thrive will be the ones that see AI as a support, not a substitute, and that use technology to listen more closely to their people and act with greater precision. Proactive safety without the hype means putting culture first, using data wisely, and remembering that, no matter how advanced the system, leadership remains human.


Opinions expressed in Expert Commentary articles are those of the author and are not necessarily held by the author's employer or IRMI. Expert Commentary articles and other IRMI Online content do not purport to provide legal, accounting, or other professional advice or opinion. If such advice is needed, consult with your attorney, accountant, or other qualified adviser.