Those who believe that what you cannot quantify does not exist
also believe that what you can quantify, does.
—Aaron Haspel
Safety leadership has consistently relied on data to drive action. That reliance has produced a collection of metrics to build and manage: audit scores, incident rates, lost time, observation volume, and near-miss totals. Historically,
safety has been measured by what can be captured in a spreadsheet, creating a tendency
to think that, if the data can't be easily captured, it must not be relevant.
This mindset does not come from a place of malice; it comes from the human
instinct to find certainty in a world that is dangerous and unpredictable. When people
are hurt or killed doing their work, leaders grasp for control. Data can feel like
control. It can feel like clarity. It can create a narrative that can be shared with
owners, insurers, regulators, clients, and executives. Data also offers the strong
illusion of causality.
But illusions do not protect workers; numbers that only tell us what has
already happened cannot prevent what happens next. If our only compasses are rearview
mirrors, then we may be steering toward the crash we are trying to avoid.
It is time to rethink what we use to understand the health of our safety
systems; it is time to shift from reactive metrics to proactive metrics. To do that, we
must learn to read signals that are messier, more human, and more dynamic than simple
counts.
The Box Built Around Safety Metrics
Incident rates remain the primary scoreboard for most organizations. Historically, regulators and insurers have gravitated toward these lagging indicators because they needed a standard language to quantify risk. Incident rates influence contracts, prequalification, and executive bonus structures, and for some leaders, they are the only safety metric with which they are familiar.
But incident metrics suffer from one fatal flaw: They only
activate after someone gets hurt—when the system has already failed. They do not
predict on their own.
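To ground the discussion, here is a minimal sketch of how an incident rate is typically computed, assuming the rate referenced throughout this article is the OSHA total recordable incident rate (TRIR), which normalizes recordable cases to 200,000 labor hours (roughly 100 full-time workers for a year). The case and hour counts below are hypothetical.

```python
def incident_rate(recordable_cases: int, hours_worked: float) -> float:
    """OSHA total recordable incident rate (TRIR): recordable cases
    normalized to 200,000 labor hours, about 100 full-time workers
    at 40 hours/week for 50 weeks/year."""
    return recordable_cases * 200_000 / hours_worked

# Hypothetical contractor: 9 recordables across 850,000 labor hours
# lands "slightly above 2.0," like the example later in this article.
print(round(incident_rate(9, 850_000), 2))  # 2.12
```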
In response to this limitation, safety professionals adopted audit
scores and structured observations. These tools promised to push safety upstream.
Instead of waiting for injuries, safety professionals could now look for precursors.
High-energy hazards, unsafe conditions, workarounds, improper equipment, unclear
procedures, and fatigue are just a few examples of the cascade of contributing
factors that safety professionals are tasked to collect and organize. In theory,
this push toward proactive indicators would allow them to identify and correct risk
before anyone suffered the consequences.
Once organizations began scoring audits and tracking observation volumes, the tools became performance barometers. Leaders started using them to evaluate project teams and superintendent performance. Low metrics became a mark of failure; high metrics became the symbol of success. What was designed as an early warning system slowly mutated into a compliance scoreboard. As soon as metrics become scoreboards, the temptation emerges to manage the scoreboard rather than manage the risk.
Safety professionals recognize this pattern. Observation trends become overwhelmingly positive as project leadership uses the forms as a feel-good exercise instead of a diagnostic tool. Anything uncomfortable or negative is left out because it may reflect poorly on the individual or the job.
Project teams want to be perceived as doing well. No one wants to
explain why their jobsite has a dozen hazards logged compared to the neighboring
project with zero. The system evolves to hide risk, and the jobsite appears safe
until it is not.
This is how organizations convince themselves that their safety systems are working while the field quietly absorbs pressure, confusion, and unsafe conditions. The environment grows fragile, and a single moment can expose the discrepancy between the numbers and the truth.
What Happens When You Rely on What Is Easy to Measure
Data systems tend to reward whatever is easiest to capture.
Checklists, signatures, and countable positives are simple and do not require
vulnerability or deep thinking. They do not ask leaders to question their own
assumptions. A form can be stamped in 5 minutes, whereas a great safety conversation may take 20, counting both the preparation and the conversation itself.
When the input is shallow, the output is shallow. Many
companies then attempt to compensate through technological means. They invest
heavily in mobile forms, digital audits, automated workflows, or predictive
analytics. These tools can accelerate insight when paired with quality data, but
most organizations struggle to solve the input problem and simply digitize their
blind spots.
The risk becomes even more pronounced when generative analytics,
predictive modeling, and artificial intelligence enter the conversation. A weak data
pipeline produces weak predictions regardless of how advanced the tool may be. If
the inputs are box-checked tailboards, recycled talking points, and sanitized
observations, then the algorithms will simply reflect the false confidence of the
processes.
When leaders rely on whatever data is easiest to capture and measure, they overlook the human side of safety systems. These systems
are built on trust, clarity, alignment, honesty, and shared accountability. None of
those qualities is captured in a five-part form field.
A Better Path: Proxy Indicators and Leading Indicators
Historically, there has been no good way to measure field culture directly, but that does not make it unmeasurable. Trust, communication quality, and leadership are
the forces that influence all aspects of work. Proxy indicators provide the
opportunity to capture the overall health of these cultural influences. A proxy
indicator is a measurable variable that reliably correlates with the health of
something more complex that we care about.
Safety is no different: Culture may not have been easy to measure in the past, but today we can measure participation, tone, and depth
of engagement. We can see whether workers ask questions or remain silent. We can
determine whether leaders coach or lecture. We can measure the quality of daily
planning conversations and link those patterns to outcomes. Proxy indicators turn
the abstract into something observable. They help us see the invisible forces
shaping the jobsite before they manifest as harm.1
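As a minimal sketch of what operationalizing a proxy indicator could look like, consider scoring each day's planning conversation on a simple rubric and checking whether those scores track logged exposures over time. The rubric, variable names, and data below are hypothetical illustrations, not a prescribed method.

```python
from statistics import correlation  # requires Python 3.10+

# Hypothetical weekly averages per project: planning-conversation
# quality on a 0-10 rubric (participation, questions asked,
# worker-led content) versus high-energy exposures logged.
conversation_quality = [8.5, 7.9, 8.8, 4.2, 3.9, 6.1, 8.0, 4.5]
exposures_logged = [1, 2, 1, 6, 7, 3, 1, 5]

# A strong negative Pearson correlation would suggest conversation
# quality is a usable proxy for exposure risk; judging causation
# still requires human context.
r = correlation(conversation_quality, exposures_logged)
print(f"correlation: {r:.2f}")
```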
Evidence from Conversation-Level Metrics
Daily planning conversations are among the richest proxy indicators we have; they are the moments when teams align on risk. They reveal confusion,
assumptions, interpersonal dynamics, pressure, and psychological safety. When the
conversation is thoughtful, contextual, and worker-led, the crew begins their day
with clarity and control. When it is rushed, dismissive, or theatrical, the crew
starts their day with fear and ambiguity.
The results reveal a stark difference: Projects where these simple
conversations are lacking are four times more likely to experience incident exposure
than projects with rich daily planning conversations. The variable is simply a
reflection of leadership behavior; when workers feel comfortable predicting
problems, incidents fall. Teams that plan with intention are also better prepared
for surprises that will inevitably arise.
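To make the "four times" figure concrete, the sketch below shows the kind of relative-risk arithmetic that could produce it. The project counts are hypothetical and chosen only to illustrate the calculation, not drawn from the data behind the claim.

```python
# Hypothetical: 50 projects with weak daily planning conversations
# and 50 with rich ones, tracked over the same period.
weak_projects, weak_with_incident = 50, 20
rich_projects, rich_with_incident = 50, 5

risk_weak = weak_with_incident / weak_projects  # 0.40
risk_rich = rich_with_incident / rich_projects  # 0.10

# Relative risk: how much more likely incident exposure is on
# projects lacking rich daily planning conversations.
print(risk_weak / risk_rich)  # 4.0 -> "four times more likely"
```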
The quality of the conversation does not prevent injuries by
itself; the conversation is simply the window into the capacity of those closest to
the work. It makes the invisible visible and provides leadership with the starting
place for action.
The Paradox of Negative Information and How to Handle It
The moment an organization encourages reporting, something predictable happens: Workers notice and report more hazards. They describe gaps,
challenges with staffing, confusion about procedures, or any number of concerns that
they are now empowered to identify.
The influx of "negative" information can be met with resistance. Some leaders struggle to absorb it, and the natural instinct is to fix the signal rather than the risk it reveals. If 20 hazards are recorded in 2 days, leadership demands an explanation. In this environment, operational leaders must instead expect to hear "bad" news and provide an actionable, supportive response. If the leadership response is blame or shame, the system will fall apart, and the signal from the field will fade.
A familiar example is the "Days Since Last Injury" sign. What started as a way to encourage workers to think through their next task can become an internal competition not to be the one who resets the sign to zero.
The paradox is simple: If leaders want the outcome that comes from
people sharing bad news under pressure, they cannot punish the bad news. A culture
of fear kills the signal; a culture of learning amplifies it.
Creating this culture requires leadership maturity: executives and superintendents must see negative information as the price of prevention rather than a mark of failure. The best teams are not those that find
zero hazards; the best teams are those that surface them relentlessly and fix them
without hesitation.
Learning Through Iteration Instead of Perfection
There is a well-known story about a pottery class that shows how to apply these lessons. One group was graded on producing the single best pot possible. They had all semester to theorize, plan, and attempt a single masterpiece. The other group was graded on volume. The more pots they made, the higher their final grade.
The second group experimented constantly. They failed early, refined their techniques, adjusted, and tried again. They iterated their way to excellence, while the first group theorized its way to mediocrity.
Safety systems work the same way. They do not emerge from a perfect design session. They emerge from hundreds of micro iterations in the field. They emerge from crews discovering what works, what does not, and what needs to change. They emerge from workers speaking candidly about risk and leaders listening without defensiveness.
No organization achieves safety excellence through a single perfect process. Excellence grows through the compounding effect of small, open improvements.2
A Real-World Example: TDIndustries
Several years ago, a mechanical contractor in Dallas was stuck at an incident rate slightly above 2.0. Their inspections and observations
were sparse, and field participation was anemic. Rather than doubling down on rules,
they chose to double down on engagement. They asked a simple question: What would
happen if we listened more?
They encouraged workers to participate in observations without
fear and trained supervisors to facilitate conversations instead of lectures. They
created feedback loops where workers could identify concerns and see corrective
action quickly. Most importantly, they refused to treat negative information as proof of incompetence and instead treated it as proof of awareness.
In the first year, the incident rate dropped from 2.0 to 1.4. By
the third year, the incident rate dropped to 1.1. Observation participation
increased by more than a thousand percent. They went from collecting around 12,000
observations annually to collecting 155,000.
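The "more than a thousand percent" figure follows directly from the reported counts, as a quick check confirms.

```python
before, after = 12_000, 155_000  # annual observations, per the article
pct_increase = (after - before) / before * 100
print(f"{pct_increase:.0f}% increase")  # 1192% increase
```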
Those 155,000 data points were not noise; they were learning
cycles. Each represented a moment when a worker paused, noticed something, and chose
to share it. Improvement emerged not through grand reform but through consistent
iteration.
The system became healthier because the people closest to the work
were empowered to become part of the safety engine.3
From Compliance to Something Greater
Traditional metrics will not disappear; regulators will require
them, clients will request them, and insurers will price them because they tell us
what the system has already done.
If we want to know whether the system will perform tomorrow, we need indicators that track what people are doing today. We need indicators of how well crews are communicating, how openly they address hazards, how quickly leaders respond, and how much trust exists across the team.
Proxy indicators illuminate those realities. They allow us to
measure the invisible qualities that shape every jobsite. They show whether people
understand risk or simply check a box confirming they have discussed it. They enable
leadership to understand whether conversations are alive and dynamic or hollow and
scripted.
A safety system grounded in proxy indicators does not eliminate
risk but reveals risk early enough to learn from it. It gives voice to the people
who see danger first. It replaces fear with clarity. It replaces ambiguity with
transparency.
The most powerful safety technology remains the oldest one we
have: It is the act of listening. Safety is not built on perfect numbers; it is
built on honest conversations, continuous improvement, and the courage to learn from
what people see.
Opinions expressed in Expert Commentary articles are those of the author and are not necessarily held by the author's employer or IRMI. Expert Commentary articles and other IRMI Online content do not purport to provide legal, accounting, or other professional advice or opinion. If such advice is needed, consult with your attorney, accountant, or other qualified adviser.