
Seven Frontiers of Internal Control and Risk Management

Matthew Leitch | January 1, 2006


As we enter 2006, I thought it would be a good time to look ahead at how things in the world of internal control and corporate risk management might develop over the next several years.

Here are my "seven frontiers" of internal control and risk management.

1. More Controls Design and Less Audit and Remediation

A war is going on. A war between the quality movement, the previously dominant approach to process reliability (and efficiency), and the internal controls movement, which is gradually gaining ground. Perhaps this trend has something to do with the change in employment from manufacturing toward services, and financial services in particular. It surely has little to do with technical merit. Although the internal control perspective has the advantage of risk thinking and explicit consideration of fraud risks, it is still far behind the quality movement on measurement and design engineering.

Why doesn't the internal controls movement have a thriving tradition of controls design? Simply, it is because the controls movement has been led by auditors, and auditors do not design. Indeed the few experiments in internal controls design have usually produced disappointing results, very slowly, because they applied audit techniques to design problems. However, as quality and internal control gradually swap ideas, and as more and more money is spent on controls, people are beginning to spend more of that money on people whose job is to design and implement better controls.

Another driver is the deluge of "remediation" produced by projects to comply with section 404 of the Sarbanes-Oxley Act 2002. Some companies and their auditors have listed thousands of control remediation actions, and too many of these have been poorly thought out. I predict a backlash that includes putting competent people in charge of controls improvement.

2. Corporate Risk Management Getting Closer to Internal Control

Over the last couple of decades, the definitions of both "risk management" and "internal control" have become ever broader, and now I see no worthwhile distinction between them. Perhaps that's not quite true in the definitions stated by influential guidance documents and standards, but it is the way more and more people are thinking.

However, risk managers and internal controls managers tend to have different backgrounds and preoccupations. Risk managers tend to be concerned with big, nonrecurring risk events and often have insurance or engineering backgrounds. Internal controls people are more concerned with smaller, recurring internal risk events and tend to have audit or accounting backgrounds.

Already this difference is breaking down. I have met operational risk managers in banking who seem almost equally interested in both routine and nonroutine risks, and whose background no longer seems to have much influence on their approach.

There is also a technical reason for internal control and risk management coming even closer together. While risk managers tend to be better at getting involved in the big business issues and talking with senior management about things that really concern them, the internal controls community is getting better and better at running a "system." The trend is toward documenting risks and controls in detail and using confirmations and self-assessment to make sure every last control is complied with all the time.

Gradually people are seeing that the grinding power of the "system" approach can also be applied to the risks that management—even senior management—take. I have coined the phrase "intelligent controls" to refer to things that managers can do to manage uncertainty more effectively. Scenario planning, for example, is an intelligent control, and a company can make a policy of using it, just as it would make a policy for doing bank reconciliations.

3. Better Quantification

It's ironic that internal controls thinking, despite being a movement led by the big audit firms (of accountants), has paid almost no attention to quantifying risks or the benefits of controls in a credible, mathematically competent, and data-supported way. Most assessments don't get past "high-medium-low." This is a huge contrast to the quality movement, with its vast array of statistical process control techniques and its emphasis on measurement and on results.

However, as organizations spend more and more on internal controls, they reach a point where intuition is no longer enough and reassurances that the work is worthwhile need to be backed up with facts. Again, operational risk management in banks may be the leading edge of a trend toward better data gathering and quantification. Many banks have done a lot of work to measure operational risk. Some have also begun to look for statistically important relationships between potential drivers of operational risk and the events that result. Gradually, intuition is giving way to a more scientific approach.
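To make that concrete, here is a minimal Python sketch of the most basic version of this kind of analysis: checking whether a candidate driver of operational risk moves with the loss events recorded. The monthly figures and the choice of "staff turnover" as a driver are invented purely for illustration; real programs would use far richer data and proper statistical testing.

```python
import numpy as np

# Hypothetical monthly figures, purely for illustration: a candidate driver of
# operational risk (staff turnover, %) and the loss events recorded each month.
turnover_pct = np.array([2.1, 3.5, 1.8, 4.2, 5.0, 2.9, 3.8, 4.6, 1.5, 3.1])
loss_events  = np.array([3, 6, 2, 8, 9, 5, 7, 8, 2, 5])

# The crudest possible test of a relationship between a potential driver and
# the events that result: the correlation between the two series.
r = np.corrcoef(turnover_pct, loss_events)[0, 1]
print(f"Correlation between turnover and loss events: {r:.2f}")
```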

4. Behavior Change Beyond Risk Registers

The objectives of a risk register are to promote better risk management and to confirm, by mapping controls to risks, that the main risks are covered. When risk managers begin introducing risk management systems in an organization, this is typically where they start. Once that particular system is running smoothly, they often get involved with initiatives to improve controls, such as injecting risk assessments into projects, working out procedures for business case approval, and developing policies for resilient sourcing.
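To illustrate what confirming coverage by control mapping amounts to in its simplest form, here is a brief Python sketch. The risk and control names are invented: each register entry lists the controls claimed to address it, and the check flags any risk with none.

```python
# A hypothetical, minimal risk register: each entry lists the controls claimed
# to address it. Risk and control names are invented for illustration.
risk_register = {
    "Key supplier fails to deliver": ["resilient sourcing policy", "dual sourcing"],
    "Project overruns its budget": ["stage-gate reviews", "project risk assessment"],
    "Fraudulent expense claims": [],  # no control mapped yet
}

# The control mapping check: flag any risk with no control mapped to it.
for risk, controls in risk_register.items():
    if not controls:
        print(f"No control mapped for: {risk}")
```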

Risk managers get involved in this way because, in practice, having a nice-looking risk register is no guarantee that risk is being managed well. If managers still pretend to be more certain than they really are (or should be) to get their way, if people hold back bad news in the hope that things will turn out right in the end, if risk management procedures for bids are seen as an obstacle to be gamed until the right answer comes out, and if the company still staggers from one "unexpected" crisis to another, then it doesn't matter what the risk register looks like: risk and uncertainty are being mismanaged.

It is common sense that a risk management program should cause managers to manage risk better, which means behaving differently, not just filling in the risk register. This progression by individual risk managers, from risk registers to directly improving behavior, may be the way the risk management profession as a whole progresses.

Perhaps we will also see risk managers turning their attention to ways of influencing managers' behavior directly, such as by education programs that explore cognitive biases, social factors influencing risk perception and communication, and skills for communicating uncertain information without losing face.

5. Risk Management That Targets Psychological Factors

I often talk about the psychology of uncertainty and how it leads to bad planning and decisions. People find this very interesting, and everyone can think of occasions from their own experience when someone suppressed uncertainty, usually with unfortunate results. For example, at a company whose business involves bidding for large contracts, a system was introduced that worked out minimum bid prices. The system asked salespeople for information about risk factors and used this as a significant part of the calculation. Unfortunately, when the system gave a price the salespeople did not like, some would delete risk factors until they got a number they preferred.

This kind of thing is astonishingly common. I think we can expect to see increasing interest in ways to counter it. At the moment, it is often mentioned informally, but in the future, it could become an accepted part of risk management (and internal control) theory.

6. Risk and Performance Management Merging through a Causal Model

Go into a typical large organization and ask for its risk register and for its scorecard (or whatever similar document states the firm's major goals and measures of progress). Now compare the two. You will notice that they are remarkably similar.

Very often something stated as a "critical success factor," perhaps on the scorecard, has a similar item in the risk register that is just the potential failure to achieve what is stated in the critical success factor. Where you find a critical success factor that does not have a matching risk, you have to wonder why not. Conversely, where there is a risk without a matching critical success factor, and the risk does not refer to some external condition, you have to wonder why the critical success factor is missing.

This is hardly surprising, as risk analyses are very often driven by statements of objectives. What is surprising, however, is that there are two separate documents looked after by two separate teams. One group is trying to come up with actions that will make something happen. The other is trying to come up with actions that will make sure it does not fail to happen. Obviously, some kind of integration looks promising. However, once you start down this route, you discover something more interesting still, something that offers to solve some of the most frustrating and difficult problems in corporate risk assessment.

The current leading thinking in performance management is that measures of performance should be based on a causal model of how the organization and its environment work. This model links levers management can pull to the results ultimately achieved. Robert S. Kaplan and David P. Norton, the original "balanced scorecard" gurus, call this a "strategy map."

One way of building a risk model would be to derive it directly from one of these causal models. There would be a "risk" for the future values of each variable in the model, and another "risk" for each connection between variables, representing our uncertainty about the structure and parameters of the model itself. This solves our problems of understanding the causal links between risks and of estimating impact. Think about it: How can you work out the impact of something without analyzing how one thing leads to another? Isn't it odd that companies that have not modeled how their actions lead to results nevertheless expect risk managers to work out how failures could damage those results?
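As a rough illustration of deriving a risk model from a causal model, the following Python sketch generates one "risk" for the future value of each variable and one for each assumed causal link, as described above. The variables and links are invented for the example and stand in for whatever a real strategy map would contain.

```python
# A hypothetical fragment of a causal ("strategy map") model: each link says
# that one variable is believed to drive another. All names are invented.
causal_links = [
    ("staff training hours", "service quality"),
    ("service quality", "customer retention"),
    ("customer retention", "revenue"),
]

risk_items = []

# One "risk" per variable: uncertainty about its future value.
variables = sorted({v for link in causal_links for v in link})
for v in variables:
    risk_items.append(f"Future value of '{v}' differs from plan")

# One "risk" per link: uncertainty about whether the assumed relationship
# (its structure and strength) actually holds.
for cause, effect in causal_links:
    risk_items.append(f"Effect of '{cause}' on '{effect}' differs from what is assumed")

for item in risk_items:
    print(item)
```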

7. Technical Risk Register Reforms

My final frontier is more of a plea than a prediction. I have seen many risk registers over the years, and not one has been entirely free of serious technical flaws. Many types of flaws are so widespread they are usually accepted as good practice. Surely this cannot continue.

The main problems stem from what I call the "single risk fallacy." This is the belief that the items on a risk register are single risks. In fact, they are nearly always sets of risks as indicated by the fact that the impact of the events described can usually vary, e.g., a fire is not just a fire but a range of possible fires causing varying levels of damage.

If you believe a risk register item is a single risk, then it seems obvious that no effort is needed to define the boundaries of that risk, and it makes perfect sense to rate the impact if that risk occurred. Consequently, most risk register items are so vaguely worded that it is hard to tell what is included, and risk sets that clearly need to be modeled with a probability distribution over impact are instead reduced to a single impact level, typically "medium."
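To show what treating a register item as a set of risks can look like, here is a minimal Python sketch that models "fire" with an assumed annual probability of occurrence and an assumed lognormal distribution of damage given a fire, rather than a single "medium" rating. Both parameters, and the simplification of at most one fire per year, are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Fire" treated as a risk set rather than a single risk: an assumed annual
# probability that a fire occurs (at most one per year, for simplicity) and an
# assumed lognormal distribution of damage given a fire.
p_fire = 0.02
years = 100_000

fire_occurs = rng.random(years) < p_fire
losses = np.zeros(years)
losses[fire_occurs] = rng.lognormal(mean=12.0, sigma=1.5, size=fire_occurs.sum())

# A distribution over impact carries far more information than "medium".
print(f"Chance of a fire in a year:  {fire_occurs.mean():.1%}")
print(f"Mean annual loss:            {losses.mean():,.0f}")
print(f"99th-percentile annual loss: {np.percentile(losses, 99):,.0f}")
```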

There are many simple ways to improve risk registers, even if you choose not to use an explicit causal model.

Summary

Imagine the effect of making progress on all seven frontiers of risk management and internal control. Imagine systematic implementation of hard-hitting risk management controls, with measured benefits, profound behavior change leading to wiser management at all levels, techniques that are both simple and effective, and the satisfying feeling of having efficient controls carefully designed and implemented in good time.


Opinions expressed in Expert Commentary articles are those of the author and are not necessarily held by the author's employer or IRMI. Expert Commentary articles and other IRMI Online content do not purport to provide legal, accounting, or other professional advice or opinion. If such advice is needed, consult with your attorney, accountant, or other qualified adviser.