Why the COSO Frameworks Need Improvement

Matthew Leitch | April 1, 2005

The recent enterprise risk management (ERM) framework published by COSO is new, lengthy, and inherently flawed. Before it becomes the basis for future regulatory oversight, changes need to be made, including updating of the internal control framework and an overhaul or removal of the Evaluation Tools.

I can't think of a document that has had more influence on thinking about internal control than the Committee of Sponsoring Organizations of the Treadway Commission's (COSO's) "Internal Control—Integrated Framework." It is endlessly quoted and paraphrased in control and governance documents for different sectors and has recently become the de facto standard for controls over financial reporting thanks to the Securities and Exchange Commission's (SEC's) interpretation of the Sarbanes-Oxley Act of 2002 (SOx). Thousands of people have written hundreds of thousands of pages about their internal controls using formats taken from this framework.

More recently, COSO has published "Enterprise Risk Management—Integrated Framework," which some are already calling "COSO II." This looks set to be as influential as the internal control framework. So, even quite small technical weaknesses in these documents could have huge practical implications.

In this article I will show that weaknesses do exist and they are far from small. The practical implications are huge, and we need to press for improvements with the minimum of delay.

What's Wrong with COSO's Internal Control Framework?

COSO's internal control framework was a breakthrough in internal control thinking. Suddenly, internal control became a system instead of just a list of objectives or controls, and the framework's definitions expanded the concept in an exciting new way. At the time, it seemed a great step forward, but with the benefit of time and experience, we can see the practical implications of some of its conceptual weaknesses.

The definition of "internal control" is so wide that almost every aspect of management is arguably part of internal control. The definition reads:

Internal control is broadly defined as a process, effected by an entity's board of directors, management and other personnel, designed to provide reasonable assurance regarding the achievement of objectives in the following categories:

  • Effectiveness and efficiency of operations.
  • Reliability of financial reporting.
  • Compliance with applicable laws and regulations.

When the internal control framework was launched, its most ardent supporters saw it as a complete guide to management. The idea was that for business success, you just define some objectives and the rest is internal control. The framework divided objectives, and hence risks, into three categories: operational, financial reporting, and compliance.

At first this seems clear enough, but what about financial reporting that must be reliable to be compliant? Where do you draw the line between data processing for doing business and data processing for financial reporting?

Most confusing of all are the five components of internal control. The "control activities" component is straightforward enough, but who can honestly say they aren't just a tiny bit hazy on "information and communication," "monitoring," "risk assessment," and, of course, the "control environment"?

All these problems are minor compared to a part of the framework that isn't even mentioned in the executive summary. One of the books in the COSO set is called the Evaluation Tools. It includes a large number of illustrative control matrices showing what controls might be in place for every major process in a typical business. In effect, these matrices are lists of control objectives, with controls next to them.

This format has been taken up by auditors and companies desperate to comply with § 404 of SOx, so there are already hundreds of thousands of such matrices around the world. And that's a pity because, for reasons I will now explain, the format is unreliable and impractical. When COSO's internal control framework was written and consultation was in progress, who at that time had any inkling of the use to which these matrices might be put? How many reviewers had the interest or patience to even comment on them?

If people had known at the time what use would be made of these matrices, I don't think they would have been published, at least in their current form.

Those Matrices

Unless you use a computer system that can also display controls in other ways, the COSO matrix format will produce the following problems.

  • Gaps in control objectives. The COSO matrices are based on abstract models of business processes with no concrete details about the systems or people involved. In reality, a process like "raise a bill" may be split across half a dozen computer systems and a vast number of interfaces. There is massive scope for missing controls, and this is not visible on the COSO matrices. Beyond this, there is no consistent framework by which the objectives are derived and which gives assurance that the set of objectives is complete.
  • No usable list of controls. Controls are noted on the matrix, but not all of them, and not just once each. Many controls appear more than once because they address more than one objective, and in practice it is common to find the same control recorded in different wording, so de-duplication is not easy.
  • Systematic understatement of controls. The duplication tends to deter people from writing a control down every time it applies to an objective/risk, so the extent of control is systematically understated. In fact, control systems tend to be multi-layered, and certain controls apply across very many control objectives. To appreciate the control system's design, we need to see that structure (see the sketch after this list).
  • Gaps in controls. Many controls are not mentioned at all, nor is their absence visible. I have conducted several experiments in which I rewrote COSO-style matrices in a form that structures the controls into different types, and this has always revealed large gaps, usually important ones, in the documented control structure.
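
To make the duplication and de-duplication problems concrete, here is a purely hypothetical sketch (the process, objectives, and control wordings are all invented). Inverting an objective-keyed matrix into a control-centric view exposes duplicates and multi-objective controls, but entries for the same control written in different words are not merged by any simple normalization:

    from collections import defaultdict

    # Toy COSO-style matrix for a hypothetical billing process:
    # each control objective lists the controls cited against it.
    coso_matrix = {
        "Bills are raised for all goods shipped": [
            "Reconcile shipping log to invoices",
            "Management review of revenue trends",
        ],
        "Bills are priced correctly": [
            "Price list maintained and approved by product management",
            "Review of revenue trends by management",   # same control as above, different wording
        ],
        "Bills are recorded in the correct period": [
            "Reconcile shipping log to invoices",       # duplicate entry
            "Month-end cut-off checklist",
        ],
    }

    # Invert to a control-centric view: one entry per distinct control text,
    # listing every objective it addresses.
    control_view = defaultdict(list)
    for objective, controls in coso_matrix.items():
        for control in controls:
            control_view[control].append(objective)

    for control, objectives in sorted(control_view.items()):
        print(f"{control}  ->  {len(objectives)} objective(s)")

    # Note that "Management review of revenue trends" and "Review of revenue
    # trends by management" remain separate entries: the same control written
    # two ways cannot be merged without judgement, which is why de-duplication
    # is hard and why broad, layered controls end up understated.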

In short, COSO matrices are very hard to review properly, are rarely of good quality, and don't give a usable list of controls. The sooner this situation is corrected the better. The most practical thing to do in the short term is simply to remove the Evaluation Tools from the framework.

What's Wrong with COSO's ERM Framework?

COSO is to be congratulated on a document that was produced with public consultation and tries hard to recognize a wide variety of alternative ways to manage risk. It shows great knowledge of risk management techniques and contains many interesting examples.

Unfortunately, with two volumes totaling 246 pages, it is so large that it is hard to see how every part of it can have received adequate comment during the consultation phase. Although the published documents reveal that there were 78 responses to the consultation, the responses themselves have not been made public, so we cannot know how much of the material was given serious scrutiny.

My impression of the two volumes is that they contain many ideas that are new or different from usual practice, and some distinctions that most readers will not understand. For example, many people will not notice on initial reading that "risk tolerances" do not actually relate to risks (there is no element of uncertainty involved). The distinction between "risk responses" and "control activities" will also be confusing.

In short, the ERM framework is far too big for a first version. Not surprisingly, it contains some obvious technical flaws. For example, although keen to talk about "opportunities," it doesn't have the logic worked out properly, and the crucial paragraphs on what to do with them are unclear. The document explains that if an identified potential event is favorable, it is an opportunity and is sent to strategic planning so that plans can be made to take advantage of it. There is no comparable comment on what happens if an identified potential event is unfavorable. Does that mean plans are left unchanged?

The crucial paragraphs on what happens to upside risks are unclear. My best guess is that some get taken out of risk management to be looked at elsewhere, while others stay in risk management. This appears to exclude the possibility of integrated uncertainty management that deals with both unexpected good and bad events in one approach.

Another problem is the many examples of rating risk items for their probability and impact. When risk register items are rated using (1) a single number for the probability of occurrence and (2) a single number for the impact if it occurs, their risk is systematically understated. It is hard to see the problem when ratings are as rough as High/Medium/Low, but when numbers are given, the fault is obvious.

Imagine that, at an early stage in a project, the risk of "Overspend due to client-originated changes" was rated as Probability = 0.5 and Impact = £10m. By this method, the possibility of an impact other than £10m has been excluded, and we should be particularly concerned that the risk of an impact greater than £10m has disappeared from view. For an item like this, there may well be a 20 percent chance of an overspend of more than £15m, for example, and this is obviously something people need to know!
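
A minimal sketch of the arithmetic, with entirely invented figures: suppose the "true" uncertainty about client-originated changes is the small scenario table below rather than a single outcome. The two-number rating reproduces the headline probability of 0.5, but it hides both the 20 percent chance of an overspend above £15m and the gap between the expected overspend and the figure the rating implies:

    # Hypothetical scenario table for "Overspend due to client-originated
    # changes": overspend in GBP millions and the probability of each outcome.
    scenarios = [
        (0.0, 0.50),   # no overspend
        (5.0, 0.15),
        (10.0, 0.15),  # the single impact figure the rating records
        (20.0, 0.15),
        (30.0, 0.05),
    ]
    assert abs(sum(p for _, p in scenarios) - 1.0) < 1e-9

    p_overspend = sum(p for impact, p in scenarios if impact > 0)
    p_over_15m = sum(p for impact, p in scenarios if impact > 15)
    expected_overspend = sum(impact * p for impact, p in scenarios)
    rating_view = 0.5 * 10.0   # what "Probability = 0.5, Impact = GBP 10m" implies

    print(f"P(any overspend)   = {p_overspend:.2f}")   # 0.50 - matches the rating
    print(f"P(overspend > 15m) = {p_over_15m:.2f}")    # 0.20 - invisible in the rating
    print(f"Expected overspend = {expected_overspend:.2f}m vs {rating_view:.2f}m implied by the rating")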

The framework is also overly narrow, something else that would have been less likely to happen with a shorter document. For example:

  • The framework effectively excludes use of risk management methods that do not involve explicit event identification and risk assessment. In practice, most risk responses are put in place without explicit event identification and risk assessment, and this is an efficient and reliable approach.
  • The framework has no place for methods of designing risk responses that do not involve writing responses against a list of risks. This unnatural method leads to piecemeal design. Would an architect design a building by listing the required windows, doors, and walls? Hardly. This way leads to rooms without doors, walls that do not join up, and main entrances that open directly into the kitchens.
  • The framework is written on the basis that risk management is a way to be more confident of reaching objectives that are givens. (Yes, I know it acknowledges that risk management is not a linear thought process, but having done so, it goes on for hundreds of pages as if it is.) Uncertainty should also be considered in setting and revising objectives.

Should we tolerate a document with the influence of COSO's framework containing logical flaws and being excessively prescriptive? The current version says that the internal control framework remains the document for internal controls assessment (i.e., for SOx purposes) but the ERM framework is clearly designed to supersede it one day. Sooner or later, we will face the prospect of the ERM framework having virtually the same status as law. What now may seem trivial theoretical gripes will in time emerge as major barriers to spreading the word about the benefits of great risk management.

Conclusion

COSO's internal control framework urgently needs updating, and the Evaluation Tools in particular should be removed until something better is available. The ERM framework is new, but before it becomes the basis for some future regulatory paper-chase, we should press for it to become shorter, more open, and less flawed.


Opinions expressed in Expert Commentary articles are those of the author and are not necessarily held by the author's employer or IRMI. Expert Commentary articles and other IRMI Online content do not purport to provide legal, accounting, or other professional advice or opinion. If such advice is needed, consult with your attorney, accountant, or other qualified adviser.