
Embedded Risk Management: The Auditors' Contribution

Matthew Leitch | April 1, 2004


While most people agree that embedded risk management sounds good, few know how to achieve it. This article offers tips on how it can be accomplished, along with examples drawn from common situations.

Most people seem to agree that "embedded" risk management sounds a good idea, and that "embedded" means "not something extra added on." They want risk management to be real, natural, convenient, and effective. How can a risk program achieve this, and who is well placed to do it?

The Job To Be Done

Even in organizations without a risk manager, or other risk management specialists, risk is being managed. Risk and uncertainty are so pervasive in our lives that we deal with them all the time. Risk management already exists, in some form, before risk managers and auditors come along to try to "implement" it.

So, however we go about cultivating risk management in an organization or individual, we need to understand how risk is being managed now. Having understood the current situation, we need to think about possible improvements and about the easiest ways to create evidence that risk has been managed. Finally, we have to continue collecting that evidence, and make sure our risk management methods keep up with changing conditions and requirements.

Who Can Do This?

Anyone who has the patience to listen can learn to do this work, but I suggest it is auditors, and particularly internal auditors, who are most used to the lifestyle of traveling to visit people, asking them questions, studying reality, and then discussing and agreeing to recommendations for improvement. Internal auditors are the ones most likely to make embedded risk management work, but they need some new skills to do it well.

The Technical Skills Needed

Auditors who want to develop the best possible skills for uncovering and cultivating risk management need to concentrate on understanding management thinking, and on recognizing a range of thought processes that manage risk but do not involve a risk register or signed form. Knowing about internal controls is not quite enough.

In principle, there is no difference between a risk management system and an internal control system. You may feel differently and there are many views on this, but the scope of each phrase seems to be getting wider and they are converging. However, there are some big practical differences between the things we usually think of as risk responses and the things we usually think of as internal controls.

The archetypal internal control is carried out by a clerk, involves little judgment, is routine, and probably covers something to do with book-keeping or financial commitments. The archetypal risk response is carried out by a manager, involves considerable judgment, is not routine, and is more likely to be about general business or project matters.

The risk responses we auditors already know and love include sign-offs and authorizations, documents, risk-response tables, testing, insurance, contingency planning, IT security, and contracting.

The behaviors and techniques we tend to be less confident with include:

- keeping options open
- trying things to see what works
- incremental delivery of projects
- use of time buffers (as in the Critical Chain method of project management)
- decisions based on uncertainty rather than on specific risks
- design of control systems from risk factors (i.e., without itemizing risks or control objectives)
- other uses of risk factors (beyond the credit decisions everyone knows about)
- statistical process control
- certain other ideas in quality concerning requirements
- explicit representation of uncertainty in models used in decisions (including rolling forecasts)
- conversational skills that probe for uncertainty and risk

There are probably more besides.

Without an increased appreciation of these latter techniques we are likely to miss a lot of relevant management thinking and will struggle to make good recommendations for improvement. Auditees don't want recommendations that all involve documenting information on new forms and collecting formal sign-offs. This is a huge area, and one can never stop learning.

Examples

Test your knowledge. What risk thinking do you see in the following scenarios and what, if anything, would you recommend?

1. An improvement plan

Scenario

A service department has been challenged to improve its performance by a certain amount on various metrics. The improvement cannot be achieved without innovation, as new resources are not available.

To meet this challenge, a plan is devised with over 30 improvement actions, some more specific than others. The plan is extensively circulated and the plan document is formally approved at a high level.

A monitoring group meets regularly to assess progress against the plan and deal with problems. Measures of progress have been identified. Actions have been prioritized rigorously.

What risk managing activities do you see here, and what could be improved?

On the plus side, the plan has been documented (reducing risk of miscommunication), has been reviewed widely, and has formal approval; there is a monitoring group that meets regularly and they have measures of progress (needed because things may not go according to plan); and actions have been prioritized (reflecting an awareness of uncertainty as to how many of the actions can be carried out).

On the other hand, bearing in mind that innovation was required, they seem over-confident that their improvements will be effective and that their prioritization is correct. More should have been said in the plan about using experience to find out as early as possible which actions appear to be effective, and to generate improved actions. The monitoring group is only assessing progress against the plan, and this again reflects an assumption that the plan is correct. Progress should be assessed against the most recent forecasts and revised plans that reflect what has been learned so far.

2. A software project

Scenario

A bespoke workflow system for a large firm of solicitors is being constructed by an in-house IT team. They have already delivered a crude working system that is used and useful to one department. The plan shows a large number of small deliveries over the next 7 months, all individually worthwhile. So far, the plan has been changed several times with some deliveries changing scope and sequence. The later deliveries are not specified in any detail.

The IT team meets its sponsor weekly and each month it meets a steering committee to discuss the value of the software so far and what can be expected out of future deliveries.

The project has an overall budget set, and a final deadline agreed with senior partners in the firm.

How do you assess this? Is it even a project?

On the plus side, the combination of incremental delivery, evolution of the plan, and active reconsideration of the benefits with stakeholders gives a project with an inherently outstanding risk profile. Unexpected events that would derail and devalue conventional waterfall projects can easily be accommodated. Benefits and learning are happening early. (If you want to know more about this, try Googling for "Evo" and "Active Benefits Realization.")

The only negative clearly indicated in the scenario is that an overall budget and deadline have been set. Although this is probably the result of the firm's usual business planning procedures, it is at odds with the thinking behind evolutionary projects. Evolutionary projects should logically continue until there is something better to do with the resources. Every small increment delivered is evaluated to ensure that it is worthwhile. This means the scope of the project is not fixed at the outset, though it will have been considered.

3. Running conferences

Scenario

An organization has a department that runs conferences. Their biggest decisions concern which conferences to present. To help with the decision they have a formal procedure for evaluating potential conferences.

Although most costs are easy to predict, and the price is a matter of choice, it is usually hard to predict how many people will actually pay to attend. Although some conferences are established annual events that attract a fairly predictable number of delegates, there is also a need for more topical, specialist conferences. They have tried various forecasting methods and a group of experienced people sits down to discuss each conference proposed and to agree on an estimated number of attendees. The estimate is used in a spreadsheet model and the resulting expected profit or loss is considered in deciding whether to present the conference.

Once they have decided to go ahead there is a process for promoting the event and, if interest is much lower than expected, they have sometimes had to cancel and apologize. By this stage, the largest cost incurred is usually a nonrefundable deposit on the venue.

The organization asks you to suggest a way of reaching better forecasts.

How might you advise them?

On the plus side, they recognize the importance of their forecasts and get a group together to benefit from a broad range of experience and opinions. Some of their forecasting methods may even have been quite sophisticated, although no doubt you could suggest something new.

However, the real problem here is that, however good the forecast, they throw away most of the information in it when they reduce the forecast to a single estimated number for attendees.

With their current modeling method, the fact that the deposit is nonrefundable never affects the output of the model, which is clearly wrong.

They are risking large errors in the projected profit/loss due to the "flaw of averages," which applies when the relationship between inputs and outputs is not linear. In this case, the relationship is not linear because they have the option of canceling.

It's almost certain that they are also failing to model the advantage of being able to gather some information before deciding whether to cancel. This failure to account for the value of their option to cancel is another reason why their projections could be materially inaccurate.
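
To make both points concrete, here is a minimal sketch in Python of the kind of calculation involved. It is not the department's actual spreadsheet; the prices, costs, and demand distribution are invented purely for illustration. It compares the profit computed from a single estimated attendance (their current method) with the expected profit when attendance is treated as uncertain and the option to cancel, losing only the nonrefundable deposit, is modeled explicitly.

```python
import random

# All figures below are invented for illustration only.
PRICE = 500              # fee per delegate
VENUE_DEPOSIT = 20_000   # nonrefundable deposit on the venue
OTHER_COSTS = 30_000     # costs avoided if the event is cancelled in time
MEAN_ATTENDANCE = 120    # the group's single agreed estimate

def profit_if_run(attendance):
    """Profit when the conference goes ahead regardless of demand.
    Note that whether the deposit is refundable makes no difference here."""
    return attendance * PRICE - VENUE_DEPOSIT - OTHER_COSTS

def profit_with_cancel_option(attendance):
    """Profit when the organizers can cancel after gauging demand,
    in which case only the nonrefundable deposit is lost."""
    return max(profit_if_run(attendance), -VENUE_DEPOSIT)

def sample_attendance():
    """An uncertain, skewed demand distribution (purely illustrative)."""
    return max(0.0, random.lognormvariate(4.6, 0.6))

random.seed(1)
draws = [sample_attendance() for _ in range(100_000)]

# Current method: plug one estimated attendance into the model.
point_estimate = profit_if_run(MEAN_ATTENDANCE)

# Explicit uncertainty: average the profit over the whole distribution.
avg_no_option = sum(map(profit_if_run, draws)) / len(draws)
avg_with_option = sum(map(profit_with_cancel_option, draws)) / len(draws)

print(f"Profit at the single estimated attendance:  {point_estimate:10,.0f}")
print(f"Expected profit, always running the event:  {avg_no_option:10,.0f}")
print(f"Expected profit, with the option to cancel: {avg_with_option:10,.0f}")
```

Because profit is a straight-line function of attendance when the event always goes ahead, the single-estimate figure and the first expected value come out close together. Once the option to cancel is included, the expected profit changes, which is the effect the single-estimate model can never show, and it is only at that point that the nonrefundability of the deposit makes any difference to the numbers.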

Once they start modeling their uncertainty explicitly they will probably realize that they can change the way they promote conferences and select venues to gather more information about demand earlier on, cheaply delay the final cancellation decision, and reduce the size of nonrefundable deposits.

4. A program of training courses

Scenario

IXYZ, a membership organization, offers a range of training courses to its members and the general public. The courses are promoted and administered by IXYZ but presented by trainers from various companies.

A printed catalogue of courses is produced each calendar year and a great deal of thought goes into deciding what courses to offer in it. (No other courses are run.)

Companies wanting to present courses submit course proposals for consideration in May of the preceding year and the submissions are sifted by a committee that meets several times before final decisions are made about what to include, when, and how many times.

IXYZ's course selection committee uses its accumulated experience of past courses, knowledge of trends, and a points system for evaluating submissions. It is clear that the right people are on the committee, that they consider each course carefully and consistently, and that the financial commitment the catalogue represents is well understood.

Is this how you would want to do it?

Much effort has gone into managing the risk of choosing unpopular courses by careful evaluation of each proposal. There is no shortage of sign-offs and precautions. By normal standards of internal control this is squeaky clean, even though it is difficult to think of a worse way to manage a training program.

The quickest they can react to a topical issue is 7 months, and that is only possible if the issue happens to arise in May and someone immediately proposes a relevant course. If something happens in June, just after the deadline for submissions, it will be at least 18 months before any response is possible.

The long cycle time means that learning from experience takes years when it should take just weeks or a few months. It is very difficult for them to experiment with new ideas. Consequently, their training catalogue will probably remain devoid of topical and leading edge courses, instead featuring old favorites, year after year, with falling returns. All this because they happen to print an annual catalogue.

Conclusion

Consider the preceding examples. Imagine the conversations that would have to take place to uncover those facts and arrive at better ways of managing uncertainty and risk. The changes are to the way the work is done, not extra procedures added on top. Both performance and management are enhanced. This is natural embedding. The examples also show how this is a natural development of conventional internal audit work, though it does require knowledge of risk-smart management that goes beyond the usual repertoire.

