Risk managers are in desperate need
of reliable methods for measuring and managing operational risks. In this first
in a series of articles, Samir Shah describes several promising methods for
quantifying operational risks.
The risk management industry has seen a tremendous surge of interest in measuring
and managing operational risks. This surge is the result of a combination
of recent regulatory developments in corporate governance and capital adequacy,
and a growing realization that an enterprisewide view of risk management is
simply good business. The wave of recent well-publicized corporate failures
has shown that, more often than not, the culprit was an operational risk—for
which no capital is held—rather than market, credit, or insurance risks.
In response, regulators in Canada, the United Kingdom, and Australia have
revised corporate governance standards to hold directors responsible for managing
all risks: market, credit, insurance, legal, technology, strategic, regulatory,
etc. The Basel Committee has proposed an operational risk capital charge for
banks to protect against "…failed internal processes, people and systems or
from external events." Risk managers are in desperate need of reliable
methods for measuring and managing operational risks.
This series of articles will describe several methods that are promising
candidates for quantifying operational risks.
Before we can discuss modeling operational risks, it is useful to first
understand the unique characteristics of operational, or "op," risks and their
implications for modeling methods.
The endogenous and dynamic nature of op risks suggests a greater reliance
on expert input and professional judgment to fill data gaps—at least until
companies gather enough historical data over varying business environments.
Use of operational strategies to mitigate op risks suggests a causal modeling
approach that managers can use to perform "what-if" analyses. After all, the
goal of risk management is to reduce op risks, not just measure them.
There is a continuum of methods to model risks (see Figure 1). Although there
are many ways to classify these modeling methods, for our purpose it is useful
to organize methods based on the extent to which they rely on historical data
versus expert input. This list of methods is by no means exhaustive. However,
it illustrates very nicely that there is a large inventory of risk modeling methods
across finance, engineering, and decision science disciplines that can be drawn
on to suit a particular circumstance.
Market, credit, and insurance risks rely heavily on statistical analysis
of historical data for quantification. These risks are modeled primarily
using the methods on the left side of Figure 1.
Operational risks can also be modeled using these methods when there is an
adequate amount of representative historical data. High-frequency, low-severity
op risks, such as bank settlement errors, usually generate enough data to use
methods based on statistical analysis. Even in this example, though, as banks
implement straight-through processing (STP), the risk will change, and the
historical data may not be a reliable indicator of prospective risks.
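To make the statistical end of the continuum concrete, the sketch below runs a frequency-severity Monte Carlo simulation for a settlement-error-type risk: a Poisson event count combined with lognormal loss severities. The distributions and every parameter value are illustrative assumptions, not figures from the article; a real analysis would fit them to the bank's own loss history.

```python
import math
import random
import statistics

random.seed(0)

# Illustrative assumptions (not figures from the article):
FREQ_MEAN = 20.0              # mean settlement errors per month (Poisson)
SEV_MU, SEV_SIGMA = 6.0, 1.0  # lognormal severity parameters (log-dollars)
N_SIMS = 5000                 # number of simulated months

def sample_poisson(lam):
    """Knuth's algorithm; adequate for moderate lam."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def simulate_monthly_loss():
    """Aggregate loss = sum of a Poisson number of lognormal severities."""
    n_events = sample_poisson(FREQ_MEAN)
    return sum(random.lognormvariate(SEV_MU, SEV_SIGMA) for _ in range(n_events))

losses = sorted(simulate_monthly_loss() for _ in range(N_SIMS))
mean_loss = statistics.mean(losses)
var_99 = losses[int(0.99 * N_SIMS)]  # 99th-percentile aggregate loss

print(f"mean monthly loss: {mean_loss:,.0f}")
print(f"99th percentile:   {var_99:,.0f}")
```

The gap between the mean and the 99th percentile is exactly the kind of tail measure a capital charge is meant to cover; as the STP example suggests, the fitted parameters go stale when the underlying process changes.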
Decision scientists have long relied on the methods listed on the right side
of Figure 1 to quantify risks when there is little or no objective data. They
have had to rely almost exclusively on expert input to quantify risks, such
as the likelihood of success or failure of a new drug in the early stages of research.
Over time, they have refined these methods to minimize the pitfalls and biases
arising from estimating subjective probabilities, thereby increasing the reliability
of these approaches.
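One common elicitation device from decision analysis is the three-point estimate, in which an expert supplies low, most-likely, and high values that are then converted into a point estimate or a full sampling distribution. The technique and all the numbers below are our illustration of expert-input quantification, not examples from the article.

```python
import random
import statistics

random.seed(1)

# Hypothetical three-point expert estimate of a single loss (dollars):
low, most_likely, high = 10_000, 25_000, 90_000

# PERT-style point estimate: (low + 4 * mode + high) / 6
pert_mean = (low + 4 * most_likely + high) / 6

# The same three points also define a full sampling distribution:
samples = [random.triangular(low, high, most_likely) for _ in range(10_000)]

print(f"PERT mean:       {pert_mean:,.0f}")
print(f"triangular mean: {statistics.mean(samples):,.0f}")
```

Structured formats like this are one of the refinements that reduce the estimation biases the article mentions: the expert anchors on a range rather than a single number.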
The methods listed in the middle of Figure 1 rely on a combination of historical
data, to the extent it is available, and expert input as needed to fill data
gaps. Most of these methods are borrowed from other disciplines.
As in the case of Goldilocks, for op risks, "The statistical methods require
toooo much data," "The decision science methods rely toooo much on expert input,"
and "The methods in the middle are juuust right!" These methods offer the best
match to the unique characteristics of op risks.
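As one example of how a "middle of the continuum" method blends the two sources, the sketch below applies conjugate Gamma-Poisson Bayesian updating to an expert's prior estimate of annual incident frequency, using a few years of observed counts. The method choice and all numbers are our illustration of the blending idea, not items taken from Figure 1.

```python
# Expert prior on annual incident frequency, expressed as a
# Gamma(alpha, beta) distribution with mean alpha / beta.
alpha_prior, beta_prior = 4.0, 1.0  # prior mean: 4 incidents per year

# Sparse historical data: three years of observed incident counts
# (hypothetical numbers).
observed = [2, 7, 6]

# Conjugate Gamma-Poisson update: add the total count to alpha and
# the number of observation periods to beta.
alpha_post = alpha_prior + sum(observed)  # 4 + 15 = 19
beta_post = beta_prior + len(observed)    # 1 + 3  = 4

print(f"prior mean:     {alpha_prior / beta_prior:.2f}")  # 4.00
print(f"posterior mean: {alpha_post / beta_post:.2f}")    # 4.75
```

As more observation years accumulate, the data term dominates the prior, which matches the article's point that expert input fills data gaps only until companies gather enough history.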
As businesses have become more complex and the interdependencies have increased,
managers have struggled to maintain control and make decisions under uncertainty.
Use of enterprise data warehousing and data mining has substantially increased
the amount of data that is available to managers. However, the sad truth is
that the terabytes of data have not significantly increased their understanding
of the enterprisewide business dynamics.
The complexity of these systems is increasing faster than our knowledge
of them. Managers have responded by focusing on smaller areas of their business
and becoming more specialized. They have a much deeper understanding of their
own domain but a far weaker understanding of how it interacts with others.
Modeling techniques need to be flexible enough to consolidate knowledge that
is fragmented across many experts. They also need to effectively leverage both
data and expert input in order to develop a clearer, more reliable representation
of the business.
The following methods for measuring and managing operational risks are described
in detail in separate articles in this series.
Opinions expressed in Expert Commentary articles are those of the author and are
not necessarily held by the author's employer or IRMI. Expert Commentary articles
and other IRMI Online content do not purport to provide legal, accounting, or other
professional advice or opinion. If such advice is needed, consult with your attorney,
accountant, or other qualified adviser.
© 2000-2015 International Risk Management Institute, Inc. (IRMI). All rights reserved.