Comments on the COSO ERM Public Exposure 2016

In June, COSO, the Committee of Sponsoring Organizations of the Treadway Commission, requested comments on a new draft of its framework.  I discovered this two days before the due date for comments, and rushed to respond.  My comments are below. The document is available for public review here.

Most of my comments about this draft address Section 2, which deals with terminology. I’d like to stress that this concern stems not from a desire for semantic purity but from observed miscommunication and misunderstanding between ERM professionals and those in various business units, as well as from a lack of conceptual clarity about risk within ERM.

Before diving into that topic in detail, I’ll offer two general comments based on observations from work in industry. I think we all agree that risk management must be a process, not a business unit. Despite this, many executives still equate risk management with regulatory compliance or risk transfer through insurance. That thinking was apparent in the Protiviti and EIU surveys of the 2000s, and, despite the optimism of Deloitte’s 2013 survey, is readily apparent if one reads between its lines. As with information technology, risk management is far too often viewed as a department down the hall rather than an integral process. Sadly, part of this problem seems to stem from ERM’s self-image; ERM is often called “an area of management” in ERM literature. Risk management can no more be limited to an area of management than can engineering or supply-chain management; i.e., these activities require management, not just a department called Management.

My second general comment is that the framework expends considerable energy on risk management but comparatively little on risk assessment. It is hard to imagine how risks can be managed without first being assessed; i.e., risks must first be identified and measured before they can be managed.

Nearly all risk management presentations lean on imagery and examples from aerospace and other human endeavors where inherently dangerous activities have been made safe through disciplined risk analysis and management. Many ERM practitioners believe that their best practices and frameworks draw heavily on the body of knowledge developed in aviation over the last 70 years. This belief is not totally justified. ERM educators and practitioners often use aerospace metaphors (or nuclear, mountaineering, scuba, etc.) but are unaware that the discipline of aerospace risk assessment and management categorically rejects certain axioms of ERM – particularly those tied to the relationships between the fundamental concepts of risk, likelihood or probability, severity and uncertainty. I’d like to offer here that moving a bit closer to conceptual and terminological alignment with the aerospace mindset would better serve the objectives of COSO.

At first glance ERM differs greatly in objective from aircraft safety, and has a broader scope. This difference in scope might be cited as a valid basis for the difference in approaches and mindsets I observe between the two domains. I’ll suggest that the perception of material differences is mostly an illusion stemming from our fear of flying and from minimal technical interchange between the two domains. Even assuming, for sake of argument, that aerospace risk analysis is validation-focused rather than a component of business decision-making and strategy, certain fundamental concepts would still be shared. I.e., in both cases we systematically identify risks, measure their impact, modify designs and processes to mitigate them, and apply the analysis of those risks to strategic decisions where we seek gain. This common thread running through all risk management would seem to warrant commonality in perspective, ideology, method, and terminology. Yet fundamental conceptual differences exist, which, in my view, prevent ERM from reaching its potential.

Before discussing how ERM might benefit from closer adherence to mindsets fostered by aerospace risk practice (and I use aerospace here as a placeholder – nuclear power, weapons systems, mountaineering and scuba would also apply) I’d like to stress that both probabilistic and qualitative risk analysis of many forms profoundly impact strategic decisions of aircraft makers. At McDonnell Douglas (now Boeing) three decades ago I piloted an initiative to use probabilistic risk analysis in the conceptual-design phase of aircraft models considered for emerging markets (as opposed to merely in the realm of reliability assessment and regulatory compliance). Since risk analysis is the only rational means for allocating redundancy within complex systems, the tools of safety analysis entered the same calculus as those evaluating time-to-market, financing, credit, and competitive risk.

In the proposed framework, I have significant concerns about the definitions given in Section 2 (“Understanding the Terms”). While terminology can be expected to vary across disciplines, I submit that these definitions do not serve COSO’s needs, and that they hamper effective communication between organizations. I’ll offer suggested revisions below.

P22 begins:

“There is risk in not knowing how an entity’s strategy and business objectives may be affected by potential events. The risk of an event occurring (or not), creates uncertainty.”

It then defines risk, given the context of uncertainty specified above:

Risk: “The possibility that events will occur and affect the achievement of strategy and business objectives.”

The relationship between risk and uncertainty expressed here seems to be either backward or circular. That is, uncertainty always exists in setting an entity’s strategy and business objectives. That uncertainty exists independent of whether a party has a stake in the success of the entity. Uncertainty – the state of not being definitely known, being undecided, or having doubt – entails risk, as “risk” is commonly used in society, most of business, science, and academia, only for those with a stake in the outcome.

I am aware that in many ERM frameworks, risk is explicitly defined as uncertainty about an outcome that can be either beneficial or undesirable. Such usage of the term has two significant problems. First, it causes misunderstandings in communications between ERM insiders and those affected by their decisions. Second, even within ERM, practitioners drift between that ERM-specific meaning and the meaning used by the rest of the world. This is apparent in the frequent use of expressions such as “risk mitigation” and “risk avoidance” within ERM literature. Use of these phrases clearly indicates a scope of “risk” limited to unwanted events, not to desired outcomes. Logically, no one would seek to mitigate benefit.

While the above definition of risk doesn’t explicitly connect beneficial outcomes with risk, the implicit connection is obvious in the relationships between risk and the other defined terms. If risk is “the possibility that events will occur” and those events can be beneficial or undesirable, then, as defined, the term risk covers both beneficial and undesirable events. Risk then communicates nothing beyond uncertainty about those events. As such, risk becomes synonymous with uncertainty.

Equating risk with uncertainty is unproductive, and expressing uncertainty as a consequence of risk (as stated at the beginning of P22) puts the cart before the horse. The general concept in risk studies is that risk is a consequence of uncertainty, not the cause of uncertainty. Decisions would be easy – virtually automatic – if uncertainty were removed from the picture.

Uncertainty about potential outcomes, some of which are harmful, is a necessary but insufficient feature of risk. The insufficiency of uncertainty alone in expressing risk is apparent if one considers, again, that no risk exists without a potential loss. Uncertainty exists at the roulette wheel regardless of your participation. You have risk only if you wager. The risk rises as your wager rises. Further, for a given wager, your risk is higher in American roulette than in European roulette because American roulette’s additional possible outcome – the double-zero (not present elsewhere) – reduces your probability of winning, i.e., increases your uncertainty. Thus rational management of risk entails recognition of two independent components of risk – uncertainty and loss. Below I suggest a revision of the definition of risk to accommodate this idea.
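To make the roulette arithmetic concrete, here is a minimal Python sketch. The wager amount and the choice of an even-money bet on red are my own illustrative assumptions, not part of the COSO material; the point is simply that the same wager carries roughly twice the expected loss on an American wheel as on a European one.

```python
# Minimal sketch (illustrative assumptions): expected loss of an even-money bet on red.
def expected_loss(wager, pockets, winning_pockets):
    """Expected loss = wager * (P(lose) - P(win)) for a bet paying even money."""
    p_win = winning_pockets / pockets
    return wager * ((1 - p_win) - p_win)

wager = 100.0
european = expected_loss(wager, pockets=37, winning_pockets=18)  # single zero
american = expected_loss(wager, pockets=38, winning_pockets=18)  # zero and double zero

print(f"Expected loss, European roulette: ${european:.2f}")  # ~ $2.70
print(f"Expected loss, American roulette: ${american:.2f}")  # ~ $5.26
```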

Understanding risk to involve both uncertainty and potential loss provides consistency with usage of the term in the realms of nuclear power, aerospace, medicine, statistical process control in manufacturing, and math and science in general.

When considering uncertainty’s role in risk (the two terms have profoundly different meanings), we can distinguish several interpretations of uncertainty. In math, philosophy, and logic, uncertainty usually refers to quantities that can be expressed as a probability – a value between zero and one – whether we can state that probability with confidence or not. We measure our uncertainty about the outcome of rolling a die by, assuming a fair die, examining the sample space. Given six possible outcomes of presumed equal likelihood, we assign a probability of 1/6 to each possible outcome. That is a measurement of our uncertainty about the outcome. Rolling a die thousands of times gives experimental confirmation of our uncertainty measurement. We express uncertainty about Manhattan being destroyed this week by an asteroid through a much different process. We have no historical (frequency) data from which to draw. But by measuring the distribution, age, and size of asteroid craters on the moon we can estimate the rate of large asteroid strikes on the earth. This too gives a measure of our uncertainty about Manhattan’s fate. We’re uncertain, but we’re not in a state of complete ignorance.
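As a small illustration of the die example (my own sketch, not part of the COSO material), the following Python snippet rolls a simulated fair die many times and compares the observed relative frequencies with the assigned probability of 1/6:

```python
# Minimal sketch: experimental confirmation of the 1/6 uncertainty measurement.
import random
from collections import Counter

rolls = 60_000
counts = Counter(random.randint(1, 6) for _ in range(rolls))

for face in range(1, 7):
    observed = counts[face] / rolls
    print(f"face {face}: observed {observed:.4f}  vs  assigned {1/6:.4f}")
```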

But we are ignorant of truly unforeseeable events – what Rumsfeld famously called unknown unknowns. Not knowing even what a list of such events would contain could also be called uncertainty; but it is a grave error to conflate that conception of uncertainty (perhaps better termed ignorance) with uncertainty about the likelihood of known possible events. Much of ERM literature suffers from failing to make this distinction.

An important component of risk management is risk analysis, in which we diligently and systematically aim to enumerate all possible events, thereby minimizing our ignorance – moving possible outcomes from the realm of ignorance to the realm of uncertainty, which can be measured, though sometimes only by crude estimates. It’s crucial to differentiate ignorance and uncertainty in risk management, since the former demands thoroughness in identifying unwanted events (often called hazards, though ERM restricts that term to a subset of unwanted events), while the latter is a component of a specific, already-identified risk.

Beyond facilitating communication between ERM practitioners and those outside ERM, a more disciplined use of language – using these separate concepts of risk, uncertainty, and ignorance – will promote conceptual clarity in managing risk.

A more useful definition of risk should include both uncertainty and loss and might take the form:

Risk:  “The possibility that unwanted events will occur and negatively impact the achievement of strategy and business objectives.”

To address the possible objection that risk may have a “positive” (desirable) element, note that risk management exists to inform business decisions; i.e., making good decisions involves more than risk management alone; it is tied to values and data external to risks. Nothing is lost by restricting risk to the unwanted consequences of unwanted events. The choice to accept a risk for the purpose of achieving a desirable outcome (gain, reward) is informed by thorough assessment of the risk. Again, without uncertainty, we’d have no risk; without risk, decisions would be easy. The possibility that by accepting a managed risk we may experience unforeseen benefits (beyond those for which the decision to accept the risk was made) is not excluded by the above proposed definition of risk. Finally, my above proposed definition is consistent with the common conception of risk-reward calculus.

One final clarification: I am not proposing that risk should in any way be an arithmetic product of quantified uncertainty and the quantified cost of the potential loss. While occasionally useful, that approach assumes risk neutrality, an assumption that can rarely be justified and is at odds with most people’s sense of risk tolerance. For example, we have no basis for assuming that a bank would consider one loss of a million dollars to be an equivalent risk to 10,000 losses of $100 each, despite both having the same mathematical expectation (expected value, or combined cost of the loss).
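The sketch below is my own variation on the bank example, with assumed probabilities attached to each exposure (nothing here comes from the COSO draft). It shows that two exposures with identical expected loss can have very different loss distributions – precisely why a probability-times-severity figure cannot stand in for risk on its own.

```python
# Minimal sketch (assumed numbers): same expected loss, very different risk.
import numpy as np

rng = np.random.default_rng(0)
trials = 100_000
p = 0.01  # assumed probability of each individual loss event

# Exposure A: one chance of a single $1,000,000 loss.
losses_a = (rng.random(trials) < p) * 1_000_000

# Exposure B: 10,000 independent chances of a $100 loss each.
losses_b = rng.binomial(n=10_000, p=p, size=trials) * 100

print(f"expected loss A: ${losses_a.mean():,.0f}")                    # ~ $10,000
print(f"expected loss B: ${losses_b.mean():,.0f}")                    # ~ $10,000
print(f"P(loss >= $500,000), A: {(losses_a >= 500_000).mean():.3f}")  # ~ 0.010 (ruinous tail)
print(f"P(loss >= $500,000), B: {(losses_b >= 500_000).mean():.3f}")  # ~ 0.000 (tightly clustered)
```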

As an example of the implicit notion of a positive component of risk (as opposed to a positive component of informed decision-making), P25 states:

“Organizations commonly focus on those risks that may result in a negative outcome, such as damage from a fire, losing a key customer, or a new competitor emerging. However, events can also have positive outcomes, and these must also be considered.”

A clearer expression of the relationship between risk (always negative) and reward would recognize that positive outcomes result from deciding to accept managed and understood risks (risks that have been analyzed). With this understanding of risk, common to other risk-focused disciplines, positive outcomes result from good decisions that manage risks, not from the risks themselves.

This is not a mere semantic distinction, but a conceptual one. If we could achieve the desired benefit (“positive outcome”) without accepting the risk, we would certainly do so. This point logically ties benefits to decisions (based on risk analysis), not to risks themselves. A rewording of P25 should, in my view, explain that:

  • events (not risks) may result in beneficial or harmful outcomes
  • risk management involves assessment of the likelihood and cost of unwanted outcomes
  • risks are undertaken or planned-for as part of management decisions
  • those informed decisions are made to seek gains or rewards

This distinction clarifies the needs of risk management and emphasizes its role in good corporate decision-making.

Returning to the concept of uncertainty, I suggest that the distinction between ignorance (not knowing what events might happen) and uncertainty (not knowing the likelihood of an identified event) is important for effective analysis and management of risk. Therefore, in the context of events, the matter of “how” should be replaced with shades of “whether.” The revised definition I propose below reflects this.

The term severity is commonly used to express the cost of the loss component of risk. The definition of severity accompanying P25 states:

Severity: A measurement of considerations such as the likelihood and impacts of events or the time it takes to recover from events.

Since the definition of risk (both the draft original and my proposed revision) entails likelihood (possibility or probability), likelihood should be excluded from a definition of severity; they are independent variables. Severity is a measure of how bad the consequences of the loss can be. I.e., it is simply the cost of the hypothetical loss, if the loss were to occur. Severity can be expressed in dollars or lost lives. Reputation damage, competitive disadvantage, missed market opportunities, and disaster recovery can all ultimately be expressed in dollars. While we may only be able to estimate the cost of a loss, the probability of that loss is independent of its severity.
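To illustrate this independence, here is a minimal sketch in Python (my own illustration; the example events and numbers are invented) that records likelihood and severity as separate fields of a risk entry, in the spirit of the definitions proposed below:

```python
# Minimal sketch: likelihood and severity as independent components of a risk.
from dataclasses import dataclass

@dataclass
class Risk:
    event: str          # the unwanted event
    likelihood: float   # probability (0..1) that the event occurs -- the uncertainty component
    severity: float     # cost of the loss if it occurs, in dollars -- independent of likelihood

fire = Risk(event="warehouse fire", likelihood=0.002, severity=4_000_000)
churn = Risk(event="loss of a key customer", likelihood=0.15, severity=750_000)
```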

Recommended definitions for Section 2:

Event: An anticipated or unforeseen occurrence, situation, or phenomenon of any magnitude, having beneficial, harmful, or unknown consequences.

Uncertainty: The state of not knowing or being undecided about the likelihood of an event.

Severity: A measurement of the undesirability or cost of a loss.

Risk:  The possibility that unwanted events will negatively impact the achievement of strategy and business objectives.


Historical perspective on the divergent concepts of risk, uncertainty, and probability

Despite humanity’s having mastered geometry and the quadratic formula in ancient times, the study of probability and uncertainty dates only to the mid-17th century, when Blaise Pascal was paid by a client to develop mathematics to gain an advantage in gambling. This was the start of the frequentist interpretation of probability, based on the idea that, given a large enough number of trials of a deterministic mechanism, we can predict the relative frequencies of its outcomes. Pierre-Simon Laplace then formalized the subjectivist (Bayesian) interpretation of probability, in which probability refers to one’s degree of belief in a possible outcome. Both these interpretations of probability are expressed as a number between zero and one. That is, they are both quantifications of uncertainty about one or more explicitly identified potential outcomes.

The inability to identify a possible outcome, regardless of probability, stems from ignorance of the system in question. Such ignorance is in some cases inevitable. An action may have unforeseeable outcomes; flipping our light switch may cause a black hole to emerge and swallow the earth. But scientific knowledge combined with our understanding of the wiring of a house gives us license to eliminate that as a risk. Whether truly unforeseeable events exist depends on the situation; but we can say with confidence that many events called black swans, such as the Challenger explosion, Hurricane Katrina, and the 2009 mortgage crisis, were foreseeable and foreseen – though ignored. The distinction between uncertainty about the likelihood of an event and ignorance of the extent of the list of possible events is extremely important.

Yet confusing the inability to anticipate all possible unwanted events with the failure to measure or estimate the probability of identified risks is common in some risk circles. A possible source of this confusion was Frank Knight’s 1921 Risk, Uncertainty and Profit. Knight’s contributions to economic and entrepreneurial theory are laudable, but his understanding of set theory and probability was poor. Despite this, Knight’s definitions linger in business writing. Specifically, Knight defined “risk” as “measurable uncertainty” and “uncertainty” as “unmeasurable uncertainty.” Semantic incoherence aside, Knight’s terminology was inconsistent with all prior use of the terms uncertainty, risk, and probability in mathematical economics and science. (See chapters 2 and 10 of Stigler’s The History of Statistics: The Measurement of Uncertainty before 1900 for details.)

The understanding and rational management of risk require that we develop and maintain clarity around the related but distinct concepts of uncertainty, probability, severity, and risk, regardless of terminology. Clearly, we can navigate through some level of ambiguous language in risk management, but the current lack of conceptual clarity about risk in ERM has not served its primary objective well. Hopefully, renewed interest in making ERM integral to strategic decisions will allow a reformulation of the fundamental concepts of risk.

One thought on “Comments on the COSO ERM Public Exposure 2016”

  1. Hi,

    I came across your site while researching Risk and am in agreement with much of what you say. I’m looking at how we change the focus from Risk Management to Objective Management whereby we look at Objectives, Enablers, Blockers, Uncertainty and Execution.

    One point that I wish to raise is your definition of Risk:

    “Risk: The possibility that unwanted events will negatively impact the achievement of strategy and business objectives.”

    Do you need to add “strategy” into the definition as the main purpose of strategy is to achieve business objectives?

    Regards,

    David

