Category Archives: Enterprise Risk Management

The State of Risk Management

Norman Marks recently posted some thoughtful comments on the state of risk management after reading the latest Ponemon survey, “The Imperative to Raise Enterprise Risk Intelligence.”

The survey showed some expected results like the centrality of reputation and cyber risk concerns. It also found little recent progress in bridging silos between legal, IT and finance, which is needed for operational risk management to be effective. Sadly, half of the polled organizations lack a formal budget for enterprise risk management.

The Ponemon report differentiates ERM from enterprise risk intelligence by characterizing ERM as the application of rigorous and systematic analyses of organizational risks and enterprise risk intelligence as the insight needed to drive business decisions related to governance, risk and compliance.

Noting that only 43 percent of respondents said risk intelligence integrates well with the way business leaders make decisions, Marks astutely observes that we should not be surprised that ERM lacks budget. If the CEO and board don’t think risk management works, then why fund it?

Marks writes often on the need for an overhaul of ERM doctrine. I share this view. In his post on the Ponemon report, he offers eight observations, each implying a recommendation for fixing ERM. I strongly agree with six and a half of them, and would like to discuss those where I see it differently.

His points 4 and 5 are:

4. Risk practitioners too often are focused on managing risks instead of achieving business objectives. There’s a huge difference.

5. Risk practitioners don’t connect with business executives because they talk technobabble instead of the language of the business. A discussion of risk appetite or a risk appetite framework is not something that any executive focused on results will want to attend.

My interviews in recent months with boards and CEOs indicated those leaders thought almost the exact opposite. They suggested that risk managers should support business decisions by doing a better job of

  • identifying risks – more accurately, identifying unwanted outcomes (hazards, in my terminology)
  • characterizing hazards and investigating their causes, effects (and therefore severity) and relevant systems
  • quantifying risks by assessing the likelihoods for each severity range of each hazard
  • enumerating reasonable responses, actions and mitigations for those risks
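
To make the list above concrete, here is a minimal data sketch in Python. The hazard, the numbers, and the field names are all invented for illustration; nothing below comes from Basel II or any particular framework.

    # Illustrative hazard-centered risk record (all values invented).
    hazard = {
        "name": "loss of a sole-source supplier",
        "causes": ["supplier insolvency", "regional disaster"],
        "effects": "production halt; contract penalties",
        # Likelihood is assessed per severity range, not collapsed
        # into a single score:
        "risks": [
            {"severity_usd": (1e5, 1e6), "annual_probability": 0.05},
            {"severity_usd": (1e6, 1e7), "annual_probability": 0.01},
        ],
        "responses": ["qualify a second source", "hold safety stock"],
    }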

Note that this list is rather consistent, at least in spirit, with Basel II and some of the less lofty writings on risk management.

My understanding of the desires of business leaders is that they want risk management to be deeper and better, not broader in scope. Sure, silos must be bridged, but risk management must demonstrate much more rigor in its “rigorous and systematic analysis” before ERM will be allowed to become Enterprise Decision Management.

It is clear, from ISO 31000’s definition of risk and the whole positive-risk fetish, that ERM aspires to be in the decision analysis and management business, but the board is not buying it. “Show us core competencies first,” says the board.

Thus I disagree with Norman on point 4. On point 5, I almost agree. Point 5 is not a bare fact, but a fact with an interpretation. Risk practitioners don’t connect with business executives; Norman suggests the reason is that risk managers talk technobabble. I suggest they too often talk gibberish. This may include technobabble, if you take technobabble to mean nonsense and platitudes expressed through the misuse of technical language. CEOs aren’t mystified by heat maps; they’re embarrassed by them.

Norman seems to find risk-appetite frameworks similarly facile, so I think we agree. But concerning the “techno” in technobabble, I think boards want better and real technical info, not less technical info.

Since most of this post addresses where we differ, I’ll end by adding that Marks, along with Tim Leech and a few others, deserves praise for a tireless fight against a seemingly unstoppable but ineffectual model of enterprise risk management. Pomp, structure, and a compliance mindset do not constitute rigor; and boards and CEOs have keen detectors for baloney.

 

Risk Culture

Risk culture has been a hot topic of late. For example, it’s common to hear claims that culture is the most undervalued aspect of risk, or that it is the element most critical for the Board’s management of risks. If that seems a stretch, consider our recent credit crunch, and see the film The Big Short. The importance of culture in corporate risk may be the one thing on which we all agree – all but a few die-hard quants.

Despite agreement on the importance of risk culture, the topic gets rather thin coverage in many frameworks. What then, might an ideal risk culture be?

On most accounts, risk culture involves the values, norms, beliefs, ethics and attitudes about risk shared by a group. Most writings on the topic also include the claim that senior management must be the driver of change to an effective risk culture. It’s a plausible claim, since there are few alternative sources. Regulatory bodies don’t seem to have that effect on employees, and organic growth of optimal risk culture seems unlikely.

Two fields I have experience in – aviation and pharmaceuticals – immediately come to mind. In aviation, risk is deeply embedded at nearly all levels of organizations. Oddly, the aviation industry started out with an affable relationship with its regulator. It has cooled slightly in recent decades, but is still today far from contentious. In pharmaceuticals, risk culture is poorly developed, and relationships with the FDA are often adversarial.

This dichotomy likely stems more from accidental environmental factors than from any inherent differences in dispositions or competencies between the fields. Commercial aviation was lucky enough to emerge at a time when the FAA was so resource-strapped that it was forced into a tight partnership with aircraft builders – a situation from which we all benefited greatly. The early FDA had a much broader scope, and was regulating a vastly larger number of suppliers (food, drugs, tobacco, etc.) who were much less virtuous. The FDA’s short leash had the unwanted side-effect of fostering a culture where risk management is equated with regulatory compliance. Attempts to move beyond that state (e.g., in ICH Q8, 9, and 10) have been slow to progress.

Lessons from the comparison between the two fields? To start, risk culture is real. Safety risk in passenger flight has fallen by a factor of a thousand or more, in a risk culture that extends from subcontractors to pilots to controllers. Technological advances cannot claim all the credit for this. Aviation workers are proud of their work. The motivation for doing the right thing is intrinsic, and the goals of workers align reasonably well with those of management and regulators.

Second, no external agent (agency) can supply your firm with risk-avoidance. A regulator might protect society from a firm’s evils and errors, but it won’t protect the firm from itself. The FDA only cares about a pharma firm’s bottom line to the extent that it seeks to prevent drug-availability crises.

The uncommonly beneficial state of risk culture in commercial aviation – not imposed, but grown organically – could be taken as evidence that kick-starting something similar in an arbitrary firm is impossible. It isn’t. But it will require a different tool kit than what’s in the standard ERM bag, because we’re now squarely in the realm of Change Management.

Michael Beer and John Kotter are my two favorite Change Management writers (Beer hates the term). They disagree on quite a lot; but they agree that any time the CEO needs to push a cultural change downstream, he first has to be seen as walking the walk. That is, there must be a vision; and management must embody it. The vision need not be mystical, Beer points out.

Further, employees must believe top and middle management is committed to the vision; and that management isn’t shallow, or deceiving themselves with hogwash about yet another strategic initiative.

Kotter and Beer, along with Bert Spector and Russell Eisenstat, all agree that under-communicating the vision – in this case, the risk culture objective – is a leading cause of failed transformation efforts. Frequent communications, using every possible channel, over a long period, are essential. The purpose is not to coerce workers into compliance. It is to demonstrate the relevance of the vision and to train by example. Kotter notes that even with several communications per week, if management behavior is antithetical to the vision, cynicism spreads fast, and no one believes the communications.

Drawing on the aviation example, I think we might strengthen the Change Management experts’ points for the specific area of risk culture by observing that clear goals, purpose, autonomy, continuous feedback, and a sense of control greatly add to the development of inner standards and pride of work. These intrinsic motivators apply at all levels, from factory workers to the CFO. Worker engagement leads to trust; and trust promotes acceptance of shared values, norms, beliefs, and ethics, which is what definitions of risk culture rightly tell us should be our goal.

 – – – – –

Are you in the San Francisco Bay area?

If so, consider joining the Risk Management meetup group.

Risk management has evolved separately in  various industries. This group aims to cross-pollinate, compare and contrast the methods and concepts of diverse areas of risk including enterprise risk (ERM), project risk, safety, product reliability, aerospace and nuclear, financial and credit risk, market, data and reputation risk.

This meetup will build community among risk professionals – internal auditors and practitioners, external consultants, job seekers, and students – by providing forums and events that showcase current trends, case studies, and best practices in our profession with a focus on practical application and advancing the state of the art.

https://www.meetup.com/San-Francisco-Risk-Managers/

ISO 31000 and Those Who Don’t Know History

William Storage – Dec 8, 2016
VP, LiveSky, Inc.
Visiting Scholar, UC Berkeley History of Science

Risk: “the effect of uncertainty on objectives.”
ISO 31000 risk definition

ISO 31000, along with other frameworks, uses a definition of risk that is not merely incompatible with the common business and historical usage; it is highly destructive to its own goals. A comment on a recent LinkedIn post about “positive risk” asked fellow risk managers, “can we grow up?” I share the frustration. ERM must step into the real world, meeting business on its own terms – literally.

The problem with an offbeat definition of risk isn’t just a matter of terminology. The bad definition is at the heart of several derivative concepts, which ultimately lead to contradictions and confusion. That confusion is not lost on CEOs and boards of directors. Proponents claim that these audiences welcome ERM and that they align strategies accordingly, e.g., COSO 2009: “boards and management teams are embracing the concept of ERM”. But dig into this recent Deloitte survey, like many before it, and you’ll see that the self-congratulatory self-assessment projected onto boards comes with some less optimistic hard data. For example, Deloitte’s data actually shows that just over half of even financial-service boards get updates on top risks, and less than half of those get such updates more than once a year.

I’ve recently had the chance to speak about risk management with a few Fortune-500 CEOs (telecom, insurance and healthcare) and a number of their board members. Unsurprisingly, these folk tend to be learned – some downright expert in science and math. Many were aware of ERM’s quirky use of “risk” and related terms central to science, and did not need prompting to express dismay. All five healthcare execs I spoke with told me their boards have no contact with ERM output.

A retired CEO told me she suspected that ERM’s “positive risk” concept is a turf grab – a way for risk managers to inject themselves into strategic decisions. Of course, risk managers have good evidence that risk should move upstream in the decision process. But idiosyncratic language and muddled reinterpretations of core analytical concepts are unlikely to persuade educated executives. If you think otherwise, try searching the web for praise of an ERM framework by a board of directors or top executive.

To understand why the issue of defining risk is one of several big changes that ISO 31000 and some of its brethren must undergo, a historical perspective on risk and the roots of ERM’s conception of it may help.

Risk started with probability theory, which, oddly, did not emerge until the 16th century. Before that, despite widespread gambling, humans, possibly for religious reasons, could not imagine any way to predict the future. As historian Ian Hacking  (The Emergence of Probability) wrote, “someone with only the most modest knowledge of probability mathematics could have won himself the whole of Gaul in a week.”

Then Gerolamo Cardano realized that, whether or not through the will of God, rolling two dice resulted in more sevens than twos. Pascal and Fermat later devised a means of calculating probability based on a known problem-space. Soon after, John Graunt realized he could predict future death rates based on historical data. With help from Huygens and Bernoulli, statistical inference was born.

While annuities and mutual-aid societies existed in ancient Rome, modern insurance had to wait for Graunt’s concepts to spread. Only then could probability and statistical inference (as these terms are used where italicized above) become a rational basis for setting premiums, as shown by Edmond Halley, who discovered other regularities in the natural world.

“Insurance Against Risk”

Risk insurance was soon widespread. Risk’s Latin root means danger, and that’s how the term was used in insurance. The 1828 American Dictionary of the English Language says risk signifies a degree of hazard or danger, and explains that “the premiums of insurance are calculated upon the risk.” In insurance, science, medicine, and engineering, risk is a combination of the likelihood and severity of a hazard (potential loss); and that is how the term is used everywhere outside of ERM and some Project Management imitators.

For example, in Google’s data, the top 25 two-word collocations starting with “risk” all associate risk with cost or loss:

[Figure: the top Google bigrams beginning with “risk”]

Further, in Google’s data “positive risk” or similar expressions do not occur in the first 10,000 bi-grams ending in “risk,” despite the popularity of that concept in blog posts and on LinkedIn.

Defining risk as the effect of uncertainty on objectives causes many problems. One is that we don’t know the context of uncertainty; another is that it omits mention of loss. The rationale for this omission is that the consequences associated with a risk can enhance the achievement of objectives.

This rationale confuses risk-reward calculus with the concept of risk alone. Despite claiming to be neutral about risk (not the same thing as risk-neutrality), nearly all usage in ISO 31000 is in terms of risk being tolerated, retained/transferred, shared, reduced, controlled, mitigated or avoided.

Uncertainty

To understand risk as the effect of uncertainty on objectives, we must know what is meant by uncertainty. Again, this isn’t just an exercise in philosophy of language. Uncertainty has been a problem term since Frank Knight (Risk, Uncertainty & Profit, 1921) chose to redefine it (to misuse it, according to Frank Ramsey and others of Knight’s contemporaries) in two ways – incompatible with each other and with the standard use in math and science. We see echoes of Knight’s work in risk frameworks.

Knight’s concept of uncertainty relevant to this discussion is the one in which he equates risk with “measurable uncertainty”:

“To preserve the distinction which has been drawn in the last chapter between the measurable uncertainty and an unmeasurable one we may use the term “risk” to designate the former and the term “uncertainty” for the latter.” 

Knight’s critics (as we can infer Ramsey, Kolmogorov, von Mises and de Finetti were) might point out that Knight has constructed a self-referential definition; but a charitable reading of Knight is that risk equals uncertainty, and uncertainty equals ignorance in the non-pejorative sense, i.e., “unknown unknowns.”

Even in the charitable interpretation, Knight’s usage makes dialog with math and science nearly impossible, since in those realms we call the measure of uncertainty probability (whether the frequentist or subjectivist variety). That is, it is not merely Knight’s language that is at odds with math and science; it is his world view and ontology.

Effects of Uncertainty

If the uncertainty in ISO 31000’s definition of risk is the Knightian variety, i.e., ignorance, then uncertainty describes an agent’s state of mind. The immediate effect of that uncertainty is necessarily a reflection on his/her/its ignorance, if there is an effect at all (a person unaware of his uncertainty would not be uncertain). Given that the only possible first effect of awareness of a state of ignorance is cognitive or emotional, defining risk as the effect of uncertainty (the sort of Knightian uncertainty described above) is unworkable. Risk is certainly not an emotional response or a mental state of reflection, yet that is what a literal reading of ISO 31000 would require, assuming Knightian uncertainty.

If instead of Knight’s understanding of uncertainty, we use the math/science meaning of the term, things are only slightly better. If uncertainty involves a known problem space (as opposed to ignorance) the effect of uncertainty in any situation would be to affect our decisions. We might deliberate on what to do about quantified uncertainty (and therefore quantified risk). If we follow a subjectivist interpretation of probability we might choose to gather more information with which to refine our estimated probabilities (modify our uncertainty by updating our priors). But in neither of these cases, where uncertainty is not ignorance, would we call what we’re doing about uncertainty (the effect it has on us) “risk.” Here, uncertainty is a component of risk; but risk is not the effect of uncertainty on objectives.
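
As a sketch of that subjectivist refinement – using a standard Beta-Binomial update with invented numbers, not anything prescribed by ISO 31000:

    # Refining an uncertain failure probability with new evidence.
    # Prior belief: p ~ Beta(a, b); after k failures in n trials,
    # the posterior is Beta(a + k, b + n - k).
    a, b = 1.0, 99.0                   # prior mean 0.01 (invented)
    k, n = 2, 50                       # evidence: 2 failures in 50 trials
    a_post, b_post = a + k, b + (n - k)
    print(a / (a + b))                 # prior mean: 0.01
    print(a_post / (a_post + b_post))  # posterior mean: 0.02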

An obvious remedy is to abandon arcane conceptions of risk and accept that a few centuries of evolution of rational thought have given us a decent alternative. Risk is a combination of the likelihood of an unwanted occurrence and its severity. This holds however we choose to measure or estimate likelihood, and regardless of how we measure severity. It does not require that we multiply likelihood times severity; and it allows that taking risks might have benefits. Further, it addresses the role of analysis of risks in decision making, i.e., “objectives.” I think this is where ISO 31000 was heading, but it went off course, leaving much confusion in its wake. It’s time for a correction.

– – –


Risk Neutrality and Risk Appetite Skewness

In a recent post I argued that risk frameworks’ models of an entity’s risk appetite contain implicit risk-neutrality. Some readers understood me to say that the frameworks promote indifference toward taking or not taking a well-motivated business risk. That wasn’t my intent; I don’t think risk frameworks have that particular problem.

Risk registers often model risk as the arithmetic product of likelihood (probability) and the cost of an unwanted event. By doing this, risk frameworks assume an enterprise is indifferent to two risks having the same numerical value of that product, where one risk has high probability and low cost and another has high cost and low probability.
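
A two-line sketch of that assumption, with invented numbers:

    # Two very different risks with the same probability-times-cost score.
    risk_a = 0.50 * 1_000        # frequent and cheap
    risk_b = 0.0005 * 1_000_000  # rare and ruinous
    print(risk_a, risk_b)        # 500.0 500.0 - "equal" on a register

A register that ranks by this product is indifferent between the two; most enterprises are not.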

Frameworks further mischaracterize the enterprise, resulting in poor risk guidance and crypto-normativity – implicit bias that tells the enterprise what its values should be rather than supporting a decision process consistent with those values. Even if users of frameworks compensate for implicit risk-neutrality, they must then deal with the presumption that risk aversion (or risk seeking) is constant across costs and opportunities. This is a highly inaccurate model of how humans and enterprises address risk.

The example in my risk-neutrality post was equivalent to a single horse race with high and low odds options. That is, in a race, one horse has high odds (low probability – high winnings) while another has low odds (high probability – low winnings).

It might be more useful to view business decisions as a day at the races rather than a single race. On any given day at Churchill Downs, not every race offers an extreme low-probability bet, so a risk seeker would likely skip betting on those races. In addition to picking horses, we must pick the races in which we place bets and decide how much to bet.

How enterprises behave in an equivalent business scenario depends on their values, their distributed knowledge of the domain, and some irrational beliefs. I’m not concerned here with the latter, and risk frameworks do little to dispel such beliefs. I’ll assume, for sake of argument, that an enterprise’s picks of races and bet amounts are justified.

With that assumption, evidence still suggests the complexity of judgment in picking races and the amount to wager (risk preferences) is high, and that risk frameworks cannot accommodate it.

Continuing with the horse race analogy, the work of several researchers has shown that the risk appetite of real horse-race gamblers can be modeled with a utility function that, in addition to the mean and variance of returns, considers skewness.

At low odds (high probability – low winnings) the gamblers are risk averse, but for high odds (low probability – high winnings) they are risk seeking.

Assume, for sake of argument, that all available bets at the track have roughly the same expected value, i.e., the track or bookie’s income is from margin, not speculation. This is usually true, although bookmakers sometimes adjust odds and point spreads to increase the number of bettors against a horse perceived as being on a winning streak (thereby making the wager literally unfair).

But not all races have a high-odds (low probability – high winnings) option. In such races the gambler might still bet, but be risk-averse, while being risk-seeking in races that do have a high-odds option. Golec and Tamarkin cover this in Bettors Love Skewness, Not Risk, at the Horse Track. Garrett and Sobel found the same for state lotteries, explaining why otherwise risk-averse people pay a dollar for lottery tickets with an expected value of fifty cents.

The economic utility function of a risk-averse entity is concave (blue below), while that of a risk seeker is convex (red). Golec and Tamarkin modeled the utility function of many gamblers as a curve of order 3 (a cubic), as seen in green below.

[Figure: concave (risk-averse), convex (risk-seeking), and cubic utility curves]
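
Below is a rough sketch of such a cubic utility function. The coefficients are invented purely for illustration; Golec and Tamarkin fit theirs to actual betting data.

    # A cubic (skewness-loving) utility over wealth change x.
    # Coefficients are illustrative only.
    def u(x):
        return x - 0.02 * x**2 + 0.0003 * x**3

    def expected_utility(gamble):              # gamble: [(prob, payoff), ...]
        return sum(p * u(x) for p, x in gamble)

    favorite  = [(0.90, 11.1), (0.10, -100.0)]   # low odds, EV ~ 0
    long_shot = [(0.01, 990.0), (0.99, -10.0)]   # high odds, EV ~ 0
    print(expected_utility(favorite))    # negative: declines the favorite
    print(expected_utility(long_shot))   # positive: takes the long shot

Both gambles have roughly zero expected value, yet the cubic bettor refuses one and takes the other – skewness, not expected value, drives the choice.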

The preferences of organizations, whether reasonable or unreasonable in the view of any particular observer, may be beyond the scope of risk management. If risk frameworks care to judge the justification of preferences, they should do so explicitly, rather than embedding implicit risk-neutrality (or any other utility function) into the frameworks. Beyond finding risk registers insufficient as a basis for enterprise decision-making, we must accept that they aren’t merely insufficient; they are outright wrong, or worse.

 

–  –  –



Risk Neutrality and Risk Frameworks

William Storage – Oct 29, 2016
VP, LiveSky, Inc.,  Visiting Scholar, UC Berkeley History of Science

Wikipedia describes risk-neutrality in these terms: “A risk neutral party’s decisions are not affected by the degree of uncertainty in a set of outcomes, so a risk-neutral party is indifferent between choices with equal expected payoffs even if one choice is riskier.”

While a useful definition, this statement is still problematic, since we don’t all agree on what “riskier” means. We can compare both the likelihoods and the costs of different risks, but comparing their riskiness using a one-dimensional range (i.e., higher vs. lower) requires a scalar calculus of risk. If risk is a combination of probability and severity of an unwanted outcome, riskier might equate to a larger value of the arithmetic product of the relevant probability and severity. But defining risk as such a scalar (area under the curve, therefore one dimensional) value is a big step, one which analysis of human behavior suggests is not at all an accurate representation of how we perceive risk. It implies risk-neutrality.

Most people agree, as Wikipedia states, that a risk-neutral party’s decisions are not affected by the degree of uncertainty in a set of outcomes. On that view, a risk-neutral party is indifferent between all choices having equal expected payoffs.

Under this definition, if risk-neutral, you would have no basis for preferring any of the following four choices over another:

1) A 50% chance of winning $100.00
2) An unconditional award of $50.00
3) A 0.01% chance of winning $500,000.00
4) A 90% chance of winning $55.56

If risk-averse, you’d prefer choices 2 or 4. If risk-seeking, you’d prefer 1 or 3.
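
A quick check of the arithmetic, with the 0.01% chance written as 0.0001:

    # Expected value of each choice - $50 in every case.
    choices = {1: (0.50, 100.00),
               2: (1.00, 50.00),
               3: (0.0001, 500_000.00),
               4: (0.90, 55.56)}
    for n, (p, payoff) in sorted(choices.items()):
        print(n, round(p * payoff, 2))   # each prints 50.0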

Now let’s imagine, instead of potential winnings, an assortment of possible unwanted events, which we can call hazards, for which we know, or believe we know, probability values. One example would be to simply turn the above gains into losses:

1) A 50% chance of losing $100.00
2) An unconditional payment of $50.00
3) A 0.01% chance of losing $500,000.00
4) A 90% chance of losing $55.56

In this example, there are four different hazards. To be accurate, we observe that loss of money is not a useful statement of a hazard. Loss of a specific amount of money is. The idea that rational analysis of risk entails quantification of hazards (independent of whether probabilities are quantified) is missed by many risk management efforts, and is something I discuss here often. For now, note that this example uses four separate hazards, each having different probabilities, resulting in four risks, all having the same $50 expected value, labeled 1 through 4. Whether those four risks can be considered equal depends on whether you are risk-neutral.

If forced to accept one of the four risks, a risk-neutral person would be indifferent to the choice; a risk seeker might choose risk 3, etc. Banks are often found to be risk-averse. That is, they will pay more to prevent risk 3 than to prevent risk 4, even though they have the same expected value. Viewed differently, banks often pay much more to prevent one occurrence of hazard 3 than to prevent 9000 occurrences of hazard 4, i.e., $500,000 worth of them. Note the use of the terms “hazard 3” and “risk 3” in the preceding two sentences; hazard and risk have very different meanings here.

If we use the popular heat-map approach (sometimes called a risk register) to visualize risks by plotting the four probability-cost vector values (coordinates) on a graph, they will fall on the same line of constant risk. Lines of constant risk, as risk is envisioned in popular risk frameworks, take the form of y = 1/x. To be precise, they take the form of y = a/x, where a represents a constant number of dollars called the expected value (or mathematical expectation, or first moment, depending on the area of study). For those using the heat-map concept, this number is exactly equal to the “risk” being modeled. In other words, in their model, risk equals probability times cost of the hazard: R = p * c. So if we graph probability on the x-axis and cost on the y-axis, we are graphing c = R/p, which is analogous to the y = a/x curve mentioned above. A sample curve of this form, representing a line of constant risk, appears below on the left.

In my example above, the four points (50% chance of losing $100, etc.) have a large range of probabilities. Plotting these actual values on a simple grid isn’t very informative because the data points are far from the part of the plotted curve where the bend is visible (plot below on the right).

[Figures: the constant-risk curve c = R/p on linear axes (left); the four example risks plotted on the same curve (right)]

Good students of high-school algebra know a fix for the problem of graphing data of this sort (monomials): use log paper. By plotting equations of the form described above using logarithmic scales for both axes, we get a straight line, having data points that are visually compressed, thereby taming the large range of the data, as below.
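
A minimal plotting sketch of both views, assuming numpy and matplotlib are available:

    # A line of constant risk, c = R/p, on linear and log-log axes.
    import numpy as np
    import matplotlib.pyplot as plt

    R = 50.0                         # constant expected value, in dollars
    p = np.logspace(-4, 0, 100)      # probabilities from 0.0001 to 1
    c = R / p                        # cost along the constant-risk line

    fig, (ax1, ax2) = plt.subplots(1, 2)
    ax1.plot(p, c); ax1.set_title("linear axes: sharp bend")
    ax2.loglog(p, c); ax2.set_title("log-log axes: straight line")
    for ax in (ax1, ax2):
        ax.set_xlabel("probability"); ax.set_ylabel("cost")
    plt.show()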

Popular risk frameworks use a different approach. Instead of plotting actual probability values and actual costs, they plot scores, say from one to ten. Their reason for doing this is more to convert opinions into numerical values than to cluster data for easy visualization. Nevertheless, plotting scores – on linear, not logarithmic, scales – inadvertently clusters data, though the data might have lost something in the translation to scores in the range of 1 to 10. In heat maps, this compression of data has the undesirable psychological effect of implying much smaller ranges for the relevant probability values and costs of the risks under study.

A rich example of this effect is seen in the 2002 PMBOK (Project Management Body of Knowledge) published by the Project Management Institute. It assigns a score (which it curiously calls a rank) of 10 for probability values in the range of 0.5, a score of 9 for p = 0.3, and a score of 8 for p = 0.15. It should be obvious to most having a background in quantified risk that differentiating failure probabilities of .5, .3, and .15 is pointless and indicative of bogus precision, whether the probability is drawn from observed frequencies or from subjectivist/Bayesian-belief methods.
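
A small simulation suggests why. With a modest number of observations, the frequencies produced by p = 0.5, 0.3, and 0.15 scatter enough that neighboring ranks blur together (a sketch only; the seed and sample size are arbitrary):

    # Observed frequencies from 20 trials, five runs per probability.
    import random
    random.seed(1)
    for p in (0.5, 0.3, 0.15):
        freqs = [sum(random.random() < p for _ in range(20)) / 20
                 for _ in range(5)]
        print(p, freqs)   # ranges for neighboring probabilities overlap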

The methodological problem described above exists in frameworks that are implicitly risk-neutral (most are, with a few noted exceptions, e.g., commercial aviation, medical devices, and some of NASA). The real problem with the implicit risk-neutrality of risk frameworks is that very few of us – individuals or corporations – are risk-neutral. And no framework has any business telling us that we should be. Saying that it is somehow rational to be risk-neutral pushes the definition of rationality too far; doing so crosses the line from deductive (or inductive) reasoning into human values. It is convenient, for those seeking the persuasive power of numbers (however arbitrary or error-laden those scores and ranks might be), to model the universe as risk-neutral. But human preferences, values, and ethics need not abide that convenience – one persuasive because of apparent mathematical rigor, but which makes recommendations inconsistent with our values.

As proud king of a small distant planet of 10 million souls, you face an approaching comet that, on impact, will kill one million in your otherwise peaceful world. Your planet’s scientists and engineers rush to build a comet-killer nuclear rocket. The untested device has a 90% chance of destroying the comet but a 10% chance of exploding on launch thereby killing everyone on your planet. Do you launch the comet-killer, knowing that a possible outcome is total extinction? Or do you sit by and watch one million die from a preventable disaster? Your risk managers see two choices of equal risk: 100% chance of losing one million and a 10% chance of losing 10 million. The expected value is one million lives in both cases. But in that 10% chance of losing 10 million, there is no second chance – an existential risk.
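
For what it’s worth, the risk managers’ arithmetic:

    # Expected lives lost - identical for both choices.
    watch_comet = 1.00 * 1_000_000    # certain loss of one million
    launch      = 0.10 * 10_000_000   # 10% chance of losing everyone
    print(watch_comet, launch)        # 1000000.0 1000000.0
    # Equal expected values; only one option includes extinction.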

If these two choices seem somehow different, you are not risk-neutral. If you’re tempted to leave problems like this in the capable hands of ethicists, good for you. But unaware boards of directors have left analogous dilemmas in the incapable hands of facile risk frameworks.

The risk-neutrality embedded in risk frameworks is a subtle and pernicious case of Hume’s Guillotine – an inference from “is” to “ought” concealed within a fact-heavy argument. No amount of data, whether measured frequencies or subjective probability estimates, whether historical expenses or projected costs, even if recorded as PMBOK’s scores and ranks, can justify risk-neutrality to parties who are not risk-neutral. So why do we embed it in our frameworks?

 


“If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion.” – David Hume, An Enquiry Concerning Human Understanding

 

–  –  –



ERM and the Prognostication Everlasting of Thomas Digges

William Storage – Oct 19, 2016
VP, LiveSky, Inc.,  Visiting Scholar, UC Berkeley History of Science

Enterprise Risk Management is typically defined as a means to identify potential events that affect an entity and to manage risk such that it is within the entity’s risk appetite. Whether the “events” in this definition are potential business opportunities or are only potential hazards is a source of confusion. This definition ties a potentially abstract social construct – risk appetite – to the tangible, quantifiable concept of risk. If the events under consideration in risk analysis are business opportunities and not just hazards (in the broader sense of hazard, including, e.g., fraud, insufficient capital, and competition), then the definition also directly entails quantifying the opportunity – its value, time scale, and impact on other mutually-exclusive opportunities. Underlying the complex subject of enterprise risk are the fundamental and quantifiable elements of probability, uncertainty, hazard severity, cash value of a loss, value of a gain, and to some extent, risk appetite or tolerance.

ERM practitioners tend to recognize that these concepts lie at the heart of ERM, but seem less certain about how the concepts relate to one another. In this sense ERM reminds me of the way 16th century proto-scientists wrestled with the concepts of mass, forces, cause and effect, and the difficulties they had separating long-held superstitious and theology-based beliefs from beliefs based on evidence and rational thought.

A great example is Thomas Digges’ 1605 almanac, Prognostication Everlasting, an augmented version of his father’s 1576 almanac of the same name. Both father and son had a keen interest in nature and physics. These writers, like their contemporaries William Gilbert and Galileo, are examples of proto-scientists. In his extended Prognostication, Thomas Digges predicted the weather by a combination of astrology and atmospheric phenomena, including clouds and rainbows. Stars and planets were parts of nature too. Lacking any concept of gravity and how natural forces give rise to observed effects, it seemed reasonable that the position of celestial bodies could impact weather and human life. Digges was able to predict the times of sunrise and high tides surprisingly well. His calculations also predicted when to let blood, induce diarrhea, and employ the medical intervention of bathing. He discouraged bathing when the moon was in Virgo or Capricorn, because these earth signs are naturally at odds with water.

Digges’ weather predictions were both vague and imprecise. It’s hard to tell whether to expect warm and wet, or warm and dry. And though we might expect warm, should we expect it next week or next month?

The almanacs also had another problem, one seen today in many business analyses. Leonard Digges had calculated the distance from Earth to the sphere of the fixed stars to be 358,463.5 miles. Such calculations at best show neglect of significant digits, and at worst are failures of epistemological humility, or even outright crackpot rigor.

Thomas Digges corrected his father’s error here and, going further, posited an endless universe – endless once you travel beyond the crystalline spheres of the heavenly elect, the celestial angels, and the orb of The Great God. Beyond that sphere Digges imagined infinite stars. But he failed to see the sun as a star and the earth as a planet, a conclusion that his more scientifically-minded contemporary, Tycho Brahe, had already reached.

I don’t mean to mock Digges. He wrote the first real defense of heliocentrism in English. Despite pursuing a mixture of superstition, science, and Christianity, Digges was a pioneer. He was onto something – just like practitioners of ERM. For Digges, rationality and superstition could live side by side without conflict. ERM likewise. Digges worked long and hard to form theories, sometimes scoffing dogma, sometimes embracing it. Had he taken the extra step of judging theories on evidential support – something natural philosophers would master over the next century – a line of slick computers would today bear his name.

[Figure: the Copernican universe according to Thomas Digges – “A Perfit Description of the Caelestiall Orbes according to the most aunciente doctrine of the Pythagoreans, latelye revived by Copernicus and by Geometricall Demonstrations”]

Digges’ view of the world, as seen in the above diagram, has many problems. Two of particular significance stem from his retaining Aristotle’s circular orbits and the idea that celestial bodies were attached to crystalline spheres that held them in position. Without letting go of these ancient beliefs, his model of reality was stuck in a rut.

ERM has analogous models of the world – at least the world of risk management. A staple of ERM is the risk register, as seen below. As commonly used, the risk register is a representation of all identified risks using a two-axis world view. Apparently unknown to many practitioners, this model, like Digges’ world view, contains wrong beliefs that, like circular orbits, are so deeply embedded as to be invisible to its users. Two significant ones come to mind: a belief in the constancy of risk tolerance across organizations, and a belief in the constancy of risk tolerance across hazard impact levels.

[Figure: an ERM risk-register model of the world (image: Hou710)]

Many ERM practitioners believe risk registers (and heat maps, a closely related model) to be a tool or concept used in aerospace, an exemplar for risk management. This is incorrect; commercial aviation explicitly rejects risk registers precisely because constancy of risk tolerance across hazard severities is not remotely akin to the way human agents perceive risk. Some might argue that all other things being equal, the risk register is still a good model. But that ceteris paribus is far enough from reality to make the point moot. It recalls Nathan Arizona’s famous retort, “yeah, and if a frog had wings…” No human or corporate entity ever had a monolithic risk appetite or one that was constant across impact levels.

The use of risk registers implies agreement with an assumption of risk-neutrality that is never made explicit – never discussed – but for which I can imagine no justification. Should ERM do away with risk registers altogether? Short answer: yes. Replace them with separate functional hazard analyses, business impact analyses, and assessments of the causal factors leading up to the identified hazards.
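
A rough sketch of what that replacement might look like as data. The structure, field names, and numbers are all invented for illustration:

    # Linked analyses replacing a single risk-register row.
    hazard_id = "H-017: loss of customer-facing web service"

    functional_hazard = {
        "id": hazard_id,
        "severity_classes": {"degraded": "minor", "total outage": "major"},
    }
    business_impact = {
        "id": hazard_id,
        "cost_per_hour_usd": 40_000,    # estimate; update with evidence
        "recovery_hours": (2, 12),      # estimated range, not a score
    }
    causal_factors = {
        "id": hazard_id,
        "causes": ["datacenter power loss", "bad deployment", "DDoS"],
    }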

As with proto-science in the age of Thomas Digges, ERM needs to establish exactly what it means by its fundamental terms – things like uncertainty and risk. Lack of terminological clarity is an obstacle to conceptual clarity. The goal here is not linguistic purity, or, as William Gilbert, a contemporary of Digges put it, the “foolish veils of vocabularies,” but the ability of practitioners to get beyond the illusion of communication.

Also like proto-science, ERM must embrace mathematics and probability. Mapping known or estimated cost values into ranges such as minor, moderate and significant does no intellectual work and attempts to cure imprecision with vagueness. The same goes for defining probability values of less than one per thousand as remote. Quantified estimation is necessary. Make informed estimates and state them clearly. Update your estimates when new evidence appears.

As with science, ERM seeks to make prognostications that can inform good decision-making. It needs method (methodology), but method at a high level rather than processes to be enacted by “risk owners” removed from the decisions the risk analysis was intended to inform. As Michael Power put it, recommendations to embed risk management and internal control systems within all business processes have led to “the wrong kind of embeddedness.” Power suggests that a Business Continuity Management (BCM) approach would be more useful than the limited scope of an internal-controls approach. While Power doesn’t specifically address the concept of objective measurement, it is central to BCM.

Like the proto-science of Thomas Digges, ERM needs to embrace empiricism and objective measurement and to refrain from incantations about risk culture. As Joseph Glanville wrote in 1661, “we believe the [compass] needle without a certificate from the days of old.” Paraphrasing Kant, we can add that theory without data is lame.

There is danger in chasing an analogy too far, but the rough parallels between proto-science and ERM’s current state are instructive. Few can doubt the promise of enterprise risk management; but it’s time to take a step forward.

Comments on the COSO ERM Public Exposure 2016

In June, COSO, the Committee of Sponsoring Organizations of the Treadway Commission, requested comments on a new draft of its framework.  I discovered this two days before the due date for comments, and rushed to respond.  My comments are below. The document is available for public review here.

Most of my comments about this draft address Section 2, which deals with terminology. I’d like to stress that this concern stems not from a desire for semantic purity but from observations of miscommunication and misunderstandings between ERM professionals and those of various business units as well as a lack of conceptual clarity about risk within ERM.

Before diving into that topic in detail, I’ll offer two general comments based on observations from work in industry. I think we all agree that risk management must be a process, not a business unit. Despite this, many executives still equate risk management with regulatory compliance or risk transfer through insurance. That thinking was apparent in the Protiviti and EIU surveys of the 2000s, and, despite the optimism of Deloitte’s 2013 survey, is readily apparent if one reads between its lines. As with information technology, risk management is far too often viewed as a department down the hall, rather than as an integral process. Sadly, part of this problem seems to stem from ERM’s self-image; ERM is often called “an area of management” in ERM literature. Risk management can no more be limited to an area of management than can engineering or supply-chain management; i.e., they require management, not just Management.

My second general comment is that the framework expends considerable energy on risk management but comparatively little on risk assessment. It is hard to imagine how risks can be managed without first being assessed; i.e., managed risks must first be identified and measured.

Nearly all risk management presentations lean on imagery and examples from aerospace and other human endeavors where inherently dangerous activities have been made safe through disciplined risk analysis and management. Many ERM practitioners believe that their best practices and frameworks draw heavily on the body of knowledge developed in aviation over the last 70 years. This belief is not totally justified. ERM educators and practitioners often use aerospace metaphors (or nuclear, mountaineering, scuba, etc.) but are unaware that the discipline of aerospace risk assessment and management categorically rejects certain axioms of ERM – particularly those tied to the relationships between the fundamental concepts of risk, likelihood or probability, severity and uncertainty. I’d like to offer here that moving a bit closer to conceptual and terminological alignment with the aerospace mindset would better serve the objectives of COSO.

At first glance ERM differs greatly in objective from aircraft safety, and has a broader scope. This difference in scope might be cited as a valid basis for the difference in approaches and mindsets I observe between the two domains. I’ll suggest that the perception of material differences is mostly an illusion stemming from our fear of flying and from minimal technical interchange between the two domains. Even assuming, for sake of argument, that aerospace risk analysis is validation-focused rather than a component of business decision-making and strategy, certain fundamental concepts would still be shared. I.e., in both cases we systematically identify risks, measure their impact, modify designs and processes to mitigate them, and apply the analysis of those risks to strategic decisions where we seek gain. This common thread running through all risk management would seem to warrant commonality in perspective, ideology, method, and terminology. Yet fundamental conceptual differences exist, which, in my view, prevent ERM from reaching its potential.

Before discussing how ERM might benefit from closer adherence to mindsets fostered by aerospace risk practice (and I use aerospace here as a placeholder – nuclear power, weapons systems, mountaineering and scuba would also apply) I’d like to stress that both probabilistic and qualitative risk analysis of many forms profoundly impact strategic decisions of aircraft makers. At McDonnell Douglas (now Boeing) three decades ago I piloted an initiative to use probabilistic risk analysis in the conceptual-design phase of aircraft models considered for emerging markets (as opposed to merely in the realm of reliability assessment and regulatory compliance). Since risk analysis is the only rational means for allocating redundancy within complex systems, the tools of safety analysis entered the same calculus as those evaluating time-to-market, financing, credit, and competitive risk.

In the proposed framework, I have significant concerns about the definitions given in Section 2 (“Understanding the Terms”). While terminology can be expected to vary across disciplines, I submit that these definitions do not serve COSO’s needs, and that they hamper effective communication between organizations. I’ll offer suggested revisions below.

P22 begins:

“There is risk in not knowing how an entity’s strategy and business objectives may be affected by potential events. The risk of an event occurring (or not), creates uncertainty.”

It then defines risk, given the context of uncertainty specified above:

Risk: “The possibility that events will occur and affect the achievement of strategy and business objectives.”

The relationship between risk and uncertainty expressed here seems to be either backward or circular. Uncertainty always exists in setting an entity’s strategy and business objectives, and it exists whether or not a party has a stake in the success of the entity. Uncertainty – the state of not being definitely known, being undecided, or having doubt – entails risk, as “risk” is commonly used in society, most of business, science, and academics, only for those with a stake in the outcome.

I am aware that in many ERM frameworks, risk is explicitly defined as uncertainty about an outcome that can be either beneficial or undesirable. Such usage of the term has two significant problems. First, it causes misunderstandings in communications between ERM insiders and those affected by their decisions. Second, even within ERM, practitioners drift between that ERM-specific meaning and the meaning used by the rest of the world. This is apparent in the frequent use of expressions such as “risk mitigation” and “risk avoidance” within ERM literature. Use of these phrases clearly indicates a scope of “risk” limited to unwanted events, not to desired outcomes. Logically, no one would seek to mitigate benefit.

While the above definition of risk doesn’t explicitly connect beneficial outcomes with risk, the implicit connection is obvious in the relationships between risk and the other defined terms. If risk is “the possibility that events will occur” and those events can be beneficial or undesirable, then, as defined, the term risk covers both beneficial and undesirable events. Risk then communicates nothing beyond uncertainty about those events. As such, risk becomes synonymous with uncertainty.

Equating risk with uncertainty is unproductive; and expressing uncertainty as a consequence of risk (as stated at the beginning of P22) puts the cart before the horse. The general concept in risk studies is that risk is a consequence of uncertainty, not the cause of uncertainty. Decisions would be easy – virtually automatic – if uncertainty were removed from the picture.

Uncertainty about potential outcomes, some of which are harmful, is a necessary but insufficient feature of risk. The insufficiency of uncertainty alone in expressing risk is apparent if one considers, again, that no risk exists without a potential loss. Uncertainty exists at the roulette wheel regardless of your participation. You have risk only if you wager, and the risk rises as your wager rises. Further, for a given wager, your risk is higher in American than in European roulette, because American roulette’s additional possible outcome – the double-zero, not present elsewhere – reduces your probability of winning. Thus rational management of risk entails recognition of two independent components of risk: uncertainty and loss. Below I suggest a revision of the definition of risk to accommodate this idea.
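
A quick sketch of the roulette arithmetic, assuming the standard 35-to-1 payout on a $1 single-number bet:

    # Expected value of a $1 single-number bet.
    european = (1/37) * 35 - (36/37) * 1   # one zero pocket
    american = (1/38) * 35 - (37/38) * 1   # zero and double-zero
    print(round(european, 4))              # -0.027
    print(round(american, 4))              # -0.0526
    # No wager, no risk; and the risk scales with the amount wagered.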

Understanding risk to involve both uncertainty and potential loss provides consistency with usage of the term in the realms of nuclear, aerospace, medicine, manufacturing statistical-process control, and math and science in general.

When considering uncertainty’s role in risk (the two have profoundly different meanings), we can consider several interpretations of uncertainty. In math, philosophy, and logic, uncertainty usually refers to quantities that can be expressed as a probability – a value between zero and one – whether we can state that probability with confidence or not. We measure our uncertainty about the outcome of rolling a die by, assuming a fair die, examining the sample space. Given six possible outcomes of presumed equal likelihood, we assign a probability of 1/6 to each possible outcome. That is a measurement of our uncertainty about the outcome. Rolling a die thousands of times gives experimental confirmation of our uncertainty measurement. We express uncertainty about Manhattan being destroyed this week by an asteroid through a much different process. We have no historical (frequency) data from which to draw. But by measuring the distribution, age, and size of asteroid craters on the moon, we can estimate the rate of large asteroid strikes on the earth. This too gives a measure of our uncertainty about Manhattan’s fate. We’re uncertain, but we’re not in a state of complete ignorance.
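
A sketch of both steps for the die – the sample-space measurement and its experimental confirmation:

    # Uncertainty measured from the sample space, then confirmed.
    import random
    random.seed(0)
    theoretical = 1 / 6                       # six equally likely faces
    rolls = [random.randint(1, 6) for _ in range(10_000)]
    observed = rolls.count(3) / len(rolls)    # frequency of one face
    print(round(theoretical, 4), observed)    # both approximately 0.167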

But we are ignorant of truly unforeseeable events – what Rumsfeld famously called unknown unknowns. Not even knowing what a list of such events would contain could also be called uncertainty; but it is a grave error to mix that conception of uncertainty (perhaps better termed ignorance) with uncertainty about the likelihood of known possible events. Much of ERM literature suffers from failing to make this distinction.

An important component of risk-management is risk-analysis in which we diligently and systematically aim to enumerate all possible events, thereby minimizing our ignorance – moving possible outcomes from the realm of ignorance to the realm of uncertainty, which can be measured, though sometimes only by crude estimates. It’s crucial to differentiate ignorance and uncertainty in risk management, since the former demands thoroughness in identifying unwanted events (often called hazards, though ERM restricts that term to a subset of unwanted events), while the latter is a component of a specific, already-identified risk.

Beyond facilitating communications between ERM practitioners and those outside it, a more disciplined use of language – using these separate concepts of risk, uncertainty and ignorance –  will promote conceptual clarity in managing risk.

A more useful definition of risk should include both uncertainty and loss and might take the form:

Risk:  “The possibility that unwanted events will occur and negatively impact the achievement of strategy and business objectives.”

To address the possible objection that risk may have a “positive” (desirable) element, note that risk management exists to inform business decisions; i.e., making good decisions involves more than risk management alone; it is tied to values and data external to risks. Nothing is lost by restricting risk to the unwanted consequences of unwanted events. The choice to accept a risk for the purpose of achieving a desirable outcome (gain, reward) is informed by thorough assessment of the risk. Again, without uncertainty, we’d have no risk; without risk, decisions would be easy. The possibility that by accepting a managed risk we may experience unforeseen benefits (beyond those for which the decision to accept the risk was made) is not excluded by the above proposed definition of risk. Finally, my above proposed definition is consistent with the common conception of risk-reward calculus.

One final clarification: I am not proposing that risk should in any way be an arithmetic product of quantified uncertainty and quantified cost of the potential loss. While occasionally useful, that approach requires a judgment of risk-neutrality that can rarely be justified, and is at odds with most people’s sense of risk tolerance. For example, we have no basis for assuming that a bank would consider one loss of a million dollars to be an equivalent risk to 10,000 losses of $100 each, despite both having the same mathematical expectation (expected value or combined cost of the loss).
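
To put numbers on that example, the sketch below (a toy illustration; the convex disutility function is a standard textbook device, not anything from ERM doctrine) confirms that the two loss scenarios have the same expected value, yet a risk-averse weighting of losses ranks the single million-dollar loss as far worse.

```python
# Scenario A: one loss of $1,000,000. Scenario B: 10,000 losses of $100 each.
loss_a = [1_000_000.0]
loss_b = [100.0] * 10_000

expected_a = sum(loss_a)  # $1,000,000
expected_b = sum(loss_b)  # $1,000,000 -- identical mathematical expectation

def disutility(loss: float) -> float:
    """A convex 'pain' function (equivalent to concave utility of wealth):
    large single losses hurt disproportionately more than many small ones."""
    return loss ** 1.5  # illustrative exponent; any convex-in-loss function works

pain_a = sum(disutility(x) for x in loss_a)   # 1e9
pain_b = sum(disutility(x) for x in loss_b)   # 1e7

print(expected_a == expected_b)  # True: same expected loss
print(pain_a > pain_b)           # True: the single large loss is 'worse'
```

Under this weighting, the bank’s refusal to treat the two scenarios as equivalent is not irrational; it simply reflects a risk tolerance that the bare expected value ignores.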

As an example of the implicit notion of a positive component of risk (as opposed to a positive component of informed decision-making), P25 states:

“Organizations commonly focus on those risks that may result in a negative outcome, such as damage from a fire, losing a key customer, or a new competitor emerging. However, events can also have positive outcomes, and these must also be considered.”

A clearer expression of the relationship between risk (always negative) and reward would recognize that positive outcomes flow from deciding to accept managed and understood risks – risks that have been analyzed. On this understanding of risk, common to other risk-focused disciplines, positive outcomes result from good decisions that manage risks, not from the risks themselves.

This is not a mere semantic distinction, but a conceptual one. If we could achieve the desired benefit (“positive outcome”) without accepting the risk, we would certainly do so. This point logically ties benefits to decisions (based on risk analysis), not to risks themselves. A rewording of P25 should, in my view, explain that:

  • events (not risks) may result in beneficial or harmful outcomes
  • risk management involves assessment of the likelihood and cost of unwanted outcomes
  • risks are undertaken or planned for as part of management decisions
  • those informed decisions are made to seek gains or rewards

This distinction clarifies the needs of risk management and emphasizes its role in good corporate decision-making.

Returning to the concept of uncertainty, I suggest that the distinction between ignorance (not knowing what events might happen) and uncertainty (not knowing the likelihood of an identified event) is important for effective analysis and management of risk. In the context of events, then, the question of “how” an event will affect objectives should give way to shades of “whether” it will occur. The revised definition I propose below reflects this.

The term severity is commonly used to express the cost-of-loss component of risk. The definition of severity accompanying P25 states:

Severity: A measurement of considerations such as the likelihood and impacts of events or the time it takes to recover from events.

Since the definition of risk (both the draft original and my proposed revision) entails likelihood (possibility or probability), likelihood should be excluded from a definition of severity; they are independent variables. Severity is a measure of how bad the consequences of the loss can be – that is, simply the cost of the hypothetical loss, if the loss were to occur. Severity can be expressed in dollars or lost lives; reputation damage, competitive disadvantage, missed market opportunities, and disaster recovery all ultimately can be expressed in dollars. While we may only be able to estimate the cost of a loss, the probability of that loss is independent of its severity.
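
A minimal sketch of the independence argued for here (the class and field names are mine, not drawn from P25 or any standard): model each identified hazard with likelihood and severity as separate attributes, so revising one never disturbs the other.

```python
from dataclasses import dataclass

@dataclass
class Hazard:
    """An identified unwanted event, with likelihood and severity kept independent."""
    name: str
    annual_probability: float  # likelihood of the event occurring in a year
    severity_usd: float        # cost of the loss *if* it occurs, in dollars

# Revising the probability estimate leaves severity untouched, and vice versa.
fire = Hazard("warehouse fire", annual_probability=0.01, severity_usd=2_500_000)
fire.annual_probability = 0.02  # new actuarial data; severity unchanged
```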

Recommended definitions for Section 2:

Event: An anticipated or unforeseen occurrence, situation, or phenomenon of any magnitude, having beneficial, harmful, or unknown consequences.

Uncertainty: The state of not knowing or being undecided about the likelihood of an event.

Severity: A measurement of the undesirability or cost of a loss.

Risk: The possibility that unwanted events will negatively impact the achievement of strategy and business objectives.


Historical perspective on the divergent concepts of risk, uncertainty, and probability

Despite humanity having mastered geometry and the quadratic formula in ancient times, the study of probability and uncertainty dates only to the mid-17th century, when Blaise Pascal, prompted by a gambler’s questions, developed mathematics to gain an advantage in games of chance. This was the start of the frequentist interpretation of probability, based on the idea that the probability of an outcome is its relative frequency across a large number of trials. Pierre-Simon Laplace later formalized the subjectivist (Bayesian) interpretation of probability, in which probability refers to one’s degree of belief in a possible outcome. Both interpretations express probability as a number between zero and one; that is, both are quantifications of uncertainty about one or more explicitly identified potential outcomes.
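
As a toy illustration of the two interpretations (my own example, not from the historical sources), the sketch below estimates the probability of heads for a possibly biased coin both ways: as an observed relative frequency, and as a Bayesian degree of belief obtained by updating a uniform prior to a Beta posterior.

```python
# Toy data: 100 flips of a possibly biased coin, 62 heads observed.
flips, heads = 100, 62

# Frequentist: probability as long-run relative frequency.
freq_estimate = heads / flips  # 0.62

# Bayesian: start with a uniform Beta(1, 1) prior over the coin's bias;
# updating on the data gives a Beta(1 + heads, 1 + tails) posterior.
alpha, beta = 1 + heads, 1 + (flips - heads)
posterior_mean = alpha / (alpha + beta)  # ≈ 0.6176, a degree of belief

print(freq_estimate, posterior_mean)  # both are numbers between zero and one
```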

The inability to identify a possible outcome, regardless of probability, stems from ignorance of the system in question. Such ignorance is in some cases inevitable. An action may have unforeseeable outcomes; flipping our light switch may cause a black hole to emerge and swallow the earth, but scientific knowledge combined with our understanding of the wiring of a house gives us license to eliminate that as a risk. Whether truly unforeseeable events exist depends on the situation; but we can say with confidence that many events called black swans, such as the Challenger explosion, Hurricane Katrina, and the 2008 mortgage crisis, were foreseeable and foreseen – though ignored. The distinction between uncertainty about the likelihood of an event and ignorance of the extent of the list of events is extremely important.

Yet confusing the inability to anticipate all possible unwanted events with a failure to measure or estimate the probability of identified risks is common in some risk circles. A possible source of this confusion was Frank Knight’s 1921 Risk, Uncertainty and Profit. Knight’s contributions to economic and entrepreneurial theory are laudable, but his understanding of set theory and probability was poor. Despite this, Knight’s definitions linger in business writing. Specifically, Knight defined “risk” as “measurable uncertainty” and “uncertainty” as “unmeasurable uncertainty.” Semantic incoherence aside, Knight’s terminology was inconsistent with all prior use of the terms uncertainty, risk, and probability in mathematical economics and science. (See chapters 2 and 10 of Stigler’s The History of Statistics: The Measurement of Uncertainty before 1900 for details.)

The understanding and rational management of risk require that we develop and maintain clarity around the related but distinct concepts of uncertainty, probability, severity, and risk, regardless of terminology. Clearly, we can navigate some level of ambiguous language in risk management, but the current lack of conceptual clarity about risk has not served ERM’s primary objective well. Hopefully, renewed interest in making ERM integral to strategic decisions will allow a reformulation of the fundamental concepts of risk.