Risk Neutrality And Risk Appetite Skewness

In a recent post I argued that risk frameworks’ models of an entity’s risk appetite contain implicit risk-neutrality. Some readers understood me to say that the frameworks promote indifference toward taking or not taking a well-motivated business risk. That wasn’t my intent; I don’t think risk frameworks have that particular problem.

Risk registers often model risk as the arithmetic product of likelihood (probability) and the cost of an unwanted event. By doing this, risk frameworks assume an enterprise is indifferent between two risks having the same numerical value of that product, even when one risk has high probability and low cost and the other has high cost and low probability.
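The product model is easy to sketch. A minimal Python illustration (the two risks and their numbers are invented for the example) shows how very different risks collapse to one score:

```python
import math

# Two illustrative risks: one frequent and cheap, one rare and ruinous.
risk_a = {"p": 0.5,    "cost": 1_800}      # high probability, low cost
risk_b = {"p": 0.0005, "cost": 1_800_000}  # low probability, high cost

def register_score(risk):
    # The risk-register model: risk = probability * cost
    return risk["p"] * risk["cost"]

# The product model scores both risks identically ($900 expected loss),
# so a framework built on it is indifferent between them.
assert math.isclose(register_score(risk_a), register_score(risk_b))
```

Only a risk-neutral decision maker is genuinely indifferent here; anyone else needs a model that keeps probability and cost as separate dimensions.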

Frameworks further mischaracterize an enterprise, resulting in poor risk guidance and crypto-normativity, i.e., implicit bias: telling the enterprise what its values should be rather than supporting a decision process consistent with those values. Even if users of frameworks compensate for implicit risk-neutrality, they must then deal with the frameworks’ presumption that risk aversion or risk seeking is constant across all costs and opportunities. This is a highly inaccurate model of how humans and enterprises address risk.

The example in my risk-neutrality post was equivalent to a single horse race with high and low odds options. That is, in a race, one horse has high odds (low probability – high winnings) while another has low odds (high probability – low winnings).

It might be more useful to view business decisions as a day at the races rather than a single race. On any given day at Churchill Downs, not every race has an extreme low-probability option, so a risk seeker would likely skip betting on those races. In addition to picking horses, we must pick the races in which we place bets and decide how much to bet.

How enterprises behave in an equivalent business scenario depends on their values, their distributed knowledge of the domain, and some irrational beliefs. I’m not concerned here with the latter, and risk frameworks do little to dispel such beliefs. I’ll assume, for sake of argument, that an enterprise’s picks of races and bet amounts are justified.

With that assumption, evidence still suggests the complexity of judgment in picking races and the amount to wager (risk preferences) is high, and that risk frameworks cannot accommodate it.

Continuing with the horse race analogy, the work of several researchers has shown that the risk appetite of real horse-race gamblers can be modeled with a utility function that considers not only the mean and variance of returns but also their skewness.

At low odds (high probability – low winnings) the gamblers are risk averse, but for high odds (low probability – high winnings) they are risk seeking.

Assume, for sake of argument, that all available bets at the track have roughly the same expected value, i.e., the track or bookie’s income comes from margin, not speculation. This is usually true, although bookmakers sometimes adjust odds and point spreads to increase the number of bettors against a horse perceived as being on a winning streak (thereby making the wager literally unfair).

But not all races have a high-odds (low probability – high winnings) option. In such races, the gambler might still bet, but be risk-averse, while being risk-seeking in races that do have a high-odds option. Golec and Tamarkin cover this in Bettors love skewness, not risk, at the horse track. Garrett and Sobel found the same for state lotteries, giving an explanation for why otherwise risk-averse people pay a dollar for lottery tickets with an expected value of fifty cents.

The economic utility function of a risk-averse entity is concave (blue below); that of a risk seeker is convex (red). Golec and Tamarkin modeled the utility function of many gamblers as a curve of order 3 (cubic), as seen in green below.
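The shape they found can be sketched numerically. In the toy cubic below (the coefficients are illustrative, not Golec and Tamarkin’s fitted values), the sign of the second derivative gives the risk attitude at each payoff level: averse over modest payoffs, seeking over long-shot payoffs.

```python
# Illustrative cubic utility u(x) = x - a*x**2 + b*x**3.
# The quadratic term penalizes variance; the cubic term rewards skewness.
a, b = 1e-3, 1e-6

def u(x):
    return x - a * x**2 + b * x**3

def u_curvature(x):
    # Second derivative of u; its sign gives the risk attitude at payoff x.
    return -2 * a + 6 * b * x

assert u_curvature(100) < 0    # concave here: risk-averse at low-odds payoffs
assert u_curvature(1000) > 0   # convex here: risk-seeking at long-shot payoffs
```

The attitude flips at x = a/(3b), about 333 in this sketch, which is how a single utility function can capture both aversion to small favorites and appetite for long shots.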

[Figure: utility curves for risk-averse (blue), risk-seeking (red), and cubic (green) preferences – onriskof.com]

The preferences of organizations, whether reasonable or unreasonable in the view of any particular observer, may be beyond the scope of risk management. If risk frameworks care to judge the justification of preferences, they should do so explicitly, rather than embedding implicit neutrality (or any other utility function) into the frameworks. We must accept that risk registers are not merely an insufficient basis for enterprise decision-making; they are outright wrong, or worse.


–  –  –

In the San Francisco Bay area?

If so, consider joining us in a newly formed Risk Management meetup group.

Risk assessment, risk analysis, and risk management have evolved nearly independently in a number of industries. This group aims to cross-pollinate, compare and contrast the methods and concepts of diverse areas of risk including enterprise risk (ERM), project risk, safety, product reliability, aerospace and nuclear, financial and credit risk, market, data and reputation risk, etc.

This meetup will build community among risk professionals – internal auditors and practitioners, external consultants, job seekers, and students – by providing forums and events that showcase leading-edge trends, case studies, and best practices in our profession, with a focus on practical application and advancing the state of the art.

If you are in the bay area, please join us, and let us know your preferences for meeting times.


Norman Marks on Cyber Security

Norman Marks provides some welcome sanity in a recent post on his blog, On Governance, Risk Management, and Audit. Commenting on an April 2016 white paper, Cyber Security and the Board of Directors, by the Delta Risk company, Marks notes that Delta’s call for educating board members on technical details of cyber risk is likely unproductive.

Delta’s approach seems to stem from their identifying the Statement of Risk Appetite, required for banks by the Basel II accords, as a way for the board to communicate the organization’s risk boundaries and rationale. Delta fails to see that an assessment of risks, given a firm’s operations and objectives, is a discrete task requiring specific skills unlikely to be present on a board. It requires a good deal of rigor and must be continuously maintained. So while cyber risk should be incorporated into a risk-appetite statement, it is fundamentally different from the tasks of establishing priorities and communicating performance expectations.

Marks also gets my praise for, uncommonly in ERM, calling for expressing cyber risk in terms of the potential for a breach to affect the achievement of each of the enterprise’s objectives.

A Functional Hazard Analysis approach to modeling risk (more accurately, modeling hazards) in the context of specific operations and objectives of an enterprise would address this need. The refusal of this industry to use an FHA (or a systematic, enhanced BIA) approach has always puzzled me.

Marks is also admirably critical of the neglect of probability in Delta’s recommendations. As with ERM in general, many in cyber security seem to believe that vagueness is a cure for uncertainty.

While the Delta paper mentions metrics, it does so only in vague terms (“cyber-related status metrics” as KPIs). Marks correctly notes the absence of any metrics for deciding whether a firm’s information security program is effective. He asks how they might measure it.

To that I would also ask a potentially more revealing question: how would you know if it didn’t work? That question can better explore a program’s provisions for low-frequency hazards, since merely searching for confirming evidence (e.g., “we intercepted this one…”) can ignore low-frequency, high-impact hazards that have never occurred simply because of limited exposure time.

Use of FMEAs in risk management is common, despite their limited usefulness as a risk-analysis tool. Use of FHAs (or a structured version of Business Impact Analyses) seems nearly non-existent. I’ll be writing some recommendations for use of FHA in the future. Philosophizing about risk is a poor substitute for modeling hazards.




Risk Neutrality and Risk Frameworks

William Storage – Oct 29, 2016
VP, LiveSky, Inc.,  Visiting Scholar, UC Berkeley History of Science

Wikipedia describes risk-neutrality in these terms: “A risk neutral party’s decisions are not affected by the degree of uncertainty in a set of outcomes, so a risk-neutral party is indifferent between choices with equal expected payoffs even if one choice is riskier.”

While useful, this definition is still problematic, since we don’t all agree on what “riskier” means. We can compare both the likelihoods and the costs of different risks, but comparing their riskiness on a one-dimensional scale (higher vs. lower) requires a scalar calculus of risk. If risk is a combination of probability and severity of an unwanted outcome, riskier might equate to a larger value of the arithmetic product of the relevant probability and severity. But defining risk as such a scalar (and therefore one-dimensional) value is a big step, one which analysis of human behavior suggests is not at all an accurate representation of how we perceive risk. It implies risk-neutrality.

Most people agree, as Wikipedia states, that a risk-neutral party’s decisions are not affected by the degree of uncertainty in a set of outcomes. On that view, a risk-neutral party is indifferent between all choices having equal expected payoffs.

Under this definition, if risk-neutral, you would have no basis for preferring any of the following four choices over another:

1) A 50% chance of winning $100
2) An unconditional award of $50
3) A 0.01% chance of winning $500,000
4) A 90% chance of winning $55.56

If risk-averse, you’d prefer choices 2 or 4. If risk-seeking, you’d prefer 1 or 3.
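A quick sanity check, in Python, that the four choices really do carry the same expected payoff (choice 4 matches to within a cent of rounding):

```python
import math

# (probability, payoff) for each of the four choices above
choices = {
    1: (0.50, 100.00),
    2: (1.00, 50.00),
    3: (0.0001, 500_000.00),
    4: (0.90, 55.56),
}

for label, (p, payoff) in choices.items():
    expected = p * payoff
    # All four expected payoffs are $50.
    assert math.isclose(expected, 50.0, abs_tol=0.01), (label, expected)
```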

Now let’s imagine, instead of potential winnings, an assortment of possible unwanted events, which we can call hazards, for which we know, or believe we know, probability values. One example would be to simply turn the above gains into losses:

1) A 50% chance of losing $100
2) An unconditional payment of $50
3) A 0.01% chance of losing $500,000
4) A 90% chance of losing $55.56

In this example, there are four different hazards. To be accurate, we observe that loss of money is not a useful statement of a hazard. Loss of a specific amount of money is. The idea that rational analysis of risk entails quantification of hazards (independent of whether probabilities are quantified) is missed by many risk management efforts, and is something I discuss here often. For now, note that this example uses four separate hazards, each having different probabilities, resulting in four risks, all having the same $50 expected value, labeled 1 through 4. Whether those four risks can be considered equal depends on whether you are risk-neutral.

If forced to accept one of the four risks, a risk-neutral person would be indifferent to the choice; a risk seeker might choose risk 3, etc. Banks are often found to be risk-averse. That is, they will pay more to prevent risk 3 than to prevent risk 4, even though they have the same expected value. Viewed differently, banks often pay much more to prevent one occurrence of hazard 3 than to prevent 9000 occurrences of hazard 4, i.e., $500,000 worth of them. Note the use of the terms “hazard 3” and “risk 3” in the preceding two sentences; hazard and risk have very different meanings here.

If we use the popular heat-map approach (closely related to risk registers) to visualize risks by plotting the four probability-cost pairs (coordinates) on a graph, they will fall on the same line of constant risk. Lines of constant risk, as risk is envisioned in popular risk frameworks, take the form y = 1/x. To be precise, they take the form y = a/x, where a represents a constant number of dollars called the expected value (or mathematical expectation or first moment, depending on area of study). For those using the heat-map concept, this number is exactly equal to the “risk” being modeled. In other words, in their model, risk equals probability times cost of the hazard: R = p × c. So if we graph probability on the x-axis and cost on the y-axis, we are graphing c = R/p, which is analogous to the y = a/x curve mentioned above. A sample curve of this form, representing a line of constant risk, appears below on the left.

In my example above, the four points (50% chance of losing $100, etc.) have a large range of probabilities. Plotting these actual values on a simple grid isn’t very informative because the data points are far from the part of the plotted curve where the bend is visible (plot below on the right).

[Figure: a line of constant risk, c = R/p (left); the four example points plotted on linear scales (right)]

Good students of high-school algebra know a fix for the problem of graphing data of this sort (power laws): use log paper. By plotting equations of the form described above using logarithmic scales for both axes, we get a straight line, with data points that are visually compressed, thereby taming the large range of the data, as below.
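The trick can be checked numerically: taking logs turns c = R/p into log(c) = log(R) − log(p), a straight line of slope −1, so every equal-risk point shares the same value of log(p) + log(c).

```python
import math

# The four example risks, as (probability, cost) points
points = [(0.50, 100.0), (1.00, 50.0), (0.0001, 500_000.0), (0.90, 55.56)]

# On log-log axes, all four points fall on one straight line:
# log(p) + log(c) equals log(50) for each of them.
for p, c in points:
    assert math.isclose(math.log10(p) + math.log10(c),
                        math.log10(50.0), abs_tol=1e-3)
```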

Popular risk frameworks use a different approach. Instead of plotting actual probability values and actual costs, they plot scores, say from one to ten. Their reason for doing this is more likely to convert an opinion into a numerical value than to cluster data for easy visualization. Nevertheless, plotting scores – on linear, not logarithmic, scales – inadvertently clusters data, though the data might have lost something in the translation to scores in the range of 1 to 10. In heat maps, this compression of data has the undesirable psychological effect of implying much smaller ranges for the relevant probability values and costs of the risks under study.

A rich example of this effect is seen in the 2002 PMBOK (Project Management Body of Knowledge) published by the Project Management Institute. It assigns a score (which it curiously calls a rank) of 10 for probability values of around 0.5, a score of 9 for p = 0.3, and a score of 8 for p = 0.15. It should be obvious to most having a background in quantified risk that differentiating failure probabilities of 0.5, 0.3, and 0.15 is pointless and indicative of bogus precision, whether the probability is drawn from observed frequencies or from subjectivist/Bayesian-belief methods.

The methodological problem described above exists in frameworks that are implicitly risk-neutral (most are, with a few noted exceptions, e.g., commercial aviation, medical devices, and some of NASA). The real problem with the implicit risk-neutrality of risk frameworks is that very few of us – individuals or corporations – are risk-neutral. And no framework has any business telling us that we should be. Saying that it is somehow rational to be risk-neutral pushes the definition of rationality too far; doing so crosses the line from deductive (or inductive) reasoning into human values. It is convenient, for those seeking the persuasive power of numbers (however arbitrary or error-laden those scores and ranks might be), to model the universe as risk-neutral. But human preferences, values, and ethics need not abide that convenience – one persuasive because of its apparent mathematical rigor, yet one that yields recommendations inconsistent with our values.

As proud king of a small distant planet of 10 million souls, you face an approaching comet that, on impact, will kill one million in your otherwise peaceful world. Your planet’s scientists and engineers rush to build a comet-killer nuclear rocket. The untested device has a 90% chance of destroying the comet but a 10% chance of exploding on launch thereby killing everyone on your planet. Do you launch the comet-killer, knowing that a possible outcome is total extinction? Or do you sit by and watch one million die from a preventable disaster? Your risk managers see two choices of equal risk: 100% chance of losing one million and a 10% chance of losing 10 million. The expected value is one million lives in both cases. But in that 10% chance of losing 10 million, there is no second chance – an existential risk.
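The risk managers’ arithmetic, sketched in Python:

```python
import math

# Outcomes as (probability, lives lost)
do_nothing = [(1.0, 1_000_000)]             # certain loss of one million
launch     = [(0.9, 0), (0.1, 10_000_000)]  # 10% chance of extinction

def expected_loss(outcomes):
    return sum(p * lives for p, lives in outcomes)

# Identical expected losses; only one option carries existential risk.
assert math.isclose(expected_loss(do_nothing), expected_loss(launch))
```

A product-of-probability-and-cost model sees these options as interchangeable; a model that accounts for unrecoverable, existential outcomes does not.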

If these two choices seem somehow different, you are not risk-neutral. If you’re tempted to leave problems like this in the capable hands of ethicists, good for you. But unaware boards of directors have left analogous dilemmas in the incapable hands of facile risk frameworks.

The risk-neutrality embedded in risk frameworks is a subtle and pernicious case of Hume’s Guillotine – an inference from “is” to “ought” concealed within a fact-heavy argument. No amount of data, whether measured frequencies or subjective probability estimates, whether historical expenses or projected costs, even if recorded as PMBOK’s scores and ranks, can justify risk-neutrality to parties who are not risk-neutral. So why do we embed it in our frameworks?


“If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion.” – David Hume, An Enquiry Concerning Human Understanding




ERM and the Prognostication Everlasting of Thomas Digges

William Storage – Oct 19, 2016
VP, LiveSky, Inc.,  Visiting Scholar, UC Berkeley History of Science

Enterprise Risk Management is typically defined as a means to identify potential events that affect an entity and to manage risk such that it is within the entity’s risk appetite. Whether the “events” in this definition are potential business opportunities or are only potential hazards is a source of confusion. This definition ties a potentially abstract social construct – risk appetite – to the tangible, quantifiable concept of risk. If the events under consideration in risk analysis are business opportunities and not just hazards (in the broader sense of hazard, including, e.g., fraud, insufficient capital, and competition), then the definition also directly entails quantifying the opportunity – its value, time scale, and impact on other mutually-exclusive opportunities. Underlying the complex subject of enterprise risk are the fundamental and quantifiable elements of probability, uncertainty, hazard severity, cash value of a loss, value of a gain, and to some extent, risk appetite or tolerance.

ERM practitioners tend to recognize that these concepts lie at the heart of ERM, but seem less certain about how the concepts relate to one another. In this sense ERM reminds me of the way 16th century proto-scientists wrestled with the concepts of mass, forces, cause and effect, and the difficulties they had separating long-held superstitious and theology-based beliefs from beliefs based on evidence and rational thought.

A great example is Thomas Digges’ 1605 almanac, Prognostication Everlasting, an augmented version of his father’s 1576 almanac of the same name. Both father and son had a keen interest in nature and physics. These writers, like their contemporaries William Gilbert and Galileo, are examples of proto-scientists. In his extended Prognostication, Thomas Digges predicted the weather by a combination of astrology and atmospheric phenomena including clouds and rainbows. Stars and planets were parts of nature too: lacking any concept of gravity and of how natural forces give rise to observed effects, it seemed reasonable that the position of celestial bodies could affect weather and human life. Digges was able to predict the times of sunrise and high tides surprisingly well. His calculations also predicted when to let blood, induce diarrhea, and employ the medical intervention of bathing. He discouraged bathing when the moon was in Virgo or Capricorn, because these earth signs are naturally at odds with water.

Digges’ weather predictions were both vague and imprecise. It’s hard to tell whether to expect warm and wet, or warm and dry. And though we might expect warm, should we expect it next week or next month?

The almanacs also had another problem seen today in many business analyses. Leonard Digges had calculated the distance from Earth to the sphere of the fixed stars to be 358,463.5 miles. Such calculations at best show a neglect of significant digits and at worst are failures of epistemological humility, or even outright crackpot rigor.

Thomas Digges corrected his father’s error here and, going further, posited an endless universe – endless once you travel beyond the crystalline spheres of the heavenly elect, the celestial angels, and the orb of The Great God. Beyond that sphere Digges imagined infinite stars. But he failed to see the sun as a star and the earth as a planet, a conclusion that his contemporary Giordano Bruno had already reached.

I don’t mean to mock Digges. He wrote the first real defense of heliocentrism in English. Despite pursuing a mixture of superstition, science, and Christianity, Digges was a pioneer. He was onto something – just like practitioners of ERM. For Digges, rationality and superstition could live side by side without conflict. ERM likewise. Digges worked long and hard to form theories, sometimes scoffing dogma, sometimes embracing it. Had he taken the extra step of judging theories on evidential support – something natural philosophers would master over the next century – a line of slick computers would today bear his name.

[Figure: the Copernican universe according to Thomas Digges – “A Perfit Description of the Caelestiall Orbes according to the most aunciente doctrine of the Pythagoreans, latelye revived by Copernicus and by Geometricall Demonstrations”]

Digges’ view of the world, as seen in the above diagram, has many problems. Two of particular significance stem from his retaining Aristotle’s circular orbits and the idea that celestial bodies were attached to crystalline spheres that held them in position. Without letting go of these ancient beliefs, his model of reality was stuck in a rut.

ERM has analogous models of the world – at least the world of risk management. A staple of ERM is the risk register, as seen below. As commonly used, the risk register is a representation of all identified risks in a two-axis world view. Apparently unknown to many practitioners, this model, like Digges’ world view, contains wrong beliefs that, like circular orbits, are so deeply embedded as to be invisible to its users. Two significant ones come to mind: a belief in the constancy of risk tolerance across organizations, and a belief in the constancy of risk tolerance across hazard impact levels.

[Figure: an ERM risk-register model of the world (image: Hou710)]

Many ERM practitioners believe risk registers (and heat maps, a closely related model) to be a tool or concept used in aerospace, an exemplar for risk management. This is incorrect; commercial aviation explicitly rejects risk registers precisely because constancy of risk tolerance across hazard severities is not remotely akin to the way human agents perceive risk. Some might argue that all other things being equal, the risk register is still a good model. But that ceteris paribus is far enough from reality to make the point moot. It recalls Nathan Arizona’s famous retort, “yeah, and if a frog had wings…” No human or corporate entity ever had a monolithic risk appetite or one that was constant across impact levels.

The use of risk registers implies agreement with an assumption of risk-neutrality that is never made explicit – never discussed – but for which I can imagine no justification. Should ERM do away with risk registers altogether? Short answer: yes. Replace them with separate functional hazard analyses, business impact analyses, and assessments of the causal factors leading up to the identified hazards.

As with proto-science in the age of Thomas Digges, ERM needs to establish exactly what it means by its fundamental terms – things like uncertainty and risk. Lack of terminological clarity is an obstacle to conceptual clarity. The goal here is not linguistic purity, or, as William Gilbert, a contemporary of Digges put it, the “foolish veils of vocabularies,” but the ability of practitioners to get beyond the illusion of communication.

Also like proto-science, ERM must embrace mathematics and probability. Mapping known or estimated cost values into ranges such as minor, moderate and significant does no intellectual work and attempts to cure imprecision with vagueness. The same goes for defining probability values of less than one per thousand as remote. Quantified estimation is necessary. Make informed estimates and state them clearly. Update your estimates when new evidence appears.
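A sketch of the information such binning throws away (the thresholds below are invented for illustration; real frameworks vary):

```python
def bin_cost(cost_dollars):
    # Hypothetical framework bins mapping a quantified estimate to a label.
    if cost_dollars < 10_000:
        return "minor"
    if cost_dollars < 1_000_000:
        return "moderate"
    return "significant"

# A 75x difference in estimated cost vanishes into a single label.
assert bin_cost(12_000) == bin_cost(900_000) == "moderate"
```

The mapping is not reversible: once a $12,000 hazard and a $900,000 hazard both read “moderate,” no downstream analysis can recover the difference.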

As with science, ERM seeks to make prognostications that can inform good decision-making. It needs methodology, but method at a high level rather than processes to be enacted by “risk owners” removed from the decisions the risk analysis was intended to inform. As Michael Power put it, recommendations to embed risk management and internal control systems within all business processes have led to “the wrong kind of embeddedness.” Power suggests that a Business Continuity Management (BCM) approach would be more useful than the limited scope of an internal-controls approach. While Power doesn’t specifically address the concept of objective measurement, it is central to BCM.

Like the proto-science of Thomas Digges, ERM needs to embrace empiricism and objective measurement and to refrain from incantations about risk culture. As Joseph Glanville wrote in 1661, “we believe the [compass] needle without a certificate from the days of old.” Paraphrasing Kant, we can add that theory without data is lame.

There is danger in chasing an analogy too far, but the rough parallels between proto-science and ERM’s current state are instructive. Few can doubt the promise of enterprise risk management; but it’s time to take a step forward.

Cato Institute on Immigration Terrorism Risk

Alex Nowrasteh, an immigration policy analyst at the Cato Institute, recently authored a paper entitled “Terrorism and Immigration: A Risk Analysis.” It is not an analysis of risk in the traditional sense; it shows little interest in causes and none in mitigation strategy. A risk-reward study, it argues from observed frequencies of terrorism incidents in the US that restricting immigration is a poor means of reducing terrorism and that the huge economic benefits of immigration outweigh the small costs of terrorism.

That may be true – even if we adjust for the gross logical errors and abuse of statistics in the paper.

Nowrasteh admits that in the developing world, heavy refugee flows are correlated with increased terrorism. He also observes that, since 2001, “only three years were marred by successful foreign-born attacks.” Given his focus on what he calls the chance of being murdered in a terrorist attack (based solely on historical frequencies since 1975), the fact that successful terrorism occurred in only three years seems oddly polemical. What follows? By his lights, the probability of terrorist death stems only from historical frequencies. While honest people disagree about Bayesian probability theory, surely we owe the Bayesians more than blinding ourselves to all but brute averages over a 40-year interval. That is, having noted that heavy refugee flows correlate with terrorism elsewhere, he doesn’t update his prior at all. Further, unsuccessful terrorist attempts have no influence on his estimates.

Nowrasteh writes, “government officials frequently remind the public that we live in a post-9/11 world where the risk of terrorism is so extraordinarily high that it justifies enormous security expenditures.” I don’t know his mindset, but writing this in a “risk analysis” seems poorly motivated at best. He seems to be saying that given the low rate of successful terrorism, security efforts are a big waste. The social justice warriors repeating this quote from his analysis clearly think so:

“The chance that an American would be killed in a terrorist attack committed by a refugee was 1 in 3.64 billion a year.” [emphasis in original]

Nowrasteh develops a line of argument around the cost of a disrupted economy resulting from terrorism events using the 1993 and 2001 WTC attacks and the Boston Marathon bombing, finding that cost to be relatively small. He doesn’t address the possibility that a cluster of related successful attacks might have disproportionately disruptive economic effects.

He makes much of the distinctions between various Visa categories (e.g., tourist, refugee, student) – way too much given that the rate of terrorism in each is tiny to start with, and they vary only by an order of magnitude or so.

These are trifles. Two aspects of the analysis are shocking. First, Nowrasteh repeatedly reports the calculated probabilities of being killed by foreigners of various Visa categories – emphasizing their extreme smallness – with no consideration of base rate. That is, the probability of being murdered at all is already tiny. Many of us might be more interested in a conditional probability: what is the probability that, if you were murdered, the murderer was an immigrant terrorist? Or perhaps, if you were murdered by an immigrant terrorist, how likely is it that the terrorist arrived on a refugee Visa?
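The base-rate point can be made concrete. In the sketch below, only the 1-in-3.64-billion figure comes from the paper; the annual murder probability is an assumption for illustration, chosen to be roughly consistent with a homicide rate of about 5 per 100,000 per year.

```python
# Assumed annual probability that a US resident is murdered (illustrative).
p_murdered = 5e-5

# Annual probability of being killed by a refugee terrorist, per the paper.
p_killed_by_refugee_terrorist = 1 / 3.64e9

# Conditional probability: given that you were murdered, the chance the
# killer was a refugee terrorist.
p_conditional = p_killed_by_refugee_terrorist / p_murdered

# The headline number is tiny largely because being murdered at all is rare.
assert p_conditional < 1e-5
```

Under these assumptions the conditional probability is on the order of five in a million: still small, but a very different quantity from the headline figure.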

Finally, Nowrasteh makes this dazzling claim:

“The attacks (9/11) were a horrendous crime, but they were also a dramatic outlier.” 

Dramatic outlier? An outlier is a datum that lies far outside a known distribution. Does Nowrasteh know of a distribution of terrorist events that nature supplies? What could possibly motivate such an observation? We call a measurement an outlier when its accuracy is in doubt because of prior knowledge about its population. Outliers cannot exist in sparse data. Saying so is absurd. Utterly.

“I wouldn’t believe it even if Cato told me so.” That is how, we are told, an ancient Roman senator would express incredulity, since Marcus Porcius Cato was the archetype of truthfulness. Well, Cato has said it, and I’m bewildered.




Medical Device Risk – ISO 14971 Gets It Right

William Storage
VP, LiveSky, Inc.,  Visiting Scholar, UC Berkeley History of Science

The novel alliance between security research firm MedSec and short seller Carson Block of Muddy Waters LLC brought medical device risk into the news again this summer. The competing needs of healthcare cost control for an aging population, a shift toward population-level outcomes, med-tech entrepreneurialism, changing risk-reward attitudes, and aggressive product liability lawsuits demand a rational approach to medical-device risk management. Forty-six Class-3 medical device recalls have been posted this year.

Medical device design and manufacture deserves our best efforts to analyze and manage risks. ISO 14971 (including EU variants) is a detailed standard providing guidance for applying risk management to medical devices. For several years I’ve been comparing different industries’ conceptions of risk and their approaches to risk management in my work with UC Berkeley’s Center for Science, Technology, Medicine and Society. In comparison to most sectors’ approach to risk, ISO 14971 is stellar.

My reasons for this opinion are many. To start with, its language and statement of purpose are ultra-clear. It's free of jargon and of ambiguous terms such as risk scores and risk factors – the latter a potentially useful term that has incompatible meanings in different sectors. Miscommunication between different but interacting domains is wasteful, and could even increase risk. Precision in language is a small thing, but it sets a tone of discipline that many specs and frameworks lack. For example, the standard includes the following definitions:

  • Risk – combination of the probability of occurrence of harm and the severity of that harm
  • Hazard – potential source of harm
  • Severity – measure of the possible consequences of a hazard

Obvious as those may seem, defining risk in terms of hazards is surprisingly uncommon; leaving severity out of its definition is far too common; and many who include it define risk as an arithmetic product of probability and severity, which often results in nonsense.
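The nonsense is easy to demonstrate. A minimal sketch, with probabilities and costs invented purely for illustration, showing two risks that the arithmetic product treats as identical:

```python
# Two hypothetical risks with identical probability × severity products.
risk_high_p_low_cost = 0.5 * 2.0           # coin-flip chance of losing $2
risk_low_p_high_cost = 1e-6 * 1_000_000.0  # one-in-a-million chance of losing $1M

# The product scores both as the same "amount" of risk,
# though few enterprises would treat them as interchangeable.
print(risk_high_p_low_cost, risk_low_p_high_cost)
```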

ISO 14971 calls for a top-down approach to risk analysis; i.e., it emphasizes functional hazard analysis first (ISO 14971 doesn't use the acronym "FHA", but its discussion of hazard analysis is function-oriented). Hazard analyses attempt to identify all significant harms or unwanted situations that can arise from a product's use – often independent of any specific implementation of the function the product serves. Risk analyses based on FHA start with these hypothetical harms and work down through the combinations of errors and failures that can lead to each harm.

Despite the similarity of information categories between FHA and Failure Mode Effects Analysis (FMEA), their usage is – or should be – profoundly different. As several authors have pointed out recently, FMEA was not invented for risk analysis and is not up to the task. FMEAs simply cannot determine the criticality of failures of any but the simplest components.

Further, FHA can reasonably accommodate harmful equipment states not resulting from failure modes – e.g., misuse, mismatched operational phase and operating mode, and other errors. Also, FHAs force us to specify the criticality of situations (harm to the device user) rather than trying to tie criticality to individual failure modes. Again, this is sensible for complex and redundant equipment, while doing no harm for simple devices. While the standard doesn't mention fault trees outright, it's clear that in many cases the only rational defense of a high-severity residual risk in a complex device would be a fault tree demonstrating sufficiently low probability of the hazard.

ISO 14971 also deserves praise for having an engineering perspective, rather than that of insurers or lawyers. I mean no offense to lawyers, but successful products and patient safety should not start with avoidance of failure-to-warn lawsuits, nor with risk-transfer mechanisms.

The standard is pragmatic, allowing for a risk/reward calculus in which patients choose to accept some level of risk for a desired benefit. In the real world, risk-free products and activities do not exist, contrary to the creative visions of litigators. Almost everyone in healthcare agrees that risk/reward considerations make sense, but that thinking often fails to make its way into regulations and standards.

14971 identifies a proper hierarchy of risk-control options that provide guidance from conceptual design through release of medical devices. The options closely parallel those used in design of life-critical systems in aerospace and nuclear circles:

  1. inherent safety by design
  2. protective measures
  3. information for safety

As such, the standard effectively disallows claiming credit for warnings in device instructions as a risk-reduction measure without detailed analysis of such claims.

A very uncommon feature of risk programs is a requirement for regression analysis of potential new risks introduced by control measures. Requiring such analysis forces hazard analysis reports to be living documents and the resulting risk evaluations to be dynamic. A rough diagram of the risk management process of ISO 14971, based on one that appears in the standard, with minor clarifications (at least for my taste), appears below.

ISO 14971 risk management process

This standard also avoids the common pitfalls and fuzzy thinking around “detection” (though some professionals seem determined to introduce it in upcoming reviews). Presumably, its authors recognized that if monitors and operating instructions call for function checks, then detection is addressed in FHAs and FMEAs, and is not some vague factor to be stirred into the risk calculus (as we see in RPN usage).

What’s not to like? Minor quibbles only. Disagreements between US and EU standards bodies address some valid, often subtle points. Terminology issues such as differentiating “as low as reasonably practicable” vs “as far as possible” bring to mind the learning curve that went with the FAA AC 25.1309 amendments in commercial aviation. This haggling is a good thing; it brings clarity to the standard.

Another nit – while the standard is otherwise free of risk-neutrality logic flaws, Annex D does give an example of a “risk chart” plotting severity against probability. However, to its credit, the standard says this is for visualization and does not imply that any conclusions be drawn from the relative positions of plotted risks.

Also, while severity values are quantified concretely (e.g., Significant = death, Moderate = reversible or minor injury, etc.), Annex D.3.4 needlessly uses arbitrary, qualitative probability ranges, e.g., “High” = “likely.”

These are small or easy-to-fix concerns with a very comprehensive, systematic, and internally consistent standard. Its authors should be proud.

Comments on the COSO ERM Public Exposure 2016

In June, COSO, the Committee of Sponsoring Organizations of the Treadway Commission, requested comments on a new draft of its framework.  I discovered this two days before the due date for comments, and rushed to respond.  My comments are below. The document is available for public review here.

Most of my comments about this draft address Section 2, which deals with terminology. I’d like to stress that this concern stems not from a desire for semantic purity but from observations of miscommunication and misunderstandings between ERM professionals and those of various business units as well as a lack of conceptual clarity about risk within ERM.

Before diving into that topic in detail, I’ll offer two general comments based on observations from work in industry. I think we all agree that risk management must be a process, not a business unit. Despite this, many executives still equate risk management with regulatory compliance or risk transfer through insurance. That thinking was apparent in the Protiviti and EIU surveys of the 2000’s, and, despite the optimism of Deloitte’s 2013 survey, is readily apparent if one reads between its lines. As with information technology, risk management is far too often viewed as a department down the hall, rather than an integral process. Sadly, part of this problem seems to stem from ERM’s self-image; ERM is often called “an area of management” in ERM literature. Risk management can no more be limited to an area of management than can engineering or supply-chain management; i.e., they require management, not just Management.

My second general comment is that the framework expends considerable energy on risk management but comparatively little on risk assessment. It is hard to imagine how risks can be managed without first being assessed; i.e., managed risks must first be identified and measured.

Nearly all risk management presentations lean on imagery and examples from aerospace and other human endeavors where inherently dangerous activities have been made safe through disciplined risk analysis and management. Many ERM practitioners believe that their best practices and frameworks draw heavily on the body of knowledge developed in aviation over the last 70 years. This belief is not totally justified. ERM educators and practitioners often use aerospace metaphors (or nuclear, mountaineering, scuba, etc.) but are unaware that the discipline of aerospace risk assessment and management categorically rejects certain axioms of ERM – particularly those tied to the relationships between the fundamental concepts of risk, likelihood or probability, severity and uncertainty. I’d like to offer here that moving a bit closer to conceptual and terminological alignment with the aerospace mindset would better serve the objectives of COSO.

At first glance ERM differs greatly in objective from aircraft safety, and has a broader scope. This difference in scope might be cited as a valid basis for the difference in approaches and mindsets I observe between the two domains. I’ll suggest that the perception of material differences is mostly an illusion stemming from our fear of flying and from minimal technical interchange between the two domains. Even assuming, for sake of argument, that aerospace risk analysis is validation-focused rather than a component of business decision-making and strategy, certain fundamental concepts would still be shared. I.e., in both cases we systematically identify risks, measure their impact, modify designs and processes to mitigate them, and apply the analysis of those risks to strategic decisions where we seek gain. This common thread running through all risk management would seem to warrant commonality in perspective, ideology, method, and terminology. Yet fundamental conceptual differences exist, which, in my view, prevent ERM from reaching its potential.

Before discussing how ERM might benefit from closer adherence to mindsets fostered by aerospace risk practice (and I use aerospace here as a placeholder – nuclear power, weapons systems, mountaineering and scuba would also apply) I’d like to stress that both probabilistic and qualitative risk analysis of many forms profoundly impact strategic decisions of aircraft makers. At McDonnell Douglas (now Boeing) three decades ago I piloted an initiative to use probabilistic risk analysis in the conceptual-design phase of aircraft models considered for emerging markets (as opposed to merely in the realm of reliability assessment and regulatory compliance). Since risk analysis is the only rational means for allocating redundancy within complex systems, the tools of safety analysis entered the same calculus as those evaluating time-to-market, financing, credit, and competitive risk.

In the proposed framework, I have significant concerns about the definitions given in Section 2 (“Understanding the Terms”). While terminology can be expected to vary across disciplines, I submit that these definitions do not serve COSO’s needs, and that they hamper effective communication between organizations. I’ll offer suggested revisions below.

P22 begins:

“There is risk in not knowing how an entity’s strategy and business objectives may be affected by potential events. The risk of an event occurring (or not), creates uncertainty.”

It then defines risk, given the context of uncertainty specified above:

Risk: “The possibility that events will occur and affect the achievement of strategy and business objectives.”

The relationship between risk and uncertainty expressed here seems to be either backward or circular. Uncertainty always exists in setting an entity’s strategy and business objectives, and it exists independent of whether any party has a stake in the entity’s success. Uncertainty – the state of not being definitely known, being undecided, or having doubt – entails risk, as “risk” is commonly used in society, most of business, science, and academia, only for those with a stake in the outcome.

I am aware that in many ERM frameworks, risk is explicitly defined as uncertainty about an outcome that can be either beneficial or undesirable. Such usage of the term has two significant problems. First, it causes misunderstandings in communications between ERM insiders and those affected by their decisions. Second, even within ERM, practitioners drift between that ERM-specific meaning and the meaning used by the rest of the world. This is apparent in the frequent use of expressions such as “risk mitigation” and “risk avoidance” within ERM literature. Use of these phrases clearly indicates a scope of “risk” limited to unwanted events, not desired outcomes. Logically, no one would seek to mitigate benefit.

While the above definition of risk doesn’t explicitly connect beneficial outcomes with risk, the implicit connection is obvious in the relationships between risk and the other defined terms. If risk is “the possibility that events will occur” and those events can be beneficial or undesirable, then, as defined, the term risk covers both beneficial and undesirable events. Risk then communicates nothing beyond uncertainty about those events. As such, risk becomes synonymous with uncertainty.

Equating risk with uncertainty is unproductive; and expressing uncertainty as a consequence of risk (as stated at the beginning of P22) puts the cart before the horse. The general concept in risk studies is that risk is a consequence of uncertainty, not the cause of uncertainty. Decisions would be easy – virtually automatic – if uncertainty were removed from the picture.

Uncertainty about potential outcomes, some of which are harmful, is a necessary but insufficient feature of risk. The insufficiency of uncertainty alone in expressing risk is apparent if one considers, again, that no risk exists without a potential loss. Uncertainty exists at the roulette wheel regardless of your participation. You have risk only if you wager, and the risk rises as your wager rises. Further, for a given wager, your risk is higher in American than in European roulette, because American roulette’s additional possible outcome – the double-zero, not present elsewhere – reduces your probability of winning. Thus rational management of risk entails recognition of two independent components of risk: uncertainty and loss. Below I suggest a revision of the definition of risk to accommodate this idea.
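The roulette comparison can be made concrete. For a straight-up (single-number) bet paying 35 to 1, the only difference between the wheels is the number of pockets – 37 on a European wheel, 38 on an American one – yet the expected loss nearly doubles. A short sketch:

```python
# Expected loss of a straight-up roulette bet, which pays 35 to 1.
def expected_loss(pockets: int, wager: float = 1.0) -> float:
    p_win = 1 / pockets
    ev = p_win * (35 * wager) - (1 - p_win) * wager  # win vs. lose
    return -ev  # report expected loss as a positive number

print(expected_loss(37))  # European wheel: ~2.7% of the wager
print(expected_loss(38))  # American wheel: ~5.3% of the wager
```

And the loss scales with the stake – expected_loss(38, 100.0) is a hundred times expected_loss(38, 1.0) – matching the observation that risk rises as the wager rises.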

Understanding risk to involve both uncertainty and potential loss provides consistency with usage of the term in the realms of nuclear, aerospace, medicine, manufacturing statistical-process control, and math and science in general.

When considering uncertainty’s role in risk (the two have profoundly different meanings), we can consider several interpretations of uncertainty. In math, philosophy, and logic, uncertainty usually refers to quantities that can be expressed as a probability – a value between zero and one – whether or not we can state that probability with confidence. We measure our uncertainty about the outcome of rolling a fair die by examining the sample space: given six possible outcomes of presumed equal likelihood, we assign a probability of 1/6 to each. That is a measurement of our uncertainty about the outcome, and rolling a die thousands of times gives experimental confirmation of it. We express uncertainty about Manhattan being destroyed this week by an asteroid through a much different process. We have no historical (frequency) data from which to draw, but by measuring the distribution, age, and size of asteroid craters on the moon we can estimate the rate of large asteroid strikes on the earth. This too gives a measure of our uncertainty about Manhattan’s fate. We’re uncertain, but we’re not in a state of complete ignorance.
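The die example – the sample-space measure of uncertainty and its experimental confirmation – is easy to verify in a few lines of Python:

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

# Sample-space measure of uncertainty: six equally likely outcomes
p_theoretical = 1 / 6

# "Rolling a die thousands of times" for experimental confirmation
trials = 100_000
hits = sum(1 for _ in range(trials) if random.randint(1, 6) == 3)
p_observed = hits / trials

print(p_theoretical, p_observed)  # both close to 0.167
```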

But we are ignorant of truly unforeseeable events – what Rumsfeld famously called unknown unknowns. Not even knowing what a list of such events would contain could also be called uncertainty; but it is a grave error to mix that conception of uncertainty (perhaps better termed ignorance) with uncertainty about the likelihood of known possible events. Much of ERM literature suffers from failing to make this distinction.

An important component of risk-management is risk-analysis in which we diligently and systematically aim to enumerate all possible events, thereby minimizing our ignorance – moving possible outcomes from the realm of ignorance to the realm of uncertainty, which can be measured, though sometimes only by crude estimates. It’s crucial to differentiate ignorance and uncertainty in risk management, since the former demands thoroughness in identifying unwanted events (often called hazards, though ERM restricts that term to a subset of unwanted events), while the latter is a component of a specific, already-identified risk.

Beyond facilitating communications between ERM practitioners and those outside ERM, a more disciplined use of language – using these separate concepts of risk, uncertainty, and ignorance – will promote conceptual clarity in managing risk.

A more useful definition of risk should include both uncertainty and loss and might take the form:

Risk:  “The possibility that unwanted events will occur and negatively impact the achievement of strategy and business objectives.”

To address the possible objection that risk may have a “positive” (desirable) element, note that risk management exists to inform business decisions; i.e., making good decisions involves more than risk management alone; it is tied to values and data external to risks. Nothing is lost by restricting risk to the unwanted consequences of unwanted events. The choice to accept a risk for the purpose of achieving a desirable outcome (gain, reward) is informed by thorough assessment of the risk. Again, without uncertainty, we’d have no risk; without risk, decisions would be easy. The possibility that by accepting a managed risk we may experience unforeseen benefits (beyond those for which the decision to accept the risk was made) is not excluded by the above proposed definition of risk. Finally, my above proposed definition is consistent with the common conception of risk-reward calculus.

One final clarification: I am not proposing that risk should in any way be an arithmetic product of quantified uncertainty and quantified cost of the potential loss. While occasionally useful, that approach requires a judgment of risk-neutrality that can rarely be justified, and is at odds with most people’s sense of risk tolerance. For example, we have no basis for assuming that a bank would consider one loss of a million dollars to be an equivalent risk to 10,000 losses of $100 each, despite both having the same mathematical expectation (expected value or combined cost of the loss).
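A short sketch makes the bank example concrete. Treating the two cases as loss distributions (the per-event probability of 0.001 is an assumption for illustration only), the expectations match while the spread – one reasonable proxy for what a risk-averse bank cares about – differs enormously:

```python
import math

# Assumed per-event loss probability, for illustration only
p = 0.001

# A: a single exposure that loses $1,000,000 with probability p
mean_a = p * 1_000_000
var_a = p * (1 - p) * 1_000_000**2

# B: 10,000 independent exposures, each losing $100 with probability p
mean_b = 10_000 * p * 100
var_b = 10_000 * p * (1 - p) * 100**2

print(mean_a, mean_b)                      # equal expected losses
print(math.sqrt(var_a), math.sqrt(var_b))  # very different spreads
```

Risk-neutrality amounts to looking only at the first pair of numbers and ignoring the second.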

An example of the implicit notion of a positive component of risk (as opposed to a positive component of informed decision-making) appears in P25, which states:

“Organizations commonly focus on those risks that may result in a negative outcome, such as damage from a fire, losing a key customer, or a new competitor emerging. However, events can also have positive outcomes, and these must also be considered.“

A clearer expression of the relationship between risk (always negative) and reward would recognize that positive outcomes result from deciding to accept managed and understood risks (risks that have been analyzed). With this understanding of risk, common to other risk-focused disciplines, positive outcomes result from good decisions that manage risks, not from the risks themselves.

This is not a mere semantic distinction, but a conceptual one. If we could achieve the desired benefit (“positive outcome”) without accepting the risk, we would certainly do so. This point logically ties benefits to decisions (based on risk analysis), not to risks themselves. A rewording of P25 should, in my view, explain that:

  • events (not risks) may result in beneficial or harmful outcomes
  • risk management involves assessment of the likelihood and cost of unwanted outcomes
  • risks are undertaken or planned for as part of management decisions
  • those informed decisions are made to seek gains or rewards

This distinction clarifies the needs of risk management and emphasizes its role in good corporate decision-making.

Returning to the concept of uncertainty, I suggest that the distinction between ignorance (not knowing what events might happen) and uncertainty (not knowing the likelihood of an identified event) is important for effective analysis and management of risk. Therefore, in the context of events, the matter of “how” should be replaced with shades of “whether.” The revised definition I propose below reflects this.

The term severity is common in expressing the cost-of-loss component of risk. The definition of severity accompanying P25 states:

Severity: A measurement of considerations such as the likelihood and impacts of events or the time it takes to recover from events.

Since the definition of risk (both the draft original and my proposed revision) entails likelihood (possibility or probability), likelihood should be excluded from a definition of severity; they are independent variables. Severity is a measure of how bad the consequences of the loss can be. I.e., it is simply the cost of the hypothetical loss, if the loss were to occur. Severity can be expressed in dollars or lost lives. Reputation damage, competitive disadvantage, missed market opportunities, and disaster recovery all ultimately can be expressed in dollars. While we may only be able to estimate the cost of a loss, the probability of that loss is independent of its severity.

Recommended definitions for Section 2:

Event: An anticipated or unforeseen occurrence, situation, or phenomenon of any magnitude, having beneficial, harmful or unknown consequences

Uncertainty: The state of not knowing or being undecided about the likelihood of an event.

Severity: A measurement of the undesirability or cost of a loss

Risk:  The possibility that unwanted events will negatively impact the achievement of strategy and business objectives.


Historical perspective on the divergent concepts of risk, uncertainty, and probability

Despite humanity’s having mastered geometry and the quadratic formula in ancient times, the study of probability and uncertainty dates only to the mid-17th century, when Blaise Pascal was engaged by a gambler to develop mathematics for gaining an advantage in games of chance. This was the start of the frequentist interpretation of probability, based on the idea that, for deterministic mechanisms, we can predict the frequencies of outcomes of various trials given a large enough data set. Pierre-Simon Laplace later formalized the subjectivist (Bayesian) interpretation of probability, in which probability refers to one’s degree of belief in a possible outcome. Both interpretations express probability as a number between zero and one; that is, both are quantifications of uncertainty about one or more explicitly identified potential outcomes.

The inability to identify a possible outcome, regardless of probability, stems from ignorance of the system in question. Such ignorance is in some cases inevitable. An action may have unforeseeable outcomes; flipping our light switch may cause a black hole to emerge and swallow the earth. But scientific knowledge combined with our understanding of the wiring of a house gives us license to eliminate that as a risk. Whether truly unforeseeable events exist depends on the situation; but we can say with confidence that many events called black swans, such as the Challenger explosion, Hurricane Katrina, and the 2008 mortgage crisis, were foreseeable and foreseen – though ignored. The distinction between uncertainty about the likelihood of an event and ignorance of the extent of the list of events is extremely important.

Yet confusing the inability to anticipate all possible unwanted events with a failure to measure or estimate the probability of identified risks is common in some risk circles. A possible source of this confusion was Frank Knight’s 1921 Risk, Uncertainty and Profit. Knight’s contributions to economic and entrepreneurial theory are laudable, but his understanding of set theory and probability was poor. Despite this, Knight’s definitions linger in business writing. Specifically, Knight defined “risk” as “measurable uncertainty” and “uncertainty” as “unmeasurable uncertainty.” Semantic incoherence aside, Knight’s terminology was inconsistent with all prior use of the terms uncertainty, risk, and probability in mathematical economics and science. (See chapters 2 and 10 of Stigler’s The History of Statistics: The Measurement of Uncertainty before 1900 for details.)

The understanding and rational management of risk requires that we develop and maintain clarity around the related but distinct concepts of uncertainty, probability, severity and risk, regardless of terminology. Clearly, we can navigate through some level of ambiguous language in risk management, but the current lack of conceptual clarity about risk in ERM has not well served its primary objective. Hopefully, renewed interest in making ERM integral to strategic decisions will allow a reformulation of the fundamental concepts of risk.