ERM and the Prognostication Everlasting of Thomas Digges

William Storage – Oct 19, 2016
VP, LiveSky, Inc.,  Visiting Scholar, UC Berkeley History of Science

Enterprise Risk Management is typically defined as a means to identify potential events that affect an entity and to manage risk such that it is within the entity’s risk appetite. Whether the “events” in this definition are potential business opportunities or are only potential hazards is a source of confusion. This definition ties a potentially abstract social construct – risk appetite – to the tangible, quantifiable concept of risk. If the events under consideration in risk analysis are business opportunities and not just hazards (in the broader sense of hazard, including, e.g., fraud, insufficient capital, and competition), then the definition also directly entails quantifying the opportunity – its value, time scale, and impact on other mutually-exclusive opportunities. Underlying the complex subject of enterprise risk are the fundamental and quantifiable elements of probability, uncertainty, hazard severity, cash value of a loss, value of a gain, and to some extent, risk appetite or tolerance.
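To make those elements concrete, here is a minimal sketch in Python of how the quantifiable pieces of the definition relate. The event names, probabilities, and dollar figures are hypothetical, and the expected-value calculation is deliberately risk-neutral, a simplification this post returns to below.

```python
from dataclasses import dataclass

@dataclass
class Event:
    name: str
    probability: float  # estimated probability of occurrence over the planning period
    cash_impact: float  # dollars: negative for a loss (hazard), positive for a gain (opportunity)

    def expected_value(self) -> float:
        # Probability-weighted cash impact. Note this is risk-neutral: it treats
        # a 2% chance of losing $5M the same as a certain loss of $100k.
        return self.probability * self.cash_impact

# Hypothetical events, for illustration only
hazard = Event("supplier fraud", probability=0.02, cash_impact=-5_000_000)
opportunity = Event("new market entry", probability=0.30, cash_impact=8_000_000)

for e in (hazard, opportunity):
    print(f"{e.name}: expected value = ${e.expected_value():,.0f}")
```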

ERM practitioners tend to recognize that these concepts lie at the heart of ERM, but seem less certain about how the concepts relate to one another. In this sense ERM reminds me of the way 16th-century proto-scientists wrestled with the concepts of mass, force, and cause and effect, and of the difficulty they had separating long-held superstitious and theology-based beliefs from beliefs based on evidence and rational thought.

A great example is Thomas Digges’ Prognostication Everlasting, first issued in 1576 and reprinted as late as 1605, an augmented version of his father Leonard’s 1556 almanac of the same name. Both Diggeses, father and son, had a keen interest in nature and physics. These writers, like their contemporaries William Gilbert and Galileo, are examples of proto-scientists. In his extended Prognostication, Thomas Digges predicted the weather by a combination of astrology and atmospheric phenomena, including clouds and rainbows. Stars and planets were parts of nature too. Lacking any concept of gravity and of how natural forces give rise to observed effects, it seemed reasonable that the positions of celestial bodies could influence weather and human life. Digges was able to predict the times of sunrise and high tides surprisingly well. His calculations also prescribed when to let blood, induce diarrhea, and employ the medical intervention of bathing. He discouraged bathing when the moon was in Virgo or Capricorn, because these earth signs are naturally at odds with water.

Digges’ weather predictions were both vague and imprecise. It’s hard to tell whether to expect warm and wet, or warm and dry. And though we might expect warm, should we expect it next week or next month?

The almanacs also had another problem seen today in many business analyses. Leonard Digges had calculated the distance from Earth to the sphere of the fixed stars to be 358,463.5 miles. Such calculations at best show a neglect of significant digits, and at worst are failures of epistemological humility, or even outright crackpot rigor.

Thomas Digges corrected his father’s error here and, going further, posited an endless universe – endless once you travel beyond the crystalline spheres of the heavenly elect, the celestial angels, and the orb of The Great God. Beyond that sphere Digges imagined infinite stars. But he failed to see the sun as a star and the earth as a planet, a step his contemporary Giordano Bruno soon took.

I don’t mean to mock Digges. He wrote the first real defense of heliocentrism in English. Despite pursuing a mixture of superstition, science, and Christianity, Digges was a pioneer. He was onto something – just like practitioners of ERM. For Digges, rationality and superstition could live side by side without conflict. ERM likewise. Digges worked long and hard to form theories, sometimes scoffing at dogma, sometimes embracing it. Had he taken the extra step of judging theories on evidential support – something natural philosophers would master over the next century – a line of slick computers would today bear his name.

Copernican universe according to Thomas Digges: “A Perfit Description of the Caelestiall Orbes according to the most aunciente doctrine of the Pythagoreans, latelye revived by Copernicus and by Geometricall Demonstrations”

Digges’ view of the world, as seen in the above diagram, has many problems. Two of particular significance stem from his retaining Aristotle’s circular orbits and the idea that celestial bodies were attached to crystalline spheres that held them in position. Without letting go of these ancient beliefs, his model of reality was stuck in a rut.

ERM has analogous models of the world – at least the world of risk management. A staple of ERM is the risk register, as seen below. As commonly used, the risk register is a representation of all identified risks using a two-axis world view. Apparently unknown to many practitioners, this model, like Digges’ world view, contains wrong beliefs that, like circular orbits, are so deeply embedded as to be invisible to its users. Two significant ones come to mind – a belief in the constancy of risk tolerance across organizations, and a belief in the constancy of risk tolerance across hazard impact levels.

An ERM risk-register model of the world (image: Hou710)

Many ERM practitioners believe risk registers (and heat maps, a closely related model) to be a tool or concept used in aerospace, an exemplar of risk management. This is incorrect; commercial aviation explicitly rejects risk registers, precisely because a risk tolerance held constant across hazard severities bears no resemblance to the way human agents actually perceive risk. Some might argue that, all other things being equal, the risk register is still a good model. But that ceteris paribus is far enough from reality to make the point moot. It recalls Nathan Arizona’s famous retort, “yeah, and if a frog had wings…” No human or corporate entity ever had a monolithic risk appetite, or one that was constant across impact levels.

The use of risk registers implies agreement with an assumption of risk-neutrality that is never made explicit – never discussed – but for which I can imagine no justification. Should ERM do away with risk registers altogether? Short answer: yes. Replace them with separate functional hazard analyses, business impact analyses, and assessments of the causal factors leading up to the identified hazards.
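To see what that unstated assumption does in practice, here is a minimal sketch comparing the usual likelihood-times-impact register score with one possible severity-weighted alternative. The 5×5 scale is the conventional one; the aversion exponent is purely an illustrative assumption, not an established ERM parameter.

```python
def register_score(likelihood: int, impact: int) -> int:
    """Typical 5x5 register scoring: likelihood and impact are interchangeable,
    which is exactly the unstated risk-neutrality assumption at issue."""
    return likelihood * impact

def severity_weighted_score(likelihood: int, impact: int, aversion: float = 2.0) -> float:
    """One of many possible risk-averse alternatives: impact counts more than
    proportionally. The exponent is an assumption chosen for illustration."""
    return likelihood * impact ** aversion

frequent_nuisance = (5, 1)  # likelihood 5 of 5, impact 1 of 5
rare_catastrophe = (1, 5)   # likelihood 1 of 5, impact 5 of 5

# The register cannot tell these two apart; a severity-weighted view can.
print(register_score(*frequent_nuisance), register_score(*rare_catastrophe))                   # 5 5
print(severity_weighted_score(*frequent_nuisance), severity_weighted_score(*rare_catastrophe)) # 5.0 25.0
```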

As with proto-science in the age of Thomas Digges, ERM needs to establish exactly what it means by its fundamental terms – things like uncertainty and risk. Lack of terminological clarity is an obstacle to conceptual clarity. The goal here is not linguistic purity, or, as William Gilbert, a contemporary of Digges put it, the “foolish veils of vocabularies,” but the ability of practitioners to get beyond the illusion of communication.

Also like proto-science, ERM must embrace mathematics and probability. Mapping known or estimated cost values into ranges such as minor, moderate and significant does no intellectual work and attempts to cure imprecision with vagueness. The same goes for defining probability values of less than one per thousand as remote. Quantified estimation is necessary. Make informed estimates and state them clearly. Update your estimates when new evidence appears.
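As one illustration of what “state your estimate and update it” can look like, here is a minimal Beta-Bernoulli sketch. The prior and the observation counts are hypothetical; the point is explicit, revisable numbers instead of labels like “remote.”

```python
def beta_update(alpha: float, beta: float, occurrences: int, trials: int) -> tuple[float, float]:
    """Posterior Beta parameters after observing `occurrences` events in
    `trials` independent opportunities for the event to occur."""
    return alpha + occurrences, beta + (trials - occurrences)

# Stated prior: about 1 occurrence per 1,000 opportunities - a number, not "remote"
alpha, beta = 1.0, 999.0
print(f"prior mean:     {alpha / (alpha + beta):.4%}")

# New evidence arrives: 2 occurrences in 500 opportunities
alpha, beta = beta_update(alpha, beta, occurrences=2, trials=500)
print(f"posterior mean: {alpha / (alpha + beta):.4%}")
```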

As with science, ERM seeks to make prognostications that can inform good decision-making. It needs method, but method at a high level rather than processes to be enacted by “risk owners” removed from the decisions the risk analysis was intended to inform. As Michael Power put it, recommendations to embed risk management and internal control systems within all business processes have led to “the wrong kind of embeddedness.” Power suggests that a Business Continuity Management (BCM) approach would be more useful than the limited scope of an internal-controls approach. While Power doesn’t specifically address the concept of objective measurement, it is central to BCM.

Like the proto-science of Thomas Digges, ERM needs to embrace empiricism and objective measurement and to refrain from incantations about risk culture. As Joseph Glanvill wrote in 1661, “we believe the [compass] needle without a certificate from the days of old.” Paraphrasing Kant, we can add that theory without data is lame.

There is danger in chasing an analogy too far, but the rough parallels between proto-science and ERM’s current state are instructive. Few can doubt the promise of enterprise risk management; but it’s time to take a step forward.

2 thoughts on “ERM and the Prognostication Everlasting of Thomas Digges”

  1. The risk register is starting to make its way into nuclear power. Risk is always a concern, and the register is being used to rank various future projects, in part to address nuclear risk. I am looking for ways to understand the register and its limitations and to identify possible alternatives.


    1. It’s painful to watch heat maps, RPN, risk ranking and related techniques, mostly pseudoscience, creep into areas that once knew better. I’ve recently seen USAF documents with heat maps in them. Digging into how that came to be, some engineers told me they thought it trickled down from management, which was influenced by consulting work done for the USAF by Deloitte, Bain and McKinsey. So it seems plausible that the USAF is now performing a bad imitation of itself. The consultancies embraced a bad imitation of engineering risk analysis, delivered it as part of consulting to the USAF, and others in the USAF promoted it as best practice, since big consultancies use it. Fascinating. I once read that some of the 1960s USAF fighter planes were, similarly, bad second-generation imitations of earlier USAF fighters, by virtue of copying Soviet aircraft designs that were poor copies of USAF aircraft designs. Not sure if that really happened, but it seems plausible.

      I worked in nuclear a long time ago. Nuclear power has had a rather uneven history in reliability/risk analysis.

      Possible alternative (roughly speaking; a code sketch of the first few steps follows the list):
      Start with FHAs.
      Use a coarse division of hazard classes (commercial aviation uses only 4).
      Assign hazard classes to functional hazards.
      Forget about the two lower classes.
      Avoid any comparison of hazards across hazard classes (no heat maps).
      Model every functional hazard with fault trees, FMEA etc. Include operator error.
      Model monitors as part of the systems, including human monitors.
      Get exposure times right for latent failures and monitor failures.
      Validate independence of all redundancies.
      Examine common-mode failures.
      Do zonal analyses and cross-system impact analysis.
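
      To make the first few steps concrete, here’s a minimal sketch with hypothetical hazards and failure rates; the per-flight-hour targets follow the familiar AC 25.1309-style values, quoted from memory, so verify them against current guidance.

      ```python
      # Hazard classes and per-flight-hour probability targets in the style of
      # AC 25.1309 (illustrative; check against the current guidance material).
      TARGETS = {
          "catastrophic": 1e-9,
          "hazardous":    1e-7,
          "major":        1e-5,
          "minor":        1e-3,
      }

      def and_gate(*probs: float) -> float:
          # All inputs must fail. Assumes independence, which is why the list
          # above insists on validating independence of redundancies.
          out = 1.0
          for p in probs:
              out *= p
          return out

      def or_gate(*probs: float) -> float:
          # Any single input failing causes the event.
          out = 1.0
          for p in probs:
              out *= (1.0 - p)
          return 1.0 - out

      # Hypothetical functional hazard: loss of cooling, protected by two
      # redundant pumps (both must fail) but defeated outright by a common
      # controller failure.
      pump_a, pump_b, controller = 1e-4, 1e-4, 5e-8
      p_top = or_gate(and_gate(pump_a, pump_b), controller)

      hazard_class = "hazardous"
      print(f"P(loss of cooling) = {p_top:.2e} per flight hour")
      print("meets target" if p_top <= TARGETS[hazard_class] else "exceeds target")
      ```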

      Thoughts?

