ECRI’s Top 10 Health Technology Hazards

William Storage – Nov 30, 2016
VP, LiveSky, Inc.
Visiting Scholar, UC Berkeley Center for Science, Technology, Medicine & Society

ECRI recently published their list of top ten health technology hazards for 2017. ECRI has released such a list each year since at least 2008.

ECRI’s top ten for 2017 (requires registration), as they label them:

  1. Infusion Errors Can Be Deadly If Simple Safety Steps Are Overlooked
  2. Inadequate Cleaning of Complex Reusable Instruments Can Lead to Infections
  3. Missed Ventilator Alarms Can Lead to Patient Harm
  4. Undetected Opioid-Induced Respiratory Depression
  5. Infection Risks with Heater-Cooler Devices Used in Cardiothoracic Surgery
  6. Software Management Gaps Put Patients, and Patient Data, at Risk
  7. Occupational Radiation Hazards in Hybrid ORs
  8. Automated Dispensing Cabinet Setup and Use Errors May Cause Medication Mishaps
  9. Surgical Stapler Misuse and Malfunctions
  10. Device Failures Caused by Cleaning Products and Practices

ECRI is no doubt aiming their publication at a broad audience. The wording of several of these, from the standpoint of hazard assessment, could be refined a bit to better inform mitigation plans. For example, the first item in the list (infusion errors) doesn’t really name an actual hazard (unwanted outcome). I take a crack at it below, along with a few comments on some of the other hazards.

ECRI lists their criteria for inclusion in the list. They include – in system-safety terminology – severity, frequency, scope (ECRI: “breadth”), detectability (“insidiousness”), profile, and preventability.

That seems a good set of criteria, though profile might better point to an opportunity for public education than serve as a criterion for ranking risks. We’d hope that subject matter experts would heavily discount public concern for imaginary hazards.

Infusion failures/errors resulting in wrong dose, rate, duration or contamination

I’m guessing there’s a long list of possible failures and errors that could lead to or contribute to this hazard. Some that come to mind:

  • Software bugs
  • Human-computer interaction (HCI) errors (wrong value entered due to extra keystroke)
    • Unit-of-measure confusion
    • Unclear instructions and cues
    • Alert fatigue
    • Unclear warnings
    • Unheard warnings (speaker volume low)
  • Monitor failures (false positive, failure to alert when alert condition is met)
  • Undetected physical damage (e.g., material fatigue cracks allowing water penetration)
  • Unannunciated battery failure
  • Electrical power failure

Ventilator alarms

This issue includes two unrelated problems, one simple and infrequent, the other common and often called “preventable human error.” Human error may be the immediate cause, but systems having a large number of critical, preventable errors are flawed systems. That means some combination of flawed hardware design and flawed operating procedures. The first problem, latent failure of a ventilator alarm resulting in an undetected breathing problem, has caused several deaths in the last ten years. Failure of caregivers to respond to an alarm reporting a critical breathing condition is much more serious, and has been near the top of ECRI’s list for the past five years.

Undetected Opioid-Induced Respiratory Depression

In 2006 an Anesthesia Patient Safety Foundation conference set a vision that “no patient shall be harmed by opioid-induced respiratory depression” and considered various changes to patient monitoring. In 2011, lack of progress toward that goal led to another conference that looked at details of patient monitoring during anesthesia. Alert fatigue was again a major factor. Inclusion in ECRI’s 2017 list suggests HCI issues related to oximetry and ventilation monitoring still warrant attention.

Occupational Radiation Hazards in Hybrid ORs

Wouldn’t traditional radiation badges for the staff in hybrid facilities be a cheap solution?

Software Management Gaps

Yes, Judy, EHR vendors’ versioning practices from the ’80s do impact patient care. So do sluggish IT departments. ECRI cites delayed implementation of software updates with safety ramifications and data inaccessibility as consequences.

Mitigations

Isn’t there some pretty low-hanging fruit for mitigation on this hazard list? Radiation badges, exhaust-fan filters on heater-cooler systems to catch aerosolized contaminants, and formal procedures for equipment cleaning that specify exactly what cleaning agents to use would seem to knock three items from the list with acceptable cost.

Correcting issues with software deployment and version management may take years, given the inertia of vendors and IT organizations, and will require culture changes involving hospital C-suites.

Despite decades of psychology studies showing that frequent and repetitive alarms (and excessive communication channels) negatively impact our ability to recall “known” information, cause us to forget which process step we’re performing, and cause us to randomly shed tasks from a mental list, computer and hardware interface design still struggles with information chaos. Fixing this requires the sort of multidisciplinary/interdisciplinary analysis for which current educational and organizational silos aren’t prepared. We have work to do.

ECRI deserves praise not only for researching and publishing this list, but for focusing primarily on hazards and secondarily on risk. From the perspective of system safety, risk management must start with hazard assessment. This point, obvious to those with a system safety background, is missed in many analyses and frameworks.

  – – –


In the San Francisco Bay area?

If you are, consider joining us in a newly formed Risk Management meetup group.

Risk assessment, risk analysis, and risk management have evolved nearly independently in a number of industries. This group aims to cross-pollinate, compare and contrast the methods and concepts of diverse areas of risk including enterprise risk (ERM), project risk, safety, product reliability, aerospace and nuclear, financial and credit risk, market, data and reputation risk, and so on.

This meetup aims to build community among risk professionals – internal auditors and practitioners, external consultants, job seekers, and students – by providing forums and events that showcase leading-edge trends, case studies, and best practices in our profession, with a focus on practical application and advancing the state of the art.

If you’re in the bay area, please join us, and let us know your preferences for meeting times.

https://www.meetup.com/San-Francisco-Risk-Managers/

Medical Device Risk – ISO 14971 Gets It Right

William Storage
VP, LiveSky, Inc., Visiting Scholar, UC Berkeley History of Science

The novel alliance between security research firm MedSec and short-seller Carson Block of Muddy Waters LLC brought medical device risk into the news again this summer. The competing needs of healthcare cost-control for an aging population, a shift toward population-level outcomes, med-tech entrepreneurialism, changing risk-reward attitudes, and aggressive product liability lawsuits demand a rational approach to medical-device risk management. Forty-six Class-3 medical device recalls have been posted this year.

Medical device design and manufacture deserves our best efforts to analyze and manage risks. ISO 14971 (including EU variants) is a detailed standard providing guidance for applying risk management to medical devices. For several years I’ve been comparing different industries’ conceptions of risk and their approaches to risk management in my work with UC Berkeley’s Center for Science, Technology, Medicine and Society. In comparison to most sectors’ approach to risk, ISO 14971 is stellar.

My reasons for this opinion are many. To start with, its language and statement of purpose are ultra-clear. It’s free of jargon and of ambiguous terms such as risk score and risk factor – the latter a potentially useful term that has incompatible meanings in different sectors. Miscommunication between different but interacting domains is wasteful, and could even increase risk. Precision in language is a small thing, but it sets a tone of discipline that many specs and frameworks lack. For example, the standard includes the following definitions:

  • Risk – combination of the probability of occurrence of harm and the severity of that harm
  • Hazard – potential source of harm
  • Severity – measure of the possible consequences of a hazard

Obvious as those may seem, defining risk in terms of hazards is surprisingly uncommon; leaving severity out of its definition is far too common; and many who include it define risk as an arithmetic product of probability and severity, which often results in nonsense.
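To make that last point concrete, here is a minimal sketch in Python (with purely hypothetical numbers; nothing below comes from ISO 14971 or any real device) of how a bare probability-times-severity product erases exactly the information an acceptance decision needs:

    # Hypothetical infusion-pump hazards: severity scored 1 (negligible) to 5 (death),
    # probability expressed per use. All numbers are illustrative only.
    hazards = [
        ("fatal overdose from undetected pump failure", 1e-5, 5),  # rare, catastrophic
        ("transient skin irritation at infusion site",  5e-5, 1),  # more common, trivial
    ]

    for name, probability, severity in hazards:
        print(f"{name}: p * s = {probability * severity:.0e}")

    # Both products come out to 5e-05, yet no review board would treat the two
    # risks as equally acceptable. The product collapses the severity class, the
    # very dimension acceptance criteria hinge on, which is why defining risk as
    # a combination of the two quantities, rather than their arithmetic product,
    # matters.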

ISO 14971 calls for a risk-analysis approach that is top-down: its risk analysis emphasizes functional hazard analysis first (ISO 14971 doesn’t use the acronym “FHA,” but its discussion of hazard analysis is function-oriented). Hazard analyses attempt to identify all significant harms or unwanted situations – often independent of any specific implementation of the function a product serves – that can arise from its use. Risk analyses based on FHA start with the hypothetical harms and work their way down through the combinations of errors and failures that can lead to each harm.
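For illustration, a minimal, hypothetical sketch of that top-down structure – function first, hazards and severities next, candidate causes last – might look like the following (the device, entries, and field names are mine, not the standard’s):

    # Hypothetical FHA fragment for an infusion pump. Harms are stated at the
    # function level, independent of implementation, then traced to candidate causes.
    fha = {
        "deliver drug at prescribed rate": [
            {
                "hazard": "over-infusion (dose above prescription)",
                "severity": "critical",
                "candidate_causes": [
                    "keystroke error during rate entry",
                    "unit-of-measure confusion",
                    "flow-control component failure",
                ],
            },
            {
                "hazard": "under-infusion or interruption of therapy",
                "severity": "serious",
                "candidate_causes": [
                    "occluded line with failed occlusion alarm",
                    "unannunciated battery depletion",
                ],
            },
        ],
    }

    # Severity attaches to the hazard (the unwanted outcome), not to any single
    # failure mode; the causes listed under each hazard are whatever combinations
    # of errors and failures downstream analysis shows can produce it.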

Despite the similarity of information categories between FHA and Failure Mode and Effects Analysis (FMEA), their usage is (or should be) profoundly different. As several authors have pointed out recently, FMEA was not invented for risk analysis, and is not up to the task. FMEAs simply cannot determine the criticality of failures of any but the simplest components.

Further, FHA can reasonably accommodate harmful equipment states not resulting from failure modes – e.g., misuse, mismatched operational phase and operating mode, and other errors. Also, FHAs force us to specify the criticality of situations (harm to the device user) rather than trying to tie criticality to individual failure modes. Again, this is sensible for complex and redundant equipment, while doing no harm for simple devices. While the standard doesn’t mention fault trees outright, it’s clear that in many cases the only rational defense of a residual risk of high severity in a complex device would be a fault tree demonstrating sufficiently low probability of the hazard.
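As a rough illustration of what such a defense looks like, here is a minimal fault-tree sketch in Python. The gate arithmetic is standard, but every probability is hypothetical and the events are assumed independent – assumptions a real analysis would have to justify:

    def and_gate(*probs):
        """All inputs must occur (independent events): multiply the probabilities."""
        result = 1.0
        for p in probs:
            result *= p
        return result

    def or_gate(*probs):
        """Any single input suffices (independent events): 1 minus the product of complements."""
        result = 1.0
        for p in probs:
            result *= (1.0 - p)
        return 1.0 - result

    # Hypothetical per-use probabilities for an undetected over-delivery hazard:
    pump_over_delivers = or_gate(1e-4,   # software commands the wrong rate
                                 5e-5)   # mechanical flow-control fault
    monitoring_fails   = or_gate(1e-3,   # flow sensor misses the condition
                                 1e-4)   # alarm not annunciated

    top_event = and_gate(pump_over_delivers, monitoring_fails)
    print(f"P(undetected over-delivery per use) = {top_event:.1e}")  # roughly 1.6e-07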

ISO 14971 also deserves praise for having an engineering perspective, rather than that of insurers or lawyers. I mean no offense to lawyers, but successful products and patient safety should not start with avoidance of failure-to-warn lawsuits, nor with risk-transfer mechanisms.

The standard is pragmatic, allowing for a risk/reward calculus in which patients choose to accept some level of risk for a desired benefit. In the real world, risk-free products and activities do not exist, contrary to the creative visions of litigators. Almost everyone in healthcare agrees that risk/reward considerations make sense, but that thinking often fails to make its way into regulations and standards.

ISO 14971 identifies a proper hierarchy of risk-control options that provides guidance from conceptual design through release of medical devices. The options closely parallel those used in the design of life-critical systems in aerospace and nuclear circles:

  1. inherent safety by design
  2. protective measures
  3. information for safety

As such, the standard effectively disallows claiming credit for warnings in device instructions as a risk-reduction measure without detailed analysis of such claims.

A very uncommon feature of risk programs is calling for regression analysis of potential new risks introduced by control measures. Requiring such regression analysis forces hazard analysis reports to be living documents and the resulting risk evaluations to be dynamic. A rough diagram of the risk-management process of ISO 14971, based on one that appears in the standard, with minor clarifications (at least for my taste), appears below.

ISO 14971 risk management process

This standard also avoids the common pitfalls and fuzzy thinking around “detection” (though some professionals seem determined to introduce it in upcoming reviews). Presumably, its authors recognized that if monitors and operating instructions call for function checks, then detection is addressed in FHAs and FMEAs, and is not some vague factor to be stirred into the risk calculus (as we see in RPN usage).
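For contrast, here is a minimal sketch of the RPN-style calculus the standard wisely avoids. The 1-to-10 scores are hypothetical, chosen only to show how stirring a detection factor into the product can rank a catastrophic failure below a cosmetic one:

    def rpn(severity, occurrence, detection):
        """Classic risk priority number: three ordinal 1-10 scores multiplied together."""
        return severity * occurrence * detection

    # Rare, catastrophic, and hard to detect before harm occurs:
    fatal_overdose  = rpn(severity=10, occurrence=2, detection=3)   # RPN = 60
    # Common cosmetic defect that is merely hard to catch on the production line:
    scuffed_housing = rpn(severity=1,  occurrence=8, detection=9)   # RPN = 72

    print(fatal_overdose, scuffed_housing)  # 60 72: the scuff "outranks" the overdose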

What’s not to like? Minor quibbles only. Disagreements between US and EU standards bodies address some valid, often subtle points. Terminology issues such as differentiating “as low as reasonably practicable” vs “as far as possible” bring to mind the learning curve that went with the FAA AC 25.1309 amendments in commercial aviation. This haggling is a good thing; it brings clarity to the standard.

Another nit: while the standard is otherwise free of risk-neutrality logic flaws, Annex D does give an example of a “risk chart” plotting severity against probability. To its credit, though, the standard says the chart is for visualization and does not imply that any conclusions should be drawn from the relative positions of plotted risks.

Also, while severity values are quantified concretely (e.g., Significant = death, Moderate = reversible or minor injury, etc.), Annex D.3.4 needlessly uses arbitrary and qualitative probability ranges, e.g., “High” = “likely,” etc.

These are small or easy-to-fix concerns with a very comprehensive, systematic, and internally consistent standard. Its authors should be proud.