Cognitive Ergonomics and Safety-Critical Systems
Posted on April 29, 2015
By Mitch Valdmanis, M.Eng.
Recently an excerpt from the book “The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age” by Robert Wachter was published online: “The Overdose: How Medical Tech Gave a Patient a 39-Fold Overdose”. Though the headline is sensationalized – a more appropriate title would be “How Medical Tech Failed To Prevent a Patient From Receiving a 39-Fold Overdose” – it provides an excellent anecdote about the danger of overreacting to potential failure modes to the point that new failure modes are introduced, in particular when dealing with human-computer interaction.
I would like to focus on two items identified as contributing factors to the error: the requirement that the patient’s dose be entered as a weight-based order, and the software alerts that went unheeded by the doctor and pharmacist. Both of these items likely originated as well-intentioned risk mitigations, so how is it that they ended up contributing to a medical error? In both cases, the answer emerges when the situation is viewed through the lens of cognitive ergonomics – the study of human cognition as it relates to work and other operations.
The first item is simpler to explain. Many drugs, particularly those given to children, must have their dosages adjusted according to the weight of the patient. To spare doctors and pharmacists from performing error-prone manual calculations to account for the patient’s weight, the administrators of UCSF’s EPIC system decided, entirely reasonably, that orders for patients weighing less than 40 kg be entered as weight-based orders. This mitigation quietly introduced a new failure mode: a total dosage can be entered where a weight-based (per-kilogram) dosage is expected.
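The arithmetic behind this failure mode is simple. The sketch below is a minimal illustration – the function names, the plausibility threshold, and the numbers are hypothetical, not EPIC’s actual logic – of how a value intended as a total dose, once typed into a weight-based field, is silently multiplied by the patient’s weight, and of the kind of sanity check that could flag the result:

```python
# Minimal sketch, not EPIC's actual logic: the function names, threshold,
# and numbers are hypothetical and purely illustrative.

def dispensed_dose_mg(entered_value, weight_kg, weight_based):
    """Dose, in mg, that downstream systems will act on for this order."""
    # In a weight-based order the entered value is interpreted as mg/kg
    # and multiplied by the patient's weight; otherwise it is taken as mg.
    return entered_value * weight_kg if weight_based else entered_value

def plausibility_warning(entered_mg_per_kg, weight_kg, usual_total_mg):
    """Flag a per-kg entry whose computed total is wildly out of range."""
    computed_mg = entered_mg_per_kg * weight_kg
    if computed_mg > 10 * usual_total_mg:  # hypothetical threshold
        return ("Entered %.0f mg/kg gives %.0f mg for a %.0f kg patient - "
                "was a total dose intended?"
                % (entered_mg_per_kg, computed_mg, weight_kg))
    return None

# A usual *total* dose of 100 mg typed into the weight-based field for a
# 39 kg patient silently becomes a 39-fold overdose.
print(dispensed_dose_mg(100, 39, weight_based=True))          # 3900
print(plausibility_warning(100, weight_kg=39, usual_total_mg=100))
```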
Why didn’t the doctor realize she was entering the total dose into a weight-based dose field? Consult any guide to user interface design and you will be convinced that consistency is among the first rules of design. In the case of the EPIC order entry screens, however, the consistency between the different dosage entry modes was their downfall. The form for entering a weight-based dosage was entirely consistent in appearance with the form for entering a total dosage – so much so that the physician did not recognize which mode of operation was in use. Perhaps if the user interface were not so consistent between the two modes, the user would instantly recognize which mode was active every time they went to enter an order. That is not to say that consistency should be disregarded entirely, but the difference between the two modes should be immediately obvious to the user.
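One way to make the difference obvious – offered purely as an illustration, not a claim about how EPIC should have been built – is to put the unit in the field’s label and echo the computed total back to the user before the order is accepted, so that a weight-based order never looks identical to a total-dose order:

```python
# Illustrative sketch only: echo the mode, the unit, and the computed total
# so the two entry modes can never be mistaken for one another.

def order_confirmation(entered_value, weight_kg, weight_based):
    if weight_based:
        total_mg = entered_value * weight_kg
        return ("WEIGHT-BASED order: %.0f mg/kg x %.1f kg = %.0f mg total"
                % (entered_value, weight_kg, total_mg))
    return "TOTAL-DOSE order: %.0f mg" % entered_value

print(order_confirmation(100, 39, weight_based=True))
# WEIGHT-BASED order: 100 mg/kg x 39.0 kg = 3900 mg total
print(order_confirmation(100, 39, weight_based=False))
# TOTAL-DOSE order: 100 mg
```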
The second failure mode, the unheeded error messages, is not so simple to diagnose precisely because the addition of alerts is such a natural risk mitigation technique. The idea that an alert can be used to mitigate a failure mode is predicated on the assumption that the user acknowledges, understands, and knows how to respond to the alert. As discussed by the author of the article, the problem lies with the first assumption: if the user fails to acknowledge the alert due to “alert fatigue,” then the alert has failed its purpose.
Alert fatigue is not a new phenomenon. Users of the Windows Vista operating system likely recall the frequent User Account Control alerts. Microsoft cited “click fatigue” as one of the reasons for reducing the frequency of prompts in the subsequent Windows release, despite protestations that doing so would put users at risk. Microsoft recognized that an alert the user dismisses without even considering it is no better than no alert at all, and is in fact worse than a smaller number of alerts that the user does acknowledge. Indeed, in its latest draft guidance on “Applying Human Factors and Usability Engineering to Optimize Medical Device Design”, the FDA added a discussion of “nuisance” alarms, noting specifically that “the user could fail to notice them or be unable to make important distinctions among them”. Nor are nuisance alarms restricted to the realm of computer software. Instructions for installing smoke and carbon monoxide detectors include directions for locating the sensors so as to avoid nuisance alarms, lest users grow frustrated to the point of disconnecting the detectors entirely. Ultimately, the concept of nuisance alarms goes back to ancient times with Aesop’s classic fable of the boy who cried wolf. Though nuisance alarms are not necessarily lies, the result is the same: an alarm raised too often is an alarm ignored.
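One common countermeasure – sketched below with entirely hypothetical severity levels and rules, not drawn from any particular system – is to tier alerts by severity and reserve the interruptive, must-acknowledge treatment for a small critical class, so that the alerts that do interrupt remain rare enough to be taken seriously:

```python
# Hypothetical severity-tiered alert policy, for illustration only.
from enum import Enum

class Severity(Enum):
    INFO = 1      # recorded in a passive log; never interrupts
    WARNING = 2   # shown inline alongside the order; does not block
    CRITICAL = 3  # interrupts the workflow and requires acknowledgement

def should_interrupt(severity):
    """Only critical alerts break into the user's workflow."""
    return severity is Severity.CRITICAL

for severity in Severity:
    print(severity.name, "interrupts the user:", should_interrupt(severity))
```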
The above examples illustrate that when designing systems for use in safety-critical environments it is important to consider not just how the user might interact with the system in a single instance, but the user’s interaction with the system over time: how is the user likely to interact with it after they’ve grown accustomed to it? Also consider the user’s environment: what other systems or people may be vying for the user’s attention? If the user is distracted midway through an operation with your system, how difficult is it for them to re-establish the correct mental state to continue? What other alarms may the user expect to encounter while using the system? Is your system’s alarm more or less important than those? And do not forget about these factors when implementing changes to mitigate other risks.
Mitch Valdmanis is a Software Developer at Intelligent Hospital Systems, a member of the ARxIUM group of companies.