Ever notice how some “risk controls” end up creating new risks? You add one more safety feature, and suddenly the design is twice as complex and harder to use.
How do you decide when a mitigation effort has gone too far? Is there a point where “safer” actually means “riskier”?
I think two concepts should be taken into account when deciding whether a change is actually worth implementing: how efficiently the product can still be built after the update, and the severity of the risk. A product can be refined over and over, and as mentioned in the lecture someone will always find a way around a safeguard and it is difficult to predict every possible risk, but the effort is wasted if those safety features make the product too complex to build. A complex build decreases efficiency and likely profit, because the extra features cost more to add, so the overall success is reduced tremendously. A harder-to-build product is also more difficult to replicate, since small errors in the build could cause it to fail or even create new, unforeseen problems. That is the time to use the risk matrix: rate the significance of each risk and where it falls, and once that ranking has been created, factor in how difficult each solution is to implement and what the workaround is. For example, a simple warning label might be enough when there is no other genuinely useful way to fix the issue. It is definitely a fine line to walk, because it can be hard to decide which risks are worth altering the design of the entire product for; and sometimes there simply is no solution and the risks are too great, so it does not make sense to continue with the product at all.
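To make that concrete, here is a minimal sketch of the kind of scoring I have in mind; the risks, the 1-5 severity/likelihood scales, the "difficulty" factor, and the thresholds are all made up for illustration, not taken from any standard.

```python
# Hypothetical risk-matrix scoring: rank each risk by severity x likelihood,
# then weigh the ranking against how hard the mitigation is to build.

RISKS = [
    # (name, severity 1-5, likelihood 1-5, mitigation difficulty 1-5)
    ("sharp edge on housing",  4, 3, 1),   # easy fix: round the edge
    ("sensor drift over time", 3, 4, 4),   # hard fix: redesign electronics
    ("label wears off",        2, 2, 1),   # easy fix: better label material
]

def risk_score(severity, likelihood):
    """Classic risk-matrix cell: higher score = more significant risk."""
    return severity * likelihood

def worth_redesigning(severity, likelihood, difficulty, threshold=8):
    """Only alter the design when the risk is significant AND the fix is not
    so difficult that it hurts buildability; otherwise consider a simpler
    control such as a warning label."""
    return risk_score(severity, likelihood) >= threshold and difficulty <= 3

for name, sev, lik, diff in sorted(RISKS, key=lambda r: -risk_score(r[1], r[2])):
    action = "redesign" if worth_redesigning(sev, lik, diff) else "warning label / procedure, or accept"
    print(f"{name:25s} score={risk_score(sev, lik):2d}  suggested: {action}")
```

Under these made-up numbers, the sensor-drift risk scores just as high as the sharp edge but the fix is too disruptive to the build, so the sketch pushes it toward a procedural control instead of a redesign, which is exactly the tradeoff described above.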
Sometimes a mitigation doesn't make the product safer overall; it just transfers the risk to a different user group, workflow, or failure mode. For example, adding more alarms to a medical device can reduce one specific hazard, but it increases the risk of alarm fatigue, which can be more dangerous. So instead of asking, "Does this control reduce the original risk?" it's better to ask, "What new risks does this control create, and who now carries them?"
A mitigation should only move forward if the “net risk” across the whole system decreases, not just the risk we happened to focus on first.
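One rough way to picture that "net risk" check is below; the hazard names, severities, and probabilities are invented purely for illustration.

```python
# Hypothetical net-risk comparison: a mitigation is only accepted if total
# system risk (including hazards the mitigation itself introduces) goes down.

def total_risk(hazards):
    """Sum of severity x probability over every hazard in the system."""
    return sum(severity * probability for severity, probability in hazards.values())

# Before: one dominant hazard we want to mitigate.
before = {
    "missed critical event": (5, 0.20),
}

# After adding many alarms: the original hazard shrinks, but alarm fatigue appears.
after = {
    "missed critical event": (5, 0.05),
    "alarm fatigue -> ignored alarm": (5, 0.15),
}

print(f"net risk before: {total_risk(before):.2f}")
print(f"net risk after:  {total_risk(after):.2f}")
print("accept mitigation" if total_risk(after) < total_risk(before)
      else "reject: risk was only transferred")
```

With these toy numbers the totals come out equal, so the control would be rejected: the risk was moved, not reduced.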
A risk control should be kept in the design when it effectively mitigates a risk. When a mitigation effort lowers the initial risk but introduces new issues that are even worse, such as more components that could malfunction, I believe that is when it has gone too far. There is definitely a point at which attempting to make something safer actually makes it riskier, especially with medical devices, where physicians must be able to act dependably; because of this, usability testing and risk-benefit analysis are very important. If users find the feature difficult to use or disregard it, or if the overall design becomes more difficult to use than before, I would say it is obvious the mitigation needs to be simplified or reevaluated.
I think this is a great question because it is definitely something that isn't talked about often. When working on my capstone project, my team and I definitely had instances where adding optimization improvements to our device only introduced more pathways to failure or made the device harder to operate. That only increased our workload to fix those new failures and added to our stress.
In terms of medical devices, I would say that the decision comes down to how well you consider a few things (I sketch a rough version of this check in code after the list):
- Human factors: If the mitigation increases the chance of user error by adding extra steps or creating confusing workflows, then it's usually making things worse, as these errors can sometimes outweigh the risk you were trying to eliminate.
- Complexity vs. reliability: One thing I noticed during my capstone was that every additional component or software layer we introduced also created new failure modes. If the mitigation doubles the number of things that can break, it's time to reconsider.
- Residual risk comparison: It's important to determine whether the post-mitigation risk is actually lower than the original risk. If it is the same or higher, then the control isn't justified and needs to be reconsidered.
- Benefit-to-risk alignment: Determine whether it's necessary to redesign the product/component, or whether the issue can be better handled with labeling, training, and/or procedural controls.
- Design intent: The more critical and life-sustaining the device is, the more added complexity we tolerate. For simple, user-driven devices, added complexity is usually a recipe for failure.
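As referenced above, here is a rough, purely illustrative sketch of how these criteria could be combined into a go/no-go check; the class, field names, and thresholds are hypothetical and would need real FMEA and usability data behind them.

```python
from dataclasses import dataclass

@dataclass
class MitigationProposal:
    """Hypothetical summary of a proposed risk control."""
    original_risk: float    # severity x probability before the control
    residual_risk: float    # severity x probability after the control
    new_failure_modes: int  # failure modes the control itself introduces
    added_user_steps: int   # extra steps in the clinical workflow
    life_sustaining: bool   # critical devices tolerate more complexity

def should_accept(m: MitigationProposal) -> bool:
    # Residual risk comparison: the control must actually lower the risk.
    if m.residual_risk >= m.original_risk:
        return False
    # Complexity vs. reliability: too many new failure modes defeats the purpose.
    if m.new_failure_modes > (3 if m.life_sustaining else 1):
        return False
    # Human factors: extra workflow burden is only tolerable for critical devices.
    if m.added_user_steps > (2 if m.life_sustaining else 0):
        return False
    return True

# Example: an extra interlock added to a simple, user-driven device.
proposal = MitigationProposal(original_risk=1.0, residual_risk=0.4,
                              new_failure_modes=2, added_user_steps=1,
                              life_sustaining=False)
print("accept" if should_accept(proposal) else "reconsider or simplify")
```

In this example the control does cut the original risk, but for a simple device it adds too many new failure modes and workflow steps, so the sketch flags it for simplification rather than acceptance.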
In summary, the best risk controls are the ones that reduce risk without distorting usability, maintainability, or reliability. If any of these factors becomes compromised by the mitigation effort, then it could be considered "going too far," but that should be left up to the project lead to decide. Another question I might add to this topic: how much do you rely on human factors testing, FMEAs, or real-world feedback to catch over-engineered mitigations?
At some point, there is a tradeoff between mitigating risk and actually increasing it through "mitigation techniques or processes." Of course, basic levels of risk mitigation are important to a device, since they prevent mishaps and make the building process and the subsequent regulatory processes much easier; however, there is a limit to this. This is actually a great question because it is not thought about much, especially when you are trying to figure out methods to actually reduce risk. There is a fine line: when a mitigation process or technique introduces new problems to the creation or use of the medical device, it is no longer mitigating risk but creating new risks, or amplifying existing ones, that may not be directly related to the "risk" factor you were trying to reduce. For example, say I have parts A, B, and C in my device. I want to reduce the exposure of A, and to do so I implement another part; it reduces the risk from A, but it starts to interfere with the functionality of B. If the reduction in B's performance is minimal, the change can be acceptable, but if it causes a relatively significant change in B's function, then it does not mitigate the overall risk; it increases it. In this way, it is fair to say that "over-mitigation" can actually increase risk at some point.
When discussing mitigation going "too far," I think it is important to analyze whether a control is actually improving the device in an impactful or meaningful way, rather than just being another line item in the risk documentation. A device can appear safer based only on the claims in its documentation, but once the control is implemented, you realize there is an issue. This is detrimental, especially if it only surfaces after the device reaches the user. If the mitigation begins to disrupt the primary function of the medical device, its safety will most likely suffer: it either slows the clinical workflow by increasing cognitive load or adds more parts than the system can tolerate. Additionally, in trying to make a device safer, it can become less intuitive, which creates a whole new set of risks and complications that might not have been predicted before.
One thing I think is also important to consider is the user's perception of the device if it is "over-engineered" to mitigate risk. Implementing all these features can make a device safer on paper, but it also increases its complexity. From the consumer's perspective, it can become harder to use, or the risk-mitigating design can make the user more complacent, so they focus less on the risks of using the device. As a simple example, if a device has a built-in auto-off feature for when it is not in use, it can introduce the risk that the user never turns the device off themselves because they rely on that feature; if the auto-off then fails, it could cause a fire or a similar hazard. That is a simple example, but the same logic applies to more complex mechanisms. This is called risk compensation: making the user feel protected from a perceived risk can introduce more risk.
Risk controls go too far when the new complexity, failure modes, and human-factor burdens they introduce outweigh the risk they were meant to reduce; in other words, when “safer” only looks safer on paper while actually making the whole system more brittle. A good litmus test is whether the control meaningfully lowers total system risk rather than just fixing one failure mode and creating others. If a mitigation depends on perfect user behavior, adds significant cognitive or mechanical steps, obscures system state, or creates rare but catastrophic new failure points, it’s probably crossing the line into net-risk-increasing territory. The balance point is reached when additional controls yield diminishing returns, and the smartest move is often simplifying the system or improving feedback rather than layering on more safety features.