Since it is known that going back to a previous phase during the review meetings can be really expensive and counterproductive, I was wondering how much error or deviation from expectations must occur for something like that to happen. Of course, I recognise that this depends on each product and case, but I was wondering whether there is some consensus on that.
I feel as though going back to a previous phase is not counterproductive as long as the issues are resolved or a better proposition is put forth. It can cost financially, but for the progression of the project, and to continue to the next phase, it is important to ensure that previous steps are not left unfinished. There is no numerical value for how much error or deviation from expectations would make a rollback counterproductive. I believe that the severity of the issue from the previous phase determines how quickly it can be resolved and pushed along, which impacts the timeline and financial side of the project. Making mistakes is not bad, as long as it does not shake up the project's trajectory too much.
There would have to be something fundamentally wrong with the project in order to go back to a previous phase. The design review process is set up to guide the project forward, and certain requirements must be met to move on to the next phase. A common phase for a project to hit setbacks is V&V. If the project is consistently failing V&V tests, it might get kicked back to development to work on the issue.
Once a product is in the post-release phase, it may also go back to previous phases. For example, if a product is receiving complaints in the market, a new design project may be initiated to fix these issues. Sometimes all of the design phases are completed again.
Going back to a previous phase in project development is often costly and time-consuming, but in some cases, it is necessary to ensure compliance, functionality, and safety (especially in highly regulated industries like medical devices). While there isn’t a strict numerical threshold that dictates when a project must revert to an earlier phase, the severity of the issue, regulatory requirements, and risk assessment outcomes typically drive such decisions.
One common trigger for returning to a previous phase is repeated failures in Verification and Validation testing. If a medical device consistently fails during verification due to design flaws, it may need to go back to the development phase for redesign. Similarly, if validation testing reveals that the product does not meet user needs, modifications may be required before commercialization.
Contract Manufacturing Organizations (CMOs) and Contract Research Organizations (CROs) add complexity to this issue. If an external manufacturer or testing facility fails to meet regulatory standards, companies may need to re-audit their processes, delaying production. Regular supplier audits and stringent quality control checkpoints can help mitigate this risk.
This topic is explored in Simulation 3, as a problem with cytotoxicity will force a return to the verification phase. This is costly and generally avoided unless absolutely necessary. Thus, such a step is warranted only when critical errors or significant deviations are identified—typically those that impact safety, regulatory compliance, core functionality, or user requirements. In this case, cytotoxicity spurred the return to verification. Minor issues are usually logged for resolution in the current or future phases. The decision often hinges on risk assessment: if the potential consequences of proceeding without correction outweigh the cost and delay of backtracking, then reverting is justified.
While there's no universal metric defining the exact degree of deviation warranting a rollback, the overarching principle is that any deviation posing a risk to safety, efficacy, or compliance is taken seriously. Implementing robust design controls and risk management practices, as outlined in standards like ISO 14971, helps in proactively identifying and mitigating such risks, thereby reducing the likelihood of significant deviations that could necessitate reverting to earlier development phases.
Patient Safety Concerns: Any discovery of potential risks that could compromise patient safety necessitates a thorough reassessment, potentially leading to a rollback. The FDA emphasizes that design changes impacting safety require stringent control and documentation.
Regulatory Compliance Issues: Identifying non-compliance with regulatory standards during verification or validation phases may require revisiting earlier design stages to ensure adherence to necessary guidelines. The FDA's Design Control Guidance underscores the importance of addressing such discrepancies promptly.
Manufacturability or Stability Issues: For example, breakdown or degradation of the medical device during use, which may require revisiting design or material-selection stages.
As the FDA states: "When conducting risk analysis, firms are expected to identify possible hazards associated with the design in both normal and fault conditions. The risks associated with those hazards, including those resulting from user error, should then be calculated in both normal and fault conditions. If any risk is deemed unacceptable, it should be reduced to acceptable levels by the appropriate means, for example by redesign or warnings".
An important part of risk analysis is ensuring that changes made to eliminate or minimize hazards do not introduce new hazards.
Common tools used by firms to conduct risk analyses include Fault Tree Analysis (FTA), and Failure Modes and Effects Analysis (FMEA).
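To make the FMEA idea concrete, here is a minimal sketch of how a risk priority number (RPN) is commonly computed: each failure mode is scored on severity, occurrence, and detection, and their product is compared against an action threshold. The failure modes, scores, and threshold below are entirely hypothetical; real firms define their own scales and acceptance criteria.

```python
# Hypothetical FMEA worksheet: each failure mode is scored 1-10 for
# severity (S), occurrence (O), and detection (D). All values are
# illustrative, not from any real device.
failure_modes = [
    {"mode": "seal leak",      "S": 9, "O": 3, "D": 4},
    {"mode": "battery drain",  "S": 5, "O": 6, "D": 2},
    {"mode": "label smudging", "S": 2, "O": 4, "D": 3},
]

RPN_THRESHOLD = 100  # example action threshold; firms set their own

def rpn(fm):
    """Risk priority number = severity x occurrence x detection."""
    return fm["S"] * fm["O"] * fm["D"]

# Failure modes above the threshold get flagged for mitigation,
# which may mean reopening an earlier design phase.
flagged = [fm["mode"] for fm in failure_modes if rpn(fm) > RPN_THRESHOLD]
print(flagged)  # -> ['seal leak']
```

The single number is only a prioritization aid; a high-severity hazard would still demand attention even with a low RPN.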
One should not be hesitant to go back to the previous phase if it compromises the safety of the patients.
The decision to go back to a previous phase is not based on a simple error threshold. In real life, the discussion is about the cost of fixing the issue now versus the cost of allowing the issue to remain. This is sometimes referred to as the cost of correction curve: problems that appear early in development are usually much cheaper to fix than problems discovered after launch. The decision is therefore framed as a risk-versus-impact discussion. From a purely technical perspective, a small design flaw might be acceptable; but if engineers foresee that flaw creating reliability or regulatory issues later, the company may open an earlier phase to address it. If the problem is technically imperfect but falls within the performance limits, then the team could decide to move forward instead of restarting development.
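That risk-versus-impact framing can be sketched as a toy calculation: compare the known cost of reopening a phase now against the expected cost of the issue escaping downstream. All the figures below are made up for illustration; real decisions would weigh regulatory and safety factors that do not reduce to dollars.

```python
def expected_escape_cost(p_failure, cost_if_later):
    """Expected cost of deferring the fix: the probability the flaw
    surfaces later times the (much larger) downstream cost."""
    return p_failure * cost_if_later

def should_reopen_phase(fix_now_cost, p_failure, cost_if_later):
    """Reopen the earlier phase when fixing now is cheaper than the
    expected cost of letting the issue escape downstream."""
    return fix_now_cost < expected_escape_cost(p_failure, cost_if_later)

# Hypothetical numbers: a $50k redesign now versus a 30% chance of a
# $400k field correction after launch.
print(should_reopen_phase(50_000, 0.30, 400_000))  # -> True
```

The point of the sketch is only that the threshold is economic and probabilistic, not a fixed amount of error.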
The discussion also becomes complicated if the project is close to commercialization and the problem is revealed. Additionally, early assumptions that change later on can lead the team to reopen an older phase. This is when new information changes how the team understands the problem. Revisiting here would strengthen the design in the long-term, as it is not correcting a mistake but actually improving the design from new information that was obtained. So, instead of a technical rule, the decision to return to an earlier phase is strategic.
The reason that going back to a previous phase costs so much is that work builds on top of old work. So, when you go back and change the base work, then everything that comes after needs to be changed as well. Verification might need to restart, validation needs to be repeated, and the stakeholders need to be consulted again.
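That cascade, where changing base work invalidates everything built on it, can be illustrated with a toy phase sequence: reopening one phase marks it and every downstream phase for rework. The phase names here are generic placeholders, not any particular company's process.

```python
# Simplified, hypothetical design-control phase sequence.
PHASES = ["planning", "design input", "design output",
          "verification", "validation", "transfer"]

def phases_to_redo(reopened_phase):
    """Reopening a phase invalidates it and all phases built on it."""
    i = PHASES.index(reopened_phase)
    return PHASES[i:]

print(phases_to_redo("design output"))
# -> ['design output', 'verification', 'validation', 'transfer']
```

This is why a late rollback costs so much more than an early one: the list of downstream work grows with every phase that has been closed out.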
How do you think companies can reduce the costs of going back into previous phases? Can AI be incorporated into the pipeline to ensure that a phase is truly complete before closing it out? What experiences have you had with having to re-open previous phases in your work?
It is also worth considering a different perspective on returning to a previous phase. The decision is much less about the quantity of deviation and more about whether the issue truly changes the core assumptions of the project's design. If the problem affects a confined parameter, such as a tolerance, or only documentation, it can usually be fixed within the current phase. But if there is data proving that a fundamental assumption, such as a workflow, safety, or material-compatibility assumption, is incorrect, the project will have to make a dramatic pivot, because everything downstream is built on that assumption. In that sense, the decision criterion is structural rather than numerical: the real question is whether the issue shakes the foundation of the design, not minor details. Early testing and reviews by cross-disciplinary individuals are valuable here, since they can reveal gaps in assumptions before extensive work begins to accumulate. This raises the question: should there be more checkpoint experiments early in development, even if they slow a project down in its beginning stages?
The distinction you are drawing between surface-level deviations and foundational assumption failures is an extremely important judgement call that a PM and project team have to make, and it often does not get enough structured attention in most development processes. The challenge is that foundational assumptions often feel solid until they are not, and by the time data surfaces that challenges them, a significant amount of downstream work has already been built on top of them. This is why cross-disciplinary reviews early in development carry so much value: the different members of a project team looking at the same assumption at the concept phase will surface very different concerns, and catching even one can prevent a phase reopening later down the line, saving significant time and resources.
The question of whether teams should build in more checkpoint experiments early, even at the cost of slowing initial progress, is one whose answer is almost always yes in the medical device space. The regulatory environment alone justifies a front-loaded review structure, because the FDA does not distinguish between a problem that was known early and ignored and one that was genuinely missed. Beyond compliance, there is a project management argument to be made as well: a checkpoint that adds two weeks to the concept phase but prevents a three-month phase reopening during verification is not a slowdown, it is a high-return investment (probably the best one a team can make).
One idea that stood out to me in this discussion is that the decision to return to a previous phase is less about the amount of error and more about the type of issue being discovered. Sometimes early results appear acceptable, but more rigorous testing during verification or validation reveals patterns or weaknesses that weren’t obvious before. In those cases, going back a phase isn’t really a failure of the process; it’s more of a correction based on better information. I think this also highlights why smaller checkpoints and iterative testing throughout development are important. Catching issues earlier can prevent a much more costly rollback later in the project.
A way companies can reduce the cost of returning to earlier development phases is through strengthening early-stage planning and risk identification. This can be facilitated by thorough design reviews, failure mode and effects analysis (FMEA), and early prototyping. As teams spend more time validating assumptions and testing concepts, they reduce the chance that critical issues will surface during verification/validation. To this point, cross-functional collaboration is also critically important in the early phases, as involving regulatory, manufacturing, and quality teams, for example, can uncover potential compliance/reliability issues that engineers alone may overlook. Thus, by identifying risks earlier in the process, organizations can prevent expensive redesigns/repeated testing cycles that can occur when foundational work must be revised.
AI can certainly play a role in reducing the likelihood of reopening previous phases by helping teams crunch larger datasets for design and testing more quickly. For example, AI tools could assist in predicting potential failure modes, identifying patterns in test results, or flagging inconsistencies between requirements/verification results before a phase is closed. However, AI would function better, I think, as a decision-support tool rather than as a replacement for human judgment, as strategic considerations are often warranted that can only be derived from human evaluation. How much should organizations rely on AI in making decisions, and could overreliance on automated analysis generate new risks in product development?