If ISO and FDA don’t set a required number of design review meetings, how can teams decide when they have done enough reviews? Do you think one review is enough to ensure that a medical device design is safe and ready for production, or should multiple reviews be held throughout the development process? Why?
I think that the number of design reviews should depend on the complexity of the device. Although the FDA requires only that at least one formal review be held, a single review is not sufficient for most devices and may raise concerns during regulatory evaluation. It is in a company's best interest to conduct multiple formal design reviews, as they help to catch and address issues early, when changes are less costly and easier to implement. Ideally, a design review should take place before moving on to each major phase of the design process. For example, one review could be conducted before finalizing design inputs, another after completing design outputs, and another prior to design validation. The schedule and timing of these reviews should be documented in the Design Development Plan (DDP). Ultimately, having more than enough design reviews is far less risky than having too few.
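Since the DDP would capture this schedule, here is a rough sketch of how a phase-gated review plan could be represented; the phase names and the simple "may advance" rule are my own illustrative assumptions, not anything prescribed by the FDA or ISO.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewGate:
    phase: str                                   # e.g. "Design Inputs"
    review_held: bool = False                    # has the formal review taken place?
    open_actions: list[str] = field(default_factory=list)  # unresolved action items

# Planned gates, one per major phase, as they might be listed in the DDP
ddp_reviews = [
    ReviewGate("Design Inputs"),
    ReviewGate("Design Outputs"),
    ReviewGate("Design Verification"),
    ReviewGate("Design Validation"),
]

def may_advance(gate: ReviewGate) -> bool:
    """A phase is exited only after its review is held and all action items are closed."""
    return gate.review_held and not gate.open_actions
```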
Teams can establish when they have done enough reviews by following the framework they created at the start of the design process. To ensure success, teams should plan a design review at each major milestone, with those milestones defined at the outset of the project. In this way the team can confirm that the product meets all requirements, that documentation is satisfactory, and that all issues are closed so the design history file can be properly finalized. With that said, I do not believe a single review is sufficient, especially when the goal is to ensure that a medical device design is safe and production ready. As stated before, there must be traceability and proper documentation showing that the design outputs satisfy the user needs, and a single review cannot verify this across the different stages of development (inputs, outputs, verification, validation). Multiple design reviews are needed to keep that traceability and documentation in order. Also, consider a design review conducted only once at the end of the project: if a flaw was built into the initial design, it will remain until the product is fully developed, which would be a major loss of time, effort, and money for the team and the company. As you stated, there is no fixed number of design reviews, but regulators and companies expect, and often require, formal reviews at each major milestone in the development process. Given the need for systematic checks and documentation throughout the design process, what mechanisms should a medical device design team implement to ensure full traceability?
I agree that multiple design reviews are needed, especially at major milestones like inputs, outputs, and validation. To answer your question about how to ensure full traceability, I think the key lies in the systems teams use to document and connect everything. One mechanism that is really effective is a Requirements Traceability Matrix (RTM): it links every user need to its corresponding design input, test method, and verification result. That way, during reviews, you can see exactly which needs have been addressed and which still need work.
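To make the RTM idea concrete, here is a minimal sketch of a traceability matrix as plain records, with a helper that flags needs still lacking a design input or a passing verification result. The IDs, field names, and statuses are hypothetical, not taken from any specific tool or standard.

```python
# Each row traces one user need forward through input, test method, and result
rtm = [
    {"user_need": "UN-01", "design_input": "DI-01",
     "test_method": "TM-07", "result": "pass"},
    {"user_need": "UN-02", "design_input": "DI-04",
     "test_method": "TM-09", "result": None},      # not yet verified
    {"user_need": "UN-03", "design_input": None,
     "test_method": None, "result": None},         # not yet addressed
]

def open_items(matrix):
    """Return user needs that still lack a design input or a passing verification result."""
    return [row["user_need"] for row in matrix
            if row["design_input"] is None or row["result"] != "pass"]

print(open_items(rtm))   # -> ['UN-02', 'UN-03']
```

During a review, a list like this makes it immediately visible which needs are fully traced and which gaps remain before the team can move to the next phase.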
Another mechanism is using electronic design control (EDC) systems instead of scattered Word files or spreadsheets. These platforms automatically maintain version control and audit trails, so nothing gets lost between reviews. Finally, I think bringing in cross-functional reviewers (like someone from QA or clinical) helps maintain objectivity and catch issues that engineers might overlook. Together, these steps make design reviews not just more frequent, but more meaningful and compliant.
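As a rough illustration of the audit-trail idea, the sketch below appends an immutable record for every controlled-document revision. The record layout is an assumption on my part; real electronic design control platforms each have their own schema, but the principle of automatic, timestamped history is the same.

```python
from datetime import datetime, timezone

audit_trail = []

def revise(doc_id: str, version: str, author: str, change: str):
    """Append a timestamped audit record for every controlled-document revision."""
    audit_trail.append({
        "doc": doc_id,
        "version": version,
        "author": author,
        "change": change,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

# Hypothetical example entry
revise("DI-04", "B", "j.smith", "Tightened flow-rate tolerance per risk analysis")
```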
I think that multiple design reviews should be held throughout the development process. Each stage of design can reveal different issues that might not be caught later. Having several reviews helps ensure that risks are identified early and corrections can be made. Since ISO 13485 and FDA 21 CFR 820 don't specify how many reviews are required, it is up to the team to decide based on how complex the device is. For high-risk devices, more frequent reviews should be held to maintain safety and compliance.
I generally agree with the others that one design review, while technically compliant, is probably not enough. Not only will it raise uncomfortable regulatory questions, but it may also result in major delays if issues are only caught at a final design review. Creating a plan at the start of the project for how many design reviews to hold is a good idea, but it may not be wise to stick to that plan rigidly. Depending on issues that pop up during the design process, new milestones may be created, or existing ones shifted, split, or even merged. This may change the number of design reviews needed, so it is important to be adaptable. Planning the design reviews purely by project phase is easier and more consistent, but it may not be the most efficient approach depending on the device's complexity.
I definitely agree with what has been said about needing multiple reviews, because there is usually some small aspect that can easily get overlooked in a single meeting; splitting the work so that a smaller portion of the design is reviewed at each meeting could help. Also, if something goes wrong and there are issues with the product, there is proof that the failed aspect was reviewed and not simply neglected, so the fault likely lies in some other part of the process. I agree with deciding how many reviews are needed based on complexity: a simple design probably needs only one or two review sessions to confirm that everything works as needed and is achievable, but, much like the testing the FDA requires, the more invasive or risky the device, the more meetings the team should hold. Furthermore, meetings do not need to be planned for every portion of the design, but they should cover the most important pieces and be in-depth enough to ensure all the bases are covered. The riskier the product appears to be, the more frequent the meetings should be, so that safety remains the top priority. Since safety is the most important consideration, there is no harm in holding more meetings to make sure the design will meet the requirements.
This is a great set of ideas to build upon for the complexities of developing & reviewing a potential product, especially regarding living documents, progress traceability & structure. I want to add my thoughts on using these methods.
When using a tool such as an RTM or EDC, we need to ensure that the work's progress is reflected on the project's Gantt chart, along with its timelines and resource-allocation status. As you've stated, EDCs and RTMs provide the framework for connecting each department's or member's "punch list" of tasks, but teams still need to confirm that deadlines are being met and adjust the schedule if they are ahead or behind. That way, PMs and departmental contacts (i.e., QA/QC, Regulatory, Marketing, HR, etc.) can track and modify their respective operations and tasks for the project(s) in question without version-control or compatibility issues in the controlled documents. Furthermore, PMs can make more informed and timely decisions with up-to-date data, since updating the corresponding timelines becomes automated and accurate (with human oversight for verification, of course).
For example, this system is especially useful for conducting pre-clinical and clinical trials; when models reach a certain milestone, or if adverse effects cause delays that necessitate design modifications, the system will automatically update the status of that timeline according to what the electronic data capture (EDC) system reflects. This can help PMs and stakeholders make the necessary decisions when conducting design reviews, including whether more reviews are needed.
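To illustrate the automatic timeline update you describe, here is a rough sketch in which a delayed milestone pushes out the end date of any dependent task. The task names, fields, and 21-day slip are made up for the example; a real scheduling integration would obviously be far more sophisticated.

```python
from datetime import date, timedelta

# Hypothetical schedule: design validation depends on the pre-clinical study
schedule = {
    "pre-clinical study": {"end": date(2025, 6, 30), "status": "delayed", "slip_days": 21},
    "design validation":  {"end": date(2025, 9, 15), "depends_on": "pre-clinical study"},
}

def propagate_delays(sched):
    """Push out the end date of any task whose predecessor has slipped."""
    for task in sched.values():
        dep = sched.get(task.get("depends_on", ""))
        if dep and dep.get("status") == "delayed":
            task["end"] += timedelta(days=dep["slip_days"])

propagate_delays(schedule)
print(schedule["design validation"]["end"])   # pushed out by the 21-day slip
```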
What do you think about this concept? Anything to add/modify?
I agree that multiple review meetings should be held during a medical device's lifecycle. The number of meetings should be governed by what goals must be met before the design can move on to the next step. Each review should have a measurable set of requirements that must be passed to show the design is ready to advance, whether that means all major safety risks are addressed or most of the design requirements are tested and verified. This way the number of meetings is based on real, quantifiable results instead of being aimless. What would be some of the most important thresholds to meet for a design review?
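To make "measurable set of requirements" concrete, here is a minimal sketch of a quantifiable review gate. The specific thresholds (all high risks closed, at least 90% of requirements verified, no open review actions) are illustrative assumptions on my part, not regulatory figures.

```python
def ready_for_next_phase(metrics: dict) -> bool:
    """Pass the review gate only when every measurable criterion is met."""
    return (
        metrics["high_risks_open"] == 0                  # all major safety risks addressed
        and metrics["requirements_verified_pct"] >= 90   # most requirements tested and verified
        and metrics["open_review_actions"] == 0          # prior review actions closed
    )

print(ready_for_next_phase({
    "high_risks_open": 0,
    "requirements_verified_pct": 94,
    "open_review_actions": 0,
}))   # -> True
```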