With week 7 we begin our second simulation, and the project presented me with a question: how can you as a PM decide what factors to tweak during the test process? In the second simulation we are presented with a bone growth factor device and need to find a way to store it properly. There are no preliminary tests or data for us to work off of this time, so how do you know what to change? For this simulation we can choose as many ideas as we want and Dr. Simon will let us know the outcome; in reality, however, it is not so simple. Each new test takes more time and more money, and with strict deadlines and tight budgets it becomes difficult to decide what to change. My question is this: how do you decide what your preliminary tests are, and how can you complete those verification tests while remaining within your existing resource constraints? Further, what do you do as a PM when your product fails time and time again after changing the simple conditions? What part of the product do you change when all external components are controlled?
I think as a PM, when you do not have preliminary data to work off of, you have to go back to the basics. Even without past tests, you still have engineering principles, material properties, and risk analysis to guide decisions. With something like a bone growth factor device, storage conditions directly affect safety and performance, so I would start by identifying the biggest risks instead of randomly testing everything.
With limited resources and tight deadlines, it makes sense to start with simpler tests before moving into more complex ones. Simple tests are cheaper and faster, and they can eliminate obvious issues early. That way you are not wasting time and money on complicated studies if the problem is something basic. I would also change one variable at a time so we actually learn from each attempt.
If the product keeps failing after adjusting simple conditions, then the issue might be deeper in the core design. If all external components are controlled, I would start questioning the formulation, materials, or even the original assumptions. At some point it is not about small tweaks anymore, it is about reevaluating the overall design approach.
When you see repeated failures, is it better to keep adjusting small variables, or step back and rethink the entire design?
This is a really good question and post. Anyone working in the medical device industry, whether in R&D, manufacturing, or elsewhere, will run into this question. There are a few fundamentals to fall back on in this situation. First, as a PM you must hone in on the required specs: the customer specifications that must be met. These, above all else, will serve as your foundation and guideline for the potential avenues you may follow. Two of the specs in this week's simulation, for example, are that the device must remain in situ for 3 days minimum and must have a 2-year shelf life.
Knowing clearly what these are will allow you to tailor verification testing to prove out that these specs are met. Additionally, as Ad282 mentioned, you want to fall back on principles. At an established company, this can look like reviewing previous testing done for a similar device and doing the same for yours. If you don't have that information, you would fall back on prior engineering experience or what makes sense from a fundamentals perspective.
Lastly, you would utilize tools such as DMAIC (Define, Measure, Analyze, Improve, and Control) to approach the problem solving methodically and efficiently. All tools and analysis methods should be off-the-shelf and non-specialized, using materials and methods you already employed to first develop the product, to make testing as cheap and quick as possible.
I agree with both points about starting with the required specs and using simpler tests first. When resources and time are limited, another important step as a PM is prioritizing tests that give the most information early. Instead of testing every possible variable, I would focus on experiments that can quickly rule out major failure mechanisms, such as formulation instability or material interactions.
If repeated failures occur even after adjusting storage conditions or simple parameters, that’s usually a signal that the issue may be with the design itself. At that point, it may be more efficient to step back and reassess assumptions about the formulation or system architecture rather than continuing incremental adjustments.
In practice, balancing learning with resource limits means choosing tests that reduce uncertainty the fastest while still staying aligned with the core product specifications.
The preliminary tests should depend on the risk factors, the likely causes of failure, and fast yet low-cost methods. As seen in this simulation, time is of the essence due to the partnership and the competitive nature of the medical device field. Most of the time, the changes between preliminary and verification tests can be something small or simple if you examine the reasoning behind the failure in greater detail, without overthinking or overlooking factors. Under time constraints, you should also look for ways to expedite testing, using accelerated conditions to see whether the product performs ideally under simulated realistic conditions. Under resource constraints, the main thing is to focus on the deliverables crucial to keeping the partnership while still allowing the device or product to be functional. The key function and deliverable of the project is the most important part, like in the simulation, where the growth factor must be injectable onto the bone fractures.
If the product fails time and time again after changing the simple conditions, that is when you should stop and take a step back before wasting any more money, resources, and time. This is when you should perform a Failure Mode and Effects Analysis (FMEA) to identify the failures in the product or design process. FMEA helps you see what went wrong and why it occurred, and it lets you find the root cause of the failure rather than blindly changing everything at once or guessing which variable to fix. If all of the external factors and components are controlled, then the things to examine in the development and test process are the materials, the design of the product or device, and the testing conditions themselves. With chemicals, many factors can cause failure, such as moisture, temperature, degradation, or toxicity. The design of the product can be another factor that needs to change to meet the deliverables; there could be moisture creep, or components not assembled or fastened all the way, causing the product to not work properly.
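To make the FMEA idea concrete, here is a minimal sketch of how failure modes are typically scored with a Risk Priority Number (RPN = severity × occurrence × detection, each rated 1-10). The failure modes and ratings below are hypothetical examples invented for illustration, not data from the simulation.

```python
# Hypothetical FMEA scoring sketch for a growth factor storage device.
# RPN = severity x occurrence x detection (each rated 1-10);
# the highest RPN points to the failure mode worth investigating first.
failure_modes = [
    # (failure mode, severity, occurrence, detection)
    ("moisture ingress degrades growth factor", 9, 6, 4),
    ("temperature excursion during shipping",   8, 5, 3),
    ("carrier material toxicity",              10, 2, 5),
    ("seal not fastened fully",                 7, 3, 2),
]

# Rank failure modes from highest to lowest RPN.
ranked = sorted(
    ((name, s * o * d) for name, s, o, d in failure_modes),
    key=lambda item: item[1],
    reverse=True,
)

for name, rpn in ranked:
    print(f"RPN {rpn:3d}  {name}")
```

The point is not the arithmetic but the discipline: the table forces the team to state why each failure could happen and how detectable it is, instead of tweaking conditions at random.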
One of the things I would consider as a PM in this instance is consulting the members of my team who have more specialization in testing and validation. As a PM it's good to have knowledge of all functional aspects of the project, but for specialized areas such as preliminary testing and verification testing I would look to team members before finalizing and implementing a strategy. In terms of testing under constraints, I would focus testing on the aspects of the project identified as high risk through the initial FMEA conducted with the team. This is where tools such as the 80/20 rule become beneficial, by directing more of the budget and testing effort toward the 20% of issues carrying the highest risk before looking at the lesser risks in the project. If the device is repeatedly failing, I think the best approach is to look at each component of testing and the overall project to see where the error could be occurring. This can include external conditions like temperature or pressure, or inherent issues with interfaces or material interactions that are leading to failures. This again would be an excellent time to consult highly specialized members of your team or department to observe any possible pitfalls. In addition, revisiting the initial FMEA conducted during the conception phase may be beneficial to confirm that no potential risks were overlooked. With that said, to further the discussion: how would you handle conflicting advice from two different specialists on your team? For example, a materials scientist suggests a coating that a regulatory lead says will lead to expensive rounds of clinical trials.
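The 80/20 prioritization mentioned above can be sketched in a few lines: sort failure modes by risk score and keep only those accounting for roughly 80% of the cumulative risk. The names and scores here are hypothetical placeholders, not results from the simulation.

```python
# Hypothetical Pareto (80/20) cut over FMEA risk scores: direct the
# testing budget at the failure modes covering ~80% of total risk.
risks = {
    "moisture ingress": 216,
    "temp excursion": 120,
    "material toxicity": 100,
    "loose seal": 42,
}

total = sum(risks.values())
cumulative = 0
focus = []  # failure modes that make the cut for early testing

for name, score in sorted(risks.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += score
    focus.append(name)
    if cumulative / total >= 0.8:
        break  # remaining modes are deferred, not ignored

print(focus)
```

The cut line is a judgment call rather than a law; the value of the exercise is forcing an explicit, defensible ordering of where limited test budget goes first.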
Testing results should be communicated to stakeholders or senior management, and the project manager will need their approval to proceed with further action. Approaching a preliminary test includes identifying a clear purpose and, for many, choosing the testing solution that is the least costly and has the highest chance of success. Oftentimes, tailoring is needed for the testing procedure, and it can benefit from independent verification, where a separate researcher follows the method to see whether they obtain the same results. Products or tests that consistently fail benefit from additional resources, such as sharing information with other staff who can provide a new perspective. The project manager needs to exercise discernment when faced with conflicting advice. Decision-making is a multi-step process and ties back to stakeholder involvement.
When faced with design flaws and limited resources, I think the most important step is prioritizing tests that are most likely to reveal the root cause of the problem. As a PM, I would start by identifying the critical variables that could affect the device, such as storage conditions, material stability, or environmental factors, and test these first. Small, controlled preliminary tests should help narrow down which factors are actually impacting performance without consuming too many resources. If a product continues to fail even after adjusting these conditions, it might indicate that the issue lies in the core design rather than the external environment. At that point, a PM should step back and reassess the underlying assumptions of the device, such as the material choice, formulation, or mechanism of action.
In industry, a PM can rely on structured experimental strategies instead of trial and error. The simulation gave us the opportunity to set out experimental designs to test our parameters. In my experience, teams usually approach experiments with a design of experiments (DOE) mindset, where a small number of runs is used to analyze multiple factors, such as temperature, time, or humidity, at the same time. A team can learn the maximum amount of information from the fewest rounds of experiments, which allows it to make more informed decisions. These methods are essential when there are time constraints and a limited budget. A PM's role comes down to deciding when to stop optimizing conditions and escalate the issue to a redesign. If a project fails constantly even after all external factors are controlled, that implies instability in the formulation of the product. In the simulation, we adjusted the environmental conditions, but the product consistently failed; reformulation is then what remains to consider once all other avenues are exhausted. How do you think teams should determine when it is time to stop testing and consider a redesign of a project?
The point about Design of Experiments highlights one of the more underutilized tools in this kind of situation. The advantage of a DOE approach over one-variable-at-a-time testing is exactly what many of you described: it lets a team extract significantly more information from fewer experimental runs, which becomes critical when every test cycle is eating into a shrinking budget and timeline. In a resource-constrained environment, the cost of running redundant or uninformative tests is not just the direct expense; it is the opportunity cost of not having run a more strategically designed experiment that could have pointed out the root cause one or two rounds earlier (something the simulations run in class have shown us very well). That compounding inefficiency is where projects tend to bleed time and money quietly, before anyone realizes how far off schedule they actually are.
On the topic of when to stop testing and move toward a redesign, I think the honest answer is that there is no universal threshold, but there are signals worth paying attention to. If a team has controlled all possible variables, applied proper structured experimental methods, and is still seeing consistent failure, that is the clearest sign the problem is not environmental or procedural; it is intrinsic to the design or formulation. Continuing to test from that point is a needless waste of resources. It is then the job of the PM to recognize this and shift the timeline toward a redesign that fixes the point of failure.
My first idea is to call a meeting with as many of the project team members as possible, as soon as possible. In this meeting we would brainstorm theories about what has gone wrong, why it has gone wrong, and possible ways to solve the issues. I would use each member's past experience as a resource and hope that we could arrive at a few plausible fixes, then decide from there what we can test both quickly and cheaply.
As for when a product fails again and again, it might be time to go back to the drawing board and see which major components you can change without affecting the whole production process. I would start with the components that would least affect the FDA assessment; if that still fails, I would talk with my manager to see if the project is even feasible.