For the Verification step, a test is generated to ensure the design outputs meet the design input requirements. If the test doesn't pass, then a deviation essentially occurs. Has anyone encountered a Verification Test that didn't pass? If it didn't pass, what was your process for changing the test, changing the DSD, etc.?
We tested the spec of a dimension for a subassembly. The spec that was initially decided on was too tight, and the part failed the verification test protocol. We then had to open Change Orders to open up the spec and tolerance, adjust the verification protocol accordingly, and re-run the test.
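A rough sketch of how this kind of dimensional check plays out against the original and revised limits (all numbers are made up, not the actual subassembly spec):

```python
# Hypothetical measurements checked against a tight original spec and the
# opened-up spec from the Change Order (illustrative values only).
measurements_mm = [10.02, 10.05, 9.98, 10.07, 10.01, 10.09]

def pass_rate(data, nominal, tol):
    lo, hi = nominal - tol, nominal + tol
    return sum(lo <= x <= hi for x in data) / len(data)

print("original spec 10.00 +/- 0.05 mm:", pass_rate(measurements_mm, 10.00, 0.05))
print("revised  spec 10.00 +/- 0.10 mm:", pass_rate(measurements_mm, 10.00, 0.10))
```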
The Verification step is a very important part of understanding Design Controls. If the design does not pass the Verification Test, it could present a significant risk. Although I have not personally encountered a design that failed its Verification Test, a major issue within medical devices is software failure. Software is a key component in ensuring that medical devices perform as needed, yet there are instances where verification is not given enough importance. This can cause a malfunction in the software, which puts the safety of patients at risk and can lead to recalls, and at times even removal of the device from the market. If Verification Tests are given the significance they deserve, there are fewer chances for error.
References:
"When Medical Device Software Fails Due to Improper Verification and Validation"
If any requirement in the Design Verification (DV) Protocol is not met during testing, you will be unable to complete DV until the situation is rectified. I have been in this situation before, and there are a few common options to choose from depending on the specific requirement that was not met.

As has already been mentioned, one option is to re-evaluate the requirement itself. There is a chance that the requirement as originally written was too conservative, and changing it to something that can be met may be easy to rationalize.

If you cannot change the requirement, another option is to test more samples. For instance, if it is an attribute requirement and the acceptance criterion is N=120 samples with 0 failures, you can check how many samples you would need to test in order to accept 1 failure, which may be N=250 (see the sketch below). Testing additional samples requires a deviation, but it is a lot less paperwork and time than changing a requirement.

A third option is to rationalize the failure through engineering logic and evaluation. This tends to be more difficult because it requires a deep understanding of why the failure occurred and why it will not impact product that will eventually be sold to customers. An example would be a sample that failed due to a manufacturing defect, where it is clear the failure was caused by the manufacturing process and a fix has already been put into the manufacturing line to correct the problem. This requires a lot of documentation, but the failure can be rationalized and no additional testing or requirement change is needed.

These are some of the common approaches I have used in the past, but the best route for addressing a DV failure really depends on the specific situation. My only advice would be to not always assume a major change is needed; make sure you have a full and clear understanding of the problem before deciding the best way to address it.
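A minimal sketch of the attribute sample-size trade-off mentioned above, assuming a 95% confidence / 95% reliability plan; the actual N in a protocol depends on the confidence and reliability the plan specifies, so these numbers are only illustrative:

```python
from math import comb

def min_sample_size(reliability, confidence, allowed_failures):
    """Smallest n such that observing <= allowed_failures out of n samples
    demonstrates the stated reliability at the stated confidence
    (standard binomial / success-run style calculation)."""
    p_fail = 1.0 - reliability
    n = allowed_failures + 1
    while True:
        # Chance of passing when the true failure rate sits right at the limit
        p_accept = sum(comb(n, k) * p_fail**k * (1 - p_fail)**(n - k)
                       for k in range(allowed_failures + 1))
        if p_accept <= 1.0 - confidence:
            return n
        n += 1

# Assumed 95%/95% plan: 0 failures allowed -> n = 59, 1 failure allowed -> n = 93
print(min_sample_size(0.95, 0.95, allowed_failures=0))
print(min_sample_size(0.95, 0.95, allowed_failures=1))
```

Allowing one failure costs extra samples, but as noted above, a deviation to test more samples is often far less work than a requirement change.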
Software can be an essential component of a medical device, whether you are designing a new product or improving the functionality of an existing one. When it comes to software in medical devices, one seemingly minor change can have drastic implications for device function and clinical performance. As reliance on software functionality grows, the regulatory need to make sure the software works as intended becomes increasingly important.
In the verification phase, a medical device developer is testing to see that the medical device software meets its software specifications. Verification should occur throughout the medical device software development life cycle, and this type of verification is conducted in the development phase of the product. My experience was in the software area. Whenever we came across a failed test, the first approach was to re-create the scenario and perform root cause analysis. We would check under which circumstances the particular test failed; sometimes there was a network issue or incorrect values had been passed in, in which case we would not consider it a true failure. If the test continued to fail, we would determine whether a code change was needed. A code change request required some paperwork, especially if it was a change to an existing product.
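A minimal sketch of that triage step (the function, exception types, and retry counts are hypothetical, not the actual framework we used): re-run the failed test a few times to separate transient environment issues from reproducible failures that warrant a root cause investigation and a code change request.

```python
import time

def rerun_failed_test(test_fn, attempts=3, delay_s=5):
    """Re-run a failed verification test and report whether the failure
    reproduces or looks like a transient environment issue."""
    last_error = None
    for attempt in range(1, attempts + 1):
        try:
            test_fn()
            return "passed on retry", attempt
        except AssertionError as err:                    # genuine spec mismatch
            last_error = err
        except (ConnectionError, TimeoutError) as err:   # likely environment issue
            last_error = err
            time.sleep(delay_s)                          # let the environment recover
    return "reproducible failure", last_error
```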
Deviations that come from the verification step of a design control always cause havoc at work. Recently, I was trying to qualify a modified fixture for use on the production floor. The drawing was completed, and the fixture was built and ready for qualification. I began to execute an installation verification and a function verification test on the fixture, and it failed. The deadline for this project was one week away, so I had to reorganize my Gantt chart to accommodate the setback. While completing the installation qualification, the critical dimension of the fixture was measured and did not meet the dimension specified in the drawing. Therefore, we had to correct the drawing to include a wider tolerance. This simple change took two weeks to finalize because it had to be approved and signed by document control and each manager of quality, finance, manufacturing, etc. Only then could we try to verify the fixture again; luckily, the second go-around was a success.
I agree with dbonanno1 that you can rationalize failures during the verification process. For example, at work we have a new combination product releasing, and I was part of the team conducting a verification shipping study. As a precaution, the verification protocol we drafted stated that any breakage of the packaging would not require an investigational analysis, since the testing was being conducted to analytically test the combination product rather than the actual mechanical stresses on the container. We actually did have 3 vials containing the product break out of an n=1000 sample size; however, since the testing was done in an extreme setting that did not mimic the stresses the vials would actually see under real-world conditions, we were able to avoid an investigational analysis. In this case, testing the worst-case scenario allowed us to prove the product could handle the shipping lane without having to worry about the packaging.
Verification is of course an important step. I would say that the verification step is like statistics in research: even when we get a result, we should check whether the statistics show the same difference. I was involved in a human study where demographic details played an important role; age, sex, race, and education were some of the factors taken into consideration. If these are not balanced, you either have to increase the study population or analyze the data in a different way.
In the case of verification, small changes may be enough and should be tested; if it is a really big issue, then greater measures should be taken.
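A minimal sketch of this kind of balance check, assuming a simple chi-square test on made-up enrollment counts (not the actual study's analysis):

```python
# Hypothetical counts of a demographic factor (e.g. sex) across two study groups.
from scipy.stats import chi2_contingency

counts = [[30, 20],   # group A: male, female
          [18, 32]]   # group B: male, female

chi2, p_value, dof, expected = chi2_contingency(counts)
if p_value < 0.05:
    print("Groups look imbalanced; enroll more subjects or adjust the analysis")
else:
    print("No evidence of imbalance for this factor")
```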
In Capstone, we had to formulate test plans in order to verify our design. Should one of our tests fail, we were taught to reassess our design and change the specifications and tolerances in our documents to ensure that the product would pass the next time it was tested. Based on the others' responses, this is also the case in industry: if a test fails, the documentation has to be reassessed so the test can pass the next time. This can be as simple as changing the measurements on a design, but because of how things are run in industry, such a change can take weeks and dramatically affect the pace of the project.
For a verification I was recently working on, we tried very hard to make sure the protocol was written in a way that a deviation could be avoided. We attempted to run the test ourselves in advance, and we made sure that our product was perfectly configured and that our test samples did not contain any errors. Even with this, there is still a large risk that a deviation will be needed. Proper planning keeps that from being a certainty, which matters because a deviation is significantly more time-consuming than a verification that works as intended the first time. A CAPA also does not need to be written if no deviation occurs, so it saves time in that respect as well.
Through my experience working as a Project Coordinator in the medical industry, I have encountered two verification tests that didn't meet the design requirements in place for the product. We were verifying a modification of an existing product, so there was hesitance about changing the specifications from the original product, which had already passed verification and validation, received regulatory approval, and was currently distributed on the market.
One of the verification tests showed failures in a specification associated with minimal risk to the patient. The first action the team took was to evaluate the results from the study and form a hypothesis for the observed failures. We then conducted another study with a larger sample size to further evaluate the root cause and determine whether it was possible to develop acceptance criteria for the failure. Even though the failure was classified as minimal risk, the additional study demonstrated that the failure was random, which ultimately meant there were quality control concerns with the product. This resulted in purchasing new manufacturing equipment, a much larger impact than anticipated when reviewing the initial results.
The other failed verification test was associated with packaging verification. One of the larger packaging configurations failed the drop test. The team reviewed the requirement in place for the height of the drop test, and since it was consistent with our other products and aligned with our SOP, we were not in favor of changing the specification. Instead, we evaluated alternatives for the adhesive used on that configuration, resulting in a resolution that didn't impact the project's budget or schedule.
The clinical research company I work for carries out cell therapy: T-cells are taken from a patient's blood, amplified via additives (viral vectors), and then expanded/proliferated until they reach the quantity specified in the batch record. The verification step therefore requires one sample to be taken for each day the cells are being processed to ensure that the product remains viable. Our QC department performs analytical protocols on the samples to ascertain that the cells are indeed growing and receiving the proper amount of additives. This is carried out through tests such as flow cytometry, cell counting, and sterility testing to determine whether the product is safe to infuse into the patient. If the product is not sterile, it must be discarded. If the product lacks additives, additives will be added, but a deviation will occur since it was clearly not done properly the first time. The same goes for measuring the cell counts of the product: if the cell viability is too low, the process will have to be extended, but only after a deviation report (DR) is initiated. The reason is that the patients waiting for the infusions are afflicted with conditions (such as cancer, HIV, hepatitis, etc.) that limit the amount of time they can endure their sickness before requiring treatment. Therefore, if treatment is delayed for any reason, the delay must be accompanied by a lengthy amount of paperwork to justify it.
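A minimal sketch of that disposition logic (thresholds, field names, and the viability limit are hypothetical placeholders, not the actual batch-record acceptance criteria):

```python
from dataclasses import dataclass

@dataclass
class DailySample:
    sterile: bool
    additive_within_spec: bool
    viability_pct: float

def disposition(sample: DailySample, viability_limit: float = 70.0) -> str:
    """Map daily QC results to the actions described above."""
    if not sample.sterile:
        return "discard product"
    if not sample.additive_within_spec:
        return "add additives and open a deviation report"
    if sample.viability_pct < viability_limit:
        return "extend processing and open a deviation report (DR)"
    return "continue per batch record"

print(disposition(DailySample(sterile=True, additive_within_spec=False, viability_pct=85.0)))
```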
The Bonezone article posted this week on design verification and validation specifies the importance of labeling products throughout the verification process so that their specs can be easily compared to the ideal values. Are there any other techniques that can enhance the verification process? If a blood-infusion product is perfectly viable and 100% effective, but its batch record is missing a signature from a technician who is no longer employed at the company, should the product still be infused into the patient? What if the product was mislabeled but the manufacturer was aware of the mislabeling and insisted that the product still matched the desired specs? Assume that the patient only has days to live and is depending on the transfusion.
I haven't actually had the experience to comment with a story of my own, but I do have a few questions that tie into this subject. How often do failed verification tests occur? It seems like verification failures occur more often than failed validation tests. It also seems as if these failures are relatively simple fixes. Are there any you know of that were quite difficult to fix? Like gaberuiz13, I too had capstone issues that had to be fixed, but I feel like it's not on par with real industrial experience. Any opinions on any of my questions?
During our verification testing, we at times notice deviations between the final product and the input design of our dental implant crowns. We bring in dental technicians to score every test product we produce, and if any receive a score lower than acceptable, based on the potential risks, we investigate what went wrong. Oftentimes it has something to do with how we implemented the new tools into our software, and it requires some internal modifications.
I have encountered this on a small scale. For my capstone project we are using infrared (IR) LEDs to illuminate the eyes with IR light. When we designed the circuit that powers the LEDs, we needed to make sure that, once it was all put together, the power output by the LEDs would not harm the patient using the product. Once we built and tested the circuit, we found that while the intended use was fine, in a worst-case scenario where all the current was pushed into one LED, the power would become harmful. Since it was Capstone we did the correct documentation, but it is not like industry, where much more rigorous documentation is required. It still allowed me to appreciate the need for design verification and how valuable it is to have. Because of our early testing we were able to catch the issue early, change our design, and stay on schedule.
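A minimal sketch of that nominal-versus-worst-case comparison, using made-up values rather than our actual circuit numbers or safety limit:

```python
# Hypothetical circuit values: total drive current shared by several IR LEDs.
N_LEDS = 4
TOTAL_CURRENT_A = 0.080      # total current available to the LED array
V_FORWARD_V = 1.5            # typical IR LED forward voltage
SAFE_POWER_W = 0.050         # assumed per-LED safety budget (placeholder)

nominal_power = (TOTAL_CURRENT_A / N_LEDS) * V_FORWARD_V   # current shared evenly
worst_case_power = TOTAL_CURRENT_A * V_FORWARD_V           # all current in one LED

print(f"nominal per-LED power:    {nominal_power * 1000:.0f} mW")
print(f"worst-case per-LED power: {worst_case_power * 1000:.0f} mW")
print("worst case exceeds the limit" if worst_case_power > SAFE_POWER_W
      else "worst case within the limit")
```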