Statistical Sampling

(@djwhitemsm-edu)
Posts: 48
Eminent Member
 
Posted by: @savery115

Statistical sampling is required for design verification and validation. If you have worked for a company or written protocols in the past, how have you explained or ensured that the sampling plan and statistical approach you have taken have been sufficient?

For myself, I have written validation protocols in the past, and the approach usually depends on the extent of the validation: whether the characteristic is minor or critical and whether we are measuring variable or attribute data. From there we pick a target such as 90% reliability at 90% confidence (as an example) and determine how many samples we need based on that. The acceptance criteria for the test come afterward, along with the statistical analysis. This is a very basic, high-level approach. I'm curious how people approach the reasoning and explanation behind the number of samples they choose for verification and validation.

This is a very good question, @savery115. I think statistical sampling is an important tool for any group acquiring data that relates to consumers. It is important to aim for a 90-95% confidence level when sampling because the analysis will then give a more accurate depiction of how a design is being received. I think using sampling methods grounded in math is the only realistic way to give clear reasoning and a defensible explanation for a sampling plan.
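Tying back to the 90% reliability / 90% confidence example in the quoted question, a common way to size a zero-failure attribute test is the success-run relationship n = ln(1 - C) / ln(R). The short Python sketch below only illustrates that formula; the 90/90 and 95/90 numbers are taken from the example above and are not a recommendation for any particular product or standard.

import math

def success_run_sample_size(confidence: float, reliability: float) -> int:
    """Zero-failure (success-run) plan: smallest n with 1 - R**n >= C."""
    n = math.log(1.0 - confidence) / math.log(reliability)
    return math.ceil(n)

# Demonstrate 90% reliability at 90% confidence, then at 95% confidence.
print(success_run_sample_size(0.90, 0.90))   # -> 22 parts, all must pass
print(success_run_sample_size(0.95, 0.90))   # -> 29 parts, all must pass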

 
Posted : 08/06/2021 11:48 am
(@mrela13)
Posts: 36
Eminent Member
 

When it comes to statistical sampling, there are a few factors to consider when writing protocols. One of the main factors is what kind of protocol is being written. Most of my experience is with process validations, so I write statistical sampling for OQs and PQs. Most of these protocols are based on the risk of the process and failure rates during testing. If something had a high occurrence of failure and a high severity if the failure occurred, the sample size would increase significantly. Likewise, if there was a low occurrence and low severity, the sample size would be much smaller. For OQs the sample size is usually smaller and the acceptable statistical limits, such as Ppk and Cpk, are lower; in PQs the sample size is normally much larger and the Ppk and Cpk requirements are higher. This is because OQs test the limits or extreme process parameters, while PQs test the normal operating limits of the process.

Another thing to keep in mind during statistical sampling is whether the tests produce attribute or variable data. Attribute data requires much larger sample sizes because it is based on pass/fail criteria and cannot be analyzed as deeply. Variable data allows much smaller sample sizes because numeric values are recorded, so the likelihood of failure can be estimated and analyzed much more thoroughly. Finally, the last factor I can think of is whether a similar test has been run before. If so, a justification can be written for why the sampling plan, and some of the requirements during testing, can be reduced.
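Since the post leans on Ppk/Cpk acceptance limits for variable data, here is a minimal sketch of how a Ppk value could be computed; the specification limits and seal-strength readings are made-up numbers for illustration only, not values from any real protocol.

import statistics

def ppk(data, lsl, usl):
    """Process performance: distance from the mean to the nearer spec limit, in units of 3 sigma (overall)."""
    mean = statistics.mean(data)
    sigma = statistics.stdev(data)   # overall (sample) standard deviation
    return min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))

# Hypothetical seal-strength readings (N) against made-up limits of 10-20 N.
readings = [14.8, 15.2, 15.0, 14.6, 15.4, 15.1, 14.9, 15.3]
print(round(ppk(readings, lsl=10.0, usl=20.0), 2))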

 
Posted : 31/10/2021 10:12 pm
(@rifath-hasan)
Posts: 24
Eminent Member
 

Statistical sampling is used in the Control Quality process. It is defined as selecting and testing part of a population, with the samples chosen according to the quality management plan. When applying this project management technique, the sample size and frequency should be determined by the project manager, since not all sampling techniques give a good representation of the total population. For that reason, simple random sampling is widely preferred over systematic sampling, stratified sampling, cluster sampling, judgmental sampling, and haphazard sampling.
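As a quick illustration of the simple random sampling this post refers to, the sketch below draws a random sample of units from a lot; the lot size and sample size are arbitrary placeholder numbers, not values from any quality management plan.

import random

lot = [f"SN-{i:04d}" for i in range(1, 201)]   # 200 hypothetical serial numbers in the lot
sample = random.sample(lot, k=13)              # every unit has an equal chance of selection
print(sample)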

 

 
Posted : 31/10/2021 10:46 pm
(@jaf22)
Posts: 83
Trusted Member
 

Statistical sampling can depend heavily on what you are working toward. There are guidelines for running an ANOVA to validate a process, which can be very different from medical device design validation. Medical device validation often has a panel of surgeons use process-equivalent devices to evaluate inputs such as "the device is able to work as intended." These samplings are usually based on availability, and it is always good to have more input on relevant functions before launch. For verification activities, as many people have said, you statistically base your sample size on the risk and reliability associated with the part. There are many types of tests, such as a t-test or a Weibull distribution, that can define your test length and size, and the choice depends heavily on what type of test is being verified (and whether it is non-inferiority, equivalence, or verification of a new product). The FDA and other regulatory bodies have required sample sizes that help limit test parts, such as 6 samples for interbody cages in static testing and at least 2 runouts and 4 failures on an S-N curve for dynamic testing.
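Picking up on the t-test and Weibull distribution this post mentions, here is a minimal sketch of both using SciPy; the failure-load numbers are invented for illustration, and a real protocol would predefine the test type, acceptance criteria, and sample size rather than take them from a snippet like this.

from scipy import stats

# Hypothetical static failure loads (N) for a baseline and a new design.
baseline = [812, 798, 805, 821, 809, 815]
new_design = [819, 824, 811, 830, 817, 826]

# Two-sample t-test comparing the mean failure loads of the two groups.
t_stat, p_value = stats.ttest_ind(new_design, baseline)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Fit a two-parameter Weibull (location fixed at 0) to the new-design failures.
shape, loc, scale = stats.weibull_min.fit(new_design, floc=0)
print(f"Weibull shape = {shape:.1f}, scale = {scale:.0f}")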

 
Posted : 01/11/2021 6:02 pm