I have a sample of 30 data points that I am unable to find a distribution fit for. The goal of my analysis is to assess process capability and get an accurate Ppk. This is one output of a process I am validating.
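In case it helps, this is roughly the kind of fit-checking I've been doing, sketched in Python with scipy. The data array below is just a synthetic, skewed stand-in for my 30 real measurements, and the candidate list is an example rather than everything I tried:

```python
import numpy as np
from scipy import stats

# Placeholder data: synthetic, tailed to one side like my sample (not my real values).
rng = np.random.default_rng(1)
data = rng.lognormal(mean=0.0, sigma=0.25, size=30)

# Example candidate distributions (names as they appear in scipy.stats).
candidates = ["norm", "lognorm", "gamma", "weibull_min"]

for name in candidates:
    dist = getattr(stats, name)
    params = dist.fit(data)                      # MLE fit of the candidate
    ks_stat, p_value = stats.kstest(data, name, args=params)
    print(f"{name:12s}  KS p-value = {p_value:.3f}")

# I realise the KS p-values are optimistic when the parameters are estimated
# from the same data, but none of the candidates look convincing regardless.
```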
Unable to fit anything, I tried assuming normality, and the data gives a Ppk > 3.00. The data set is tailed to one side because of one or two (true and valid) outliers. These outliers are representative of the process; I cannot modify or improve the process to remove them and then re-sample later.
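For reference, the Ppk I quoted is the standard normal-assumption calculation, Ppk = min(USL - mean, mean - LSL) / (3s) with the overall sample standard deviation. A minimal sketch of that calculation is below; the spec limits and data are hypothetical placeholders, not my real values:

```python
import numpy as np

# Placeholder data and spec limits (stand-ins only, not the validated process values).
rng = np.random.default_rng(1)
data = rng.lognormal(mean=0.0, sigma=0.25, size=30)
LSL, USL = 0.2, 3.0

mean = data.mean()
s = data.std(ddof=1)              # overall (long-term) sample standard deviation
ppu = (USL - mean) / (3 * s)      # upper capability index
ppl = (mean - LSL) / (3 * s)      # lower capability index
ppk = min(ppu, ppl)
print(f"Ppk = {ppk:.2f}")         # on my actual data this comes out above 3.00
```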
So I'm trying to find out what my options are. Re-sampling and measuring again would likely yield similar results, and it is time-consuming and laborious. Is there some rationale I can use to justify treating the data as normal for the analysis? What would your next move be in a similar situation?
I'd really appreciate any help here; I'm stumped!
Thanks, Tom