I am currently working on the analysis of measurement data and have the problem of obviously non-normal data. Since most of the methods I want to use require normality (especially statistical process control and process capability), I first tried to transform my data with a power transform, but that did not work.
Each data point in my set is itself the mean of ~1500 raw samples. Of these means I have around 500-700 per day, across multiple days and different measurement types. Here is an example of how such a measurement might be distributed. Depending on the day and the type of measurement, the distributions differ greatly.
Obviously this is only a sample and does not fully represent the underlying process. That is why I wanted to try bootstrapping to estimate the true population parameters of the underlying process, which could then be used to define statistical limits. However, I am not completely sure how valid such an approach is for my use case. I know that, due to the central limit theorem, the means of my bootstrap resamples should follow a normal distribution, which is indeed what I observed.
I am not a statistician and am relatively new to everything in the field of data science and statistics, so I definitely lack some of the fundamentals and intuition needed to evaluate such things. When I look at the bootstrap distribution, its mean is consistent with the overall mean of all samples, but the distribution just does not seem wide enough. My samples contain many values below 0.45 and many more above 0.47, yet to my understanding such values would be highly unlikely according to the bootstrap distribution. I am quite sure there is something I do not understand correctly about bootstrapping and its relation to the population parameters, so I hope someone can explain the error in my train of thought.
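To make the procedure concrete, here is roughly what my bootstrap looks like (a minimal NumPy sketch; the `data` array here is synthetic stand-in data roughly in the range of my values, not my real measurements):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for one day's measurement: ~600 sample means,
# skewed and clustered around 0.45-0.47 (my real data is not this).
data = rng.beta(2, 5, size=600) * 0.1 + 0.42

# Bootstrap: resample the data with replacement many times and
# record the mean of each resample.
boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(10_000)
])

print(data.mean(), boot_means.mean())      # the two centers agree
print(data.std(ddof=1), boot_means.std())  # bootstrap spread is much narrower
```

This reproduces what I observe: the center of the bootstrap distribution matches the overall mean, but its spread is far smaller than the spread of the individual values.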

