I have a limited set of repeated experimental values, roughly 100. The original experiments are expensive, so creating more data points is not an option.
If I use bootstrapping to estimate the mean and standard error, then each bootstrap iteration samples about 63% of the distinct data elements, with repeats filling out the N data points.
Alternatively, I could sample K elements, where K > N. For example, if I sample 300 elements with replacement from 100 elements, then the expected coverage of the 100 original points is about 95% in each iteration.
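A quick simulation sketch of those coverage figures (my own illustration in Python/NumPy; the variable and function names are not from any particular reference):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100  # original sample size

def mean_unique_coverage(K, n_iter=10_000):
    """Average fraction of the N original points that appear at least
    once in a resample of size K drawn with replacement."""
    covered = 0.0
    for _ in range(n_iter):
        resample = rng.integers(0, N, size=K)
        covered += np.unique(resample).size / N
    return covered / n_iter

print(mean_unique_coverage(K=100))  # ~0.634, i.e. 1 - (1 - 1/N)^N  ≈ 1 - e^-1
print(mean_unique_coverage(K=300))  # ~0.951, i.e. 1 - (1 - 1/N)^K  ≈ 1 - e^-3
```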
Is this a 'thing', with any papers written about it? Any good reasons not to do it? Other solutions to this issue?
I understand the larger resamples will contain more repeated outliers. My concern with the original data is that 100 values may not capture the extreme statistics of the 'true' distribution. The usual bootstrap may also leave gaps in the sampled distribution.
TIA