Under-educated non-statistician seeks short-term relationship for very one-sided benefit.
The system in question involves rocks and their physical properties. Modelling bits of the earth typically means very few real measurements relative to the volume being modelled, so many estimates are required, and I have NO IDEA how to handle the statement of uncertainties.

For illustration, let's say the modelling workflow is: measure a small sample of rocks and get property X (via the mean of the measurements), then use X in a simple model to determine Y, e.g. Y = mX + b. If I measure 20 rocks and take the mean for property X, how do I state its uncertainty to start with, and how do I then propagate that through my calculation of Y (assuming the uncertainty in m is insignificant in comparison)?
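To make that concrete, here is a minimal sketch of the calculation as I understand it; the 20 measurement values and the coefficients m and b are invented purely for illustration. The two uncertainty lines are my best guess at what is wanted: the standard error of the mean for X, and first-order propagation through the linear model (m and b treated as exact).

```python
import numpy as np

# 20 invented measurements of property X for one rock type (units arbitrary)
x = np.array([2.31, 2.45, 2.39, 2.52, 2.28, 2.41, 2.36, 2.48, 2.55, 2.33,
              2.44, 2.29, 2.50, 2.38, 2.42, 2.35, 2.47, 2.40, 2.32, 2.46])

n = x.size
x_mean = x.mean()
x_sd = x.std(ddof=1)          # sample standard deviation
x_se = x_sd / np.sqrt(n)      # standard error of the mean

# Simple linear model Y = m*X + b, with m and b treated as exact
m, b = 1.7, 0.4               # invented coefficients for illustration
y = m * x_mean + b
y_se = abs(m) * x_se          # first-order propagation; the additive constant b contributes nothing

print(f"X = {x_mean:.3f} +/- {x_se:.3f}  (standard error of the mean, n = {n})")
print(f"Y = {y:.3f} +/- {y_se:.3f}")
```

If the measurements are roughly Gaussian, mean ± 2×SE should be close to a 95% confidence interval for the true mean of X. Is that anywhere near the right approach?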
I've looked at my data graphically, and they seem to have a mostly Gaussian shape. For example, here is a plot of the kernel densities of property X:

Some skew and bumpiness are evident, possibly an effect of the small sample sizes. OK, so there is a fairly large spread of values for some of the rock types, but in essence we deal with large volumes and have to idealise our models quite a bit, so tails like the one on the yellow curve at the far left, while real, are not going to be explored in our numerical simulations. I was once told by an older pro that if my model can explain about 85% of observations then I should buy champagne.
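For reference, this is roughly the kind of code behind a per-rock-type density plot like the one above; the rock-type names and values here are invented stand-ins for my actual measurement table.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

# Invented measurements of property X, grouped by rock type
# (in practice these come from my measurement table)
rng = np.random.default_rng(0)
samples = {
    "sandstone": rng.normal(2.4, 0.10, 20),
    "shale":     rng.normal(2.7, 0.15, 20),
    "basalt":    rng.normal(2.9, 0.05, 20),
}

grid = np.linspace(2.0, 3.3, 300)
for rock, values in samples.items():
    kde = gaussian_kde(values)            # Gaussian kernel density estimate
    plt.plot(grid, kde(grid), label=rock)

plt.xlabel("property X")
plt.ylabel("estimated density")
plt.legend()
plt.show()
```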
This question, "Estimating error from repeated measurements", seems similar, but I don't really understand the accepted answer. Is the standard deviation divided by sqrt(n) the 'standard error'?
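If I've read that right, the claim is that for n measurements with sample standard deviation s, the uncertainty to quote on the mean is

$$\mathrm{SE}(\bar{X}) = \frac{s}{\sqrt{n}},$$

and that mean ± 2·SE gives roughly a 95% interval when the data are close to Gaussian. Is that the right reading, and is that the number I should be quoting for X in the workflow above?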