Is there a more scientific way of determining the number of significant digits to report for a mean or a confidence interval in a fairly standard situation, e.g. a first-year college class?
I have seen "Number of significant figures to put in a table", "Why don't we use significant digits" and "Number of significant figures in a chi square fit", but these don't seem to put their finger on the problem.
In my classes I try to explain to my students that it is a waste of ink to report 15 significant digits when their results have such a wide standard error; my gut feeling was that results should be rounded to a unit on the order of $0.25\sigma$. This is not too different from what ASTM says in Reporting Test Results, referring to E29, where they say the rounding unit should be between $0.05\sigma$ and $0.5\sigma$.
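As a rough illustration of that band (my own sketch, not ASTM's wording): for decimal rounding there is always a power of ten lying in $(0.05\sigma, 0.5\sigma]$, and it can be computed directly.

# Sketch: pick the power-of-ten rounding unit inside (0.05*sigma, 0.5*sigma].
# Here 'sigma' stands for the standard error of the quantity being reported.
rounding_unit <- function(sigma) 10^floor(log10(0.5 * sigma))
rounding_unit(0.18)  # 0.01, i.e. report to two decimal places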
EDIT:
When I have a set of numbers like x below, how many digits should I use to print the mean and standard deviation?
set.seed(123)
x <- rnorm(30) # default mean=0, sd=1
options(digits = 7) # R's default: print 7 significant digits
mean(x) # -0.04710376 - not far off theoretical 0
sd(x) # 0.9810307 - not far from theoretical 1
sd(x)/sqrt(length(x)) # standard error of mean 0.1791109
QUESTION: Spell out in detail what the precision of the mean and standard deviation is in this case (given a vector of double-precision numbers), and write a simple, pedagogical R function that prints the mean and standard deviation to the number of significant digits justified by the vector x.
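Here is a minimal sketch of such a function (my own, hypothetical implementation; it applies the $0.05\sigma$–$0.5\sigma$ rule above to the standard error of the mean, and the function name and formatting choices are not from any standard):

# Hypothetical helper: round the mean and sd to the decimal place implied by
# a rounding unit between 0.05 and 0.5 times the standard error of the mean.
report_mean_sd <- function(x) {
  se <- sd(x) / sqrt(length(x))               # standard error of the mean
  digits <- max(-floor(log10(0.5 * se)), 0)   # decimal places implied by the rule
  m <- formatC(mean(x), format = "f", digits = digits)
  s <- formatC(sd(x),   format = "f", digits = digits)
  cat(sprintf("mean = %s, sd = %s (n = %d)\n", m, s, length(x)))
}
report_mean_sd(x)  # with the x above: "mean = -0.05, sd = 0.98 (n = 30)"

Applying the rule to the standard error rather than to sd(x) itself means the mean is reported to roughly the precision at which it is actually known, which is the pedagogical point I am trying to make.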
In R (as well as almost all software) the printing is controlled by a global value (see options(digits=...)), not by any consideration of precision. – whuber Dec 27 '12 at 19:00
The edit adds some R, in which I do not see any new question. I don't see anything left to answer and am therefore inclined to vote to close the thread as a duplicate unless you can edit your question in a way that does not completely overlap existing threads here. – whuber Dec 27 '12 at 19:21