I wonder whether I am using Student's t-test correctly in the following exercise.
I continuously test a small sample of metal-detector antenna readings against a calibration sample distribution.
The distribution of my data drifts slowly (due to several factors such as temperature rise, soil composition changes, etc.), and I want to follow this drift automatically in order to keep the t-test reliable in the long run.
My current idea is to adjust the mean of the calibration sample at each iteration, given some constraints:
WHILE (true) {
    ... t-test calculations ...
    IF (p_value < P_LIMIT) {
        // nudge the calibration mean toward the test mean (EMA-style update)
        E(X_cal) += DRIFT_WEIGHT * (E(X_test) - E(X_cal))
    }
}
Hence, only when I have a certain confidence in the null hypothesis do I adjust the calibration mean in the direction of the currently computed test mean, with an appropriate weight.
In the case of a slow drift, the calibration mean would thus be pushed toward the current mean iteration after iteration until it re-calibrates.
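For concreteness, here is a minimal runnable sketch of the loop, assuming a conventional two-sample Welch t-test from scipy (H0: equal means). Note that under that conventional H0, confidence in the null corresponds to a *large* p-value, so the update fires on p_value > P_LIMIT here; the pseudocode's p_value < P_LIMIT condition corresponds to the flipped H0 described in the comment below. The constants and read_antenna_batch() are placeholders of my own, not real APIs or data.

    import numpy as np
    from scipy import stats

    P_LIMIT = 0.05        # significance threshold (assumed value)
    DRIFT_WEIGHT = 0.1    # fraction of the observed mean gap absorbed per step

    rng = np.random.default_rng(0)
    # stand-in calibration sample; in practice this comes from calibration runs
    calibration = rng.normal(loc=0.0, scale=1.0, size=500)

    def read_antenna_batch(step, size=30):
        # hypothetical stand-in for real antenna readings, with a slow drift
        return rng.normal(loc=0.002 * step, scale=1.0, size=size)

    for step in range(1000):
        test_sample = read_antenna_batch(step)
        p_value = stats.ttest_ind(test_sample, calibration, equal_var=False).pvalue
        if p_value > P_LIMIT:
            # consistent with "no relevant shift": absorb the gap as drift
            gap = test_sample.mean() - calibration.mean()
            calibration = calibration + DRIFT_WEIGHT * gap
        # else: significant difference -> candidate detection, do not adapt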
QUESTION: does this make sense, or am I violating important cornerstone rules of statistical distribution comparison?
Clarification from a follow-up comment: H0 is abs(mu1 - mu2) > threshold, i.e. the null hypothesis is that there is a relevant presence of metal, in which case a positive shall be triggered. – Campa, Mar 29 2018
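That flipped H0 is effectively an equivalence hypothesis, so one way to realize it with standard tools is the two-one-sided-tests (TOST) procedure. Below is a hedged sketch under that interpretation; the function name is mine and `threshold` stands for the domain-specific detection threshold from the comment. With this formulation a small p-value supports abs(mu1 - mu2) < threshold, so the original p_value < P_LIMIT adaptation condition applies directly.

    import numpy as np
    from scipy import stats

    def tost_equivalence_p(test_sample, calibration, threshold):
        """p-value for H0: |mean(test) - mean(cal)| >= threshold (TOST).

        A small p rejects "relevant metal signal present", which is exactly
        the condition under which the post's update rule fires.
        """
        # Test 1, H1: mean difference > -threshold
        p_lower = stats.ttest_ind(test_sample + threshold, calibration,
                                  equal_var=False, alternative='greater').pvalue
        # Test 2, H1: mean difference < +threshold
        p_upper = stats.ttest_ind(test_sample - threshold, calibration,
                                  equal_var=False, alternative='less').pvalue
        return max(p_lower, p_upper)

The shift-by-threshold trick simply re-centers each one-sided t-test at the equivalence bounds; taking the larger of the two p-values is the standard conservative TOST combination.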