I am trying to identify positive detections in the readings of a metal-detector antenna, and for that I am employing the well-known Student t-test.
I am continuously filling a rotating buffer with incoming data (the test sample) and testing it against the calibration sample: t-statistic calculation, then a two-tailed p-value for significance.
My idea is that the calibration sample can have a solid cardinality (collected one-off as the system starts, e.g. around 300 samples?), while the test sample must react quickly, hence its cardinality is small (maybe 10 or 20 samples?).
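For concreteness, here is a minimal sketch of this setup, assuming Python with SciPy; the buffer sizes and names are illustrative, not my actual code:

```python
from collections import deque
from scipy import stats

CAL_SIZE = 300    # one-off calibration sample collected at startup
TEST_SIZE = 20    # small rotating buffer so the test reacts quickly

calibration = []                        # filled once at startup with ~CAL_SIZE readings
test_buffer = deque(maxlen=TEST_SIZE)   # rotating buffer of incoming readings

def update_and_test(reading):
    """Push a new antenna reading and run the two-sample t-test."""
    test_buffer.append(reading)
    if len(test_buffer) < TEST_SIZE:
        return None                     # not enough data yet
    # Independent two-sample t-test with the equal-variance (pooled)
    # assumption, two-tailed p-value.
    t_stat, p_value = stats.ttest_ind(calibration, list(test_buffer))
    return t_stat, p_value
```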
I want to test for a significant deviation of the test distribution's mean away from the calibration reference, but the test is way too sensitive: the significance quickly goes to ~1 (the p-value collapses towards 0).
QUESTION: I would like to hear some opinions from the community about my current setup, along with some guidance on the direction to take.
I am using the t-test for independent samples with the equal-variance assumption described here. I have not tried Welch's t-test yet, nor have I tested the assumption on the variances, but as far as I can see it would give even higher significance, hence it would not help.
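For reference, a minimal comparison of the two variants, continuing the sketch above and assuming SciPy (Welch's test is just `equal_var=False` in `scipy.stats.ttest_ind`):

```python
from scipy import stats

# Continuing the sketch above: calibration and test_buffer as defined there.
# Pooled (equal-variance) test, as currently used:
t_pooled, p_pooled = stats.ttest_ind(calibration, list(test_buffer), equal_var=True)

# Welch's test drops the equal-variance assumption; with n_cal >> n_test the
# effective degrees of freedom shrink toward the test-sample size.
t_welch, p_welch = stats.ttest_ind(calibration, list(test_buffer), equal_var=False)
```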
ANECDOTE: while developing the test, I was wrongly inserting a 0 value into the calibration sample, which inflated the overall sample variance by a factor of ~100. This way I had really satisfactory results, hence I am tempted to manually add a constant FACTOR to the calculation of my pooled variance, but I really feel this is wildly unorthodox.
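To make the temptation concrete, here is what that hack would look like with the pooled variance computed by hand and an ad-hoc FACTOR applied; a sketch of the unorthodox idea on top of the standard pooled t-test, not a recommendation:

```python
import numpy as np
from scipy import stats

def t_with_inflated_variance(cal, test, factor=1.0):
    """Two-sample pooled t-test with the pooled variance multiplied by an
    ad-hoc FACTOR (factor > 1 desensitizes the test); hypothetical helper."""
    cal, test = np.asarray(cal, float), np.asarray(test, float)
    n1, n2 = len(cal), len(test)
    # Standard pooled variance estimate ...
    sp2 = ((n1 - 1) * cal.var(ddof=1) + (n2 - 1) * test.var(ddof=1)) / (n1 + n2 - 2)
    sp2 *= factor                                 # ... with the unorthodox inflation
    se = np.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))     # pooled standard error
    t = (test.mean() - cal.mean()) / se
    p = 2.0 * stats.t.sf(abs(t), df=n1 + n2 - 2)  # two-tailed p-value
    return t, p
```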
COMMENTS:

a) `t = ((E(X_cal) + threshold) - E(X_test)) / pooled_var`. b) That is unlikely to be the problem with such a small sample: I observed such noisy behavior also with large test samples. c) Do you mean that a drift that is relevant within the time window of a calibration is causing the calibration variance to be smaller than what it actually is? In any case, I see no relevant drift. – Campa Mar 29 '18 at 12:43

`t = (abs(E(X_test) - E(X_cal)) - threshold) / pooled_var`, then deduce its 1-tailed significance, maybe..? – Campa Mar 29 '18 at 14:17
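A sketch of the threshold-shifted statistic from the second comment, assuming the denominator is meant to be the pooled standard error rather than the pooled variance (otherwise the units would not cancel); note the abs() makes the null distribution only approximately t, so treat this as a rough minimum-effect test:

```python
import numpy as np
from scipy import stats

def minimum_effect_test(cal, test, threshold):
    """One-tailed test of whether |mean shift| exceeds `threshold`
    (hypothetical helper implementing the comment's idea)."""
    cal, test = np.asarray(cal, float), np.asarray(test, float)
    n1, n2 = len(cal), len(test)
    sp2 = ((n1 - 1) * cal.var(ddof=1) + (n2 - 1) * test.var(ddof=1)) / (n1 + n2 - 2)
    se = np.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))     # pooled standard error
    t = (abs(test.mean() - cal.mean()) - threshold) / se
    p_one_tailed = stats.t.sf(t, df=n1 + n2 - 2)  # H1: |shift| > threshold
    return t, p_one_tailed
```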