I have found some similar questions to the one I pose here, but the answers are either over my head, or seem to be slightly off point to what I am trying to accomplish. If it's a true repeat, I apologize in advance.
I am a test engineer working on a system where I need to test latency. I have gathered data in which a series of times is recorded for specific events. The time difference between these events constitutes my latency. Pretty simple so far:
Event A - Event B = Latency
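To be concrete, here's roughly how the raw data turns into latencies. The timestamps and values below are made up for illustration only:

```python
# Hypothetical event timestamps in microseconds (made-up values).
event_a_us = [12_000_000, 12_950_000, 13_900_000]
event_b_us = [12_048_500, 13_001_200, 13_952_300]

# Latency for each event pair, converted from microseconds to milliseconds.
latencies_ms = [(b - a) / 1000.0 for a, b in zip(event_a_us, event_b_us)]
print(latencies_ms)  # [48.5, 51.2, 52.3]
```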
These times are measured down to the microsecond. I can produce a reasonable, but not overly large, sample size for these latencies... let's call it 75 for the sake of argument. (If there are special considerations related to sample size in your solution, please state them; 75 is only an example.)
The current requirement for the latency states that it must be under some value, let's call it 55 milliseconds. The problem is: how do you prove this?
I have proposed (from my limited and half-forgotten statistics classes) that we collect samples and analyze them for an upper limit at some confidence level. While the latency limit is sometimes (rarely) exceeded, I would like to show that the system is still valid.
I have read quite a bit over the past couple days about limits and curve distributions and all kinds of other stuff. Unfortunately, beyond standard deviations and error factors, I don't remember much of my statistics training.
What I need is a method to deal with this data that is relatively clear. I can follow the math, but when people start talking "Test for Cornelius distribution and if valid apply the Upjohn factor" I get lost. Ok, I made that up, but I think you get my point.
Here's what I got:
A set of samples (say 75 observations) of latency data in milliseconds. A confidence level (say 95%).
What I need:
I need to be able to make the statement that the latency will not exceed some upper limit within the stated confidence level. The final confidence value is still to be determined, but you can use 95% for an example.
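To show the kind of answer I'm after, here's my naive attempt from half-remembered stats. The sample data is randomly generated just for illustration, and I suspect the 1.645 multiplier is wrong for a sample of this size, which is exactly the part I need help with:

```python
import random
import statistics

# Made-up sample of 75 latency values in milliseconds (illustration only).
random.seed(0)
sample = [random.gauss(45.0, 3.0) for _ in range(75)]

mean = statistics.mean(sample)
sd = statistics.stdev(sample)

# Naive attempt: 1.645 is the one-sided 95% quantile of the standard normal,
# but I suspect the correct multiplier also depends on the sample size --
# that derivation is what I'm asking for.
upper = mean + 1.645 * sd
print(f"claimed 95% upper bound: {upper:.1f} ms")
```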
Clear, concise instructions. If you pull in a factor, I don't necessarily need to know what it's called, but I do need to know how it was derived if it's pertinent. (i.e., if you take the 95% and derive some other factor from it, I need to know how you did it.)
References to more advanced (less digestible) material are acceptable, but try to point me in the right direction if you can. I am willing to do my own work, but I can't afford to get lost in a sea of technical jargon.
Conditions:
The latency will never go below 0. (Yeah, I know, but this seemed important in a lot of the theories I was reading.)
I am not positive whether the data is truly random, cyclic, or merely unpredictable... but it does all seem to fall within a certain range. I am making the assumption that it has a distribution that can be analyzed in this manner.
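One distribution-free idea I stumbled across (and may be misapplying) is to use the sample maximum as the limit: if I followed the derivation, the confidence that at least 95% of all latencies fall below the largest of n observations works out like this:

```python
# Distribution-free idea: if the largest of n observations is used as the
# limit, the probability that at least 95% of the population falls below it
# is 1 - 0.95**n (assuming I understood the derivation correctly).
n = 75          # my sample size
coverage = 0.95 # fraction of the population I want below the limit
confidence = 1 - coverage ** n
print(f"{confidence:.3f}")  # roughly 0.979
```

If that's right, 75 samples would already give me better than 95% confidence without assuming any particular distribution, but I'd appreciate confirmation that I'm reading it correctly.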
Thanks in advance for your time!
(As a side note, any suggestions for appropriate tags for this question would be appreciated... I picked one that seemed close.)