I am working on reducing the data from a particular type of particle detector. When struck by a particle, this detector produces a voltage pulse that has the form of a Gaussian convolved with an exponential decay.
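For context, a Gaussian convolved with a one-sided exponential decay is the exponentially modified Gaussian (EMG). Here is a minimal sketch of that pulse model; the parameter names are my own, not from any paper:

```python
import numpy as np
from scipy.special import erfc

def emg_pulse(t, amplitude, mu, sigma, tau):
    """Exponentially modified Gaussian: a Gaussian(mu, sigma) convolved
    with a one-sided exponential decay of time constant tau.
    'amplitude' is the total area of the pulse."""
    arg = (sigma**2 / tau - (t - mu)) / (np.sqrt(2.0) * sigma)
    return (amplitude / (2.0 * tau)) * np.exp(
        sigma**2 / (2.0 * tau**2) - (t - mu) / tau
    ) * erfc(arg)

# Example pulse: rise set by sigma, tail set by tau.
t = np.linspace(0.0, 50.0, 500)
pulse = emg_pulse(t, amplitude=1.0, mu=10.0, sigma=1.5, tau=5.0)
```

Since `amplitude` multiplies a normalized density, the numerical area of the sampled pulse comes out close to 1.0 for these parameters.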
Among the earlier work done with this sort of detector, I've found one journal article that suggests:
"Oftentimes, the signal obtained with a current-mode detector will have less than satisfactory statistics and will contain statistical noise as well as fluctuations due to digitization noise. In order to fit this data, it is sometimes more reliable to fit the integral of the signal."
No further justification is given for this statement, and I've never heard of doing this before.
I have no problem implementing this approach, and indeed it works well for me. But I would like to know why and when it is a good idea, how I might have thought of it on my own, and what mathematical justification makes integral fitting superior in certain cases. I'm not asking for a highly rigorous proof, so some hand-waving is fine, but I want more than the bare assertion made in this paper.
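For concreteness, the two strategies I'm comparing can be sketched as below, on synthetic data. This is my own reading of "fit the integral" (fit the running integral of the data with the running integral of the model), not necessarily the paper's exact procedure, and the parameter values are made up:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import exponnorm

def emg_pulse(t, amplitude, mu, sigma, tau):
    # Gaussian(mu, sigma) convolved with an exponential decay of time
    # constant tau; scipy's exponnorm is exactly this shape with K = tau/sigma.
    return amplitude * exponnorm.pdf(t, tau / sigma, loc=mu, scale=sigma)

# Synthetic pulse with additive noise standing in for real detector data.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 50.0, 500)
dt = t[1] - t[0]
true_params = (1.0, 10.0, 1.5, 5.0)  # amplitude, mu, sigma, tau
noisy = emg_pulse(t, *true_params) + rng.normal(0.0, 0.01, t.size)

p0 = (0.5, 8.0, 1.0, 3.0)
bounds = ([0.0, 0.0, 0.1, 0.1], [10.0, 50.0, 10.0, 50.0])

# Strategy 1: fit the pulse shape directly.
p_direct, _ = curve_fit(emg_pulse, t, noisy, p0=p0, bounds=bounds)

# Strategy 2: fit the running integral of the data with the running
# integral of the model (both formed here by a simple cumulative sum).
def emg_running_integral(t, amplitude, mu, sigma, tau):
    return np.cumsum(emg_pulse(t, amplitude, mu, sigma, tau)) * dt

p_integral, _ = curve_fit(emg_running_integral, t, np.cumsum(noisy) * dt,
                          p0=p0, bounds=bounds)
```

Both fits recover the true parameters here; what I'm after is an argument for why the second one should be more reliable when the statistics are poor.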