
For impedance calculations, I have to form the quotient of voltage and current. This is done in the frequency domain, e.g. after a Fourier transform to obtain a spectrum. I'm trying to calculate the SNR of the impedance based on the individual SNRs of the voltage and current.

In question 31440, the multiplication of two noisy signals was derived. I'm failing to do the same for division. I'm fine with also assuming that "all signals are independent of each other and have zero mean" [1]. If really necessary, I could also assume that the noise is identical.

$$\frac{x_1+n_1}{x_2+n_2}$$

$$ \mathrm{SNR}_1= \frac{\sigma_{x_1}^2}{ \sigma_{n_1}^2} \\ \mathrm{SNR}_2= \frac{\sigma_{x_2}^2}{ \sigma_{n_2}^2} $$
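For context, here is roughly the calculation I have in mind (a minimal numpy sketch; the sampling rate, test tone, impedance value, and noise levels are just placeholder assumptions, not my actual measurement):

```python
import numpy as np

# Minimal sketch of the setup (placeholder values): impedance as the
# quotient of the voltage and current spectra after an FFT.
fs = 1000.0                                  # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)                # 1 s of data
f0 = 50.0                                    # excitation frequency in Hz (assumed)

v_sig = 0.25 * np.sin(2 * np.pi * f0 * t + 0.3)   # voltage x_1 (|Z| = 2.5, phase 0.3 rad)
i_sig = 0.10 * np.sin(2 * np.pi * f0 * t)         # current x_2

rng = np.random.default_rng(0)
v_noisy = v_sig + 0.01 * rng.standard_normal(t.size)   # x_1 + n_1
i_noisy = i_sig + 0.01 * rng.standard_normal(t.size)   # x_2 + n_2

V = np.fft.rfft(v_noisy)
I = np.fft.rfft(i_noisy)
Z = V / I                                    # impedance spectrum: (x_1 + n_1) / (x_2 + n_2) per bin

k0 = int(round(f0 * t.size / fs))            # FFT bin of the excitation tone
print(abs(Z[k0]), np.angle(Z[k0]))           # close to 2.5 and 0.3 at the tone
```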

Thanks a lot :)

AlexB
  • Yes, you are right. But I wanted to keep it general and analogous to the multiplication question. – AlexB Apr 19 '23 at 08:27
  • Are you doing electrical impedance spectroscopy (EIS)? I saw the acronym on the data plots you linked in a comment. – Ed V Apr 20 '23 at 01:17
  • Yes, exactly. In the end, I want to use it for passive online impedance spectroscopy on batteries in a microcontroller. For the theoretical analysis, I will often restrict myself to a single sine, but in the application I will see arbitrary signals. – AlexB Apr 20 '23 at 09:01

1 Answer


My original answer is below. I proceeded to validate the result with a quick simulation and immediately found some fundamental flaws with the premise. First, I want to offer that if the intention is an impedance measurement comparing voltage to current, the "signal" waveforms should be highly correlated (with a phase rotation for complex impedance cases). For that purpose, and importantly assuming the noise itself is uncorrelated between the voltage and current measurements, I recommend a correlation-based approach, which in the presence of additive white noise (and an impedance that is stable over the course of the measurement) would provide the best estimate of the desired magnitude ratio as well as the relative phase, from which an accurate impedance can be derived (ideal for a Smith chart measurement if desired). This should be done on the time-domain signals directly and not on FFT magnitudes, which convert Gaussian distributions with zero mean into Rayleigh distributions, with the penalty of doing post-detection estimation. If there is further interest in the details of such an approach, I suggest posting that as a separate question, since it simply motivated the interesting stochastic-processing question posed here.
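To make that suggestion concrete, here is a rough sketch of one way to do a correlation-based estimate (the single-tone excitation, the use of scipy's Hilbert transform for the analytic signal, and all numeric values are my own illustrative assumptions, not something taken from the question):

```python
import numpy as np
from scipy.signal import hilbert

# Sketch of a correlation-based impedance estimate (illustrative only).
# Assumes a single-tone excitation and independent additive noise on v and i.
fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
f0 = 50.0
i_true = 0.10 * np.sin(2 * np.pi * f0 * t)
v_true = 0.25 * np.sin(2 * np.pi * f0 * t + 0.3)     # |Z| = 2.5, phase = 0.3 rad

rng = np.random.default_rng(1)
v = v_true + 0.02 * rng.standard_normal(t.size)
i = i_true + 0.02 * rng.standard_normal(t.size)

# Analytic (complex) versions of the time-domain waveforms so the estimate
# captures relative phase as well as magnitude.
va = hilbert(v)
ia = hilbert(i)

# Least-squares / correlation estimate: cross-correlation of v with i at
# lag zero, normalized by the power in i. Uncorrelated noise averages out.
Z_hat = np.vdot(ia, va) / np.vdot(ia, ia)
print(abs(Z_hat), np.angle(Z_hat))                   # close to 2.5 and 0.3
```

The sum over the full record is what averages the uncorrelated noise down, and the complex ratio delivers magnitude and relative phase in a single step.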

Apart from that, I found the OP's challenge interesting with reference to MattL's original post showing the solution for multiplying two noisy signals, and proceeded with a generalized solution for dividing two noisy signals as the OP directed, with the conditions of independence and zero mean for the noise as well as the signals. I then proceeded with a validation of a simple case of Gaussian-distributed signals and noise (to confirm the $K$ factor detailed below), and from that saw the issue with the measurement as constructed:

If the signals were actually zero mean and independent, then the denominator, consisting of signal plus noise as a zero-mean quantity, will pass through zero and approach zero, and for all small values the quotient will explode! The measurement can't be done as described (but this does not pose a problem for my suggested correlation approach). I should have seen this right away, but it took my initial simulation to make it clear, with results shown in the plot below detailing the issue described above.

simulation results
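For anyone who wants to reproduce the effect, a minimal reconstruction of that simulation (not my exact script; the distributions and levels are illustrative assumptions) looks like this:

```python
import numpy as np

# Reconstruction of the validation experiment (illustrative assumptions):
# zero-mean Gaussian signals and noise, form the quotient, and compare the
# measured SNR of the result against equation (9).
rng = np.random.default_rng(2)
N = 100_000
sig_x1, sig_x2 = 1.0, 1.0        # signal standard deviations (assumed)
sig_n1, sig_n2 = 0.1, 0.1        # noise standard deviations (assumed)

x1 = sig_x1 * rng.standard_normal(N)
x2 = sig_x2 * rng.standard_normal(N)
n1 = sig_n1 * rng.standard_normal(N)
n2 = sig_n2 * rng.standard_normal(N)

q = (x1 + n1) / (x2 + n2)        # quotient of the two noisy signals
signal = x1 / x2                 # "signal" component per equation (2)
noise = q - signal               # remaining "noise" component per equation (3)

K = 2.0                          # Gaussian-distributed x2, so K = 2
snr1 = sig_x1**2 / sig_n1**2
snr2 = sig_x2**2 / sig_n2**2
snr_eq9 = snr1 * (snr2 * K + 1) / (snr2 * K + snr1)

# The sample statistics are dominated by near-zero denominators: the quotient
# has enormous outliers and its measured SNR is unstable from run to run.
print("predicted by (9):", snr_eq9)
print("measured:", np.var(signal) / np.var(noise))
print("largest |quotient|:", np.max(np.abs(q)))
```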

I am hoping the above provides a useful answer and guidance as to next steps toward a productive impedance measurement, and I will leave the details below as an interesting exercise in using stochastics for further review and scrutiny. I don't see a mistake in the specific steps shown below, other than now recognizing that the very first line will lead to an unmanageable signal given the divide-by-zero conditions, and for that reason I am unable to verify the result.


Original response:

$$\frac{x_1 + n_1}{x_2+n_2} = \frac{x_1}{x_2}+\frac{n_1 x_2 - n_2 x_1}{x_2(n_2+x_2)} \tag{1}\label{1}$$
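(As a quick algebraic sanity check of \ref{1}, a symbolic simplification, here with sympy purely for convenience, confirms that the right-hand side collapses back to the left-hand side.)

```python
import sympy as sp

# Symbolic check of the decomposition in equation (1).
x1, x2, n1, n2 = sp.symbols('x1 x2 n1 n2')
lhs = (x1 + n1) / (x2 + n2)
rhs = x1 / x2 + (n1 * x2 - n2 * x1) / (x2 * (n2 + x2))
print(sp.simplify(lhs - rhs))    # prints 0, so the identity holds
```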

The signal component in the division result is

$$x = \frac{x_1}{x_2}\tag{2}\label{2}$$

And the noise is

$$ n =\frac{n_1 x_2 - n_2 x_1}{x_2(n_2+x_2)}\tag{3}\label{3}$$

Given independent zero mean signals, the signal power is:

$$\sigma_x^2 = \frac{\sigma_{x_1}^2}{\sigma_{x_2}^2}\tag{4}\label{4}$$

And the noise power is (note! the variance of the difference is the sum of the individual variances):

$$\sigma_n^2 = \frac{\sigma_{n_1}^2\sigma_{x_2}^2 +\sigma_{ n_2}^2\sigma_{ x_1}^2}{\sigma_{x_2}^2\sigma_{n_2}^2+K\sigma_{x_2}^4}\tag{5}\label{5}$$

Where $K\sigma_{x_2}^4$ represents the variance of $x_2^2$ and $\sigma_{x_2}^4 = (\sigma_{x_2}^2)^2$ (the variance of $x_2$, squared).

Note the factor of $K$: knowing only the variance of the signal $x_2$ is insufficient information to compute the variance of $x_2^2$. For example, if we also knew the signal was Gaussian distributed then $K=2$, and if the signal were a sinusoid then $K=1.5$ (with "mean-square" instead of "variance"), but $K$ would be different for other distributions.
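(A quick numerical check of those $K$ values, again just an illustrative snippet, reproduces $K \approx 2$ for a Gaussian and $K \approx 1.5$ for a random-phase sinusoid.)

```python
import numpy as np

# Numerical check of K for two distributions of x2: the spread of x2^2
# relative to sigma_x2^4 (variance for the Gaussian case, mean-square for
# the sinusoid case, as noted above).
rng = np.random.default_rng(3)
N = 1_000_000

x_gauss = rng.standard_normal(N)
print(np.var(x_gauss**2) / np.var(x_gauss)**2)    # ~2.0 for a Gaussian

phase = 2 * np.pi * rng.random(N)                 # random-phase, unit-variance sinusoid
x_sine = np.sqrt(2) * np.sin(phase)
print(np.mean(x_sine**4) / np.var(x_sine)**2)     # ~1.5 for a sinusoid
```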

Proceeding with the SNR as ratio of signal to noise:

$$\text{SNR}= \frac{\sigma_x^2}{\sigma_n^2} = \frac{\sigma_{x_1}^2}{\sigma_{x_2}^2}\frac{\sigma_{x_2}^2\sigma_{n_2}^2+K\sigma_{x_2}^4}{\sigma_{n_1}^2\sigma_{x_2}^2 +\sigma_{ n_2}^2\sigma_{ x_1}^2}\tag{6}\label{6}$$

$$= \frac{ \sigma_{x_1}^2K\sigma_{x_2}^4+\sigma_{x_1}^2 \sigma_{x_2}^2\sigma_{n_2}^2} {\sigma_{n_1}^2K\sigma_{x_2}^4+\sigma_{x_1}^2 \sigma_{x_2}^2\sigma_{n_2}^2 }\tag{7}\label{7}$$

where $K\sigma_{x_2}^4$ represents the variance of $x_2^2$, $K$ is a proportionality constant that depends on the distribution of the signal $x_2$, and $\sigma_{x_2}^4 = (\sigma_{x_2}^2)^2$.

Further simplifying \ref{7} as follows:

$$\text{SNR } = \frac{\sigma_{x_2}^2\sigma_{x_1}^2 ( K\sigma_{x_2}^2+\sigma_{n_2}^2)} {\sigma_{x_2}^2(\sigma_{n_1}^2K\sigma_{x_2}^2+\sigma_{x_1}^2 \sigma_{n_2}^2 )} = \frac{ \sigma_{x_1}^2(K\sigma_{x_2}^2+\sigma_{n_2}^2)} {\sigma_{n_1}^2K\sigma_{x_2}^2+\sigma_{x_1}^2 \sigma_{n_2}^2 }\tag{8}\label{8}$$

With $\text{SNR}_1 = \sigma_{x_1}^2/\sigma_{n_1}^2$ and $\text{SNR}_2 = \sigma_{x_2}^2/\sigma_{n_2}^2$, we can rewrite \ref{8} as:

$$\text{SNR} = \frac{\text{SNR}_1(\text{SNR}_2K+1)}{\text{SNR}_2K+\text{SNR}_1}\tag{9}\label{9}$$
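For convenience, \ref{9} is trivial to evaluate numerically; a small helper is shown below (illustrative only, and the divide-by-zero caveat above still applies to the underlying measurement):

```python
import numpy as np

# Evaluate equation (9) for given linear SNRs and distribution factor K.
def snr_of_quotient(snr1: float, snr2: float, K: float = 2.0) -> float:
    """Predicted SNR of (x1 + n1) / (x2 + n2) per equation (9)."""
    return snr1 * (snr2 * K + 1.0) / (snr2 * K + snr1)

# Example: two 20 dB (linear 100) inputs with a Gaussian-distributed x2.
print(10 * np.log10(snr_of_quotient(100.0, 100.0)))   # about 18.3 dB
```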

Dan Boschen
  • That's very helpful. Smaller adaptations might be necessary: $n=\frac{n_1 x_2-n_2 x_1}{x_2\left(n_2+x_2\right)}$ and $\mathrm{SNR}=\frac{\sigma_x^2}{\sigma_n^2}=\frac{\sigma_{x_1}^2}{\sigma_{x_2}^2} \frac{\sigma_{x_2}^2 \sigma_{n_2}^2+K \sigma_{x_2}^4}{\sigma_{n_1}^2 \sigma_{x_2}^2-\sigma_{n_2}^2 \sigma_{x_1}^2}$ – AlexB Apr 19 '23 at 10:18
  • Could you agree on this: $\mathrm{SNR}=\frac{\sigma_{x_1}^2 \sigma_{x_2}^2 \sigma_{n_2}^2+\sigma_{x_1}^2 K \sigma_{x_2}^4}{\sigma_{n_1}^2 K \sigma_{x_2}^4-\sigma_{x_1}^2 \sigma_{x_2}^2 \sigma_{n_2}^2}=\frac{\sigma_{x_1}^2 \sigma_{x_2}^2+\sigma_{x_1}^2 \sigma_{n_2}^2}{\sigma_{x_2}^2 \sigma_{n_1}^2-\sigma_{x_1}^2 \sigma_{n_2}^2}$, with $\mathrm{SNR}_1=\frac{\sigma_{x_1}^2}{\sigma_{n_1}^2}$ and $\mathrm{SNR}_2=\frac{\sigma_{x_2}^2}{\sigma_{n_2}^2}$, giving $\mathrm{SNR}=\frac{\mathrm{SNR}_1 \mathrm{SNR}_2+\mathrm{SNR}_1}{\mathrm{SNR}_2-\mathrm{SNR}_1}$? – AlexB Apr 19 '23 at 10:19
  • @AlexB Ah yes I multiplied Signal and Noise rather than divide (following your first comment)...let me fix that and then digest your second one – Dan Boschen Apr 19 '23 at 11:16
  • I'm puzzled. The fundamental issue with quotients of experimental results is avoiding division by small values. An SNR has a nominal range of zero to infinity, so the denominator SNR has to be restricted away from zero to avoid having the ratio of SNRs "blow up", as it were. – Ed V Apr 19 '23 at 11:33
  • @AlexB I'm not confident your next step is correct. Are we convinced that the variance of $x_2(n_2+x_2)$ equals $\sigma_{x_2}^2(\sigma_{n_2}^2 + \sigma_{x_2}^2)$ as that would then equal $\sigma_{x_2}^2\sigma_{n_2}^2+ (\sigma_{x_2}^2)^2$? I think we would need to be able to do that to then divide out $\sigma_{x_2}^2$ as you have done. – Dan Boschen Apr 19 '23 at 11:40
  • Right, and to @EdV's good point: that conclusion in AlexB's comment would then suggest an infinite SNR when the two SNRs are equal, which would not make sense. – Dan Boschen Apr 19 '23 at 11:44
  • I can just about follow what you have done. Unfortunately, I can no longer evaluate whether it is correct; that exceeds my knowledge. Sorry, I do not have enough reputation on Stack Exchange to answer in the chat. If you @Dan Boschen and Ed V agree, I would accept the answer. I can read your chat. – AlexB Apr 19 '23 at 14:26
  • I can't find a mistake, but it doesn't work for my measurements yet. Here is a picture of my measurement results to give you a better understanding, if interested: https://t1p.de/7ff5c . The measured SNR (linear) is 0.0651 for the impedance; based on (9) it should be 119.4003. Yes, I have neither zero mean nor a proper normal distribution for my noise (actually, a nearly perfect Rayleigh distribution), but shouldn't it still end up in roughly the right direction? – AlexB Apr 19 '23 at 23:15
  • @AlexB I've made my final updates to the post. See the new intro: it sounds now like you are using magnitudes of your FFT, which the original question didn't describe (it would avoid the divide-by-0 issue I ran into, but it would also be a completely different answer). Rather than try to redo all of that, I suggested another approach, since using post-detection estimates (with Rayleigh distributions, as you noted) will not optimize the SNR in your result. Consider the correlation approach I described; it should be far superior in the presence of independent noise on I and V (if that's the case). – Dan Boschen Apr 20 '23 at 03:23
  • I wish good answers, like this one, would get more upvotes. As I type this, I am the only upvoter. It takes thought, time and effort to craft a correct and helpful answer that is at an appropriate level. Apparently, some folks keep upvotes in their pockets, but they suffer from low pockets and short arms. – Ed V Apr 20 '23 at 19:33
  • @EdV Thank you, very kind. I get it: some of the best answers are the shortest, as many of us have little time. For the longer answers (like many I give), the title and content of the question need to have wide appeal, and the answer should be structured in a way that gets the reader quickly past the problem at hand. But most importantly, I hope the content was helpful to the OP. – Dan Boschen Apr 20 '23 at 23:38