
I have two 2D Gaussian random variables. I'd like to find the weighted average of the two (based on their covariance matrices around the means, meaning that the mean of the final Gaussian should be closer to the one with the smaller variance). What is the proper way of doing that? Any pointers to web URLs on this topic would be highly appreciated.

Clarification: I have a robot that has two sensors. The robot reads two bi-variate normal observations of an object. I want to find the location of the object by finding the weighted mean of the two samples. So I want my final location that I calculate to be closer to the observation that has a smaller spread. What is the proper way of doing that?

whuber
MarkSAlen
  • What is the variance of a 2D Gaussian random variable? – Dilip Sarwate Oct 29 '11 at 17:28
  • I meant the covariance matrix of the Gaussian. What I am interested in is: I have two observations that are Gaussian and I know their covariances. Now I want to combine the two (say I have two sensors that observe the location of a robot) — what is the proper way of combining the two variables? – MarkSAlen Oct 29 '11 at 18:05
  • Wait a minute. You said you have two $2D$ Gaussian random variables which makes four Gaussian random variables total, and that's what your Clarification is saying. So when you say "I have two observations that are Gaussian and I know their covariance", are you talking of just one pair of random variables? – Dilip Sarwate Oct 29 '11 at 19:40
  • You may try looking at Kalman filters. – Carvalho ARF Jan 30 '12 at 11:22

2 Answers


Presumably the sensors have been calibrated for no bias. Let the first sensor return the vector $(X_1,X_2)$ with covariance matrix $\Sigma$ and the second one return $(Y_1,Y_2)$ with covariance matrix $T$. A weighted linear combination $\mu(X_1,X_2)+\nu(Y_1,Y_2)$ remains unbiased if and only if $\mu+\nu=1$, while its covariance equals $\mu^2\Sigma + \nu^2 T$. This tells us the expectation of the squared error (squared Euclidean distance) equals $\mu^2(\sigma_{11}+\sigma_{22}) + \nu^2(\tau_{11}+\tau_{22})$, which is minimized when $\mu$ is proportional to $1/(\sigma_{11}+\sigma_{22})$ and $\nu$ is proportional to $1/(\tau_{11}+\tau_{22})$; the constant of proportionality is easily found from the $\mu+\nu=1$ restriction. (For an explanation of this see Intuitive explanation of contribution to sum of two normally distributed random variables.)
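A minimal sketch of this weighting in NumPy (the readings and covariances are made-up values for illustration): each weight is inversely proportional to the trace of its sensor's covariance, then the pair is normalized so the weights sum to 1.

```python
import numpy as np

# Hypothetical unbiased readings from the two sensors.
x = np.array([1.0, 2.0])      # sensor 1 reading
y = np.array([2.0, 4.0])      # sensor 2 reading
Sigma = np.eye(2)             # sensor 1 covariance, trace = 2
T = 4.0 * np.eye(2)           # sensor 2 covariance, trace = 8

# Weights inversely proportional to the covariance traces,
# normalized so that mu + nu = 1 (keeps the estimate unbiased).
inv_traces = np.array([1.0 / np.trace(Sigma), 1.0 / np.trace(T)])
mu_w, nu_w = inv_traces / inv_traces.sum()

fused = mu_w * x + nu_w * y
print(mu_w, nu_w)   # 0.8 0.2
print(fused)        # [1.2 2.4]
```

With these numbers the noisier sensor (trace 8) gets weight 0.2, so the fused estimate sits closer to the tighter sensor's reading, as asked for in the question.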

whuber
  • Do you assume that the constant of proportionality is the same for both $\mu$ and $\nu$? If not, the system has many solutions, right? – MarkSAlen Jan 09 '12 at 11:06
  • I am merely sketching out the solution of a straightforward optimization problem, Mark: minimize expected squared error (as a function of $\mu$ and $\nu$) subject to a single constraint on the variables $\mu$ and $\nu$. Except for degenerate problems, the solution will be a discrete collection of points. In this case it's unique. – whuber Jan 09 '12 at 14:40

Despite this question being asked a long time ago, someone might still find this useful.

As the comments suggest, a Kalman filter would be the way to go. I was dealing with a similar problem recently, and the way I ended up combining the two measurements in this case is the following.

Let us have 2 measurements with means $\mathbf{\mu}_{1,2} \in \mathbb{R}^2$ and associated covariance matrices $\mathbf{\Sigma}_{1,2} \in \mathbb{R}^{2\times2}$. Then their combination into the final estimate amounts to $$\mathbf{K} = \mathbf{\Sigma}_1\left(\mathbf{\Sigma}_1+\mathbf{\Sigma}_2\right)^{-1},\\ \mathbf{\mu} = \mu_1 + \textbf{K}\left(\mu_2 - \mu_1\right),\\ \mathbf{\Sigma} = \mathbf{\Sigma}_1 - \mathbf{K\Sigma}_1, $$ where $\mu$ is the resulting mean and $\mathbf{\Sigma}$ its covariance.
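The three formulas above translate directly into NumPy; the means and covariances below are hypothetical values chosen so the effect of the gain is easy to see.

```python
import numpy as np

# Hypothetical measurements: mu2 is noisier (larger covariance) than mu1.
mu1 = np.array([0.0, 0.0])
mu2 = np.array([2.0, 2.0])
S1 = np.eye(2)          # covariance of measurement 1
S2 = 3.0 * np.eye(2)    # covariance of measurement 2

# Gain, fused mean, and fused covariance, exactly as in the formulas above.
K = S1 @ np.linalg.inv(S1 + S2)
mu = mu1 + K @ (mu2 - mu1)
S = S1 - K @ S1

print(mu)   # [0.5 0.5] -- pulled only a quarter of the way toward mu2
print(S)    # 0.75 * I  -- smaller than either input covariance's weight on mu1
```

Note how the fused mean lands closer to the measurement with the smaller covariance, and the fused covariance is smaller than $\mathbf{\Sigma}_1$, reflecting the information gained from the second measurement.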

More on this can be found, together with the problem statement and a more in-depth explanation here.

domiinio