
What is the likelihood function of an i.i.d. bivariate normal sample $(x_i, y_i)$, where the mean is a known vector $(\mu_x, \mu_y)$ and the covariance structure is a known variance-covariance matrix?

I'm a little confused about how bivariate normals work, and I'm stuck on figuring out the likelihood function.

Thanks in advance if anyone can reach out and help!

  • When you say "independent" do you mean that $X_i$ and $Y_i$ are independently distributed (this would imply their covariance is zero), or are you referring to the fact that the sample is independently distributed, i.e. that $(X_i,Y_i)$ is independent of $(X_j,Y_j)$ for $i \neq j$? I think you mean the latter, in which case you should ask, "what is the likelihood function of an iid bivariate normal sample?", where iid stands for independently and identically distributed. Just to make the question clearer. – Zachary Blumenfeld Dec 14 '15 at 05:44
  • @ZacharyBlumenfeld Yes, it should be iid. Sorry I didn't make that clear enough. – user98139 Dec 14 '15 at 05:48
  • So both the density and the log-likelihood of the multivariate normal are given on Wikipedia https://en.wikipedia.org/wiki/Multivariate_normal_distribution#Density_function and can be found in numerous other places. Where are you having trouble? Do you know how to maximize the likelihood of an iid univariate normal sample? Is it that you don't understand how to derive the multivariate likelihood? How to show first-order conditions? How to maximize it? Or are you looking for some sort of intuition? The more detail on exactly where you are stuck, the better. – Zachary Blumenfeld Dec 14 '15 at 05:58

1 Answer


The bivariate normal density can be expressed as follows:

$p(\mathbf{x})=\frac{1}{2\pi\sqrt{|\Sigma|}}\exp\left(-\frac{1}{2}(\mathbf{x}-\mu)^T\Sigma^{-1}(\mathbf{x}-\mu)\right)$

where $\mu$ is the mean vector and $\Sigma$ is the variance-covariance matrix.
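As a quick numerical check (a minimal sketch, not part of the original answer; the values of $\mu$, $\Sigma$, and the sample point are made up for illustration), the formula above can be compared against scipy's implementation:

```python
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([0.0, 1.0])              # assumed mean vector (mu_x, mu_y)
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])         # assumed variance-covariance matrix
x = np.array([0.3, 1.2])               # a sample point (x_i, y_i)

# Density by the formula: (2*pi)^{-1} |Sigma|^{-1/2} exp(-0.5 (x-mu)' Sigma^{-1} (x-mu))
d = x - mu
quad = d @ np.linalg.solve(Sigma, d)   # quadratic form (x-mu)^T Sigma^{-1} (x-mu)
p_manual = np.exp(-0.5 * quad) / (2 * np.pi * np.sqrt(np.linalg.det(Sigma)))

p_scipy = multivariate_normal(mean=mu, cov=Sigma).pdf(x)
print(p_manual, p_scipy)               # the two values should agree
```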

First you need to understand what $\Sigma$ is. It is a matrix whose $(i,j)^{th}$ element is the covariance between the $i^{th}$ and $j^{th}$ variables. Just as in the univariate case the shape of the distribution depends on the variance $\sigma^2$, in the bivariate case the shape depends on $\Sigma$. In the univariate case the density at a point depends on the distance of that point from $\mu$ relative to $\sigma$. The same thing happens in the bivariate case, but here you can move away from the mean in infinitely many directions. If you move in a direction in which the variance is low, the density decreases faster than if you move in a direction in which the variance is high.

For example, suppose the variance of the first variable is $\sigma_1^{2}$, the variance of the second variable is $\sigma_2^{2}$, and the covariance between variable 1 and variable 2 is zero. Then you get elliptical density contours whose shape depends on the values of $\sigma_1^{2}$ and $\sigma_2^{2}$. If $\sigma_1^{2} > \sigma_2^{2}$, the major axis of the ellipse lies in the direction of variable 1 and the minor axis in the direction of variable 2, as the worked form below shows.
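To make that concrete (a worked form of the density, filling in a step the answer leaves implicit): with zero covariance, $\Sigma = \begin{pmatrix}\sigma_1^{2} & 0\\ 0 & \sigma_2^{2}\end{pmatrix}$, so $|\Sigma| = \sigma_1^{2}\sigma_2^{2}$ and the density factors as

$p(x,y)=\frac{1}{2\pi\sigma_1\sigma_2}\exp\left(-\frac{1}{2}\left[\frac{(x-\mu_x)^2}{\sigma_1^{2}}+\frac{(y-\mu_y)^2}{\sigma_2^{2}}\right]\right)$

Its level sets $\frac{(x-\mu_x)^2}{\sigma_1^{2}}+\frac{(y-\mu_y)^2}{\sigma_2^{2}}=c$ are ellipses whose axes lie along the coordinate directions, with the longer axis along the variable with the larger variance.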

Now that you understand the bivariate normal density, you can write the likelihood function just as in the single-variable case.

L($\mu, \Sigma$) = $\prod_{i=1}^{n} p(\mathbf{x}_i)$, where $\mathbf{x}_i = (x_i, y_i)^T$.
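Taking logs, which is how the likelihood is usually handled in practice, the i.i.d. bivariate log-likelihood expands to

$\ell(\mu,\Sigma)=\sum_{i=1}^{n}\log p(\mathbf{x}_i)=-n\log(2\pi)-\frac{n}{2}\log|\Sigma|-\frac{1}{2}\sum_{i=1}^{n}(\mathbf{x}_i-\mu)^T\Sigma^{-1}(\mathbf{x}_i-\mu)$

A minimal numerical sketch of this (not from the original answer; $\mu$, $\Sigma$, and the simulated sample are made up for illustration):

```python
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([0.0, 1.0])                  # assumed known mean vector
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])             # assumed known covariance matrix

rng = np.random.default_rng(0)
sample = rng.multivariate_normal(mu, Sigma, size=100)   # simulated (x_i, y_i) pairs

# i.i.d. log-likelihood: sum the per-point log-densities (numerically stabler
# than multiplying the densities themselves).
loglik = multivariate_normal(mean=mu, cov=Sigma).logpdf(sample).sum()
print(loglik)
```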

Abhinav Gupta
  • The normalizing constant for the $k$-variate normal is $1/\sqrt{(2\pi)^k |\Sigma|}$, but for the likelihood it is not needed, so it can be dropped. – Tim Jun 24 '20 at 08:15