
I am trying to understand the difference between orthogonality and zero correlation. There are many insightful questions and answers on the topic in this forum, but I'm looking for some practical examples. Could you suggest examples where $X$ is a (non-trivial) $2\times 1$ random vector, $U$ is a scalar continuous random variable, and:

Case 1. $E(XU) = 0$, $\operatorname{Cov}(X,U) \neq 0$, $E(U) \neq 0$, $E(X) \neq 0$

Case 2. $E(XU) \neq 0$, $\operatorname{Cov}(X,U) = 0$, $E(U) \neq 0$, $E(X) \neq 0$

Star
    It is so easy to find examples of both with Bernoulli variables that I encourage you to look for them yourself. – whuber Jul 29 '22 at 20:10
  • And when $U$ is continuous? – Star Jul 29 '22 at 20:44
  • Tiny variations of those examples will work. Add a tiny bit of iid measurement error to $X$ and $U$ to get started. – whuber Jul 29 '22 at 21:12
  • I haven't been able to come up with an example. Could you advise? – Star Jul 30 '22 at 09:03
  • Try some $2\times 2$ tables. Then add iid standard Normal values to those variables. – whuber Jul 30 '22 at 12:19
  • Re the edit: the conditions on $E(X)$ are superfluous, because the covariance imposes a relationship between $E(XU)$ and $E(X)E(U).$ Thus, if exactly one of $E(XU)$ and $\operatorname{Cov}(X,U)$ is nonzero, then both of $E(U)$ and $E(X)$ must be nonzero. – whuber Jul 30 '22 at 16:24

1 Answer


To construct examples, use the linearity property of expectation and the fact that expectations of products of independent random variables are the products of the expectations.

In many situations we can construct examples of random variables with given properties out of truly simple variables, such as binary or ternary variables. When examples involving continuous distributions are needed, a slight modification of these examples -- replacing their probability masses by narrow distributions around those values -- often will work.

Case 1

Let $(X,U)$ take on the values $(0,1),$ $(1,0),$ and $(0,0)$ with equal probability. Here is a table of its probability mass function.

$$\begin{array}{l|cc} \text{}& U=0 & U=1\\ \hline X=0 & \frac{1}{3} & \frac{1}{3}\\ X=1 & \frac{1}{3} & 0 \end{array}$$

Because $1 = \Pr(XU=0),$ $E[XU]=0.$ And since $X$ and $U$ are identically distributed Bernoulli$(1/3)$ variables, neither has zero expectation. The covariance is

$$\operatorname{Cov}(X,U) = E[XU] - E[X]E[U] = -E[X]E[U]\ne 0.$$
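As a quick numerical sanity check (a small simulation sketch in Python with NumPy, not part of the construction itself), we can draw from this pmf and estimate the moments:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Draw (X, U) uniformly from the three support points (0,1), (1,0), (0,0).
support = np.array([(0, 1), (1, 0), (0, 0)])
idx = rng.integers(0, 3, size=n)
X, U = support[idx, 0], support[idx, 1]

print((X * U).mean())        # exactly 0: the product XU is identically zero
print(X.mean(), U.mean())    # ~ 1/3 each: nonzero expectations
print(np.cov(X, U)[0, 1])    # ~ -1/9: nonzero covariance
```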

To create an example with a continuous distribution, let $Z_i$ be independent zero-mean continuous variables (also independent of $(X,U)$) and set $X^\prime = X+Z_1,$ $U^\prime = U + Z_2.$ Compute

$$E[X^\prime] = E[X] + E[Z_1] = E[X] \ne 0;$$ $$E[U^\prime] = E[U] + E[Z_2] = E[U] \ne 0;$$ $$\begin{aligned} E[X^\prime U^\prime] &= E[XU] + E[Z_1 U] + E[X Z_2] + E[Z_1 Z_2] \\ &= 0 + E[Z_1]E[U] + E[X]E[Z_2] + E[Z_1]E[Z_2] \\ &= 0+0+0+0 = 0. \end{aligned}$$

Consequently the covariance is unchanged, so $\operatorname{Cov}(X^\prime, U^\prime) \ne 0.$
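These moments, too, can be checked by simulation. Here is the same sketch as above with the perturbation added (standard Normal noise is just one convenient choice of zero-mean continuous $Z_i$):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# The discrete (X, U) from before, perturbed by independent zero-mean noise.
support = np.array([(0, 1), (1, 0), (0, 0)])
idx = rng.integers(0, 3, size=n)
X, U = support[idx, 0], support[idx, 1]
Xp = X + rng.standard_normal(n)  # X' = X + Z_1
Up = U + rng.standard_normal(n)  # U' = U + Z_2

print((Xp * Up).mean())          # ~ 0: still orthogonal
print(Xp.mean(), Up.mean())      # ~ 1/3 each: means unchanged
print(np.cov(Xp, Up)[0, 1])      # ~ -1/9: covariance unchanged
```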

Case 2

Let $X$ and $U$ be independent with finite nonzero expectations. You can check that all the requirements hold because $E[XU]=E[X]E[U]\ne 0.$
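A minimal numerical illustration (the shifted Normal distributions here are an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# Independent variables with nonzero means.
X = 1.0 + rng.standard_normal(n)   # E[X] = 1
U = 2.0 + rng.standard_normal(n)   # E[U] = 2

print((X * U).mean())      # ~ 2 = E[X]E[U]: not orthogonal
print(np.cov(X, U)[0, 1])  # ~ 0: uncorrelated, by independence
```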

There exist other examples in which $X$ and $U$ are not independent. For instance, take any two variables $(X,U)$ with zero covariance, independent or not. Adding constants to $X$ and $U$ will not change the covariance, but it will change their expectations as well as the expectation of $XU.$ Choose suitable constants to make all those expectations nonzero.
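One concrete instance of this recipe (an illustrative choice, not the only one: a symmetric variable and its square are dependent yet uncorrelated, and shifting both by constants makes every expectation nonzero):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

# Z and Z^2 are dependent but uncorrelated when Z is symmetric about 0.
Z = rng.standard_normal(n)
X = Z + 1.0      # E[X] = 1
U = Z**2 + 1.0   # E[U] = 2, and U is a function of X

print(np.cov(X, U)[0, 1])  # ~ 0: Cov(Z, Z^2) = E[Z^3] = 0
print((X * U).mean())      # ~ 2 = E[X]E[U], nonzero
```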

whuber