
I have two i.i.d. random variables, $\theta_1$ and $\theta_2$, each uniformly distributed on $[0,1]$, so that the pair is uniform on the unit square. I need to compute the joint posterior distribution of these two variables given their difference $m=\theta_1-\theta_2$, i.e. $f_{\theta|m}(\theta|m)$ where $\theta=(\theta_1,\theta_2)$. I then must use this posterior to compute the expectation of some function $g(\theta_1,\theta_2)$, $$\mathbb{E}_{f_{\theta|m}}[g(\theta_1,\theta_2)]$$

I am having trouble computing the posterior distribution $f_{\theta|m}(\theta|m)$ and reconciling the results with a graphical intuition (see below). It would be great if you could also help me to set up the integral for the expectation with the relevant boundaries.

My attempt

Using Bayes' theorem we can write $$f_{\theta|m}(\theta|m)=\frac{f_{m|\theta}(m|\theta)f_\theta(\theta)}{f_m(m)}$$ Now, $f_\theta(\theta)=1$ for all $(\theta_1,\theta_2) \in [0,1]\times[0,1]$. As for $f_{m|\theta}(m|\theta)$, this equals $1$ when $\theta_1$ and $\theta_2$ fall on the line $\theta_1-\theta_2=m$, and zero otherwise, so $f_{m|\theta}(m|\theta)$ should just be the indicator $\mathbb{I}_{\{\theta_1-\theta_2=m\}}(\theta_1,\theta_2)$. I computed the denominator using the convolution formula, and obtained $f_m(m)=1-|m|$ for $m \in [-1,1]$. Putting everything together, $$f_{\theta|m}(\theta|m)=\frac{1}{1-|m|} \ \ \text{ if } \ \theta_1-\theta_2=m$$ with $m \in [-1,1]$, and zero otherwise. I was wondering if this formula is correct and, if so, if there is a more explicit way to write the support of $(\theta_1,\theta_2)$ in terms of intervals. I need this because I must use this posterior to compute the expectation of a function of $\theta$, and for that I need to specify the limits of integration.
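The triangular marginal $f_m(m)=1-|m|$ from the convolution can be sanity-checked numerically. Here is a minimal Monte Carlo sketch I put together (the sample size, test point $m_0$, and band width are arbitrary choices of mine):

```python
import numpy as np

# Draw i.i.d. uniforms on the unit square and form m = theta1 - theta2.
rng = np.random.default_rng(0)
n = 1_000_000
theta1 = rng.uniform(size=n)
theta2 = rng.uniform(size=n)
m = theta1 - theta2

# The empirical density of m near m0 should approximate 1 - |m0|.
m0, eps = 0.3, 0.01
density = np.mean(np.abs(m - m0) < eps) / (2 * eps)
print(density)  # should be close to 1 - 0.3 = 0.7
```

This only checks the denominator $f_m(m)$, not the conditional density itself.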

Moreover, I was not able to reconcile this result with a graphical intuition. After learning $m$, I would expect the random vector $(\theta_1,\theta_2)$ to be uniformly distributed on the line segment $m=\theta_1-\theta_2$, with $\theta_1 \in [0,m+1]$ and $\theta_2 \in [-m,1]$ for $m<0$ as in the figure below (and similarly $\theta_1 \in [m,1]$ and $\theta_2 \in [0,1-m]$ for $m>0$). By the Pythagorean theorem, this segment has length $\sqrt{2}(1+m)$ if $m<0$ and $\sqrt{2}(1-m)$ if $m>0$, i.e. $\sqrt{2}(1-|m|)$, suggesting a density of $\frac{1}{\sqrt{2}(1-|m|)}$. This is similar to what I got with the previous method, but not identical: there is an additional factor of $\sqrt{2}$ for which I cannot find an interpretation.
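To probe the intuition, I also conditioned numerically on $m$ falling in a thin band around a test value $m_0>0$ and checked both the conditional uniformity of $\theta_1$ and the expectation computed along the segment. The choice of $m_0$, the band width, and the example function $g(\theta_1,\theta_2)=\theta_1\theta_2$ are all hypothetical choices of mine, just for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2_000_000
theta1 = rng.uniform(size=n)
theta2 = rng.uniform(size=n)
m = theta1 - theta2

# Condition on m lying in a thin band around m0 (m0 > 0 case).
m0, eps = 0.3, 0.005
sel = np.abs(m - m0) < eps
t1, t2 = theta1[sel], theta2[sel]

# If (theta1, theta2) | m is uniform on the segment, then theta1 | m = m0
# is uniform on [m0, 1], so its mean should be (m0 + 1) / 2 = 0.65.
print(t1.mean())

# Expectation via the segment parameterization: for m0 >= 0,
#   E[g | m0] = (1 / (1 - m0)) * integral_{m0}^{1} g(t, t - m0) dt,
# i.e. the average of g along the segment, with the example g = theta1 * theta2.
grid = np.linspace(m0, 1, 10_001)
e_quad = np.mean(grid * (grid - m0))
e_mc = np.mean(t1 * t2)
print(e_quad, e_mc)  # the two estimates should agree closely
```

Note that the $\sqrt{2}$ drops out of the parameterized integral because integrating in $\theta_1$ rather than in arc length rescales the segment by exactly that factor.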

Thank you in advance!

[Figure: the unit square with the line segment $\theta_1-\theta_2=m$]

asked by ad018

Comments:

  • I think the explanation is that the factor of $\sqrt 2$ is eliminated when you project onto the x-axis or y-axis. Imagine we pick a point uniformly at random from the line segment between (0,0) and (1,1). Then the probability that the point lies on a subsegment of length $l$ is $\frac{l}{\sqrt{2}}$, but the probability that the x coordinate of the point lies on a subinterval of [0, 1] of length $m$ is just $m$, because the projection shrinks the domain by a factor of $\sqrt 2$. – fblundun Apr 27 '21 at 12:16
  • https://stats.stackexchange.com/questions/573959 answers your question and helps refine your (correct) intuition. – whuber May 18 '22 at 19:02

0 Answers