
What is the multivariate distribution of $(X_1, \ldots, X_n \mid X_1 + \cdots + X_n = y)$, given $y$, when the $X_i$ are unconditionally (i.e., without conditioning on $y$) independent and normally distributed with mean $\mu$ and standard deviation $\sigma$?

And, to the task at hand: how do I simulate random draws of $X$ given $y,~ \mu,$ and $\sigma$?

  • Please just apply any of the formulas for conditional Normal distributions: https://stats.stackexchange.com/search?q=normal+conditional+variance. The application is direct and simple, because $(X_1,\ldots, X_n, X_1+\cdots+X_n)$ has an $(n+1)$-variate Normal distribution whose parameters are particularly easy to compute from the independence of the $X_i.$ – whuber Mar 11 '23 at 18:16

1 Answer

As pointed out by @whuber, we can simply use the known formulas for conditional distributions in the multinormal, as given for instance at Deriving the conditional distributions of a multivariate normal distribution (we will use the notation from there). At first I had doubts, since the full covariance matrix $\Sigma$ in this case is singular, but a close reading of the proof by user Macro shows that we only need $\Sigma_{22}$ to be non-singular; that $\Sigma$ itself is singular does not matter.

For this case, we can easily compute (details not given) that $$ \Sigma =\begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix} =\sigma^2 \begin{pmatrix}I_n & 1_n \\ 1_n^T & n \end{pmatrix} $$ where $I_n$ is the $n \times n$ identity matrix and $1_n$ is the column vector of $n$ ones. Then, using the formulas, we find that the conditional distribution of $X$ given $Y=y$, where $Y=\sum_1^n X_i$, is the $n$-dimensional multinormal distribution with mean $$ \mu_{x|y} = 1_n \frac{y}n $$ that is, all the components have the same conditional expectation $y/n$, and covariance matrix $$ \Sigma_{x|y} = \sigma^2 \left( I_n - 1_n 1_n^T / n \right) $$ Note that $1_n 1_n^T$ is an $n \times n$ matrix with all components equal to $1$.
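For the simulation part of the question, one convenient route (a sketch, not from the answer above; the function name `sample_conditional` is ours) avoids passing the singular covariance $\Sigma_{x|y}$ to a multivariate-normal sampler: draw $Z_i \sim N(\mu, \sigma^2)$ i.i.d. and shift each draw so its sum is exactly $y$. Because $I_n - 1_n 1_n^T/n$ is an idempotent projection, the shifted vector has exactly the conditional law $N(1_n\, y/n,\ \sigma^2 (I_n - 1_n 1_n^T/n))$ derived above.

```python
import numpy as np

def sample_conditional(y, mu, sigma, n, size=1, rng=None):
    """Draw `size` samples of (X_1, ..., X_n) | X_1 + ... + X_n = y,
    where the X_i are unconditionally i.i.d. N(mu, sigma^2)."""
    rng = np.random.default_rng() if rng is None else rng
    # Unconditional draws, one row per sample.
    z = rng.normal(mu, sigma, size=(size, n))
    # Project each row onto the hyperplane sum(x) = y: subtract the
    # per-row excess (sum(z) - y)/n from every component. This yields
    # mean y/n per component and covariance sigma^2 (I - 1 1^T / n).
    return z + (y - z.sum(axis=1, keepdims=True)) / n
```

Note that the sum constraint holds exactly for every draw (up to floating-point rounding), and the marginal conditional variance of each component is $\sigma^2(1 - 1/n)$, slightly smaller than the unconditional $\sigma^2$, as the covariance formula predicts.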

User1865345
    This works like a charm. Thanks! – Řídící Mar 12 '23 at 17:27
  • I don't know if the extension to vectors is straightforward, so here's that question: https://stats.stackexchange.com/q/609277/370545 – Řídící Mar 13 '23 at 11:40
    I don't follow that, because the present question is explicitly about vectors! Moreover, the method of this answer and the formula it relies on apply without any modification whatsoever to your new question. – whuber Mar 13 '23 at 15:26
  • @whuber Scalars, vectors, matrices... What I meant was one more dimension. And I did not have too much trouble implementing the above answer, but I am having a terrible time implementing it for the higher-dimensional case. :( – Řídící Mar 13 '23 at 15:54
    It's unclear what you might mean by "one more dimension," because all it means is -- the vectors are of a different dimension. But since this answer applies to all dimensions, it applies to any situation you might contemplate. – whuber Mar 13 '23 at 16:24
  • @whuber This question (above) asks about $X_i$s that are scalars (and $X$ thus being a vector). The other question asks about $X_i$s that are vectors. So, in that question the $X$s are either matrices or vectors of vectors. I'm sure that the above answer provides the necessary clues on how to handle this, but I haven't figured it out yet. – Řídící Mar 13 '23 at 16:30
  • This question asks about an $X$ that is explicitly a vector. Any bunch of vectors is, according to the axioms, also a vector. Your $y$ here is also a (dimension 1) vector. Proceed from there. – whuber Mar 13 '23 at 16:33
  • @whuber OK, that is good to know (about the axioms). Thanks! But whereas the unconditional distributions of the $X_i$s are independent, they are not in the other question. To (hopefully) clarify: in the above question, the unconditional $X$ represents one time series (of length $n$). In the other question the unconditional $X$ represents a bunch of time series that have a covariance (matrix, I dare to say). – Řídící Mar 13 '23 at 16:43
  • Right: and you just plug in the entries of that covariance matrix where needed and the remaining ones are given by the independence assumption. – whuber Mar 13 '23 at 18:18
  • @whuber I'm giving it a shot (in the other question). – Řídící Mar 14 '23 at 18:25