In general, there is a formula for the conditional distributions of a jointly normal vector, as here.
To use these formulas, we need the covariance matrix of the vector $(x_1,\dots, x_n,\overline{X})$. This vector can be obtained as a linear transformation of $(x_1,\dots,x_n)$, namely $\left(\begin{array}{c}x_1 \\\vdots\\ x_n\\ \overline{X}\end{array}\right)=A\left(\begin{array}{c}x_1\\\vdots\\x_n\end{array}\right)$,
where $A=\left(\begin{array}{c} I_n \\\textbf{1}_n^t/n\end{array}\right)$ and $\textbf{1}_n$ is the length-$n$ vector with all entries equal to $1$. In general, for a random vector $Y$ and matrix $A$, the covariance of $AY$ is given by $\operatorname{Cov}(AY)=A\operatorname{Cov}(Y)A^t$.
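As a quick numerical sanity check (a sketch assuming the $x_i$ are iid standard normals, so $\operatorname{Cov}(x)=I_n$; the variable names are my own), the covariance of $(x_1,\dots,x_n,\overline{X})$ is just $A\operatorname{Cov}(x)A^t$:

```python
import numpy as np

n = 4
# A stacks the identity I_n on top of the averaging row 1^t/n.
A = np.vstack([np.eye(n), np.ones((1, n)) / n])
# For iid standard normals, Cov(x) = I_n, so Cov(Ax) = A I_n A^t.
cov = A @ np.eye(n) @ A.T
# Top-left n-by-n block is I_n; Cov(x_i, Xbar) = 1/n; Var(Xbar) = 1/n.
print(cov)
```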
At this point, the rest is a routine calculation involving the formulas given in the link.
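That routine calculation can be sketched numerically: again assuming iid standard normals, partition the joint covariance into blocks and apply the standard conditional-normal formula $\Sigma_{11}-\Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}$ (the block labels below are my own):

```python
import numpy as np

n = 4
# Joint covariance of (x_1, ..., x_n, Xbar) for iid standard normals.
Sigma = np.block([[np.eye(n), np.ones((n, 1)) / n],
                  [np.ones((1, n)) / n, np.array([[1 / n]])]])
S11 = Sigma[:n, :n]   # Cov(x)
S12 = Sigma[:n, n:]   # Cov(x, Xbar)
S22 = Sigma[n:, n:]   # Var(Xbar)
# Conditional covariance of x given Xbar: Schur complement.
cond_cov = S11 - S12 @ np.linalg.inv(S22) @ S12.T
# Coefficient of (Xbar - E Xbar) in the conditional mean: a vector of ones,
# so the conditional mean of each component is Xbar.
coef = (S12 @ np.linalg.inv(S22)).ravel()
print(np.allclose(cond_cov, np.eye(n) - np.ones((n, n)) / n))  # True
```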
A more "conceptual" argument is to note that, by symmetry, all components of the conditional distribution must have the same mean, call it $\mu':=E[x_i\mid\overline{X}]$. On the other hand, $\overline{X}=E[\overline{X}\mid\overline{X}]=E\big[n^{-1}\sum_i x_i \,\big|\, \overline{X}\big]=n^{-1}\sum_i E[x_i\mid\overline{X}]=\mu'$, so the conditional mean of each component is $\overline{X}$.
The variance is a bit trickier, and the key is to argue that the conditional covariance does not depend on the value of $\overline{X}$. One way to see this is to recall that $(x_1-\overline{X},\dots, x_n-\overline{X})$ is independent of $\overline{X}$ (a standard fact for iid normals). Given $\overline{X}$, the vector $\textbf{1}_n\overline{X}$ is a constant and so contributes no covariance; hence $\operatorname{Cov}(x_1,\dots, x_n\mid\overline{X})=\operatorname{Cov}(x_1-\overline{X},\dots,x_n-\overline{X}\mid\overline{X})=\operatorname{Cov}(x_1-\overline{X},\dots,x_n-\overline{X})$, where the last equality uses the independence. The final expression does not involve conditioning on $\overline{X}$, so we have shown that the conditional covariance does not depend on the value of $\overline{X}$.
Once you believe this, it follows that $E[\operatorname{Cov}(x_1,\dots, x_n\mid\overline{X})]=\operatorname{Cov}(x_1,\dots, x_n\mid\overline{X})$. Applying the law of total variance $\operatorname{Cov}(x)=E[\operatorname{Cov}(x\mid\overline{X})]+\operatorname{Cov}(E[x\mid\overline{X}])$, together with the above result $E[x\mid\overline{X}]=\overline{X}\textbf{1}_n$, the conditional covariance is
$I_n-\operatorname{Cov}(\overline{X}\textbf{1}_n)=I_n-\textbf{1}_n\textbf{1}^t_n/n$,
since $\operatorname{Var}(\overline{X})=1/n$ for iid standard normals.
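The conceptual route can be checked the same way: the map $x\mapsto x-\overline{X}\textbf{1}_n$ is given by the symmetric idempotent projection $M=I_n-\textbf{1}_n\textbf{1}_n^t/n$, so for iid standard normals $\operatorname{Cov}(Mx)=MM^t=M$, exactly the matrix just derived (a sketch; `M` is my notation for the centering matrix):

```python
import numpy as np

n = 5
# Centering matrix: M x = x - Xbar * 1_n.
M = np.eye(n) - np.ones((n, n)) / n
# Cov(Mx) = M I_n M^t = M, since M is symmetric and idempotent.
resid_cov = M @ np.eye(n) @ M.T
print(np.allclose(resid_cov, M))  # True
```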