I am completing exercises in the book Mathematical Statistics: Basic Ideas and Selected Topics regarding proving or disproving that a model is identifiable.
The problem I am struggling with considers $X_1, \ldots, X_n$ where $X_i \sim \text{N}(\alpha_i + \mu, \sigma^2)$. I am attempting to prove that the parametrization $$ \mathcal{P} = \left\{ P_\theta \colon \theta = (\alpha_1, \ldots, \alpha_n, \mu, \sigma^2),\ \sum_{i=1}^{n} \alpha_i = 0,\ \mu \in \mathbb{R},\ \sigma > 0 \right\} $$ where $$ P_\theta = \prod_{i=1}^{n} f(x_i \vert \theta) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma} e^{ -\frac{(x_i - \alpha_i - \mu)^2}{2 \sigma^2} } $$ is in fact identifiable.
After reading the post "Identifiability of normal distribution", it was a very simple exercise to prove that $$ \mathcal{P} = \left\{ P_\theta \colon \theta = (\alpha_1, \ldots, \alpha_n, \mu, \sigma^2),\ \alpha_i \in \mathbb{R},\ \mu \in \mathbb{R},\ \sigma > 0 \right\} $$ is not identifiable, and the comments in the post suggest that the model I am struggling with is identifiable.
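To make the unconstrained non-identifiability concrete for myself, I also ran a small numerical check (`joint_density` below is just my own ad hoc helper, not anything from the book): shifting every $\alpha_i$ up by a constant $c$ and shifting $\mu$ down by the same $c$ gives a different $\theta$ with the same density everywhere.

```python
import numpy as np

def joint_density(x, alpha, mu, sigma):
    """Joint density of X_1, ..., X_n with X_i ~ N(alpha_i + mu, sigma^2)."""
    z = (x - alpha - mu) / sigma
    return np.prod(np.exp(-z**2 / 2.0) / (np.sqrt(2.0 * np.pi) * sigma))

rng = np.random.default_rng(0)
x = rng.normal(size=4)  # an arbitrary evaluation point

# Unconstrained case: shift the alphas up by c and mu down by c.
alpha1, mu1 = np.array([1.0, -2.0, 0.5, 3.0]), 2.0
c = 1.5
alpha2, mu2 = alpha1 + c, mu1 - c  # a different theta ...

# ... but alpha_i + mu is unchanged, so the density agrees at every x
# and the unconstrained model cannot be identifiable.
print(np.isclose(joint_density(x, alpha1, mu1, 1.0),
                 joint_density(x, alpha2, mu2, 1.0)))  # True
```

Of course this is only a demonstration of the unconstrained failure, not a proof of anything about the constrained family.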
After reading posts like "What is model identifiability?" and "Is the following parametrizations identifiable?", I know what I must do: I must prove $P_{\theta_1} = P_{\theta_2} \implies \theta_1 = \theta_2$, or equivalently $\theta_1 \neq \theta_2 \implies P_{\theta_1} \neq P_{\theta_2}$. This is where I run into issues with either approach.
First, I start with an attempt at $P_{\theta_1} = P_{\theta_2} \implies \theta_1 = \theta_2$. Let $$ \theta_j = (\alpha_{j1}, \ldots, \alpha_{jn}, \mu_j, \sigma_j^2) $$ so that $$ P_{\theta_j} = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi} \sigma_j} e^{ -\frac{(x_i - \alpha_{ji} - \mu_j)^2}{2 \sigma_j^2} } $$ Now, if $P_{\theta_1} = P_{\theta_2}$, then $$ \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi} \sigma_1} e^{ -\frac{(x_i - \alpha_{1i} - \mu_1)^2}{2 \sigma_1^2} } = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi} \sigma_2} e^{ -\frac{(x_i - \alpha_{2i} - \mu_2)^2}{2 \sigma_2^2} } $$ This is where I get stuck, and I have made almost no progress. I could expand the products to get summations in the exponents of the $e^{(\cdot)}$ terms, but my algebra does not seem to lead anywhere useful. I may be able to argue that the coefficients $\frac{1}{\sqrt{2\pi} \sigma_j}$ must be equal, but this only gives that one component of the two $\theta$'s is equal.
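For what it is worth, the furthest I got with the expansion is the following (I am not certain this line of attack is sound). Taking logs of both sides, for every $(x_1, \ldots, x_n)$, $$ -n \log\left(\sqrt{2\pi}\,\sigma_1\right) - \sum_{i=1}^{n} \frac{(x_i - \alpha_{1i} - \mu_1)^2}{2 \sigma_1^2} = -n \log\left(\sqrt{2\pi}\,\sigma_2\right) - \sum_{i=1}^{n} \frac{(x_i - \alpha_{2i} - \mu_2)^2}{2 \sigma_2^2} $$ and since both sides are quadratic polynomials in each $x_i$, it seems I should be able to match coefficients: the $x_i^2$ terms would force $\sigma_1^2 = \sigma_2^2$, and then the $x_i$ terms would force $\alpha_{1i} + \mu_1 = \alpha_{2i} + \mu_2$ for every $i$. But that last system of equations is exactly where I stall, because I do not see how to separate the $\alpha$'s from the $\mu$'s.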
If I instead attempt to prove $\theta_1 \neq \theta_2 \implies P_{\theta_1} \neq P_{\theta_2}$, I run into similar issues. In my work, I set $P_{\theta_1} = P_{\theta_2}$ in an attempt to produce some contradiction of the assumption $\theta_1 \neq \theta_2$, but I again have no intuition as to where to take an expression of the form $P_{\theta_1} = P_{\theta_2}$.
In all, I am stuck with either approach. I know I need to use the restriction $$ \sum_{i=1}^{n} \alpha_i = 0 $$ but I am not sure how exactly. I thought to maybe use something like $$ \alpha_1 = - \sum_{i = 2}^{n} \alpha_i $$ or more generally $$ \alpha_k = - \sum_{i \neq k} \alpha_i $$ to replace one of the parameters in $P_\theta$, but I again got trapped in the algebra.
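As a sanity check on the constrained family (again just my own ad hoc script, not a proof), I compared the densities of two distinct constrained $\theta$'s at a single point, and they do differ, which is at least consistent with the claimed identifiability:

```python
import numpy as np

def log_joint_density(x, alpha, mu, sigma):
    """Log of the joint density of X_i ~ N(alpha_i + mu, sigma^2)."""
    z = (x - alpha - mu) / sigma
    return float(np.sum(-z**2 / 2.0 - np.log(np.sqrt(2.0 * np.pi) * sigma)))

# Two distinct parameter points, both satisfying sum(alpha_i) = 0.
alpha1, mu1 = np.array([1.0, -1.0, 0.0]), 2.0
alpha2, mu2 = np.array([0.5, -0.5, 0.0]), 2.5

# The shift trick from the unconstrained case is unavailable here:
# alpha1 + c sums to zero only when c = 0.  And at x = 0 the two
# log-densities already disagree, as identifiability would require.
x = np.zeros(3)
print(log_joint_density(x, alpha1, mu1, 1.0) ==
      log_joint_density(x, alpha2, mu2, 1.0))  # False
```

Obviously checking a handful of points numerically proves nothing; I am looking for the algebraic argument.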
How should I proceed? Am I correct in thinking I need to use something like $$ \alpha_k = - \sum_{i \neq k} \alpha_i $$ in my expressions for $P_{\theta_1} = P_{\theta_2}$, or is the approach more subtle? Any guidance is greatly appreciated.