Recapitulation of the question
Let's consider the following data generating process (working with the mean $\bar{X}_n$ instead of the individual $X_i$, since the mean is a sufficient statistic and simplifies things considerably; the second parameter of $N(\cdot,\cdot)$ denotes the variance):
$$\begin{array}{rll}
\mu &\sim& N(\mu_t,1)\\
\bar{\epsilon}_n &\sim& N(0,1/n) \\
\bar{X}_n &=& \mu + \bar{\epsilon}_n
\end{array} $$
and the aim is to infer $\mu$ based on observations of the average $\bar{X}_n$.
The true distribution of $\mu$ is $N(\mu_t,1)$, and the question is whether using a prior equal to this true distribution (which I assume is what is meant by the "true prior") results in the best maximum a posteriori probability (MAP) estimate, as measured by the expected squared error.
The idea of a 'true prior' is not always appropriate, and it is not clear what it should mean in general. Sometimes the value to be estimated may be considered to have a degenerate distribution; this is for example the case when we try to measure physical constants. So I rephrased the question here as an example where the parameter to be estimated is assumed to follow some distribution, and that distribution is considered the 'true prior'.
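For concreteness, a single draw from this data generating process can be sketched in R as follows (taking $\mu_t = 0$, as in the simulations further below; the variable names are just illustrative):

n = 10
mu = rnorm(1, mean = 0, sd = 1)              # mu ~ N(mu_t, 1) with mu_t = 0
x_bar = rnorm(1, mean = mu, sd = 1/sqrt(n))  # X_bar_n = mu + eps_bar_n, eps_bar_n ~ N(0, 1/n)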
Two priors
Below we compare two different priors
$$\mu_{m} \sim N(m,1)$$ and $$\mu_{\tau} \sim N(\mu_t,1/\tau)$$
One of them differs from the true distribution of $\mu$ by assuming a different mean $m$; the other by assuming a different precision $\tau$.
Below we will see that

- for the prior $\mu_m$ the lowest expected mean squared error is obtained when $m = \mu_t$ (that is, when the prior equals the 'true distribution'), and
- for the prior $\mu_\tau$ the optimum is likewise at the true precision $\tau = 1$.
Note: While creating this answer I had expected the optimum to be at some $\tau \neq 1$, i.e. that due to some regularization effect a prior different from the true data generating process might be an improvement. But after working it out, there appears to be no improvement. Still, I believe there should be ways to improve the estimate in other situations (different distributions and priors).
Computation stuff
Since the normal distribution is a conjugate prior, the posterior will be a normal distribution as well, and the mean of that distribution will be the maximum a posteriori probability (MAP) estimate.
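Concretely, writing $\mu_0$ and $\tau_0$ for the prior mean and precision (generic placeholders covering both priors below), the standard normal-normal update with error precision $n$ gives

$$\mu \mid \bar{X}_n \sim N\left(\frac{\tau_0\mu_0 + n\bar{X}_n}{\tau_0+n},\ \frac{1}{\tau_0+n}\right)$$

so the MAP estimate is the precision-weighted average of the prior mean and the observed mean $\bar{X}_n$.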
For the two different prior distributions (identified below by the subscripts $_m$ and $_\tau$) we can express the MAP estimate as a function of the true value $\mu$ and the error term $\bar\epsilon_n$ (where $\bar{X}_n = \mu + \bar{\epsilon}_n$), as follows:
$$ \begin{array}{l}
\hat{\mu}_m(\mu,\bar{\epsilon}_n) = \frac{m+n \bar{X}_n}{1 +n}\\
\hat{\mu}_\tau(\mu,\bar{\epsilon}_n) = \frac{\tau\mu_t+n \bar{X}_n}{\tau +n}
\end{array}$$
and the error $e = \hat{\mu} - \mu$ is
$$ \begin{array}{l}
e_m(\mu,\bar{\epsilon}_n) = \frac{m+n\mu+ n \bar{\epsilon}_n}{1 +n} - \mu = \frac{(m-\mu)+ n \bar{\epsilon}_n}{1 +n}\\
e_\tau(\mu,\bar{\epsilon}_n) = \frac{\tau\mu_t+n \mu + n \bar{\epsilon}_n}{\tau +n} - \mu = \frac{\tau(\mu_t-\mu) + n \bar{\epsilon}_n}{\tau +n}
\end{array} $$
These errors are normally distributed (each is a linear combination of the independent normal variables $\mu$ and $\bar{\epsilon}_n$):
$$e_m \sim N\left(\frac{m-\mu_t}{1+n},\ \frac{1+n}{(1+n)^2}\right), \qquad e_\tau \sim N\left(0,\ \frac{\tau^2+n}{(\tau+n)^2}\right)$$
and the expected mean squared errors are the raw second moments of those distributions (squared mean plus variance):
$$ \begin{array}{l}
E_{\mu,\bar{\epsilon}}[e_m^2] = \left(\frac{m-\mu_t}{1+n}\right)^2 + \left(\frac{1}{1+n}\right)^2 + \left(\frac{\sqrt{n}}{1+n}\right)^2 \\
E_{\mu,\bar{\epsilon}}[e_\tau^2] = \left(\frac{\tau}{\tau+n}\right)^2 + \left(\frac{\sqrt{n}}{\tau+n}\right)^2
\end{array} $$
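From these expressions the two claims follow directly: $E[e_m^2]$ is a parabola in $m$ with its minimum at $m = \mu_t$, and differentiating $E[e_\tau^2]$ with respect to $\tau$ gives

$$\frac{d}{d\tau}\, \frac{\tau^2+n}{(\tau+n)^2} = \frac{2n(\tau-1)}{(\tau+n)^3}$$

which is negative for $\tau<1$ and positive for $\tau>1$, so the minimum is indeed at $\tau=1$.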
Simulations
The R code below can help to interpret and verify the formulae above.


n = 10
set.seed(1)

# error of the MAP estimate under the prior N(m, 1), with true prior mean mu_t = 0
sim_m = function(n, m = 0) {
  true_mu = rnorm(1)                  # mu ~ N(0, 1)
  x = rnorm(1, true_mu, 1/sqrt(n))    # observed mean: X_bar_n = mu + eps_bar_n
  estimate = (m + n*x)/(1 + n)        # MAP estimate
  return(estimate - true_mu)
}

# error of the MAP estimate under the prior N(mu_t, 1/tau), with mu_t = 0
sim_tau = function(n, tau = 1) {
  true_mu = rnorm(1)
  x = rnorm(1, true_mu, 1/sqrt(n))
  estimate = (n*x)/(tau + n)          # MAP estimate (the term tau*mu_t = 0 drops out)
  return(estimate - true_mu)
}

# simulated vs. theoretical expected squared error as a function of the prior mean m
m = seq(-2, 2, 0.2)
e_m = sapply(m, FUN = function(m) {
  mean(replicate(10000, sim_m(n, m)^2))
})
plot(m, e_m, xlab = expression(mu[prior]-mu[true]), ylab = "expected error^2")
lines(m, (m/(1+n))^2 + (1+n)/(1+n)^2)    # theoretical curve

# simulated vs. theoretical expected squared error as a function of the prior precision tau
tau = seq(0, 2, 0.1)
e_tau = sapply(tau, FUN = function(tau) {
  mean(replicate(10000, sim_tau(n, tau)^2))
})
plot(tau, e_tau, xlab = "prior precision", ylab = "expected error^2")
lines(tau, (tau^2+n)/(tau+n)^2)          # theoretical curve
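At the optimum ($m = \mu_t = 0$ and $\tau = 1$) both expressions reduce to the same minimal value $1/(1+n)$, which serves as a quick sanity check against the simulated curves:

1/(1 + n)   # theoretical minimum of the expected squared error, ~0.0909 for n = 10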