Reformulation in terms of a linear combination of $\chi^2(1)$ variables
We can reformulate the problem.
Let's rewrite $$\Vert \mathbf{x} \Vert^4 =\left(\sum_{k=1}^n x_k^2\right)^2 = Y_n^2$$
so that we can focus on the variable
$$Y_n = \sum_{k=1}^n x_k^2$$
and the problem statement in terms of $Y_n$ becomes
$$E[1/Y_n^2] \to 1/E[Y_n^2]$$
We can make a further reformulation: in terms of the eigenvalues $\lambda_k$ of the matrix $\Sigma$, we can express $Y_n$ as a linear combination of $n$ iid chi-squared variables,
$$Y_n \sim \sum_{k=1}^n \lambda_k Z_k \qquad \text{where $\forall k:Z_k \sim \chi^2(1)$,}$$
subject to the conditions that
- $\sum_{k=1}^n \lambda_k = 1$, which corresponds to the condition $\operatorname{Tr}(\Sigma) = 1$.
- $\max_k(\lambda_k) \to 0$, which corresponds to the spectral norm approaching zero.
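As a sanity check on the identity above, here is a minimal simulation sketch (the covariance matrix is an arbitrary made-up example, and all names in the snippet are mine): both ways of sampling $Y_n$ should agree in mean and variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example covariance: random PSD matrix normalized to Tr(Sigma) = 1.
n = 5
A = rng.standard_normal((n, n))
Sigma = A @ A.T / np.trace(A @ A.T)

lam = np.linalg.eigvalsh(Sigma)            # eigenvalues lambda_k, summing to 1
m = 200_000                                # Monte Carlo sample size

# Direct sampling: Y = ||x||^2 with x ~ N(0, Sigma).
x = rng.multivariate_normal(np.zeros(n), Sigma, size=m)
Y_direct = (x ** 2).sum(axis=1)

# Mixture sampling: Y = sum_k lambda_k * Z_k with Z_k ~ chi^2(1) iid.
Z = rng.chisquare(1, size=(m, n))
Y_mixture = Z @ lam

# Both should have mean ~ 1 and variance ~ 2 * sum(lambda_k^2).
print(Y_direct.mean(), Y_mixture.mean())
print(Y_direct.var(), Y_mixture.var(), 2 * (lam ** 2).sum())
```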
The mechanism behind the convergence
The expectation and variance of $Y_n$ are
$$E[Y_n] = \sum_{k=1}^n \lambda_k = 1$$
and
$$\text{Var}[Y_n] = \sum_{k=1}^n 2 \lambda_k^2 \to 0,$$
where the convergence holds because $\sum_{k=1}^n \lambda_k^2 \leq \max_k(\lambda_k) \sum_{k=1}^n \lambda_k = \max_k(\lambda_k) \to 0$.
Intuitively: the variable $Y_n$ approaches the constant value $1$, and that is how $E[1/Y_n^2]$ approaches $1/E[Y_n^2]$.
I am not sure how to make this formal. I am thinking about something like the continuous mapping theorem: if $Y_n \to 1$ then $f(Y_n) \to f(1)$. But I am not sure whether the decreasing variance is sufficient to conclude that $Y_n \to 1$, and what mode of convergence is needed or allowed to make these statements.
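One piece that can be made formal: the shrinking variance gives convergence in probability via Chebyshev's inequality (a standard step, stated here for completeness),
$$P(|Y_n - 1| > \epsilon) \leq \frac{\text{Var}[Y_n]}{\epsilon^2} = \frac{2\sum_{k=1}^n \lambda_k^2}{\epsilon^2} \to 0 \qquad \text{for every fixed } \epsilon > 0,$$
so $Y_n \to 1$ in probability, and the continuous mapping theorem then gives $1/Y_n^2 \to 1$ in probability. What is still missing is a justification for exchanging this limit with the expectation (something like uniform integrability), which is exactly where the concern below comes in.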
A problem with the convergence
In intuitive terms we see that the variance shrinks to zero, and that is what makes the convergence happen, at least seemingly in simulations. A point that worries me is that a function like the inverse in $E[1/Y_n^2]$ can involve division by zero and result in an infinite or undefined expectation. For instance, if we have a normally distributed variable $W_n \sim \mathcal{N}(1,1/n)$, then we do not get the convergence $E[1/W_n] \to 1/E[W_n]$, because the expectation of $1/W_n$ is undefined.
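The failure in this example can be made explicit: $W_n$ has a density $\phi_n$ that is strictly positive at $w = 0$, and any such density makes the integral for the mean of $1/W_n$ diverge near zero (a one-line check, with $c = \min_{|w| \leq \epsilon} \phi_n(w) > 0$):
$$E\big[\,|1/W_n|\,\big] = \int_{-\infty}^{\infty} \frac{\phi_n(w)}{|w|}\, dw \geq c \int_{0}^{\epsilon} \frac{dw}{w} = \infty.$$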
So a problem with the above intuitive reasoning is that $E[1/Y_n^2]$ may be undefined when the density of $Y_n$ does not vanish fast enough near zero (in particular when it is non-zero at zero). For instance, the inverse of the square of a chi-squared variable has no finite expectation when $\nu \leq 4$ (see the variance of an inverse-chi-squared distribution).
What we need to prove is that the $\lambda_k$ cannot behave in such a way while $\max_k(\lambda_k)$ approaches zero.
I imagine, for instance, a dominant term that approaches zero very slowly while the remaining terms approach zero very quickly, e.g. some slowly decreasing function $f(n)$ such that
$$\lambda_k = \begin{cases} f(n) &\quad \text{if} \quad k=n \\
\frac{1-f(n)}{n-1} &\quad \text{if} \quad k\neq n \end{cases}$$
Then $Y_n$ is a sum of two scaled chi-squared variables, one with $1$ degree of freedom and another with $n-1$ degrees of freedom:
$$Y_n \sim f(n)\, \chi^2(1) + \frac{1-f(n)}{n-1}\, \chi^2(n-1)$$
I don't believe that this $Y_n$ has a non-zero density at zero. I also don't believe that any other similar construction can result in a non-zero density at zero for $Y_n$.
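A quick Monte Carlo sketch of this construction supports that belief (assumptions are mine: $f(n) = 1/\log n$ is just one hypothetical slowly decreasing choice, and the estimates are crude):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_Y(n, m, f):
    """Draw m samples of Y_n = f(n)*chi2(1) + ((1-f(n))/(n-1))*chi2(n-1)."""
    fn = f(n)
    return fn * rng.chisquare(1, size=m) + (1 - fn) / (n - 1) * rng.chisquare(n - 1, size=m)

# Hypothetical slow decay of the dominant eigenvalue.
f = lambda n: 1.0 / np.log(n)

for n in [10, 100, 1000, 10000]:
    Y = sample_Y(n, 1_000_000, f)
    # Smallest observed Y and a crude estimate of E[1/Y^2]; the estimate
    # should approach 1 as n grows, with no samples landing near zero.
    print(n, Y.min(), np.mean(1.0 / Y**2))
```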
We have
$$Y_n \sim \sum_{k=1}^n \lambda_k Z_k \geq \left(\min_k \lambda_k\right) \sum_{k=1}^n Z_k \sim \Gamma\!\left(\alpha = n/2,\ \theta = 2 \min_k \lambda_k\right)$$
Because $Y_n$ must consist of at least $1/\max_k(\lambda_k)$ non-zero components (the $\lambda_k$ sum to $1$), and this count eventually exceeds $4$ since $\max_k(\lambda_k) \to 0$, the variable $Y_n$ is at least as large as a scaled chi-squared variable with more than $4$ degrees of freedom, and the density at zero should be zero.
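To spell out the threshold behind "more than $4$ degrees of freedom": near zero the $\Gamma(\alpha = n/2,\ \theta)$ density behaves like $y^{n/2-1}$ (constants suppressed), so the integrability check for $E[1/Y_n^2]$ reads
$$\int_0^{\epsilon} y^{-2}\, y^{n/2-1}\, dy = \int_0^{\epsilon} y^{n/2-3}\, dy < \infty \iff \frac{n}{2} - 3 > -1 \iff n > 4,$$
which matches the inverse-chi-squared condition $\nu > 4$ mentioned above.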