As discussed in the comments, the context is not perfectly clear. My answer is based on the assumption that we have a sequence of models in mind, one for each dimension $n$, with corresponding possibly unknown matrices $\Lambda=\Lambda_n$ and $\Sigma = \Sigma_n := \mathrm{Cov}(u_t)$.
Now let $\lambda_n$ be the largest (in absolute value) eigenvalue of $\Sigma_n$, and let $\mu_n$ be the largest (again in absolute value) eigenvalue of $\Lambda_n'\Lambda_n$.
That the sequence $\lambda_n$ is $O(1)$ says, by definition, that there exist constants $C_1, N_1$ such that $|\lambda_n|\leq C_1$ for all $n \geq N_1$. This means that the sequence of (maximal) eigenvalues of $\Sigma_n$ is bounded.
That the sequence $\mu_n$ is $O(n)$ says, by definition, that there exist constants $C_2, N_2$ such that $|\mu_n| \leq n C_2$ for all $n\geq N_2$. This says that the eigenvalues may grow at most linearly with $n$. Note that they need not grow at all; indeed, they may even decrease with $n$.
Two contrived examples: setting $\Lambda_n = \sqrt{n}\,I_n$, where $I_n$ is the $n$-dimensional identity matrix, gives $\Lambda_n'\Lambda_n = nI_n$, whose eigenvalues (the diagonal elements) equal $n$ and hence grow linearly with $n$. On the other hand, setting $\Lambda_n = n^{-1/2}I_n$ gives $\Lambda_n'\Lambda_n = n^{-1}I_n$, whose eigenvalues decrease in $n$.
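If it helps, here is a quick numerical check of the two contrived examples (a minimal sketch using numpy; the variable names are my own, not from any particular package):

```python
import numpy as np

# For Lambda_n = sqrt(n) * I_n, Lambda_n' Lambda_n = n * I_n:
# the largest eigenvalue equals n and grows linearly.
# For Lambda_n = n^{-1/2} * I_n, Lambda_n' Lambda_n = (1/n) * I_n:
# the largest eigenvalue equals 1/n and decreases with n.
for n in (2, 10, 100):
    L_grow = np.sqrt(n) * np.eye(n)
    L_decay = n ** -0.5 * np.eye(n)
    top_grow = np.linalg.eigvalsh(L_grow.T @ L_grow).max()
    top_decay = np.linalg.eigvalsh(L_decay.T @ L_decay).max()
    print(n, top_grow, top_decay)  # top_grow equals n, top_decay equals 1/n
```

Both sequences satisfy the $O(n)$ bound, which only caps the growth rate; it does not force the eigenvalues to actually grow.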