
I have the equation:

\begin{equation} X_{t}=\Lambda F_t+u_{t}, \end{equation} with: \begin{equation} Cov(X_t)=\Lambda Cov(F_t) \Lambda' + Cov(u_t) \end{equation}

where $F_t$ is an $r \times 1$ vector, $\Lambda$ is a $n\times r$ matrix and $u_t$ is $n \times 1$ vector.
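For concreteness, the covariance identity above can be checked numerically with a quick simulation (my own sketch; the dimensions, factor count, and noise scale are illustrative assumptions, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, T = 5, 2, 200_000          # illustrative sizes (assumed, not from the question)

Lambda = rng.normal(size=(n, r))            # n x r loading matrix
F = rng.normal(size=(T, r))                 # factors with Cov(F_t) = I_r
u = 0.5 * rng.normal(size=(T, n))           # idiosyncratic errors, Cov(u_t) = 0.25 I_n
X = F @ Lambda.T + u                        # X_t = Lambda F_t + u_t, stacked over t

sample_cov = np.cov(X, rowvar=False)                          # empirical Cov(X_t)
implied_cov = Lambda @ np.eye(r) @ Lambda.T + 0.25 * np.eye(n)  # Lambda Cov(F) Lambda' + Cov(u)
print(np.max(np.abs(sample_cov - implied_cov)))               # small for large T
```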

Now they say that the eigenvalues for $Cov(u_t)$ are $O(1)$ and $O(n)$ for $\Lambda'\Lambda$.

What do they mean by $O(1)$ and $O(n)$?

2 Answers


As discussed in the comments, the context is not perfectly clear. My answer is based on the assumption that we have a sequence of models in mind, one for each dimension $n$, with corresponding possibly unknown matrices $\Lambda=\Lambda_n$ and $\Sigma = \Sigma_n := \mathrm{Cov}(u_t)$.

Let now $\lambda_n$ be the largest (in absolute terms) eigenvalue of $\Lambda_n ' \Lambda_n$ and let $\mu_n$ be the largest (again in absolute terms) eigenvalue of $\Sigma_n$.

That the sequence $\mu_n$ is $O(1)$ says by definition that there exist constants $C_1, N_1$ such that $|\mu_n|\leq C_1$ for all $n \geq N_1$. This means that the sequence of (maximal) eigenvalues of $\Sigma_n$ is bounded.

That the sequence $\lambda_n$ is $O(n)$ says by definition that there exist constants $C_2, N_2$ such that $|\lambda_n| \leq C_2 n$ for all $n\geq N_2$. This says that the eigenvalues of $\Lambda_n'\Lambda_n$ may grow at most linearly with $n$. Note that they need not grow; indeed, they may even decrease with $n$.

Two contrived examples: setting $\Lambda_n = \sqrt{n}I_n$, where $I_n$ is the $n$-dimensional identity matrix, gives $\Lambda_n'\Lambda_n = nI_n$, whose eigenvalues (the diagonal elements, all equal to $n$) grow linearly with $n$. On the other hand, setting $\Lambda_n = n^{-1/2}I_n$ gives $\Lambda_n'\Lambda_n = n^{-1}I_n$, whose eigenvalues decrease in $n$.
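To make the contrast concrete, here is a small sketch (my own addition, not part of the original answer) computing the eigenvalues of $\Lambda_n'\Lambda_n$ for the two contrived choices of $\Lambda_n$ above:

```python
import numpy as np

for n in (4, 8, 16):
    Lam_grow = np.sqrt(n) * np.eye(n)       # Lambda_n = sqrt(n) I_n
    Lam_decay = n ** -0.5 * np.eye(n)       # Lambda_n = n^{-1/2} I_n

    # largest eigenvalues of Lambda_n' Lambda_n in each case
    ev_grow = np.linalg.eigvalsh(Lam_grow.T @ Lam_grow)
    ev_decay = np.linalg.eigvalsh(Lam_decay.T @ Lam_decay)
    print(n, ev_grow.max(), ev_decay.max())  # max eigenvalue is n (O(n)) vs 1/n
```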

KOE
  • Both the question and your explanation are a little ambiguous, so I'm hoping you will be able to clarify this. Do these "constants" depend on $u_t$, $\Lambda$, and other aspects of the context or not? This is separate from the issue of depending on $t$--we could safely suppose that the $u_t$ are iid, whence the covariances do not change with $t$. Note that $O(1)$ does include situations where the eigenvalues grow with $n$--but they must be bounded. Also, the "allowed to grow" construct could be interpreted in two ways: is it a result of the assumptions or is it a constraint in the model? – whuber Apr 29 '15 at 18:28
  • Thanks for the comments @whuber, I'll try to clarify. Good point about the $O(1)$, I went too far when stating the "intuition". I don't really understand your first remark (second sentence), was there a particular thing you had in mind there that I overlooked? – KOE Apr 29 '15 at 18:42
  • To clear up my first remark: I'm a little confused myself, because I don't have enough information about the context. Yet it occurs to me that one particular statistical setting could involve a sequence of $\Lambda$ indexed by $n=1,2,\ldots$ whereas another setting could involve a different sequence of such $\Lambda$. I would normally understand "$O(n)$" to refer to the eigenvalues of this particular sequence, but it occurs to me that the condition you give is not entirely clear about whether $N_2$ depends on this sequence or is a universal constant for this particular kind of problem. – whuber Apr 29 '15 at 18:53
  • I see now, good point. It's not clear what the context is. I was looking at it as an assumption for a set of models. Saying something like for every $n$ we have a specific, not necessarily known, matrix $\Lambda$, and we know this sequence of unknown $\Lambda$s has eigenvalues that are $O(n)$. – KOE Apr 29 '15 at 19:02

The link given in the comment above is a good overview. Very briefly, big-O notation tells you how much the running time of an algorithm changes with the size of the problem.

O(1) is basically saying it is constant. If you change "n" your algorithm will still take the same time. E.g. taking the first item from a list will always take the same time no matter how long the list is.

O(n) says that the algorithm runtime will be proportional to the size of n. E.g. counting the number of items in the list will take twice as long for a list that is twice as long.
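A minimal sketch of the two list examples above (my own illustration, not from the answer): the first operation touches a fixed amount of data regardless of list length, while the second does one step per element.

```python
def first_item(xs):
    # O(1): a single index lookup, independent of len(xs)
    return xs[0]

def count_items(xs):
    # O(n): one loop iteration per element of xs
    total = 0
    for _ in xs:
        total += 1
    return total

print(first_item([10, 20, 30]))   # → 10
print(count_items([10, 20, 30]))  # → 3
```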

qhfgva
  • This question is not about runtime at all. The big-O notation has a formal mathematical definition that has a specialized use in computer science, but referring to that instead of to the appropriate mathematical concept is a distraction and potentially confusing. – whuber Apr 29 '15 at 18:13
  • This isn't a question about algorithms, so it isn't clear to me that this answer is particularly helpful. One might get the impression from computer scientists that they invented big-$O$ notation, and that it is all about analyzing the runtime/memory requirements of algorithms, but neither of those things is true. Its use goes all the way back to the late 19th century in the field of number theory. – guy Apr 29 '15 at 18:15
  • Sorry, my mistake. – qhfgva Apr 29 '15 at 18:17