I know there are "superfast" $O(n \log^p n)$ algorithms for solving Toeplitz linear systems. Is it possible to compute all eigenvalues of such a matrix with the same complexity?
-
@David: You changed $\tilde{O}$ to $\mathcal{O}$, but $\tilde{O}(n)$ is standard notation for linear times a polylog factor. I haven't seen the mathcal usage before; is it also supposed to indicate the polylog? – Geoffrey Irving Aug 02 '12 at 16:31
-
My apologies! I had never seen that notation before. I've changed it back. – David Ketcheson Aug 02 '12 at 19:36
-
No worries. I often feel guilty using $\tilde{O}$ anyways, since it feels like lazy marketing ("Look at my nearly linear time algorithm, not the small print..."). – Geoffrey Irving Aug 02 '12 at 21:50
2 Answers
The results in The Complexity of the Matrix Eigenproblem (STOC '99, Proceedings of the thirty-first annual ACM Symposium on Theory of Computing, pp. 507-516) indicate that the best known algorithms for the Hermitian Toeplitz case are $\tilde{O}(n^{2})$. Section 1.2 of that paper (pages 3 and 4) divides the eigenproblem into three stages:
a) the reduction of the eigenproblem for a given matrix to the one for a matrix in canonical form (that is, a Frobenius matrix, a block triangular or block diagonal matrix with Frobenius diagonal blocks, or a tridiagonal matrix); as a by-product of this stage, the coefficients of the characteristic polynomial $c_A(x)$ of the original input matrix $A$ are output or become readily available in $O(n)$ additional ops,
b) the approximation of the eigenvalues of $A$ as the zeros of $c_A(x)$, and
c) the computation of the eigenspaces of the approximated eigenvalues, by exploiting the available reduction of $A$ to the canonical form.
The lower bound on the arithmetic complexity of stage (a) stated in the article for general matrices is $\Omega(M(n))$, where $M(n)$ is the arithmetic complexity of matrix-matrix multiplication. I know that the arithmetic complexity of LU decomposition is equivalent to that of matrix-matrix multiplication, but I don't know if the same holds for individual linear solves; I don't believe it does. Since $M(n)$ is $\Omega(n^{2})$ (even for Toeplitz matrices, from what I can tell), calculating the eigenvalues (stages (a) and (b)) must be $\Omega(n^{2})$, based on the results of the article.
In order to obtain an $\tilde{O}(n)$ algorithm, the limiting step is calculating the characteristic polynomial in $\tilde{O}(n)$ time. How one does that, I'm not sure, but the output of stage (a) can't be the limiting step, because Frobenius, block Frobenius, and tridiagonal matrices can all be specified with $O(n)$ data. I don't have access to the source cited along with the $\Omega(n^{2})$ bound, so I can only speculate that the limiting step there is a sequence of matrix-matrix multiplies associated with elementary matrix operations that reduce the given matrix to one of the listed canonical forms, in which case the bottleneck lies in intermediate calculations rather than in the output.
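To make the stage (b) input concrete: once a symmetric tridiagonal form is available, the characteristic polynomial can be evaluated at any point in $O(n)$ ops via the standard three-term determinant recurrence, so a root-finder pays only $O(n)$ per evaluation; the hard part is getting to that form, not using it. A minimal sketch in Python (the helper name is mine, not from the paper):

```python
import numpy as np

def tridiag_charpoly_at(diag, off, x):
    """Evaluate det(T - x*I) for the symmetric tridiagonal matrix T with
    diagonal `diag` and off-diagonal `off`, in O(n) ops, using the
    recurrence p_k = (d_k - x) p_{k-1} - e_{k-1}^2 p_{k-2}."""
    p_prev, p = 1.0, diag[0] - x
    for k in range(1, len(diag)):
        p_prev, p = p, (diag[k] - x) * p - off[k - 1] ** 2 * p_prev
    return p

# Sanity check against a dense determinant on a small random example.
n = 6
d, e = np.random.randn(n), np.random.randn(n - 1)
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
assert np.isclose(tridiag_charpoly_at(d, e, 0.3), np.linalg.det(T - 0.3 * np.eye(n)))
```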
The fastest known algorithm is one by Ng and Trench (1997 technical report), which calculates the eigenvalues in $\tilde{O}(n^{2})$ time in serial and $\tilde{O}(n)$ time in parallel.
-
I think they are saying that $\tilde{O}(n^2)$ is nearly optimal for the entire eigenproblem including eigenvalues. – Geoffrey Irving Aug 03 '12 at 17:28
-
@GeoffreyIrving: Look at Section 1.2 of the paper, where they separate the problem into three stages, (a) through (c). Calculating the eigenvalues is equivalent to solving stages (a) and (b); Section 1.2 gives the computational complexity of each stage, which is what I used to write my answer. – Geoff Oxberry Aug 03 '12 at 20:53
-
@Geoff: Remark 1.3 says explicitly what statement they're making: their algorithms are nearly optimal because they take $\tilde{O}(m)$ time, where $m$ is the size of the output. This optimality statement is only valid if one needs all the eigenvectors in explicit form. If you need only eigenvalues, or possibly eigenvectors in terms of an action and inverse action, faster algorithms could exist. – Geoffrey Irving Aug 03 '12 at 21:31
-
@GeoffreyIrving: I think we're talking about different meanings of "nearly optimal". I am referring to that phrase as used on page 4 of the article (re: stage (a)): "All of these algorithms are performed at nearly optimal cost of $O(M(n) \log(n))$ ops...", where $M(n)$ is the cost of a matrix-matrix multiply. If you can accomplish stage (b) in a matrix-free way, you can potentially get around the $\Omega(n^{2})$ bound on matrix-matrix multiplication. I know that LU decomposition and matrix-matrix multiplication have the same complexity, but I don't know if that's true of individual linear solves. – Geoff Oxberry Aug 03 '12 at 21:41
-
@Geoff: You may be right that we're talking about different things. My point is that their use of the phrase "nearly optimal" doesn't say anything about the answer to my question (e.g., as your answer would imply). – Geoffrey Irving Aug 03 '12 at 23:04
-
@Geoff: Your argument doesn't prove anything about the complexity of pure eigenvalues, since all you've done is move $O(n^2)$ output variables into step (a). Other operations with Toeplitz matrices such as matrix-vector multiply or linear solves take subquadratic time, so I don't think it's obvious that all algorithms for eigenvalues must construct $O(n^2)$ intermediate data on the way. – Geoffrey Irving Aug 04 '12 at 03:21
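For concreteness, the subquadratic matrix-vector product is the classical circulant-embedding trick: embed the $n \times n$ Toeplitz matrix in a $2n \times 2n$ circulant and apply it with FFTs. A minimal Python sketch (the helper name is illustrative):

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec(c, r, x):
    """Multiply the Toeplitz matrix with first column `c` and first row `r`
    (c[0] == r[0]) by the vector `x` in O(n log n) time, by embedding it in
    a 2n x 2n circulant matrix whose action is diagonalized by the FFT."""
    n = len(c)
    col = np.concatenate([c, [0.0], r[:0:-1]])  # first column of the circulant
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(x, len(col)))
    return y[:n]  # complex output; imaginary parts are roundoff for real data

# Check against the dense product.
n = 5
c, r = np.random.randn(n), np.random.randn(n)
r[0] = c[0]
x = np.random.randn(n)
assert np.allclose(toeplitz_matvec(c, r, x), toeplitz(c, r) @ x)
```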
-
@Geoffrey: Frobenius, block Frobenius, and tridiagonal matrices can all be specified using $O(n)$ data, so output data (at least at the end of that stage) is not an issue; rather, you need a fast way (i.e., an $\tilde{O}(n)$ algorithm) to calculate the coefficients of the characteristic polynomial. Without the source, it's not immediately clear to me that the $\Omega(M(n))$ bound excludes that possibility, but the "fast algorithms" I've seen from searching around are all $\tilde{O}(n^{2})$. It's not a proof, but the evidence suggests that an $\tilde{O}(n)$ algorithm is unlikely. – Geoff Oxberry Aug 04 '12 at 04:49
-
My first idea for computing the eigenvalues of a Hermitian Toeplitz matrix would be to use the fast $O(n \log n)$ matrix-vector multiplication for Toeplitz matrices together with the Lanczos algorithm to get a tridiagonal matrix. The complexity of this procedure is $O(n^2 \log n)$, and it isn't even numerically robust. I vaguely remember that Marlis Hochbruck has written a paper on look-ahead algorithms related to Toeplitz matrices, but I think these were intended to ensure robustness for non-Hermitian Toeplitz systems, not to reduce the complexity.
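A sketch of that procedure, under stated assumptions: it uses SciPy's FFT-based Toeplitz multiply (scipy.linalg.matmul_toeplitz, available in SciPy 1.6+) inside a basic Lanczos iteration with full reorthogonalization, which restores robustness at an extra $O(nm)$ cost per step; breakdown handling is omitted, and the lanczos_tridiag helper is illustrative rather than taken from any cited paper:

```python
import numpy as np
from scipy.linalg import matmul_toeplitz, eigh_tridiagonal

def lanczos_tridiag(c, m):
    """Run m Lanczos steps on the real symmetric Toeplitz matrix with first
    column c, using the O(n log n) FFT-based multiply. Returns the diagonal
    and off-diagonal of the resulting tridiagonal matrix."""
    n = len(c)
    Q = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    q = np.random.randn(n)
    q /= np.linalg.norm(q)
    for k in range(m):
        Q[:, k] = q
        w = matmul_toeplitz((c, c), q)            # symmetric: row equals column
        alpha[k] = q @ w
        w -= Q[:, :k + 1] @ (Q[:, :k + 1].T @ w)  # full reorthogonalization
        if k < m - 1:
            beta[k] = np.linalg.norm(w)           # no breakdown handling here
            q = w / beta[k]
    return alpha, beta

# After n steps the Ritz values reproduce the spectrum of a small example.
n = 64
c = np.random.randn(n)
alpha, beta = lanczos_tridiag(c, n)
ritz = eigh_tridiagonal(alpha, beta, eigvals_only=True)
T = np.array([[c[abs(i - j)] for j in range(n)] for i in range(n)])
print(np.max(np.abs(np.sort(ritz) - np.sort(np.linalg.eigvalsh(T)))))
```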
My conclusion is that even a proof that there exists a numerically robust $\tilde{O}(n^2)$ algorithm for computing all eigenvalues (and eigenvectors) of a Hermitian Toeplitz matrix would be a nice thing to have. I also wonder how useful it would be to be able to compute the eigenvalues in $\tilde{O}(n)$ time if computing the corresponding eigenvectors takes $O(n^2)$ time.
-
A more useful version would be an $\tilde{O}(n)$ algorithm that constructs both the eigenvalues and an $\tilde{O}(n)$ eigenvector action operator. This is the case for circulant matrices, for example. – Geoffrey Irving Aug 03 '12 at 17:32
-
Well, the eigenvectors of a circulant matrix are known beforehand, so you don't really need to compute them. I guess the mentioned "superfast" algorithms will take $\tilde{O}(n)$ time for solving a Toeplitz linear system with a single right-hand side vector, which is "superfast" but still not really unexpected. Perhaps there exists a way to compute the characteristic polynomial of a Toeplitz matrix in $\tilde{O}(n)$ time, but I wouldn't consider this to be very useful. On the other hand, computing the complete eigenvalue decomposition in $\tilde{O}(n)$ time would be really unexpected. – Thomas Klimpel Aug 03 '12 at 20:16
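To spell out the circulant case: the eigenvalues are the DFT of the first column, and the (unitary) eigenvector matrix is applied by an inverse FFT, so the whole eigendecomposition is available implicitly in $O(n \log n)$ time. A minimal sketch (helper names are illustrative):

```python
import numpy as np

def circulant_eig(c):
    """Implicit eigendecomposition of the circulant matrix with first
    column c: the eigenvalues are the DFT of c, and the k-th normalized
    eigenvector is the k-th column of the inverse DFT matrix scaled by
    sqrt(n), applied in O(n log n) via the FFT."""
    eigvals = np.fft.fft(c)
    apply_V = lambda x: np.fft.ifft(x) * np.sqrt(len(c))  # eigenvector matrix action
    return eigvals, apply_V

# Check C v_k = lambda_k v_k for every eigenpair of a small example.
n = 8
c = np.random.randn(n)
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
lam, apply_V = circulant_eig(c)
for k in range(n):
    e_k = np.zeros(n); e_k[k] = 1.0
    v_k = apply_V(e_k)
    assert np.allclose(C @ v_k, lam[k] * v_k)
```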