12

Suppose $A\in\mathbb{R}^{n\times n}$ is a symmetric, positive definite matrix. $A$ is large enough that solving $Ax=b$ directly is expensive.

Is there an iterative algorithm for finding the smallest eigenvalue of $A$ that doesn't involve inverting $A$ in each iteration?

That is, since I'd have to use an iterative method like conjugate gradients just to solve $Ax=b$, repeatedly applying $A^{-1}$ (as in inverse iteration) seems like an expensive "inner loop." I only need a single eigenvector.

Thanks!

Federico Poloni
Justin Solomon
    Have you tried using the Cholesky decomposition? You'd have to factor $A$ into $L L^T$ with $L$ being a triangular matrix. Once you have the factorization (you only do this once) you can use it in every iteration to solve the system very fast by back and forward substitution. – Juan M. Bello-Rivas Jan 02 '16 at 21:54
  • Is A a sparse matrix? – Tolga Birdal Jan 02 '16 at 21:55
  • $A$ has some block structure, but I'd prefer not to mess with it if I don't have to -- so I was looking into "matrix free methods." The "LOBPCG" algorithm has some promise, I think! @Juan, the Cholesky factorization is still quite expensive. – Justin Solomon Jan 02 '16 at 21:57
  • If you are using matlab or octave use the eigs-routine. It is an iterative method. There are options to specify which eigenvalue you want, e.g. smallest real. – sebastian_g Jan 03 '16 at 12:17
  • I understand and am indeed using eigs in matlab. But if you specify options like "sm" in eigs, then it requires the inverse of $A$ rather than $A$. Check out the table in the documentation: http://www.mathworks.com/help/matlab/ref/eigs.html – Justin Solomon Jan 03 '16 at 16:35
  • There is a TraceMIN solver in Trilinos's Anasazi, which is an iterative solver for minimizing $X^TAX$ subject to $X^TX=I$, which gives the smallest eigenvalues: https://trilinos.org/docs/dev/packages/anasazi/doc/html/classAnasazi_1_1Experimental_1_1TraceMin.html – Kirill Jan 04 '16 at 18:39
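Following up on the LOBPCG suggestion in the comments: here is a minimal matrix-free sketch using SciPy's `lobpcg` (an assumption, as a stand-in for whatever library you actually use; the random sparse SPD matrix is just a test stand-in for your $A$). LOBPCG needs only products $Ax$ per iteration, never a factorization or a solve:

```python
import numpy as np
from scipy.sparse import identity, random as sprandom
from scipy.sparse.linalg import LinearOperator, lobpcg

# A random sparse SPD test matrix -- a stand-in for your actual A.
n = 200
B = sprandom(n, n, density=0.05, format='csr', random_state=1)
A = B @ B.T + 5 * identity(n)  # SPD by construction

# Expose A only through matrix-vector products, i.e. fully matrix-free.
A_op = LinearOperator((n, n), matvec=lambda x: A @ x, dtype=np.float64)

# LOBPCG iterates on a block of vectors; one column suffices for a single
# eigenpair. largest=False requests the smallest eigenvalue. A preconditioner
# (the M argument) would speed convergence but is optional.
rng = np.random.default_rng(0)
X = rng.standard_normal((n, 1))
lam, v = lobpcg(A_op, X, largest=False, tol=1e-8, maxiter=500)
```

With a preconditioner that approximates $A^{-1}$ (even a cheap one, e.g. diagonal scaling), convergence typically improves substantially.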

1 Answer

15
  1. Compute the largest-magnitude eigenvalue $\lambda_{max}$ of $A$ (with, say, eigs(A,1,'lm') in MATLAB).

  2. Then compute the largest-magnitude (most negative) eigenvalue $\hat{\lambda}_{max}$ of $M = A - \lambda_{max}I$ (again through a standard call to eigs(M,1,'lm')).

  3. Observe that $\hat{\lambda}_{max} + \lambda_{max} = \lambda_{min}(A)$. This holds because $A$ is positive definite: the eigenvalues of $M$ are $\lambda_i - \lambda_{max} \le 0$, so the eigenvalue of $M$ of largest magnitude is the most negative one, namely $\lambda_{min} - \lambda_{max}$.

  4. Find the corresponding eigenvector $v$ by solving $(A - \lambda_{min} I) v = 0$ (in practice, eigs in step 2 already returns it alongside $\hat{\lambda}_{max}$).
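The steps above can be sketched in Python with SciPy's `eigsh` (the analogue of MATLAB's eigs); the random sparse SPD matrix is just a test stand-in for your $A$, and neither eigsh call requires a solve with $A$:

```python
import numpy as np
from scipy.sparse import identity, random as sprandom
from scipy.sparse.linalg import eigsh

# A random sparse SPD test matrix -- a stand-in for your actual A.
n = 200
B = sprandom(n, n, density=0.05, format='csr', random_state=0)
A = B @ B.T + 10 * identity(n)  # SPD by construction

# Step 1: largest-magnitude eigenvalue of A (needs only products A @ x).
lam_max = eigsh(A, k=1, which='LM', return_eigenvectors=False)[0]

# Step 2: dominant eigenvalue of the shifted matrix M = A - lam_max * I.
# All eigenvalues of M lie in [lam_min - lam_max, 0], so the one of
# largest magnitude is exactly lam_min - lam_max.
M = A - lam_max * identity(n)
lam_hat, v = eigsh(M, k=1, which='LM')

# Step 3: undo the shift; v is already the eigenvector from step 2.
lam_min = lam_hat[0] + lam_max
```

Note that the shift cancels exactly: even if $\lambda_{max}$ is only approximate, the eigenvalues of $A - \sigma I$ are $\lambda_i - \sigma$ for any $\sigma$, so adding $\sigma$ back recovers $\lambda_{min}$ to the accuracy of the second eigensolve.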

GoHokies