
I have a question about matrix diagonalization. I don't know if this is the right forum... Is there a way to compute the smallest real eigenvalue (and eigenvector, if possible) of a general real $n \times n$ matrix? (with $n$ small, say $n=5$). Is there a routine in Fortran 90 that does this? And, more generally, what is the state of the art for numerically computing all the eigenvalues (even for non-diagonalizable matrices)?

Edit: after Gerry's comment, I believe it's better to take $n$ odd. In this case a real eigenvalue always exists: the characteristic polynomial is a polynomial of odd degree with real coefficients, so it always has a real root, and the smallest real eigenvalue is therefore well defined.

The Hiary
  • Your question appears to be about coding for finding the smallest eigenvalue. We can only show you the process of finding them –  Dec 22 '13 at 19:49
  • 1
    You are aware that a real matrix may have no real eigenvalue? –  Dec 22 '13 at 19:50
  • In general, the eigenvalues are those $\lambda$ for which the determinant of $M-\lambda I$ equals zero. This involves finding the roots of a polynomial, which can be done by Newton's method, for example. –  Dec 22 '13 at 19:50
  • 1
    Not sure about Fortran but you could write a code that finds the characteristic polynomial and then find all 5 roots. –  Dec 22 '13 at 19:51
  • @Don: yes but the smallest "real" eigenvalue. I can code, but what process would you suggest? –  Dec 22 '13 at 20:27
  • @Gerry: I can see that this can happen. But I want to find the smallest one, provided that it exists of course :) –  Dec 22 '13 at 20:28
  • @Ragnar and user88595: so the choice in this type of problem is finding the roots of polynomials? No similarity transformations? –  Dec 22 '13 at 20:30
  • @Thomas finding roots is relatively easy to program from scratch, but there are other ways to find them I assume (although this is the only one I know, apart from diagonalization.) –  Dec 22 '13 at 21:33
  • Smallest means what? In absolute value? Is the matrix invertible? –  Dec 22 '13 at 23:03
  • @Yiorgos S. Smyrlis: I put an edit on the question in order to have the problem well defined. The matrix need not be invertible or diagonalizable. Smallest means in absolute value... I'm sorry this was not clear. –  Dec 22 '13 at 23:05
  • @user88595 This is definitely the wrong way to go about it. The standard algorithm for finding roots is to form the companion matrix of the polynomial and then use an eigenvalue algorithm. – Alex Becker Dec 23 '13 at 22:26
  • @AlexBecker: That's one "standard" way. It is how Matlab finds roots of polynomials, but not how Maple does it. Cleve Moler says (http://blogs.mathworks.com/cleve/2012/10/08/chebfun-roots/) that this was "a novel approach" at the time he wrote Matlab. –  Dec 24 '13 at 06:58
  • I do agree that using the characteristic polynomial is a very poor way to find eigenvalues numerically, especially for matrices much larger than $5 \times 5$. For $5 \times 5$ you might be able to get away with it. –  Dec 24 '13 at 07:03
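The companion-matrix route mentioned in the comments is easy to try for a $5 \times 5$ problem. A minimal sketch (in Python with NumPy, which is not part of the original discussion but is assumed available; the function name `companion_roots` is mine): build the companion matrix of a monic polynomial and hand it to a standard eigenvalue routine.

```python
import numpy as np

def companion_roots(coeffs):
    """Roots of a monic polynomial via the eigenvalues of its companion matrix.

    coeffs = [c0, c1, ..., c_{n-1}] for p(x) = x^n + c_{n-1} x^{n-1} + ... + c0.
    """
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)         # ones on the subdiagonal
    C[:, -1] = -np.asarray(coeffs)     # last column holds the negated coefficients
    return np.linalg.eigvals(C)

# p(x) = x^2 - 3x + 2 = (x - 1)(x - 2)
print(sorted(companion_roots([2.0, -3.0]).real))  # -> [1.0, 2.0]
```

This is essentially what Matlab's `roots` does, per the Cleve Moler post linked above; note that it goes in the opposite direction from the characteristic-polynomial approach (polynomial to matrix, rather than matrix to polynomial).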

3 Answers


For numerical linear algebra, there's no point in reinventing the wheel. There are very well-written linear algebra libraries (e.g. LAPACK) that will find eigenvalues and eigenvectors.

  • I know LAPACK and BLAS, but I found only information about diagonalization of symmetric matrices... which routines would you suggest for obtaining what I want? –  Dec 22 '13 at 20:45
  • cgees would appear to fit the bill. –  Dec 23 '13 at 00:03
  • Very useful, thanks. I'm trying to understand how to use it. So let's suppose that using this cgees I manage to find a unitary matrix $U$ such that my matrix $A$ becomes an upper triangular $T$: $U^+ A U = T$ (Schur decomposition). Let's suppose that using the option "select" I manage to have the smallest real eigenvalue $\lambda$ in the first column of $T$. Is it right to say that in this case the first column of $U$ will be an eigenvector of $A$ associated with $\lambda$? (and, if the algebraic multiplicity is one, the only eigenvector?) –  Dec 23 '13 at 14:52

If $n$ is small, LAPACK is your best bet. Instead of cgees, I'd use zgeev, a routine that calculates the eigenvalues and, optionally, the left and/or right eigenvectors of a general matrix (since your matrix is real, its double-precision real counterpart dgeev also applies). Compared to zgeev, cgees uses single-precision complex numbers rather than double-precision complex numbers, and returns the Schur form of the matrix rather than the eigenvectors you would like to calculate. There exist other, similar routines that will take advantage of symmetry (or Hermitian symmetry) if your problem has it. LAPACK uses dense linear algebra, which is best for small matrices (say, less than 1000 by 1000 or so, as a rough estimate; you might be able to accommodate more or less depending on how much RAM you have available).
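As a concrete illustration of the _geev route (a sketch, assuming Python with NumPy is available; `numpy.linalg.eig` calls LAPACK dgeev for real input, and the example matrix is my own), one can compute all eigenpairs and then pick the real eigenvalue of smallest magnitude:

```python
import numpy as np

# A 5x5 real matrix; np.linalg.eig calls LAPACK dgeev under the hood.
A = np.array([[2., 1., 0., 0., 0.],
              [1., 3., 1., 0., 0.],
              [0., 1., 4., 1., 0.],
              [0., 0., 1., 5., 1.],
              [0., 0., 0., 1., 6.]])

w, V = np.linalg.eig(A)

# Keep only the (numerically) real eigenvalues, then take the one of
# smallest absolute value, together with its eigenvector.
real_mask = np.abs(w.imag) < 1e-10
real_w = w[real_mask].real
k = np.argmin(np.abs(real_w))
lam = real_w[k]
v = V[:, np.where(real_mask)[0][k]].real

print(lam)                          # smallest real eigenvalue in magnitude
print(np.allclose(A @ v, lam * v))  # -> True
```

The tolerance `1e-10` for "numerically real" is an assumption; in general the imaginary parts of genuinely real eigenvalues come out at roundoff level, but the right cutoff depends on the scaling of the matrix.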

If $n$ is large, algorithms that combine some variant of:

  • a good estimate of an eigenvalue
  • a shift-and-invert strategy
  • and power iteration

are used to calculate eigenvalues and eigenvectors. (More advanced algorithms use Krylov subspace iteration, transformation to Hessenberg form, and other features; broadly speaking, these can be thought of as power iteration-like algorithms with some more desirable algorithmic properties.) If you want to find the smallest eigenvalue of a large matrix, you're best off using a package like SLEPc in concert with a specialized package for eigenvalue problems, like ARPACK, BLOPEX, etc.
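A minimal sketch of the shift-and-invert idea (in Python with NumPy; my own illustration, not taken from any of the packages named above): power iteration on $(A - \sigma I)^{-1}$ converges to the eigenvalue of $A$ nearest the shift $\sigma$, so $\sigma = 0$ targets the smallest eigenvalue in magnitude.

```python
import numpy as np

def shift_invert_iteration(A, sigma, iters=200):
    """Eigenvalue of A nearest the shift sigma, by power iteration on
    (A - sigma*I)^{-1}.  A sketch for small dense A; for large sparse
    problems one would factor A - sigma*I or use an iterative solver
    instead of forming the inverse explicitly.
    """
    n = A.shape[0]
    M = np.linalg.inv(A - sigma * np.eye(n))  # dense inverse: fine for tiny n
    v = np.random.default_rng(0).standard_normal(n)
    for _ in range(iters):
        v = M @ v
        v /= np.linalg.norm(v)
    # The Rayleigh quotient recovers the eigenvalue of A itself.
    return v @ A @ v, v

A = np.array([[4., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])
lam, v = shift_invert_iteration(A, sigma=0.0)  # sigma=0: smallest |lambda|
print(lam)
```

For this test matrix the eigenvalues are $3$ and $3 \pm \sqrt{3}$, so the iteration converges to $3 - \sqrt{3}$.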

Generally speaking, one does not calculate all eigenvectors and eigenvalues of a large, sparse matrix. Usually, eigenvalues at the extremes -- the eigenvalues with the largest and smallest magnitudes -- are easier to calculate accurately than eigenvalues in the middle of the spectrum. As alluded to earlier, interior eigenvalues can also be calculated accurately if estimates of them are available (using shift-and-invert). It's also expensive to calculate all of the eigenvalues, because iterative methods calculate only one eigenpair (or a small batch of them) at a time.

As for diagonalization, the Jordan canonical form is numerically unstable, so it is not calculated. The Schur decomposition is typically calculated instead, as alluded to in one of the comments above. If $A = QUQ^{*}$ is the Schur decomposition of $A$, then the main diagonal of $U$ gives the eigenvalues of $A$. Once these are known, it's possible to calculate the eigenvectors of $A$. If $A$ is a normal matrix (that is, $A^{*}A = AA^{*}$, where the asterisk denotes the conjugate transpose, or Hermitian transpose), then the Schur decomposition is also an eigendecomposition, and diagonalizes the matrix.
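To see the claim about the diagonal of the Schur factor, one can build a matrix with a known Schur decomposition and check (a sketch in Python with NumPy; the construction is my own):

```python
import numpy as np

rng = np.random.default_rng(1)

# Build A = Q U Q^T from a random orthogonal Q and an upper-triangular U,
# so that Q U Q^T is, by construction, a Schur decomposition of A.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
U = np.triu(rng.standard_normal((4, 4)))
np.fill_diagonal(U, [1.0, 2.0, 3.0, 4.0])   # prescribe the eigenvalues
A = Q @ U @ Q.T

# The eigenvalues of A are exactly the diagonal entries of U.
print(sorted(np.linalg.eigvals(A).real))    # approximately [1, 2, 3, 4]
```

Since $Q$ is real orthogonal here, $Q^{*} = Q^{T}$ and this is the real analogue of the decomposition $A = QUQ^{*}$ in the text.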

As many noted, a real matrix may have no real eigenvalue at all. However, as you correctly point out, the complex roots of a polynomial with real coefficients come in conjugate pairs, so for $n$ odd the characteristic polynomial of a real $n \times n$ matrix must have a real root; thus a smallest real eigenvalue must exist.

Geoff Oxberry
  • 30,394
  • 9
  • 64
  • 127
0

In addition to what Robert Israel wrote, I noticed that you mentioned that you are dealing with incredibly tiny square matrices, with $n=5$ being typical. I don't know what application you have in mind, but Mathematica is perfectly capable of computing the eigenvalues of small (i.e., $n<5000$) matrices. Just type

Eigenvalues[N[A]]

where $A$ is your matrix, and it will give you a list of all of them.

In general, if you are looking for the smallest eigenvalue of a very large matrix for which typical $O(n^3)$ algorithms would be infeasible, you should look at variants of the Lanczos algorithm. Lanczos is used for finding the largest eigenvalues of a matrix, but it can easily be adapted to find the smallest ones as well.
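For the curious, here is a bare-bones Lanczos sketch (Python with NumPy, my own illustration; it applies to symmetric matrices, and production codes avoid the full reorthogonalization used here):

```python
import numpy as np

def lanczos(A, k, seed=0):
    """k steps of symmetric Lanczos with full reorthogonalization.

    Returns the k x k tridiagonal matrix T whose extreme eigenvalues
    (Ritz values) approximate the extreme eigenvalues of symmetric A.
    """
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    Q = np.zeros((n, k + 1))
    alpha = np.zeros(k)
    beta = np.zeros(k)
    q = rng.standard_normal(n)
    Q[:, 0] = q / np.linalg.norm(q)
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        # Full reorthogonalization against all previous Lanczos vectors
        # (this subsumes the usual three-term recurrence).
        w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)
        beta[j] = np.linalg.norm(w)
        if beta[j] < 1e-12:          # Krylov space exhausted
            break
        Q[:, j + 1] = w / beta[j]
    return np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)

A = np.diag([1., 2., 5., 9., 20.])   # symmetric test matrix, known spectrum
T = lanczos(A, k=5)
ritz = np.linalg.eigvalsh(T)
print(ritz[0], ritz[-1])             # extreme Ritz values approximate 1 and 20
```

With $k = n$ steps the Ritz values reproduce the full spectrum; the practical appeal is that for large sparse $A$, a small $k \ll n$ already gives good approximations to the extreme eigenvalues, and each step needs only one matrix-vector product.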

DumpsterDoofus
  • 287
  • 1
  • 8