
I have a large cubic eigenvalue problem:

$$\left(\mathbf{A}_0 + \lambda\mathbf{A}_1 + \lambda^2\mathbf{A}_2 + \lambda^3\mathbf{A}_3\right)\mathbf{x} = 0.$$

I could solve this by converting it to a linear eigenvalue problem, but the resulting system is three times the dimension (and up to $3^2 = 9$ times the storage):

$$\begin{bmatrix} -\mathbf{A}_0 & 0 & 0 \\ 0 & \mathbf{I} & 0 \\ 0 & 0 & \mathbf{I} \end{bmatrix} \begin{bmatrix} \mathbf{x} \\ \mathbf{y} \\ \mathbf{z} \end{bmatrix} = \lambda \begin{bmatrix} \mathbf{A}_1 & \mathbf{A}_2 & \mathbf{A}_3 \\ \mathbf{I} & 0 & 0 \\ 0 & \mathbf{I} & 0 \end{bmatrix} \begin{bmatrix} \mathbf{x} \\ \mathbf{y} \\ \mathbf{z} \end{bmatrix},$$

where $\mathbf{y} = \lambda\mathbf{x}$ and $\mathbf{z} = \lambda\mathbf{y}$. What other techniques are available for solving a cubic eigenvalue problem? I've heard there is a version of Jacobi-Davidson that will solve it, but I haven't found an implementation.
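For what it's worth, the linearization above is easy to sanity-check on a small dense example with SciPy (the sizes and random matrices here are purely illustrative, not a large-scale solver):

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
n = 5  # illustrative size only
A0, A1, A2, A3 = (rng.standard_normal((n, n)) for _ in range(4))

# Companion linearization  L v = lambda R v  with v = [x; y; z]
I, Z = np.eye(n), np.zeros((n, n))
L = np.block([[-A0, Z, Z], [Z, I, Z], [Z, Z, I]])
R = np.block([[A1, A2, A3], [I, Z, Z], [Z, I, Z]])

lam, V = eig(L, R)

# Check the smallest-magnitude eigenpair against the original cubic problem
k = np.argmin(np.abs(lam))
x = V[:n, k]
res = (A0 + lam[k] * A1 + lam[k]**2 * A2 + lam[k]**3 * A3) @ x
rel_res = np.linalg.norm(res) / np.linalg.norm(x)
```

The relative residual `rel_res` of the recovered $(\lambda, \mathbf{x})$ pair should be at machine-precision level, confirming that the $3n$-dimensional pencil reproduces the cubic problem's eigenvalues.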

Also, I need to be able to target specific eigenvalues similarly to the shift-and-invert method of ARPACK and find the associated eigenvectors.

OSE
  • What are the dimensions of the matrices involved? – Bill Barth Jan 17 '14 at 16:20
  • $\mathbf{A}_i$ is order $10000\times 10000$. I have two different formulations of this problem, one in which $\mathbf{A}_i$ is dense and in the other it is sparse. – OSE Jan 17 '14 at 16:28
  • 1
    SLEPc has routines for quadratic eigenvalue problems and nonlinear eigenvalue problems, so you might be able to find what you need there. It also has shift-and-invert facilities, and has an interface to ARPACK. – Geoff Oxberry Jan 17 '14 at 19:06

1 Answer


With the reverse communication protocol of ARPACK, you do not need to store the $3n \times 3n$ matrices explicitly: you just need to provide two functions that compute:

$ \left[ \begin{array}{c} x \\ y \\ z \end{array}\right] \rightarrow \left[ \begin{array}{c} -A_0 x \\ y \\ z \end{array}\right]$ and $ \left[ \begin{array}{c} x \\ y \\ z \end{array}\right] \rightarrow \left[ \begin{array}{c} A_1 x + A_2 y + A_3 z \\ x \\ y \end{array}\right]$

(you still pay the price of storing the $3n$-dimensional vectors, but you do not pay anything for the matrices).
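As a concrete sketch of those two callbacks, here is one way to wrap them in SciPy `LinearOperator`s (small random matrices stand in for the real $A_i$; names are my own):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator

rng = np.random.default_rng(1)
n = 4  # illustrative size only
A0, A1, A2, A3 = (rng.standard_normal((n, n)) for _ in range(4))

def split(v):
    """Split a stacked vector into its x, y, z blocks."""
    return v[:n], v[n:2 * n], v[2 * n:]

# Left-hand side:  [x; y; z] -> [-A0 x; y; z]
def matvec_A(v):
    x, y, z = split(v)
    return np.concatenate([-A0 @ x, y, z])

# Right-hand side:  [x; y; z] -> [A1 x + A2 y + A3 z; x; y]
def matvec_B(v):
    x, y, z = split(v)
    return np.concatenate([A1 @ x + A2 @ y + A3 @ z, x, y])

opA = LinearOperator((3 * n, 3 * n), matvec=matvec_A, dtype=np.float64)
opB = LinearOperator((3 * n, 3 * n), matvec=matvec_B, dtype=np.float64)
```

Each application costs a few $n \times n$ matrix-vector products plus copies of the identity blocks, and nothing of size $3n \times 3n$ is ever formed.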

Regarding the invert transform, you can do the same, i.e. implement it yourself by using a callback that computes $x \mapsto M^{-1} x$ instead of $x \mapsto M x$, and then replace each computed eigenvalue $\lambda$ with $\lambda^{-1}$. To compute $M^{-1}x$, pre-factor the matrix $M$; here that only means pre-factoring $A_0$ (using LU, Cholesky, or their sparse variants, depending on the structure of the matrix). For the full shift-and-invert transform, I think something similar can be done.
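Putting this together, here is one hypothetical sketch of the invert transform with SciPy's ARPACK wrapper `eigs`, factoring $A_0$ once with `splu`. The operator applies $v \mapsto \operatorname{diag}(-A_0, I, I)^{-1} B\, v$, whose eigenvalues are $\mu = 1/\lambda$, so the largest-magnitude $\mu$ yield the smallest-magnitude $\lambda$ of the cubic problem (the sparse test matrices are made up):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, eigs, splu

rng = np.random.default_rng(2)
n = 50  # illustrative size only
# Made-up sparse test matrices; A0 is shifted to be safely nonsingular.
A0 = (sp.eye(n) * 4 + sp.random(n, n, density=0.1, random_state=3)).tocsc()
A1 = sp.random(n, n, density=0.1, random_state=4).tocsc()
A2 = sp.random(n, n, density=0.1, random_state=5).tocsc()
A3 = sp.random(n, n, density=0.1, random_state=6).tocsc()

lu = splu(A0)  # factor A0 once, reuse for every solve

# One matvec for the inverted operator  v -> A^{-1} B v,
# where A = diag(-A0, I, I) and B is the companion right-hand side.
def matvec(v):
    x, y, z = v[:n], v[n:2 * n], v[2 * n:]
    top = A1 @ x + A2 @ y + A3 @ z
    return np.concatenate([-lu.solve(top), x, y])

op = LinearOperator((3 * n, 3 * n), matvec=matvec, dtype=np.float64)

# Largest |mu| of A^{-1} B  <=>  smallest |lambda| of the cubic problem
mu, V = eigs(op, k=4, which="LM")
lam = 1.0 / mu

# Residual of the first recovered eigenpair in the original cubic problem
x = V[:n, 0]
res = A0 @ x + lam[0] * (A1 @ x) + lam[0]**2 * (A2 @ x) + lam[0]**3 * (A3 @ x)
rel_res = np.linalg.norm(res) / np.linalg.norm(x)
```

Targeting eigenvalues near a shift $\sigma \neq 0$ would instead require factoring the full shifted companion pencil rather than $A_0$ alone.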

BrunoLevy
  • 2,315
  • 12
  • 22