
Let $A\in \mathbb{R}^{n\times n}$ be symmetric and positive semidefinite, and let $\omega\in \mathbb{R}\setminus\{0\}$. I am interested in solving the following linear system for a range of values of $\omega$:

$$((A-\omega^2 I)(A-\omega^2 I)+\omega^2 I)x = b.$$ It may be useful to note that the matrix factors as $$ (A-(\omega^2-i\omega)I)(A-(\omega^2+i\omega)I), $$ where $i^2 = -1$.
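As a quick sanity check, the factorization can be verified numerically on a small random PSD matrix standing in for $A$ (the matrix here is purely illustrative; in the actual problem $A$ is large, sparse, and only available as an operator):

```python
import numpy as np

# Small random symmetric PSD matrix, purely to sanity-check the factorization.
rng = np.random.default_rng(0)
n, omega = 50, 0.7
B = rng.standard_normal((n, n))
A = B @ B.T                      # symmetric positive semidefinite
I = np.eye(n)

# Left-hand side: (A - w^2 I)^2 + w^2 I
M = (A - omega**2 * I) @ (A - omega**2 * I) + omega**2 * I
# Complex factorization: (A - (w^2 - iw) I)(A - (w^2 + iw) I)
F = (A - (omega**2 - 1j * omega) * I) @ (A - (omega**2 + 1j * omega) * I)

# The imaginary parts cancel and the two forms agree up to rounding error.
err = np.max(np.abs(M - F))
```

The factorization suggests solving two complex shifted systems instead of one real system with a squared condition number.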

Details: $A$ is sparse and I won't have direct access to its entries. The dimension of the null space of $A$ is a non-negligible fraction of $n$. The dimension of the problem, $n$, will be as big as the computer's RAM will allow.

What is a good way to preprocess / precondition this system? Note that the RHS, $b$, will change when $\omega$ changes.

Notes: This is a follow-up question to this one. The idea behind the proposed solution to that question shows that if we could perform a complete eigendecomposition of $A$, we would have a pretty much ideal preprocess. I have implemented a Lanczos iteration to approximate this eigendecomposition, but it doesn't perform as well as I had hoped. I can explain this idea in more detail as an addendum if there is interest.

Of course full answers are appreciated, but they are not expected. I am mainly looking for ideas to investigate. Any comments and pointers to the literature are much appreciated.

Note to mods: Is this kind of question acceptable? I can change it to something more definite if asking for ideas is unacceptable.

Edit

This is what I plan on doing. First note that as $\omega\to \infty$ the matrix approaches $(\omega^4+\omega^2)I$, so the interesting regime is when $\omega^2$ is comparable to the norm of $A$ or smaller.

To that end, we compute $r$ eigenpairs of $A$, $(\lambda_i,q_i)\in \mathbb{R}\times \mathbb{R}^{n}$, corresponding to the $r$ largest eigenvalues. Then, since these eigenvectors can be chosen orthonormal (and extended to an orthonormal basis $q_1,\dots,q_n$), we can write $$ x= \sum_{i=1}^r \alpha_i q_i + \sum_{i={r+1}}^n \alpha_i q_i. $$

Now, taking the inner product of both sides of the equation with $q_i$ for $1\le i\le r$, we get

$$ \alpha_i = \frac{\left\langle q_i,b \right\rangle}{(\lambda_i - \omega^2)^2 + \omega^2}. $$

I plan on using this information to construct an initial guess for $x$. I am still unsure what preconditioner to use.
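A minimal sketch of this initial-guess construction, using `scipy.sparse.linalg.eigsh` for the top eigenpairs (the random rank-deficient $A$ below is only a stand-in for the real operator):

```python
import numpy as np
from scipy.sparse.linalg import eigsh

# Stand-in problem: symmetric PSD with a large null space, as in the question.
rng = np.random.default_rng(1)
n, r, omega = 300, 20, 1.3
B = rng.standard_normal((n, n // 2))
A = B @ B.T / n                  # rank n/2, so a null space of dimension n/2
b = rng.standard_normal(n)

# r eigenpairs of A with the largest eigenvalues
lam, Q = eigsh(A, k=r, which='LA')

# alpha_i = <q_i, b> / ((lambda_i - omega^2)^2 + omega^2)
alpha = (Q.T @ b) / ((lam - omega**2)**2 + omega**2)
x0 = Q @ alpha                   # initial guess, exact on span(q_1, ..., q_r)
```

By construction the residual of $x_0$ is orthogonal to the computed eigenvectors, so an iterative solver started from $x_0$ only has to resolve the complementary subspace.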

fred
    I think it's fine (although I'm not a mod); the main thing to keep in mind is that it should be (somewhat) clear from the question what constitutes a useful answer, and ideally, that there is some objective (or at least not purely subjective) way of choosing the "best" answer to accept, in case there are multiple. – Christian Clason Oct 16 '15 at 15:36

1 Answer


You have noticed that any eigenvector of $A$ is also an eigenvector of your composed matrix. When you compute eigenvectors of your matrix, you can use them in a deflation-type preconditioner, as described in [1, 2].

[1] Frank, J. and Vuik, C., "On the construction of deflation-based preconditioners," SIAM Journal on Scientific Computing, 23:442–462, 2001.

[2] Saad, Y., Yeung, M., Erhel, J., and Guyomarc'h, F., "A deflated version of the conjugate gradient algorithm," SIAM Journal on Scientific Computing, 21:1909–1926, 2000.
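To make the idea concrete, here is a sketch of the simplest variant: eigenvectors of $A$ diagonalize the composed matrix $M = (A-\omega^2 I)^2 + \omega^2 I$, so one can solve exactly on the deflation subspace and run plain CG (which, in exact arithmetic, stays in the orthogonal complement) on the rest. The setup is a random stand-in for the question's operator, and `cg_solve` is a bare-bones hand-rolled conjugate gradient:

```python
import numpy as np
from scipy.sparse.linalg import eigsh

# Stand-in for the question's operator; M = (A - w^2 I)^2 + w^2 I is SPD.
rng = np.random.default_rng(2)
n, r, omega = 300, 20, 1.3
B = rng.standard_normal((n, n // 2))
A = B @ B.T / n
b = rng.standard_normal(n)
w2 = omega**2

def M_mv(v):
    """Apply M = (A - w^2 I)^2 + w^2 I without forming it."""
    u = A @ v - w2 * v
    return A @ u - w2 * u + w2 * v

def cg_solve(mv, rhs, tol=1e-10, maxiter=500):
    """Bare-bones conjugate gradients for the SPD operator mv."""
    x = np.zeros_like(rhs)
    res = rhs.copy()
    p = res.copy()
    rs = res @ res
    for _ in range(maxiter):
        Ap = mv(p)
        a = rs / (p @ Ap)
        x += a * p
        res -= a * Ap
        rs_new = res @ res
        if np.sqrt(rs_new) <= tol * np.linalg.norm(rhs):
            break
        p = res + (rs_new / rs) * p
        rs = rs_new
    return x

# Deflation vectors: eigenvectors of A are also eigenvectors of M.
lam, Z = eigsh(A, k=r, which='LA')
d = (lam - w2)**2 + w2            # eigenvalues of M on span(Z)

x_coarse = Z @ ((Z.T @ b) / d)    # exact solve on the deflation subspace
b_perp = b - Z @ (Z.T @ b)        # CG then sees only the complement
x = x_coarse + cg_solve(M_mv, b_perp)
```

This is only in the spirit of the deflated CG of [2] for this special choice of deflation space; the methods in [1, 2] handle general deflation vectors that need not be exact eigenvectors.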

H. Rittich