
Is there any hope of solving the following linear system efficiently with an iterative method?

$A \in \mathbb{R}^{n \times n}, x \in \mathbb{R}^n, b \in \mathbb{R}^n \text{, with } n > 10^6$

$Ax=b$

with

$A = (\Delta - K)$, where $\Delta$ is a very sparse matrix with a few diagonals, arising from the discretization of the Laplace operator. On its main diagonal there is $-6$, and there are $6$ other diagonals with $1$ on them.

$K$ is a full $\mathbb{R}^{n \times n}$ matrix that consists entirely of ones.

Solving with $A=\Delta$ works fine using iterative methods like Gauss-Seidel, because it is a sparse, diagonally dominant matrix. I suspect that the problem with $A=(\Delta - K)$ is pretty much impossible to solve efficiently for large $n$, but is there any trick to solve it by exploiting the structure of $K$?

EDIT: Would doing something like

$\Delta x^{k+1} = b + Kx^{k}$ (solve for $x^{k+1}$ with Gauss-Seidel)

converge to the correct solution? I read that such a splitting method converges if $\rho(\Delta^{-1} K) < 1$, where $\rho$ is the spectral radius. I computed the eigenvalues of $\Delta^{-1} K$ by hand for several small values of $n$; they are all zero except one, which has a large negative value (about $-500$ for $n=256$). So I guess that wouldn't work.
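(For reference: since $K = \mathbf e\,\mathbf e^T$ is rank one, $\Delta^{-1} K$ has exactly one nonzero eigenvalue, namely $\mathbf e^T \Delta^{-1} \mathbf e$, which matches the pattern above. A minimal MATLAB sketch of this check, assuming the spdiags construction from the next edit and hypothetical grid sizes $W=H=8$, $D=4$:)

```matlab
% Rank-one structure: K = e*e', so inv(delta)*K has a single nonzero
% eigenvalue, e'*inv(delta)*e. Hypothetical grid sizes giving n = 256.
W = 8; H = 8; D = 4; n = W*H*D;
e = ones(n, 1);
d = [e, e, e, -6*e, e, e, e];
delta = spdiags(d, [-W*H, -W, -1, 0, 1, W, W*H], n, n);
lambda = e' * (delta \ e);    % the only nonzero eigenvalue of inv(delta)*K
fprintf('rho(inv(delta)*K) = %g\n', abs(lambda));   % far above 1
```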

EDIT: More information about $\Delta$:

$\Delta \in \mathbb{R}^{n \times n}$ is symmetric, negative definite, and diagonally dominant.

It is created the following way in MATLAB (on a $W \times H \times D$ grid):

```matlab
n = W*H*D;                     % total number of grid points
e = ones(n, 1);
d = [e, e, e, -6*e, e, e, e];  % the seven stencil diagonals
delta = spdiags(d, [-W*H, -W, -1, 0, 1, W, W*H], n, n);
```
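A quick sanity check of the stated properties on a small grid (hypothetical sizes; `eig(full(...))` is only feasible for small $n$):

```matlab
% Sanity check on a small hypothetical grid: delta is symmetric and its
% largest eigenvalue is negative, i.e. delta is negative definite.
W = 4; H = 4; D = 4; n = W*H*D;
e = ones(n, 1);
delta = spdiags([e,e,e,-6*e,e,e,e], [-W*H,-W,-1,0,1,W,W*H], n, n);
assert(issymmetric(delta));
assert(max(eig(full(delta))) < 0);
```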

1 Answer

There are two choices which, if you use reasonable data structures, can solve the problem with $n>10^6$ on a laptop and $n \approx 10^{12}$ on a supercomputer. Note that for efficiency, you should use multigrid to solve with $\Delta$. The cost in either case will be a small factor more expensive than just solving with $\Delta$. The two approaches are equivalent via a Schur complement argument, but I describe them separately because the implementation is different.

  • Use the bordered system

$$ \newcommand\ee{\mathbf{e}} M = \begin{pmatrix} \Delta & \ee \\ \ee^T & 1 \end{pmatrix} $$

where $\ee$ is the column vector of all ones, and solve the system

$$ M \begin{pmatrix} \mathbf{x} \\ y \end{pmatrix} = \begin{pmatrix} \mathbf{b} \\ 0 \end{pmatrix} $$

using an iterative or direct solver (see the sketch below).

  • Use a Krylov method and apply the matrix $A$ as $\Delta - \ee \ee^T$ (i.e., a sparse matrix plus a rank-1 correction). Use your existing preconditioner $P^{-1} \approx \Delta^{-1}$, or, especially if you wanted to use a direct solve with $\Delta$, update it with the Sherman–Morrison formula.
– Jed Brown
  • I'd tend to think that you're much better off with the second approach. The point is simply that you must not try to store the matrix $K$ in memory, nor do a matrix-vector product with it. Rather, every time you have to multiply $A$ with a vector $z$ in your iterative scheme, you compute $h = \Delta z$ and then $y = h - \mathbf e (\mathbf e^T z)$. The term in parentheses is just the sum of the entries of $z$, and you compute it only once. Jed already explained this well, but I wanted to emphasize the order of operations. – Wolfgang Bangerth Jul 21 '12 at 22:55
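A sketch of that order of operations with MATLAB's `pcg`, assuming `delta`, `e`, and `b` from the question's setup; here `ichol` of $-\Delta$ stands in for the multigrid preconditioner the answer recommends. Since $A = \Delta - \mathbf e\,\mathbf e^T$ is negative definite, one can equivalently solve $(-A)x = -b$, which is symmetric positive definite:

```matlab
% Matrix-free application of A = delta - e*e': K is never formed. Since A
% is negative definite, solve the SPD system (-A)x = -b with pcg.
afun = @(z) -(delta*z) + e * (e'*z);  % z -> -A*z; (e'*z) is sum(z), computed once
L = ichol(-delta);                    % stand-in preconditioner (answer suggests multigrid)
x = pcg(afun, -b, 1e-8, 500, L, L');
```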