3

Given a square matrix $A_0$, a vector $b$, the solution $A_0^{-1}b$, and matrices $A_1, A_2, \dots, A_k$, where each $A_i$ is generated from $A_{i-1}$ by replacing a single column, I would like to find an efficient method to compute all of

$$A_1^{-1}b, A_2^{-1}b, \dots, A_k^{-1}b$$

Any suggestions are welcome.

John Smith
  • How do you obtain the solution $A_0^{-1}b$? Do you perform an LU decomposition of $A_0$? For example, see https://scicomp.stackexchange.com/questions/25063/low-rank-updates-in-bfgs – Anton Menshov Apr 12 '18 at 17:40
  • I use the iterative method: $x_{j+1} = b - (A_0-I)x_{j}$. – John Smith Apr 12 '18 at 17:48
  • @RodrigodeAzevedo, one can view the replacement of a single column as a rank-one update of $A$, i.e., $A' = A+uv^T$. I notice that, after applying the Sherman–Morrison formula, I still need to compute $A^{-1}u$. It appears to me that the cost won't be reduced. – John Smith Apr 12 '18 at 17:52
  • @RodrigodeAzevedo, he is certainly not finding $A^{-1}$ explicitly. I'm not sure how you can use Sherman–Morrison with the iterative procedure... – Anton Menshov Apr 12 '18 at 18:09
  • If you're using a Krylov subspace method, then in the process of computing a solution $A_0^{-1}b$ you are also building up an approximation of $A_0^{-1}$ in the Krylov subspace (e.g., see any reference on the Arnoldi process). This approximation is almost certainly going to be a great preconditioner for $A_1$. In fact, I'm fairly sure that you can prove something about the quality of this preconditioner. – Richard Zhang Apr 12 '18 at 18:39

1 Answer

6

This can be done using a technique called the $\eta$ (eta) factorization. This method is commonly used in implementations of the simplex method for linear programming and can be found in many textbooks on linear programming.

The procedure is as follows:

  1. Find an LU factorization of $A_{1}$,

$PA_{1}=LU$

Using this factorization, you can easily solve $A_{1}x=b$.

  2. Suppose that $A_{2}$ is constructed from $A_{1}$ by replacing column $p$ of $A_{1}$ with a vector $v$. Then we can write

$A_{2}=A_{1}E_{2}$

where $E_{2}$ is a so-called $\eta$ matrix: an identity matrix with column $p$ replaced by the solution $u$ of $A_{1}u=v$. Multiplying $A_{1}$ by the columns of $E_{2}$ taken from $I$ copies the corresponding columns of $A_{1}$ into $A_{2}$, while multiplying $A_{1}$ by $u$ in the $p$th column of $E_{2}$ produces the desired vector $v$.
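Concretely, $E_{2}$ has the form

$$E_{2}=\begin{pmatrix}1 & & u_{1} & & \\ & \ddots & \vdots & & \\ & & u_{p} & & \\ & & \vdots & \ddots & \\ & & u_{n} & & 1\end{pmatrix}$$

with the vector $u$ occupying column $p$.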

  3. To solve $A_{2}x=b$, we use the following procedure. First, solve $A_{1}w=b$ using the original LU factorization of $A_{1}$. Then, solve $E_{2}x=w$ by first computing $x_{p}=w_{p}/u_{p}$ and then substituting to get $x_{i}=w_{i}-u_{i}x_{p}$ for the remaining entries $i\neq p$. Thus

$A_{2}x=A_{1}E_{2}x=A_{1}w=b$.

This process is continued to get

$A_{k}=A_{1}E_{2}E_{3}E_{4}\cdots E_{k}$.
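Putting the pieces together, here is a minimal sketch of the procedure in Python with NumPy/SciPy. It assumes `A1` and `b` are given arrays and that `column_updates` is a list of `(p, v)` pairs describing the successive column replacements; the name `eta_solve` is illustrative, not a standard API.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def eta_solve(etas, w):
    """Solve (E_2 E_3 ... E_k) x = w, where each eta matrix is stored
    as a pair (p, u): an identity with column p replaced by u."""
    x = w.copy()
    for p, u in etas:            # peel off E_2 first, then E_3, and so on
        xp = x[p] / u[p]         # pivot entry: x_p = w_p / u_p
        x -= u * xp              # x_i = w_i - u_i * x_p (zeroes out x[p])
        x[p] = xp                # restore the pivot entry
    return x

lu, piv = lu_factor(A1)          # step 1: factor A_1 once, O(n^3)
etas, solutions = [], []

for p, v in column_updates:      # steps 2 and 3, repeated for each update
    # Next eta column: solve A_j u = v via the LU factors and the
    # accumulated eta matrices, O(n^2) plus O(n) per eta matrix.
    u = eta_solve(etas, lu_solve((lu, piv), v))
    etas.append((p, u))
    # Solve A_{j+1} x = b the same way.
    solutions.append(eta_solve(etas, lu_solve((lu, piv), b)))
```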

The worst-case computational complexity of the procedure is as follows. Computing the LU factorization of $A_{1}$ takes $O(n^{3})$ time. Computing an $\eta$ matrix takes $O(n^{2})$ time. The solution of the system $A_{1}w=b$ takes $O(n^{2})$ time. The solution of a system $E_{j}v_{j}=w_{j-1}$ takes $O(n)$ time.
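Summing these per-step costs over all $k$ updates (an estimate that follows from the figures above, assuming one right-hand side per system) gives a total of

$$O(n^{3}) + O(kn^{2}) + O(k^{2}n),$$

compared with $O(kn^{3})$ for refactoring each $A_{j}$ from scratch; this is a clear win whenever $k \ll n$.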

In practice, the matrix is usually refactorized once the number of $\eta$ matrices hits a configurable limit. In linear programming, refactorizing the basis matrix after 30 $\eta$ updates is a common rule of thumb.
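For illustration, such a limit could be enforced with a check like the following inside the update loop of the sketch above (the constant 30 is just the rule of thumb quoted here, and `Aj` is assumed to be the current matrix, kept explicitly):

```python
ETA_LIMIT = 30                    # configurable refactorization threshold

if len(etas) >= ETA_LIMIT:
    lu, piv = lu_factor(Aj)       # refactor the current matrix from scratch
    etas.clear()                  # the eta matrices are now folded into lu
```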

Although this method is commonly employed with a direct factorization method to solve $A_{1}x=b$, you could apply the same approach with an iterative method for solving $A_{1}x=b$. Finding each $\eta$ matrix $E_{k}$ would require a full iterative solution plus $k-1$ $\eta$ updates. The solution of $A_{k}x=b$ would likewise require one full iterative solution plus $k-1$ $\eta$ updates, each of which takes only $O(n)$ time. This really wouldn't save any computational effort compared with solving $A_{k}x=b$ directly by the iterative method.

Brian Borchers
  • Thanks, Brian. It seems that solving $A_1 u =v$ for $u$ has the same computational complexity as directly solving $A_1 x =b$ for $x$. What's the main advantage of your method? – John Smith Apr 14 '18 at 01:31
  • The important point is that with a direct factorization, it takes $O(n^{3})$ time to factor $A_{1}$, and then each additional solve takes only $O(n^{2})$ time. This advantage disappears if you use an iterative method to solve the equations. – Brian Borchers Apr 14 '18 at 02:26