
It is well known that for certain linear systems the Jacobi and Gauss-Seidel iterations have closely related convergence behavior: by the Stein-Rosenberg theorem, under suitable assumptions they either both converge or both diverge. I am wondering whether similar results exist for nonlinear iterations, where at step $k$ the Jacobi iteration on, say, $\mathbb{R}^n$ is defined as

$$x_i^{k+1}=F_i(x_1^{k},\cdots,x_n^{k}),$$

and the Gauss-Seidel iteration is defined as

$$x_i^{k+1}=F_i(x_1^{k+1},\cdots,x_{i-1}^{k+1},x_i^k,\cdots,x_n^k)$$

for $i=1,\cdots,n$, where the $F_i: \mathbb{R}^n\to\mathbb{R}$ are given nonlinear functions.
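For concreteness, here is a minimal sketch of the two sweeps, assuming each $F_i$ is available as a callable; the three-component map at the bottom is just a made-up contractive example, not part of the question.

```python
import numpy as np

def jacobi_step(F, x):
    """One nonlinear Jacobi sweep: every component is updated
    using only the previous iterate x^k."""
    x_new = np.empty_like(x)
    for i, F_i in enumerate(F):
        x_new[i] = F_i(x)          # F_i sees the old iterate only
    return x_new

def gauss_seidel_step(F, x):
    """One nonlinear Gauss-Seidel sweep: component i already sees
    the freshly updated components 1, ..., i-1."""
    x_new = x.copy()
    for i, F_i in enumerate(F):
        x_new[i] = F_i(x_new)      # earlier entries of x_new are already new
    return x_new

# Illustrative contractive fixed-point map: x_i = 0.25 * cos(sum of the other components)
F = [lambda x, i=i: 0.25 * np.cos(np.sum(x) - x[i]) for i in range(3)]
x = np.zeros(3)
for _ in range(50):
    x = gauss_seidel_step(F, x)    # or jacobi_step(F, x)
print(x)
```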

hchen
  • The statements of nonlinear Jacobi and Gauss-Seidel (the two displayed equations) are backwards. Are you expecting to assume that F is positive and perhaps some notion of coercivity? – Jed Brown Jan 05 '14 at 04:37
  • We need to start from the assumptions you need for convergence of each method. For example, what you are calling Jacobi (it's usually called successive substitution for nonlinear equations) is normally justified by requiring the operator $F$ to be contractive. – Matt Knepley Jan 05 '14 at 19:12
  • I guess the question is that, since we have two iterative methods built on the same functions $\{F_i\}$, the mappings defined by these two methods are definitely related. But in general it is not true that $(F_1,\cdots,F_n)$ being contractive (which makes the Gauss-Seidel method converge) implies that the Jacobi method converges. – hchen Jan 08 '14 at 17:31
  • My first thought is that you could take a Taylor series of the functions at a point, assume a small enough domain (with a small enough step size) that the higher-order terms vanish, reduce the problem to a linear one, and then look at where that reduction breaks down, such as at discontinuities. – EngrStudent Mar 08 '15 at 14:12

1 Answer


This paper ( http://www.dcs.bbk.ac.uk/~gmagoulas/APNUM.PDF ) seems to be relevant to your question.

Edit (more than a year later): The paper above is "From linear to nonlinear iterative methods" by Vrahatis et al. (2003). Their main application is neural-network training, but they also present some results for a few classical optimization problems.

The part relevant to this question is Theorems 1 and 2. Theorem 1 states that if $f$ is twice differentiable in an open neighborhood of the minimizer $x^*$ and the Hessian $H(x^*)$ of $f$ is positive definite and has Property A, then the nonlinear Jacobi iteration for $f$ converges to $x^*$, provided the initial guess $x_0$ is sufficiently close. Theorem 2 says that the nonlinear SOR scheme likewise converges for any sufficiently close initial guess $x_0$ if $f$ is twice differentiable in an open neighborhood $S_0$ of the minimizer $x^*$.

In short, both methods are locally convergent, provided the function is sufficiently nice and the initial guess is good.
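To make that concrete, here is a small sketch (not the scheme from the paper verbatim) of a nonlinear Gauss-Seidel/SOR sweep realized as one 1-D Newton step per coordinate, applied to a made-up smooth function whose Hessian is positive definite everywhere; the matrix `A`, the relaxation parameter `omega`, and the objective are illustrative choices.

```python
import numpy as np

# Objective f(x) = 0.5 x^T A x + sum_i exp(x_i): smooth, with Hessian
# A + diag(exp(x_i)), which is positive definite for this A.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])

def grad_i(x, i):        # i-th component of the gradient of f
    return A[i] @ x + np.exp(x[i])

def hess_ii(x, i):       # (i, i) entry of the Hessian of f
    return A[i, i] + np.exp(x[i])

def nonlinear_sor(x, omega=1.0, sweeps=30):
    """One 1-D Newton step per coordinate, immediately reusing the updated
    components (omega = 1 gives nonlinear Gauss-Seidel; a Jacobi variant
    would evaluate grad_i and hess_ii at the start-of-sweep iterate)."""
    x = x.copy()
    for _ in range(sweeps):
        for i in range(len(x)):
            x[i] -= omega * grad_i(x, i) / hess_ii(x, i)
    return x

x_star = nonlinear_sor(np.zeros(3))
print(x_star, np.linalg.norm(A @ x_star + np.exp(x_star)))  # gradient norm ~ 0
```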

There are further results in the paper, along with some suggestions for developing globally convergent algorithms. Note, though, that the paper was published in 2003, so the state of the art has moved on since then, and I have not done a thorough review of the more recent literature.

Abdullah Ali Sivas
  • Could you explain the main points of the paper in your answer? Also, keep in mind that link-only answers are discouraged since links might change in the future. – nicoguaro Mar 13 '21 at 18:57
  • Done. I know that links are volatile and that it is frowned upon to post such answers; my apologies. I probably just wanted to give a quick answer at the time and didn't think twice. – Abdullah Ali Sivas Mar 13 '21 at 20:44