According to Wikipedia, the rate of convergence is expressed as a specific ratio of vector norms. I'm trying to understand the difference between "linear" and "quadratic" rates at different points in time (basically, "at the beginning" of the iteration and "at the end"). Could it be stated that:
- with linear convergence, the norm of the error $e_{k+1}$ of the iterate $x_{k+1}$ is bounded by $\|e_k\|$,
- with quadratic convergence, the norm of the error $e_{k+1}$ of the iterate $x_{k+1}$ is bounded by $\|e_k\|^2$?
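(For reference, writing the two statements above with the coefficient kept, the way I read the Wikipedia definitions, they would be roughly

$$\|e_{k+1}\| \le \lambda\,\|e_k\| \quad \text{with } 0 < \lambda < 1 \qquad \text{(linear)},$$
$$\|e_{k+1}\| \le \lambda\,\|e_k\|^2 \quad \text{with } \lambda > 0 \qquad \text{(quadratic)}.)$$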
Such an interpretation would mean that, after a few (a small number of) iterations, the linearly convergent algorithm A1 would achieve a smaller error than the quadratically convergent algorithm A2 after the same number of iterations (random initialization assumed for both). However, since the error diminishes, the squaring would mean that at later iterations A2 has the smaller error.
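To make the comparison I have in mind concrete, here is a minimal sketch in Python that iterates the two bounds as if they held with equality; the coefficients `lam_lin`, `lam_quad` and the initial error `e0` are made-up values chosen purely for illustration, not taken from any real algorithm:

```python
# Minimal sketch: iterate the two error bounds as if they held with equality.
# lam_lin, lam_quad and e0 are hypothetical values, chosen only to illustrate
# the question, not taken from any concrete algorithm.

lam_lin = 0.5    # hypothetical linear rate coefficient (must be < 1)
lam_quad = 1.0   # hypothetical quadratic rate coefficient
e0 = 0.9         # hypothetical initial error norm ||e_0||

e_lin, e_quad = e0, e0
print(f"k=0  linear={e_lin:.3e}  quadratic={e_quad:.3e}")
for k in range(1, 8):
    e_lin = lam_lin * e_lin          # ||e_{k+1}|| = lam * ||e_k||
    e_quad = lam_quad * e_quad ** 2  # ||e_{k+1}|| = lam * ||e_k||^2
    print(f"k={k}  linear={e_lin:.3e}  quadratic={e_quad:.3e}")
```

With these particular numbers, the linear sequence is smaller for the first few iterations and the quadratic one overtakes it around $k = 6$, which is exactly the behaviour I am asking about; of course, different choices of the coefficients and of $\|e_0\|$ change where (and whether) the crossover happens.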
Is the above interpretation valid? Note that it disregards the rate coefficient $\lambda$.