
I was looking at the MATLAB function pinv.m for the computation of the pseudoinverse. The code uses the singular value decomposition $$ A = U D V^T. $$ When looking for non-zero diagonal elements it sets a threshold using the formula $$ \max(n,m) \cdot \text{eps}(d_{max}), $$ where $n$ is the number of rows and $m$ the number of columns of $A$, $\text{eps}$ is the machine precision, and $d_{max}$ is the largest singular value. Apparently this is the standard way to do it. However, I cannot easily find an explanation for this formula.
Could somebody provide it?
Also, is it a strict upper bound on the floating-point error, or are some probabilistic considerations taken into account?
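For concreteness, here is a sketch of how that threshold is typically applied in a hand-rolled pseudoinverse. This is my own NumPy mock-up of the logic, not the actual pinv.m source; `np.spacing(x)` plays the role of MATLAB's `eps(x)`:

```python
import numpy as np

def my_pinv(A):
    # Thin SVD: A = U @ diag(s) @ Vh
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    # Threshold from the question: max(n, m) * eps(d_max)
    tol = max(A.shape) * np.spacing(s.max())
    # Invert only the singular values above the threshold;
    # the rest are treated as numerically zero
    keep = s > tol
    s_inv = np.zeros_like(s)
    s_inv[keep] = 1.0 / s[keep]
    # Pseudoinverse: V @ diag(s_inv) @ U^T
    return (Vh.T * s_inv) @ U.T

# Rank-1 example: the second computed singular value is pure
# roundoff and falls below the threshold, so it gets dropped
A = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])
X = my_pinv(A)
print(np.allclose(A @ X @ A, A))  # Moore-Penrose condition
```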

pinpon

1 Answer


The intuition comes from considering how wrong your computed singular values could be given a perturbed matrix $\tilde{A} = A + \delta A$. I think this ultimately goes back to Golub (or even before). I managed to hunt down a proper proof for an actual SVD algorithm in Ipsen (Sec. 4, pg 28). The result follows from a careful round-off error analysis: the $\max(m,n)\,\epsilon$ factor comes from the accumulation of machine-epsilon-scale errors at most $m$ or $n$ times, whichever is larger, and the largest singular value appears because it is the 2-norm of $A$.
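You can see the bound doing its job on an exactly rank-deficient matrix: the computed trailing singular values are roundoff noise of size roughly $\epsilon \, \|A\|_2$, which sits safely below the $\max(m,n)\,\epsilon\,\|A\|_2$ threshold, while the true nonzero singular values sit far above it. A small NumPy illustration (my own construction, not from Ipsen):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 50, 30, 10
# Exactly rank-r matrix built as a product of random factors
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

s = np.linalg.svd(A, compute_uv=False)
tol = max(m, n) * np.spacing(s[0])

# The r true singular values exceed the threshold; the n - r
# trailing ones are O(eps * s[0]) roundoff and fall below it
print(int(np.sum(s > tol)))        # recovered numerical rank: 10
print(s[r] < tol <= s[r - 1])      # noise below, signal above: True
```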

  • Many thanks, I am going through the proof; so is it a strict upper bound? – pinpon Feb 17 '23 at 09:18
  • In this case, yes, but this is for the Jacobi method. I know that this heuristic is used broadly, going back many decades. I am not sure if it is a hard upper bound in all such cases. However, flirting with the bound is probably not a good idea in practice. –  Feb 17 '23 at 18:20