I have a real matrix whose two largest eigenvalues (in absolute value) form a complex conjugate pair. I need to check whether their absolute value is greater than 1.
Since the eigenvalue of largest absolute value is complex, the power method (and any other method that can only output real eigenvalues) doesn't apply.
I looked into using numerical methods. SciPy, NumPy and TensorFlow all provide an `eigvals` function, and it reports an eigenvalue with absolute value 1.02. I need to know the error bounds of `eigvals` to understand whether this proves that the absolute value is greater than 1.
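For concreteness, here is a minimal version of the check I'm doing, on a hypothetical 2×2 stand-in matrix with a conjugate pair of eigenvalues of modulus 1.02 (the actual matrix is different):

```python
import numpy as np

# Hypothetical stand-in: eigenvalues are +/- 1.02i, a conjugate pair.
A = np.array([[0.0, -1.02],
              [1.02, 0.0]])

# eigvals returns complex eigenvalues for a real non-symmetric matrix.
eigenvalues = np.linalg.eigvals(A)
spectral_radius = max(abs(eigenvalues))
print(spectral_radius)  # ~1.02 for this example
```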
What I found from documentation: `tensorflow.linalg.eigvals` uses the C++ Eigen library, which computes eigenvalues via the Schur decomposition. The documentation says: "The Schur decomposition is computed by first reducing the matrix to Hessenberg form using the class HessenbergDecomposition. The Hessenberg matrix is then reduced to triangular form by performing Francis QR iterations with implicit double shift." Wikipedia says that a bound on the error of the QR algorithm is provided by the Gershgorin circle theorem. I'd like to know how I can practically compute this error bound, and how floating-point precision factors into it.
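One standard (though not fully rigorous) way to turn this into a computable number is the Bauer–Fike theorem: if $A = V \Lambda V^{-1}$ is diagonalizable, every eigenvalue of $A + E$ lies within $\kappa_2(V)\,\|E\|_2$ of an eigenvalue of $A$. The QR algorithm is backward stable, so the computed eigenvalues are exact eigenvalues of some $A + E$ with $\|E\|_2 \approx c\,\varepsilon\,\|A\|_2$, where $\varepsilon$ is machine epsilon and $c$ is a modest constant. A sketch (taking $c = 1$, which is an assumption, not a guarantee, and using a stand-in matrix):

```python
import numpy as np

A = np.array([[0.0, -1.02],
              [1.02, 0.0]])  # stand-in for the actual matrix

eigenvalues, V = np.linalg.eig(A)          # V: matrix of eigenvectors
kappa = np.linalg.cond(V)                  # condition number of V
eps = np.finfo(A.dtype).eps                # machine epsilon, ~2.2e-16
backward_err = eps * np.linalg.norm(A, 2)  # ~ ||E||_2, with constant c = 1
forward_bound = kappa * backward_err       # Bauer-Fike perturbation bound

radius = max(abs(eigenvalues))
print(radius, forward_bound)
# If radius - forward_bound > 1, the spectral radius exceeds 1
# (modulo the heuristic choice of the constant c).
```

The floating-point precision enters through `eps`: the backward error, and hence the bound, scales linearly with machine epsilon, but it is amplified by the conditioning of the eigenvector matrix.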
If there are other approaches to checking whether the spectral radius is greater than 1, I'm interested in them, too.
UPD: It just occurred to me that I could use Gelfand's formula: $r(A) = \lim_{n\to\infty} \|A^n\|^{1/n}$, so if powers of A have big eigenvalues, then the spectral radius is above 1? But then how do I prove that numerical errors during matrix multiplication didn't alter the result?
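One caveat on this idea: for any submultiplicative norm, $\|A^n\|^{1/n}$ is an *upper* bound on $r(A)$ for every finite $n$, so a large norm of $A^n$ alone doesn't certify $r(A) > 1$. A lower bound that is easy to certify comes from the trace: $\operatorname{tr}(A^n) = \sum_i \lambda_i^n$, hence $|\operatorname{tr}(A^n)| \le d \cdot r(A)^n$ for a $d \times d$ matrix, so $r(A) \ge (|\operatorname{tr}(A^n)|/d)^{1/n}$. If A has rational entries, $\operatorname{tr}(A^n)$ can be evaluated exactly with Python's `fractions` module, eliminating floating-point error entirely. The sketch below (assuming a rational A, which is my own assumption) checks whether $|\operatorname{tr}(A^n)| > d$ for some $n$, which certifies $r(A) > 1$; cancellation between the $\lambda_i^n$ terms can make the bound weak for particular $n$, so several powers are tried:

```python
from fractions import Fraction

def mat_mul(X, Y):
    """Exact matrix product over the rationals."""
    d = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(d)) for j in range(d)]
            for i in range(d)]

def certify_radius_above_one(A, max_power=64):
    """Return the first n with |tr(A^n)| > d, certifying r(A) > 1,
    or None if no power up to max_power certifies it."""
    d = len(A)
    P = A
    for n in range(1, max_power + 1):
        trace = sum(P[i][i] for i in range(d))
        if abs(trace) > d:           # exact rational comparison
            return n                 # r(A) >= (|tr|/d)^(1/n) > 1
        P = mat_mul(P, A)
    return None

# Stand-in matrix with eigenvalues +/- (51/50)i, so r(A) = 1.02.
A = [[Fraction(0), Fraction(-51, 50)],
     [Fraction(51, 50), Fraction(0)]]
print(certify_radius_above_one(A))  # 2 for this example
```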