I faintly recall from my early "numerics" lectures that iterative linear solvers for $Ax=b$ often require that when $A$ is decomposed as
$$A=D + M$$
where $D$ is a diagonal matrix and $M$ has zero diagonal, the entries of $D$ should dominate those of $M$ for the iteration to converge (strict diagonal dominance guarantees convergence of the Jacobi method, for instance).
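For concreteness, here is a minimal Jacobi sketch on a toy system (the dimensions and the dominance margin are made up) showing the kind of splitting iteration I have in mind:

```python
# Minimal Jacobi sketch: the update x <- D^{-1}(b - M x) contracts
# only when the diagonal d dominates the off-diagonal rows of M.
import numpy as np

rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
np.fill_diagonal(M, 0.0)                  # M has zero diagonal
d = 2.0 * np.abs(M).sum(axis=1) + 1.0     # strictly dominant diagonal
A = np.diag(d) + M
b = rng.standard_normal(n)

x = np.zeros(n)
for _ in range(200):
    x = (b - M @ x) / d                   # Jacobi step

print(np.linalg.norm(A @ x - b))          # ~machine precision, since dominance holds
```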
What if that's not the case and the entries of $D$ become really small?
Should I use a direct solver then?
To be more specific, the linear system I want to solve involves a matrix $$A(\omega) = D(\omega) + M$$ where the off-diagonal part $M$ is constant but the diagonal part depends on a parameter $\omega$ in a non-trivial way. So far, I don't see a way around solving $A(\omega) x = b$ anew for each $\omega$.
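Schematically, my current approach is a dense direct solve per frequency point (everything below is a stand-in for my actual data; `diag_entries` is a hypothetical placeholder for the true $\omega$-dependence):

```python
# Rebuild A(omega) and solve from scratch at every frequency point.
import numpy as np

rng = np.random.default_rng(1)
n = 100
M = rng.standard_normal((n, n)).astype(complex)
np.fill_diagonal(M, 0.0)                      # constant off-diagonal part
b = rng.standard_normal(n).astype(complex)

def diag_entries(omega):
    # hypothetical placeholder for the true omega-dependence of D(omega)
    return omega + np.linspace(-1.0, 1.0, n) + 1e-6j

for omega in np.linspace(-2.0, 2.0, 201):
    A = M + np.diag(diag_entries(omega))      # A(omega) = D(omega) + M
    x = np.linalg.solve(A, b)                 # one dense LU solve per omega
```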
The diagonal entries are of the form $A_{jj} = \omega + z_j + i\eta$, where $z_j$ is a real number that depends on the row, $\eta$ is a very small convergence factor, and $i$ is the imaginary unit. Could this lead to numerical instabilities when $\omega + z_j \approx 0$?
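Taking this form literally, the $\omega$-dependence is a uniform shift, $A(\omega) = (M + \operatorname{diag}(z)) + (\omega + i\eta) I$, so for $\eta = 0$ the matrix is singular exactly when $-\omega$ is an eigenvalue of $M + \operatorname{diag}(z)$. Here is a toy check (made-up $M$, $z$, $\eta$) of how the condition number behaves at such a crossing:

```python
# Toy conditioning check: cond(A) blows up to ~1/eta when omega hits
# minus an eigenvalue of M + diag(z); away from crossings it stays moderate.
import numpy as np

M = np.array([[0.0, 0.5, 0.0, 0.2],
              [0.5, 0.0, 0.3, 0.0],
              [0.0, 0.3, 0.0, 0.1],
              [0.2, 0.0, 0.1, 0.0]])
z = np.array([-1.0, 0.0, 1.0, 2.0])
eta = 1e-8

bad = -np.linalg.eigvalsh(M + np.diag(z))[0]    # an exactly singular omega (for eta = 0)
for omega in [bad + 0.5, bad]:
    A = M + np.diag(z) + (omega + 1j * eta) * np.eye(4)
    print(f"omega={omega:+.4f}  cond={np.linalg.cond(A):.2e}")
```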
EDIT: Well, maybe one more thing about the nature of $A(\omega)$: if one sets $\eta$ to exactly $0$, then $A(\omega)$ is guaranteed to become singular at certain real values of $\omega$, i.e., $A(\omega)^{-1}$ has poles on the real axis. This is because I ultimately use this matrix to compute (many-body) Green's functions in the frequency domain, and those need a convergence factor $\eta$ to move their poles off the real axis. The sum of the absolute values of the off-diagonal matrix elements in each row is at most $10$, but the diagonal will always have some entries whose real part is very close or equal to zero.
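To make the dominance question concrete, here is a row-wise check using the numbers above (off-diagonal row sums of $10$; the $z_j$, $\eta$, and $\omega$ are made up):

```python
# Row-wise diagonal dominance check: |A_jj| vs. sum_k |M_jk|.
import numpy as np

n = 5
M = np.full((n, n), 2.5 + 0j)
np.fill_diagonal(M, 0.0)         # off-diagonal row sums: 4 * 2.5 = 10
z = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
eta, omega = 1e-6, 0.0

diag = omega + z + 1j * eta      # |diag| <= 2 everywhere, ~eta in row 2
print(np.abs(diag) > np.abs(M).sum(axis=1))   # all False: no row is dominant
```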