In a generic rootfinding problem $f(x) = 0$, we assume that the probability that the root $x$ is exactly representable in floating point is zero. Hence the best floating point approximation $\hat{x} = x(1+\delta)$ to $x$, with $|\delta| \le \mu_{M}$ the unit roundoff, gives (since $f(x) = 0$) $f(\hat{x}) = f(x(1+\delta)) = x\delta f'(x) + \mathcal{O}(\delta^2) \approx x\delta f'(x)$.
Hence, any rootfinding iteration can be terminated once $|f(\hat{x})| \le \mu_{M}|x f'(x)|$. However, this termination criterion fails in the presence of a double root, since there $f'(x) = 0$ and the right-hand side collapses to zero.
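For concreteness, here is a minimal sketch of that criterion inside a Newton iteration (the function and variable names are mine, not from any particular library):

```python
import sys

def newton_simple_root(f, fprime, x0, max_iter=100):
    """Newton iteration stopped by the residual test |f(x)| <= mu_M |x f'(x)|."""
    mu = sys.float_info.epsilon / 2  # unit roundoff mu_M for binary64
    x = x0
    for _ in range(max_iter):
        fx, fpx = f(x), fprime(x)
        # Best attainable residual at a simple root is ~ mu_M |x f'(x)|;
        # a small safety factor (e.g. 2) is sometimes needed in practice.
        if abs(fx) <= mu * abs(x * fpx):
            return x
        x -= fx / fpx
    return x

# Simple root: converges to sqrt(2) and terminates via the residual test.
print(newton_simple_root(lambda x: x*x - 2.0, lambda x: 2.0*x, 1.5))
```

(Try it with $f(x) = x^2 - 2x + 1$ evaluated in expanded form: the iteration typically stalls near the double root $x = 1$ without the test ever firing.)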
What is a sensible termination criterion for a rootfinding problem in the presence of a double root? (The assumption is that $f \in C^{2}$ and $f''$ is available.)
@LutzLehman makes a valid point that this may be impossible in general. Nonetheless, for polynomials I have succeeded in every case I have tested by using the expected residual
$$ \mu_{M}\left[\sum_{k=0}^{n} |c_k||x|^k + |xp'(x)| \right], $$
where $p(x) = \sum_{k=0}^{n} c_k x^k$ is the polynomial in question, terminating once the computed $|p(\hat{x})|$ falls below this bound.
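The bound is cheap to evaluate alongside the polynomial itself; here is a sketch (the helper name is mine, and `coeffs` holds $c_0, \dots, c_n$):

```python
import sys

def expected_residual(coeffs, x):
    """mu_M * [ sum_k |c_k| |x|^k  +  |x p'(x)| ]  for p(x) = sum_k c_k x^k."""
    mu = sys.float_info.epsilon / 2  # unit roundoff mu_M for binary64
    abs_part = sum(abs(c) * abs(x)**k for k, c in enumerate(coeffs))
    dp = 0.0  # Horner evaluation of p'(x) = sum_{k>=1} k c_k x^(k-1)
    for k in range(len(coeffs) - 1, 0, -1):
        dp = dp * x + k * coeffs[k]
    return mu * (abs_part + abs(x * dp))

# Double root of p(x) = x^2 - 2x + 1 = (x - 1)^2: at a best-attainable
# approximation (roughly sqrt(mu_M) away), |p(x)| sits at the level of the bound.
x = 1.0 + 1e-8
p = 1.0 - 2.0*x + x*x
print(abs(p), expected_residual([1.0, -2.0, 1.0], x))
```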
Might a running error estimate in the evaluation of $f$ succeed in the general case?
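For polynomials, at least, such a running error bound can be accumulated during the Horner evaluation itself; the sketch below follows the standard running-error recurrence (see Higham, *Accuracy and Stability of Numerical Algorithms*, §5.1), with names of my own choosing:

```python
import sys

def horner_with_error_bound(coeffs, x):
    """Evaluate p(x) by Horner's rule and return (value, running error bound).

    coeffs holds c_0, ..., c_n.  The bound satisfies
    |p(x) - value| <= bound, to first order in the unit roundoff.
    """
    mu = sys.float_info.epsilon / 2  # unit roundoff mu_M for binary64
    y = coeffs[-1]
    e = abs(y) / 2.0
    for c in reversed(coeffs[:-1]):
        y = x * y + c          # Horner step
        e = abs(x) * e + abs(y)  # accumulate the error recurrence
    return y, mu * (2.0 * e - abs(y))
```

Terminating once the computed $|p(\hat{x})|$ drops below this running bound would adapt to whatever cancellation actually occurred in the evaluation, double root or not; whether an analogous running estimate is available in the general case depends, of course, on how $f$ is evaluated.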