When writing $\delta$-$\varepsilon$ proofs, it's common that the "natural" choice of $\delta$ leads to a final inequality of the form, say, $|\ldots| < \varepsilon+\varepsilon+\varepsilon$ instead of $|\ldots| < \varepsilon$. It's always possible to go back and correct the choice of $\delta$ (e.g., by running the argument with $\varepsilon/3$ in place of $\varepsilon$), but this has its disadvantages (see the MSE links below). Instead, one can appeal to a general principle, the "$K$-$\varepsilon$ principle", which Mattuck states as follows:
> Instead of doing this, let's once and for all agree that if you come out in the end with $2\varepsilon$ or $22\varepsilon$, that's just as good as coming out with $\varepsilon$. If $\varepsilon$ is an arbitrary small number, then so is $2\varepsilon$. Therefore, if you can prove something is less than $2\varepsilon$, you have shown it can be made as small as desired.
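
To see why this is legitimate, here is the principle stated a bit more formally for the case of a function limit (the formulation below is my paraphrase, not Mattuck's exact wording). Suppose $K>0$ is a fixed constant that does not depend on $\varepsilon$ or $\delta$, and suppose that for every $\varepsilon>0$ there exists $\delta>0$ such that
$$0<|x-a|<\delta \;\Longrightarrow\; |f(x)-L| < K\varepsilon.$$
Then $\lim_{x\to a} f(x)=L$: given $\varepsilon>0$, apply the hypothesis with $\varepsilon/K$ in place of $\varepsilon$ to obtain a $\delta$ for which $|f(x)-L| < K\cdot(\varepsilon/K) = \varepsilon$ whenever $0<|x-a|<\delta$. For instance, in the standard proof that $\lim_{x\to a}\bigl(f(x)+g(x)\bigr) = L+M$, the triangle inequality naturally yields $|f(x)+g(x)-(L+M)| < \varepsilon+\varepsilon = 2\varepsilon$, and the principle (with $K=2$) says the proof is already finished, with no need to rerun the argument using $\varepsilon/2$.
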
Why, then, do standard textbooks never teach this $K$-$\varepsilon$ principle, or use it in their $\delta$-$\varepsilon$ proofs?
Every analysis monograph in my library omits this $K$-$\varepsilon$ principle, including Abbott's Understanding Analysis, Apostol's Calculus, Bartle and Sherbert's Introduction to Real Analysis, Michael Spivak's Calculus, and Rudin's Principles of Mathematical Analysis. A Google search turned up merely two books that tout this $K$-$\varepsilon$ principle:

- Arthur Mattuck's Introduction to Analysis, the source of the passage quoted above.
- Frank Morgan's Real Analysis (2005), pages 17-18. But Morgan doesn't headline Mattuck's $K$-$\varepsilon$ principle, or teach it as a godsend for streamlining $\delta$-$\varepsilon$ proofs.

Many students would likely find this $K$-$\varepsilon$ principle more forthright and straightforward, because in actuality Math S.E. users kvetch that traditional $\delta$-$\varepsilon$ proofs are:
- “intimidating and confusing to students; not so much the actual idea, which is simply that you can arbitrarily constrain the output by suitably constraining the input.”
- “a magic ball”.
- “seems like magic to me”.
- “algebra magic”.
- “weird”, “circular reasoning”.
- “odd challenge-response nature of the limit definition”, “min-max game semantics of logic”.
- “stumbling blocks”.