I have already seen several related questions: 1, 2, 3, 4, 5.
The answer to 1 states:

> Regularization attempts to reduce the variance of the estimator by simplifying it, something that will increase the bias, in such a way that the expected error decreases.
How do we show, mathematically, that regularization (say $L_2$ or others) increases bias and decreases variance? None of the answers to the linked questions seem to "prove" this.
I am aware of the bias-variance decomposition: $$\underbrace{\mathbb{E}\big[(\hat{\theta}-\theta)^2\big]}_{\text{MSE}} = \underbrace{\big(\mathbb{E}[\hat{\theta}]-\theta\big)^2}_{\text{bias}^2} + \underbrace{\mathbb{E}\big[(\hat{\theta}-\mathbb{E}[\hat{\theta}])^2\big]}_{\text{variance}}$$
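For concreteness, here is a minimal Monte Carlo sketch (my own, not from any of the linked answers) of a one-dimensional ridge estimator under an assumed linear-Gaussian model with a fixed design. It only illustrates empirically the pattern I would like to see proven: as the penalty $\lambda$ grows, the squared bias increases while the variance decreases. The true coefficient, sample size, and noise level are all arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0           # true coefficient (assumed for illustration)
n, sigma = 20, 3.0    # sample size and noise level (assumed)
X = rng.normal(size=n)  # fixed design, held constant across replications

def ridge_fit(y, lam):
    # closed-form 1-D ridge estimate: (X'X + lam)^{-1} X'y
    return X @ y / (X @ X + lam)

results = {}
for lam in [0.0, 5.0, 20.0]:
    # replicate the experiment many times to estimate bias and variance
    est = np.array([ridge_fit(X * theta + rng.normal(scale=sigma, size=n), lam)
                    for _ in range(20000)])
    bias2, var = (est.mean() - theta) ** 2, est.var()
    results[lam] = (bias2, var)
    print(f"lambda={lam:5.1f}  bias^2={bias2:.4f}  var={var:.4f}  "
          f"mse={bias2 + var:.4f}")
```

Of course, a simulation for one model is not a proof; what I am after is a general mathematical argument of which this behaviour would be a special case.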