In lasso regression, many coefficients become zero, as discussed here:
$$ \min_{\beta\in\mathbb{R}^p}\frac{1}{2N}\|y-X\beta\|_2^2+\lambda\|\beta\|_1 $$
The columns of $X$ are $\{x_j\}_{j=1}^p$, corresponding to the $p$ predictor variables, and $y$ is the response variable.
At each iteration we increase $\lambda$, so some coefficients become zero. As this continues, is it possible that a coefficient that was zero at iteration $i$ becomes non-zero at iteration $i+1$? If so, why? And if not, is there a proof?
The only thing I know is that when we set $\lambda \ge \lambda_{\max} = \max_{j}\left|\frac{1}{N}\langle x_j,y\rangle\right|$, all coefficients become zero, and that the number of zero coefficients increases as $\lambda$ increases.
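To make the question concrete, here is a minimal numerical sketch using scikit-learn's `lasso_path` (the simulated data, seed, and correlation structure are arbitrary illustrative choices). It fits the lasso path on a grid of $\lambda$ values and flags any coefficient that is exactly zero at some $\lambda$ but non-zero at a larger $\lambda$:

```python
import numpy as np
from sklearn.linear_model import lasso_path

# Illustrative simulated data; correlated columns are used because
# they seem more likely to produce non-monotone coefficient paths.
rng = np.random.default_rng(0)
N, p = 100, 10
X = rng.standard_normal((N, p))
X[:, 1] = X[:, 0] + 0.3 * rng.standard_normal(N)   # correlate x_1 with x_0
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 1.5 * X[:, 2] + rng.standard_normal(N)

# scikit-learn minimizes (1/(2N))||y - X b||_2^2 + alpha * ||b||_1,
# the same objective as above, so alpha_max = max_j |<x_j, y>| / N.
alpha_max = np.max(np.abs(X.T @ y)) / N
alphas = np.geomspace(1e-3 * alpha_max, alpha_max, 200)
alphas_out, coefs, _ = lasso_path(X, y, alphas=alphas)

# Order the path by increasing alpha, then flag any coefficient that is
# exactly zero at some alpha but non-zero at the next (larger) alpha.
idx = np.argsort(alphas_out)
active = coefs[:, idx] != 0.0                       # shape (p, n_alphas)
reenters = np.where((~active[:, :-1] & active[:, 1:]).any(axis=1))[0]
print("coefficients that leave zero as lambda grows:", reenters)
```

Whether the printed set is non-empty of course depends on the particular design, so this can only produce counterexamples, not a proof; I am still looking for a theoretical answer either way.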
A bit related: "Intuition for nonmonotonicity of coefficient paths in ridge regression". – Richard Hardy Dec 19 '20 at 19:52