I've seen this post as well as this one regarding the difference between the `lars` and `glmnet` solution paths for fitting the lasso. From my understanding, `glmnet` uses coordinate descent to compute its coefficient path. If that is the case, what algorithm does the `lars` implementation use to compute its path? Is it also coordinate descent?
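As I understand it, the two algorithms solve the same convex problem and should agree on the path. I can illustrate what I mean with scikit-learn (not the R packages above, but it exposes both algorithms: `lars_path` for LARS with the lasso modification and `lasso_path` for glmnet-style coordinate descent). The toy data here is made up for illustration:

```python
import numpy as np
from sklearn.linear_model import lars_path, lasso_path

# Invented toy data: 5 predictors, only the first two truly active.
rng = np.random.default_rng(0)
n, p = 100, 5
X = rng.standard_normal((n, p))
beta = np.array([3.0, -2.0, 0.0, 0.0, 0.0])
y = X @ beta + 0.1 * rng.standard_normal(n)

# LARS with the lasso modification: computes the exact piecewise-linear
# path, adding or dropping one variable at each knot.
alphas_lars, active, coefs_lars = lars_path(X, y, method="lasso")

# Coordinate descent (glmnet-style), evaluated at the same penalty
# values (dropping alpha = 0, where the penalty vanishes).
keep = alphas_lars > 0
alphas_cd, coefs_cd, _ = lasso_path(X, y, alphas=alphas_lars[keep])

# Both solve the same convex problem, so the coefficients should agree
# up to the coordinate-descent convergence tolerance.
max_diff = np.max(np.abs(coefs_lars[:, keep] - coefs_cd))
print(max_diff)
```

On this example the two paths match to within the coordinate-descent tolerance, which is the behaviour I would have expected from `lars` and `glmnet` as well.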
While the first post provides a simple example, I also wondered how each method determines its "best" coefficients from those paths. Furthermore, it doesn't seem particularly efficient to have to feed a lambda value from the `lars` function into `glmnet`. What if you want to move on from the `lars` package and use `glmnet` alone? How can you trust that you are identifying the right coefficients with the right lambdas?
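My understanding of the usual answer is that the penalty is chosen by cross-validation over a lambda grid built from the data itself (what `cv.glmnet` does in R), so no `lars`-derived lambdas are needed. A scikit-learn sketch of that idea, again on invented toy data:

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Invented toy data: only the first two predictors matter.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.standard_normal(200)

# LassoCV builds an alpha (lambda) grid from the data, fits the
# coordinate-descent path on each training fold, and selects the alpha
# minimising the cross-validated error -- analogous to cv.glmnet.
cv_model = LassoCV(cv=5, random_state=0).fit(X, y)
print(cv_model.alpha_)  # selected penalty
print(cv_model.coef_)   # coefficients refit on all data at alpha_
```

So the "best" coefficients come from the selected penalty, not from any particular path algorithm; is that the intended workflow with `glmnet` alone?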
On more complex examples, I've found that `lars` identifies the true regression model under the lasso, whereas `glmnet` does not. In theory, `glmnet` should identify the model correctly too, so I'm trying to understand the disconnect.
Thanks.
Comments:

- Is `lars` able to find the true regression model? I haven't tested `lars` specifically, but every feature selection method I've tried results in a very low probability of finding the "right" model, even with simple toy test cases. That is especially true when there is collinearity. – Frank Harrell Feb 12 '21 at 12:16
- I'm using the `lars` path with the lasso modification. I am able to identify the true regression model (or a very close estimate) for several more complex datasets. From my understanding, I am utilising the function to perform the lasso algorithm by determining the optimal lambda value. – AW27 Feb 12 '21 at 12:33
- `lars` identifies a different coefficient path than the lasso implementation in `glmnet`, even when standardising outside of the functions. – AW27 Feb 12 '21 at 13:33