
I am trying to get glmnet and sklearn elastic net regression to agree for a specific case where I can't normalise the response variable y. I know that for ridge regression (alpha = 0) this can be achieved by simply multiplying lambda by sd(y); see e.g. "Why is glmnet ridge regression giving me a different answer than manual calculation?"

And for pure lasso (alpha = 1) it works without rescaling the lambdas. But for the mixed case, I don't know what the proper rescaling is. I'll give an example with code below.
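For what it's worth, the different behaviour of the two pure cases is consistent with the scaling properties of sklearn's documented objective: scaling y by a factor c scales the lasso solution by c only if alpha is also scaled by c, whereas the ridge solution scales by c at fixed alpha. A quick sketch checking this with sklearn directly (made-up data and parameter values, not the matrices above):

```python
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 100))
y = X @ rng.standard_normal(100) + rng.standard_normal(1000)
c, a = 3.0, 0.1  # scale factor and penalty strength (made-up values)

# Lasso: to absorb y -> c*y, alpha must be scaled by c as well
b1 = Lasso(alpha=a, fit_intercept=False).fit(X, y).coef_
b2 = Lasso(alpha=c * a, fit_intercept=False).fit(X, c * y).coef_
lasso_ok = np.allclose(b2, c * b1, atol=1e-4)

# Ridge (l1_ratio = 0): y -> c*y scales the solution by c at *fixed* alpha
r1 = ElasticNet(alpha=a, l1_ratio=0, fit_intercept=False).fit(X, y).coef_
r2 = ElasticNet(alpha=a, l1_ratio=0, fit_intercept=False).fit(X, c * y).coef_
ridge_ok = np.allclose(r2, c * r1, atol=1e-4)

print(lasso_ok, ridge_ok)
```

Because the L1 and L2 penalty terms respond differently to a change of scale in y, it is not obvious that any rescaling of lambda alone, at fixed alpha, can reconcile the two in the mixed case.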

My question is, how can I recover the sklearn solution using glmnet in the elastic net case for an unnormalised response y?
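For reference, the two objectives as documented (glmnet vignette and the sklearn ElasticNet docs) are identical in form:

$$\min_\beta \; \frac{1}{2n}\lVert y - X\beta\rVert_2^2 + \lambda\left[\frac{1-\alpha}{2}\lVert\beta\rVert_2^2 + \alpha\lVert\beta\rVert_1\right] \quad \text{(glmnet)}$$

$$\min_w \; \frac{1}{2n}\lVert y - Xw\rVert_2^2 + a\,r\,\lVert w\rVert_1 + \frac{a(1-r)}{2}\lVert w\rVert_2^2 \quad \text{(sklearn, with } a = \texttt{alpha},\ r = \texttt{l1\_ratio}\text{)}$$

so presumably the discrepancy comes from glmnet's internal standardisation of y, which is where the sd(y) factor enters in the ridge case.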

Let's generate some mock data.

```
library(glmnet)
library(reticulate)
sklearn = import("sklearn")

X = matrix(rnorm(1e5), 1000, 100)
Y = X %*% rnorm(ncol(X)) + rnorm(nrow(X))

Xp = np_array(X, dtype = "float16")
Yp = np_array(Y, dtype = "float16")

lambda = 0.1
```

Pure ridge case (alpha = 0). Here I pass lambda rescaled by sd(Y) to glmnet and get the same results as sklearn with the original lambda, as expected.

```
beta_glmnet = as.numeric(glmnet(X, Y, lambda = lambda*sd(Y), alpha = 0, standardize = F, intercept = F)$beta)
beta_sklearn = as.numeric(sklearn$linear_model$ElasticNet(alpha = lambda, l1_ratio = 0, fit_intercept = FALSE)$fit(Xp, Yp)$coef_)

mean(abs(beta_glmnet))
# 0.622938562017786
mean(abs(beta_sklearn))
# 0.622977211185278
mean(abs(beta_glmnet - beta_sklearn))
# 7.33209070182556e-05
```

Pure Lasso case (alpha = 1). Here I pass the original lambda to glmnet and get the same results as with sklearn.

```
beta_glmnet = as.numeric(glmnet(X, Y, lambda = lambda, alpha = 1, standardize = F, intercept = F)$beta)
beta_sklearn = as.numeric(sklearn$linear_model$ElasticNet(alpha = lambda, l1_ratio = 1, fit_intercept = FALSE)$fit(Xp, Yp)$coef_)

mean(abs(beta_glmnet))
# 0.590703733705835
mean(abs(beta_sklearn))
# 0.590713277953699
mean(abs(beta_glmnet - beta_sklearn))
# 7.0877870919121e-05
```

Mixed case (alpha = 0.1). Here both glmnet results, with the rescaled and with the original lambda, differ substantially from the sklearn solution.

```
beta_glmnet1 = as.numeric(glmnet(X, Y, lambda = lambda*sd(Y), alpha = 0.1, standardize = F, intercept = F)$beta)
beta_glmnet2 = as.numeric(glmnet(X, Y, lambda = lambda, alpha = 0.1, standardize = F, intercept = F)$beta)
beta_sklearn = as.numeric(sklearn$linear_model$ElasticNet(alpha = lambda, l1_ratio = 0.1, fit_intercept = FALSE)$fit(Xp, Yp)$coef_)

mean(abs(beta_glmnet1))
# 0.550951096753231
mean(abs(beta_glmnet2))
# 0.674230822535796
mean(abs(beta_sklearn))
# 0.619582124477384
mean(abs(beta_glmnet1 - beta_sklearn))
# 0.0686310277241525
mean(abs(beta_glmnet2 - beta_sklearn))
# 0.0574189752430566
```
