When using LASSO, you do not need to drop one level from your categorical variable.
Encoding a categorical variable with one level dropped is known as dummy coding (as opposed to one-hot encoding, which keeps a column for every level). Here are a couple of references stating that dummy coding is unnecessary when you are using regularization, for example LASSO:
> Because dummy coding is a more compact representation, it is preferred in statistical models that perform better when the inputs are linearly independent. Modern machine learning algorithms, though, don't require their inputs to be linearly independent and use methods such as L1 regularization to prune redundant inputs.
> Consequently, if we apply the tiniest bit of regularization (whether it's ℓ2, ℓ1, or elastic net), we can handle features that are perfectly correlated without removing any columns. Regularization also innately addresses the effects of multicollinearity. But if you are regularizing, there's no need to drop one of the one-hot encoded columns from each categorical feature: math's got your back.
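To make this concrete, here is a minimal sketch using scikit-learn (the library choice, the toy data, and the `alpha` value are my own assumptions, not something the references prescribe). It fits a LASSO model on a full one-hot encoding, with no level dropped, and the L1 penalty still produces a well-defined sparse solution despite the indicator columns being perfectly collinear with the intercept.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)

# Toy data: a single categorical feature with three levels.
X = rng.choice(["a", "b", "c"], size=(200, 1))
y = (X[:, 0] == "b").astype(float) + rng.normal(scale=0.1, size=200)

# drop=None keeps all three indicator columns, so they sum to 1 and are
# perfectly collinear with the intercept; the L1 penalty resolves this.
# (sparse_output requires scikit-learn >= 1.2; use sparse=False on older versions.)
encoder = OneHotEncoder(drop=None, sparse_output=False)
model = make_pipeline(encoder, Lasso(alpha=0.01))
model.fit(X, y)

# One finite, sparse coefficient per level, even though no level was dropped.
print(model.named_steps["lasso"].coef_)
```

Without the penalty (plain least squares on the same design matrix), the coefficients would not be uniquely determined because of the rank deficiency; it is the regularizer that makes keeping all the columns safe.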