In general, significant vs. non-significant does not translate into "definitely matters beyond doubt" vs. "definitely does not matter". Unless you are bound by some particular convention (and such conventions are usually poorly suited to most practical purposes), it is usually not a good idea to base decisions on p-values. It usually makes more sense to do something that directly targets what you want to achieve (e.g. if you want good predictive performance from a model, look at cross-validation).
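For instance, here is a minimal sketch of what "look at cross-validation" could mean in practice. The data and the particular feature subsets below are hypothetical placeholders; your actual X, y, candidate models and scoring metric would differ:

```python
# Compare candidate models by cross-validated predictive performance
# rather than by p-values (simulated placeholder data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 5))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)

full_score = cross_val_score(
    LinearRegression(), X, y, cv=10, scoring="neg_mean_squared_error"
).mean()
reduced_score = cross_val_score(
    LinearRegression(), X[:, :2], y, cv=10, scoring="neg_mean_squared_error"
).mean()

print(f"CV MSE, all 5 candidate predictors: {-full_score:.3f}")
print(f"CV MSE, first 2 predictors only   : {-reduced_score:.3f}")
```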
For regression models with potentially correlated predictors, as alluded to in the posts you are referencing, there is a risk that you do not get significance for important predictors, because the model "cannot quite decide" (not a technical term, but it gets the issue across) which of several predictors is the important one. It might be that any one of them would tell you more or less the same thing (and having at least one of them is hugely important), yet the model tells you that none of them is significant. Or you might simply not have enough data to reach significance, which is again a questionable reason for selecting a particular model.
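A small simulation can make the "cannot quite decide" situation concrete. The data below are purely hypothetical, with one predictor almost duplicating the other:

```python
# Two highly correlated predictors: each one is typically non-significant
# on its own t-test, yet jointly they carry almost all of the signal.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.02, size=n)   # x2 is nearly a copy of x1
y = x1 + rng.normal(size=n)                # y depends on the shared signal

X = sm.add_constant(np.column_stack([x1, x2]))
fit = sm.OLS(y, X).fit()

print(fit.pvalues[1:])                     # individual p-values: usually both large
R = np.array([[0, 1, 0],                   # joint test that both coefficients are zero
              [0, 0, 1]])
print(fit.f_test(R))                       # joint F-test: overwhelmingly significant
```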
As stated, what you should do also depends on what your ultimate aim is. However, selecting your final model based on whether something is significant or not tends to invalidate any standard inference (estimates, p-values, confidence intervals etc. will no longer have their usual properties), and taking the model selection into account in the final inference requires some rather cumbersome adjustments.
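To see one facet of why the selection step distorts inference, here is a hedged illustration with pure-noise predictors: if you keep whichever term happens to look significant, you will "find" something far more often than the nominal 5%, so the p-values of the surviving terms no longer mean what they usually mean:

```python
# With 10 candidate predictors that are all pure noise, the smallest
# p-value falls below 0.05 in roughly 40% of datasets, not 5%.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n, p, n_sim = 50, 10, 1000
false_positives = 0

for _ in range(n_sim):
    X = sm.add_constant(rng.normal(size=(n, p)))
    y = rng.normal(size=n)                 # y is unrelated to every predictor
    fit = sm.OLS(y, X).fit()
    if fit.pvalues[1:].min() < 0.05:       # "keep whichever term looks significant"
        false_positives += 1

print(f"At least one 'significant' noise term in {false_positives / n_sim:.0%} of datasets")
```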
If this type of inference is not your goal and you are more interested in prediction, then, as alluded to above, p-values should not be your guide at all. Additionally, the regularization you already did (it is unclear exactly what you did) presumably accounted for the full list of candidate terms in your model. If you re-estimate/re-regularize the model with some terms omitted, you will apply an inappropriately low penalty and end up with an overfit model despite the regularization.
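If it helps, here is a rough sketch (again with simulated placeholder data) of the workflow in question: the penalty is tuned on the full candidate set, and the model is then re-regularized after discarding the dropped terms. Because the pruning step already used the same data, the second fit tends to settle on a lighter penalty and less shrinkage than the first, honest one:

```python
# Tune the lasso penalty on the full candidate set, then re-regularize
# on only the terms that survived; compare the two chosen penalties.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(3)
n, p = 100, 30
X = rng.normal(size=(n, p))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)   # only 2 real signals among 30

full_fit = LassoCV(cv=10).fit(X, y)                # penalty tuned over all 30 candidates
kept = np.flatnonzero(full_fit.coef_)              # terms surviving the first fit

refit = LassoCV(cv=10).fit(X[:, kept], y)          # re-regularized on the pruned set

print(f"alpha on full candidate set     : {full_fit.alpha_:.4f}")
print(f"alpha after pruning to {kept.size} terms : {refit.alpha_:.4f}")
```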