If your model gives good predictions and "makes sense", in the sense that it produces results you could reasonably expect based on your prior knowledge, then in my opinion you are likely on the right track: the data are probably not biased, and your best prediction model can be used to predict new observations. If your data are biased, however, I don't think you can validate your predictions by running a regression model with a regularization technique, because the results will carry the same bias as the prediction model.
Bias is closely linked to the concept of causality. The question you refer to mentions collinearity. Collinearity is a statistical term for the degree of correlation among the predictors; it says nothing about the direction of their relationships. This means that if two predictors are highly correlated, that alone tells you very little about them.
If you have to choose which one to keep, I would be careful using a purely statistical (data-driven) technique such as the lasso or ridge regression to make this decision. You also need to use your knowledge of the domain (hypothesis-driven), that is, your knowledge of the research field and of the data.
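To see why regularization alone cannot make this choice for you, here is a minimal numpy-only sketch (the data, the noise scale, and the penalty value are all made up for illustration): two near-duplicate predictors are fed to a ridge regression in closed form, and the penalty simply shares the weight between them instead of identifying the one that is truly causal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Two highly correlated predictors; the outcome truly depends on x1 only.
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)   # near-duplicate of x1
y = 2.0 * x1 + rng.normal(scale=0.5, size=n)

X = np.column_stack([x1, x2])

# Ridge regression in closed form: w = (X'X + lam*I)^{-1} X'y
lam = 10.0
w = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)

# The total effect (~2) is split roughly evenly between the two columns,
# so the coefficients cannot tell you which predictor to keep.
print(w)
```

The lasso behaves no better here: it tends to keep one of the two columns essentially at random, driven by the noise rather than by the causal story. That decision has to come from domain knowledge.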
General suggestion: everything depends a lot on the kind of research question you are interested in. If you are only interested in predictions, you usually don't really need to make the model parsimonious. If you need parsimony, it probably means that you also want to understand something about the relationship between the outcome (dependent variable) and the predictors (independent variables, features, or whatever you want to call them).
Ask yourself: Do I have a reason to think that predictions obtained using these data are not generalizable to new observations? In other words, do these data carry some sort of selection bias?
Additionally, particularly because you want your model to have a few good predictors, it is also important to understand the direction of the relationships among them. Otherwise you might run into confounding.
I recommend looking into DAGs (Directed Acyclic Graphs), which are intuitive tools that can help you better understand your data and its potential biases. If you have time to invest, I recommend this book, which you can download for free from its website. It will really help with a variety of different problems in data analysis.
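As a concrete illustration of the confounding a DAG makes visible, here is a toy simulation (the variable names and effect sizes are invented for the example) of the graph Z → X, Z → Y, where X has no arrow into Y at all. A naive regression of Y on X finds a spurious effect through the open backdoor path via Z; adjusting for Z, as the DAG prescribes, removes it.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Toy DAG: Z -> X and Z -> Y, with NO arrow from X to Y.
z = rng.normal(size=n)          # confounder
x = z + rng.normal(size=n)      # exposure, caused by Z only
y = z + rng.normal(size=n)      # outcome, caused by Z only

# Naive slope of Y on X: a spurious association through the confounder.
naive = np.cov(x, y)[0, 1] / np.var(x)
print(round(naive, 2))          # close to 0.5, despite no X -> Y effect

# Adjusting for Z (regress Y on both X and Z) recovers a slope near 0.
X = np.column_stack([x, z])
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(round(w[0], 2))           # close to 0
```

Drawing the DAG first tells you *which* variables to adjust for; a purely data-driven variable selection has no way to distinguish Z from X here.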
Update: there is no unique, straightforward answer to this, because the question is about the strategy of your analyses, so people with different backgrounds working in different fields may have different views.
Some answers might provide a better explanation than others, depending on your background knowledge. Therefore, this answer might not be very useful to you, or it might be enlightening! It would be very helpful if you could give feedback on this answer (either by editing your question with updates or by commenting on my answer) so that others know what it should be complemented with, or that you are satisfied with it! There are no deadlines, but don't wait too long if you intend to do so.
It might not be easy for you to explain what is not clear, because some topics are complex and sometimes too broad to handle. I guarantee you that we all understand that!
However, research thrives thanks to our differences, so please don't hold back your doubts :).
Also, consider that an effort from your side will be rewarding both for you, in getting the best answer you need to proceed with your analysis, and for the answerer, who is here to share knowledge with everyone for free!
Bonus info: everyone is trying their best here, but some people seeing your question might be top-level experts in problems related to what you work on. They can be of immense help, but their time might be very limited, and they can be discouraged from answering if they don't have enough elements to help... You get me ;).
Try `train` with `method = 'optimism_boot'` in `trainControl`, but especially updating the metric. Above all, my work in research has taught me a natural critique of overly optimistic results, no matter how well they fit what I suspected - probably even especially in cases where they fit too well, especially with small data sets. – umrpedrod Jan 12 '23 at 22:54