Consider a sensitivity analysis: add simulated noise to your data, retrain the model on the noised data, and observe how much the fitted parameters change when the data are slightly different. Repeating this process many times lets you estimate a histogram of parameter values under your chosen noise model (e.g., adding IID standard normal variables to the data).
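A minimal sketch of this procedure, using a toy linear regression with made-up data (the true coefficients, noise scale, and repetition count below are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends linearly on two predictors (hypothetical example).
n = 200
X = rng.normal(size=(n, 2))
beta_true = np.array([2.0, -1.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

def fit_ols(X, y):
    """Least-squares fit with an intercept column."""
    Xd = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return coef

# Sensitivity analysis: perturb the response with IID standard-normal
# noise, refit, and collect the parameters across many repetitions.
n_reps = 1000
params = np.empty((n_reps, 3))
for i in range(n_reps):
    y_noised = y + rng.standard_normal(n)  # the chosen noise model
    params[i] = fit_ols(X, y_noised)

# The spread of each column of `params` estimates how sensitive each
# coefficient is to this kind of perturbation; a histogram of each
# column shows the distribution of parameter values under the noise.
print("means:", params.mean(axis=0))
print("stds: ", params.std(axis=0))
```

The same loop works for any model and any noise model; you could equally perturb the predictors instead of (or in addition to) the response.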
What if the parameters don't change much? Purely in terms of stability, this is great news.
For inference about the parameters, you'll need to look a little further. You might find that the parameters are not statistically significant even though you achieve good predictive error. What might be going on in that case?
When your predictors are highly correlated, they are multicollinear. Multicollinearity inflates the variance (and hence the standard errors) of those parameter estimates, which widens confidence intervals and raises the false negative rate of hypothesis tests about those parameters' statistical significance. See this list of potential "remedies" to multicollinearity if you are interested in inference about the parameters.
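You can quantify this with the variance inflation factor (VIF): regress each predictor on the others and compute 1 / (1 - R²). A common rule of thumb flags VIF above roughly 5-10. A sketch on made-up data (the correlation structure below is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two nearly collinear predictors plus one independent one (hypothetical data).
n = 500
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.1, size=n)  # highly correlated with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])

def vif(X, j):
    """Variance inflation factor for column j: 1 / (1 - R^2), where R^2
    comes from regressing X[:, j] on the remaining columns (with intercept)."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    Xd = np.column_stack([np.ones(len(others)), others])
    coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ coef
    r2 = 1.0 - resid.var() / y.var()
    return 1.0 / (1.0 - r2)

for j in range(X.shape[1]):
    print(f"VIF for predictor {j}: {vif(X, j):.1f}")
```

Here the first two predictors should show very large VIFs while the independent one stays near 1, matching the intuition that the standard errors of the collinear coefficients are the ones being inflated.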