This approach has issues, but it can work if you are careful.
On the one hand, this is a form of stepwise regression, which has major drawbacks. In particular, all standard downstream inference is tainted by it. If you fit a model, remove the insignificant features, and then fit another model on just the significant ones, the resulting p-values and confidence intervals lose their standard meaning, because they are computed as if the feature-selection step never happened. Even an in-sample measure of model performance like adjusted $R^2$ winds up biased high, since the model degrees of freedom do not account for the variable selection.
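To see this concretely, here is a minimal simulation sketch in Python (using numpy and statsmodels; the data and the 0.05 threshold are illustrative assumptions, not from any real analysis). The response is pure noise, yet screening and then refitting still produces p-values that look convincing:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, p = 100, 50
X = rng.normal(size=(n, p))   # candidate predictors
y = rng.normal(size=n)        # response independent of every predictor

# Step 1: fit the full model and keep the "significant" features
full = sm.OLS(y, sm.add_constant(X)).fit()
keep = [j for j in range(p) if full.pvalues[j + 1] < 0.05]

# Step 2: refit on the selected features only
refit = sm.OLS(y, sm.add_constant(X[:, keep])).fit()
print("selected:", keep)
print(refit.pvalues[1:])  # often below 0.05, though every true effect is zero
```

The refit p-values look clean only because the second model never sees the selection step that produced its features.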
For a reference, Frank Harrell discusses a number of problems with stepwise variable selection here. His discussion concerns the mathematics, not the software implementation, so it applies whether or not you use Stata.
Further, variable selection is notoriously unstable. If you do cross-validation or bootstrap your data set, you are likely to see selected features come and go.
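As a rough illustration (same hedges as above, this time with a few weak true effects baked in), rerunning the same selection rule on bootstrap resamples shows the selected set changing from resample to resample:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, p = 100, 10
X = rng.normal(size=(n, p))
beta = np.array([0.25, 0.2, 0.15, 0.1] + [0.0] * (p - 4))  # weak true effects
y = X @ beta + rng.normal(size=n)

# Apply the same selection rule to several bootstrap resamples
for b in range(5):
    idx = rng.integers(0, n, size=n)
    fit = sm.OLS(y[idx], sm.add_constant(X[idx])).fit()
    selected = [j for j in range(p) if fit.pvalues[j + 1] < 0.05]
    print(f"bootstrap {b}: selected {selected}")
```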
Finally, what do you do when you fit a model, remove the insignificant features, fit a new model on just the significant ones, and find that some of those features are now insignificant? Do you keep removing variables? And do you even trust the significance at each step, given the discussion above about p-values and confidence intervals losing their usual meaning?
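For concreteness, the iterated version of this procedure amounts to a backward-elimination loop like the sketch below (again with made-up data; the 0.05 threshold and the stopping rule are arbitrary choices, which is part of the problem):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n, p = 100, 10
X = rng.normal(size=(n, p))
y = X[:, 0] + rng.normal(size=n)   # one real effect among noise

# Backward elimination: repeatedly drop the least significant feature
# until everything that remains clears the threshold.
cols = list(range(p))
while cols:
    fit = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
    worst = int(np.argmax(fit.pvalues[1:]))  # ignore the intercept
    if fit.pvalues[1 + worst] < 0.05:
        break                                # all survivors "significant"
    cols.pop(worst)
print("surviving features:", cols)
```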
However, if you validate out of sample, rather than relying on in-sample measures like the upward-biased adjusted $R^2$, stepwise selection can be competitive with other predictive modeling strategies.
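The key is to treat the entire selection procedure as part of the model and repeat it inside every fold. A minimal sketch of that, assuming scikit-learn for the folds and reusing the backward-elimination loop from above:

```python
import numpy as np
import statsmodels.api as sm
from sklearn.model_selection import KFold

def backward_eliminate(Xtr, ytr, alpha=0.05):
    """Drop the least significant feature until all survivors clear alpha."""
    cols = list(range(Xtr.shape[1]))
    while cols:
        fit = sm.OLS(ytr, sm.add_constant(Xtr[:, cols])).fit()
        worst = int(np.argmax(fit.pvalues[1:]))
        if fit.pvalues[1 + worst] < alpha:
            break
        cols.pop(worst)
    return cols, sm.OLS(ytr, sm.add_constant(Xtr[:, cols])).fit()

rng = np.random.default_rng(3)
n, p = 200, 10
X = rng.normal(size=(n, p))
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=n)

# Selection is redone inside every training fold, so the held-out
# error reflects the whole procedure, not just the final refit.
mse = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    cols, fit = backward_eliminate(X[train], y[train])
    pred = fit.predict(sm.add_constant(X[test][:, cols]))
    mse.append(np.mean((y[test] - pred) ** 2))
print(f"5-fold out-of-sample MSE: {np.mean(mse):.3f}")
```

Because selection happens fresh on each training fold, the held-out error accounts for the instability and overfitting of the selection step itself, which is what makes the comparison with other strategies fair.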