Can a model still be "overfit" if it is hitting 99.9% on the hidden test set on Kaggle (i.e., 30,000 rows withheld on Kaggle by the instructor)?
Consider a situation with strong class imbalance, where $99.95\%$ of the observations belong to one class. In that situation, your $99.9\%$ accuracy, achieved after going through all kinds of trouble to learn and implement fancy machine learning methods, is worse than what some jerk would get by predicting the majority category every time, since that naive strategy scores $99.95\%$. In this case, your model performance turns out to be quite poor, despite what appears to be a sky-high accuracy score, so pointing out, "Look at how good my holdout performance is! No overfitting here!" does not work.
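To make this concrete, here is a minimal sketch (pure NumPy on synthetic labels; the $0.05\%$ minority rate is just the prevalence assumed above) showing that the majority-class baseline already beats a model scoring $99.9\%$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Synthetic labels: the minority class occurs with probability 0.0005,
# so roughly 99.95% of observations belong to the majority class (0).
y = (rng.random(n) < 0.0005).astype(int)

# The no-skill baseline: predict the majority class every time.
baseline_acc = (np.zeros(n, dtype=int) == y).mean()

print(f"majority-class prevalence: {(y == 0).mean():.4%}")  # ~99.95%
print(f"baseline accuracy:         {baseline_acc:.4%}")     # ~99.95%
# A model scoring 99.90% accuracy loses to this baseline.
```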
It might be that a simpler model could achieve $99.97\%$ training accuracy and $99.96\%$ holdout accuracy. If your model fits the training data even better than that yet reaches only $99.9\%$ on the holdout, it is overfit according to a pretty standard definition: in-sample performance has been improved at the expense of out-of-sample performance.
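As an illustration of that definition, here is a minimal sketch (scikit-learn on synthetic data; the dataset and the tree models are my own assumptions for demonstration, not anything from the question) in which an unconstrained tree beats a depth-limited one in-sample but loses on the holdout:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic, mildly noisy classification problem.
X, y = make_classification(n_samples=5_000, n_features=20,
                           flip_y=0.05, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

simple = DecisionTreeClassifier(max_depth=3, random_state=0)
complex_model = DecisionTreeClassifier(max_depth=None, random_state=0)

for name, model in [("simple (depth 3)", simple),
                    ("complex (unbounded)", complex_model)]:
    model.fit(X_train, y_train)
    print(f"{name}: train accuracy = {model.score(X_train, y_train):.4f}, "
          f"holdout accuracy = {model.score(X_test, y_test):.4f}")

# Typically the unbounded tree scores near 1.0 in-sample but worse on
# the holdout than the depth-limited tree: in-sample gains came at the
# expense of out-of-sample performance, i.e., overfitting.
```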
Despite the flaws of accuracy as a performance metric, I agree that $99.9\%$ on holdout data at least sounds impressive (though, depending on the prevalence, it might be poor). If you have reason to believe that $99.9\%$ accuracy really is good performance for this task, you might not care that a simpler model with a worse in-sample score could reach $99.92\%$ out-of-sample, even though that means your model is overfit. (Whether you should be interested in the accuracy score at all is a separate matter, one addressed in the link above.)