I understand the principle of nested cross-validation: it is used to estimate model performance. However, when choosing models and selecting hyperparameters we still use ordinary cross-validation (CV) with grid search (or other methods). So what is the point of nested CV? Is it only to report the model's performance? Because in the end we still need to run a CV, choose hyperparameters, and train the final model. Or is my understanding wrong?
And if I choose hyperparameters by CV, train the model, and find that it performs well, how do I know whether it is overfitting? And what can I do about it?
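To make the question concrete, here is a minimal sketch of nested CV as I understand it, using scikit-learn (the classifier, grid, and fold counts are just placeholder choices): the inner loop picks hyperparameters, and the outer loop estimates the performance of that entire tuning-plus-training procedure.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

# Toy dataset standing in for real data
X, y = make_classification(n_samples=200, random_state=0)

param_grid = {"C": [0.1, 1, 10]}
inner_cv = KFold(n_splits=3, shuffle=True, random_state=1)
outer_cv = KFold(n_splits=5, shuffle=True, random_state=2)

# Inner CV: hyperparameter selection via grid search
clf = GridSearchCV(SVC(), param_grid, cv=inner_cv)

# Outer CV: each outer fold re-runs the grid search on its training
# part, so the outer score reflects the whole selection procedure
nested_scores = cross_val_score(clf, X, y, cv=outer_cv)
print(nested_scores.mean())
```

Is the idea simply that `nested_scores` is the honest performance estimate, while a single (non-nested) CV score from `clf.best_score_` would be optimistically biased?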