Most learning curves (training and validation) that I see in the literature and in online resources show a steep decrease in loss (or increase in accuracy) over the initial epochs, followed by a flattening or plateauing of the curves. Since a model's performance is generally measured by its accuracy on the validation set, what is the best approach to take when one is not satisfied with the level of validation accuracy obtained with the current data and model?
Assume that the model is sufficiently regularized and that this level of accuracy is reached in both the overfitting and the (slightly) underfitting regimes (underfitting only to the extent that the model is still capable of learning, i.e., the training loss continues to decrease) with the same architecture. Is the best solution (as I have gathered from online sources) simply to gather more data until one achieves the desired level of generalization accuracy?
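For context, one common way to estimate whether more data would help is to plot validation accuracy against training-set size rather than against epochs: if that curve is still rising at the full dataset size, more data is likely to help; if it has already flattened, more data alone probably will not. Below is a minimal, self-contained sketch of this diagnostic using synthetic data and a plain-NumPy logistic regression (both are hypothetical stand-ins for whatever data and model are actually in use):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic binary-classification data standing in for the real dataset.
n, d = 2000, 20
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)

X_train, y_train = X[:1500], y[:1500]
X_val, y_val = X[1500:], y[1500:]

def fit_logreg(X, y, lr=0.1, epochs=200):
    """Logistic regression fit by full-batch gradient descent (minimal sketch)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))       # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)          # gradient of the log loss
    return w

def accuracy(w, X, y):
    return float((((X @ w) > 0).astype(float) == y).mean())

# Validation accuracy as a function of training-set size: the shape of this
# curve, not the loss-vs-epoch curve, indicates whether more data should help.
for m in (100, 300, 600, 1000, 1500):
    w = fit_logreg(X_train[:m], y_train[:m])
    print(m, round(accuracy(w, X_val, y_val), 3))
```

In practice one would substitute the real training pipeline for `fit_logreg` and repeat each subset size over a few random splits to smooth out noise in the estimate.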