I'm quite surprised that nobody has mentioned other criteria for comparing regression models and selecting the best one. These criteria belong to two different approaches: traditional hypothesis testing and information theory.
The unmentioned criteria within the former approach include other error measures (RMSE, MAE, MAPE, MASE, MPE), the adjusted R-squared, and the F-test statistic (also see this). In my opinion, goodness-of-fit (GoF) measures also belong to this group, such as the chi-squared GoF test statistic and the likelihood-ratio GoF test statistic.
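For illustration, here is a minimal R sketch computing several of these error measures by hand on the built-in mtcars data. The model formula is purely illustrative, and MASE is omitted since it requires a time-series scaling term:

```r
fit <- lm(mpg ~ wt + hp, data = mtcars)  # illustrative toy model only
actual <- mtcars$mpg
err <- residuals(fit)                    # actual minus fitted values

rmse <- sqrt(mean(err^2))                # root mean squared error
mae  <- mean(abs(err))                   # mean absolute error
mape <- mean(abs(err / actual)) * 100    # mean absolute percentage error
mpe  <- mean(err / actual) * 100         # mean percentage error

summary(fit)$adj.r.squared               # adjusted R-squared
summary(fit)$fstatistic                  # F statistic with its degrees of freedom
```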
As for the latter approach, the unmentioned criteria include Akaike's information criterion (AIC), Mallows' Cp statistic, and the Bayesian information criterion (BIC). [NOTE: "Unmentioned" refers to the time when I started writing this answer, prior to @EngrStudent's answer, which I saw only after posting mine.]
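For the information-theoretic criteria, base R provides AIC() and BIC() directly for lm fits. The two candidate models below are illustrative only; Mallows' Cp requires an add-on package (e.g., leaps), so it is not shown:

```r
fit1 <- lm(mpg ~ wt,      data = mtcars)  # illustrative candidate models
fit2 <- lm(mpg ~ wt + hp, data = mtcars)

AIC(fit1, fit2)  # Akaike's information criterion (lower is better)
BIC(fit1, fit2)  # Bayesian information criterion (lower is better)
```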
UPDATE:
Just wanted to add two points. First, another nice measure for regression model comparison and selection is the standard error of the regression (also referred to as sigma), which has an advantage over R-squared in that it is expressed on the original scale of the response variable. In R parlance, this criterion appears in the summary() output of an lm() fit under the name "residual standard error". Second, for completeness, I would like to mention some non-analytic (exploratory) approaches and criteria for model comparison and selection, such as diagnostic plots (Q-Q plots, residual plots, etc.), as well as domain/theory considerations and model simplicity (parsimony).
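As a concrete sketch of both points (again with a purely illustrative model), sigma can be extracted from the summary object, and calling plot() on an lm fit produces the standard diagnostic plots:

```r
fit <- lm(mpg ~ wt + hp, data = mtcars)  # illustrative model only

summary(fit)$sigma  # the "Residual standard error" reported by summary(fit),
                    # i.e., the standard error of the regression (sigma)

# The four default diagnostic plots: residuals vs fitted, normal Q-Q,
# scale-location, and residuals vs leverage
par(mfrow = c(2, 2))
plot(fit)
```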