$R^2$ does not care about your errors being normal. If you do an OLS regression with an intercept, then $R^2$ has the interpretation of being the proportion of variance explained.
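For example, here is a quick check in R (simulated data, with deliberately non-normal errors) showing that the $R^2$ reported by summary() is exactly $1 - SS_{\text{res}}/SS_{\text{tot}}$, the proportion of variance explained, whatever the error distribution looks like:

```r
set.seed(1)                       # arbitrary seed; the data are purely illustrative
x <- rnorm(25)
y <- 2 + 3 * x + (rexp(25) - 1)   # skewed, decidedly non-normal errors

fit <- lm(y ~ x)                  # OLS with an intercept
r2_reported <- summary(fit)$r.squared
r2_manual   <- 1 - sum(residuals(fit)^2) / sum((y - mean(y))^2)

c(reported = r2_reported, manual = r2_manual)   # the two values coincide
```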
The p-values do, however, care about the errors being normal. The usual p-values, such as those printed by summary() for an lm fit in R, are derived under the assumption that minimizing the squared loss is equivalent to maximum-likelihood estimation of the parameters, which holds when the error terms are normal. One nice feature of the t-tests that generate these p-values is that they are fairly robust to deviations from normality, particularly as the sample size gets large. However, with only $25$ observations, that might not be enough to appeal to this kind of asymptotic argument, particularly if the residuals depart considerably from a normal-looking shape.
There are limits to this robustness.
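If you want a sense of where those limits might bite for your situation, a quick simulation is one way to check: repeatedly generate data under the null (true slope of zero) with skewed errors and count how often the nominal 5% t-test rejects. The sketch below does that; the particular error distribution, seed, and number of replications are arbitrary choices for illustration, not anything taken from your data.

```r
set.seed(1)
n_sims <- 5000
n_obs  <- 25                       # the small sample size in question
p_vals <- replicate(n_sims, {
  x <- rnorm(n_obs)
  y <- 1 + (rexp(n_obs)^2 - 2)     # strongly skewed errors, true slope is zero
  summary(lm(y ~ x))$coefficients["x", "Pr(>|t|)"]
})
mean(p_vals < 0.05)                # compare the empirical rejection rate with 0.05
```

A rejection rate far from $0.05$ would suggest that the printed p-values cannot be taken at face value for that error shape and sample size.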