The comment you cite is imprecise. If the errors are normal and the model is correctly specified and estimated by least squares, the residuals will indeed be conditionally normally distributed (see the comment by @BigBendRegion), although they are correlated and do not all have the same variance.
Normality testing of residuals is problematic anyway: nothing in reality is exactly normal, and exact normality is not required. Particularly with large samples, normality tests will (correctly) reject normality, yet the regression results may still be fine (though they may not be). For a discussion see Is normality testing essentially useless? and Relevance of assumption of normality, ways to check, and recommendations.
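A minimal simulation sketch of this point (the t-distributed errors, sample size, and variable names are my own illustration, not from the question): with a large sample and mildly heavy-tailed errors, a normality test on the residuals rejects decisively, while the least-squares slope estimate remains close to the truth.

```python
import numpy as np
from scipy import stats

# Hypothetical setup: correctly specified linear model, but with
# t-distributed (heavy-tailed, hence non-normal) errors.
rng = np.random.default_rng(0)
n = 10_000
x = rng.uniform(0.0, 1.0, n)
true_intercept, true_slope = 1.0, 2.0
errors = rng.standard_t(df=5, size=n)  # symmetric, but not normal
y = true_intercept + true_slope * x + errors

# Ordinary least squares fit and its residuals
slope, intercept = np.polyfit(x, y, deg=1)
residuals = y - (slope * x + intercept)

# D'Agostino-Pearson normality test on the residuals
stat, pvalue = stats.normaltest(residuals)

print(f"normality test p-value: {pvalue:.2e}")  # very small: normality rejected
print(f"slope estimate: {slope:.3f}")           # nevertheless close to the true 2.0
```

The test "correctly" detects the non-normality, yet the quantity we actually care about (the slope) is estimated well, which is the sense in which the regression "may still be fine".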
"How can we assume anything about the errors?" Model assumptions in the first place are tools for thinking, that enable us, for example, to make quantitative statements about uncertainty. We generally need to make some model assumptions that cannot be directly verified in order to do statistical analyses. The models are idealised situations and we choose our analyses so that they are guaranteed to work well in the idealised model-world. This does not necessarily guarantee us anything for reality, however it is a starting point for investigating it. As often model assumptions are connected to expected visible patterns in the data (such as normal residuals), we can make statements from the observations to what extent certain model assumptions are compatible with the data, without being able to verify them. Note however that "model assumption X is compatible with the data" is not quite the same as "method Y based on model assumption X will work well for these data", and usually we are interested in the latter rather than the former.