Well, when you say that 'linear regression is the best fit' and 'we know it's a good fit because it minimizes the sum of squared errors', you already have all the elements you need: OLS will give you the estimates with the smallest sum of squared errors. But why did you choose to minimize that particular sum? Why not minimize the sum of absolute deviations instead of the sum of squared errors?
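To make that concrete, here is a minimal sketch (assuming numpy and scipy are available, with made-up toy data) showing that the two criteria can disagree about which line is 'best': each line is the best fit according to its own criterion.

```python
import numpy as np
from scipy.optimize import minimize

# Toy data with one outlier so the two criteria visibly disagree
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 1, size=x.size)
y[-1] += 30  # a single outlier

# OLS: the line that minimizes the sum of squared errors (closed form)
X = np.column_stack([np.ones_like(x), x])
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# LAD: the line that minimizes the sum of absolute deviations (numerical)
def sum_abs_dev(beta):
    return np.abs(y - X @ beta).sum()

beta_lad = minimize(sum_abs_dev, x0=beta_ols, method="Nelder-Mead").x

print("OLS  intercept, slope:", beta_ols)
print("LAD  intercept, slope:", beta_lad)
# Both are 'the best fit' -- but each according to a different criterion.
```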
Moreover, as @Matthew Drury also said, having the best result (according to some criterion) does not mean it is a good result. E.g. in economics one tries to maximize the profit of a firm, but the maximum (best) profit could be negative. Would you then run that firm? I wouldn't. So even if you know that this profit is the best you can get, you will still check 'how good it is' (i.e. whether it is not negative).
A similar example: if you have to realise an IT project and you have the best project manager, the best analysts, and the best programmers, but you see that these best people will need two years to realise the project while the deadline is in one year, then you have the 'best solution' and yet it is 'not good enough'. So knowing that you have the best solution (the best fit, according to some criterion) does not necessarily mean that it is a good solution; you therefore still need to check the goodness of the (best) fit.
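In regression terms, that check is exactly what goodness-of-fit measures like R² are for. A minimal sketch (again with made-up data, where the 'best' line simply doesn't explain much):

```python
import numpy as np

# Hypothetical data: y barely depends on x, so even the best line fits poorly
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 100)
y = 0.1 * x + rng.normal(0, 5, size=x.size)

# Fit the OLS 'best' line
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta

# Now check how good that best fit actually is, e.g. via R^2
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"R^2 of the best-fitting line: {r_squared:.3f}")  # low: best != good
```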