One exercise I give to students is to take a small data set in which the relationship between Y and X is statistically insignificant. Then, simply duplicate the data set so that there are twice as many observations. You will notice that the coefficient estimates and R-squared value are identical to the original, but the p-value is smaller: the standard errors shrink by roughly a factor of the square root of two even though no new information has been added. By repeating this process, you can make the p-value as small as you want.
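A minimal sketch of the exercise, using simulated data (the data set and the weak slope of 0.2 are my own illustrative assumptions; any small sample with a weak relationship works):

```python
import numpy as np
from scipy.stats import linregress

# Simulate a small sample with a weak (likely insignificant) relationship.
rng = np.random.default_rng(0)
x = rng.normal(size=10)
y = 0.2 * x + rng.normal(size=10)

orig = linregress(x, y)
dup = linregress(np.tile(x, 2), np.tile(y, 2))  # every observation duplicated

print(orig.slope, dup.slope)            # identical coefficient estimate
print(orig.rvalue**2, dup.rvalue**2)    # identical R-squared
print(orig.pvalue, dup.pvalue)          # p-value shrinks
```

Duplicating again (tiling by 4, 8, ...) drives the p-value down further while the slope and R-squared never move.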
Now, this is obviously silly statistical practice, but what specifically is violated here? The answer is the uncorrelated errors assumption: each observation's error now appears twice in the data, so the errors are perfectly correlated in pairs rather than independent. All the other assumptions (linearity, homoscedasticity, normality), if valid for the process that produced the original small sample, remain valid for the process that produced the duplicated sample.
The point is that the big bad wolf indeed has teeth: gross violations of the uncorrelated errors assumption can have enormous effects on the validity of the inferences (intervals as well as tests).