Extrapolation is in general "unreliable". (See "What is wrong with extrapolation?")
It is also commonly said that extrapolation is "less reliable" than interpolation.
But why should we generally assume that a model is "more reliable" between two known data points than to the right of the right-most data point (or to the left of the left-most one)?
From empirical examples, I can see that interpolation is indeed often "more reliable" than extrapolation. But is there a more formal, theoretical justification for why this assertion is, in general, true?
Or is it just a purely empirical observation that interpolation tends to be "better"?
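For concreteness, here is a minimal simulation sketch of the kind of empirical comparison I have in mind (the target function, sample size, noise level, and evaluation points are arbitrary choices for illustration): a cubic polynomial is fit to noisy samples of sin(x) observed on [0, 6], and the prediction error at a point inside the sampled range is compared with the error at a point beyond it.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rep = 2000
interp_err, extrap_err = [], []

for _ in range(n_rep):
    # Noisy samples of sin(x), observed only on [0, 6]
    x = np.sort(rng.uniform(0, 6, size=20))
    y = np.sin(x) + rng.normal(scale=0.3, size=x.size)
    coefs = np.polyfit(x, y, deg=3)  # ordinary least-squares cubic fit
    # Interpolation: predict at x = 3.0, inside the observed range
    interp_err.append(np.polyval(coefs, 3.0) - np.sin(3.0))
    # Extrapolation: predict at x = 7.5, beyond the observed range
    extrap_err.append(np.polyval(coefs, 7.5) - np.sin(7.5))

print("RMSE at x = 3.0 (interpolation):", np.sqrt(np.mean(np.square(interp_err))))
print("RMSE at x = 7.5 (extrapolation):", np.sqrt(np.mean(np.square(extrap_err))))
```

Across repetitions the extrapolation RMSE comes out far larger than the interpolation RMSE, which is exactly the empirical pattern I am describing, but the simulation itself does not explain why this should hold in general.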
That extrapolation is "less reliable" than interpolation is not true in all contexts. It may be true when more than one known point is needed to infer the unknown point. In interpolation, you estimate the unknown value t2 from the values t1 and t3, both known and both adjacent to t2. Now consider the comparable extrapolation: t1 is known, t3 is not known; extrapolating t2 from t1 can be seen as interpolating onto t2 from t1 and the (unknown) t3. Of course it is less "reliable". – ttnphns Jun 30 '16 at 07:52
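A quick numerical check of the comment's argument, a sketch under an assumed model (t1, t2, t3 are consecutive values of a random walk with unit-variance Gaussian increments): interpolating t2 as the average of its two known neighbours is compared with carrying the single known point t1 forward.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
steps = rng.normal(size=(n, 2))   # increments t2 - t1 and t3 - t2

t1 = np.zeros(n)
t2 = t1 + steps[:, 0]
t3 = t2 + steps[:, 1]

interp = (t1 + t3) / 2            # uses both adjacent known points
extrap = t1                       # uses only the single known point

print("interpolation error variance:", np.var(interp - t2))  # about 0.5
print("extrapolation error variance:", np.var(extrap - t2))  # about 1.0
```

Under this assumed model the interpolator has half the error variance of the extrapolator, matching the comment's intuition that having known points on both sides of the target helps.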