I have a set of data parameterized by a single variable which is nearly perfectly linear, and I am trying to quantitatively determine with what confidence we can say a theoretical quadratic term is zero. So far I have done this simply by fitting the data twice, once with a purely linear fit, and once also including a quadratic term.
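For concreteness, the two-fit procedure I describe can be sketched as follows. The data here are hypothetical (generated from a purely linear model), since I cannot share the real set; `np.polyfit` with `cov=True` gives the 1-sigma parameter errors from the diagonal of the covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical nearly-linear data standing in for the real set.
x = np.linspace(0.0, 10.0, 50)
y = 0.5 + 2.0 * x + rng.normal(scale=0.1, size=x.size)

# Purely linear fit: y = c1*x + c0
lin_coef, lin_cov = np.polyfit(x, y, deg=1, cov=True)
lin_err = np.sqrt(np.diag(lin_cov))  # 1-sigma errors on [c1, c0]

# Fit including a quadratic term: y = c2*x**2 + c1*x + c0
quad_coef, quad_cov = np.polyfit(x, y, deg=2, cov=True)
quad_err = np.sqrt(np.diag(quad_cov))  # 1-sigma errors on [c2, c1, c0]
```

The off-diagonal entries of `quad_cov` show the covariance between the quadratic and linear coefficients, which is what shifts the linear term between the two fits.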
In the quadratic fit I find the quadratic term is consistent with 0, but including it also changes the fitted linear term, I think because of a covariance between the two. I am trying to determine whether there is a quantitative way to say something to the effect of: "with XX% confidence, there is no quadratic term, so fit only up to first order". I have looked into tests such as Neyman–Pearson, but I am not sure whether these are fruitful avenues.
For reference, an example of the data is shown below. In the legend I show the fit parameters; the term in parentheses is the 1-sigma error on that fit parameter.
Edit: I want to clarify the problem I am ultimately trying to solve. Although the quadratic term is consistent with zero (as evidenced by its standard error), including or excluding it changes the fitted linear and constant terms. I am trying to say, quantitatively, with what confidence we can ignore the quadratic term and fit only up to linear order.
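To make the kind of quantitative statement I am after concrete: one candidate I have been considering is a nested-model comparison, e.g. an F-test on the residual sums of squares of the two fits. This sketch uses hypothetical linear data (the real data is not shown); I am asking whether this, or something like it, is the right avenue.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical nearly-linear data standing in for the real set.
x = np.linspace(0.0, 10.0, 50)
y = 0.5 + 2.0 * x + rng.normal(scale=0.1, size=x.size)

def rss(coef):
    """Residual sum of squares of a polynomial fit."""
    return np.sum((y - np.polyval(coef, x)) ** 2)

rss_lin = rss(np.polyfit(x, y, deg=1))   # 2 parameters
rss_quad = rss(np.polyfit(x, y, deg=2))  # 3 parameters

n = x.size
# F-statistic for adding one parameter (the quadratic term);
# the quadratic model has n - 3 residual degrees of freedom.
F = (rss_lin - rss_quad) / (rss_quad / (n - 3))
p_value = stats.f.sf(F, 1, n - 3)
# A large p-value means no evidence that the quadratic term is needed.
```

My understanding is that a large p-value here would justify dropping the quadratic term, but I am unsure how (or whether) this translates into a "with XX% confidence" statement of the kind I wrote above.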
