I suppose you use an ordinary least squares (OLS) model to find the average effect of some $x$ on some outcome $y$? In order to see whether some additional variable (or transformation) like $x^2$ benefits the model, you could a) look at the p-value of the variable(s) in question and b) inspect AIC, BIC (and possibly adjusted $R^2$) to see whether the additional (quadratic) term improves model fit. Note that a non-significant p-value on the quadratic term alone does not imply that the variable is not useful!
If you find no evidence based on descriptive statistics, p-values, AIC, or BIC that including $x^2$ is beneficial, you have good reason to claim that this effect is negligible in your case (and that excluding the term does not cause underspecification).
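As a minimal sketch of the comparison described above (the data here are simulated with a hypothetical quadratic DGP; in practice you would use statsmodels or similar, which reports p-values, AIC, and BIC directly):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(-2, 2, n)
# hypothetical DGP with a genuine quadratic component (coefficients are illustrative)
y = 1.0 + 2.0 * x + 0.5 * x**2 + rng.normal(0, 1, n)

def ols_aic_bic(X, y):
    """Fit OLS via least squares; Gaussian AIC/BIC up to an additive constant."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = X.shape
    rss = resid @ resid
    aic = n * np.log(rss / n) + 2 * k
    bic = n * np.log(rss / n) + np.log(n) * k
    return beta, aic, bic

X_lin = np.column_stack([np.ones(n), x])          # intercept + x
X_quad = np.column_stack([np.ones(n), x, x**2])   # intercept + x + x^2

_, aic_lin, bic_lin = ols_aic_bic(X_lin, y)
_, aic_quad, bic_quad = ols_aic_bic(X_quad, y)

# lower AIC/BIC for the quadratic model suggests the x^2 term improves fit
print(aic_quad < aic_lin, bic_quad < bic_lin)
```

If instead the DGP were purely linear, both criteria would typically (though not always) favour the model without $x^2$, since the information criteria penalize the extra parameter.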
However, since quadratic terms are often only a crude approximation of non-linear effects, you might inspect the data generating process a little further (in a multivariate setting). You could use generalised additive models (GAMs) to test for non-linear effects without making assumptions about the parameterization. See ISL, Chapter 7.
Find a minimal example with simulated data here. In the example (see figure below), the GAM fit (black, dashed line) approximates the non-linear function (red line) quite well, whereas a linear fit (blue line) or a quadratic parameterization (not shown) would more or less fail to fit the data.
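A GAM proper (e.g. mgcv in R or pygam in Python) estimates penalized smooths; as a rough stand-in, the following sketch fits an unpenalized regression spline with numpy to a hypothetical non-linear DGP (a sine curve) and compares it against a straight-line fit:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
x = np.sort(rng.uniform(0, 1, n))
f = np.sin(2 * np.pi * x)               # hypothetical non-linear truth
y = f + rng.normal(0, 0.3, n)

def spline_basis(x, knots):
    """Truncated-power cubic spline basis: 1, x, x^2, x^3, (x - k)_+^3."""
    cols = [np.ones_like(x), x, x**2, x**3]
    cols += [np.clip(x - k, 0, None) ** 3 for k in knots]
    return np.column_stack(cols)

# knots at interior quantiles of x (an arbitrary but common choice)
knots = np.quantile(x, [0.2, 0.4, 0.6, 0.8])
X = spline_basis(x, knots)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fit = X @ beta

# straight-line benchmark, as in the blue line of the figure
X_lin = np.column_stack([np.ones(n), x])
b_lin, *_ = np.linalg.lstsq(X_lin, y, rcond=None)

rss_spline = np.sum((y - fit) ** 2)
rss_line = np.sum((y - X_lin @ b_lin) ** 2)
print(rss_spline < rss_line)  # the spline tracks the sine curve; the line cannot
```

The spline basis nests the linear model, so the comparison of raw residual sums of squares only illustrates the flexibility; for honest model comparison you would again use AIC/BIC, cross-validation, or the approximate significance tests a GAM package reports.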
GAMs (based on local regression or splines) are not as easily interpretable as OLS coefficients would be (see also EdM's comment below). However, you can inspect your DGP in detail using a GAM and look for a good approximation of the data (or possibly find good reasons to include, or not include, a quadratic term in your OLS model).
