You need to be careful about how you phrase this question, and in general it would be helpful to see the output of the tables you are describing.
If I understand you correctly, you are saying that you fit models of the form
Model A: $\quad y = b_0 + b_1 x + e$
Model B: $\quad y = b_0 + b_1 x + b_2 x^2 + e$
where $y$ is the dependent (response) variable, $x$ is the independent (explanatory) variable, $e$ is the error term (the residual variation unexplained by the model), and the $b_i$ ($i \in \{0,1,2\}$) are coefficients.
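For concreteness, here is a minimal sketch of fitting both models in Python with statsmodels; the variable names and the simulated data are illustrative assumptions, not taken from your question:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical data with genuine curvature, purely for illustration.
x = np.linspace(0, 10, 100)
y = 1.0 + 0.5 * x + 0.3 * x**2 + rng.normal(scale=2.0, size=x.size)

# Model A: y = b0 + b1*x + e
X_a = sm.add_constant(np.column_stack([x]))
fit_a = sm.OLS(y, X_a).fit()

# Model B: y = b0 + b1*x + b2*x^2 + e
X_b = sm.add_constant(np.column_stack([x, x**2]))
fit_b = sm.OLS(y, X_b).fit()

print(fit_a.summary())  # coefficient table for Model A
print(fit_b.summary())  # coefficient table for Model B
```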
Both A and B are linear models because they are linear in the coefficients: writing $x_1 = x$ and $x_2 = x^2$, Model B is just an ordinary multiple regression $y = b_0 + b_1 x_1 + b_2 x_2 + e$. From what you say, under Model A the $b_1$ term is significant and positive, while under Model B the $b_2$ term is significant but $b_1$ is not.
To preserve marginality, the $b_1$ term must be retained in Model B when $b_2$ is significant, even if $b_1$ itself reports as non-significant; informally, you can't have $x^2$ unless you already have $x$.

The significance of $b_2$ is telling you that including the $x^2$ term provides a better fit to the data than the model without it. This indicates that curvature is a feature of the relationship between $x$ and $y$, and in that sense the relationship is non-linear. But the fitted model is still a linear regression model, albeit one which accommodates a degree of curvature via a polynomial term.
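If you want to confirm formally that the $x^2$ term improves the fit, one option is a partial F-test comparing the two nested models. A sketch, continuing from the fits above (an illustration, not a prescription):

```python
# Partial F-test: does Model B (with the x^2 term) fit significantly
# better than the nested Model A (without it)?
# Reuses fit_a and fit_b from the previous snippet.
f_stat, p_value, df_diff = fit_b.compare_f_test(fit_a)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}, df difference = {df_diff}")
```

A small p-value here supports retaining the quadratic term, consistent with the curvature you are seeing.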