Estimating and Controlling Nonlinear Trends
I realize you have already accepted an answer here, but I wanted to address what I perceive as the unanswered parts of your question. First, regarding this part:
Is there a widely-used statistical method that establishes the relationship between two variables when that relationship is not linear - and which controls for a third variable?
Before getting to the nonlinear part, start with the simpler case. The most obvious way to model a linear relationship between two predictors, $x_1$ and $x_2$, and a dependent variable $y$ is what you already said: a multiple linear regression. It models the conditional mean of $y$ while estimating each predictor's effect with the other partialled out, as in the following equation:
$$
y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \epsilon
$$
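In R, this is just an ordinary lm() fit with both predictors on the right-hand side. A minimal sketch on toy data (the data frame, names, and coefficients here are my own, not the simulation used later in this answer):
#### Toy Linear Example (my own illustration, not the simulation below) ####
set.seed(1)
toy <- data.frame(x1 = runif(100), x2 = runif(100))
toy$y <- 1 + 2 * toy$x1 - 3 * toy$x2 + rnorm(100)

# each coefficient estimates the effect of its predictor with the other held fixed
fit_lm <- lm(y ~ x1 + x2, data = toy)
summary(fit_lm)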
This is what lets us "control" for a variable: because the model is additive, we can hold one predictor fixed (or multiply its estimated coefficient by zero) and let the rest of the equation predict the outcome. However, per the nonlinear part of your question:
Is there a widely-used statistical method that establishes the relationship between two variables when that relationship is not linear - and which controls for a third variable?
The most obvious way of modeling a nonlinear multiple regression would be some kind of polynomial or spline-based method for approximating the relationship between each predictor and the response. As an example, we could just fit the terms $\beta_1 x_1$ and $\beta_2 x_1^2$ as a quadratic trend in our regression. Alternatively, we can construct a piecewise function $f(x)$ for a given predictor as a combination of basis functions $b_j$ (for instance, linear and polynomial pieces joined together):
$$
f(x) = \sum\limits_{j=1}^{k}b_j(x)\beta_j
$$
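To make the basis idea concrete, here is a small sketch using the base splines package (the data, basis dimension, and names are mine, purely for illustration):
#### Basis Expansion Sketch (my own illustration) ####
library(splines)

set.seed(2)
x0 <- runif(200)
y0 <- sin(2 * pi * x0) + rnorm(200, sd = 0.2)

B <- bs(x0, df = 8)        # the columns of B are the basis functions b_j(x)
fit_basis <- lm(y0 ~ B)    # the fitted coefficients are the beta_j in f(x) = sum_j b_j(x) beta_j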
This gives a much more flexible approximation for $x_1$ and $x_2$ than a single polynomial would, because we can now "mix and match" basis functions until we get a smooth function that fits the data well. Nor is this limited to the simple bivariate case; it extends to a multiple regression context, where the piecewise functions enter the regression as:
$$
y = f_1(x_1) + f_2(x_2) + \epsilon
$$
Here we achieve the same thing as the linear case, only now we have nonlinear functions of each predictor rather than linear ones (though the model is still technically "linear in the parameters"). This is the basis of a generalized additive model (GAM), which models nonlinear relationships directly while retaining the familiar structure of multiple regression. Of course, as alluded to earlier, one could instead use a multiple regression with polynomial terms, such as the model below with both linear and quadratic terms for each predictor:
$$
y = \beta_0 + \beta_1 x_1 + \beta_2 x_1^2 + \beta_3 x_2 + \beta_4 x_2^2 + \epsilon
$$
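Fitting this in R is still just lm(); a quick sketch reusing the toy data frame from the linear example above, with I() adding the squared terms:
# quadratic multiple regression: nonlinear in x1 and x2, but still linear in the parameters
fit_quad <- lm(y ~ x1 + I(x1^2) + x2 + I(x2^2), data = toy)
summary(fit_quad)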
However, simple polynomials often interpolate poorly as the degree increases or when data are sparse at the ends of the distribution, and even unpenalized splines can overfit the noise if one is not careful with the degrees of freedom they employ. The magic of GAMs is that they penalize the wiggliness of the fit to circumvent this: the objective adds to the residual sum of squares a squared second-difference penalty on the fitted function, evaluated at ordered points $x^*_j$ and weighted by a smoothing parameter $\lambda$:
$$
\Vert{y-X\beta}\Vert^2 + \lambda \sum_{j=2}^{k-1}\{ f(x^*_{j-1})-2f(x^*_j) + f(x^*_{j+1}) \}^2
$$
Thus, in some respects, we get a "safer" nonlinear regression than something like an unpenalized polynomial regression.
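To see what that penalty is actually measuring, here is a small base-R sketch of the second-difference term (my own illustration; mgcv builds the equivalent penalty from the spline basis internally):
# squared second-difference roughness penalty for fitted values at ordered points x*
roughness <- function(f_vals, lambda = 1) {
  d2 <- diff(f_vals, differences = 2)   # f(x*_{j-1}) - 2 f(x*_j) + f(x*_{j+1})
  lambda * sum(d2^2)                    # the term added to the residual sum of squares
}

x_star <- seq(0, 1, length.out = 50)
roughness(sin(20 * x_star))   # wiggly function: large penalty
roughness(2 * x_star + 1)     # straight line: penalty of zero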
Simulated Example
To demonstrate, we can simulate some data in which $y$ depends on the predictors in a nonlinear way and then fit a GAM with the mgcv package in R.
#### Simulate and Plot Data ####
set.seed(123)
x1 <- runif(100, 0, 1)                            # first predictor
x2 <- runif(100, 0, 1)                            # second predictor, independent of x1
y  <- log(x1) + log10(x2) + rnorm(100, sd = .01)  # log-shaped effects plus small noise
df <- data.frame(x1, x2, y)
pairs(df)                                         # scatterplot matrix of all three variables
The plotted data show that $x_1$ and $x_2$ each have a different log-shaped relationship with $y$ and are uncorrelated with each other:

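As a quick numeric check of that claim (my addition, not part of the original output), the sample correlation between the predictors should be close to zero since they were generated independently:
cor(df$x1, df$x2)   # small by construction: the two predictors are independent draws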
Then we can simply fit the model:
#### Fit GAM ####
library(mgcv)
fit <- gam(
  y ~ s(x1) + s(x2),  # a penalized smooth for each predictor
  data = df,
  method = "REML"     # REML estimation of the smoothing parameters
)
#### Summarize and Plot ####
summary(fit)
plot(fit, pages = 1)
Summarizing the model, we get the effective degrees of freedom (edf) for each smooth, which index how curvilinear each estimated function is, along with the adjusted $R^2$ and other important metrics:
Family: gaussian 
Link function: identity 

Formula:
y ~ s(x1) + s(x2)

Parametric coefficients:
             Estimate Std. Error t value Pr(>|t|)    
(Intercept) -1.46620    0.01149  -127.6   <2e-16 ***

Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Approximate significance of smooth terms:
        edf Ref.df     F p-value    
s(x1) 8.272  8.850 607.4  <2e-16 ***
s(x2) 8.375  8.885 222.6  <2e-16 ***

Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

R-sq.(adj) =  0.988   Deviance explained = 99%
-REML = -34.978  Scale est. = 0.013201  n = 100
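Optionally (an extra step beyond the output above), mgcv also provides gam.check() for convergence, residual, and basis-dimension diagnostics:
#### Optional Diagnostics ####
gam.check(fit)   # residual plots plus a check that the basis dimension k is large enough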
We can also simply plot the estimated model and see if it matches our data:

And it appears to do just that. One can include different smoothing penalties, splines, knots, and any number of other adjustments to the regression to make the fit "better." Thus GAMs are a flexible framework that can be applied to your problem.
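As a final sanity check that goes beyond the plots above (the prediction grid and overlay here are my own addition), we can predict from the fitted GAM along $x_1$ with $x_2$ held at its mean and compare against the known generating function:
#### Compare Fitted GAM to the True Curve (my own addition) ####
grid <- data.frame(x1 = seq(0.01, 1, length.out = 200),
                   x2 = mean(df$x2))
pred <- predict(fit, newdata = grid)

plot(grid$x1, pred, type = "l", xlab = "x1", ylab = "predicted y")
lines(grid$x1, log(grid$x1) + log10(grid$x2), lty = 2)   # the true mean function at this x2
Because the simulated noise is tiny (sd = 0.01), the two curves should sit almost exactly on top of each other.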