Summary (tl;dr)
This answer assumes that measurement errors for the same measurement using methods A and B are independent. It also assumes that both measurement methods are unbiased, but the results extend to the case where only one of them is unbiased, and then hold for the mean squared error (MSE).
- if we knew the variances, it would be easy to find the optimal weights as a function of the true value and the ratio of variances for methods A and B.
- we have neither the true value nor the ratio of variances, but estimates or guesses for both. Therefore, weights will in reality deviate from the optimal values.
- using wrong weights may yield worse estimates than using the arithmetic mean, but being "slightly off" still improves the precision of estimates. Therefore, using weights that come from some estimation should be OK.
- the time series nature of the data can be exploited to estimate the variances and to calibrate the weight function (the sigmoid in the OP).
Optimal choice of weights when "everything is known"
The generic question:
"how should I weight the outcomes of measurement A and B on the same quantity with independent measurement errors?"
has a well known solution:
- if we can assume that both measurement methods are unbiased, then the optimal weight is given by inverse variance weighting.
- this result carries over to the case where measurement A, say, is unbiased and B is biased. In that case, replace "variance" by "mean squared error" (this can be shown by calculations analogous to those for the unbiased case).
This requires, however, that the variances of the two methods are known. Let $\sigma^2_A(x)$ and $\sigma^2_B (x)$ be the variances of measurement $A$ and $B$ when $x$ is the true value, and denote the ratio of variances by $\rho(x)=\frac{\sigma^2_A(x)}{\sigma^2_B(x)}.$
Then optimal weights $w_A(x)$ and $w_B(x)$ are
$$
\begin{aligned}
w_A(x) &= \frac{\sigma^2_B(x)}{\sigma^2_A(x)+ \sigma^2_B(x)} = \frac{1}{1+\rho(x)},
\\ \quad w_B(x) &= \frac{\sigma^2_A(x)}{\sigma^2_A(x)+ \sigma^2_B(x)}=\frac{\rho(x)}{1+\rho(x)}.
\end{aligned}
$$
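For concreteness, here is a minimal sketch of these weights in code (the function name and the numbers are mine, purely for illustration):

```python
def optimal_weights(var_a, var_b):
    """Inverse-variance weights for two unbiased, independent measurements."""
    rho = var_a / var_b              # rho = sigma_A^2 / sigma_B^2
    w_a = 1.0 / (1.0 + rho)          # = sigma_B^2 / (sigma_A^2 + sigma_B^2)
    w_b = rho / (1.0 + rho)          # = sigma_A^2 / (sigma_A^2 + sigma_B^2)
    return w_a, w_b

# illustration: A is four times as variable as B -> A gets weight 0.2, B gets 0.8
print(optimal_weights(4.0, 1.0))
```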
How much do we gain using weighted averages?
The variance of a weighted average of $A$ and $B$, using given weights
$w_A =w$ and $w_B = 1- w$, where $w\in[0,1]$, is
$$
\begin{aligned}
\sigma^2_{\text{weight}}(w) &:= \mathrm{Var} ( w A + (1- w) B) \\&= w^2 \sigma_A ^2 + (1- w)^2 \sigma_B^2 \\& = (w^2\rho +(1-w)^2)\sigma^2_B.
\end{aligned}
$$
For the variance $\sigma^2_{\text{avg}}$ of the arithmetic mean of $A$ and $B$, where $w=1/2$, we get
$$
\sigma^2_{\text{avg}} := \sigma^2_{\text{weight}}(1/2)=\frac{\rho+1}{4}\sigma^2_B,
$$
and using the optimal weights $w^*_A$ and $w^*_B$ gives the variance
$$
\sigma^2_{\text{opt}} := \sigma^2_{\text{weight}}(w^*_A)=\frac{\rho}{1+\rho}\sigma^2_B,
$$
so the relative efficiency of using optimal weights would be
$$
\text{eff.}(w^*,0.5):=\frac{\sigma^2_{\text{weight}}(1/2)}{\sigma^2_{\text{weight}}(w^*_A)}=\frac{\sigma^2_\text{avg}}{\sigma^2_\text{opt}} = \frac{(1+\rho)^2}{4\rho}.
$$

For moderate values of $\rho = \sigma^2_A/\sigma^2_B$, the relative efficiency is still quite close to 1. For example, with $\rho = 4$, we get $\text{eff.}(w^*, 0.5) = 25/16 \approx 1.56$; the same value results for $\rho = 1/4$.
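A quick numerical check of these formulas (illustrative code, not part of the derivation):

```python
def var_weighted(w, rho):
    """Variance of w*A + (1-w)*B in units of sigma_B^2, with rho = sigma_A^2 / sigma_B^2."""
    return w**2 * rho + (1 - w)**2

def efficiency(rho):
    """Relative efficiency of the optimal weight versus the arithmetic mean."""
    w_opt = 1.0 / (1.0 + rho)                    # optimal weight for A
    return var_weighted(0.5, rho) / var_weighted(w_opt, rho)

for rho in (1, 1/4, 4, 5, 10):
    print(rho, efficiency(rho))                  # 1.0, 1.5625, 1.5625, 1.8, 3.025
```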
Using suboptimal weights
Since we do not know the true variances, we have to resort to estimated optimal weights, or to predefined functions that may be fitted from data, as suggested in the original post. This bears the risk that the weighted average has lower efficiency than the arithmetic average, in particular when both measurement methods A and B have the same variance (so that $\rho=1$).
However, as soon as one of the methods has a higher variance than the other, there is still a gain in efficiency even when using a weight that deviates from the optimal one. The figure below shows the relative efficiency compared to the arithmetic average as a function of the percentage deviation from the optimal weight, for ratios $\rho = 1, 2, 5$ and $10$.
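A figure of that kind can be reproduced with a sketch like the following (assuming numpy and matplotlib; clipping the perturbed weight to $[0,1]$ is my own choice):

```python
import numpy as np
import matplotlib.pyplot as plt

def var_weighted(w, rho):
    return w**2 * rho + (1 - w)**2               # in units of sigma_B^2

dev = np.linspace(-100, 100, 201)                # percent deviation from the optimal weight
for rho in (1, 2, 5, 10):
    w_opt = 1.0 / (1.0 + rho)
    w = np.clip(w_opt * (1 + dev / 100), 0, 1)   # perturbed weight, kept inside [0, 1]
    eff = var_weighted(0.5, rho) / var_weighted(w, rho)
    plt.plot(dev, eff, label=f"rho = {rho}")

plt.axhline(1.0, ls="--", color="grey")          # reference line: plain arithmetic mean
plt.xlabel("deviation from optimal weight (%)")
plt.ylabel("efficiency relative to arithmetic mean")
plt.legend()
plt.show()
```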
Example
We can see that the method appears very forgiving already for $\rho = 5$ (which corresponds to $\rho = 1/5$ if the roles of A and B are exchanged). In that case, the weight for the better method, B, would be $5/6$, and the weight for the less accurate method, A, would be $1/6$. The relative efficiency with the optimal weight is $\text{eff.}(w^*,0.5) = 1.8$. If we use only method B, i.e., weight $w=0$ for A, the efficiency drops to $\sigma^2_{\text{avg}}/\sigma^2_{\text{weight}}(0) = 1.5$.

Examples, and the relation to the sigmoid function approach
When we have a model for the variances as a function of the true value $x$, it is easy to derive the weights: calculate $\rho(x)$ and plug it into the expressions for the weights $w_A(x)$ and $w_B(x)$.
In the present case, we know that $\sigma_A^2(x)$ is small for small $|x|$ and large for large $|x|$, and $\sigma_B^2(x)$ is small for large $|x|$ and large for small $|x|$. The following figure illustrates the optimal weight function as a result of variance functions in two examples.
The left part shows $\sigma^2_A$ (red), and $\sigma^2_B$ (blue). The right part shows $w_A^*$ (black) and the corresponding sigmoid function $f$ (green), where
$$
f(x) = \frac{1}{2}\begin{cases}
1 - w(x),&x>0\\
w(x),&x<0\end{cases} .
$$
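To make the recipe concrete: with any modelled variance functions (the two below are invented for illustration, not the ones in the figure), the optimal weight function follows in a few lines:

```python
import numpy as np

x = np.linspace(-10, 10, 401)

# invented variance models: A is precise near zero, B is precise for large |x|
var_a = 0.5 + 0.2 * x**2
var_b = 5.0 / (1.0 + 0.1 * x**2)

rho = var_a / var_b                  # rho(x) = sigma_A^2(x) / sigma_B^2(x)
w_a = 1.0 / (1.0 + rho)              # optimal weight for A at true value x
w_b = 1.0 - w_a
```

The corresponding sigmoid $f$ then follows from $w_A$ via the piecewise formula above.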

The shape of the optimal sigmoid function depends strongly on the underlying variance functions, in a complicated way. I would therefore recommend modelling the variance functions rather than the corresponding sigmoid.
May we calculate weights from the data?
In practice, the true $x$ is unknown, and the weights are calculated from the actual measurements $x_A$ and $x_B$. So the original suggestion boils down to calculating the weighted mean
$$
\bar x_w = \frac{w_A(x_A)x_A + w_B(x_B)x_B}{w_A(x_A)+w_B(x_B)}.
$$
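In code, this plug-in estimator is simply (assuming weight functions `w_a_fun` and `w_b_fun` as derived above):

```python
def weighted_estimate(x_a, x_b, w_a_fun, w_b_fun):
    """Plug-in weighted mean: each weight is evaluated at its own observed value."""
    wa, wb = w_a_fun(x_a), w_b_fun(x_b)
    return (wa * x_a + wb * x_b) / (wa + wb)
```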
Using the data instead of the true value to find the weights may, strictly speaking, introduce a bias, since the weights are now random and depend on the data, and $\bar x_w$ is nonlinear in $A$ and $B$.
We can show (by a symmetry argument) that $\bar x_w$ remains unbiased when
- the measurement errors have a symmetric distribution,
- measurements are unbiased, and
- the weight functions are symmetric around zero.
When these conditions are clearly not met, you can investigate whether this bias is important, and whether it outweighs the improved precision, by a simulation study that mimics the original data; a sketch follows below.
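Such a simulation could look like this (the variance functions here are invented stand-ins for the real, estimated ones):

```python
import numpy as np

rng = np.random.default_rng(1)

# invented variance models standing in for the real (estimated) ones
def sd_a(x): return np.sqrt(0.5 + 0.2 * x**2)          # A: precise near zero
def sd_b(x): return np.sqrt(5.0 / (1.0 + 0.1 * x**2))  # B: precise for large |x|

def w_a(x):                                            # weight for A, evaluated at the observed value
    rho = sd_a(x)**2 / sd_b(x)**2
    return 1.0 / (1.0 + rho)

x_true, n = 3.0, 100_000                               # repeat for several true values
xa = x_true + rng.normal(0.0, sd_a(x_true), n)         # simulated measurements by A
xb = x_true + rng.normal(0.0, sd_b(x_true), n)         # ... and by B
wa, wb = w_a(xa), 1.0 - w_a(xb)                        # plug-in weights from the data
est = (wa * xa + wb * xb) / (wa + wb)

print("bias of weighted estimate:", est.mean() - x_true)
print("RMSE weighted:", np.sqrt(np.mean((est - x_true) ** 2)))
print("RMSE mean    :", np.sqrt(np.mean((0.5 * (xa + xb) - x_true) ** 2)))
```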
Estimating the variance function from time series data
The scientific context of this question is the measurement of sap flow velocities. Such data are time series with a distinct diurnal pattern. Restricted to a period with similar conditions (same tree, same weather, ...), it would be justified to model them as a periodic (aka seasonal) time series with time-varying variance.
You should then be able to express both the true value $x(t)$ (hopefully the mean of the unbiased measurements) and the variance $\sigma^2(t)$, for A and B separately, as functions of time $t$. Thus, you obtain data $\big(\hat x(t), \hat \sigma^2(t)\big)$ for many time points $t$. From these data, a relation $x\to \sigma^2$ can be established, where variances from different times with the same mean $x(t)$ are averaged:
- by regression, if you assume a parametric model for the variance functions $\sigma^2_A(x), \sigma^2_B(x)$, or
- by constructing a look-up table, preferably with some kind of smoothing (kernel or splines), see Nonparametric regression; a crude binned version is sketched below.
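One possible realization of the look-up-table idea (my own sketch; it assumes the per-timepoint estimates $\hat x(t)$ and $\hat\sigma^2(t)$ are already available, and kernel or spline smoothing would be the refined alternative):

```python
import numpy as np

def variance_lookup(x_hat, var_hat, n_bins=20):
    """Binned look-up table for sigma^2(x): average the per-timepoint variance
    estimates within bins of the estimated true value x_hat."""
    x_hat, var_hat = np.asarray(x_hat), np.asarray(var_hat)
    edges = np.linspace(x_hat.min(), x_hat.max(), n_bins + 1)
    idx = np.clip(np.digitize(x_hat, edges) - 1, 0, n_bins - 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    var_binned = np.array([var_hat[idx == k].mean() if np.any(idx == k) else np.nan
                           for k in range(n_bins)])
    return centers, var_binned
```

The resulting pairs, computed separately for A and B, then play the role of $\sigma^2_A(x)$ and $\sigma^2_B(x)$ in the weight formulas above.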