I believe the correction factor you first tried applies when modeling $\log Y$ as a linear function of $X$; see this thread. Here you are modeling $Y$ as a function of $\log X$, with the values $x$ specified precisely as part of the experiment, to obtain a calibration curve. You then back-transform new observed values $y_{new}$ to estimate the corresponding $x_{new}$.
In your terminology, with $X_L = \frac{Y-b}{a}$, $X_L$ has an assumed normal distribution with variance $\sigma_L^2$. For a new observation $y_{new}$, the quantity $Z=\exp(X_L)$, which you use for your estimates from the calibration curve, thus has a log-normal distribution with parameters $\mu$ and $\sigma_L^2$. Here $\mu$ is the true $\log x_{new}$ associated with that observation, and we assume that $\sigma_L^2$ is known. What you want is an estimate $\hat \mu$ of $\mu$, which you then exponentiate to get $\hat x_{new}=\exp(\hat\mu)$.
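As a minimal sketch of this setup (the values of $a$, $b$, $y_{new}$, and $\sigma_L$ below are hypothetical placeholders, not from your data):

a <- 1; b <- 0         ## hypothetical calibration slope and intercept
sigmaL <- 1            ## assumed-known SD on the log scale
yNew <- 2.3            ## hypothetical new observed response
xL <- (yNew - b)/a     ## estimate of log(x_new); normal with variance sigmaL^2
exp(xL)                ## Z = exp(xL): log-normal, the source of the bias below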
The problem is that you are getting estimates from $Z=\exp(X_L)$. If $\mu=\log(x_{new})$ is the true value that you seek, $Z$ has an expected value $$\mathbb{E}(Z)=\exp\left(\mu + \sigma_L^2/2\right)=\exp(\mu)\exp(\sigma_L^2/2).$$
Thus $\exp(\mu) = \mathbb{E}(Z) \exp(-\sigma_L^2/2)$, as you found.
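A quick Monte Carlo check of this identity, with hypothetical $\mu$ and $\sigma_L$:

set.seed(42)
mu <- 2; sigmaL <- 1
z <- exp(rnorm(1e6, mean=mu, sd=sigmaL))   ## draws of Z = exp(X_L)
mean(z)                     ## close to exp(mu + sigmaL^2/2) = exp(2.5) ~ 12.18
mean(z) * exp(-sigmaL^2/2)  ## close to exp(mu) = exp(2) ~ 7.39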
The following plot shows this bias and its correction in a simulation of a calibration experiment like yours. Values $y$ were generated as $\log x$ plus normally distributed error with standard deviation 1, for a series of specified $x$ values running from 0.5 to 20. The $y$ values were then used to predict the corresponding $x$ values, either uncorrected via $\exp(y)$ or bias-corrected via $\exp(y-0.5)$, as here $\sigma_L^2/2 = 0.5$ (the fitted calibration has slope 1 and intercept 0, so $y$ itself estimates $\log x$). Mean predicted values at each of the original $x$ values are shown for both estimates. The solid line is the line of identity.
I suppose you could calculate the variance, but with the exponentiation it probably makes more sense to calculate confidence intervals. For a 95% CI, start with what you call $x_L$, add and subtract $1.96\,\sigma_L$, and exponentiate the endpoints. Note that to get the correct coverage you do not use the bias correction; see the code below. This all assumes that you actually know $\sigma_L^2$; I haven't thought through how to account for basing this on a sample estimate $\hat \sigma_L^2$.
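Explicitly, the interval is $$\left[\exp\left(x_L - 1.96\,\sigma_L\right),\ \exp\left(x_L + 1.96\,\sigma_L\right)\right],$$ which keeps its 95% coverage on the original scale because exponentiation is monotone.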

Code for plot and CI:
## 1,000 replicate observations at each of 40 specified x values from 0.5 to 20
xvals <- rep(seq(0.5, 20, by=0.5), 1000)
xvals <- sort(xvals)
xvalClass <- as.character(xvals)
set.seed(1024)
## responses: log(x) plus normal error with sd = 1, so sigma_L^2 = 1
yvals <- log(xvals) + rnorm(length(xvals), mean=0, sd=1)
df1 <- data.frame(x=xvals, y=yvals, calGroup=xvalClass)
df1$calGroup <- as.factor(df1$calGroup)
## order the factor levels numerically rather than alphabetically
df1$calGroup <- reorder(df1$calGroup, as.numeric(as.character(df1$calGroup)))
ymod <- lm(y ~ log(x), data=df1) ## the calibration curve
summary(ymod) ## not shown; very close to ideal
## back-transformed x estimates, without and with the bias correction
df1[,"xUncorr"] <- exp(df1$y)
df1[,"xCorr"] <- exp(df1$y - 1/2)
## mean estimate at each true x value
agg1 <- aggregate(xUncorr ~ calGroup, data=df1, FUN=mean)
agg2 <- aggregate(xCorr ~ calGroup, data=df1, FUN=mean)
dfDisplay <- data.frame(trueX=as.numeric(as.character(agg1$calGroup)),
                        uncorr=agg1$xUncorr, corr=agg2$xCorr)
plot(uncorr ~ trueX, dfDisplay, bty="n", ylab="Estimated x", xlab="True x")
abline(0, 1, col="red") ## line of identity
points(corr ~ trueX, dfDisplay, col="red")
legend("topleft", legend="Black points, uncorrected\nRed points, corrected", bty="n")
## 95% CI: exponentiate y - 1.96*sigma_L and y + 1.96*sigma_L
## (sigma_L = 1 here, y plays the role of x_L, and no bias correction is used)
df1$xCorrL <- exp(df1$y - 1.96)
df1$xCorrU <- exp(df1$y + 1.96)
## proportion of intervals that cover the true x
df1$xInCI <- df1$x > df1$xCorrL & df1$x < df1$xCorrU
mean(df1$xInCI)
## [1] 0.949525
## shows the desired coverage