When I think about a loss function for a point prediction, my mental model always runs like this (Kolassa, 2020): we don't know the outcome we want to predict, so it's best to think about it in terms of a predictive probability density. A point prediction is a one-number summary of this predictive density. Now: given some predictive density, which one-number summary will lead to the lowest expected loss? Essentially, any loss function elicits a particular functional from the predictive density: the MSE elicits the mean, the MAE elicits the median, a pinball loss elicits a quantile. Which functional does your loss function elicit?
(And yes, I maintain this is still a useful way of thinking even if you do not consider a predictive density explicitly - because your uncertainty is always there in the background, whether you choose to ignore it or not.)
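To make this "eliciting" idea concrete, here is a minimal sketch (my own illustration, not part of your setup), using a lognormal density so that the mean and median differ visibly: minimizing the simulated expected squared error over a grid of candidate predictions recovers the mean, and minimizing the expected absolute error recovers the median.

set.seed(1)
sims_ln <- rlnorm(1e6, meanlog = 0, sdlog = 1)  # lognormal: mean about 1.65, median 1
cand <- seq(0.1, 3, by = 0.01)                  # candidate point predictions
cand[which.min(sapply(cand, function(yy) mean((yy - sims_ln)^2)))]   # close to mean(sims_ln)
cand[which.min(sapply(cand, function(yy) mean(abs(yy - sims_ln))))]  # close to median(sims_ln)
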
In the present case, assume, for example, that your predictive density is normal, with a mean of 1 and a standard deviation of 2. (Or alternatively, assume your uncertainty about the outcome can be described in this way.) Then it turns out that the optimal point prediction under your loss is zero, i.e., zero is the $\hat{y}$ that minimizes your expected loss. Thus, your loss does not elicit an unbiased expectation prediction.
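In symbols, the simulation at the bottom approximates the expected loss under this predictive density by Monte Carlo and then minimizes it over a grid of candidate point predictions:
$$\hat{y}^\ast = \arg\min_{\hat{y}} \mathbb{E}\big[L(\hat{y}, Y)\big] \approx \arg\min_{\hat{y}} \frac{1}{N}\sum_{i=1}^N L(\hat{y}, y_i), \qquad y_i \sim N(1, 2^2).$$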

By symmetry, zero is also the optimum point prediction if your conditional expectation is -1. As the SD goes down, the optimal point prediction will eventually move away from zero.
This may or may not be what you expected. After all, the pinball loss is explicitly built to elicit a quantile prediction. However, given how little known and appreciated it seems to be that loss functions like the MAE or the MAPE also elicit non-expectation functionals, I think this aspect is worth pointing out.
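As a quick side check (again just an illustration, not part of your setup), here is how one can verify that the pinball loss at level $\tau$ is indeed minimized at the $\tau$-quantile of the predictive density:

tau <- 0.75
sims_n <- rnorm(1e6, 1, 2)
pinball <- function(yy) mean(ifelse(sims_n >= yy, tau*(sims_n - yy), (1 - tau)*(yy - sims_n)))
cand <- seq(-3, 5, by = 0.01)
cand[which.min(sapply(cand, pinball))]   # close to qnorm(0.75, 1, 2), about 2.35
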
R code for the original example (your loss, normal predictive density with mean 1 and SD 2):
mm <- 1                                  # mean of the predictive density
sd <- 2                                  # standard deviation of the predictive density
xx <- mm + seq(-2*sd, 2*sd, by = 0.01)   # grid of candidate point predictions
sims <- rnorm(1e6, mm, sd)               # draws from the predictive density
# Monte Carlo estimate of the expected loss at each candidate prediction yy:
loss <- sapply(xx, function(yy) mean(abs(yy)*(log(1+(yy-sims)^2)*(sims*yy >= 0) + (yy-sims)^2*(yy*sims < 0))))
xx[which.min(loss)]                      # the expected-loss-minimizing point prediction
plot(xx, loss, type = "l")
abline(v = mm, col = "red")              # the conditional expectation, for comparison