I'm trying to capture heteroskedasticity in the returns of a price time series using a GARCH model.
A basic intuition suggests that I should fit the GARCH model on log-returns: if the price is divided by $2$ at some point in time, that gives a simple return of $-0.5$, while if it is multiplied by $2$, it gives a simple return of $1$. So two price moves of the same amplitude produce returns of different amplitude, because prices live on an exponential scale. With log-returns, "divided by $2$" gives a log-return of $\log(0.5)\approx-0.69$ and "multiplied by $2$" gives a log-return of $\log(2)\approx0.69$: we're good, they are the same in absolute value.
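To make the arithmetic concrete, here is a minimal Python check (the starting price of $100$ is purely illustrative):

```python
import math

p0 = 100.0  # starting price (illustrative)

# Simple returns: halving and doubling have different amplitudes.
simple_down = 50.0 / p0 - 1.0    # -0.5
simple_up = 200.0 / p0 - 1.0     # +1.0

# Log-returns: the same two moves are symmetric.
log_down = math.log(50.0 / p0)   # ~ -0.693
log_up = math.log(200.0 / p0)    # ~ +0.693

print(simple_down, simple_up)    # asymmetric
print(log_down, log_up)          # equal in absolute value
```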
However, after trying the GARCH on log-returns (i.e., the log of the gross return), it appears that log-returns remove a lot of the heteroskedasticity present in the simple returns, so the GARCH no longer distinguishes clearly between periods of high activity and periods of low activity.
To sum up, if I use simple returns, the GARCH clearly distinguishes periods of high volatility, but the same price move has a different amplitude depending on whether it goes up or down, which biases the estimation of the variance in some way.
On the other hand, if I use log-returns, I avoid the "bias" of the exponential scale, but the resulting series shows less heteroskedasticity, which is a problem for my strategy since I scale positions according to volatility.
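For context, here is the kind of comparison I have in mind, as a self-contained numpy sketch (the GARCH parameters and the simulated path are purely illustrative, not my actual data): it simulates a volatility-clustered log-return series, rebuilds prices, and checks how far apart the simple and log returns actually are at daily-scale move sizes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate log-returns with volatility clustering via a simple
# GARCH(1,1)-style recursion (illustrative parameters, not calibrated).
n = 2000
omega, alpha, beta = 1e-6, 0.1, 0.85
h = np.empty(n)    # conditional variances
eps = np.empty(n)  # log-returns
h[0] = omega / (1.0 - alpha - beta)  # unconditional variance
for t in range(n):
    if t > 0:
        h[t] = omega + alpha * eps[t - 1] ** 2 + beta * h[t - 1]
    eps[t] = np.sqrt(h[t]) * rng.standard_normal()

# Rebuild a price path from the simulated log-returns.
prices = 100.0 * np.exp(np.cumsum(eps))

simple_ret = prices[1:] / prices[:-1] - 1.0
log_ret = np.diff(np.log(prices))

# For small moves, r_simple = exp(r_log) - 1 ~ r_log + r_log**2 / 2,
# so the two definitions are numerically almost identical.
print(np.max(np.abs(simple_ret - log_ret)))
```

For typical daily-sized moves the two return definitions barely differ, which is why the difference in measured heteroskedasticity surprised me.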
What is usually used in practice to forecast volatility? Is it more appropriate, in general, to fit a GARCH on returns or on log-returns to estimate volatility?