The evaluation function and the scaling are distinct issues in my mind. To me, scaling to $0$-$100$ is straightforward: compare to a reasonable baseline model. This is what the usual $R^2$ does by comparing the square loss of your model to the square loss of a baseline model that always predicts the overall mean (I argue here to use the in-sample mean, a stance supported by the statistics literature). For your time series problem, it might be reasonable to compare to a moving target as you get more information with the longer time series, discussed here with a reference to an article from the Review of Financial Studies. I simulate something like this here. Once you have the performance of your baseline model, you do a familiar calculation.
$$
1-\dfrac{\text{Performance of your model}}{\text{Performance of the baseline model}} = \dfrac{\text{Performance of the baseline model}-\text{Performance of your model}}{\text{Performance of the baseline model}}
$$
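As a minimal sketch of that calculation, here is what it looks like with square loss and an in-sample-mean baseline (which recovers the usual $R^2$); the data are hypothetical:

```python
import numpy as np

def scaled_score(loss_model, loss_baseline):
    """R^2-style score: 1 minus the ratio of model loss to baseline loss."""
    return 1 - loss_model / loss_baseline

# Hypothetical true values and model predictions
y = np.array([3.0, 5.0, 4.0, 7.0, 6.0])
preds = np.array([2.8, 5.5, 3.9, 6.5, 6.2])

# Baseline model: always predict the in-sample mean
baseline_preds = np.full_like(y, y.mean())

loss_model = np.sum((y - preds) ** 2)          # sum of squared errors of your model
loss_baseline = np.sum((y - baseline_preds) ** 2)  # total sum of squares

score = scaled_score(loss_model, loss_baseline)  # the usual R^2 in this case
```

Multiply by $100$ if you want the $0$-$100$ scale rather than $0$-$1$.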
(It might make more sense if the measure of performance is $0$ when the predictions exactly match the true values; I discuss such an issue here (look for "...annoyingly..."). Most measures of performance will give you this, e.g., square loss, absolute loss, and crossentropy loss.)
A possible issue with an $R^2$-style comparison to a baseline is that it will be less than zero if the performance is worse than that of the baseline. That falls outside your desired range, but there is no limit to how bad predictions can be, so I am not sure there should be a lower bound.
To do this, you have to consider an appropriate way to measure the quality of your model and the baseline. The sum of squared deviations/residuals/errors is a popular choice, as is the sum of absolute deviations. However, you have mentioned an uneven penalty for missing high and missing low. Quantile loss and tilted square loss (not sure of a common name for it) might fit the bill, as they allow you to give different penalties for missing high by $\delta$ and missing low by $\delta$. Then you calculate the quantile or tilted square loss for your model and the baseline model, sticking those into the expression above to get your scaled score.
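A sketch of those two asymmetric losses and the resulting scaled score, again with hypothetical data and an in-sample-mean baseline (the function names are my own, not standard):

```python
import numpy as np

def quantile_loss(y, pred, tau):
    """Pinball loss: under-predictions weighted by tau, over-predictions by 1 - tau."""
    d = y - pred
    return np.sum(np.where(d >= 0, tau * d, (tau - 1) * d))

def tilted_square_loss(y, pred, tau):
    """Same tilting idea applied to squared errors instead of absolute errors."""
    d = y - pred
    return np.sum(np.where(d >= 0, tau, 1 - tau) * d ** 2)

# Hypothetical data; baseline always predicts the in-sample mean
y = np.array([3.0, 5.0, 4.0, 7.0, 6.0])
preds = np.array([2.8, 5.5, 3.9, 6.5, 6.2])
baseline = np.full_like(y, y.mean())

tau = 0.8  # missing low hurts four times as much as missing high
score = 1 - quantile_loss(y, preds, tau) / quantile_loss(y, baseline, tau)
```

With $\tau = 0.5$ the quantile loss reduces to (half) the absolute loss and the tilted square loss to (half) the square loss, so the symmetric cases are nested inside these.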
I encourage readers to consider the comment by Stephan Kolassa about determining what you want to forecast, too, as that can influence if you want quantile loss, tilted square loss, or something else.