When analyzing a dataset based on percentages, I have on occasion worked with the data as "full" values (e.g. "50") or "reduced" values (e.g. "0.50").
However, it just occurred to me that this choice could have a serious impact on the standard deviation and variance.
If the variance is above 1, the standard deviation (its square root) will be smaller than the variance. But if the variance is below 1, the standard deviation will be bigger than the variance.
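To illustrate what I mean, here is a minimal sketch with made-up numbers, comparing the same data on the two scales:

```python
import numpy as np

# Hypothetical percentage data on the "full" scale (0-100)
full = np.array([50.0, 60.0, 45.0, 70.0, 55.0])
reduced = full / 100.0          # the same data on the "reduced" scale (0-1)

for name, x in [("full", full), ("reduced", reduced)]:
    var = np.var(x, ddof=1)     # sample variance
    sd = np.std(x, ddof=1)      # sample standard deviation = sqrt(variance)
    print(f"{name:8s}  variance = {var:10.6f}  sd = {sd:10.6f}")

# On the full scale, variance (about 92.5) is larger than sd (about 9.6);
# on the reduced scale, variance (about 0.00925) is smaller than sd (about 0.096).
# Rescaling the data by c = 1/100 multiplies the sd by c but the variance by c^2.
```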
Does this mean it is good practice to never have a dataset resulting in a variance below 1? Or is there a way to account for this?