Quite often in published research, we see researchers apply a log transformation to their data, and some claim that this makes the data closer to a normal distribution. My questions are:
1. Mathematically, why might this be true? In particular, it would be great if you could illustrate how a log transformation brings the characteristics of the sample or data (such as dispersion, skewness, etc.) closer to those of a normal distribution.
2. Does it always bring the data closer to normal, or are there situations in which it fails? (A rough before/after comparison like the sketch below is the kind of thing I have in mind.)
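For concreteness, here is a minimal simulation sketch of my own (not from any of the papers in question), using numpy/scipy. The lognormal sample is the clean "it works" case, since $\log X$ is then normal by construction; the Beta(5, 1) sample is an arbitrary left-skewed choice meant to probe question 2.

```
# Minimal sketch: compare sample skewness before and after a log transform for
# (a) a lognormal sample, where log(X) is normal by construction, and
# (b) a left-skewed Beta(5, 1) sample, where logging should not help.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)  # fixed seed, arbitrary choice

samples = {
    "lognormal":        rng.lognormal(mean=0.0, sigma=1.0, size=100_000),
    "left-skewed beta": rng.beta(5, 1, size=100_000),  # strictly positive, so log() is defined
}

for name, x in samples.items():
    print(f"{name:17s} skew(x) = {stats.skew(x):+5.2f}   skew(log x) = {stats.skew(np.log(x)):+5.2f}")

# Typical result: the lognormal sample's skewness drops from roughly +6 to about 0,
# while the left-skewed sample goes from about -1.2 to about -2, i.e. further from normal.
```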
Related threads (from a comment by Glen_b, Jun 25 '20 at 03:09):

2. https://stats.stackexchange.com/questions/67437/confusion-related-to-which-transformation-to-use/
3. https://stats.stackexchange.com/questions/418316/what-if-we-take-the-logarithm-of-x-how-does-skewness-and-kurtosis-change/