Aksakal provides a good experience-based answer to this question which I think is sufficient. I would simply like to add that using inferential tests of normality (such as the Shapiro-Wilk, Anderson-Darling, and Kolmogorov-Smirnov tests) is often a poor way of assessing normality, especially with large sample sizes. Simulations have shown that these tests, the Shapiro-Wilk in particular, will flag even slight skew or kurtosis once the sample is large enough, because their power grows with sample size. For example, the Shapiro-Wilk statistic is defined as:
$$
\Large
W = \frac{\left(\sum_{i=1}^n a_i x_i\right)^2}{\sum_{i=1}^n (x_i-\bar{x})^2}
$$
where $x_i$ are the ordered sample values and $a_i$ are constants derived from the means, variances, and covariances of the order statistics of a standard normal sample of size $n$. You can already see that $n$ enters this calculation, so how the test behaves depends directly on how many data points you feed it.
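To make the formula a bit more concrete, here is a rough sketch of the statistic in R. It uses the Shapiro-Francia approximation, where the covariance matrix of the normal order statistics is treated as the identity, so the weights $a_i$ are just normalized expected order statistics; the exact Shapiro-Wilk weights are more involved, but the result is usually close. The variable names here are only for illustration.
#### Rough Sketch of the W Statistic (Shapiro-Francia Approximation) ####
set.seed(1)
z  <- rnorm(50)                    # small sample from a standard normal
zs <- sort(z)                      # ordered values
m  <- qnorm(ppoints(length(zs)))   # approximate expected normal order statistics
a  <- m / sqrt(sum(m^2))           # normalized weights (identity covariance assumed)
sum(a * zs)^2 / sum((zs - mean(zs))^2)   # compare with shapiro.test(z)$statistic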
This means that, given enough data, even slightly non-normal distributions get flagged by this test. Here is a simulated example in R, where I have drawn from a beta distribution whose parameters are chosen so that it looks mostly normal.
#### Simulate Data and Plot ####
set.seed(123)
x <- rbeta(5000, 4, 4)   # symmetric Beta(4, 4) sample, n = 5000
hist(x)

Running the Shapiro-Wilk test:
#### Run Test ####
shapiro.test(x)
We get a flagged result, despite the data coming from what appears to be a mostly normal distribution:
Shapiro-Wilk normality test
data: x
W = 0.99602, p-value = 2.236e-10
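For contrast, the very same distribution sampled at a much smaller size will typically not be flagged. A quick simulation of rejection rates across a few sample sizes (just a sketch; the helper function below is hypothetical) makes the sample-size dependence explicit, and you should see the rejection rate climb sharply as $n$ grows:
#### Same Distribution, Different Sample Sizes ####
set.seed(123)
shapiro.test(rbeta(100, 4, 4))   # small sample: usually not flagged

# estimated rejection rate at alpha = 0.05 for a few sample sizes
reject_rate <- function(n, reps = 200) {
  mean(replicate(reps, shapiro.test(rbeta(n, 4, 4))$p.value < 0.05))
}
sapply(c(50, 500, 5000), reject_rate)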
A good number of parametric tests in statistics are built on the normality assumption, but they can also be fairly robust to non-normality, particularly in cases like this where the departure from normality is minor. My advice is to ignore these types of tests and stick to visual aids like the histogram you already have.
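A normal Q-Q plot is another simple visual check alongside the histogram; for the simulated data above it should stay close to the reference line apart from some mild curvature in the tails:
#### Visual Check with a Q-Q Plot ####
qqnorm(x)   # ordered data against theoretical normal quantiles
qqline(x)   # reference line through the first and third quartiles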
Having said that, the other answer has clearly outlined that not only is this distribution non-normal, but the type of data you are using tends to be non-normally distributed in general (indeed, your distribution appears to have large kurtosis even at a glance). So while my advice is to use graphical methods to assess normality, that does not change the interpretation given in the other answer.