Entropy is a function of probabilities, not a direct function of observed values. scipy.stats.entropy takes probabilities for each of the possible values of x as inputs, not the observed values.
Entropy (Wikipedia): $\mathrm{H}(X) := -\sum_{x \in \mathcal{X}} p(x) \log p(x)$
As per the documentation of scipy.stats.entropy: "This routine will normalize pk and qk if they don’t sum to 1."
You supplied two vectors of observed values, which will be interpreted as vectors of probabilities: each one is normalized so that it sums to one, and the entropy is then computed according to the formula above.
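For instance (a minimal Python sketch, assuming nothing beyond scipy itself), passing counts or the corresponding probabilities should give the same answer, because entropy() rescales its input before applying the formula:

from scipy.stats import entropy

# Counts and the corresponding probabilities describe the same distribution,
# so entropy() returns the same value for both.
print(entropy([2, 1, 1]))          # ≈ 1.0397
print(entropy([0.5, 0.25, 0.25]))  # ≈ 1.0397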
Computations and results in R:
> a <- c(5,5,5,6,5,5,1,6,5,5,5,9,5,99,5)
> b <- rep(1, 15)
>
> normalized_a <- sapply(a, function(x) x/(sum(a)))
> normalized_b <- sapply(b, function(x) x/(sum(b)))
> round(normalized_a, digits = 3)
[1] 0.029 0.029 0.029 0.035 0.029 0.029 0.006 0.035 0.029
[10] 0.029 0.029 0.053 0.029 0.579 0.029
> round(normalized_b, digits = 3)
[1] 0.067 0.067 0.067 0.067 0.067 0.067 0.067 0.067 0.067 0.067
[11] 0.067 0.067 0.067 0.067 0.067
>
> -sum(sapply(normalized_a, \(x) x*log(x)))
[1] 1.769354
> -sum(sapply(normalized_b, \(x) x*log(x)))
[1] 2.70805
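The same numbers should come straight out of scipy.stats.entropy (its default base is the natural log), since it performs exactly this normalization internally:

from scipy.stats import entropy

a = [5, 5, 5, 6, 5, 5, 1, 6, 5, 5, 5, 9, 5, 99, 5]
b = [1] * 15

print(entropy(a))  # ≈ 1.769354, matching the R value above
print(entropy(b))  # ≈ 2.70805, i.e. log(15), the maximum for 15 categories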
Your vector b has all values identical, so after normalization it is treated as a distribution in which every possible value is equally likely. A uniform distribution means maximum surprise, and therefore maximum entropy (here log(15) ≈ 2.708).
Your vector a has mostly repeated values plus one very large value (99), which after normalization is interpreted as a single outcome with very high probability (≈ 0.58), so its entropy comes out lower.
In both cases, though, this is only because you supplied the observed values themselves rather than the relative frequency with which each distinct value was observed.
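If what you actually want is the entropy of the distribution of values in a, one option (an illustrative sketch, assuming that interpretation) is to pass the frequency of each distinct value instead of the raw observations:

import numpy as np
from scipy.stats import entropy

a = [5, 5, 5, 6, 5, 5, 1, 6, 5, 5, 5, 9, 5, 99, 5]

# Tally how often each distinct value occurs; entropy() then normalizes
# these counts into relative frequencies.
_, counts = np.unique(a, return_counts=True)
print(counts)           # [ 1 10  2  1  1] for the values 1, 5, 6, 9, 99
print(entropy(counts))  # ≈ 1.08, the entropy of the empirical distribution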