I am trying to write some code to automatically detect whether a time series is seasonal. I have been looking into using the Kruskal-Wallis test, as there are a few examples of this being useful online, e.g. here.
Basically, you would perform this test by breaking the time series into groups (say, one group per year) and running the Kruskal-Wallis test on those groups to see whether they are likely to have been sampled from the same distribution. The idea is that if the data is seasonal, then each year (or month, or whatever grouping you choose) should have roughly the same distribution, and in particular the same mean.
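For concreteness, here is a minimal sketch of the procedure as I understand it; the function name, the pandas/scipy usage, and the monthly-data assumption are mine, not taken from the examples I linked:

```python
import pandas as pd
from scipy.stats import kruskal

def kw_seasonality_check(series: pd.Series, period: int = 12, alpha: float = 0.05) -> bool:
    """Split the series into consecutive chunks of length `period` (e.g. one year
    of monthly data per chunk) and run the Kruskal-Wallis test across the chunks."""
    values = series.dropna().to_numpy()
    # Keep only complete chunks so every group has the same number of observations.
    n_chunks = len(values) // period
    chunks = [values[i * period:(i + 1) * period] for i in range(n_chunks)]
    _, p_value = kruskal(*chunks)
    # Per the logic described above: "groups not significantly different" -> "probably seasonal".
    return p_value >= alpha
```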
However, there seem to be two fundamental flaws. Most importantly, stationary data will also have the same mean each year (by definition), even if it is non-seasonal.
Second, the Kruskal-Wallis test's null hypothesis is that all of the groups are drawn from the same distribution. The idea for seasonality detection is then: if we fail to reject the null hypothesis when the time series is broken into groups of a certain lag, the data is probably seasonal at that lag. However, this strikes me as the opposite of what we want. The null hypothesis should be that the data is not seasonal, and we should only reject it if the data "looks seasonal enough".
Am I misunderstanding something, or is this a reasonable argument to dismiss the Kruskal-Wallis test for seasonality detection?