I have a situation with around 30 classes of variables, each with different means and variances (though the means aren't far from each other; think 4-7), and the distributions are right-skewed. I am trying to do hypothesis testing on the sum of variables drawn from these classes.
For example, sometimes I have 20 values sampled at random from these classes, and other times 50. When I plot a couple hundred such sums, the distribution looks close to normal.
Looking around, I found the Lyapunov version of the central limit theorem. The only thing I'm catching on is the denominator in the normalization formula.
Normally I would take the standard deviation of the variables making up the sum and use that, but is that appropriate in this case? I believe it is, but I'd like confirmation or a source that works through an application of the Lyapunov CLT to real data.
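For concreteness, here is the normalization I'm referring to. If the $X_i$ are independent with means $\mu_i$ and variances $\sigma_i^2$, the Lyapunov CLT normalizes by $s_n = \sqrt{\sum_i \sigma_i^2}$: provided the Lyapunov condition holds for some $\delta > 0$,

$$\lim_{n \to \infty} \frac{1}{s_n^{2+\delta}} \sum_{i=1}^{n} \mathbb{E}\left[\,|X_i - \mu_i|^{2+\delta}\,\right] = 0,$$

then

$$\frac{1}{s_n} \sum_{i=1}^{n} (X_i - \mu_i) \;\xrightarrow{d}\; N(0, 1).$$

So the denominator is the square root of the sum of the individual class variances, and my question is whether plugging in the estimated class standard deviations there is appropriate.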
Edits based on comments:
Additional information: the distributions are positive and right-skewed.
I've estimated the means of all the classes using a couple of prior years of data, and can compute the estimated standard deviations as well if needed.
What I'm trying to do is tell whether a sum of $n$ random values drawn from a combination of different classes is statistically different from the sum of the estimated means of those classes.
So, suppose I have 20 values randomly chosen from the 30 classes (the same class can be chosen more than once). I add up the values to get a sum $X = x_1 + \dots + x_{20}$. Is this significantly different from the sum of the means of the classes represented here, $\mu = \mu_1 + \dots + \mu_{20}$?
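To make the test I have in mind concrete, here is a minimal sketch of what I'm currently doing, assuming independence of the draws and using the estimated class means and standard deviations (the numbers below are made up for illustration):

```python
import math

def sum_z_test(values, class_means, class_sds):
    """Two-sided z-test for whether the observed sum of independent draws
    differs from the sum of its classes' estimated means.

    class_means[i] / class_sds[i] are the estimates for the class that
    values[i] was drawn from (classes may repeat)."""
    x = sum(values)
    mu = sum(class_means)
    # CLT normalization: sd of a sum of independent terms is the
    # square root of the sum of the individual variances
    s = math.sqrt(sum(sd ** 2 for sd in class_sds))
    z = (x - mu) / s
    # two-sided p-value under the standard normal approximation
    p = math.erfc(abs(z) / math.sqrt(2))
    return z, p

# hypothetical numbers for illustration
values      = [5.1, 6.3, 4.8, 7.2, 5.5]
class_means = [5.0, 6.0, 5.0, 6.5, 5.5]
class_sds   = [1.2, 1.5, 1.0, 1.8, 1.1]
z, p = sum_z_test(values, class_means, class_sds)
```

My question is essentially whether the `s` computed this way is the right denominator, given that the classes are heterogeneous and skewed.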