
I have data that is not normally distributed. I can log-transform it to be normally distributed, and then perform a t-test and get confidence intervals (CI).

But how do I interpret the results of the t-test and the CIs?

In my example, I have skewed data that I log-transformed.

My original data has:

median  163000.00
mean    180921.20

My log data has:

median  5.21
mean    5.22
ci      0.0089

How do I back-transform it to get the original CI?

Now let's say I want to perform a t-test to compare 2 means in the log-transformed data

mean1   5.22
std1    0.172   
n1      32

mean2   5.31
std2    0.214
n2      36

I ran a t-test and got a p-value of 0.0624, meaning the means are not statistically different at the 95% confidence level.

How do I interpret this result for my original data? Do I have to transform back the p-value, before I can say that the difference between the two means is significant? Anything else I need to do? Or can I just take the results and interpret them as if the test was performed on the original data?

rebar
    It's not clear from the question how you calculated the ci for the log-transformed data. Is that the confidence interval for the mean of the log-transformed data? Please provide that information by editing the question, as comments are easy to overlook and can be deleted. – EdM Sep 24 '22 at 14:15
  • Since you want results to be stated on the original scale, it might be better to use a glm (generalized linear model), since then you transform the parameters and not the data. Maybe a gaussian glm with log link function? But we need more details ... maybe you can share the data? – kjetil b halvorsen Sep 25 '22 at 14:29

2 Answers


I can log-transform it to be normally distributed, and then perform a t-test and get confidence intervals (CI). But how do I interpret the results of the t-test and the CIs?

If you want to compare 2 groups, the jump to log transformation to ensure normality isn't necessary and might end up confusing things. Think first about what aspect of the 2 groups is most important to compare.

Here's a simple data set close to what you describe:

set.seed(101)
grp1 <- rlnorm(32, 5.22, 0.172)
grp2 <- rlnorm(36, 5.34, 0.214)
grpDF <- data.frame(val = c(grp1, grp2), grp = c(rep(1, 32), rep(2, 36)))

Simple comparison

If what you care about is whether randomly sampled members of one group are likely to have greater values than those randomly sampled from the other group, you can use the Wilcoxon-Mann-Whitney (WMW) test directly on the untransformed values (or on any monotonic transformation of them):

wilcox.test(val~grp,data=grpDF)
## 
##  Wilcoxon rank sum exact test
## 
## data:  val by grp
## W = 409, p-value = 0.04023
## alternative hypothesis: true location shift is not equal to 0

Semi-parametric comparison

That non-parametric test doesn't provide a confidence interval (CI). If you want a related CI, try ordinal logistic regression. Frank Harrell explains in Chapter 10 of his course notes that the WMW test can be considered a special case of ordinal regression, as implemented in the orm() function of his rms package.

rms::orm(val~grp,data=grpDF)
# Logistic (Proportional Odds) Ordinal Regression Model
#  
#  rms::orm(formula = val ~ grp, data = grpDF)
#  
#                           Model Likelihood               Discrimination    Rank Discrim.    
#                                 Ratio Test                      Indexes          Indexes    
#  Obs               68    LR chi2      4.41    R2                  0.063    rho     0.251    
#  Distinct Y        68    d.f.            1    g                   0.457                     
#  Median Y    194.0303    Pr(> chi2) 0.0357    gr                  1.580                     
#  max |deriv|    8e-07    Score chi2   4.38    |Pr(Y>=median)-0.5| 0.112                     
#                          Pr(> chi2) 0.0364                                                  
#  
#      Coef   S.E.   Wald Z Pr(>|Z|)
#  grp 0.9041 0.4351 2.08   0.0377  

Given any outcome value, the coefficient is the log-odds difference between Group 2 and Group 1 of being at least that high. The estimate is assumed to be asymptotically normally distributed, so you can use the standard error to get a CI.
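For example, a Wald-type 95% interval can be sketched from the reported coefficient and standard error (values copied from the output above):

ciLogOdds <- 0.9041 + c(-1, 1) * qnorm(0.975) * 0.4351
ciLogOdds      # about 0.051 to 1.757 on the log-odds scale
exp(ciLogOdds) # odds ratio of about 1.05 to 5.79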

Generalized linear model

If you care about the mean of the values in the original scale, you can model in that scale directly via a Gaussian generalized linear model with a log link as Kjetil Halvorsen suggested in a comment. That models the log of the mean as a function of group membership.

glmGrp <- glm(val~grp,data=grpDF,family=gaussian(link="log"))
summary(glmGrp)
## 
## Call:
## glm(formula = val ~ grp, family = gaussian(link = "log"), data = grpDF)
## some lines omitted 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  5.12880    0.07046  72.788   <2e-16
## grp          0.09373    0.04267   2.197   0.0316 

With grp coded numerically as 1 and 2, the intercept is the estimated log mean at grp = 0, so the estimated log of the mean for the first group is the intercept plus the grp coefficient; the grp coefficient itself is the log of the ratio of the second group's mean to the first's.
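Exponentiating then puts those estimates back on the original scale. For example, a sketch using Wald limits from the summary above (profile-likelihood limits would be a refinement):

# back-transform the glm coefficients (sketch; Wald limits)
cf <- coef(summary(glmGrp))["grp", c("Estimate", "Std. Error")]
exp(coef(glmGrp)[1] + 1:2 * coef(glmGrp)[2]) # estimated group means, about 185 and 204
exp(cf[1])                                   # ratio of group-2 to group-1 means, about 1.098
exp(cf[1] + c(-1, 1) * qnorm(0.975) * cf[2]) # 95% CI for that ratio, about 1.01 to 1.19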

Those methods all avoid the need to start with a transformation.

Log-transform, then t-test

If you do a log transformation of the values and then do a t-test, you are instead modeling the mean of the log values, which corresponds to the geometric mean in the original scale.

t.test(log(grp1),log(grp2))
## 
##  Welch Two Sample t-test
## 
## data:  log(grp1) and log(grp2)
## t = -1.9694, df = 62.558, p-value = 0.05334
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##  -0.168314383  0.001239952
## sample estimates:
## mean of x mean of y 
##  5.212947  5.296484 
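As a quick check that back-transforming the mean of the logs gives the geometric mean rather than the arithmetic mean:

exp(mean(log(grp1))) # geometric mean of group 1, about 184 (matches 5.212947 above)
mean(grp1)           # the arithmetic mean is larger for right-skewed data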

How do I interpret this result for my original data? Do I have to transform back the p-value...

There's no assurance that the "significance" (p-value) of a test on the geometric mean would represent the "significance" of a test on the arithmetic mean, which seems to be the p-value transformation that you're asking for.

How do I back-transform it to get the original CI?

If you need to work on log-transformed values, it's best to stay in that transformed scale. Sometimes a log-based scale makes a lot of sense, as with Cq values in real-time polymerase chain reaction assays, or with pH values.

A simple back transformation of a mean and CI in the log scale, as suggested in another answer, can fail to include the mean on the original scale. For example:

set.seed(11)
lnDat <- rlnorm(40,5,1)
(meanLog <- mean(log(lnDat)))
# [1] 4.680395
(sdLog <- sd(log(lnDat)))
# [1] 0.7746519
## back-transform the 95% CI from the log scale
t975.39 <- qt(0.975,df=39)
exp(meanLog + t975.39 * sdLog/sqrt(40))
# [1] 138.1225
exp(meanLog - t975.39 * sdLog/sqrt(40))
# [1] 84.15407
# the actual mean in the original scale
mean(lnDat)
# [1] 149.2602

Back-transforming log-transformed regression coefficients is similarly prone to error. If the original data are log-normally distributed, then the mean of the log-transformed values is the log of the median in the original scale, not the mean. There's no assurance that relationship will hold for other skewed distributions, however.
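A quick check with the simulated data above (the sample median won't match exactly, of course):

exp(meanLog)  # about 107.8: an estimate of the median, not the mean, if log-normal
median(lnDat) # compare with the sample median in the original scale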

For log-normally distributed data, there are asymptotic formulas for back-transforming CI from the log scale to the original scale when there are large numbers of observations. This page shows the formula for the CI for a mean value, and this paper presents a formula for a likelihood-based test to compare 2 mean values properly, given what the authors call the "Inappropriateness of t-Test Based on Log-Data."
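For illustration, here is a sketch of one such asymptotic formula (often attributed to Cox) applied to the lnDat example above; it assumes the data really are log-normal:

# Cox's asymptotic CI for a log-normal mean (sketch; assumes log-normality)
n  <- 40
s2 <- sdLog^2
estLogMean <- meanLog + s2/2                  # log of the estimated arithmetic mean
seLogMean  <- sqrt(s2/n + s2^2/(2 * (n - 1))) # asymptotic standard error
exp(estLogMean + c(-1, 1) * qnorm(0.975) * seLogMean)
# roughly 111 to 192, which does include mean(lnDat) of about 149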

But why do that? Decide what type of comparison of the groups makes the most sense for your study. If, on that basis, a comparison of log-transformed values is appropriate, that's OK, but then you should stick with that log-transformed scale.

EdM

First of all, I have to highlight that you used log10, which is unusual (although it doesn't affect anything); the natural log is preferred and is usually assumed if the base is not specified.

With confidence intervals, it's very easy. You have

mean    5.2200
ci      +/- 0.0089

so the log10 confidence interval is

5.2111 to 5.2289

and the non-transformed confidence interval will be

10^5.2111 to 10^5.2289

or

162592.31 to 169394.77

Yes, it's not symmetric.
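In R, for example (a sketch using the rounded numbers above):

10^(5.2200 + c(-1, 1) * 0.0089) # about 162600 to 169400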


No, you don't have to do anything with the p-value.

Alex
  • The problem with this simple back-transformation (which I used to do, before I learned otherwise) is that the back transformed limits don't represent CI for the mean in the original scale. Your back-transformed CI don't include the mean value of 180921.20 shown in the OP. I added a reproducible example to my answer, showing how such limits can completely miss the mean. If you know that the original data are really log-normal then you might get something like CI for the original median. Links in my answer show asymptotic formulas for correct back-transformation for CI for the mean. – EdM Oct 01 '22 at 07:11
  • A fair point. When I transform the lognormal data into their log form, I just fully forget about the existence of the original form in most cases. The original form doesn't show what's really going on, the process. But sometimes back transformation is warranted, like when you try to calculate a European call option price for a time series with constant variance, which is actually infinity. Only in such cases, these issues with back-transformation really manifest themselves. – Alex Oct 01 '22 at 10:00