70

A published article (pdf) contains these 2 sentences:

Moreover, misreporting may be caused by the application of incorrect rules or by a lack of knowledge of the statistical test. For example, the total df in an ANOVA may be taken to be the error df in the reporting of an $F$ test, or the researcher may divide the reported p value of a $\chi^2$ or $F$ test by two, in order to obtain a one-sided $p$ value, whereas the $p$ value of a $\chi^2$ or $F$ test is already a one-sided test.

Why might they have said that? The chi-squared test is a two-sided test. (I have asked one of the authors, but gotten no response.)

Am I overlooking something?

Joel W.
  • 3,306
  • see http://stats.stackexchange.com/questions/171074/chi-square-test-why-is-the-chi-squared-test-a-one-tailed-test/171084#171084 –  Sep 13 '15 at 06:59
  • Look at exercise 4.14 of Davidson & MacKinnon 'Econometric Theory and Methods' 2004 edition for an (exceptional) example of when the Chi-squared is used for a two-tailed test. Edit: great explanation here: http://www.itl.nist.gov/div898/handbook/eda/section3/eda358.htm – Max Apr 29 '13 at 06:32
  • There's at least one case where it makes sense to talk about a one-sided chi-squared: when you have two dichotomous variables. I give more details here. – Arnaud Mortier Feb 12 '20 at 10:47
  • The χ² test can be either one- or two-sided in these scenarios: (A) comparing two samples' proportions in a 2x2 contingency table; (B) comparing a sample's observed variance to a null value. In case A, the p-value is equal to either (i) half the upper χ² tail (for a one-sided H1: p_A > p_B or p_A < p_B) or (ii) all of the upper χ² tail (for a two-sided H1: p_A ≠ p_B). In case B, the p-value is equal to (i) just the upper χ² tail (for one-sided H1: σ² > σ₀²), (ii) just the lower χ² tail (for one-sided H1: σ² < σ₀²), or (iii) upper χ² tail + lower χ² tail (for two-sided H1: σ² ≠ σ₀²). – jdcrossval Feb 04 '23 at 00:04
  • Is this just a matter of terminology? For example, on https://sphweb.bumc.bu.edu/otlt/MPH-Modules/PH717-QuantCore/PH717-Module8-CategoricalData/PH717-Module8-CategoricalData3.html: "As with the t-distribution, the χ2 distribution is actually a series of distributions, i.e., one for each number of degrees of freedom, and the upper tail area is the probability, i.e., the p-value. Even though it evaluates the upper tail area, the chi-square test is regarded as a two-tailed test (non-directional), since it is basically just asking if the frequencies differ." – hackerb9 Feb 08 '24 at 01:18
  • @hackerb9 If my null hypothesis is directional, is it logical to adjust the p value of the chi sq accordingly? – Joel W. Feb 08 '24 at 14:39
  • @JoelW Yes, you just use the upper tail area for a directional H₀. But, my question was whether this confusion is because people speak of it as non-directional because the most common use is to determine a binary (testing for independence or not.) – hackerb9 Feb 15 '24 at 18:30

7 Answers

73

The chi-squared test is essentially always a one-sided test. Here is a loose way to think about it: the chi-squared test is basically a 'goodness of fit' test. Sometimes it is explicitly referred to as such, but even when it's not, it is still often in essence a goodness-of-fit test. For example, the chi-squared test of independence on a 2 x 2 frequency table is (sort of) a test of goodness of fit of the first row (column) to the distribution specified by the second row (column), and vice versa, simultaneously. Thus, when the realized chi-squared value is way out on the right tail of its distribution, it indicates a poor fit, and if it is far enough, relative to some pre-specified threshold, we might conclude that it is so poor that we don't believe the data are from that reference distribution.

If we were to use the chi-squared test as a two-sided test, we would also be worried if the statistic were too far into the left side of the chi-squared distribution. This would mean that we are worried the fit might be too good. This is simply not something we are typically worried about. (As a historical side-note, this is related to the controversy of whether Mendel fudged his data. The idea was that his data were too good to be true. See here for more info if you're curious.)
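
To make the tail usage concrete, here is a minimal R sketch (the counts and the equal-proportions null below are made up purely for illustration). The reported p-value comes from the upper tail only; the lower tail is what a 'too good to be true' check in the style of the Mendel controversy would examine:

    # Hypothetical 2-category goodness-of-fit test
    observed <- c(25, 75)
    expected <- c(0.5, 0.5) * sum(observed)   # H0: equal proportions
    stat <- sum((observed - expected)^2 / expected)

    pchisq(stat, df = 1, lower.tail = FALSE)  # the usual (right-tail) p-value
    pchisq(stat, df = 1, lower.tail = TRUE)   # lower tail: is the fit 'too good'?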

  • 9
    +1 for mentioning the two-sided use with Mendel's pea experiments: it's memorable and gets to the heart of the question. – whuber Feb 06 '12 at 17:01
  • 1
    I see what you are saying about goodness of fit, Jon, but consider this.

    Let's say we are comparing survival rates for groups A and B. The survival rate for group A could be higher than the survival rate for group B, or it could be lower. Why is that not a 2-tailed test?

    – Joel W. Feb 06 '12 at 17:02
  • 3
    +1 for a good question and an excellent answer. @Joel W: I can strongly recommend Khan Academy's video on the $\chi^2$ test – Max Gordon Feb 06 '12 at 17:15
  • 1
    @JoelW., A could be higher than B, or B > A, but in both cases, when using the chi-squared test, you are asking if the two distributions are good fits for each other; in neither case are you worried that the numbers might match too identically. – gung - Reinstate Monica Feb 06 '12 at 17:23
  • @JoelW to expand on gung's comment on your comment, in both cases the result is a test statistic that is too high for the distribution under a null hypothesis, which is why we only check the value on the right tail. – Peter Ellis Feb 06 '12 at 18:48
  • Yes, Peter, in that sense we see the Chi-Sq as one-sided. But, we can be interested in both ways the results can deviate from the null hypothesis (A>B and B>A). In that sense the test is two-sided. If we were only interested in outcomes in one direction (e.g., is the new treatment, A, more effective than the old one, B?), would it not be appropriate to modify the alpha level to reflect this directional alternative hypothesis? – Joel W. Feb 06 '12 at 19:07
  • 2
    @JoelW., 2 points here. (1) You may certainly be interested in both orderings of the groups, but this is not the same thing as being interested in whether the chi-squared statistic falls in either tail of its distribution. Whether A>>B (i.e., much bigger) or B>>A, the chi-squared statistic would fall far into the right tail of the dist. (2) You may adjust the alpha level for your studies as you like (provided this is done before gathering data & the process/reasoning is clearly stated), but that is very different from "dividing the p value by 2", which would not be valid. – gung - Reinstate Monica Feb 06 '12 at 19:29
  • 1
    To illustrate: saying, e.g., 'gathering a lot of data would be difficult, so we set $\alpha=.10$; our p=.06' would be perfectly valid. But finding p=.06, dividing it by 2, and saying 'we set $\alpha=.05$; our p=.03' would not be valid. In this way, both you and the authors are correct. – gung - Reinstate Monica Feb 06 '12 at 19:34
  • 13
    My summary of this is that the $\chi^2$ is a two-sided test for which we are usually interested in only one of the tails of the distribution, indicating more disagreement, rather than less disagreement than one expects by chance. – Frank Harrell Feb 06 '12 at 21:50
  • 6
    Supporting the 2-tailed view: "The two-tail probability beyond +/- z for the standard normal distribution equals the right-tail probability above z-squared for the chi-squared distribution with df=1. For example, the two-tailed standard normal probability of .05 that falls below -1.96 and above 1.96 equals the right-tail chi-squared probability above (1.96)squared=3.84 when df=1." Agresti, 2007 (2nd ed.) page 11 – Joel W. Feb 07 '12 at 02:30
  • 6
    That's right. Squaring a z-score yields a chi-squared variate. For example, a z of 2 (or, -2!) when squared equals 4, the corresponding chi-squared value. The two-tailed p-value associated with a z-score of 2 is .04550026; and the one-tailed p-value associated with a chi-squared value of 4 (df=1) is .04550026. A two-tailed z test corresponds to a one-tailed chi-squared test. Looking at the left tail of the chi-squared distribution would correspond to looking for z-scores that are closer to z=0 than you might expect by chance. – gung - Reinstate Monica Feb 07 '12 at 02:52
  • 2
    The key phrase in that quote is "equals the right-tail probability... for the chi-squared distribution". – gung - Reinstate Monica Feb 07 '12 at 03:01
  • If a non-directional (aka 1-tailed) chi-sq is equivalent to a TWO-tailed z test, and you want to use the chi-sq in lieu of a ONE-tailed z test, it would seem that you should change the alpha level of the chi-sq test so it reflects the square of a ONE-tailed z test. So, if the critical value for a 1-tailed z test is 1.65, then the corresponding critical value for chi-sq is the square of that, or 2.72 (not 3.84). So, the authors' statement that it is an error to adjust a 3.84 chi-sq seems incorrect. Or am I missing something? – Joel W. Feb 07 '12 at 16:30
  • 1
    You can set alpha wherever you like, so long as it's done a-priori & clearly explained when results are reported. Setting the critical value for chi-square to some specific # is the same as setting alpha--done beforehand & stated explicitly it's perfectly valid. Thus, one could say "we set alpha at .10, our critical chi-square is 2.7, our obtained p is .06", & that's OK, but one could not say "we set alpha at .05, our critical chi-square is 3.8, our obtained p is .03". Doubling the alpha is fine, dividing p by 2 is not. As I said above, in this sense, both you & the authors are right. – gung - Reinstate Monica Feb 07 '12 at 17:52
  • Hmm. If I understand what you are saying, you can do a z-test (say, comparing proportions) and report a one-tailed p value, but you cannot do the mathematically identical chi-square and report that same one-tailed p value. This does not make sense to me. – Joel W. Feb 10 '12 at 00:28
  • I'm sorry this is causing so much trouble; some topics in statistics just aren't very intuitive. Nonetheless, we've been going around and around on this for some time, and there's going to be a limit on how much I can clarify this for you in comments. You may want to see if you can work with a professional statistician, or take some formal classes, to get further assistance. The Khan Academy videos recommended above might be a good place to start. Good luck. – gung - Reinstate Monica Feb 10 '12 at 04:59
  • Let's say an engineer told me that by one calculation a bridge could be determined to be capable of supporting X tons and by a mathematically equivalent calculation the bridge could be determined to be capable of supporting 2X tons. Let's say the engineer further told me both answers are correct, it just depends on the approach you took. I would find that puzzling, too. Mathematically equivalent approaches logically should not result in different conclusions. – Joel W. Feb 13 '12 at 17:06
  • @gung I just came across this great post of yours, which made me realize that I don't understand the concept of goodness-of-fit as compared to other statistical tests. I don't see this addressed straight on in the site, and the Wikipedia entry is not that great (more a list). Can you suggest if and how it would be a good way to ask this? "Intuition..."? – Antoni Parellada Sep 12 '15 at 17:38
  • @AntoniParellada, it's hard to say without knowing what your question is. You might just ask & the question can be refined with comments if necessary. – gung - Reinstate Monica Sep 12 '15 at 17:45
  • what about in this case http://stats.stackexchange.com/questions/223560/how-to-define-a-rejection-region-when-theres-no-ump ? – An old man in the sea. Jul 16 '16 at 09:44
  • @Anoldmaninthesea., you can use a 2-tailed test in that case, note RayKoopman's answer below. – gung - Reinstate Monica Jul 16 '16 at 12:33
  • gung, you're right. However, in both, I don't see a worry over the possibility that the fit is too good that would justify the two-tailed test. Then why the two-tailed test? – An old man in the sea. Jul 16 '16 at 14:27
  • 1
    @Anoldmaninthesea., if you're testing a variance against a null value there is no fit that's 'too good'. The observed variance could be too low to have come from the reference value, or too high to have done so, but this is different from a fit that's too poor or too good. – gung - Reinstate Monica Jul 16 '16 at 14:45
  • gung, but in that case why not choose a one-tailed with same test size? it seems that there's no consideration w.r.t. power. – An old man in the sea. Jul 16 '16 at 15:09
  • 1
    @Anoldmaninthesea., if you want to test if the observed variance is less than some value, you should use a 1-tailed test. If you want to test if the observed variance is not equal to some value, you need to do a 2-tailed test. – gung - Reinstate Monica Jul 16 '16 at 15:14
  • @gung-ReinstateMonica, I agree with you, but I'm confused by the fact that statistical software allows us to make a chi-square two-sample test one-sided or two-sided. (For example, in SAS: PROC POWER; TWOSAMPLEFREQ TEST=pchi ODDSRATIO=3 REFPROPORTION=0.25 SIDES=1 2 POWER=. ALPHA=0.05 NTOTAL=400; RUN;). I understand that this 'SIDES' option is modifying the hypotheses (i.e., we're never using both tails of the chi square distribution). But I don't understand how the one-sided calculation is done. Is it square-rooting the chi-squared test statistic and then running a one-sided z-test? – jdcrossval Feb 01 '23 at 15:59
  • @JoelW. this is partly a semantic issue. The term "one-sided" can be attributed to two different things: (a) the alternative hypothesis and (b) the number of tails under the probability distribution. Using R, pnorm(-1.96, 0, 1, lower=TRUE) + pnorm(1.96, 0, 1, lower=FALSE) is equal to pchisq(1.96^2, 1, lower=FALSE). In other words, a two-tailed area under a z distribution is equal to a one-tailed area under the chi-squared distribution (see the sketch after this comment thread for a numerical check). You are exclusively attributing "number of sides" to the hypotheses, and others are attributing the "number of sides" to the probability calculation. – jdcrossval Feb 01 '23 at 16:18
  • @JoelW., the two-sided alternative hypothesis that two proportions aren't equal can be calculated equivalently using (a) a two-tailed z probability or (b) a one-tailed χ2 probability. What complicates matters is that the χ2 random variable can be conceptualized as representing the squared error between an observed distribution and an assumed distribution. Thus, we can often re-interpret the one-tailed chi-square probability as a one-sided alternative hypothesis relating to goodness of fit. – jdcrossval Feb 01 '23 at 16:19
  • @JoelW., but it's context-dependent (the χ2 test is used for more than just comparing proportions). Your Q was trying to be agnostic of context, but I wonder if you're only thinking about scenarios where we're comparing two proportions. Did you mean to ask about the scenario where we have a single sample, and we're evaluating whether that sample's variance is equal to some null variance that we think might exist in the broader population? – jdcrossval Feb 01 '23 at 16:21
  • @gung-ReinstateMonica, in general, isn't the one-sided z-tail pnorm( abs(x), 0, 1, lower.tail=FALSE ) equal to half of the upper chi-square tail 0.5 * pchisq(x^2, df=1, lower.tail=FALSE)? Is that not what SAS is doing for a "one-sided" chi-square test of proportions? Dividing the upper tail probability in half? – jdcrossval Feb 01 '23 at 16:29
  • @jdcrossval, you should ask a new question for this. – gung - Reinstate Monica Feb 02 '23 at 01:21
  • @gung-ReinstateMonica, I found my answer. In SAS and R, the one-sided chi-square test is computed using 0.5 * area in the upper tail. This is equal to the p-value acquired from a single tail on the corresponding z-distribution. But the terminology about one-sided vs. two-sided does get very confusing, because there's at least one test (the test comparing sample variance to a null variance value) that can use the left and right chi square tails. In that case, the upper tail alone does correspond to a one-sided H1. In goodness-of-fit cases, the upper tail corresponds to a two-sided H1. – jdcrossval Feb 02 '23 at 22:53
  • @gung-ReinstateMonica, overall I think your answer is good but requires a few qualifications: (1) You're only talking about χ² tests of goodness-of-fit or of association. You're not talking about the test that compares sample variance to a null value. This chi square test of variance can be one-sided or two-sided; the one-sided test uses the upper or lower χ² tail, and the two-sided test uses both χ² tails. – jdcrossval Feb 03 '23 at 02:06
  • (2) For the χ² tests, you are using "one-sided" to mean "only the upper χ² tail is used." This is true for the χ² tests you're thinking of, but for those tests, the alternative hypotheses can still be one-sided or two-sided. Statistical software (like R and SAS) offer the option for two-sided or one-sided χ² tests of association, which use the upper tail or half the upper tail, respectively. – jdcrossval Feb 03 '23 at 02:06
  • @jdcrossval, you should ask a new question for this. – gung - Reinstate Monica Feb 03 '23 at 12:37
  • @gung-ReinstateMonica, I don't have any remaining Qs. My last 2 comments are remarks about your original answer. Your answer says chi square tests are always one-sided because they only use the upper tail. That's not right. Two-sample chi square tests of association can use (A) the entire upper tail or (B) half the upper tail. Scenario A corresponds to a 2-sided H1 and scenario B corresponds to a 1-sided H1. Your answer doesn't capture that common possibility (which R and SAS are coded to enable). – jdcrossval Feb 03 '23 at 23:41
  • @gung-ReinstateMonica, your answer also says "If we were to use the chi-squared test as a two-sided test, we would also be worried if the statistic were too far into the left side of the chi-squared distribution. This would mean that we are worried the fit might be too good." This is also not correct. The chi square test of variance looks at the lower and upper χ² tails. But the reason isn't because we're concerned with too good a fit, as you've suggested. The reason is because we are concerned that the observed variance is significantly larger or significantly smaller than the null variance. – jdcrossval Feb 03 '23 at 23:50
  • @gung-ReinstateMonica, but I do think your answer overall is making a good point. It's just too broad (as written) because it essentially treats all chi square tests as "goodness-of-fit" tests. That simplification leads to an incorrect characterization of (a) the chi square test of variance and (b) the one-sided chi square test of association. – jdcrossval Feb 03 '23 at 23:50
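
As a numerical check of the identity discussed in this comment thread (base R only; the 1.96 cutoff is just the familiar two-sided 5% z value): the two-tailed z probability equals the upper chi-squared tail, and a one-tailed z probability equals half of that tail:

    2 * pnorm(-1.96)                                  # both z tails: ~0.050
    pchisq(1.96^2, df = 1, lower.tail = FALSE)        # one chi-squared tail: identical

    pnorm(-1.96)                                      # one z tail: ~0.025
    0.5 * pchisq(1.96^2, df = 1, lower.tail = FALSE)  # half the chi-squared tail: identical
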
16

Is chi-squared always a one-sided test?

That really depends on two things:

  1. what hypothesis is being tested. If you're testing the variance of normal data against a specified value, it's quite possible to be dealing with the upper or lower tail of the chi-square (one-tailed), or both tails of the distribution. We have to remember that $\frac{(O-E)^2}{E}$-type statistics are not the only chi-square tests in town!

  2. whether people are talking about the alternative hypothesis being one- or two-sided (because some people use 'two-tailed' to refer to a two-sided alternative, irrespective of what happens with the sampling distribution of the statistic). This can sometimes be confusing. So for example, if we're looking at a two-sample proportions test, someone might in the null write that the two proportions are equal and in the alternative write that $\pi_1 \neq \pi_2$, and then speak of it as 'two-tailed', but test it using a chi-square rather than a z-test, and so only look at the upper tail of the distribution of the test statistic. So it's two-tailed in terms of the distribution of the difference in sample proportions, but one-tailed in terms of the distribution of the chi-square statistic obtained from that -- in much the same way that if you make your t-test statistic $|T|$, you're only looking at one tail in the distribution of $|T|$.

Which is to say, we have to be very careful about what we mean to cover by the use of 'chi-square test' and precise about what we mean when we say 'one-tailed' vs 'two-tailed'.

In some circumstances (two I mentioned; there may be more), it may make perfect sense to call it two-tailed, or it may be reasonable to call it two-tailed if you accept some looseness of the use of terminology.

It may be a reasonable statement to say it's only ever one-tailed if you restrict discussion to particular kinds of chi-square tests.
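
A minimal R sketch of the two-sample proportions case (the 2x2 counts are invented): the two-sided z-test on the difference in proportions and the upper-tail-only Pearson chi-square give identical p-values, illustrating the 'two-tailed in one sense, one-tailed in the other' point above:

    tab <- matrix(c(30, 70, 45, 55), nrow = 2, byrow = TRUE)  # rows = groups; cols = success, failure
    x <- tab[, 1]; n <- rowSums(tab)

    p_pool <- sum(x) / sum(n)                                 # pooled proportion under H0
    z <- (x[1]/n[1] - x[2]/n[2]) /
         sqrt(p_pool * (1 - p_pool) * (1/n[1] + 1/n[2]))

    chisq.test(tab, correct = FALSE)$statistic                # equals z^2
    2 * pnorm(-abs(z))                                        # two z tails
    pchisq(z^2, df = 1, lower.tail = FALSE)                   # one chi-squared tail: same value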

Glen_b
  • 282,281
  • what about this one? http://stats.stackexchange.com/questions/223560/how-to-define-a-rejection-region-when-theres-no-ump – An old man in the sea. Jul 16 '16 at 09:45
  • Thank you very much for mentioning the variance test. That is actually a quite interesting use of the test, and also the reason why I ended up on this page ^^ – Tobbey Sep 09 '19 at 14:04
  • @Glen_b can you please elaborate on this statement " it's two tailed in terms of the distribution of the difference in sample proportions, but one tailed in terms of the distribution of the chi-square statistic obtained from that" ? – KKW Apr 18 '23 at 13:32
  • If you look at $\hat{p}_1-\hat{p}_2$ and you want to reject whether that's large and positive or small (i.e. large and negative), you're interested in both tails of that difference in proportions. If you did a z-test for example, you'd do a two tailed test. Both of those tail-cases result in large chi-squared values, so if you're doing the chi-squared test you reject for large values. In that example, small values of the chi-squared statistic occur when the sample proportions are almost identical (in the center of the null distribution of $\hat{p}_1-\hat{p}_2$); you don't want to reject those. – Glen_b Apr 18 '23 at 22:22
  • By contrast, if you're doing a variance test, you'd want to reject whether $s^2/\sigma^2$ was large or small (it's values in the middle of that distribution -- near $1$ -- that are consistent with $H_0$); both tails of that ratio are telling you the variance differs from the variance in the null. This translates directly to the chi-squared statistic being close to its d.f. when $H_0$ is true and either higher or lower than it when $H_0$ is false. In that case you want to look into both tails of the chi-squared distribution. (So we must be clear about what we're talking about the tails of.) – Glen_b Apr 18 '23 at 22:32
7

The chi-square test $(n-1)s^2/\sigma^2$ of the hypothesis that the variance is $\sigma^2$ can be either one- or two-tailed in exactly the same sense that the t-test $(m-\mu)\sqrt{n}/s$ of the hypothesis that the mean is $\mu$ can be either one- or two-tailed.
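
A short R sketch of that parallel (simulated data; the null values $\mu=0$ and $\sigma^2=1$ are arbitrary): both tests admit one- and two-sided alternatives, the chi-square version just has an asymmetric reference distribution, so its two-sided p-value is commonly taken as twice the smaller tail:

    set.seed(42)
    x <- rnorm(25, mean = 0.3, sd = 1.4)
    n <- length(x)

    # t-test of H0: mu = 0
    t_stat <- mean(x) * sqrt(n) / sd(x)
    p_t_upper <- pt(t_stat, df = n - 1, lower.tail = FALSE)        # H1: mu > 0
    p_t_two   <- 2 * pt(-abs(t_stat), df = n - 1)                  # H1: mu != 0

    # chi-square test of H0: sigma^2 = 1
    chi_stat  <- (n - 1) * var(x) / 1
    p_c_upper <- pchisq(chi_stat, df = n - 1, lower.tail = FALSE)  # H1: sigma^2 > 1
    p_c_lower <- pchisq(chi_stat, df = n - 1)                      # H1: sigma^2 < 1
    p_c_two   <- 2 * min(p_c_upper, p_c_lower)                     # H1: sigma^2 != 1 (doubling convention)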

Ray Koopman
  • 2,303
1

I too have had some problems coming to grips with this question, but after some experimentation it seemed as if my problem was simply in how the tests are named.

In SPSS, for example, a chi-square test can be added to a 2x2 table. The output then shows several columns of p-values: one set for the "Pearson Chi-Square", "Continuity Correction", etc., and another pair of columns for Fisher's exact test, with one column for a 2-sided test and another for a 1-sided test.

I first thought the 1- and 2-sided labels denoted a 1- or 2-sided version of the chi-square test, which seemed odd. It turned out, however, that they denote the underlying formulation of the alternative hypothesis in the test of a difference between proportions, i.e. the z-test. So the often reasonable 2-sided test of proportions is achieved in SPSS with the chi-square test, where the chi-square statistic is compared with a value in the (1-sided) upper tail of the distribution. I guess this is what other responses to the original question have already pointed out, but it took me some time to realize just that.

By the way, the same kind of formulation is used in openepi.com and possibly other systems as well.
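
The same naming convention can be seen in R's prop.test, which reports a chi-square statistic but lets you choose the sidedness of the alternative (the 2x2 counts below are made up): the one-sided p-value is half the two-sided (upper chi-square tail) one when the observed difference points in the hypothesized direction:

    tab <- matrix(c(30, 70, 45, 55), nrow = 2, byrow = TRUE)      # made-up counts: successes, failures
    two <- prop.test(tab, correct = FALSE)                        # H1: p1 != p2
    one <- prop.test(tab, alternative = "less", correct = FALSE)  # H1: p1 < p2

    two$p.value                              # from the upper chi-square tail
    one$p.value                              # half of it here, since p1_hat < p2_hat
    all.equal(one$p.value, two$p.value / 2)  # TRUE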

Robert L
  • 137
  • see http://stats.stackexchange.com/questions/171074/chi-square-test-why-is-the-chi-squared-test-a-one-tailed-test/171084#171084 –  Sep 13 '15 at 06:56
1

@gung's answer is correct and is the way discussion of $\chi^2$ should be read. However, confusion may arise from another reading:

It would be easy to interpret a $\chi^2$ as 'two-sided' in the sense that the test statistic is typically composed of a sum of squared differences from both sides of an original distribution.

This reading would be to confuse how the test statistic was generated with which tails of the test statistic are being looked at.
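
A tiny R simulation of that point (nothing beyond base R): squaring folds both tails of the underlying normal into the single upper tail of the chi-squared statistic, so a one-tailed test of the statistic still responds to deviations in both directions:

    set.seed(1)
    z <- rnorm(1e5)                  # deviations on both sides of zero
    q <- z^2                         # squaring folds both tails into one statistic
    mean(q > qchisq(0.95, df = 1))   # ~0.05: rejections come from |z| > 1.96, either sign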

conjectures
  • 4,226
  • Could you elaborate on what a "side of an original distribution" would be? It's not even evident what that "original distribution" refers to nor how it is related to the chi-squared statistic as computed from data. – whuber Jun 15 '15 at 16:29
  • For example, a sum of $n$ independent normals squared is $\chi^2$. The normals are the 'original' distribution. The $\chi^2$ stat incorporates information from both tails of the underlying normal distribution. – conjectures Jun 15 '15 at 16:32
  • OK, but I still cannot figure out what you are contrasting that with. Could you provide an example of a non-two-sided test statistic that could be used in ANOVA and show how it is connected with the tails of some distribution? – whuber Jun 15 '15 at 16:39
  • I'm not contrasting it with anything. I'm pointing out a reason why people might get confused about the one-sided/two-sided jargon in the context of $\chi^2$. It's straightforward for experts to see that the $\chi^2$ test itself is usually a one-sided test on the calculated stat. Others may have some data and be thinking about deviations from the mean in both directions, which often get rolled up into a $\chi^2$ stat. They will have heard things along the lines of 'thinking of deviations from the mean in both directions=two-sided test'. Hence a misunderstanding. – conjectures Jun 15 '15 at 16:52
  • I'm asking for a contrast only to help understand what you are trying to describe. I haven't been able to determine what that is yet. – whuber Jun 15 '15 at 17:08
  • The misunderstanding has nothing in particular to do with ANOVA, but everything to do with why someone might have the idea that, 'The chi-squared test is a two-sided test.' To reiterate because the OP may have in mind that the $\chi^2$ stat sums deviations from the mean in both directions. – conjectures Jun 15 '15 at 17:14
  • see http://stats.stackexchange.com/questions/171074/chi-square-test-why-is-the-chi-squared-test-a-one-tailed-test/171084#171084 –  Sep 13 '15 at 06:57
0

The $\chi^2$ test of variance can be one- or two-sided. The test statistic is $(n-1)\frac{s^2}{\sigma_0^2}$, and the null hypothesis is that the population standard deviation $\sigma$ equals a reference value $\sigma_0$. The alternative hypothesis could be: (a) $\sigma > \sigma_0$, (b) $\sigma < \sigma_0$, (c) $\sigma \neq \sigma_0$. The p-value calculation must take the asymmetry of the distribution into account.

-2

The $\chi^2$ and F tests are one-sided tests because we never have negative values of $\chi^2$ and F. For $\chi^2$, the squared differences between observed and expected counts are divided by the expected counts, so chi-square is always a positive number (or close to zero, on the right side, when there is no difference). Thus, this test is always a right-sided, one-sided test. The explanation for the F test is similar.

For the F test, we compare the between-group variance to the within-group variance (the mean square error, $\frac{SS_w}{df_w}$). If the between and within mean sums of squares are equal, we get an F value of 1.

Since it is essentially a ratio of sums of squares, the value never becomes a negative number. Thus, we don't have a left-sided test, and the F test is always a right-sided, one-sided test. Check the figures of the $\chi^2$ and F distributions: they are always positive. For both tests, you are looking at whether the calculated statistic lies to the right of the critical value.

[Figure: chi-square and F distributions]

Ferdi
  • 5,179
Daniel
  • 1
  • 2
    A test statistic doesn't need to take negative values for us to consider both tails. Consider an F test for the ratio of two variances, for example. – Glen_b Mar 03 '17 at 06:25
  • The F test is a one-sided test, Glen_b. – Daniel Mar 03 '17 at 06:49
  • 3
    The F test for equality of variances, which has a statistic that's the ratio of the two variance estimates, is NOT one-sided; there's an approximation to it which places the larger of the two sample variances on the numerator, but it's only really right if the df are the same. But if you don't like that, there are any number of other examples. The statistic for the rank sum test cannot be negative, but the test is two-tailed. I can supply other examples if needed (see the sketch after these comments). – Glen_b Mar 03 '17 at 07:19
  • @Ferdi Unfortunately there's something clearly wrong with the example there -- it says it's two-sided but then implies it only rejects for large values of the statistic. If $\sigma_1^2$ was less than $\sigma_2^2$ we'd be almost never observing a large value for the ratio, so the statistic would only tend to reject when $\sigma_1^2>\sigma_2^2$ making it one-sided. – Glen_b Mar 03 '17 at 11:27
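
For completeness, here is the variance-ratio counterexample from these comments sketched in R with simulated data: var.test runs a two-sided F test by default, even though the F statistic itself can never be negative.

    set.seed(7)
    x <- rnorm(30, sd = 1)
    y <- rnorm(30, sd = 2)

    var.test(x, y)                         # two-sided F test on s_x^2 / s_y^2
    var.test(x, y, alternative = "less")   # one-sided version (H1: ratio < 1)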