
I am used to interpreting Cohen's d in terms of small/medium/large effect sizes (cf. Cohen 1988 or Sawilowsky 2009), where, for example, a Cohen's d of 0.2 corresponds to a small effect size. I am looking for something similar for the relative risk. Craun and Calderon reference Monson (1990) and say:

An increased risk of less than 50% (RR=1.0–1.5) or a decreased risk of less than 30% (RR=0.7–1.0) is considered by many epidemiologists to be either a weak association or no association

I don't have easy access to the book, and more importantly I don't have enough domain knowledge to interpret the statement. Is an RR of 1.2-1.5 typically called a "weak" increase?
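
For concreteness, this is how I understand the two quantities to be computed (a minimal Python sketch with made-up numbers, just to fix ideas; the values are purely illustrative):

```python
import numpy as np

# Cohen's d: standardized mean difference between two groups
# (made-up continuous outcome data, purely illustrative)
treated = np.array([5.1, 6.3, 5.8, 6.0, 5.5])
control = np.array([5.0, 5.6, 5.2, 5.9, 5.3])
pooled_sd = np.sqrt(((len(treated) - 1) * treated.var(ddof=1) +
                     (len(control) - 1) * control.var(ddof=1)) /
                    (len(treated) + len(control) - 2))
cohens_d = (treated.mean() - control.mean()) / pooled_sd

# Relative risk: ratio of event risks in exposed vs. unexposed
# (made-up 2x2 counts: events / totals)
risk_exposed = 30 / 200      # 15% risk among exposed
risk_unexposed = 20 / 200    # 10% risk among unexposed
relative_risk = risk_exposed / risk_unexposed   # 1.5

print(f"Cohen's d = {cohens_d:.2f}, RR = {relative_risk:.2f}")
```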

Monson R. Occupational Epidemiology, 2nd edition. Boca Raton, Florida: CRC Press Inc., 1990.

StrongBad
    Effect size does not have a one-to-one relationship with the (magnitude of the) relative risk. Effect size, in Cohen's treatment, strictly has to do with power calculation using a theoretical relative risk, and it is a complicated function of parameters beyond just the observed response frequencies. – AdamO Jun 26 '19 at 14:24
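
A quick illustration of the comment's point (a Python sketch not taken from the original posts; the baselines are arbitrary): Cohen's effect size for two proportions, Cohen's h, depends on the baseline risk, so the same RR corresponds to very different effect sizes at different baselines.

```python
import numpy as np

def cohens_h(p1, p2):
    """Cohen's effect size for two proportions (arcsine-transformed difference)."""
    return 2 * np.arcsin(np.sqrt(p1)) - 2 * np.arcsin(np.sqrt(p2))

rr = 1.5
for baseline in (0.01, 0.10, 0.40):
    exposed = rr * baseline          # same relative risk...
    h = cohens_h(exposed, baseline)  # ...but a different effect size
    print(f"baseline risk {baseline:.2f}: RR = {rr}, Cohen's h = {h:.3f}")
# baseline 0.01 -> h ~ 0.05 (tiny); baseline 0.40 -> h ~ 0.40 (medium-ish)
```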

1 Answer


You can't speak of magnitude of effect in isolation. This is a common source of confusion: we usually encounter the term in a discipline-specific setting, and the hand-wavy discussions that follow quickly lead one to wonder whether there is a literature on some abstract biostatistical topic called "magnitude of effect".

But no: the literature or scientific setting itself is the basis for deciding what the effect sizes are. Predefining these effects is an important part of study planning, and content experts should speak to the state of knowledge to form these definitions (or else people are just blabbering nonsense). What is usually lacking is an explicit conversation to settle what a meaningful effect, and consequently what small or large effects, might be.

That said, the notion that an RRR has to be 50% or greater to matter is somewhere between hysterical and offensive, written by someone clearly out of touch with epidemiology, if not with reality. Go ahead and read a few abstracts from AJE's most recent issue to find examples to the contrary.

Some considerations:

  • The burden of the disease or condition in question. "Burden" is intentionally loosely defined here and is a catchall for prevalence, mortality, cost, severity, QALYs, disability, QoL, etc.

  • The promise of an intervention or treatment, both in terms of its effectiveness and in terms of its cost, availability, etc. Also the alternatives: other treatments, and the lag, harm, and specificity involved in indicating the treatment (e.g., do you need a blood panel and weeks of waiting without treatment when an off-the-shelf alternative could have been used sooner, better, and for less?).

  • Lastly, the design of the study. Sometimes the RR is a biased estimator. That is not a problem in itself, but in exploratory analyses or biased samples we have to interpret results cautiously, while still proceeding to confirm strong or promising signals.

As an example, fish oil has recently and famously been "discredited" as beneficial to heart health. The study followed a cohort of 5,000 people, 300 deaths occurred, and the hazard ratio was estimated at 0.9, with a 95% CI just straddling the 1.0 null. Now, fish oil is dirt cheap and not toxic whatsoever, and if I saw those results I'd wonder whether the study was merely underpowered. A 10% reduction in mortality is a big deal. If it were a $10,000 medical procedure to transplant stool, I'd be more inclined to say no.
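
To put rough numbers on the "underpowered" suspicion, here is a back-of-the-envelope Python sketch (not part of the original study; it uses the standard Schoenfeld event-count approximation for a Cox/log-rank analysis and assumes 1:1 allocation, with the 300-event figure quoted above):

```python
from math import log, sqrt
from scipy.stats import norm

hr = 0.9          # hypothesized hazard ratio (a 10% mortality reduction)
events = 300      # approximate number of deaths observed
alpha = 0.05

# Schoenfeld approximation: SE(log HR) ~ sqrt(4 / events) with 1:1 allocation
se_log_hr = sqrt(4 / events)
z = abs(log(hr)) / se_log_hr
power = norm.cdf(z - norm.ppf(1 - alpha / 2))
print(f"approximate power to detect HR = {hr}: {power:.0%}")   # roughly 15%

# Number of events needed for 80% power at the same HR
z_needed = norm.ppf(1 - alpha / 2) + norm.ppf(0.80)
events_needed = 4 * (z_needed / log(hr)) ** 2
print(f"events needed for 80% power: {events_needed:.0f}")     # roughly 2,800
```

Under these assumptions, 300 events give only about 15% power to detect a true HR of 0.9, which is why a CI straddling 1.0 says very little against a 10% mortality reduction.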

AdamO