Tuesday, October 23, 2012

A dip in the data pool



Sometimes, people combine data that really don't belong together - conflict all over the place!

The statistical test shown by the I² tries to pin down how much conflict there is in a meta-analysis. (A meta-analysis pools multiple data sets. Quick intro about meta-analysis here.)

I² is one way to measure "combinability": another is the chi-squared test (χ² or Chi²).

You will often see the I² in the forest plot. It is one way of measuring how much inconsistency there is in the results of different sets of data. That's called heterogeneity. The test is gauging whether there is more difference between the results of the studies than you would expect just because of chance.

Here's a (very!) rough guide to interpreting the I² result: 0-40% might be ok, 75% or more is "considerable" (that is, an awful lot!). (That's from section 9.5.2 here.)
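If you like to see the arithmetic, here's a minimal sketch in Python (with made-up numbers) of how Cochran's Q, its chi-squared p-value, and I² can be calculated from each study's effect estimate and standard error. The function name and the example figures are mine, purely for illustration:

```python
import numpy as np
from scipy import stats

def heterogeneity(effects, std_errors):
    """Cochran's Q, its chi-squared p-value, and I² for a set of studies."""
    effects = np.asarray(effects, dtype=float)
    weights = 1.0 / np.asarray(std_errors, dtype=float) ** 2
    pooled = np.sum(weights * effects) / np.sum(weights)   # fixed-effect pooled estimate
    q = np.sum(weights * (effects - pooled) ** 2)          # Cochran's Q
    df = len(effects) - 1
    p_value = stats.chi2.sf(q, df)                         # chance of a Q this big if there's no real heterogeneity
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0    # I² as a percentage
    return q, p_value, i2

# Four hypothetical trials: log odds ratios and their standard errors
q, p, i2 = heterogeneity([0.10, -0.05, 0.30, 0.80], [0.20, 0.25, 0.15, 0.18])
print(f"Q = {q:.2f}, p = {p:.2f}, I² = {i2:.0f}%")
```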


Differences might be responsible for contradictory results - including differences in the people in the trials, the way they were treated, or the way the trials were done. Too much heterogeneity, and the trials really shouldn't be together. But heterogeneity isn't always a deal breaker. Sometimes it can be explained.

Want some in-depth reading about heterogeneity in systematic reviews? Here's an article by Paul Glasziou and Sharon Sanders from Statistics in Medicine [PDF].

Or would you rather see another cartoon about heterogeneity? Then check out the secret life of trials.

See also my post at Absolutely Maybe: 5 tips to understanding data in meta-analysis.

(Some of these characters also appear here.)

[Updated 4 July 2017.]


Thursday, October 18, 2012

You have the right to remain anxious....


"It's extremely hard not to have a diagnosis," according to Steve Woloshin, this week at the 2012 NIH Medicine in the Media course for journalists. Allen Frances talked about over-diagnosis of mental disorders (read more about that in my blog at Scientific American online).

The National Cancer Institute's Barry Kramer tackled the issue of over-diagnosis from cancer screening. He explained lead-time bias using an image of Snidely Whiplash tying someone to train tracks. Ineffective screening, he said, is like a pair of binoculars for the person tied to the tracks: you can see the train coming at you sooner, but it doesn't change the moment of impact.

Survival rates after a screening diagnosis increase even when no one lives a day longer: people are simply counted as having cancer for longer when the diagnosis comes long before any symptoms. Screening is effective, on the other hand, when earlier detection means more people do well than they would have if they'd only gone to the doctor once symptoms appeared.
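A toy example of that arithmetic, with completely made-up numbers:

```python
# Lead-time bias in miniature: the same person dies at the same age,
# but an earlier diagnosis stretches out the measured "survival after diagnosis".
age_at_death = 70
age_at_symptom_diagnosis = 68     # cancer found when symptoms appear
age_at_screening_diagnosis = 63   # the same cancer found 5 years earlier by screening

print("Survival after symptomatic diagnosis:",
      age_at_death - age_at_symptom_diagnosis, "years")    # 2 years
print("Survival after screening diagnosis:  ",
      age_at_death - age_at_screening_diagnosis, "years")  # 7 years
# "Survival" jumps from 2 to 7 years, yet no one lived a day longer.
```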


Read more in The Disease Prevention Illusion: A Tragedy in Five Parts.




Monday, October 15, 2012

Breaking news: space-jumping safety study



Making a good impression with headlines based on tiny preliminary studies? Too easy!

Other ways to fall into the trap of exaggerated research findings: reports of laboratory or animal studies that don't mention their limitations, studies with no comparison group, and conference presentations with inadequate data. These were some of the key points made by Steve Woloshin at the first full day of NIH's Medicine in the Media course, happening now in Potomac, near Washington DC.

Read more here if you want to know more about the pitfalls of small study size and how to know if a study was big enough to be meaningful.

Update 31 July 2016: And now there's jumping from a plane without a parachute.

Friday, October 12, 2012

The Forest Plot Trilogy - a gripping thriller concludes



Forest plots, funnel plots - and what's with the mysterious diamond symbol, lurking like a secret sign, in meta-analyses? Meta-analysis is a statistical technique for combining the results of studies. It is often used in systematic reviews (and in non-systematic reviews, too).

A forest plot is a graphical way of presenting the results of each individual study and the combined result. The diamond is one way of showing that combined result. Here's a representation of a forest plot, with 4 trials (a line for each). The 4th trial finds the treatment better than what it's compared to; the other 3 had equivocal results, because their confidence intervals cross the vertical line of no effect.



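If you'd like to tinker with one yourself, here's a rough sketch (Python with matplotlib, numbers invented for illustration) of how a forest plot like that could be drawn: a square and a horizontal confidence-interval line for each trial, a dashed vertical line of no effect, and a diamond for the combined result. It's not the code behind the cartoon, just the idea:

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical odds ratios and 95% confidence intervals for 4 trials,
# plus a pooled estimate shown as a diamond (all numbers invented).
labels  = ["Trial 1", "Trial 2", "Trial 3", "Trial 4", "Pooled"]
effects = [0.95, 1.10, 0.85, 0.60, 0.80]
lower   = [0.70, 0.80, 0.55, 0.45, 0.68]
upper   = [1.30, 1.50, 1.30, 0.80, 0.94]

fig, ax = plt.subplots()
y = np.arange(len(labels))

# A square at each trial's estimate, with a horizontal line for its confidence interval
for i in range(4):
    ax.plot([lower[i], upper[i]], [y[i], y[i]], color="black")
    ax.plot(effects[i], y[i], "s", color="black")

# The diamond for the combined result: its width is the confidence interval
ax.fill([lower[4], effects[4], upper[4], effects[4]],
        [y[4], y[4] + 0.2, y[4], y[4] - 0.2], color="black")

ax.axvline(1.0, linestyle="--", color="grey")   # vertical line of no effect (odds ratio = 1)
ax.set_xscale("log")
ax.set_yticks(y)
ax.set_yticklabels(labels)
ax.invert_yaxis()                               # Trial 1 at the top, pooled diamond at the bottom
ax.set_xlabel("Odds ratio (log scale)")
plt.show()
```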
A funnel plot is one way of checking for publication bias: whether or not there may be unpublished studies. Funnel plots can look kind of like the sketches below. The first shows a pretty normal distribution of studies - each blob is a study. It's roughly symmetrical: small under-powered studies spread around, with both positive and negative results.



This second one is asymmetrical or lopsided, suggesting there might be some studies that didn't show the treatment works - but they weren't published:


        Gaping hole where negative studies should be
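Here's a minimal sketch of that idea too (Python with matplotlib, simulated data): one panel with all the simulated studies, and a second where the small studies that didn't favour the treatment have been dropped, leaving a lopsided funnel with a gap on one side. The simulation rule is invented purely for illustration:

```python
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)

# Simulate 40 hypothetical studies with a true effect of zero:
# small studies have larger standard errors, so they scatter more widely.
n_studies = 40
std_errors = rng.uniform(0.05, 0.5, n_studies)
effects = rng.normal(0.0, std_errors)

# Mimic publication bias: drop the small studies that didn't favour the treatment
published = ~((std_errors > 0.25) & (effects < 0))

fig, axes = plt.subplots(1, 2, sharex=True, sharey=True)
for ax, keep, title in [(axes[0], np.ones(n_studies, bool), "All studies"),
                        (axes[1], published, "Small negative studies missing")]:
    ax.scatter(effects[keep], std_errors[keep], color="black")
    ax.axvline(0.0, linestyle="--", color="grey")   # line of no effect
    ax.set_title(title)
    ax.set_xlabel("Effect estimate")
axes[0].set_ylabel("Standard error")
axes[0].invert_yaxis()   # convention: the biggest, most precise studies sit at the top
plt.show()
```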



(This post uses snapshots from slides I'll be using to explain systematic reviews at the 2012 NIH Medicine in the Media course that's starting this weekend. It's several days of in-depth training in evidence and statistics for journalists. This year it's being held in Potomac, just near Washington DC. And here's a post on the start of the course that I wrote for Scientific American online.)