
Journalist circulates pseudo-science study to prove a point about science journalism

This study on chocolate and weight loss turned out to be a sticky mess.


Cynthia McKelvey


A journalist claimed in a post on io9 on Wednesday that he fooled millions of people into believing that chocolate could aid in weight loss.


John Bohannon said he created a fake website and a fake name, but he holds a real Ph.D. in molecular biology (though not one related to nutrition or diet), and he conducted a real clinical trial in Germany. Oh, and he got a whole bunch of journalists and their readers to believe his faked findings that a low-carb diet combined with eating dark chocolate could lead to faster weight loss.

“I thought it was sure to fizzle,” Bohannon wrote. “Reporters on the health science beat were going to smell this a mile away.”

But they didn’t. They ate it up. And it spread like wildfire, thanks to the 24-hour Internet news cycle, its click-baiting headline, and its “like”-ability.


The “researchers” had a grand total of 15 people participate in the trial (there were 16, but one dropped out). They were split into three groups. One group went on a low-carb diet; the second went on the same low-carb diet but also ate a 1.5 oz chocolate bar every day; the third was a control group and was asked not to change anything about its diet. All three groups weighed themselves for three weeks and reported back with blood tests.

The clinical trial results—not that they really mattered, as you’ll see—showed that people in both the low-carb non-chocolate and the low-carb chocolate-eating group lost about five pounds, but the chocolate group lost their weight faster.

Seems legit, right? Well, first off, the sample size is exceedingly small, which makes finding real results very difficult. With so few people in the study, variables like natural weight fluctuation, people not sticking to the diet regimen, and other hidden nuisance factors can skew the results. Those variables aren’t as loud in a larger sample.
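To see how loud that noise can be, here is a minimal simulation, with entirely made-up numbers, of three groups of five people whose weight changes are nothing but random fluctuation. Their group averages can easily drift a pound or more apart with no diet effect at all:

```python
# Why tiny groups are so noisy: simulate three weeks of weight change for
# three groups of 5 people who are all, in reality, doing exactly the same
# thing. The numbers here are invented purely for illustration.
import numpy as np

rng = np.random.default_rng(42)
for label in ("low-carb", "low-carb + chocolate", "control"):
    # each person's change is pure random fluctuation, std. dev. ~2 lbs
    changes = rng.normal(loc=0.0, scale=2.0, size=5)
    print(f"{label:22} average change: {changes.mean():+.1f} lbs")
```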

The researchers also made sure that they would get at least one “statistically significant” result by testing a whopping 18 variables, including “weight, cholesterol, sodium, blood protein levels, sleep quality, well-being, etc.” As Bohannon put it:


Think of the measurements as lottery tickets. Each one has a small chance of paying off in the form of a “significant” result that we can spin a story around and sell to the media. The more tickets you buy, the more likely you are to win. We didn’t know exactly what would pan out—the headline could have been that chocolate improves sleep or lowers blood pressure—but we knew our chances of getting at least one “statistically significant” result were pretty good.
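To put rough numbers on that lottery, here is a small simulation under simplifying assumptions (18 independent measurements, two groups of eight, ordinary t-tests at the usual 0.05 threshold), showing how often a study of pure noise hands you at least one “significant” headline. It lands around 60 percent, matching the simple formula 1 - 0.95^18:

```python
# The "lottery ticket" effect in miniature: test 18 unrelated measurements on
# two small groups where nothing real is going on, and count how often at
# least one comparison comes out "significant" at p < 0.05. Group sizes,
# independence, and the plain t-test are simplifying assumptions here, not
# the trial's actual setup.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies = 5_000      # simulated "studies"
n_measurements = 18    # weight, cholesterol, sleep quality, etc.
group_size = 8

hits = 0
for _ in range(n_studies):
    p_values = [
        stats.ttest_ind(rng.normal(size=group_size),
                        rng.normal(size=group_size)).pvalue
        for _ in range(n_measurements)
    ]
    if min(p_values) < 0.05:
        hits += 1

print(f"Studies with at least one 'significant' result: {hits / n_studies:.0%}")
print(f"Closed form for independent tests, 1 - 0.95**18: {1 - 0.95**18:.0%}")
```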

That’s the thing about statistical significance, if you didn’t take statistics in high school or college. It’s measured by a figure called the p-value, which only tells you how likely you’d be to see a pattern at least that strong by pure chance, if nothing real were going on. So the lower the p-value, the harder the results are to write off as coincidence. In other words, it does not mean “this definitely is a thing,” but instead means “this probably isn’t a totally random non-thing.” But it’s easy to massage a p-value to say what you want it to.

Bohannon writes that he stacked the deck in his favor, a trick called “p-hacking,” which is a big problem in science. It’s often done unconsciously by researchers. When you feel you really should be finding a particular effect, it’s easy to pick out parts of your data, shrug them off as outliers or otherwise imperfect, and exile them from your data set. (Which is sometimes also a good thing; scientists want to disregard data that may be tainted by unrelated but confounding variables that could lead to false positives or negatives.)
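Here is a hypothetical sketch of that kind of data massaging, assuming two made-up groups with no real difference between them; it has nothing to do with Bohannon’s actual data or analysis, but it shows how quickly “cleaning up outliers” can manufacture significance:

```python
# A hypothetical illustration of p-hacking by "outlier" removal: two groups
# with no real difference, where we keep discarding the most inconvenient
# point from the treatment group until the comparison dips below p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
treatment = list(rng.normal(size=15))  # no real effect in either group
control = list(rng.normal(size=15))

p = stats.ttest_ind(treatment, control).pvalue
print(f"Honest p-value: {p:.3f}")

# "Those participants probably cheated on the diet anyway..."
removed = 0
while p >= 0.05 and len(treatment) > 5:
    treatment.remove(min(treatment))  # drop the point that hurts the story most
    removed += 1
    p = stats.ttest_ind(treatment, control).pvalue

print(f"p-value after discarding {removed} 'outliers': {p:.3f}")
```

Each removal is easy to rationalize in the moment, which is exactly why this kind of p-hacking is so often unintentional.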

But for some reason, the p-value is exalted above nearly every other statistical measure, perhaps because it’s a basic figure that’s easy to calculate. Nonetheless, some scientists and science journalists have suggested that the p-value be disregarded, or at least devalued somewhat.


Readers who are familiar with the process of scientific publication may be wondering how peer reviewers failed to catch such an obvious piece of pseudoscience. For the uninitiated, peer review is a checkpoint where a manuscript is sent to a panel of anonymous experts who ostensibly screen it for bad science and overstated results. Peer review isn’t perfect, but that didn’t matter here: Bohannon and his colleagues skipped right over all of that and literally paid a journal to publish their article, open access, without any peer review. There are a lot of prestigious-sounding “journals” happy to publish whatever drivel you want for a nominal fee.

Next came the media frenzy. Bohannon goes on to talk about how easy it was to write a press release and how no journalists bothered to cite the study’s numbers. None seemed to take even the slightest critical or skeptical look at the study. It’s a great story, after all: Chocolate actually helps you lose weight? Why look a gift horse in the mouth?

Bohannon chalks this up to a cocktail of a calculated ruse and lackadaisical journalism. There are certainly many problems within science journalism whose bullshit needs to be called out, and failing to read the source materials or consult experts is a major one. But Bohannon also fails to mention many of the problems in the research and publication process that trip up journalists as well.

First of all, there’s the fact that many scientists themselves misunderstand the true meaning of the p-value. And even those who do understand it know that their data are more likely to be published if they get positive results, which incentivizes p-hacking.


Bohannon also identifies another major issue: Journalists write from faulty press releases without consulting the source material. Press releases are often misleading and can even misrepresent the science, but the scientific articles themselves usually sit behind huge paywalls. Some major journals like Science and Nature offer good press access, but you need to be an established reporter with a few clips and references to get in, and even then you generally need to be really established (i.e., on staff) to get a chance to cover an article in the big-league journals (which also publish pseudoscience from time to time). For newbies and minor-league freelancers, smaller niche journals are where the hidden gems get mined.

Smaller journals can be harder to access, though. If they’re not part of a major publishing group, and you can’t get in touch with the scientists, you may be out of luck.

What about interviews? Many journalists, even staffers, are required to churn out several articles per day for their publications or just to make a living. Oftentimes scientists don’t reply to requests for comment at all, or they do so far too late. There simply aren’t enough hours to thoroughly cover every single story, and naturally some journalists are going to prioritize some stories over others. That’s not to say that there aren’t lazy journalists out there. Of course there are, just as there are lazy scientists. But there are problems on both ends of this issue that need to be rectified if people want better science journalism.

Screengrab via Mr.TinDC/Flickr (CC BY-ND 2.0)
