This tool exposes how easy it is to manipulate scientific data
Spoiler alert: It’s stupidly easy.
In the world of research science and academia, you’ll often hear this cynical mantra: “publish or perish.”
The idea is that researchers who frequently publish their work in journals often do better in their careers. Those who publish less are more prone to stagnation, or possibly even losing their jobs altogether.
Such pressures to publish may be leading some researchers to manipulate their data, intentionally or not. Though it may seem difficult to finesse data by accident, it’s actually stunningly easy. In a blog post on FiveThirtyEight titled, “Science Isn’t Broken,” science journalist Christie Aschwanden lets readers see for themselves how easy it is. The article—a fascinating read itself—contains a tool that allows you to work with real-world data to see if you can find a connection between the overall economic health of the nation and whether Republicans or Democrats currently hold the most power. You can choose to include or exclude certain variables. Watch as your results—and their publishability—change with your choices.
Your results are publishable if, after your manipulations, you achieve a “p-value” of less than 0.05.
A p-value, if you didn’t take statistics or need a refresher, is a measure of probability. In statistics you test two competing hypotheses: the proposed hypothesis (e.g., “coffee causes cancer”) and the null hypothesis (e.g., “there is no relationship between coffee and cancer”).
The null hypothesis is a crucial concept in statistics and is the basis for calculating the p-value. Basically, the p-value tells you the probability that you would observe data at least as extreme as yours (such as a large number of people who drank a lot of coffee and later got cancer) if the null hypothesis were true. The smaller the p-value, the less compatible your data are with pure chance, and the stronger the evidence against the null hypothesis. (For the record, there is no known link between coffee and cancer; it’s just an example.)
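That logic can be made concrete in a few lines of code. The sketch below is a hypothetical illustration (it is not Aschwanden’s tool) that estimates a p-value with a permutation test: if the null hypothesis were true and the group labels carried no information, how often would shuffling those labels produce a difference in means at least as large as the one actually observed?

```python
import random
import statistics

def permutation_p_value(group_a, group_b, n_permutations=10_000, seed=0):
    """Estimate a two-sided p-value for a difference in means.

    Under the null hypothesis (no real difference between the groups),
    shuffled labelings should produce a difference as large as the
    observed one fairly often; the p-value estimates that fraction.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    at_least_as_extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # relabel the data as if groups were arbitrary
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            at_least_as_extreme += 1
    return at_least_as_extreme / n_permutations

# Two clearly separated samples: shuffled labels almost never
# reproduce the observed gap, so the p-value comes out tiny.
a = [2.1, 2.5, 2.3, 2.8, 2.6, 2.4]
b = [3.1, 3.4, 3.2, 3.6, 3.3, 3.5]
print(permutation_p_value(a, b))
```

Comparing a sample against itself, by contrast, returns a p-value of 1.0: every shuffle matches or beats an observed difference of zero, which is exactly what “consistent with the null hypothesis” looks like.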
The problem, as Aschwanden demonstrates with her tool, is that it’s astoundingly easy to get a low p-value.
Using the tool, you can create 1,800 different combinations of variables to yield your result. Of those, 1,078 yield a “publishable” result (a p-value of less than 0.05, an arbitrary standard for “statistical significance” accepted by many scientific journals).
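Part of why trying many combinations eventually clears the bar is the multiple-comparisons problem: even when the null hypothesis is true for every single test, roughly 1 in 20 tests will dip below 0.05 by chance alone. The sketch below is a simplified, hypothetical version of that effect (fair coin flips, not the FiveThirtyEight tool, whose variable combinations are also correlated with each other): it runs many pure-noise experiments and counts how many look “significant.”

```python
import math
import random

def binomial_p_value(heads, flips, p=0.5):
    """Two-sided exact binomial p-value against a fair-coin null."""
    def pmf(k):
        return math.comb(flips, k) * p**k * (1 - p) ** (flips - k)
    lower = sum(pmf(k) for k in range(0, heads + 1))    # P(X <= heads)
    upper = sum(pmf(k) for k in range(heads, flips + 1))  # P(X >= heads)
    return min(1.0, 2 * min(lower, upper))

rng = random.Random(42)
experiments = 1000
# Every experiment is pure noise: 100 fair coin flips, null hypothesis true.
false_positives = sum(
    binomial_p_value(sum(rng.random() < 0.5 for _ in range(100)), 100) < 0.05
    for _ in range(experiments)
)
# A few percent of pure-noise experiments still come out "significant."
print(false_positives / experiments)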
Aschwanden is certainly not saying anything that isn’t already known to the scientific community—p-values are notoriously easy to hack.
The problem, Aschwanden says, is that despite the bar being laughably low, many journals will readily accept a p-value of 0.05 or lower as reason enough to publish an article. Aschwanden discusses other issues at length, but one big one she didn’t quite touch on is the propensity of journals to publish only positive results.
In undergrad science, you’re taught that any result, even a negative one, is a good result. However, in the real world of science, not all results are created equal. Journals have a strong bias toward publishing only “positive” results, or papers that boast low p-values indicative of a potential effect. Showing that there’s no relationship between two variables can be just as important, but it’s less interesting.
As a result, some are calling for a boycott of the p-value. One journal has even refused to accept papers whose results rely on p-values alone. There are more powerful statistics out there to assess data. The problem is that many researchers may not understand how to use all the tools in their statistical toolbox. P-values are great because they’re easy to calculate and everyone, supposedly, knows what they mean.
Ultimately Aschwanden’s point is that science is very difficult to do and that even scientists can fall prey to bias and misconduct, just like any other human. We—including the scientists themselves—must always be aware of that.
Image via Intel Free Press/Wikimedia (CC BY 2.0)
Cynthia McKelvey covered health and science for the Daily Dot until 2017. She earned a graduate degree in science communication from the University of California Santa Cruz in 2014. Her work has appeared in Gizmodo, Scientific American Mind, and Mic.com.