Thinking clearly is not just about recognizing the ways in which our brain misbehaves. Sometimes, it's about realizing that the information we find in the world is incomplete. We all know that political information can be cherry-picked in the service of ideology, but science, unfortunately, does not escape this phenomenon either.
Since at least 1959, scientists have suspected that not every study conducted ends up getting published. That suspicion was largely derived from the observation that almost all of the studies published in four major psychology journals reported statistically significant results. That's as if a casino claimed to publish a list of all its visitors, and every visitor listed happened to win big. We'd suspect the casino of carefully curating the list.
We now know that publication bias is very real: it arises when a study's results influence whether it gets published, often because the researchers decide a negative or unremarkable result is not interesting enough to warrant the hard work of writing it up and submitting it to a journal. It's also known as the "file drawer problem," since these studies, which are informative if not sexy, end up accumulating dust in the figurative file drawer of the lab. Studies with positive results and large effects tend to get published more quickly, in English, and in journals whose articles get cited a lot. This has an impact on summaries of the evidence that get published (reviews and meta-analyses), because they can only round up the evidence they can see. And when decision-makers rely on these summaries, publication bias can bias the decision itself.
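The distortion is easy to demonstrate with a toy simulation. In the sketch below (the true effect size, sample sizes, and significance cutoff are all assumptions chosen for illustration), thousands of small studies measure the same modest effect, but only the statistically significant ones make it out of the file drawer. The published studies overstate the effect, even though no individual study did anything wrong.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2   # hypothetical true effect (in standard deviation units)
N = 30              # participants per group in each small study
N_STUDIES = 2000    # how many labs run the experiment

def run_study():
    """Simulate one two-group study; return the observed effect and its z-score."""
    treatment = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    control = [random.gauss(0.0, 1.0) for _ in range(N)]
    diff = statistics.mean(treatment) - statistics.mean(control)
    se = ((statistics.variance(treatment) + statistics.variance(control)) / N) ** 0.5
    return diff, diff / se

all_effects, published = [], []
for _ in range(N_STUDIES):
    diff, z = run_study()
    all_effects.append(diff)
    if abs(z) > 1.96:           # "significant" results get written up...
        published.append(diff)  # ...everything else stays in the file drawer

print(f"mean effect across all studies:  {statistics.mean(all_effects):.2f}")
print(f"mean effect in published studies: {statistics.mean(published):.2f}")
```

A meta-analysis that can only see the published studies would conclude the effect is roughly two to three times its true size. Nothing was fabricated; the evidence was merely filtered.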
There may also be cultural pressures behind some publication biases. When we learned that virtually every trial of acupuncture coming out of China reported that the technique was effective, we had to wonder whether negative studies of acupuncture in China were being suppressed.
To reduce publication bias in general, we need more academic journals specifically courting "unsexy" results. We need to register more studies in advance of doing them, so there is a public record of what was attempted and accountability for what gets reported. Scientists who summarize the evidence need to use statistical tests to assess whether the body of evidence they have found may be biased in this way. And when we look for scientific evidence to answer our questions, we should never forget that somewhere out there, there's a file drawer with important data in it that we cannot see.
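One of the statistical tests meta-analysts use is a regression test for funnel-plot asymmetry, in the spirit of Egger's test: regress each study's standardized effect (effect divided by its standard error) on its precision (one over the standard error). If small, imprecise studies systematically report larger effects, the intercept drifts away from zero. The sketch below is a minimal illustration with made-up numbers, not a substitute for a proper meta-analysis package.

```python
import statistics

def egger_intercept(effects, std_errors):
    """Regress effect/SE on 1/SE and return the intercept.
    An intercept far from zero suggests funnel-plot asymmetry,
    one possible signature of publication bias."""
    y = [e / se for e, se in zip(effects, std_errors)]
    x = [1.0 / se for se in std_errors]
    mx, my = statistics.mean(x), statistics.mean(y)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return my - slope * mx

# Toy data: five studies, from small/noisy to large/precise.
ses = [0.4, 0.3, 0.2, 0.1, 0.05]

# Unbiased literature: every study estimates the same effect, 0.2.
unbiased = [0.2 for _ in ses]

# Biased literature: the noisier the study, the bigger its reported effect,
# as if small null results were filed away.
biased = [0.2 + se for se in ses]

print(f"intercept, unbiased literature: {egger_intercept(unbiased, ses):.2f}")
print(f"intercept, biased literature:   {egger_intercept(biased, ses):.2f}")
```

In the unbiased case the intercept sits at zero; in the biased case it does not. Real applications involve more studies, formal significance tests on the intercept, and caveats (asymmetry can have innocent causes), but the underlying idea is this simple.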