A new study opens with something of a puzzle. It notes that scientific progress depends on the ability to update which ideas are considered acceptable in light of new evidence. But science itself has produced no shortage of evidence that people are terrible at updating their beliefs, suffering from issues such as confirmation bias and motivated reasoning. Since scientists are, in fact, people, these problems would seem to severely limit science's ability to progress.
And there is some indication that they do. Max Planck, for example, famously wrote that "a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it."
But a new study suggests the problem isn't as bad as it could be. Taking advantage of a planned replication effort, some researchers surveyed their peers before and after the replication results came out. Most scientists, it turns out, seemed to update their beliefs without much trouble.
Before and after
The design of the new study is straightforward. The researchers behind it took advantage of a planned replication study — one that would redo some of the key experiments and see if they produced similar results. Before the replication results were announced, the researchers contacted about 1,100 people involved in psychology research. These participants were asked what they thought about the original results.
When the replication effort was complete, some of the earlier experiments had been successfully repeated, providing more confidence in the original results. Others had failed, raising questions about whether the original results were reliable. This gave the research community an opportunity to update its beliefs. To find out whether it did, the researchers behind the new paper went back to the same 1,100 people and asked what they now thought about the experiments once the replication results were known.
In practice, the research team's subjects were asked to read about the results of the studies being replicated and then decide whether the findings were likely to represent a "non-trivial" effect. Participants were also asked how confident they were in these earlier results and whether they were personally invested in them (as might happen if they had based their own research on the results). Half of the participants were also asked about the quality of the replication experiments and whether the replicators had successfully reproduced the conditions of the original experiments.
Once the replications were done, all participants were again asked to estimate whether the effect tested in each replication was likely to be non-trivial, as well as how confident they were in that judgment. They also evaluated the quality of the replication experiments.
This setup allowed the researchers behind the new study to determine whether participants updated their thinking in response to new data. It also gave them an opportunity to look at some of the factors that influence motivated reasoning, such as personal interest in the outcome. And since a participant engaged in motivated reasoning might dismiss a replication as low-quality, the researchers asked about that as well. So, overall, it was a well-designed study.
Overall, the participants came out of the study looking pretty good. When a replication succeeded, they became more likely to believe that the repeated experiment had revealed a significant effect. When a replication failed, they adjusted their confidence in the opposite direction. In fact, participants updated their beliefs more than they expected to.
They also showed few signs of motivated reasoning. There was little indication that researchers changed their opinion of a replication's quality when the data called their earlier views into question. Nor did they seize on differences between the original experiments and the replications. Personal investment in the results also made no difference.
Awareness of potential sources of bias might have been expected to shield people from motivated reasoning, but there was no sign of that here. The one thing that did correlate with appropriate belief updating was a self-reported sense of intellectual humility.
So, overall, psychologists do not appear to suffer from the kinds of cognitive biases that keep people from accurately incorporating new information. At least not when it comes to science; it's quite possible that they still do in other areas of their lives.
There are two big caveats. One is that participants knew their responses would be kept confidential, so they could express opinions that might cause problems if voiced publicly. There may thus still be a gap between what individual researchers think privately and how the field as a whole responds to replication outcomes.
The other caveat is that participants knew they were taking part in a study about replication. They might therefore be expected to shade their answers to look good to their fellow researchers. The main argument against this is that participants did not change their opinions as much as you would expect based on the magnitude of the difference between the original and replicated results. In other words, participants reacted conservatively to a failed replication, which is not what you'd expect from someone doing reputation management.
Even with these caveats, these results are probably worth following up on. The kinds of behavior that let people maintain beliefs despite contrary evidence are a major social problem. If scientists can suspend those behaviors, at least in some contexts, it would be useful to understand how they do it.
Nature Human Behaviour, 2021. DOI: 10.1038/s41562-021-01220-7 (About DOIs).