
Our reliance upon the impact factor is destroying public trust in science, argues Damian Pattinson.
Scientific inquiry aims to be a search for the truth. However, this has been warped by the academic community’s long-standing reliance on the impact factor, a journal-level metric which has come to conflate quality and rigour with novelty. It has changed the way success in science is measured and is ultimately impacting public trust.
The issue now
Public trust in experts, particularly scientists, has eroded significantly in recent years. Once revered and respected, scientists now face growing mistrust of their research. Everything from political debates to internet forums is awash with people questioning the credibility of scientific experts. As misinformation spreads freely online, the gap between carefully conducted science and what people believe widens.
At the heart of this crisis lies our obsession with getting published in prestigious journals, and how the publishing industry has redefined how we think about prestige. In the past, research came first and journals built their reputations over years by publishing rigorous and impactful papers. But with the rise of journal metrics, and particularly the journal impact factor, this reputation could be rapidly acquired by deliberate manipulation, lobbying, or even just sheer luck.
The result is a publishing industry incentivised to publish research that is likely to be highly cited, whether right or wrong, over research that is carried out carefully.
For scientists, publishing in high impact factor journals has become the ultimate academic currency. Careers, funding and prestige all hinge on it. Researchers and institutions must conform to these metric-driven standards, not because they serve the advancement of knowledge but because opting out feels professionally perilous and isolating.
Taken together, these perverse incentives have served to reduce the reliability of the published literature, which in turn has caused the public to question how much they can trust scientists and the scientific endeavour.
How we got here
To understand the state of our academic publishing today, we must step back and examine how we arrived at this point.
Scientific publishing began as a niche endeavour, run by and for research communities. But over the past 30-40 years, for-profit journals have become increasingly dominant in the academic community. Revenue has soared, turning publishing into a highly concentrated and fiercely competitive media business. Big publishers such as Elsevier, Wiley and Springer Nature now report profit margins approaching 40%, even higher than those of Microsoft and Google.
The impact factor started out as a way for librarians, and later researchers, to decide whether a journal was worth subscribing to and reading. As the metric was increasingly adopted as a shortcut for quality, publishers began to promote it, raising its profile among researchers even further.
Today, journals and publishers have become so dominant in the scientific ecosystem that getting published in high-impact journals matters more to researchers than doing careful science. Publishers have capitalised on this incentive by launching ever more journals, using the impact factor to place them quickly in a league table that scientists use, often unquestioningly, to decide where to publish.
By surrendering its power to the indexers in this way, the academic community has created perverse incentives to prioritise flashy and unreliable findings over rigorous research. To succeed in this system, scientists often tailor their work to fit the criteria that elite journals demand, not to uncover truth. This can lead to cherry-picked results, a bias towards fashionable topics and methods, and even the reverse engineering of research purposes after results have been collected. Researchers are pressured by peer reviewers to chase down entirely new questions or add new data that goes well beyond the original scope of the work, diluting the clarity of a study and unnecessarily delaying publication. This isn't good science but strategic storytelling.
The situation is unfair both to scientists and to the public. When publishing their research, scientists want to know that it is an accurate representation of their efforts, free from unnecessary extra experiments and discussion that dilute the clarity and value of their findings. Equally, they want confidence that the published research inspiring and informing their work is trustworthy and reliable.
Yet the current system often delivers the opposite. Journals frequently demand non-essential additional experiments before they will consider work for publication. And even when those experiments are done, papers are often rejected with little to no explanation, sending authors down a frustrating cascade of submitting to one journal after another until a group of editors finally accepts the work. This lack of transparency and consistency not only delays publication but is also deeply demoralising for those doing the work. We need more open and constructive publishing models.
What needs to be done
The academic community knows this is happening, yet little has been done to shift the culture. The future of science lies in greater transparency and open access: research should be judged openly by the scientific community. Truth in science shouldn't be for sale; it should be shared, scrutinised and owned by all of us.
Science thrives on open dialogue, constructive feedback and the free exchange of ideas across borders and disciplines. When submitting work to eLife, researchers receive constructive, thoughtful feedback. What's more, this feedback is published online alongside the preprint, ensuring greater transparency and making the work immediately accessible to the scientific community. Researchers retain greater control over their research and can share it more quickly, without the unnecessary delays of the traditional review-then-publish process.
This visible collaboration is crucial to properly represent the iterative nature of science. Prestigious journals are currently presented as sources of the absolute truth. When their findings are inevitably disproved or refined, public trust in science is eroded.
Hypotheses are disproved, and incremental findings approach the truth only after many rounds of testing and refinement. Scientists must help the public understand that science is inherently an iterative process, dampening the doubts that arise when new findings overturn earlier ones. This transparency is essential as we move toward a more open and trustworthy scientific future.
Conclusion
If we want the public to trust science and take value from it, we must move beyond the limitations of the impact factor. Science is not a contest for prestige but a collaborative journey towards knowledge that belongs to all of us.
For too long, we have outsourced how we define prestige to the indexers, and specifically to the impact factor. This has created a system in which the pressure to publish in prestigious journals incentivises authors to inflate their findings to tell a good story. We must redefine success beyond narrow metrics, towards measures that value transparency, openness and the rigorous exchange of ideas.
By embracing models like eLife’s, which prioritise constructive feedback, open access and community-driven assessment, we can build a scientific ecosystem where truth is shared, not sold. The future of science depends on all of us – researchers, institutions, funders, publishers and the public – to reclaim science as a trustworthy, open dialogue that informs and improves our world. It’s time to redefine what we measure, why we publish, and how we value knowledge.
Damian Pattinson is the executive director at eLife