Reviewer pinpoints Top 10 howlers to avoid for science publishing success
6 Jul 2025

Researchers concerned that their science journal submissions could be rejected on account of inaccuracies can now access a guide to the most common statistical errors.
Aston University academic Dr Dan Green has compiled a list of 10 frequent mistakes, based on his experience of reviewing more than 200 papers.
Heading the list, outlined in BMJ Heart, is asserting causation (“x leads to y”, and so on) on the basis of flimsy evidence or simple association, rather than establishing it.
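To see why association alone cannot carry a causal claim, consider a minimal simulation (an illustrative sketch, not taken from the paper): a hidden confounder drives two variables that never influence each other, yet they correlate strongly.

```python
# Illustrative sketch (not from the paper): two variables that are strongly
# correlated only because a third, unmodelled variable drives them both.
import numpy as np

rng = np.random.default_rng(42)          # fixed seed for reproducibility
confounder = rng.normal(size=1_000)      # hidden common cause, e.g. age

x = confounder + rng.normal(scale=0.5, size=1_000)  # "exposure"
y = confounder + rng.normal(scale=0.5, size=1_000)  # "outcome"

r = np.corrcoef(x, y)[0, 1]
print(f"Correlation between x and y: {r:.2f}")  # ~0.8, yet x never affects y
```

Here a paper asserting “x leads to y” from the correlation alone would be exactly the error Green describes, since removing the confounder removes the relationship entirely.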
Among the other inclusions are poorly formatted abstracts, results placed in methods sections, poor reporting of missing data, use of the wrong type of data analysis, inadequate flow diagrams explaining study design, and failure to detail which of the initial study participants were ultimately included.
Lead author Green, together with collaborators Dr Rebecca Whittle of the University of Birmingham and Dr Diane Smith at Fuze Research, also offers pointers on other statistical and charting pitfalls.
He acknowledged: “Researchers are human, and while some oversights can occur naturally in the rush of wanting to get your paper submitted, obvious errors in the structure and presentation of your article don’t give the best first impression to an editor or reviewer.
“Take a little more time to check the details of your submission before clicking submit, check those ‘Instructions for Authors’ for the journal again, and if still unsure, get another opinion.”
And he warned that misleading abstract information would in turn increase the likelihood that untrained readers such as journalists would make incorrect assumptions when sharing details on other platforms.
The study authors said this underscored the importance of submitting researchers carefully reviewing their own conclusions. They recommended asking colleagues not involved in the research to provide feedback.
Green, himself a biostatistician, emphasised too the vital role of accurate statistical analysis sections in papers as a means of identifying patterns and trends and demonstrating whether results are authentic or likely due to chance, “using hard maths instead of intuition”.
Sufficient detail was needed, he said, to ensure that another researcher in the field could take the same approach and data and arrive at the same answers.
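As a concrete illustration of that standard (a hypothetical sketch, not code from the study), a fully specified analysis states its data, its test and its random seed, so anyone rerunning the same approach on the same data reproduces the numbers exactly:

```python
# Hypothetical sketch (not from the paper): an analysis where the seed,
# the test and the sample sizes are all reported, so a rerun of the same
# approach on the same data yields identical results.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2025)                  # seed stated in the methods
control = rng.normal(loc=120, scale=15, size=40)   # e.g. systolic BP, mmHg
treated = rng.normal(loc=112, scale=15, size=40)

t_stat, p_value = ttest_ind(treated, control)      # two-sample t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")      # identical on every rerun
```

The test statistic and p-value are what let a reader judge whether the observed difference is likely real or due to chance, which is why Green insists the analysis section spell out exactly how they were obtained.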
Green’s recommended key to success here is identically ordered, bulleted lists of everything described in the methods and reported in the results, with supplementary text used to keep the section itself concise.
“We have produced this article with future submissions in mind, so you can quickly whizz through the items, and question whether you have made any typical, but ultimately crucial errors,” he stated.
“A little more care now saves a lot more time later on and avoids those annoying re-review edits, or even searching for a new journal to submit to.”