Does the peer-review process ensure that publications are of high quality?

Source: https://rrjournals.com/peer-review-process/

This flow chart shows how the peer-review process works.

The peer-review process is there to pick holes in a manuscript, sound the alarm, improve manuscript quality, and filter out low-quality work. It is part of the very foundation on which science's claims of objective truth are built.

Sokal hoax

In 1996, Professor Alan Sokal, a physicist at NYU and UCL, submitted a manuscript entitled “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity” to Social Text, an academic journal of postmodern cultural studies. It was published. The twist was that the manuscript was complete gibberish, full of nonsensical statements.

John Bohannon, a biologist at Harvard, submitted a pseudonymous paper on the effects of a chemical derived from lichen on cancer cells to 304 journals describing themselves as using peer review. As in Sokal’s case, the manuscript was concocted wholesale and stuffed with clangers in study design, analysis, and interpretation of results. Despite receiving this poorly written, poorly designed study from a fictitious researcher at a made-up university, 157 of the journals accepted it for publication. Admittedly, the experiment was directed at the lower tier of academic journals (The Economist, 2013).

The editor of the British Medical Journal sent an article containing eight deliberate mistakes in study design, analysis, and interpretation to more than 200 of the BMJ’s regular reviewers. Not one picked out all the mistakes. On average, they reported fewer than two; some did not spot any (The Economist, 2013).

A similar experiment was conducted at the prestigious American journal Annals of Emergency Medicine, which has had the highest impact factor over the years of all 25 journals in the emergency medicine category of the Science Citation Index. Ten major and thirteen minor glaring errors were planted in a bogus manuscript, which was sent to all of the journal’s reviewers. The reviewers failed to detect most of the glaring errors, and the majority did not notice that the paper’s conclusions were unsupported by its results (Baxt et al., 1998).

Stem cells can heal the damage caused by a heart attack and restore sight, but they are expensive and difficult to produce, and one source, embryos, raises serious ethical questions. A scientific paper published in the prestigious journal Nature claimed that stem cells could be produced from normal adult cells by dipping them in acid for a 30-minute shock. The announcement of the creation of these “STAP” cells (stimulus-triggered acquisition of pluripotency) sent shockwaves around the world. Essentially, the publication claimed that stem cells could be made quickly and cheaply. Publishing in Nature is a career-defining moment for researchers anywhere; the journal’s rejection rate is 90%. The problem with this paper was that no one could replicate the process. Later, the researcher’s institutional committee concluded that the researcher had twice manipulated the data in an intentionally misleading fashion (The Economist, 2014). In short, it was research misconduct.

This last case must be taken with a grain of salt. It’s not a problem with the peer-review process per se, since the process is not designed to detect fraud. If authors intentionally cook up data, there’s no realistic way for reviewers to detect it.

So, does the peer-review process ensure that publications are of “acceptable” quality? I’d like to say most of the time, but not all of the time.

Why is review quality not as high as we want it to be?

Reviewers do it out of a sense of obligation and altruism. They do it for free.

A good, thorough review takes a lot of time. Reviewers are busy people with a million other things to do, so mistakes and errors in manuscripts are bound to be missed.

Is there any way to improve the situation?

If we can somehow make reviewing manuscripts feel like time well spent, relying less on a sense of duty and altruism, it may help.

Recognition: the idea is borrowed from business-management playbooks. Non-monetary incentives sometimes work better than monetary ones.

This idea, however, is problematic because the review process is generally blinded, i.e. no one except the journal editors is supposed to know who the authors and reviewers are. Anonymity is there to ensure objectivity. How can we give recognition to reviewers without losing that objectivity?

Perhaps journals could give out certificates to reviewers (even in PDF format to minimise cost) summarising the number of reviews completed in the current year and over a lifetime. Is this an acceptable form of recognition? I’m not sure, but something to show that reviewers’ contributions are acknowledged and appreciated is definitely a good idea.

Free online training could improve review quality. This would benefit not only reviewers but also researchers. If the quality of manuscripts and of the review process increases, the whole community (the public included) will benefit.

If the peer-review process doesn’t necessarily ensure quality, what can we do as individuals, students, or consumers of the scientific literature?

Rely more on yourself: use common sense and logic, and look carefully for logical inconsistencies.

Don’t take published papers as gospel. The studies discussed above have shown that they are not.

Sources:

Trouble at the lab. The Economist. 2013 Oct.

When science gets it wrong: Let the light shine in. The Economist. 2014 July.

Baxt WG, Waeckerle JF, Berlin JA, Callaham ML. Who reviews the reviewers? Feasibility of using a fictitious manuscript to evaluate peer reviewer performance. Ann Emerg Med. 1998 Sep;32(3 Pt 1):310-7.