
AboutOpen | 2023; 10: 4-5

ISSN 2465-2628 | DOI: 10.33393/ao.2023.2530

POINT OF VIEW


Manuscript formatting, delayed peer reviews, and overemphasizing the impact factor: can something be done?

Vijay Kothari
Institute of Science, Nirma University, Ahmedabad - India

Corresponding author:
Dr. Vijay Kothari 
Institute of Science
Nirma University
S-G Highway,
Ahmedabad-382481 - India
vijay.kothari@nirmauni.ac.in


Academicians and researchers perform experiments and then communicate their findings to a journal they find suitable for publishing their results. In the current publication process, each journal has its own formatting requirements with respect to font size, reference style, figure resolution, table and chart type, etc. Many journals also impose limits on the maximum number of figures/tables allowed in a single article. This compels academicians and research fellows to spend quality time on manuscript formatting rather than on actual scientific research in the lab. In nearly half of the cases, papers get rejected and need to be resubmitted to another journal; this triggers another cycle of formatting and reformatting, which of course carries no guarantee of acceptance either.

Let us consider whether this formatting really makes any difference to the scientific quality of the data reported in a particular study. Most of us will agree that the answer is 'no'. Why, then, is so much emphasis placed on formatting manuscripts strictly as per certain guidelines? Taken together, across the global research community, a great many person-hours are sacrificed on this mundane formatting exercise, hours that could be spent far more productively in the lab or library. It seems wise to propose a uniform formatting style for all scientific manuscripts (i.e., one style for all research papers and another for all review papers); even better may be to lay down no formatting requirements at all, so long as the scientific quality and readability of the data are not affected.

Even after paying due attention to journals' formatting demands, researchers are not guaranteed to receive reviewers' comments on their manuscript within a reasonable time. All reviewers need to be sensitive to the problem of undue delay. Traditionally, reviewing has been a voluntary job, with no reward usually offered beyond appreciation and acknowledgement. One reasonable way to expedite the review process would be to pay reviewers and editors an honorarium for the time they spend on reviewing and editing. Publons was a good initiative for showcasing the reviewing and editorial contributions of experts in their respective fields. Though Publons no longer exists as an independent platform owing to its integration into the Web of Science Researcher Profile (1), other platforms such as ORCID (2) and ReviewerCredits (3) have emerged where verified records of review activity can be maintained. Still, more needs to be done to ensure that good-quality peer review is performed on the majority of manuscripts without undue delay. Particularly as open access publishing gains ground, with authors paying to publish their manuscripts, publishers can surely afford to share part of their profit with the reviewers, who undoubtedly play an essential role in the whole publication process. If not cash rewards, then at least a full (not partial) waiver of 'page charges'/'open access fees' should be offered to reviewers when they submit their own manuscripts to a journal for which they have acted as reviewers. Concepts like 'post-publication review' and 'open review' have been floated, but the overall situation has not changed much. Platforms like medRxiv (4) for health sciences, bioRxiv (5) for biology, and the Center for Open Science's OSF Preprints (6) offer a good way of making a preprint available for citation before formal publication.

While journals' strict formatting demands and delays in peer review are points of annoyance when submitting an article, excessive emphasis on the Impact Factor (IF) (7) is another such issue once it is published. Is 'impact' not more important than IF? The San Francisco Declaration on Research Assessment (DORA) (8) makes several valid points in this context.

In general, better methods need to be adopted for the assessment of research and researchers. Over-emphasis on journal metrics like the IF has tempted some publishers to adopt dubious practices to artificially inflate their journals' scores. For decisions like promotion and the awarding of research grants, evaluating an individual and their contribution is more important than evaluating journals. Scores like citation count and the h-index (9) provide a more direct measure of the scientific contribution made by an individual scientist or institute (10).
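
To make the h-index concrete: it is the largest number h such that a researcher has h papers each cited at least h times. The minimal sketch below (in Python, with made-up citation counts; an illustration only, not part of any cited platform) computes it:

    def h_index(citations):
        # Largest h such that h papers have at least h citations each
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Hypothetical citation counts for one researcher's papers
    print(h_index([10, 8, 5, 4, 3, 2]))  # -> 4 (four papers cited at least 4 times)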

In fact, to make the quantification of research output more objective, a 'Cost to Output ratio' should be calculated for each faculty member and institute. Here, 'cost' is the money spent on salary, and 'output' is the number of citations, h-index, grant money, etc. This would introduce accountability into the system and show whether institutional resources and extramural money are being used effectively. The calculation would clearly show what percentage of resources an individual or organization consumes, and against that, what percentage of research output they contribute. Simply put, it asks: is my contribution to my institute's total citations commensurate with the payment I receive from the institute? Since governments divert the hard-earned money of their taxpayers to universities and research institutes in the form of research grants, the output against such grants needs objective evaluation. While ensuring judicious distribution of research grants is a separate issue of critical importance, effective post-distribution monitoring will save us from criminal wastage of public money.
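
One possible way to formalize this comparison (the shares-based formulation and the figures below are my own illustration; the article does not prescribe a specific formula) is to express a researcher's cost and output as fractions of the institute's totals and take their ratio:

    def cost_to_output_balance(salary, citations, total_salary, total_citations):
        # Output share divided by cost share for one researcher:
        # > 1 means contributing a larger share of output than of resources consumed,
        # < 1 means the reverse
        cost_share = salary / total_salary          # fraction of the salary budget drawn
        output_share = citations / total_citations  # fraction of the institute's citations
        return output_share / cost_share

    # Hypothetical figures: a researcher draws 5% of the salary budget
    # and accounts for 8% of the institute's citations
    print(cost_to_output_balance(50_000, 80, 1_000_000, 1_000))  # -> 1.6

Any of the output measures mentioned above (citations, h-index, grant money) could be substituted for citations in such a calculation.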

In summary, reducing the emphasis on journals' formatting requirements and IF, together with expediting peer review, would do the scientific community an overall good. It would allow researchers more quality time at the lab bench and make them more productive.

Disclosures

Conflict of interest: The author declares no conflict of interest.

Financial support: This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

This article was previously made public as a preprint (Online).

References

1. Clarivate. Web of Science Researcher Profiles. Online (Accessed December 2022)
2. ORCID. Connecting research and researchers. Online (Accessed December 2022)
3. ReviewerCredits. Recognizing and rewarding peer review. Online (Accessed December 2022)
4. medRxiv. The preprint server for health sciences. Online (Accessed December 2022)
5. bioRxiv. The preprint server for biology. Online (Accessed December 2022)
6. Center for Open Science. OSF Preprints. Online (Accessed December 2022)
7. Garfield E. The history and meaning of the journal impact factor. JAMA. 2006;295:90-93. CrossRef PubMed
8. San Francisco Declaration on Research Assessment. Online (Accessed December 2022)
9. Scotti V. Altmetrics, Beamplots, Plum X Metrics and friends: discovering the new waypoints in the Science Metrics roadmap. AboutOpen. 2022;9(1):1-2. CrossRef
10. Kothari V. A new yardstick: is Citation Count a more realistic measure of research impact? Online (Accessed December 2022)