This study revealed considerable variation in the amount of guidance provided by journals about authorship. Only about 60% of the journal instructions included any guidance about authorship, such as a reference to the ICMJE URM, and only one third gave specific information (with the others relying on potential authors to consult the URM). Of those that did cite the URM, 35% referred to an outdated version. In a 2005 survey of 167 high-impact medical journals, Altman noted even worse performance: of the 72 that mentioned the ICMJE URM, 41 (57%) cited an obsolete version.
One limitation of my survey is that it considered only journals whose editors belonged to WAME, or that were listed on Medline, and that published instructions in English on the Internet. This probably biased the sample toward internationally recognized, English-language publications. Medline tends to list journals that are well established, so the sample is likely to have excluded newer journals and those with small circulations. Membership of WAME is more global and inclusive, but well-established journals are still likely to be over-represented. The journal sample was originally drawn for another project assessing whether journals provide guidance in their instructions about the role of medical writers in developing articles, as recommended by a WAME position statement. However, the sample size and method of sampling seemed equally appropriate for examining journal guidance on authorship. The sample comprised 16% (120/763) of WAME members but <1% of the journals listed in Medline (120/18,673). However, the random sampling technique should have ensured that the samples were representative.
As previously suggested, the ICMJE authorship criteria are not universally known or endorsed.[6,7] Forty percent of journals in the current survey made no mention of the URM and 14% of journals recommended their own criteria for determining authorship. It appears, in this respect at least, that the "uniform" requirements are not universally applied. Whatever one thinks of the ICMJE authorship criteria, this may create confusion for contributors (or at least for those who bother to read journal instructions), because it appears that journals do not agree on authorship criteria. For example, some journals attempt to limit the number of authors who may be listed, but this limit ranges from 6 to 12. Such variation might reflect practical differences across disciplines, but variation was found within a single field. For example, the International Journal of Gynecology & Obstetrics sets an upper limit of 6 authors for a research paper, while Gynecologic Oncology permits up to 7 and Cancer up to 10.
Cases submitted to COPE and anecdotal evidence from researchers suggest that authorship problems are not rare. There is considerable concern about the existence of ghost authors, especially when their absence hides the involvement of commercial research sponsors.[1,10] Journal editors are usually not in a position to police authorship abuse, but they could play an important role in educating potential authors about their expectations. However, unless journals adopt more uniform policies, it will be hard to establish or promote consistent norms. Given the difficulties of determining authorship and of devising criteria that apply to every situation, listing contributions to a study and its publication has many advantages. Not only does publishing information about individuals' contributions increase transparency, but the process of gathering such information may also permit editors to detect ghost or guest authors and to educate authors about authorship criteria, especially if forms for collecting such information are well designed.
Editors could play an important role in improving the accuracy, equity, and transparency of listing authors on biomedical publications, and therefore in preventing authorship abuses such as ghost or guest authors. However, judging by the results of this survey of 234 mainly well-established, English-language journals, this opportunity is being missed by many journals. There is considerable evidence that lists of authors do not always accurately reflect the true authorship of the work. The COPE guidelines state that "Many people (both editors and investigators) feel that this misrepresentation is a form of research misconduct, and that honesty in reporting science should extend to authorship. They argue that, if scientists are dishonest about their relationship to their work, this undermines confidence in the reporting of the work itself." Determining the authorship of scientific papers is not always straightforward, and criteria are not uniformly applied or recognized. Journal editors could assist researchers by agreeing on clear guidelines and educating potential contributors about these via their instructions. Editors' organizations such as CSE and WAME should take the lead in building consensus among editors on authorship criteria that could be applied universally. Academic and commercial institutions should then be called on to take an active role in implementing the guidelines and ensuring that researchers understand them. As with other forms of misconduct, deliberate abuse will probably remain hard to detect and prevent, but clear guidance from journals and attention to the effects of different submission systems in obtaining authorship data should reduce the problem.
Thanks to Adam Jacobs, Nancy Milligan, and Debbie Reynolds (from Dianthus Medical Ltd, London, UK) for generating the random number listing, identifying the sample journals, and downloading the instructions to authors (which was done as part of another survey).
© 2007 Medscape
Cite this: Do Medical Journals Provide Clear and Consistent Guidelines on Authorship? - Medscape - Jul 19, 2007.