Generative artificial intelligence (AI) continues to have a growing impact in many sectors of society, including scientific publishing. Its ability to integrate and summarize information from various sources is undeniably useful, especially in a broad field of study shaped by the fast pace of technological developments and theoretical discoveries.
However, the implementation of AI in scientific publishing raises many pressing, necessary questions about maintaining research integrity.
How publishers and preprint servers address AI use in writing and data analysis
The introduction of AI into scientific publishing has meant that publishers and preprint servers have had to adapt to a changing digital landscape.
Guidance published by STM at the end of 2023 aimed to supply publishers with ethical and practical guidelines regarding generative AI use. The ‘white paper’ (a report or guide released by an organization giving information on a topical, complex issue) covered specific uses of generative AI by authors and how publishers should engage with such uses. Here are the uses the report covered:
- Using it as a basic author support tool (to refine, correct, edit, and format text and documents);
- Uses that go beyond basic author support;
- Creating, altering, or manipulating original research data and results;
- Crediting generative AI as an author of a published work.
The first use listed is the only one permitted without disclosure. The second requires the author to disclose the use, with permission granted (or denied) by editorial teams. The last two uses are ‘not permitted’ under any circumstances.
Like publishers, preprint servers are addressing AI use in writing and data analysis. Preprints.org follows the Committee on Publication Ethics (COPE) position statement on the use of artificial intelligence (AI) and AI-assisted technology in manuscript preparation. AI tools such as ChatGPT and other large language models (LLMs) do not meet authorship criteria. Thus, they cannot be listed as authors on manuscripts.
Why AI can’t be an author
Generative AI use in writing and data analysis has become increasingly prevalent within publishing, leading to pressing philosophical and ethical debates about its impact. Some of these debates even go as far as to ask: can ChatGPT be considered an author?
Well, to answer the question: no, it can’t. First, AI lacks the intuition and comprehension of humans. You may think, if AI can assess, summarize, and draw conclusions on complex topics in a matter of seconds, why does this matter? Because the advancement of many research areas, science included, depends on people’s real-world experiences as well as abstract conceptualizations to solve problems. AI cannot live in the world, so to speak, nor can it strike upon something new and unexpected through the mind’s apprehension. It can generalize to increasingly greater degrees, but it lacks the flexible reasoning and context sensitivity available to humans.
Secondly, while most of the aforementioned points are philosophical arguments against AI authorship, there are also important ethical reasons. If we start accepting AI as an author in research publications, this will erode trust in the publishing process from many angles. Is a technically sounder, clearer study written by AI more valuable to society than one written by a human who nonetheless has experience in and a clear passion for their subject? Should we even be condoning the use of AI at all, considering its impact on water and energy usage? These are the kinds of ethical dilemmas that the issue of AI and authorship gives rise to, not to mention the further issues it raises surrounding copyright, privacy, and confidentiality.
How to cite or acknowledge AI tools in a manuscript
Whatever direction these debates head in, AI isn’t going anywhere. It’s highly likely that it will continue to be implemented to greater degrees in many areas of society. And scientific publishing is no different.
That’s why it’s important to know how to properly cite or acknowledge AI tools in manuscripts. The use of generative AI must be acknowledged in a manuscript if it has been used in the process of creating the academic work. This can include drafting ideas in the form of paragraphs or structuring written materials. In short, disclosure covers anything generative, rather than supportive (e.g., correcting spelling errors).
For researchers preparing to submit to Preprints.org, the key principle is transparency. The use of generative AI and large language models (LLMs) should be properly documented in the ‘Methods’ section of a manuscript. Here is an example of how to cite AI tools:
“I acknowledge the use of ChatGPT (version GPT-5, OpenAI, https://chatgpt.com/?openaicom_referred=true) to structure my initial notes and to proofread my final draft.”
Do not take any use of AI tools for granted: check the guidelines specific to the publisher or preprint server. If you’re still unsure whether you need to disclose an AI tool, reach out to the publisher or preprint server you’re submitting to via their contact page.
