Generative AI Tools
This policy outlines acceptable and unacceptable uses of generative artificial intelligence (GenAI) by authors, including large language models (LLMs) and AI chatbots such as ChatGPT.
Why this policy matters
There are many responsible and appropriate uses for generative AI within scholarly research, and we support authors using it in this manner. For example, these tools may help authors overcome language barriers or process data more efficiently. However, they can also produce misleading or fabricated content, cannot be held legally accountable for published work and lack the ability to think critically about the material they produce.
Additionally, uploading manuscript material to GenAI platforms may expose sensitive data to third parties, potentially breaching the rights of others involved in the work, including authors, participants, data owners and reviewers.
This policy aims to balance these considerations, safeguarding confidentiality, accuracy, and fairness while allowing transparent use of AI during the drafting of a manuscript.
Acceptable Uses
Authors may use generative AI tools to:
- Edit human-written text – this includes minor corrections (checking spelling, grammar and punctuation) and more significant changes (enhancing the clarity and structure of the work).
- Generate text – authors must critically review and revise any AI-generated text to ensure it is accurate and free from plagiarism.
- Generate figures based on existing data – for example, creating a graph based on data collected during an experiment.
- Support literature review – generative AI tools may be used to help locate relevant publications for authors to read and draw upon in their own manuscript.
- Edit their responses to peer-review reports – authors may use GenAI tools only to improve the language of their responses to peer review reports.
If authors use generative AI tools for any of the tasks listed above, they must disclose this usage in the Acknowledgements section of their manuscript. This disclosure should list the model and version of the generative AI tool and how it was used in the work. Authors are also encouraged to maintain records of previous drafts, as well as any prompts used in the editing or generation of material within their manuscript.
All authors remain fully responsible for all material presented in their manuscript, and for ensuring its accuracy, integrity and originality.
Unacceptable Uses
Authors may not:
- Fabricate original research data or results – any data or results must have been gathered from the experiment presented in the paper.
- Alter or manipulate original research data or results – this includes the manipulation of images such as blots.
- Generate reference lists – while authors can use generative AI tools to support their literature review, all material referenced in their manuscript must have been checked by the authors to confirm that it informs and is relevant to their work.
- Upload reports from reviewers to generative AI tools – this may expose sensitive data to third parties, breaching reviewer rights and privacy laws.
- Generate responses to reviewers – it is important that authors critically engage with reviews and revise their work based on the advice of their peers. Generative AI tools cannot directly participate in the peer-review process as they lack higher-level reasoning and critical thinking.
Authorship
IOP Publishing follows the Committee on Publication Ethics (COPE) position statement that AI tools cannot meet the requirements for authorship as they cannot take responsibility for the submitted work. As non-legal entities, they cannot assert the presence or absence of conflicts of interest nor manage copyright and license agreements.
Literature review
While generative AI tools can be used to support literature review, authors should keep in mind that these tools are prone to generating false content, including references to non-existent work.
We consider the presence of references to non-existent sources to be strong evidence of irresponsible AI usage and to raise serious concerns about the validity of the work. If such references are found during the submission process, this will usually result in rejection of the submitted manuscript and potentially further sanctions. If they are discovered after publication (in either the Accepted Manuscript or Version of Record), IOP Publishing reserves the right to retract the paper due to a loss of confidence in the work.
Administrative mistakes, such as where references to two existing papers have been mixed up or where there is a typo in a reference to an existing paper, are not covered by this policy.
Rights Protection
Before using any AI tools, authors should carefully check the tool’s terms and conditions, especially sections about ownership, reuse and opting out, to avoid inadvertently giving away any rights over their work. Authors must not use any AI tools that would limit how they, IOP Publishing, or anyone else can use their submitted work. If authors choose to use an AI tool, they must ensure that the tool and its provider obtain only the rights necessary to provide the requested service, and no additional rights, such as the right to use the work to “train” the AI tool.