Elsevier's policy on AI and AI-assisted tools in scientific writing

The use of Artificial Intelligence (AI) and AI-assisted technologies in scientific discourse has been in the spotlight recently, especially in relation to ChatGPT. This chatbot, which was launched in November 2022, can provide detailed responses across multiple domains of knowledge and produces text that is almost indistinguishable from text written by humans.

In response to the increasing use of Generative AI and AI-assisted technologies by authors, I am pleased to share Elsevier’s new AI author policy, which focuses on ensuring the integrity of the scholarly record and aims to provide greater transparency and guidance to authors, readers, reviewers, editors and contributors.

Where authors use AI and AI-assisted technologies in the writing process, they should:
  • Only use these technologies to improve readability and language, not to replace key researcher tasks such as interpreting data or drawing scientific conclusions. 
  • Apply the technology with human oversight and control, and carefully review and edit the result, as AI can generate authoritative-sounding output that can be incorrect, incomplete or biased.
  • Not list AI and AI-assisted technologies as an author or co-author, or cite AI as an author. Authorship implies responsibilities and tasks that can only be attributed to and performed by humans, as outlined in Elsevier’s AI author policy.
  • Disclose in their manuscript the use of AI and AI-assisted technologies in the writing process by following the instructions in our Guide for Authors (which will be updated centrally this month). When authors declare the use of AI in the writing process, a statement will appear in the published work. Authors are ultimately responsible and accountable for the contents of the work.