AI Use in Publishing and Research at QU

Intro

Rules for authors who use AI when writing an article are evolving rapidly and are not consistent across journals. This article provides a synopsis of existing guidelines on the use of AI in scholarly publishing.

If you know which journal(s) you are targeting for publication, check their AI policies before deciding to use AI. Some publishers forbid the use of AI entirely. If you do decide to use AI, consider keeping a record of the prompts you use, along with any resources you enter into an AI system.

The AI policies of major publishers are linked below, along with excerpts. Policies are changing as AI evolves, so please check each publisher's site and read the full policy.

Policies

Cambridge University Press

  • AI use must be declared and clearly explained in publications such as research papers, just as we expect scholars to do with other software, tools and methodologies.
  • AI does not meet the Cambridge requirements for authorship, given the need for accountability. AI and LLM tools may not be listed as an author on any scholarly work published by Cambridge.
  • Authors are accountable for the accuracy, integrity and originality of their research papers, including for any use of AI.
  • Any use of AI must not breach Cambridge’s plagiarism policy. Scholarly works must be the author’s own, and not present others’ ideas, data, words or other material without adequate citation and transparent referencing.

Please note that individual journals may have more specific requirements or guidelines for upholding this policy.

Elsevier

"We do not permit the use of Generative AI or AI-assisted tools to create or alter images in submitted manuscripts."

View Elsevier’s generative AI author policies for books.

AI Use and Copyright

U.S. Copyright and Artificial Intelligence

Uploading full-text articles and book chapters into generative AI tools may have copyright or licensing implications. To minimize risk, use tools that do not train on or retain what you input, such as Microsoft Copilot.