Artificial Intelligence (AI) Policy
In light of recent advancements in artificial intelligence (AI) and evolving practices in scholarly publishing, this Artificial Intelligence Policy establishes the editorial and ethical standards that authors, reviewers, and editors are expected to uphold when using such technologies.
This policy was formulated with reference to the following authoritative sources:
- Committee on Publication Ethics (COPE) – Authorship and AI Tools
- International Committee of Medical Journal Editors (ICMJE) – Artificial Intelligence (AI)-Assisted Technology
- World Association of Medical Editors (WAME) – Chatbots, Generative AI, and Scholarly Manuscripts
AI-assisted technologies—including but not limited to large language models (LLMs), chatbots, image generators, and synthetic data generators—are capable of producing diverse forms of content such as text, images, audio, and data. When applied ethically and under human supervision, these tools can enhance productivity and foster scholarly innovation. However, their unsupervised use presents substantial risks, including the generation of inaccurate or biased content, citation errors or omissions, potential breaches of privacy and intellectual property rights, and violations of authorship and publishing rights.
To mitigate these risks, the Journal adopts the following core principles regarding the use of AI in scholarly work:
- The use of any AI tool in the creation of content—whether text, figures, images, or code—must be fully disclosed and clearly detailed in the appropriate section of the manuscript (e.g., the acknowledgements or methods section).
- AI tools shall not be credited as authors under any circumstances.
- Authors bear full responsibility for the accuracy, originality, and integrity of their work, regardless of whether AI technologies have been utilized. This includes the proper citation and attribution of any third-party ideas, data, language, or other intellectual contributions.
- Reviewers and editors are strictly prohibited from uploading any portion of submitted manuscripts to generative AI platforms or tools.
- The Journal's editorial workflows may employ tools specifically designed to detect AI-generated content, thereby supporting the integrity of the review and publication processes.
Notwithstanding the above, the use of AI tools aimed at improving readability, language clarity, and grammatical correctness is permitted, provided such tools are employed under the supervision of authors and/or editors and do not compromise academic rigor or ethical standards.
For Authors
- Any use of AI tools must be disclosed both in the cover letter accompanying the submission and within the manuscript itself. This disclosure may take the form of a footnote and/or a statement within the methods section, clearly indicating which AI tool was used, the version employed, and the specific purpose for which it was applied.
- In addition, authors must describe the manner in which the AI tool was utilized, how the validity of the AI-generated results was evaluated, and which parts of the manuscript—including the main content, data, or supplementary files—were influenced or generated by AI technologies.
- The responsibility for ensuring the accuracy, validity, and appropriateness of all content, including citations produced with the assistance of AI tools, lies solely with the authors. Authors must rigorously check for and correct any potential errors, inconsistencies, or biases that may arise from such use.
- Moreover, authors are expected to verify all original sources in order to eliminate the risk of plagiarism, acknowledging that AI-generated content may inadvertently contain plagiarized material. They must confirm that all cited sources are genuine and have been thoroughly reviewed.
- AI-assisted tools (e.g., ChatGPT, DeepSeek, Gemini, Grok) shall not be listed as authors under any circumstances, as they cannot assume responsibility for the accuracy, integrity, or originality of scholarly work. These attributes are essential components of authorship. Consequently, all content created with the assistance of AI remains the full responsibility of the named authors. Authors must ensure that all hypotheses, interpretations, conclusions, limitations, and implications stated within the manuscript reflect their own scholarly judgment and not that of any AI system.
The use of AI tools in ways that supplant the essential duties of researchers and authors violates fundamental principles of scientific publishing and research ethics. Accordingly, submissions involving any of the following practices are deemed unethical and will not be accepted for publication:
- Generating text or code without subsequent critical review and revision by the author;
- Employing synthetic data to substitute for missing data without the application of a robust and transparent methodology;
- Creating inaccurate or artificially generated content, including abstracts or supplementary materials;
- Manipulating images or figures—including but not limited to pictures, graphs, tables, medical images, snippets, computer code, or formulae—or modifying original research data through AI (e.g., by enhancing, obscuring, relocating, removing, or inserting features within an image or figure).
For Peer Reviewers
The confidentiality of the peer review process must be strictly upheld. Reviewers are prohibited from disclosing, sharing, or utilizing the content of manuscripts under review outside the context of the review process itself, in accordance with the COPE Ethical Guidelines for Peer Reviewers.
- Reviewers must not upload any part of a manuscript submitted for review—including text, figures, or data—to generative AI platforms or large language models for the purpose of assisting in evaluation, critique, or decision-making. Such actions constitute a breach of the authors’ privacy and intellectual property rights.
- Peer review is a scholarly duty that must be conducted exclusively by human experts. The evaluative reasoning, nuanced judgment, and ethical discernment required in the peer review process are beyond the capacity of current AI technologies. Moreover, reliance on AI tools in this context risks introducing inaccurate, incomplete, or biased assessments. Therefore, all peer review reports and editorial decisions must reflect the independent, personal assessments of the reviewers and editors. Each reviewer is fully responsible and accountable for the content of their submitted review.
- AI tools may be employed in a strictly limited capacity to improve the linguistic clarity of the review report or to translate it. Should a reviewer choose to use such tools for these purposes, this use must be clearly disclosed at the time of report submission.
- If a reviewer suspects the inappropriate or undisclosed use of generative AI within a manuscript under review, this concern must be promptly communicated to the editor for further investigation.
For Editors
Editors are duty-bound to preserve the confidentiality of all manuscripts submitted to the journal throughout the editorial and peer review process. Under no circumstances should content under review be shared, disclosed, or used beyond those individuals directly involved in the evaluation process. This obligation is in accordance with the COPE Code of Conduct and Best Practice Guidelines for Journal Editors.
- Editors must not upload submitted manuscripts—or any part thereof—to generative artificial intelligence (AI) platforms or large language models (LLMs) for the purpose of review, evaluation, or decision-making. Such actions risk violating the confidentiality, privacy, and intellectual property rights of the authors.
- The editorial management and evaluation of scholarly manuscripts entail responsibilities that inherently require human judgment. Editors must not use generative AI or AI-assisted technologies to support evaluative or decision-making functions, as these tools lack the capacity for the critical analysis, ethical discernment, and nuanced reasoning required in scholarly publishing. The risk of these technologies producing inaccurate, incomplete, or biased outcomes further precludes their use in editorial decision-making. Editors remain fully responsible and accountable for the integrity of the editorial process, including final decisions and the official communication of those decisions to authors.
- Editors are also prohibited from using generative AI tools to draft decision letters or summaries (e.g., abstracts) of unpublished research.
- Notwithstanding the above, editors may use AI tools in a limited and appropriate manner, such as to assist in identifying potential peer reviewers.
- If an editor suspects the use of generative AI in either a submitted manuscript or a peer review report, they are expected to initiate an editorial investigation consistent with the journal's AI policy and ethical guidelines.