Gen AI Usage Policy
Purpose and Scope
1.1 Purpose
To govern the use of Generative AI (hereinafter "AI Tools") throughout the publishing process, safeguard academic integrity, content authenticity, and information transparency, and clarify the rights and responsibilities of all parties involved.
1.2 Scope
- Subjects: Applies to authors submitting manuscripts to the publisher’s journals, peer reviewers, editorial board members, and publisher staff.
- Content: Covers all stages of publishing (manuscript writing, peer review, editing, etc.) and all manuscript types (research article, review, case study, etc.).
1.3 Definition of AI Tools
Refers to artificial intelligence technologies that autonomously generate content (text, images, etc.) based on user prompts, including but not limited to Large Language Models (LLMs, e.g., ChatGPT, Gemini) and image-generating tools (e.g., MidJourney, Stable Diffusion).
1.4 Core Principles
- Human oversight takes priority: AI Tools shall not replace human critical thinking, professional expertise, or evaluation.
- Full transparent disclosure: All use of AI Tools must be clearly declared; non-disclosure or concealment is deemed a violation of publishing ethics.
- Authors bear ultimate responsibility: Authors are fully accountable for the accuracy, originality, and compliance of their manuscripts (including AI-generated content).
- Confidentiality and compliance: All parties must strictly adhere to data privacy regulations; confidential information (e.g., unpublished manuscripts) must not be uploaded to public AI platforms.
Guidelines for Authors
2.1 Permitted and Prohibited Uses
Permitted: AI Tools may be used to assist manuscript preparation (e.g., synthesizing literature overviews, organizing content, improving language readability, standardizing citations).
Prohibited:
- Using AI Tools as a substitute for human intellectual contribution (e.g., generating research hypotheses, experimental data, or conclusions).
- Listing AI Tools as authors/co-authors or citing them as authors.
- Generating or altering images (e.g., adding/removing features in figures) unless the AI use is part of the research design (e.g., AI-assisted biomedical imaging).
- Uploading confidential data (e.g., personally identifiable information) to public AI Tools, or granting AI Tools rights to use manuscript materials for training.
2.2 Author Responsibilities
- Verify the accuracy, comprehensiveness, and impartiality of all AI-generated content (e.g., checking for fabricated references or factual errors).
- Thoroughly edit and adapt AI outputs to ensure the manuscript reflects the author’s original analysis, insights, and ideas.
- Comply with the terms of service of AI Tools to protect intellectual property and data privacy.
2.3 Disclosure Requirements
- Submit a separate "AI Disclosure Statement" in the manuscript (before the reference list) upon submission; this statement will be included in the published work.
- The statement must include:
(1) Name and version of the AI Tool used (e.g., "ChatGPT-4.5 Turbo," "MidJourney v6.0");
(2) Purpose and scope of AI use (e.g., "used to draft the literature review section, with 60% revisions by the author");
(3) Measures for human oversight (e.g., "AI-generated references verified via Google Scholar").
- Exception: Basic grammar/spelling checks using AI Tools do not require disclosure.
2.4 Image-Specific Rules
- Routine adjustments (brightness, contrast, color balance) are allowed only if they do not obscure or remove original information.
- AI-generated/altered images are prohibited unless the AI use is part of the research method. In such cases:
(1) Detail the AI Tool (name, version) and its application process in the "Methods" section;
(2) Provide pre-AI-adjusted raw images upon editorial request.
Guidelines for Peer Reviewers
- Confidentiality: Do not upload manuscripts, parts of manuscripts, or review reports to any AI Tools (to protect authors’ proprietary rights and data privacy).
- Review Integrity: Do not use AI Tools to draft or assist in scientific review (peer review requires human critical thinking and original assessment).
- Responsibility: Reviewers are fully accountable for the content of their review reports. If unreported AI use is suspected, notify the editorial team immediately.
Guidelines for Editors
- Confidentiality: Do not upload manuscripts, decision letters, or other confidential communications to AI Tools.
- Editorial Integrity: Do not use AI Tools to assist in manuscript evaluation or decision-making (editorial decisions require human judgment).
- Oversight: Verify the completeness of authors’ AI disclosure statements. If policy violations are suspected, inform the publisher promptly.
- In-House AI Tools: Use only the publisher's licensed AI Tools (compliant with data privacy rules) for tasks such as plagiarism checks or reviewer matching.
Policy Updates and Interpretation
- This policy will be revised as technology and regulations evolve. Updates will be published on the journal's website at least 30 days before they take effect.
- The Academic Ethics Committee retains the right to interpret this policy.