Statement on the Use of Generative Artificial Intelligence (GenAI)
Journal: Vocational Nursing Science (VNUS)
ISSN: 2656-8799
1. Scope and Purpose
This statement defines the ethical framework for the use of Generative Artificial Intelligence (GenAI) tools and Large Language Models (LLMs) in the submission, peer review, and editorial processes of the Vocational Nursing Science (VNUS) journal.
The policy aligns with the core publishing ethics and best practices recommended by the Committee on Publication Ethics (COPE) and Elsevier. It reflects VNUS’s commitment to maintaining integrity, transparency, and accountability in all aspects of scholarly publishing.
2. Definitions
Generative AI refers to machine learning systems capable of producing human-like text, images, code, or other forms of creative content in response to user prompts.
Examples include:
- Text-based models: ChatGPT, GPT-4, Claude, Gemini (Bard), LLaMA, Mistral.
- Image generators: DALL·E, Midjourney, Stable Diffusion.
- Integrated AI productivity tools: GrammarlyGO, Microsoft Copilot, Notion AI, Jasper.
Large Language Models (LLMs) are a subset of GenAI tools trained on extensive text corpora to generate and interpret natural language. Such tools are increasingly used within academic writing and research workflows.
3. Principles and Standards
3.1 Ethical Use in Line with COPE and Elsevier Guidelines
All parties involved—authors, reviewers, and editors—must ensure that AI usage complies with COPE’s Core Practices and Elsevier’s publishing standards regarding authorship, transparency, peer review, and data reliability.
GenAI tools may be used for minor mechanical tasks (e.g., grammar checking, language refinement, or summarization) but must not replace human intellectual contributions or be used to fabricate, falsify, or mislead scholarly content.
3.2 Transparency and Disclosure
Authors are required to clearly and explicitly disclose any use of GenAI tools in their manuscripts. The disclosure must specify:
- The name and version of the AI tool (e.g., ChatGPT using GPT-4, Claude 3 by Anthropic).
- The specific purpose of its use (e.g., improving grammar, summarizing findings, or formatting).

Failure to disclose such use constitutes a breach of academic integrity.
3.3 Authorship and Accountability
GenAI tools cannot be recognized or listed as authors.
Authorship must adhere to COPE and Elsevier criteria, requiring:
- Substantial contribution to the conception or design of the work,
- Involvement in drafting or revising the manuscript, and
- Accountability for the integrity and accuracy of the final content.

Human authors bear full responsibility for all material, including any part produced with AI assistance.
3.4 Peer Review Integrity
Reviewers and editors may use GenAI tools for limited, non-decisive tasks (such as summarizing manuscripts or checking readability) but must disclose such use to the editorial board.
Critical evaluation, recommendations, and final judgments must remain solely human-driven. GenAI must never be used to produce confidential peer review reports or to substitute for genuine reviewer commentary.
4. Acceptable Uses of GenAI
GenAI tools may be appropriately used in the following ways, provided their use is fully disclosed:
- Language and Style Enhancement: Improving grammar, fluency, and readability (e.g., ChatGPT, GrammarlyGO).
- Data Visualization: Generating figures or charts from verified, author-provided data (e.g., DALL·E, Excel Copilot).
- Formatting Assistance: Supporting citation organization, layout formatting, or reference summaries.
- Conceptual Brainstorming: Aiding in early-stage idea generation or outlining, provided that all hypotheses, analyses, and conclusions are developed by the authors themselves.
5. Prohibited Uses of GenAI
The following applications are strictly prohibited and considered ethical violations:
- Fabrication or Manipulation: Creating or altering data, references, or findings using AI.
- Plagiarism: Presenting AI-generated content as original work or without attribution.
- Undisclosed Use: Omitting disclosure of AI involvement in manuscript preparation.
- Misrepresentation: Claiming AI-generated ideas or text as the result of human scholarly reasoning.
6. Required Disclosure Statement
All manuscripts must include a Generative AI Use Disclosure Statement, typically placed in the Acknowledgments or Methods section, specifying:
- The name(s) of the GenAI or LLM tools used,
- The stage and purpose of their use in the research or writing process, and
- A confirmation that the authors conducted all conceptual, analytical, and interpretive work independently.
Example Disclosure:
“Generative AI tools, including ChatGPT (GPT-4, OpenAI) and GrammarlyGO, were used to enhance grammar and language clarity during manuscript preparation. All conceptualization, analysis, and interpretation of the research were performed solely by the authors.”
7. Editorial Oversight and Compliance
Editors will review GenAI disclosures as part of the standard submission evaluation process.
If a manuscript appears to contain undisclosed AI-generated content, it may undergo additional scrutiny, including plagiarism and similarity checks or requests for clarification from the authors.
VNUS reserves the right to contact the authors' affiliated institutions in cases of suspected or confirmed ethical misconduct.
8. Consequences of Misuse
In accordance with COPE and Elsevier guidelines on research misconduct, violations of this policy may lead to:
- Rejection of the manuscript during review,
- Retraction of the article after publication,
- Notification of the authors' institution or funding body, and
- Restriction or suspension of future submissions in severe or repeated cases.
9. References
- Elsevier (2023). Generative AI Policies for Journals. Retrieved from: https://www.elsevier.com/about/policies-and-standards/generative-ai-policies-for-journals
- Committee on Publication Ethics (COPE) (2023). Position Statement on Authorship and AI Tools. Retrieved from: https://publicationethics.org/guidance/cope-position/authorship-and-ai-tools
- Committee on Publication Ethics (COPE) (2024). Discussion Document on AI and Peer Review. Retrieved from: https://publicationethics.org/news/cope-publishes-guidance-on-ai-in-peer-review
- Committee on Publication Ethics (COPE) (2023). Discussion Paper: Ethical Considerations in the Use of Generative AI in Publishing. Retrieved from: https://publicationethics.org/topic-discussions/artificial-intelligence-ai-and-fake-papers