Artificial Intelligence (AI)
NJET is actively monitoring the evolving landscape of AI technologies and will update its policies as necessary to maintain research integrity and uphold ethical publishing standards.
1. AI Authorship
- Large Language Models (LLMs), such as ChatGPT, do not meet authorship criteria for NJET publications. Authorship requires accountability and responsibility, which LLMs cannot fulfill.
- If an LLM or AI tool is used during manuscript preparation (e.g., for content generation or idea structuring), its use must be explicitly disclosed in the Methods section, or in another appropriate section if the manuscript does not include a Methods section.
- However, AI-assisted copy editing, defined as improvements to the grammar, spelling, formatting, and style of human-generated content, does not require disclosure, provided the final text is reviewed and approved by the authors. Final responsibility for the manuscript content rests with the human authors, who must confirm that all edits reflect their original work and intent.
2. Generative AI Images
Due to unresolved legal, copyright, and research ethics issues, NJET does not currently permit the publication of images created with generative AI (including video stills, illustrations, or scientific diagrams).
Exceptions (must be clearly labeled as AI-generated):
- Images obtained from licensed providers that have legally created them using AI tools
- Images included in manuscripts specifically focused on AI research, evaluated case-by-case
- Images or videos generated from scientific datasets using verifiable and ethically sound AI tools, with adherence to copyright and usage terms
Covered under this policy:
Videos, animations, photographs, illustrations, 2D/3D renderings, scientific diagrams, and photo-composites
Not covered under this policy:
Text-based or numerical elements such as tables, flowcharts, and simple graphs without image content
Use of non-generative AI tools to enhance or manipulate existing visuals must be disclosed in the figure caption for editorial review.
3. AI Use by Peer Reviewers
Peer reviewers are selected for their subject-matter expertise and are responsible for the integrity, accuracy, and fairness of their evaluations. Because generative AI tools may produce inaccurate, biased, or misleading content, and because manuscripts may contain confidential or proprietary data, peer reviewers must not upload manuscripts to any generative AI platform. If any portion of the peer review process involved AI assistance (e.g., summarizing data or evaluating arguments), this must be disclosed transparently in the peer review report. NJET is exploring the possibility of providing reviewers with access to secure, ethically approved AI tools in the future and will revise this policy as needed.