Online ISSN: 3106-9401
Policy on the Use of Artificial Intelligence (AI) in Manuscript Preparation and Publication
Purpose: The Axis Community Research Journal (ACRJ) acknowledges the transformative potential and associated challenges of Artificial Intelligence (AI) in scholarly research and communication. This policy establishes clear ethical guidelines for the responsible and transparent use of AI tools by all contributors—authors, reviewers, and editors—throughout the manuscript lifecycle, ensuring the continued integrity, originality, and human accountability of the research we publish.
Scope: This policy applies to all uses of generative AI, large language models (LLMs), machine learning, automated analysis tools, and other AI-assisted technologies in the context of research submitted to, reviewed for, or edited within ACRJ.
Definitions:
- AI Tool: Any software or system that employs machine learning, natural language processing, or other computational techniques to perform tasks typically requiring human intelligence (e.g., text generation, data analysis, literature synthesis, language translation/editing).
- AI-Assisted Content: Any part of a manuscript, analysis, or review report that has been generated, substantially drafted, or synthetically altered by an AI tool.
- Human Oversight: The indispensable role of human authors, reviewers, and editors in verifying, validating, and taking ultimate responsibility for all intellectual content and decisions.
Policy Details:
1. Transparency and Disclosure
- Mandatory Declaration: Authors must explicitly disclose the use of AI tools in the preparation of their manuscript. This includes, but is not limited to, AI assistance in writing, data analysis, literature searches, or image creation.
- Statement of Use: A declaration must be included in the Acknowledgements or a dedicated section of the manuscript, specifying the AI tool(s) used (e.g., ChatGPT, Gemini, DALL-E), the purpose of their use, and which sections or analyses were AI-assisted.
- Reviewer and Editor Disclosure: Reviewers and editors must not use AI tools to generate their review reports or editorial assessments without prior approval from the journal. If an AI tool is used for language polishing of a report, this must be declared.
2. Authorship, Attribution, and Accountability
- AI and Authorship: AI tools cannot be listed as authors. Authorship requires accountability for the work, which an AI entity cannot hold.
- Ultimate Human Responsibility: The human authors retain full responsibility and accountability for the entire content of the manuscript, including any portions produced by an AI tool. This encompasses the accuracy of data, integrity of analysis, fairness of interpretations, and absence of plagiarism.
- Acknowledgment: Significant AI contributions must be formally acknowledged, as specified above.
3. Integrity, Originality, and Quality Assurance
- Plagiarism Screening Policy: The Axis Community Research Journal (ACRJ) screens all submitted manuscripts for plagiarism prior to peer review using professional plagiarism detection software (e.g., Turnitin / iThenticate). Manuscripts found to contain significant plagiarism, redundant publication, or unethical use of AI-generated content will be rejected outright or returned to authors for correction, depending on severity. The journal follows COPE guidelines in handling suspected cases of plagiarism and academic misconduct.
- Plagiarism and Originality: AI-generated text may inadvertently reproduce content from its training data without attribution. Authors are wholly responsible for ensuring the originality of their manuscript. All submissions will be screened with plagiarism detection software.
- Validation of Outputs: Authors must critically verify all factual assertions, references, data analyses, and interpretations provided by an AI tool. AI tools can "hallucinate" or produce plausible but incorrect information.
- Methodological Transparency: For manuscripts where the development or application of an AI model is the core of the research, the methodology must be described with exceptional clarity, including details of training data, algorithms, validation processes, and steps taken to mitigate bias.
4. Ethical Use and Bias Mitigation
- Bias and Fairness: Authors and editors must be cognizant that AI tools can perpetuate or amplify societal biases present in their training data. This is of particular concern in community-engaged research. Efforts to identify and mitigate such biases must be stated where relevant.
- Data Privacy and Confidentiality: Confidential, sensitive, or personally identifiable information from research participants must not be entered into AI tools unless explicitly covered by the tool's privacy policy and the study's ethical approvals. The protection of community data is paramount.
5. Use in Peer Review and Editorial Processes
- The core intellectual judgment of peer review is a human responsibility. While AI tools may be used by the editorial office for initial technical checks (e.g., formatting, reference validation), the substantive evaluation of scholarly merit, methodological rigor, and community relevance must be performed by human experts.
- Any experimental use of AI in the editorial decision-making process will be communicated transparently to authors and reviewers.
Implementation and Enforcement:
- Compliance: Adherence to this policy is a condition for submission and publication. Failure to disclose AI use, or the submission of AI-generated content presented as purely human-authored, constitutes a breach of publication ethics.
- Consequences: Manuscripts found in violation of this policy may be rejected or, if discovered post-publication, may be subject to correction or retraction.
- Policy Review: This policy will be reviewed annually by the editorial board to adapt to the rapidly evolving landscape of AI technology and its applications in academia.
By implementing this policy, ACRJ aims to harness the benefits of AI for enhancing research efficiency and communication while rigorously safeguarding the foundational principles of scholarly integrity, transparency, and human-led intellectual contribution that are essential to trustworthy community-engaged research.

