AI Policies
African Educational Research Journal (AERJ) recognizes the increasing use of Artificial Intelligence (AI) and machine learning tools in academic research and scholarly publishing. As generative AI technologies such as ChatGPT, DeepSeek, Gemini, and Claude become more accessible, AERJ emphasizes the need to use these tools responsibly, ethically, and transparently. This policy sets forth the journal’s official stance on the use of AI by authors, reviewers, and editorial staff, in accordance with standards established by Elsevier and the Committee on Publication Ethics (COPE).
1. Use of AI Tools by Authors
Authors are permitted to use AI tools to assist in the preparation of their manuscripts, but this must be done responsibly and with full transparency. Acceptable uses include language refinement, grammar correction, formatting of references, and translation. However, AI tools must not be used to generate substantive content, conduct scientific reasoning, interpret data, or replace the author’s intellectual contribution.
All listed authors must be human. AERJ does not accept AI tools or bots as credited authors, as they cannot take responsibility for the work or provide accountability. Authorship requires meaningful intellectual input, consent to publication, and responsibility for content—all of which AI tools cannot fulfill.
2. Disclosure of AI Use
Authors must disclose any use of AI tools in their manuscript. This includes AI applications used for generating text, analyzing data, assisting with translations, or performing code-related tasks. The disclosure should be placed in the Acknowledgements section and should mention the tool’s name, version, and specific role.
An example of an appropriate disclosure is:
“The authors used ChatGPT (OpenAI, March 2025 version) to enhance the language and readability of the manuscript. All outputs were reviewed and verified by the authors.”
Failure to disclose AI usage will be considered a violation of publishing ethics and may lead to rejection or retraction.
3. Author Responsibility and Accountability
Authors are fully responsible for the accuracy, originality, and integrity of their submitted work, including any content that was developed with the assistance of AI tools. They must carefully check that AI-generated content does not include factual inaccuracies, fabricated references, plagiarized material, or biased language.
AERJ reserves the right to investigate and act on any misuse of AI, including the submission of predominantly AI-generated manuscripts or the inclusion of AI content without proper disclosure.
4. Use of AI in Peer Review
Peer reviewers must uphold the confidentiality, professionalism, and integrity expected in the review process. Reviewers must not use AI tools to analyze or summarize manuscripts, nor should they upload any part of a manuscript to AI platforms without express permission, as this may compromise confidentiality.
If a reviewer chooses to use AI tools for minor, non-confidential tasks such as improving the grammar of their review, they must disclose this to the editorial team. AERJ reserves the right to reject any review that has been inappropriately influenced or generated by AI.
5. Use of AI by Editorial Staff
AERJ’s editorial team may utilize AI tools for limited administrative tasks such as plagiarism checking, reference formatting, or language editing. However, AI will not be used to make editorial decisions. All acceptance and rejection decisions will be made by qualified human editors to maintain transparency and accountability.
6. Ethical Use and Prevention of Bias
All parties involved in the publication process must ensure that AI-generated content does not introduce bias, misinformation, or offensive material. Authors are expected to review AI outputs critically and avoid relying on AI for tasks that require expert academic judgment. The use of AI must never compromise scholarly rigor or ethical standards.
7. Violations and Consequences
Any attempt to misrepresent AI-generated content as original human work, fabricate data, or submit an AI-generated manuscript without disclosure will be treated as unethical conduct. Consequences may include:
Rejection of the manuscript
Retraction of the article post-publication
Notification of the authors’ affiliated institutions
Suspension or ban from future submissions to AERJ
All violations will be investigated following COPE’s guidelines on publication misconduct.
8. Policy Review and Updates
This AI policy will be periodically reviewed and updated to reflect changes in technology and publishing practices. AERJ remains committed to upholding academic integrity while supporting responsible innovation in scholarly communication.
References
Elsevier (2023). Generative AI Policies for Journals. Link
COPE (2023). Position Statement on Authorship and AI Tools. Link
COPE (2024). AI and Peer Review: Guidance Document. Link
COPE (2023). Discussion Paper: Ethical Considerations in the Use of Generative AI. Link
