Policy on the Use of Generative Artificial Intelligence
- General Principles
The journal supports the transparent and responsible use of generative artificial intelligence (GenAI) tools in scientific research and publishing.
The use of such tools is permitted provided that:
- principles of academic integrity are upheld;
- transparency in their application is ensured;
- full responsibility for the research outcomes remains with the authors.
- Disclosure of AI Use
If generative AI is used, authors are required to disclose its use within the manuscript.
To ensure transparency, the journal recommends the use of GAIDeT (Generative AI Delegation Taxonomy), an approach that enables clear documentation of the tasks delegated to generative AI while preserving author accountability.
The declaration must include:
- identification of the tool used (name and version);
- description of tasks delegated to AI;
- a statement confirming the authors’ responsibility for the final output.
The declaration should be placed in the manuscript before the reference list.
The journal recommends using the GAIDeT Declaration Generator for standardized statements:
https://panbibliotekar.github.io/gaidet-declaration/
Authors are also encouraged to cite the foundational publication:
Suchikova, Y., Tsybuliak, N., Teixeira da Silva, J. A., & Nazarovets, S. (2025). GAIDeT (Generative AI Delegation Taxonomy): A taxonomy for humans to delegate tasks to generative artificial intelligence in scientific research and publishing. Accountability in Research.
https://doi.org/10.1080/08989621.2025.2544331
Additional guidance is available in:
Recommendations for Authors and Editors on Transparent Disclosure of AI Contributions (GAIDeT):
https://doi.org/10.5281/zenodo.16941301
Example of a declaration:
The authors declare the use of generative AI during the research and writing process. According to the GAIDeT taxonomy (2025), the following tasks were delegated to GenAI tools under full human supervision: literature search and organization; data analysis; translation; ethical risk analysis. The GenAI tool used was ChatGPT-5. Full responsibility for the final version of the manuscript rests with the authors. GenAI tools are not listed as authors and bear no responsibility for the final results.
- Limitations of AI Use
Generative AI tools:
- cannot be listed as co-authors;
- cannot assume responsibility for the content of the publication;
- cannot replace scientific interpretation of results.
The use of AI does not exempt authors from responsibility for:
- the accuracy of data;
- the validity of conclusions;
- compliance with ethical standards.
- Use of AI in Peer Review
Peer review must be conducted exclusively by qualified experts.
The use of generative AI in the preparation of reviews is not permitted, as it may:
- compromise confidentiality;
- diminish expert accountability;
- fail to ensure an adequate level of scientific evaluation.
- Principles of Responsible Use
The journal regards generative AI as a supportive research tool rather than an autonomous agent.
Its use must:
- be transparent;
- remain under human control;
- not substitute for the author’s intellectual contribution;
- not introduce risks to the reliability of scientific results.