Artificial Intelligence Policy

This policy has been developed with regard to the positions of the Committee on Publication Ethics (COPE) on the use of artificial intelligence (AI) tools in scholarly publishing.

Use of AI by Authors

Authors may use AI tools (large language models, generative systems, etc.) as supporting aids in preparing a manuscript, in particular for text editing, grammar checking, translation, or data visualization. Authors nevertheless bear full responsibility for the content, accuracy, and originality of the submitted manuscript.

The use of AI tools to generate the core scholarly content of an article (formulating research conclusions, interpreting results, creating experimental data) is not permitted.

Authors are required to disclose in the manuscript that AI tools were used, specifying the name of each tool and the purpose for which it was used. AI tools may not be listed as authors or co-authors of the article.

Use of AI in Peer Review

Reviewers are prohibited from uploading manuscripts or parts thereof to AI tools, as this constitutes a breach of the confidentiality of the review process. A reviewer may not delegate the preparation of a review to AI tools.

Verification by the Editorial Office

The Journal's editorial office checks submitted materials for the use of AI tools with the online service StrikePlagiarism.com. Because the service's ability to detect AI-generated text is limited, the results of such checks do not constitute irrefutable proof of AI use. However, where there are substantiated grounds to believe that the text of a submission was fully or partially generated by AI tools (in particular, characteristic stylistic uniformity, unnatural turns of phrase, a mechanistic text structure, or comparisons and generalizations atypical of scholarly style), the editorial office reserves the right to decline further consideration of the submission.

© 2014-2026 Artificial Intelligence Policy - Zbirnyk. Developed by ІОЦ ВА