Stiftung Tierärztliche Hochschule Hannover (TiHo) – TiHo eLib

Position statement on the use of AI in scientific writing and publishing

Large language models (LLMs) are rapidly reshaping scientific writing, reviewing, and publishing. Journals must respond in ways that safeguard trust while acknowledging the new realities created by these artificial intelligence-based technologies. This contribution details the position developed by the editors of Madagascar Conservation & Development on the use of LLMs in scholarly work and for publication in the journal. These tools can support authors by enhancing clarity, reducing language barriers, and mitigating structural inequities in global science, but recent editorial experience shows that the use of LLMs can generate errors and fabricated references, and can formulate false claims that may escape traditional peer review. In volunteer-run journals, such failures impose substantial burdens on editors and reviewers. Our position is simple: LLMs may be used to support and enable authors, but their results must never be trusted blindly. Authors remain fully responsible for ensuring the accuracy, originality, and validity of all content, regardless of the tools employed, and any use of LLMs must be disclosed transparently. Protecting scientific integrity remains a shared responsibility.
