Setting the standards for responsible AI use in evidence synthesis

Cochrane, the Campbell Collaboration, JBI, and the Collaboration for Environmental Evidence have published a joint position statement on the responsible use of artificial intelligence (AI) in evidence synthesis. This collaborative effort is an important step in shaping how AI is integrated into the production of high-quality, trustworthy research.

Evidence syntheses, including systematic reviews, are built on the principles of research integrity. There is wide recognition that AI and automation have the potential to transform the way we produce evidence syntheses. However, this technology is also potentially disruptive. To safeguard evidence synthesis as the cornerstone of trusted, evidence-informed decision making, Cochrane has come together with other organizations to collaborate on a responsible and pragmatic approach to AI use in evidence synthesis.

The statement supports the Responsible use of AI in evidence SynthEsis (RAISE) recommendations, a framework designed to guide the ethical and transparent use of AI across the evidence synthesis ecosystem. The statement also sets out clear expectations for evidence synthesists, including reporting AI use transparently, assuming responsibility for it, and ensuring that AI does not compromise the methodological rigour or integrity of their synthesis.

“This joint position statement marks a pivotal moment for the evidence synthesis community,” said Ella Flemyng, Cochrane’s Head of Editorial Policy and Research Integrity and co-convenor of the joint AI Methods Group that authored the position statement.

“By aligning Cochrane, the Campbell Collaboration, JBI and the Collaboration for Environmental Evidence around the RAISE recommendations, we’re setting a clear, shared standard for the responsible use of AI in evidence synthesis. It’s a proactive step to safeguard research integrity while embracing innovation – ensuring that AI enhances, rather than undermines, the evidence we produce. This guidance empowers evidence synthesists to make informed, transparent decisions, and supports them in navigating the evolving AI landscape with more confidence and accountability.”

The statement acknowledges the opportunities and risks posed by AI, particularly large language models, and calls for human oversight, transparency, and justification when AI is used in evidence synthesis. It also urges AI tool developers to proactively align with RAISE principles, providing clear documentation and transparency around limitations and potential biases.

Published simultaneously in Cochrane Database of Systematic Reviews, Campbell Systematic Reviews, JBI Evidence Synthesis, and Environmental Evidence, the statement reflects a unified commitment to responsible innovation across the field.