How is Cochrane advancing responsible AI for evidence synthesis?


Cochrane's Responsible Approach to Artificial Intelligence (AI) in Evidence Synthesis

 

Systematic reviews are founded on rigour, transparency, and replicability. Unchecked AI risks a surge in unreliable, biased, and low-quality reviews. Cochrane is addressing this challenge with a measured and responsible approach to ensure AI is used ethically and effectively in evidence synthesis.

 

1. Using AI to Support Review Authors

Cochrane has a long history of automation: it has used machine learning to identify randomised controlled trials (RCTs) since 2016. Leveraging its wealth of high-quality, structured data, current innovations include:

  • RevMan: Dynamic analysis reporting, allowing authors to insert live results while writing and updating reviews.
  • CENTRAL: A new feature in Cochrane's clinical trials database that flags retracted publications to authors.
  • Future Plans: Increasing support via Cochrane’s Evidence Pipeline (combining automation and crowd verification) and leveraging PICO annotations to inform decisions about new review proposals.

Alongside these technical advances, Cochrane is working on new guidance and training to improve AI literacy across the organisation.

 

2. Developing International Guidance and Standards

Cochrane is co-leading RAISE (Responsible AI in Evidence Synthesis), an international initiative to standardise recommendations for responsible AI use. Updated guidance released in June 2025 includes:

  • RAISE 1: Recommendations for practice across roles in the evidence synthesis ecosystem.
  • RAISE 2: Guidance on building and evaluating AI tools.
  • RAISE 3: Guidance on selecting and using AI tools.

Cochrane authors are permitted to use AI provided it upholds the principles of research integrity. All use of AI and automation must be disclosed and subject to human oversight, and authors remain accountable for the final content. Authors must also justify its use and demonstrate that it will not compromise methodological rigour.


 

3. Engaging with Developers and Collaborating Across the Sector

Cochrane is collaborating externally to align the use of AI tools:

  • AI Tool Developers: RAISE provides frameworks to guide developers on tool evaluation and public transparency, clarifying what information (e.g., strengths, limitations, potential biases) should be publicly available for users. Cochrane is collaborating with developers like Covidence on this.
  • AI Methods Group: Cochrane, Campbell, JBI, and the Collaboration for Environmental Evidence are working together to develop a shared position statement on AI use in systematic reviews.
  • DESTinY Project: A Wellcome-funded project developing AI-driven tools for evidence synthesis in climate and health. It includes a community survey to understand expectations about the acceptable level of error when using AI in reviews.
  • ESIC (Evidence Synthesis Infrastructure Collaborative): A Wellcome-funded initiative, co-led by Cochrane, to develop a roadmap for a global, user-centred evidence synthesis infrastructure, including costed solutions for responsible and safe AI (finalised July 2025).

Ultimately, Cochrane’s goal is to ensure AI enhances—not replaces—human judgment, making evidence more timely, usable, and equitable.
