Anil Madhavapeddy, Professor of Planetary Computing

Evaluating a human-in-the-loop AI framework to improve inclusion criteria for evidence synthesis

This is an idea proposed in 2025 as a good starter project, and is currently being worked on by Radhika Agrawal. It is co-supervised with Alec Christie and Sadiq Jaffer.

Whenever we do evidence synthesis (especially for conservation outcomes) to distil the world's scientific literature into actionable insights, we have to decide which published studies to include or exclude, and why they are categorised as such. This can be a challenging process: inclusion criteria are sometimes not very reproducible or clearly defined, leading to confusion between reviewers and more time-consuming reviews.

In AI-assisted review methods, we are increasingly finding that LLMs may interpret inclusion criteria differently to human reviewers, potentially because human experts implicitly assume things that are not obvious to those working outside the review team (or interpret things differently to fellow reviewers). We trialled an informal process earlier this year to iterate over the inclusion/exclusion criteria for an evidence synthesis using synthetic studies that represent "edge cases": studies for which it is difficult to agree whether they should be in or out. Through back-and-forth with an LLM, human reviewers were able to refine and improve their inclusion criteria.
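The iteration loop described above could be sketched roughly as follows. This is a minimal illustration, not the tool itself: `llm_screen` is a hypothetical stub standing in for a real LLM call (keyword matching here), and the studies and criteria are invented for the example. The key step is surfacing the edge cases where the LLM's include/exclude decision disagrees with the human reviewer's, since those are the cases worth discussing when rewording the criteria.

```python
from dataclasses import dataclass

@dataclass
class Study:
    title: str
    abstract: str
    human_decision: bool  # the reviewer's include (True) / exclude (False) label

def llm_screen(criteria: str, study: Study) -> bool:
    # Hypothetical stub: a real implementation would prompt an LLM with the
    # criteria and the study abstract, then parse its include/exclude answer.
    return "field experiment" in study.abstract.lower()

def find_disagreements(criteria: str, studies: list[Study]) -> list[Study]:
    """Return the studies where the LLM and the human reviewer disagree.

    In the refinement loop, these disagreements prompt the reviewers to
    clarify the wording of the criteria before screening again."""
    return [s for s in studies if llm_screen(criteria, s) != s.human_decision]

# Invented example data for illustration only.
criteria = "Include field experiments testing a conservation intervention."
studies = [
    Study("A", "A field experiment on hedgerow restoration.", True),
    Study("B", "A modelling study of species range shifts.", False),
    Study("C", "A lab trial of seed treatments.", True),  # an edge case
]
print([s.title for s in find_disagreements(criteria, studies)])  # → ['C']
```

Study "C" is flagged because the literal criteria exclude lab trials while the reviewer included it, signalling an implicit assumption the criteria should spell out.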

This project will build on this work to develop a prototype, open-source tool that enables users to refine their inclusion criteria with the help of an LLM chatbot. This will be extremely useful for anyone conducting any type of evidence synthesis and so has great potential to be an impactful project beyond "just" the field of conservation.

1st Jun 2025 · ideas ai conservation evidence idea-beginner idea-ongoing llms urop
