Conservation Evidence Copilots
The Conservation Evidence team at the University of Cambridge has spent years screening 1.6m+ scientific papers on conservation, as well as manually summarising 8600+ studies relating to conservation actions. However, progress is limited by the specialised skills needed to screen and summarise relevant studies -- it took more than 75 person-years to manually curate the current database, and only a few hundred papers can be added each year! We are working on AI-driven techniques to accelerate the addition of robust evidence to the Conservation Evidence (CE) database via automated literature scanning.
The goal of the Conservation Evidence project is to transform conservation so that evidence is routinely embedded in decisions to improve outcomes for biodiversity and society. CE is becoming the authoritative, most comprehensive, freely available platform for evidence-led conservation and is starting to profoundly change the way in which conservationists access and use evidence for improving the state of the planet.

The CE collation and synthesis work has significantly improved the availability of evidence for use in conservation practice. It remains the only resource of evidence synopses for biodiversity conservation and the largest database of effectiveness reviews of actions outside the field of medicine. By carrying out reviews on an industrial scale, CE can produce them for a fraction of the cost seen in comparable fields such as medicine. Using subject-wide evidence synthesis, CE systematically searches the literature and summarises the results from (and provides citations for) each study testing the effectiveness of an action. As of April 2024, CE had read 1.6 million paper titles in 17 languages (326 non-English journals) and reviewed evidence for more than 3,600 conservation actions, freely available on their website, with collaboration from over 380 international academics and practitioners.
Accelerating literature surveys with LLMs
We got involved from the computer science side in 2023, as part of the AI@CAM competition, to harness the momentum behind machine learning to accelerate conservation actions. Our overall aim is to help CE dramatically accelerate their data searching and data extraction pipelines. Currently, the searching of literature and the summarising of key data is undertaken by human experts. Although this way of working is time-consuming, it benefits from being thorough and replicable. The main difficulties lie in the subtleties of deciphering study designs and methodologies, and in judging whether controls are actually appropriate for testing the effectiveness of the specified action. Any LLM-based automation that we deploy must account for these as part of the validation pipeline, as sketched below.
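As an illustration of what such a validation step might look like, the following is a minimal sketch (not the deployed pipeline): a model is asked for a structured verdict on one inclusion criterion, and any reply that does not parse into the expected fields is handed back to a human screener. The `call_llm` parameter is a hypothetical stand-in for whichever model API is actually in use.

```python
# A minimal sketch, NOT the deployed CE pipeline: request a structured
# screening verdict and reject anything that fails validation so a
# human can re-screen it.
import json

PROMPT = """You are screening a paper for a conservation evidence review.
Given the abstract below, reply with JSON containing exactly these fields:
  "tests_action": true or false (does the study test a conservation action?),
  "has_valid_control": true or false (is there an appropriate control or comparator?),
  "reason": a short explanation.

Abstract:
{abstract}
"""

REQUIRED_FIELDS = {"tests_action", "has_valid_control", "reason"}


def screen_abstract(abstract: str, call_llm):
    """Return a validated verdict dict, or None to flag for human review."""
    raw = call_llm(PROMPT.format(abstract=abstract))
    try:
        verdict = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(verdict, dict) or not REQUIRED_FIELDS.issubset(verdict):
        return None
    return verdict


def fake_llm(prompt):
    # Canned response standing in for a real model call.
    return '{"tests_action": true, "has_valid_control": false, "reason": "no untreated comparison site"}'


print(screen_abstract("Example abstract text.", fake_llm))
```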
The collaboration originally began in 2022 as part of the Computer Science 1B group projects.
Through 2024, we evaluated ten different LLMs against human experts using the CE database, leading to our PLOS ONE paper on the careful design of LLM pipelines for retrieving evidence-based information from syntheses and databases.
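A toy sketch of the kind of scoring used in such comparisons is below: given expert include/exclude labels and a model's decisions, compute recall and precision. The labels are made up for the example and are not drawn from the CE database.

```python
# Illustrative scoring of model decisions against expert labels.

def recall_precision(expert: dict, model: dict):
    """Both arguments map paper id -> True (include) or False (exclude)."""
    tp = sum(1 for p, inc in expert.items() if inc and model.get(p))
    fn = sum(1 for p, inc in expert.items() if inc and not model.get(p))
    fp = sum(1 for p, inc in expert.items() if not inc and model.get(p))
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return recall, precision


expert_labels = {"paper-1": True, "paper-2": False, "paper-3": True}
model_labels = {"paper-1": True, "paper-2": True, "paper-3": False}
print(recall_precision(expert_labels, model_labels))  # (0.5, 0.5)
```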
Living Evidence Databases
In October 2025, we published a description of a pipeline for automatically keeping a living evidence database up to date.
The system features a hybrid retrieval model combining keyword search with semantic understanding, and integrates a human-AI collaborative process for refining inclusion criteria from complex protocols. We also incorporated an established, statistically principled stopping rule to ensure efficiency. In a baseline evaluation against a prior large-scale manual review, the fully automated pipeline achieved 97% recall and identified a significant number of relevant studies that were not included in the original review, demonstrating its viability as a foundational tool for maintaining living evidence databases.
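To give a flavour of the hybrid retrieval idea, the sketch below merges a keyword ranking and a semantic (embedding-similarity) ranking over the same candidate papers using reciprocal rank fusion. The two input rankings are placeholders for whatever BM25 index and embedding model a real pipeline would use; this is an illustration, not the published system.

```python
# Flavour-only sketch of hybrid retrieval via reciprocal rank fusion (RRF).
from collections import defaultdict


def reciprocal_rank_fusion(rankings, k=60):
    """Merge ranked lists of paper ids (best first) into one ranking.

    The constant k dampens the influence of top ranks; 60 is the value
    used in the original RRF formulation.
    """
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, paper_id in enumerate(ranking, start=1):
            scores[paper_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)


keyword_ranked = ["paper-17", "paper-03", "paper-42", "paper-08"]   # e.g. from a BM25 index
semantic_ranked = ["paper-42", "paper-17", "paper-56"]              # e.g. from embedding similarity
print(reciprocal_rank_fusion([keyword_ranked, semantic_ranked]))
# Papers would then be screened in this fused order until the stopping rule fires.
```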
This work was earlier presented at a
Challenges in the AI-Evidence Era
Our work on LLM-based evidence synthesis has gained urgency in 2025, as the body of scientific literature faces challenges from a rising tide of fake and AI-generated papers (see the activity notes below).
Technology for Conservation, Not Division
We also conducted a horizon scan on the potential for AI to revolutionize conservation. In our own work, we aim to:
- Respect and amplify human expertise rather than replacing it via "human-in-the-loop" methods
- Follow participatory design principles with conservation practitioners
- Maintain open source and open data approaches with thorough documentation to facilitate reproducible outputs
- Address capacity-building needs with respect to AI capability, particularly in the Global South
- Keep conservation goals, not short-term technology trends, at the center of our research
Activity
On the path to the UK/India AI Summit with OpenUK and the ATI – Research note (Nov 2025)
Royal Society's Future of Scientific Publishing meeting – Research note (Jul 2025)
Is AI poisoning the scientific literature? Our comment in Nature – Note about Will AI speed up literature reviews or derail them entirely? (Jul 2025)
EEG internships for the summer of 2025 – Research note (Jun 2025)
Visiting National Geographic HQ and the Urban Exploration Project – Research note (Jun 2025)
What I learnt at the National Academy of Sciences US-UK Forum on Biodiversity – Research note (Jun 2025)
Evaluating a human-in-the-loop AI framework to improve inclusion criteria for evidence synthesis – Research idea (available, Any level, Jun 2025)
Evaluating LLMs for providing evidence-based information on conservation actions – Research idea (ongoing, Any level, Jun 2025)
We become Junior Rangers at Shenandoah – Research note (May 2025)
Out-of-the-box LLMs are not ready for conservation decision making – Note about Careful design of Large Language Model pipelines enables expert-level retrieval of evidence-based information from syntheses and databases (May 2025)
Humans are the ones that will save nature, helped by AI – Research note (May 2025)
Technology needs to unite conservation, not divide it – Note about The potential for AI to revolutionize conservation: a horizon scan (Apr 2025)
Semi distributed filesystems with ZFS and Sanoid – Research note (Apr 2025)
LIFE becomes an Official Statistic of the UK government – Research note (Mar 2025)
A fully AI-generated paper just passed peer review; notes from our evidence synthesis workshop – Research note (Mar 2025)
Our EEG group discussion on 'useful' AI tools – Research note (Mar 2025)
The AIETF arrives, and not a moment too soon – Research note (Feb 2025)
Thoughts on the National Data Library and private research data – Research note (Feb 2025)
Fake papers abound in the literature – Research note (Feb 2025)
Using computational SSDs for vector databases – Research idea (available, MPhil level, Feb 2025)
Preprint on using LLMs for evidence-based decision support – Note about Careful design of Large Language Model pipelines enables expert-level retrieval of evidence-based information from syntheses and databases (Nov 2024)
COMPASS 2024 report on the CoRE stack RIC meeting – Research note (Jul 2024)
Interview with AI@CAM about conservation – Research note (Jun 2024)
Assessing mangrove literature for conservation evidence – Research idea (expired, Part II level, Jan 2024)
Spatial and multi-modal extraction from conservation literature – Research idea (expired, MPhil level, Jan 2024)
Accurate summarisation of threats for conservation evidence literature – Research idea (ongoing, MPhil level, Jan 2024)
Evaluating RAG pipelines for conservation evidence – Research idea (completed, Any level, Jan 2024)
Crawling grey literature for conservation evidence – Research idea (completed, Any level, Jan 2024)
Planetary Computing – Project (2022–present)
References
Steps towards an Ecology for the Internet
Anil Madhavapeddy, Sam Reynolds, Alec Christie, David Coomes, Michael Dales, Patrick Ferris, Ryan Gibb, Hamed Haddadi, Sadiq Jaffer, Josh Millar, Cyrus Omar, Bill Sutherland, and Jon Crowcroft.
Paper in the proceedings of the sixth decennial Aarhus conference: Computing X Crisis.
Will AI speed up literature reviews or derail them entirely?
Sam Reynolds, Alec Christie, Lynn Dicks, Sadiq Jaffer, Anil Madhavapeddy, and Bill Sutherland.
Journal paper in Nature (vol 643 issue 8071).
Careful design of Large Language Model pipelines enables expert-level retrieval of evidence-based information from syntheses and databases
Radhika Iyer, Alec Christie, Anil Madhavapeddy, Sam Reynolds, Bill Sutherland, and Sadiq Jaffer.
Journal paper in PLOS ONE (vol 20 issue 5).
The potential for AI to revolutionize conservation: a horizon scan
Sam Reynolds, Sara Beery, Neil Burgess, Mark Burgman, Stuart Butchart, Steven J. Cooke, David Coomes, Finn Danielsen, Enrico Di Minin, América Paz Durán, Francis Gassert, Amy Hinsley, Sadiq Jaffer, Julia P.G. Jones, Binbin V. Li, Oisin Mac Aodha, Anil Madhavapeddy, Stephanie O'Donnell, Bill Oxbury, Lloyd Peck, Nathalie Pettorelli, Jon Paul Rodríguez, Emily Shuckburgh, Bernardo Strassburg, Hiromi Yamashita, Zhongqi Miao, and Bill Sutherland.
Journal paper in Trends in Ecology & Evolution.