Can Large Language Models facilitate evidence-based decision support for conservation?
Working paper at Research Square, Nov 2024

Abstract. Wise use of evidence to support efficient conservation action is key to tackling biodiversity loss with limited time and resources. Evidence syntheses provide key recommendations for conservation decision-makers by assessing and summarising evidence, but they are not always easy to access, digest, and use. Recent advances in Large Language Models (LLMs) present both opportunities and risks in enabling faster and more intuitive access to evidence databases. We evaluated the performance of ten LLMs (and three retrieval strategies) against six human experts in answering synthetic multiple-choice exams on the effects of conservation interventions, using the Conservation Evidence database. We found that open-book LLM performance was competitive with human experts on 45 filtered questions, both in answering them correctly and in retrieving the document used to generate each question. Across 1867 unfiltered questions, closed-book LLM performance demonstrated a level of conservation-specific knowledge, but this varied across topic areas. Hybrid retrieval performed substantially better than dense and sparse retrieval methods, whilst more recent LLMs performed substantially better than older ones. Our findings suggest that, with careful design, LLMs could be powerful tools for enabling expert-level use of evidence databases. However, general LLMs used out-of-the-box are likely to perform poorly and misinform decision-makers.
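The abstract contrasts hybrid retrieval with dense and sparse retrieval but does not spell out the fusion step. The sketch below is one common way to combine the two signals, reciprocal rank fusion (RRF); the paper does not confirm this exact method, and the document IDs shown are hypothetical.

```python
# A minimal sketch of hybrid retrieval via reciprocal rank fusion (RRF).
# Assumption: the paper's "hybrid" strategy merges a sparse (e.g. BM25)
# ranking with a dense (embedding) ranking; RRF is one standard fusion.

def rrf_fuse(rankings, k=60):
    """Fuse ranked lists of document IDs into a single hybrid ranking.

    Each input ranking is a best-first list of doc IDs. A document's
    RRF score is the sum over rankings of 1 / (k + rank), rank 1-indexed.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical ranked outputs for one query:
sparse = ["doc_bm25_top", "doc_a", "doc_b"]   # sparse (keyword) retriever
dense = ["doc_a", "doc_emb_top", "doc_b"]     # dense (embedding) retriever

print(rrf_fuse([sparse, dense]))
# -> ['doc_a', 'doc_b', 'doc_bm25_top', 'doc_emb_top']
```

Documents ranked highly by both retrievers (here `doc_a`) rise to the top, which is one intuition for why a hybrid strategy can outperform either retriever alone.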

Authors. Alec Christie, Radhika Iyer, Anil Madhavapeddy, Sam Reynolds, Bill Sutherland and Sadiq Jaffer

See Also. This publication was part of the Conservation Evidence Copilots project.

News Updates

Nov 2024. Preprint on LLMs for conservation evidence.
Jun 2024. Talk on the conservation copilot using LLMs.