Anil Madhavapeddy, Professor of Planetary Computing

Careful design of Large Language Model pipelines enables expert-level retrieval of evidence-based information from conservation syntheses

Radhika Iyer, Alec Christie, Anil Madhavapeddy, Sam Reynolds, Bill Sutherland and Sadiq Jaffer.

Working paper at Research Square.


Wise use of evidence to support efficient conservation action is key to tackling biodiversity loss with limited time and resources. Evidence syntheses provide key recommendations for conservation decision-makers by assessing and summarising evidence, but they are not always easy to access, digest, and use. Recent advances in Large Language Models (LLMs) present both opportunities and risks for building faster and more intuitive systems to access evidence syntheses and databases. Such systems for natural language search and open-ended evidence-based responses are pipelines comprising many components. The most critical of these are the choice of LLM and the strategy used to retrieve evidence from the database.
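To make the pipeline structure concrete, the sketch below shows the retrieve-then-generate pattern the abstract refers to, where the LLM is restricted to answering from retrieved evidence. This is a minimal illustration with hypothetical `retriever` and `llm` interfaces, not the implementation used in the paper.

```python
# Minimal sketch of a retrieval-augmented answering pipeline.
# `retriever` and `llm` are hypothetical interfaces, not the paper's code.

def answer_question(question: str, retriever, llm, k: int = 5) -> str:
    # 1. Retrieve the k most relevant synthesis documents for the question.
    evidence = retriever.search(question, top_k=k)

    # 2. Restrict the LLM to the retrieved evidence when answering.
    context = "\n\n".join(doc.text for doc in evidence)
    prompt = (
        "Answer the question using only the evidence below.\n\n"
        f"Evidence:\n{context}\n\n"
        f"Question: {question}"
    )
    return llm.generate(prompt)
```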

We evaluate the performance of ten LLMs across six database retrieval strategies against human experts, using synthetic multiple-choice exams on the effects of conservation interventions built from the Conservation Evidence database. Over 45 filtered questions, LLM performance was comparable with that of human experts, both in answering correctly and in retrieving the document used to generate each question. Across 1867 unfiltered questions, LLMs demonstrated a level of conservation-specific knowledge, though performance varied across topic areas. A hybrid retrieval strategy that combines keywords and vector embeddings performed best by a substantial margin. We also tested a previous-generation state-of-the-art LLM, which was outperformed by all ten current models, including smaller, cheaper ones.
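One common way to combine keyword and vector-embedding search is reciprocal rank fusion, sketched below. The `keyword_index` and `vector_index` objects are hypothetical stand-ins (e.g. a BM25 index and an embedding similarity index); the abstract does not specify how the paper's hybrid strategy fuses the two rankings, so this is an illustrative assumption rather than the authors' method.

```python
from collections import defaultdict

def hybrid_search(question, keyword_index, vector_index, top_k=5, c=60):
    """Fuse keyword and embedding rankings with reciprocal rank fusion."""
    keyword_hits = keyword_index.search(question, top_k=50)  # e.g. BM25 ranking
    vector_hits = vector_index.search(question, top_k=50)    # embedding similarity ranking

    # Each list is assumed to be document ids ordered from best to worst.
    scores = defaultdict(float)
    for hits in (keyword_hits, vector_hits):
        for rank, doc_id in enumerate(hits):
            scores[doc_id] += 1.0 / (c + rank + 1)

    # Return the top_k documents with the highest fused scores.
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```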

Our findings suggest that, with careful domain-specific design, LLMs could be powerful tools for enabling expert-level use of evidence syntheses and databases. However, general LLMs used 'out-of-the-box' are likely to perform poorly and misinform decision-makers. By establishing that LLMs can perform comparably to human synthesis experts when giving restricted responses to queries of evidence syntheses and databases, our approach provides a foundation for future work to quantify LLM performance on open-ended responses.

# 1st Jan 2025   papers ai biodiversity conservation evidence llms preprint

Older versions

There are earlier revisions of this paper available below for historical reasons. Please cite the latest version of the paper above instead of these.


This is v1 of the publication from Nov 2024.

# 1st Nov 2024   ai biodiversity conservation evidence llms preprint