We have just uploaded a preprint on using LLMs for conservation evidence, building on our work on large-scale crawling of the academic literature. Well done in particular to Radhika Iyer, who did the bulk of the evaluation as part of a very productive summer internship with us!

This work evaluates whether LLMs can facilitate evidence-based decision support for conservation by testing ten different LLMs against human experts on multiple-choice questions about conservation interventions. We found that, with careful design, open-book LLM performance was competitive with human experts on filtered questions, both in answering them correctly and in retrieving the source documents. However, general LLMs used out of the box performed poorly, highlighting the importance of domain-specific design to avoid misinforming decision-makers.
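For readers curious what an "open-book" multiple-choice evaluation looks like mechanically, here is a minimal sketch, not the paper's actual pipeline: retrieve candidate passages, prompt the model with the question, options, and evidence, then score both answer accuracy and whether the cited source document was among the retrieved passages. All names here (`CORPUS`, `QUESTIONS`, `retrieve`, `ask_llm`) are hypothetical placeholders.

```python
# Hypothetical sketch of an open-book multiple-choice evaluation with
# retrieval scoring; not the pipeline used in the preprint.
from collections import Counter

CORPUS = {  # doc_id -> passage text (stand-in for an evidence corpus)
    "ce_0001": "Installing nest boxes increased occupancy by hole-nesting birds ...",
    "ce_0002": "Hedgerow planting had mixed effects on pollinator abundance ...",
}

QUESTIONS = [  # each question records the document supporting its answer
    {
        "question": "What effect did nest boxes have on hole-nesting birds?",
        "options": {"A": "Increased occupancy", "B": "No effect", "C": "Decreased occupancy"},
        "answer": "A",
        "source_doc": "ce_0001",
    },
]

def retrieve(query: str, k: int = 3) -> list[str]:
    """Toy keyword-overlap retriever; a real system would use BM25 or dense search."""
    q_terms = set(query.lower().split())
    scored = [(len(q_terms & set(text.lower().split())), doc_id)
              for doc_id, text in CORPUS.items()]
    return [doc_id for score, doc_id in sorted(scored, reverse=True)[:k] if score > 0]

def ask_llm(prompt: str) -> str:
    """Stub standing in for a real model call; must return one option letter."""
    return "A"  # replace with an actual LLM API call

def evaluate() -> dict:
    tallies = Counter()
    for q in QUESTIONS:
        retrieved = retrieve(q["question"])
        evidence = "\n\n".join(CORPUS[d] for d in retrieved)
        options = "\n".join(f"{letter}. {text}" for letter, text in q["options"].items())
        prompt = (f"Evidence:\n{evidence}\n\nQuestion: {q['question']}\n"
                  f"{options}\nAnswer with a single letter.")
        tallies["correct"] += ask_llm(prompt).strip().upper() == q["answer"]
        tallies["source_retrieved"] += q["source_doc"] in retrieved
    n = len(QUESTIONS)
    return {"accuracy": tallies["correct"] / n,
            "retrieval_hit_rate": tallies["source_retrieved"] / n}

print(evaluate())
```

Scoring retrieval separately from answer accuracy matters here: a model can guess the right option without the supporting document, and for decision support the citation trail is part of the product.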