A real-world evaluation of the implementation of NLP technology in abstract screening of a systematic review

DOI

https://doi.org/10.1101/2022.02.24.22268947

Language of the publication
English
Date
2022-02-25
Type
Submitted manuscript
Author(s)
  • Perlman-Arrow, Sara
  • Loo, Noel
  • Bobrovitz, Niklas
  • Yan, Tingting
  • Arora, Rahul K.
Publisher
medRxiv

Abstract

The laborious and time-consuming nature of systematic review production hinders the dissemination of up-to-date evidence synthesis. Well-performing natural language processing (NLP) tools for systematic reviews have been developed, showing promise to improve efficiency. However, the feasibility and value of these technologies have not been comprehensively demonstrated in a real-world review. We developed an NLP-assisted abstract screening tool that provides text inclusion recommendations, keyword highlights, and visual context cues. We evaluated this tool in a living systematic review on SARS-CoV-2 seroprevalence, conducting a quality improvement assessment of screening with and without the tool. We evaluated changes to abstract screening speed, screening accuracy, characteristics of included texts, and user satisfaction. The tool improved efficiency, reducing screening time per abstract by 45.9% and decreasing inter-reviewer conflict rates. The tool conserved the precision of article inclusion (positive predictive value: 0.92 with the tool vs 0.88 without) and recall (sensitivity: 0.90 vs 0.81). The summary statistics of included studies were similar with and without the tool. Users were satisfied with the tool (mean satisfaction score of 4.2/5). We also evaluated an abstract screening process in which one human reviewer was replaced by the tool's votes, finding that this maintained recall (0.92 for one person plus the tool vs 0.90 for two tool-assisted humans) and precision (0.91 vs 0.92) while reducing screening time by 70%. Implementing an NLP tool in this living systematic review improved efficiency, maintained accuracy, and was well received by researchers, demonstrating the real-world effectiveness of NLP in expediting evidence synthesis.

Subject

  • Health

Pagination

1-32

Peer review

No

Open access level

Green

Submitted date
2022-05-25

Sponsors

SeroTracker receives funding for SARS-CoV-2 seroprevalence study evidence synthesis from the Public Health Agency of Canada through Canada’s COVID-19 Immunity Task Force, the World Health Organization Health Emergencies Programme, the Robert Koch Institute, and the Canadian Medical Association Joule Innovation Fund.

Collection(s)

Public health practice
