Single data extraction generated more errors than double data extraction in systematic reviews

Authors: Buscemi N (1), Hartling L (1), Vandermeer B (1), Tjosvold L (1), Klassen TP (1)
Affiliations:
(1) Department of Pediatrics, University of Alberta/Capital Health Evidence-Based Practice Centre
Source: J Clin Epidemiol. 2006 Jul;59(7):697-703
DOI: 10.1016/j.jclinepi.2005.11.010
Publication date: 2006 Jul
E-Publication date: March 15, 2006
Availability: abstract
Copyright: © 2006 Elsevier Inc. Published by Elsevier Inc. All rights reserved.
Language: English
Countries: Not specified
Location: Not specified
Correspondence address: Buscemi N: nina.buscemi@ualberta.ca

Article abstract

BACKGROUND AND OBJECTIVE:

To conduct a pilot study to compare the frequency of errors that accompany single vs. double data extraction, compare the estimate of treatment effect derived from these methods, and compare the time requirements for these methods.

METHODS:

Reviewers were randomized to the role of data extractor or data verifier and were blinded to the study hypothesis. The frequency of errors associated with each method of data extraction was compared using the McNemar test. The data set produced by each method was used to calculate an efficacy estimate using standard meta-analytic techniques. The time requirement for each method was compared using a paired t-test.
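
As a rough illustration of the statistical comparisons described above, the sketch below applies a McNemar test to paired error counts and a paired t-test to extraction times. The paper does not publish its analysis code or raw counts, so every number here is a hypothetical placeholder, not the study's data.

```python
# Illustrative sketch only; all counts and times below are made up.
from scipy.stats import ttest_rel
from statsmodels.stats.contingency_tables import mcnemar

# Paired 2x2 table over extracted data items:
# rows = single extraction (error / no error),
# columns = double extraction (error / no error).
table = [[10, 35],    # single error:   double error / double no error
         [20, 400]]   # single no error: double error / double no error

# McNemar test on paired error frequencies, as in the Methods above.
result = mcnemar(table, exact=True)
print(f"McNemar p-value: {result.pvalue:.3f}")

# Paired t-test on per-review extraction times (minutes), also hypothetical.
time_single = [55, 62, 48, 70, 51]
time_double = [80, 95, 77, 104, 82]
t_stat, p_val = ttest_rel(time_single, time_double)
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_val:.3f}")
```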

RESULTS:

Single data extraction resulted in more errors than double data extraction (relative difference: 21.7%, P = .019). Effect estimates did not differ substantially between methods for most outcomes. Single data extraction took less time on average than double data extraction (relative difference: 36.1%, P = .003).
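
For readers unfamiliar with the relative-difference figures above, the snippet below shows one common way such a percentage can be computed from paired error counts. The abstract reports neither the raw counts nor its exact definition of relative difference, so the counts and the choice of denominator here are assumptions made purely for illustration.

```python
# Hypothetical counts chosen so the result matches the reported 21.7%;
# they are NOT the study's data, and the denominator choice is an assumption.
errors_single, errors_double = 280, 230
rel_diff = (errors_single - errors_double) / errors_double * 100
print(f"Relative difference: {rel_diff:.1f}%")  # -> 21.7%
```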

CONCLUSION:

When single data extraction is used in systematic reviews, reviewers and readers should be mindful of the potential for more errors and the impact these errors may have on effect estimates.
