Assessing the quality of reports of randomized clinical trials: is blinding necessary?

Authors: Jadad AR (1,2), Moore RA (1,2), Carroll D (1,2), Jenkinson C (3), Reynolds DJ (4), Gavaghan DJ (1), McQuay HJ (1,2)
Affiliations:
(1) Nuffield Orthopaedic Centre, Oxford
(2) Oxford Regional Pain Relief Unit, University of Oxford
(3) Department of Public Health and Primary Care, University of Oxford
(4) University Department of Clinical Pharmacology, University of Oxford
Source: Control Clin Trials. 1996 Feb;17(1):1-12.
DOI: 10.1016/0197-2456(95)00134-4
Publication date: 1996 Feb
E-Publication date: March 2, 1999
Availability: abstract
Copyright: © 1996 Published by Elsevier Inc.
Language: English
Countries: Not specified
Location: Not specified
Correspondence address: Alejandro R. Jadad, Department of Clinical Epidemiology and Biostatistics, McMaster University, 1200 Main Street West, Hamilton, Ontario, Canada L8N 3Z5.

Article abstract

It has been suggested that the quality of clinical trials should be assessed by blinded raters to limit the risk of introducing bias into meta-analyses and systematic reviews, and into the peer-review process. There is very little evidence in the literature to substantiate this. This study describes the development of an instrument to assess the quality of reports of randomized clinical trials (RCTs) in pain research and its use to determine the effect of rater blinding on the assessments of quality. A multidisciplinary panel of six judges produced an initial version of the instrument. Fourteen raters from three different backgrounds assessed the quality of 36 research reports in pain research, selected from three different samples. Seven were allocated randomly to perform the assessments under blind conditions. The final version of the instrument included three items. These items were scored consistently by all the raters regardless of background and could discriminate between reports from the different samples. Blind assessments produced significantly lower and more consistent scores than open assessments. The implications of this finding for systematic reviews, meta-analytic research and the peer-review process are discussed.
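The abstract names a three-item instrument but does not spell out the scoring rules; the published instrument (now widely known as the Jadad scale) awards points for reported randomization, double blinding, and description of withdrawals, with adjustments for whether the methods described are appropriate. A minimal sketch of that scoring logic, under those assumptions, might look like:

```python
def jadad_score(randomized, randomization_appropriate,
                double_blind, blinding_appropriate,
                withdrawals_described):
    """Jadad-style quality score (0-5) for an RCT report.

    `randomization_appropriate` / `blinding_appropriate` take True (method
    described and adequate), False (method described but inadequate), or
    None (method not described). This is an illustrative sketch, not the
    authors' published scoring sheet.
    """
    score = 0
    if randomized:
        score += 1                          # study described as randomized
        if randomization_appropriate is True:
            score += 1                      # adequate method (e.g., random number table)
        elif randomization_appropriate is False:
            score -= 1                      # inappropriate method described
    if double_blind:
        score += 1                          # study described as double blind
        if blinding_appropriate is True:
            score += 1                      # adequate blinding (e.g., identical placebo)
        elif blinding_appropriate is False:
            score -= 1                      # inappropriate blinding described
    if withdrawals_described:
        score += 1                          # withdrawals and dropouts accounted for
    return max(score, 0)                    # floor at zero

# A fully reported, well-conducted trial scores the maximum:
print(jadad_score(True, True, True, True, True))   # 5
```

In practice each rater would answer the three questions from the trial report alone, which is why the paper can compare scores between blinded and open raters.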
