

Randomised controlled trials (RCTs) are planned experiments, involving the random assignment of participants to interventions, and are seen as the gold standard of study designs for evaluating the effectiveness of a treatment in medical research in humans.

Study publication bias arises when studies are published, or not, depending on their results; it has received much attention. Empirical research consistently suggests that published work is more likely to be positive or statistically significant (P<0.05) than unpublished research. Study publication bias will lead to overestimation of treatment effects; it has been recognised as a threat to the validity of meta-analysis and can make the readily available evidence unreliable for decision making. There is additional evidence that research without statistically significant results takes longer to achieve publication than research with significant results, further biasing the evidence over time. This "time lag bias" (or "pipeline bias") will tend to add to the bias, since results from early available evidence tend to be inflated and exaggerated.
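To make the overestimation mechanism concrete, here is a minimal simulation sketch, not taken from the review: it assumes a deliberately crude, hypothetical publication rule under which only statistically significant trials reach publication, and shows that the average published effect then exceeds the true effect. All parameter values (true effect, trial size, number of trials) are invented for illustration.

```python
# Illustrative sketch: how publishing only significant trials inflates
# a naive pooled estimate. All numbers are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_effect = 0.1   # hypothetical true mean difference between arms
n_per_arm = 50      # hypothetical participants per trial arm
n_trials = 2000     # number of simulated RCTs

effects, pvalues = [], []
for _ in range(n_trials):
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(true_effect, 1.0, n_per_arm)
    result = stats.ttest_ind(treated, control)
    effects.append(treated.mean() - control.mean())
    pvalues.append(result.pvalue)

effects = np.asarray(effects)
pvalues = np.asarray(pvalues)
published = pvalues < 0.05  # crude stand-in for "published if significant"

print(f"true effect:                 {true_effect:.3f}")
print(f"mean effect over all trials: {effects.mean():.3f}")
print(f"mean effect, published only: {effects[published].mean():.3f}")
print(f"proportion published:        {published.mean():.3f}")
```

Under these assumptions, the mean over all simulated trials sits close to the true effect, while the mean over "published" trials is several times larger, which is the inflation described above.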
Within-study selective reporting bias relates to studies that have been published. It has been defined as the selection, on the basis of the results, of a subset of the original variables recorded for inclusion in a publication. Several different types of selective reporting within a study may occur. For example, selective reporting of analyses may include intention-to-treat versus per-protocol analyses, endpoint score versus change from baseline, or different time points or subgroups. Here we focus on the selective reporting of outcomes from those that were originally measured within a study: outcome reporting bias (ORB). The likely bias from selective outcome reporting is to overestimate the effect of the experimental treatment.
The original version of this systematic review summarised the empirical evidence for the existence of study publication bias and outcome reporting bias. It found that 12 of the 16 included empirical studies demonstrated consistent evidence of an association between positive or statistically significant results and publication, and that statistically significant outcomes had higher odds of being fully reported. The ORBIT (Outcome Reporting Bias In Trials) study, conducted by authors of this review, found that a third of Cochrane reviews contained at least one trial with a high suspicion of outcome reporting bias for a single review primary outcome. Work has also been published showing how to identify outcome reporting bias within a review and within relevant trial reports.
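As a worked illustration of what "higher odds of being fully reported" means, the following sketch computes an odds ratio from a 2x2 table of outcomes; the counts are invented for the example and are not ORBIT data.

```python
# Hypothetical counts of trial outcomes by significance and reporting status.
sig_reported, sig_missing = 60, 20        # significant: fully reported vs not
nonsig_reported, nonsig_missing = 30, 50  # non-significant: fully reported vs not

odds_sig = sig_reported / sig_missing          # 3.0
odds_nonsig = nonsig_reported / nonsig_missing # 0.6
odds_ratio = odds_sig / odds_nonsig            # 5.0

print(f"odds of full reporting, significant outcomes:     {odds_sig:.2f}")
print(f"odds of full reporting, non-significant outcomes: {odds_nonsig:.2f}")
print(f"odds ratio: {odds_ratio:.2f} (>1: significant outcomes more fully reported)")
```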
Studies comparing trial publications with protocols or trial registries are also accumulating evidence on the proportion of studies in which at least one primary outcome was changed, introduced, or omitted.

Thus, the bias from missing outcome data that may affect a meta-analysis operates on two levels: non-publication due to lack of submission or rejection of study reports (a study-level problem) and the selective non-reporting of outcomes within published studies on the basis of the results (an outcome-level problem). While much effort has been invested in trying to identify the former, it is equally important to understand the nature and frequency of missing data at the latter level. The aim of this study was to update the original review and summarise the evidence from empirical cohort studies that have assessed study publication bias and/or outcome reporting bias in RCTs approved by a specific ethics committee or in other inception cohorts of RCTs.

We included research that assessed an inception cohort of RCTs for study publication bias and/or outcome reporting bias. We focussed on inception cohorts whose study protocols were registered before the start of the study, as this type of prospective design was deemed more reliable. Both cohorts containing exclusively RCTs and cohorts containing a mix of RCTs and non-RCTs were eligible. We excluded cohorts based on prevalence archives, in which a protocol is registered after a study is launched or completed, since such cohorts can already be affected by publication and selection bias. For those studies where it was not possible to identify the study type (i.e. …).
