Systematic reviews in international development mostly address health-related interventions, but donor involvement since the mid-2000s has broadened the range of topics in which a specific intervention is assessed across a range of variables and intervention groups. Still, focused questions that estimate direct, easily measurable effects or intervention impacts dominate, even though development reviews operate in a complex, multidisciplinary environment that requires acknowledging the influence of institutions and social interaction. In addition, the scarcity of comparable evidence on the effects of development interventions forces authors to adapt their strategies for assessing the strength of evidence and synthesizing data. For example, narrative synthesis is used to systematize empirical evidence when data quality and quantity do not permit meta-analysis, the preferred method for calculating effect sizes in more traditional systematic reviews.
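For illustration, inverse-variance meta-analysis pools study-level effect estimates as a weighted mean; the notation below is a standard textbook sketch rather than a formula taken from any particular review discussed here:

\[
\hat{\theta} = \frac{\sum_{i=1}^{k} w_i \,\hat{\theta}_i}{\sum_{i=1}^{k} w_i},
\qquad
w_i = \frac{1}{\widehat{\mathrm{Var}}(\hat{\theta}_i)},
\]

where \(\hat{\theta}_i\) is the effect estimate from study \(i\) and \(w_i\) its inverse-variance weight. Narrative synthesis becomes necessary precisely when too few comparable estimates \(\hat{\theta}_i\) are available for such pooling to be meaningful.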
Because of inherent differences in value judgments, different ways of reviewing and interpreting the same data (evidence) can lead to conflicting conclusions. The focus on asking the 'right' questions in international development reviews is important precisely because no review process is immune to bias. We emphasize in this study that systematic reviews in international development may be vulnerable to a range of biases, and we warn that these reviews should not pursue, at all costs, the classical approach suited to traditional, 'easy-to-measure' situations. Instead, development reviews should adjust the review process so that it caters to the type of question being addressed. In this way, differences in the type and quality of the included primary studies, in methodological approach, and in the comprehensiveness of the study will not be a source of bias but will contribute to the overall success of the review.