Recent books on evaluations of the quantitative impact of development programs and projects typically devote a chapter or two to the need to complement the analysis with other methods – specifically qualitative techniques. They often cite how qualitative techniques help explain the reasons behind positive or negative quantitative results. This is key if one is to draw conclusions for accountability or for learning to improve future program design. Or they explain how qualitative work is critical to ensuring that quantitative data are collected in the right way.
Despite these textbook recommendations, there has been a wide range of experience in how using both quantitative and qualitative methods has affected the overall quality of evaluations. In many cases, the qualitative analysis consists mostly of quotes used to justify findings from the quantitative work. While this helps provide context, there is little value added beyond making an otherwise ‘dry’ quantitative presentation more interesting. Some recent evaluations have begun to change this practice and have arguably improved the quality of impact evaluations in terms of their relevance, the inferences drawn from them and their applicability for policy makers and programme implementers. This includes the use of innovative techniques to frame the specific evaluative questions being asked and tested, to gather the right type of data and information on outcomes and intermediating variables, to explain findings and to disseminate them to the appropriate decision-makers. This paper will review this work. It will canvass a purposeful sample of experts from a variety of disciplines to gather success stories, as well as cases where apparently well-planned approaches have failed to add the value expected of them. It will then draw lessons for future evaluations as a basis for guidance on the use of mixed methods.