Michael Clemens and Gabriel Demombynes recently published a summary paper on the Millennium Villages Project debacle: what happened, and what they make of it. It makes interesting reading, and expands to cover their approach to impact evaluation and its limitations. One thing it does not cover, however, is data.
Simon Brooker – an epidemiologist working in Kenya with experience of collaborating with economists on trials – once said to me that ‘epidemiologists care about data, economists care about analysis’. The more I read, the more I think this is broadly true. If the dichotomy holds (and I have not taken a systematic approach to the question, so, ironically, my data are very poor), then I suspect it emerged from the different subject matter the disciplines grew up with: epidemiologists dealing with messy things like health, and economists with more precise and less subjective things like money. Nowadays the disciplines overlap. Yet economists who are sharp as tacks when it comes to analysis, such as Clemens and Demombynes, seem conspicuously obtuse when the discussion turns to sampling, measurement bias, recall bias, and missing data. Such biases, especially in the context of an impact evaluation, can be large enough to make subsequent fiddling with statistical models all but irrelevant.