By far the best read of the week so far (and it looks very hard to surpass):
Here’s an imagined dialogue between the two sides on Randomized Evaluation (RE) based on this book:

FOR: Amazing RE power lets us identify causal effect of project treatment on the treated.
AGAINST: Congrats on finding the effect on a few hundred people under particular circumstances, too bad it doesn’t apply anywhere else.
FOR: No problem, we can replicate RE to make sure effect applies elsewhere.
AGAINST: Like that’s going to happen. Since when is there any academic incentive to replicate already published results? And how do you ever know when you have enough replications of the right kind? You can’t EVER make a generic “X works” statement for any development intervention X. Why don’t you try some theory about why things work?
FOR: We are now moving in the direction of using RE to test theory about why people behave the way they do.
AGAINST: I think we might be converging on that one. But your advertising has not yet got the message, like the JPAL ad on “best buys on the Millennium Development Goals.”
FOR: Well, at least it’s better than your crappy macro regressions that never resolve what causes what, and where even the correlations are suspect because of data mining.
AGAINST: OK, you drew some blood with that one. But you are not so holy on data mining either, because you can pick and choose, after the research is finished, whatever sub-samples give you results, and there is also publication bias that reports positive results but not null results.
FOR: OK, we admit we shouldn’t do that, and we should enter all REs into a registry, including those with no results.
AGAINST: Good luck with that. By the way, even if you do show something “works,” is that enough to get it adopted by politicians and implemented by bureaucrats?
FOR: But voters will want to support politicians who do things that work based on rigorous evidence.
AGAINST: Now you seem naïve about voters as well as politicians. Please be clear: do RE-guided economists know something the local people do not know, or do they have different values on what is good for them? What about tacit knowledge that cannot be tested by RE? Why has RE hardly ever been used for policymaking in developed countries?
FOR: You can take as many potshots as you want; in the end, we are producing solid evidence that convinces many people involved in aid.
AGAINST: Well, at least we agree on the much larger question of what is not respectable evidence, namely, most of what is currently relied on in development policy discussions. Compared to the evidence-free majority, what unites us is larger than what divides us.
Looks like Easterly’s blog has a very good chance of becoming the top one among my econblogs (at least judging by how often I dedicate entire blogposts just to cite his posts, e.g. here or here, or a WSJ article here). Not bad, not bad at all: I do have pretty high standards, as all of you should have noticed! :-).
PS. See also an earlier entry on the topic (featuring again some of the heavyweights in this realm): 4th bullet point.