Sunday, March 18, 2012

Eyes wide shut

A big trend in economics these days is the Randomized Controlled Trial (RCT), often referred to by its enthusiasts as the "gold standard" of evidence. An intervention is tested by randomly assigning it across a chosen sample and computing the average treatment effect. An analogy is often made to clinical trials in medicine, but there is (at least) one important difference. Most drug trials are "double blind": participants do not know if they are in the "control" group or the "treatment" group. By contrast, most RCTs are not double blind. Participants know what group they are in.

And, as a fascinating new paper points out, this knowledge can produce "pseudo-placebo effects" in RCTs. That is to say, the expectation of receiving the treatment can cause people to modify their behaviors in a way that produces a significant "average treatment effect" even if the actual intervention is not particularly effective.

The paper illustrates the point by undertaking two different RCTs on cowpea seeds in Tanzania. One is a traditional study where the control group knows they are getting traditional seeds and the treatment group knows they are getting modern seeds. The second is double blind; neither group knows what seed it is getting.

The traditional RCT shows a significant increase in yields of over 20% from the modern seed. But the double blind RCT shows that virtually all of that improvement comes from changed behavior, not from any inherent effectiveness of the modern seed.

Specifically, the average treatment effect in the double blind RCT was zero! And when the harvests of the control groups across the two RCTs were compared, the blind control group's yields were more than 20% higher than those of the traditional RCT's control group, who knew they were getting the traditional seeds. This is the "pseudo-placebo" effect, and it explains the entire average treatment effect in the traditional RCT.
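The arithmetic here can be sketched as a toy simulation. All the numbers below are invented for illustration (they are not the paper's data); the point is just the mechanism: if effort responds to beliefs about the seed rather than to the seed itself, the open RCT's measured effect is pure pseudo-placebo.

```python
# Toy illustration of a "pseudo-placebo" effect in an RCT.
# All numbers are made up; they are not the cowpea paper's data.

BASE_YIELD = 100.0    # yield under business-as-usual farming
EFFORT_BOOST = 20.0   # extra yield from the behaviors farmers adopt when they
                      # think they may have received the modern seed
SEED_EFFECT = 0.0     # assume the modern seed itself adds nothing

def expected_yield(got_modern_seed: bool, believes_maybe_modern: bool) -> float:
    y = BASE_YIELD + (SEED_EFFECT if got_modern_seed else 0.0)
    if believes_maybe_modern:
        y += EFFORT_BOOST  # belief, not the seed, drives the extra effort
    return y

# Open (traditional) RCT: everyone knows their assignment.
open_treat = expected_yield(True, True)    # knows it's modern -> extra effort
open_ctrl = expected_yield(False, False)   # knows it's traditional -> no effort

# Double-blind RCT: nobody knows, so everyone acts as if they might have it.
blind_treat = expected_yield(True, True)
blind_ctrl = expected_yield(False, True)

print("open ATE:", open_treat - open_ctrl)      # -> 20.0 (looks like the seed works)
print("blind ATE:", blind_treat - blind_ctrl)   # -> 0.0  (the seed does nothing)
print("pseudo-placebo:", blind_ctrl - open_ctrl)  # -> 20.0 (blind controls try harder)
```

Under these assumptions the open RCT's entire 20-point "treatment effect" reappears as the gap between the two control groups, which is exactly the comparison the paper uses to isolate the pseudo-placebo effect.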

Wow!

In other words, the significant effect found in the traditional RCT was not due to better seeds; it was due to actions taken by the farmers who thought they were getting better seeds (they planted them in larger plots, with more space between the plants, on better quality land). These farmers' expectations were wrong (in post-experiment surveys, over 60% of them said they were disappointed in the yields), and the significant effect in the traditional RCT would not survive over time because the farmers, having adjusted their expectations downward, would stop taking the actions that produced the "success".

People, can I get a "YIKES"?

Hat tip to Justin Sandefur.

15 comments:

Anonymous said...

What about when the researchers do not know who is getting the cool seed or the fake seed?

Doug said...

OED on double blind: " a. Applied to a test or experiment conducted by one person on another in which information about the test that may lead to bias in the results is concealed from both the tester and the subject until after the test is made;" so, it's "tester and subject" that makes it double blind. I say the tests referred to are "single blind" if the farmer doesn't know what seed he's getting and "eyes wide open".

Angus said...

Guys, (1) I am just quoting the paper's terminology, (2) I think you may be missing the point.

Road to Surf Bum said...

How many bazzillion person hours and dollars may have been pissed away on top-down interventions based on flawed methodology and claims? That's the "YIKES" as this Austrian surfer sees it.

Anonymous said...

I agree that "double-blind" is for when both subject and experimenter do not know which group people are in.

On the actual paper, I figured it would be common knowledge to use single-blind when possible given this effect. However, it seems like a lot of experiments wouldn't allow for single blind, since things like testing teacher performance pay would require the teachers know they are not receiving the status quo.

Doug said...

your Point is "the significant effect found in the traditional RCT was not due to better seeds, it was due to actions taken by the farmers who thought they were getting better seeds".

whatever the paper says, what you describe is not even single blind, much less double. Why are you surprised at biased results in a "complete (but one result 'starred') knowledge for everyone" experiment?

Michael Ward said...

Isn't this called the Hawthorne Effect?

Angus said...

Paper mentions Hawthorne effect but makes a distinction. Seems broadly similar to me though.

Anonymous said...

but what if improved seed required a range of complementary inputs (more water, more fertilizer, more pesticide, more weeding labor etc. etc.) in order to maximize yield.

if that were true, then (a) it presumably isn't that surprising that double blind didn't do as well as non-blind but more importantly (b) if outcomes are a function of complementary inputs and the treatment, why would one not want the complementary inputs provided?

Anonymous said...

Pretty harsh critique of this experiment here: http://blogs.worldbank.org/impactevaluations/mind-your-cowpeas-and-cues-inference-and-external-validity-in-rcts

Looks like lots of attrition, for one thing...

Kathy said...

Double blind is the only fully viable option in most types of research.

Kathy said...

Our will is too powerful to ignore.


Eric Crampton said...

Familiar with Berk Ozler's work on this kind of stuff? He's good. See here on some problems with survey measurement of treatment effects.

http://blogs.worldbank.org/impactevaluations/economists-have-experiments-figured-out-what-s-next-hint-it-s-measurement