Five Questions on Evaluating Progress to End Poverty with Dean Karlan
from Development Channel


A girl studies while sitting on top of a taxi outside her shanty home at a roadside in Mumbai, India (Shailesh Andrade/Reuters).


This post features a conversation with Dean Karlan, professor of economics at Yale University, president and founder of Innovations for Poverty Action, and founder of ImpactMatters, a newly launched organization that assesses how well nonprofits use and produce evidence of impact.

1) How have development economics and the study of poverty evolved in recent years?

Until about fifteen years ago, there were two different strands of development work, both with limitations. The first asked a big, monolithic policy question—“does aid work?”—and compared how aid affected development outcomes across countries. But the cross-country research lacked the necessary data, an understanding of critical micro-level mechanisms, and an answer to the obvious first question of why some countries receive more aid than others. It generated big debates, but failed to determine causality.

The other strand in academia focused on understanding markets and decision-making at the individual and household level, an approach that was valuable for understanding the world but often fairly removed from policy implications.

Then, we saw major shifts with the availability of cheaper and better data, and intensified pressure on development economists to deliver policy prescriptions. These shifts allowed us to rigorously evaluate specific projects to find out what was working, what was not, and what to do about it.

Perhaps as a byproduct of the cross-country data debate, development economists started asking when aid works, not whether aid works. The point being: there is no simple answer. This led to a blossoming of work using randomized controlled trials (RCTs) to test specific policies on the ground with NGOs, the private sector, and governments. We also began to see academic analysis that was more prescriptive than descriptive, and could help to guide policy.

 

2) In your randomized controlled trials (RCTs), what were some surprising findings about what’s working and what’s not?

Several hot development debates have led to surprises. But, of course, since these were “hot debates,” some were surprised while others were not. Microcredit is a perfect example.

Some oversold microcredit as the tool to fight poverty and increase income for the world’s poorest, benefiting low-income households and ultimately leading to better healthcare and education. On the other side, critics claimed that it led to negative outcomes, such as suicides among poor farmers who could not repay loans. Beyond the anecdotes, the analysis was equally weak on both sides: neither could establish causality, or show what would have happened if people had not received the loans. There was no counterfactual, as economists call it.

Wading into the debate using evidence, we saw strikingly similar results from seven RCTs across seven countries, several of them conducted by researchers with Innovations for Poverty Action. The punchline: microcredit loans were not typically reaching the world’s poorest, and they were not increasing income on average. So they were not meeting their main goal (though they were not causing much harm either).

Once we saw the evidence, we said: “we are fans of microcredit, but as it’s currently done, it’s best for private investors, not for donors.” Donors should either look elsewhere or use their charitable dollars to push for more innovation, to figure out how to improve the microcredit model.

 

3) So what is an effective investment for donors?

After we found that microcredit was not reaching the world’s poorest, donors asked us: what might? Through RCTs we found one approach that works quite well to move people out of poverty: a graduation program. At its core is the transfer of a “productive asset”—a way to make a living—and that transfer has a positive impact on household consumption, savings, income, food security, and other life outcomes even years later.

We have completed six RCTs on graduation programs in six countries (Ethiopia, Ghana, Honduras, India, Pakistan, and Peru). Other researchers completed a seventh in Bangladesh, and there are several similar studies from Uganda. The program first gave the asset, along with training on how to use it. We also set up a savings account, provided healthcare access, offered life coaching, and supplied food for up to six months (so recipients wouldn’t be forced to kill and eat livestock right away).

Because graduation programs are expensive, at around $1,000 per person, our research focused on cost effectiveness. We had no doubt the program would create a bump in income, but would it last?

In six of the seven RCTs, we found strikingly similar results demonstrating both cost effectiveness and a sustained impact up to three years after the assets were transferred, across all of our RCT sites.

In one of the sites, India, where we now have seven years of results, the effects are getting bigger over time rather than dissipating. This trend suggests that those in the graduation program were previously stuck in a poverty trap, and that a holistic and integrated approach combining income with social and economic support helped to get them out of it.

 

4) What have you found on cash transfers, another area of growing interest and investment?

The most prominent study Innovations for Poverty Action did on unconditional cash transfers (UCTs) was with GiveDirectly, a nonprofit that does exactly what its name says. They deliver 91 cents of every dollar donated to poor households, using mobile money for the transfers, which makes for a low-cost, lean operation.

Critics of this approach were initially concerned that the money would be spent on alcohol and tobacco rather than on food and other necessities. We said, “let’s go get the facts,” and set up a carefully designed study. Our randomized evaluation found that households receiving cash transfers spent them on food, education, and medical expenses, as well as on family obligations, resulting in higher assets and better psychological well-being. There was no increase in alcohol and tobacco spending.

Yet even with evidence of positive outcomes, we are seeing that cash transfers work best for short-run problems—catastrophes or conflicts where the challenge is not long-term development. They work best for helping people in a moment of need.

 

5) What do you say to critics who argue that RCTs are too expensive or take too long?

First, it is critical to note that randomization is not what makes RCTs expensive (and, as I will explain, they are not always expensive). Costs are driven by tracking and surveying people over time to see whether the program affected their lives, which is necessary even for non-randomized studies.

While RCTs are more expensive than simple studies that ask beneficiaries—“were you happy with the program?”—I would argue that overall they are far cheaper than non-RCT evaluations, because they allow us to zoom in on what is working and what is not, much faster and more accurately, ultimately saving money on bad measurement and ineffective programs.

RCTs allow the testing of multiple versions of a program at once, too. For example, in Uganda we tested four variations of a classroom savings program and found that three did not work, but one did. The variations shed light on why the program was working.

Regarding timing, it is also critical to point out that an RCT need not be a long-term study, nor does it need to be expensive. We have done cheap, rapid-fire tests on getting people to save more: sending text messages reminding them to save and, a few months later, comparing savings rates for those who received the messages versus those who did not. Many operational questions, such as how to enroll people in a program, can and should be rapid-fire studies that give immediate feedback to improve operations. For example, MIT’s Abdul Latif Jameel Poverty Action Lab just released a toolkit that helps organizations run RCTs using data they may already be collecting.
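To make that concrete, here is a minimal sketch of the comparison a rapid-fire test like the text-message experiment boils down to: randomly assign people to receive reminders or not, then compare average savings between the two groups. All of the data, variable names, and effect sizes below are simulated placeholders for illustration, not figures from the actual studies.

```python
# Minimal sketch of a rapid-fire RCT comparison: random assignment to an
# SMS-reminder group or a control group, then a difference in average savings.
# All numbers here are simulated placeholders, not results from real studies.
import numpy as np

rng = np.random.default_rng(0)

n = 2000                                  # hypothetical number of account holders
treated = rng.integers(0, 2, size=n)      # 1 = received reminders, 0 = control

# Simulated end-line savings balances (assumed small positive effect of reminders)
savings = rng.normal(100, 30, size=n) + 5 * treated

# Under randomization, the difference in means estimates the average effect
effect = savings[treated == 1].mean() - savings[treated == 0].mean()

# Standard error of the difference in means, for a rough 95% confidence interval
se = np.sqrt(savings[treated == 1].var(ddof=1) / (treated == 1).sum()
             + savings[treated == 0].var(ddof=1) / (treated == 0).sum())

print(f"Estimated effect of reminders: {effect:.2f}")
print(f"95% CI: [{effect - 1.96 * se:.2f}, {effect + 1.96 * se:.2f}]")
```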

Full, long-term RCTs aren’t always appropriate. But when they are, getting good data that both establishes causality and illuminates why something works can put programs on the right track to maximize impact, and save money in the long run.
