
Opportunity Child

Happily helping people is not enough when the factors that determine resource allocation are improper.

In the previous posts we have seen that resource allocation is based on a number of things – proximity, emotions, timing – but that efforts to define and measure actual outcomes are lacking.  The result is that resources are likely misdirected, such that sizable opportunity costs can be measured in actual children killed (not saved) because badly chosen criteria determined where the money went.

Let me explain.  An NGO is spending $100m on children in a crisis in Africa.  We don’t know exactly what effect this will have (because it is not measured), but the NGO hopes to save 4m children.  That’s $25 per child, which sounds pretty good.  But is it?  First, $25 might not be all that cheap: despite the huge healthcare costs we are used to in the West, it might be possible to help a child for much less in a different context and for a different ailment.  Second, we haven’t really defined ‘life saved’ (see previous post), although we could do that.  Let’s say it means that the child survives childhood and reaches 16.  That still doesn’t solve the problem since, third, we haven’t measured what actually happens, and the ‘confidence interval’ on our 4m-saved estimate is very wide – wide enough that the resource allocation could swing from cheap and effective to extremely expensive, even wasteful.  So a number of things have to come together for the ‘$100m for 4m children’ claim to hold up, and without them there is a good chance that our $100m is not helping as many children reach 16 as it would if it were spent elsewhere.  What seems like a rational basis for action – maximum number of children reaching 16 per dollar spent – is not the basis at all.
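To see how wide that uncertainty makes the range, here is a minimal sketch of the arithmetic above in Python, using the post's hypothetical figures; the interval bounds are illustrative assumptions, not real data:

```python
# Cost-per-child arithmetic from the example: a $100m spend and an
# unmeasured, hence uncertain, estimate of children saved.

BUDGET = 100_000_000  # $100m programme spend (from the example)

def cost_per_child(children_saved: int) -> float:
    """Dollars spent per child reaching 16, for a given outcome estimate."""
    return BUDGET / children_saved

point_estimate = 4_000_000       # the NGO's hoped-for 4m children
low, high = 500_000, 5_000_000   # illustrative wide 'confidence interval'

print(cost_per_child(point_estimate))  # 25.0  - the headline figure
print(cost_per_child(high))            # 20.0  - cheaper than hoped
print(cost_per_child(low))             # 200.0 - eight times the headline
```

Without measurement, nothing pins the true figure to the $25 end of that range rather than the $200 end, which is the whole point.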

As a result, children needlessly fail to reach 16 – are not saved – when we have the funds to help them.



Critical ALNAP

The Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP), established in 1997, is probably the most respected inter-agency body looking at accountability in the humanitarian sector.  So, you can imagine how I responded to this update on the Humanitarian Performance Indicators Working Group in their annual report:

Launched in 2009, the ALNAP Working Group on Humanitarian Performance Indicators set out to provide a forum for members to share their experiences and thoughts on approaches to organisational and programme performance indicators within the humanitarian system. After discussions in the group about the difficulty of establishing a clear direction, it was decided to dissolve the group for the time being and to incorporate discussions on performance indicators into the development of the methodology for the next iteration of ‘The State of the Humanitarian System: Assessing performance and progress.’ In 2011/12, ALNAP will decide whether there is additional value in reconvening the Working Group.

If you can stifle your laughter or manage not to choke on your coffee, then you’ve done well.  The group convened, discussed difficulties and then dissolved within two years.  Reminds me of another critical issue.

