
Latest NCS impact measurement doesn’t go far enough

Last week’s report into impact measurement by Third Sector Research Centre made interesting reading.

The study concludes that there are growing concerns about funders and commissioners shaping and dominating approaches to impact measurement in the third sector, and it also warns of a lack of comparability between sectors and the “selective presentation” of results by organisations.

The study was released only days after the government published its latest evaluation of the National Citizen Service programme. The independent evaluation was produced by NatCen, the social research charity, and paints an upbeat picture of the government programme for 16- and 17-year-olds in England and Northern Ireland.

What’s striking about the report is the lack of overt criticism, especially in its overview. It’s not until you drill down into the detail of the research that glimmers of criticism begin to emerge, and even then the use of phrases such as “no statistically significant positive impacts were found” leads you to question whether they are criticisms at all.

Then there’s the selective use of statistics. At the start of the report, in bold graphics, we’re told that 22,132 young people took part in the summer programme and 3,871 in the autumn. But it’s not until page 10 that the researchers disclose that 27,000 places had been commissioned for the summer scheme and 5,000 for the autumn programme, meaning that around 6,000 places went unfilled. Nor does the report mention that each place cost £1,662 in 2012, compared with the initial government estimate of £1,233.
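
Both the unfilled-places gap and the per-place cost overrun follow directly from those figures. A minimal sketch in Python, using only the numbers quoted in this piece:

```python
# Figures quoted in the NatCen evaluation, as cited above.
filled = 22_132 + 3_871        # summer + autumn participants
commissioned = 27_000 + 5_000  # summer + autumn places commissioned
print(f"Unfilled places: {commissioned - filled:,}")  # 5,997 – "around 6,000"

# Cost per place: 2012 actual vs the initial government estimate (in £).
actual, estimate = 1_662, 1_233
print(f"Overrun per place: {(actual - estimate) / estimate:.0%}")  # ~35%
```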

Neither is any attempt made to compare the impact of NCS against other organisations or programmes such as the Scouts, the Guides and City Year, the US programme where young people act as mentors in schools. Surely such a comparison would be a useful indicator of overall performance.

But does it really matter that impact evaluations are less than clear in places, put a positive spin on the results and lack comparability?

I would argue ‘yes’, given that the stakes are so high. In the case of the NCS, the government has already committed to expanding the scheme to 150,000 young people in 2016, which – based on the latest figures – would cost the taxpayer in the region of £250m. Its argument for expansion rests largely on the strength of the evidence gathered.
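
That £250m figure is easy to reproduce. A rough sketch, assuming the 2012 cost per place of £1,662 simply carries forward to the expanded 2016 scheme:

```python
# Projected cost of expanding NCS to 150,000 places,
# assuming the 2012 cost per place of £1,662 holds.
places, cost_per_place = 150_000, 1_662
print(f"Projected cost: £{places * cost_per_place / 1e6:.0f}m")  # ~£249m
```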

If impact measurement is to be taken seriously it needs to be robust, present a balanced picture and avoid pandering to the requirements of its paymaster. It also needs to provide a genuine comparison between programmes to help put its successes or otherwise in context. As one colleague put it, we need comparables – not parables.

  • Jude Habib

    As someone who works with youth organisations (but not for one) and meets young people through my work, I take an active interest in the NCS initiative. I provided some short-term PR support during the pilot, when recruitment was an issue (and clearly still is), and I’ve also met some NCS participants as they pitched their community ideas to me as part of the programme. Anything that gives young people new skills, confidence and an understanding of community involvement can only be a positive thing. But I worry that NCS is ticking along and expanding without being challenged robustly by the youth sector. I hear privately that organisations involved in NCS have struggled to recruit young people in certain areas, that some of the delivery partners don’t really think NCS provides value for money, and that all-year-round youth services are being directly affected as a result. I look forward to hearing from youth charity leaders about what they really feel about the NCS in order to shape its future.

  • m aiken

    Once more round the measurement roundabout. That’s what I thought when I read the latest review by the Third Sector Research Centre on impact measurement. It’s a sensible document that gives a sober assessment of the current state of play. Meanwhile, Andy Hillier gives a useful critique of the evaluation of the National Citizen Service programme. So why do I get that dizzy feeling on the ol’ roundabout whenever ‘impact’ pops up?

    The first thing I want to ask is: measuring what? Most of the time impact measurement is nothing of the kind. It’s terminological inflation. Take any of the Charities Evaluation Service’s rather good guides to outcomes from 10 years ago. Find the word ‘outcome’, change it to ‘impact’, add an occasional word like ‘broad’ or ‘longer term’, stick ‘new for 2013’ on the cover and hey presto – you’ve discovered impact. Enjoy!

    The second thing I look for is the plush national press conference, the big launch and a quarter of a million quid of funding. Out comes the new guide, decorated with many logos and full of coloured pages. Take, for example, the Insipid – I’m sorry, I meant Inspiring – ‘Impact Code of Good Practice’. In just four simple steps it’s all done! Easy! It’s just that these silly practitioners don’t seem to get it, do they?

    Third, I keep an eye on the actual show, if I can. These are the wily practitioners who are holding the lifeboat’s oars while the Titanic is sinking. They know they must chase funding and awards and learn the new vocabulary while paddling like mad. They dutifully mouth the words from the new Wizard of Oz bible. Fortunately, they carry on pulling quite hard at what they do – which they do quite well in mountainous seas. They count a bit, reflect on the situation, ask people what’s going on and look a bit ahead. Mostly they have to keep rowing.

    Fourth, I get ready for the researchers and consultants. They start wringing their hands at the unexpected complexity of the real world. Oh woe, there is still poverty and injustice, and our pretty tools lie discarded on the nursery floor. There are murmurs of ‘inconsistencies’, ‘absence of fixed measures’, ‘no real change’ and ‘a lack of tools’. A new government arrives. A new initiative beckons.

    At this stage I get a bit giddy on the roundabout. I can see it coming in the distance – like a blur of the new Doctor Who making a first appearance: the new term is going to be here any minute now. And we are going to start all over again on that roundabout.

    It’s time to get off the roundabout and do some thinking instead. But that’s for another day. Of course.

  • andy benson

    Mike hits the nail on the head. The NCS programme is a government ‘flagship’. It cannot afford to fail. So are consultants hired to evaluate it on the basis of an evaluation proposal that is openly sceptical about the programme and its likely effects? Er, no: the proposal will have been warmly congratulatory about the programme, and the evaluators – as Andy’s piece illustrates – will carefully avoid tripping over the furniture in their reporting. This isn’t evidence-based evaluation; it’s politics and business for consultants.

    As to the terminology, it’s more of the same. The other day I came across a ‘practical guide to evaluation’ that I wrote for the Child Accident Prevention Trust in 1991, which unpacked ‘impact’ and ‘outcome’, though it used the words in the exact opposite order to today’s usage. Either way, focusing on the results of your work, or its wider effects on people and communities, is not exactly rocket science. Again, it’s business for consultants – look at the guff that comes out of New Philanthropy Capital, for example – who offer the prospect of a security blanket to precarious voluntary services groups facing incompetent procurement officers, while predatory corporate charities, Serco and their like elbow them out of the way.

    How much longer are good and committed people going to put up with this stuff?

    Andy Benson
    http://www.independentaction.net

    PS We’re currently running an Inquiry into the Future of Voluntary Services – lots of detail on our website – so we want to hear from people with stories to tell about their trials and tribulations in coping with the current crazy environment. Or email me at andy@independentaction.net … keep on keeping on