Latest NCS impact measurement doesn’t go far enough

Last week’s report into impact measurement by Third Sector Research Centre made interesting reading.

The study concludes that funders and commissioners are increasingly shaping and dominating approaches to impact measurement in the third sector, and it raises concerns about a lack of comparability between sectors and the “selective presentation” of results by organisations.

The study was released only days after the government published its latest evaluation of the National Citizen Service programme. The independent evaluation was produced by NatCen, the social research charity, and paints an upbeat picture of the government programme for 16- and 17-year-olds in England and Northern Ireland.

What’s striking about the report is the lack of overt criticism, especially in its overview. Only when you drill down into the detail of the research do glimmers of criticism emerge, and even then the use of phrases such as “no statistically significant positive impacts were found” leaves you questioning whether they are criticisms at all.

Then there’s the selective use of statistics. At the start of the report, in bold graphics, we’re told that 22,132 young people took part in the summer programme and 3,871 in the autumn. But not until page 10 do the researchers disclose that 27,000 places had been commissioned for the summer scheme and 5,000 for the autumn programme, meaning around 6,000 places went unfilled. Nor does the report mention that each place cost £1,662 in 2012, compared with the initial government estimate of £1,233.
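A quick back-of-the-envelope check, using only the figures quoted above from the report (the rounding is mine), confirms the size of the shortfall:

```python
# Commissioned places vs reported participation, figures as quoted above.
commissioned = 27_000 + 5_000   # summer + autumn places commissioned
took_part = 22_132 + 3_871      # summer + autumn participants reported

unfilled = commissioned - took_part
print(unfilled)  # 5997, i.e. "around 6,000 places went unfilled"
```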

Neither is any attempt made to compare the impact of NCS against other organisations or programmes such as the Scouts, the Guides and City Year, the US programme in which young people act as mentors in schools. Surely such a comparison would be a useful indicator of overall performance.

But does it really matter that impact evaluations are less than clear in places, put a positive spin on the results and lack comparability?

I would argue ‘yes’, given that the stakes are so high. In the case of the NCS, the government has already committed to expanding the scheme to 150,000 young people in 2016, which – based on the latest figures – would cost the taxpayer in the region of £250m. Its argument for expansion is largely based on the strength of the evidence gathered.
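The £250m figure follows directly from the per-place cost quoted earlier; a simple projection (my calculation, assuming the 2012 cost per place holds for the 2016 intake) shows how it is reached:

```python
# Rough cost projection for the planned 2016 expansion.
cost_per_place = 1_662     # £ per place in 2012, as quoted above
planned_places = 150_000   # government's stated 2016 target

projected_cost = cost_per_place * planned_places
print(f"£{projected_cost:,}")  # £249,300,000 — "in the region of £250m"
```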

If impact measurement is to be taken seriously it needs to be robust, present a balanced picture and avoid pandering to the requirements of its paymaster. It also needs to provide a genuine comparison between programmes to help put its successes or otherwise in context. As one colleague put it, we need comparables – not parables.