The answer is more sector surveys, not fewer

Research in the third sector: is it up to the job we need it to do?

This week’s magazine features an interview with Richard Harrison, director of research at the Charities Aid Foundation, who among other things has launched a strong rebuttal to critics who claim that his organisation and the NCVO have published inaccurate research.

On our website, Karl Wilding, director of policy and research at the NCVO, is similarly defending his organisation’s record.

The research that has given rise to such dudgeon in fundraising circles is UK Giving, the survey produced by CAF and the NCVO, which suggested a 20 per cent drop in giving in 2011/12.

Many people in the fundraising sector, most notably Stephen Pidgeon, fundraising consultant and Institute of Fundraising trustee, feel the survey should not have been undertaken. Pidgeon has berated the two organisations for carrying out “silly surveys” and said that if they’re proved wrong, they should apologise.

I read a lot of surveys, and over the years, I hope I’ve developed some skill at picking out what is good research, what is fairly decent research, and what is downright crap, back-of-a-fag-packet, only-did-it-to-drum-up-a-bit-of-press research – the last, to be honest, crosses my desk fairly often.

So is Pidgeon onto something? Yes and no.

UK Giving is pretty near as good as Harrison and Wilding could hope to make it, I reckon, but that still doesn’t make it gospel.

We analysed how reliable it was when it first came out, and reported some concerns. A briefing by Tom McKenzie of Cass Business School called Give or Take a Few Billion gives some further reasons why there might be a fair margin of error – although you might need to have a quick glance at a dictionary of statistical terminology before giving it a read.

If you know enough about statistics, mind you, you can poke holes in any data. If you poke long enough, you can make even the best research look pretty worthless.

Most reports on key aspects of the sector – falling income, interest in merger, concerns about independence – will be based on a smaller number of opinions than UK Giving. They have large margins of error.
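To put rough numbers on that, here's a minimal sketch, assuming simple random sampling and a 95 per cent confidence level, of how the margin of error on a survey percentage shrinks as the sample grows. The sample sizes are illustrative, not those of any particular sector survey.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95 per cent margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative sample sizes only; p = 0.5 is the worst case.
for n in (100, 500, 1000, 3000):
    print(f"n = {n:>5}: ±{margin_of_error(0.5, n) * 100:.1f} percentage points")
```

By that arithmetic, a survey of 500 respondents carries a margin of more than four percentage points either way, before you get to any questions about who answered it and why.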

Only if you wait two years and look at sector accounts, as the NCVO Almanac and Charity Commission data do, do you get accurate results. But it's not necessarily much use to be reporting what happened in 2009.

And what are researchers supposed to do? Stop asking people what they’re going to do? That doesn’t seem very productive. It’s worth knowing what people are thinking, surely? Even if it’s more valuable to know what they actually do.

And researchers can hardly report only good news. That's what politicians do, and no one believes a word they say.

The answer, I think, is not fewer surveys, but more, so that you aren't relying on a single set of figures. If we do a lot of surveys and they all say the same thing, then we can start to rely on them.
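The statistics back that instinct up, at least under the generous assumption that the surveys are independent and measuring the same thing: averaging k estimates divides the standard error by the square root of k. A minimal sketch, with made-up figures:

```python
import math

def pooled_standard_error(se: float, k: int) -> float:
    """Standard error of the mean of k independent estimates,
    each carrying the same standard error se."""
    return se / math.sqrt(k)

# Hypothetical figures: a single survey with a 2-point standard
# error versus the average of four equally precise, independent ones.
for k in (1, 4):
    half_width = 1.96 * pooled_standard_error(2.0, k)
    print(f"{k} survey(s): 95% interval of ±{half_width:.1f} points")
```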

The other thing that’s worth doing, as Harrison suggests in this week’s interview, is some meta-analysis. A lot of people have complained that recent data is contradictory. Data often is, though – especially if each survey is looking at something slightly different. It might be worth getting some people to sit down around a table and see whether they can construct an overall narrative that makes sense.
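For the curious, the simplest version of that exercise is a fixed-effect, inverse-variance meta-analysis, sketched below with invented estimates and standard errors rather than figures from any of the surveys discussed above.

```python
# Fixed-effect inverse-variance meta-analysis: each survey's estimate
# is weighted by 1 / variance, so more precise surveys count for more.

def pool(surveys):
    weights = [1 / se ** 2 for _, se in surveys]
    pooled = sum(w * est for (est, _), w in zip(surveys, weights)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Hypothetical year-on-year changes in giving (%) with standard errors.
surveys = [(-20.0, 5.0), (-8.0, 4.0), (-12.0, 6.0)]
estimate, se = pool(surveys)
print(f"pooled change: {estimate:.1f}% (standard error ±{se:.1f})")
```

Even that toy version makes the point: three apparently contradictory figures can sit quite comfortably around a single underlying estimate once you account for how precise each one actually is.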