For service-based organisations working to break cycles of disadvantage and reduce poverty, there's an imperative to measure their impact: to understand whether the design of their services, and the way they're being delivered, are having a positive impact and benefiting people as intended.
Strategic planning tools such as Theory of Change, Program Logic and the co-design of Service Delivery Models have introduced more rigour into service delivery organisations - but these plans are often built on assumptions and hypotheses that collide with the complexity of real-world environmental factors, human behaviour and unintended consequences.
Impact measurement for community service organisations can be challenging - if you're trying to help a young person change the trajectory of their life, or working with a community to tackle inter-generational disadvantage, it can take years, even decades, before there's meaningful, measurable progress. For these organisations there's a real tension between allocating scarce resources to delivering more services, because gut feel says that's the right thing to do, and spending precious community dollars on expensive consultants and measurement. But evidence of impact matters - to funders, and to making good decisions.
There are now tools that make impact measurement more reliable and affordable, such as the Centre for Social Impact's (CSI) Amplify Online, which puts validated, reliable social impact indicators in the hands of for-purpose organisations so they can conduct independent outcomes measurement. Folk was strategic design partner to CSI in developing Amplify, and in consultation we heard first-hand the practical challenges organisations face in impact and outcomes measurement - particularly when delivering government programs, where evaluation is primarily associated with performance, compliance and competition for funding.
For government there are much bigger questions. At the top of the list: how can government get a better return on the programs it funds? How can government deliver the greatest positive impact, for the largest number of people, with taxpayers' dollars? Which policies and programs are the most and least successful? Are we funding ineffective programs while, at the same time, cutting funding to effective ones?
To answer these questions, governments look to systematically evaluate the programs they fund and use the results to improve decision making, returns and outcomes. But, according to a new research report from the Committee for Economic Development of Australia (CEDA), not all evaluations are equal. And it sounds like others have come to a similar conclusion.