Disrupting Disadvantage
Finding what (really) works and what doesn’t


For service-based organisations working to break cycles of disadvantage and reduce poverty, there’s an imperative to measure their impact: to understand whether the design of their services, and the way they’re delivered, is having a positive effect and benefiting people as intended.

Strategic planning tools such as Theory of Change, Program Logic and the co-design of Service Delivery Models have brought more rigour and efficacy to service delivery organisations - but these tools are often built on assumptions and hypotheses that collide with the complexity of real-world environmental factors, human behaviour and unintended consequences.

Measurement matters

Impact measurement for community service organisations can be challenging - if you’re trying to help a young person change the trajectory of their life, or working with a community to tackle inter-generational disadvantage, it will be years, and can be decades, before there’s meaningful, measurable progress. For these organisations there’s a real tension between putting scarce resources into delivering more services because gut feel says that’s the right thing to do, and spending precious community dollars on consultants and measurement. But evidence of impact matters - to funders, and to making good decisions.

There are now tools that make impact measurement more reliable and affordable, such as the Centre for Social Impact’s (CSI) Amplify Online, which puts validated and reliable social impact indicators in the hands of for-purpose organisations so they can conduct independent outcomes measurement. Folk was strategic design partner to CSI in developing Amplify, and in consultations we heard first-hand the practical challenges organisations face in impact and outcomes measurement, particularly when delivering government programs where evaluation is primarily associated with performance, compliance and competition for funding.

Bigger questions

For government there are much bigger questions - at the top of the list, how to get a better return on government programs. How can government provide the greatest positive impact, for the largest number of people, using taxpayers’ dollars? Which policies and programs are the most and least successful? Are we funding ineffective programs while cutting funding to effective ones?

To answer these questions, governments look to systematically evaluate the programs they fund and use the results to improve decision-making, returns and outcomes. But, according to a new research report from the Committee for Economic Development of Australia (CEDA), not all evaluations are equal. And it sounds like others have come to a similar conclusion.

At the moment, I worry that too often we have government programs that are set up based on the gut intuition of policymakers, rather than a really evidence based grounding on what works. If we were doing that in the quest to cure cancer or to come up with a new vaccine, we'd be taking things straight out of the lab and putting them into the market. But we don't do that. We carefully evaluate new pharmaceuticals and new medical treatments, and we should take more of that scientific and political approach to policymaking.

Dr Andrew Leigh, Assistant Minister for Competition, Charities and Treasury, in his June 2022 introductory address to staff at the Australian Bureau of Statistics.

Dr Leigh spoke at this week’s report launch, alongside CEDA CEO Melinda Cilento and report author, Senior Economist Cassandra Winzar. The new report is the third in CEDA’s research series on entrenched disadvantage, which explores how Australia can disrupt the poverty cycle through better evaluation of the programs designed to tackle it. Disrupting Disadvantage - Finding What Works focuses on improving how community services are evaluated for effectiveness and value. The report outlines how governments can use data collection to build more disciplined and consistent program evaluation, and how to foster a culture that enables this.

Evaluating evaluations - CEDA’s findings

Federal and state government spending on community services is increasing at roughly 5% each year, yet poverty policies are not delivering: the level of poverty remains unacceptably high and we’re not making progress on reducing poverty and disadvantage. We need better feedback loops to tell us when policy settings aren’t achieving the intended outcomes of programs designed to alleviate or reduce poverty. The spending increase isn’t sustainable without properly understanding why programs aren’t delivering progress.


CEDA examined 20 Federal Government programs with a total program expenditure of more than $200 billion. Ninety-five per cent were found not to have been properly evaluated. And the Federal Government isn’t alone in this problem – analysis of state government evaluations shows similar results.

The report highlights how effective evaluation starts in policy and program design - having clearly stated objectives, a definition of success or outcomes and a plan to collect data over the program life to inform evaluation.

To help ‘build in’ evaluation, CEDA supports establishing an “Office of the Evaluator-General” to champion and steward evaluation and to build capability and capacity across the public service, alongside new investments in data and legislation requiring regular review of all programs.

The report authors remind us that, ultimately, evaluations are about accountability and transparency: they should give communities the information they need to hold governments to account for the success, or otherwise, of their policies and programs.

You can read Disrupting Disadvantage - Finding What Works and a snapshot of the findings.

And the Australian Government has just opened the second stage of consultation on the Measuring What Matters statement – relevant if you want to have your say.

 

In other related news, over the last two years I’ve happily been a part of CEDA’s Better Human Services member advisory committee. The committee brings together people from member organisations to advise on practical ways to improve the design and delivery of human services that are critical to the health and wellbeing of individuals, broader economic, social and community development, and improved standards of living. There’s more here if you’re interested in joining CEDA.

Written by:
Michael Broadhead,
Managing Director