In practice: New approaches to evaluation for dynamically managed delivery in Manchester

Fresh evaluation approaches identify the importance of specific innovations

One of the most successful programmes to tackle long-term rough sleeping in the UK took place in Greater Manchester (see chapter 8). This programme implemented many of the learnings featured in this report, such as peer mentors with lived experience (see chapter 2), a personalised, asset-based approach (see chapter 5), and a genuinely different relationship between “commissioner” and “supplier” (see chapter 4). The combination of these elements has led to a substantial reduction in this seemingly intractable form of homelessness.

This project was clearly a significant breakthrough for Greater Manchester. However, it would have an even greater impact if we could use its lessons to deliver similar results on a much larger scale. To that end, it was important to tease out the individual components of operational management and delivery that really mattered, so they could be shared and implemented elsewhere. To achieve this, a different approach to evaluation was needed.

“It was important to tease out the individual components of operational management and delivery that really mattered, so they could be shared and implemented elsewhere”

What really worked in Manchester?

The first priority was to understand the key design features and the delivery pilots that were trialled. This was the focus of the excellent evaluation produced by the Greater Manchester Combined Authority, which listed the elements found to be important, and made recommendations for future homelessness policy in the region.

But in order to broaden the learning, it was then necessary to dig further into the relative impact of each change. Which design features and delivery pilots were the game-changers that reduced rough sleeping?

Was it, for example, the commitment of housing agencies to avoid evicting someone if a tenancy went wrong?

Or did directly employing a qualified mental health professional make the real difference?

New evaluation techniques tease out causality

Chris Fox, Professor of Evaluation and Policy Analysis at Manchester Metropolitan University, is examining the Greater Manchester homelessness programme. Among his evaluation tools are “Qualitative Comparative Analysis” (QCA) and “Process Tracing”.

“QCA looks in depth at a number of cases to find causal patterns,” explains Professor Fox. “It compares different combinations of interventions and the outcomes. This helps us to work out which combinations lead to what outcomes. It helps to show which conditions are essential to produce certain outcomes.

“This approach recognises that different interventions can produce the same outcome. So, for example, providing a ‘managed move’, instead of eviction, can keep someone off the streets. Likewise, preventing someone being jailed for a minor offence might also avoid a return to rough sleeping.

“Additionally, this approach recognises that a single intervention can produce different outcomes, depending on the circumstances. So, for example, receiving personalised mental health support in your own home might turn around one person’s life. But it might not be a game-changer for someone else.”

“QCA helps us to work out which combinations of interventions lead to what outcomes, [and] which conditions are essential to produce certain outcomes”
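The necessity and sufficiency logic at the heart of QCA can be sketched in code. The toy data below is entirely invented for illustration (it is not drawn from the Greater Manchester evaluation): each hypothetical case records whether two interventions, a “managed move” and in-home mental health support, were present, and whether the person sustained a tenancy.

```python
# Toy sketch of crisp-set QCA on invented data.
# Each case: which interventions were present (1) or absent (0),
# and whether the outcome (a sustained tenancy) was achieved.
cases = [
    ({"managed_move": 1, "mh_support": 1}, 1),
    ({"managed_move": 1, "mh_support": 0}, 1),
    ({"managed_move": 0, "mh_support": 1}, 1),
    ({"managed_move": 0, "mh_support": 0}, 0),
]

def is_necessary(condition, cases):
    """Necessary: present in every case where the outcome occurred."""
    return all(c[condition] == 1 for c, outcome in cases if outcome == 1)

def is_sufficient(condition, cases):
    """Sufficient: the outcome occurred in every case where it was present."""
    return all(outcome == 1 for c, outcome in cases if c[condition] == 1)

for cond in ["managed_move", "mh_support"]:
    print(cond,
          "necessary:", is_necessary(cond, cases),
          "sufficient:", is_sufficient(cond, cases))
```

On this invented data, each intervention is sufficient but neither is necessary: either route keeps someone housed. That is exactly the pattern Professor Fox describes, in which different interventions can produce the same outcome.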

Process tracing

A second form of analysis tries to uncover any links between possible causes and outcomes. It does this by studying how causal mechanisms might work. This is called “process tracing”, explains Professor Fox. “You start by suggesting some possible causes. Then you identify the pieces of evidence you would expect to find at each step of a causal chain, if a hypothetical explanation is true. Practical evidence is then gathered to overturn or support rival hypothetical explanations.

“Process tracing helps to establish whether the actual mechanisms at work fit with those that you might have predicted. Assessing each hypothesis alongside the available evidence can help you to understand what causes a given set of outcomes in any given case.”
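One common way to formalise the weighing of rival hypotheses against evidence is Bayesian updating. The sketch below is a minimal illustration of that idea with invented numbers; the hypotheses, the piece of evidence, and all probabilities are assumptions for the example, not findings from the programme.

```python
# Toy sketch of process tracing framed as Bayesian updating
# over rival hypotheses. All numbers are illustrative assumptions.

# Two rival explanations for reduced rough sleeping, equally likely at the start.
priors = {"H1_managed_moves": 0.5, "H2_mh_support": 0.5}

# One piece of evidence: case files show evictions were repeatedly averted.
# We judge this very likely if H1 is true, less likely if only H2 is true.
likelihood = {"H1_managed_moves": 0.9, "H2_mh_support": 0.3}

def update(priors, likelihood):
    """Bayes' rule: posterior is prior times likelihood, normalised."""
    unnorm = {h: priors[h] * likelihood[h] for h in priors}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

posterior = update(priors, likelihood)
print(posterior)  # H1 rises to 0.75, H2 falls to 0.25
```

Each new piece of evidence shifts confidence towards the explanation that predicted it, mirroring the step Professor Fox describes of assessing each hypothesis alongside the available evidence.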

Sharing the learning

The use of these sophisticated evaluation tools (by Professor Fox and others) should help us to better understand the key factors behind the programme’s success.

“Our hope is that this approach to evaluation will make Greater Manchester’s success more useful to other parts of the country, because we can be more confident about what really matters,” says Professor Fox. “It should also encourage greater application of these innovative evaluation tools to other programmes, and therefore greater devolution of design authority to where it is most effective. When we have a concrete understanding of the key components that are at work, leadership teams in other projects will be able to benefit from this, and central government may become more confident in trusting these teams to work out how best to help people locally.”