Designing a service to help teams evaluate and prioritise digital health services

PHE needed a process to demonstrate the impact, cost-effectiveness, and benefits of digital health interventions to public health. We helped them and their partners to create an evaluation service to support better decision making in the prioritisation, commissioning, design, implementation, and improvement of new and existing products and services. Our solution needed to be repeatable and scalable across the organisation.

Challenge
Public health services need to improve health outcomes.

Evaluation is a process for finding out what works, and what doesn't. Evaluation supports decision making: whether to invest, to develop, to scale, to pivot or to discontinue.

Public health interventions are meant to improve the health outcomes of the populations they serve. In practice, however, public health products and services are not always evaluated. And when they are, the evaluation is sometimes superficial, inconsistent, or done only once the service is already developed.

Traditional academic evaluation frameworks and methods are not well suited to the rapid, iterative development of new digital interventions. Meanwhile, modern digital evaluation frameworks and methods are not well understood or integrated in the academic and public health worlds. So PHE invited us to help them figure out how to combine the best of both worlds, to sustainably evaluate and improve services over time.

Approach
Partnership, synthesis and proof of concept.

Context about our partnership with PHE:

Public Health England (PHE) exists to protect and improve the nation’s health and wellbeing, and reduce health inequalities. Livework’s ambition is to improve the way people live and work, and healthcare is core to this worldview. Over the past two years, we have been working with PHE as they develop their transformation agenda.

Discovery:
During the Discovery phase, we developed a working model built on existing secondary research, and conducted user research to understand the role of evaluation for academics as well as for digital and public health professionals. We balanced the viewpoints of the systems level and the delivery team level: the systems level was represented by a working group of academics and digital and public health professionals, while the delivery team level was represented by live teams delivering public health services.

We co-designed a range of concepts, then prioritised and selected the most promising ones to take into alpha.

Alpha:
We began the alpha testing phase by mapping the assumptions and hypotheses that underlie each of the prioritised concepts. We designed a series of prototypes and experiments to test these hypotheses.

Thanks to the support of our internal stakeholders, we had the chance to test the end-to-end service with an active delivery team and their existing product as a proof of concept. We organised several sessions over two weeks to observe how they used the tools and activities we’d crafted to help them create their evaluation strategy. They engaged with the activities and valued both the thoughtful process and the practical outputs. The proof of concept validated the evaluation service we designed.

Outcome
Enabling teams and evolving culture.

The evaluation service enables new delivery teams to build evaluation into the design of a service from the start. It also allows existing delivery teams to create a useful evaluation strategy for live services.

This service guides them to build the right team, clarify desired public health outcomes, develop a service blueprint, establish and integrate KPIs, identify the best methods to collect the data, and finally synthesise their findings.

Each of these aspects is essential to creating a successful service. Being clear on outcomes helps prioritise the features that promote those outcomes. Developing a blueprint shows how service features connect to outcomes, and therefore which should be developed or improved. Establishing appropriate KPIs enables the measurement of progress towards those outcomes, rather than indicators that track project progress alone. Clarifying data requirements ensures the service can effectively measure, learn and take appropriate action. Synthesising all of the above creates an evaluation strategy that can be shared and implemented.
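
To make these relationships concrete, here is a minimal, purely illustrative sketch in Python of how a team might record outcomes, features, KPIs and their data sources, and check for outcomes that no KPI yet measures. All names and content are invented for illustration; this is not part of the PHE evaluation service or its tools.

# Illustrative sketch only: hypothetical structures for an evaluation strategy.
from dataclasses import dataclass, field

@dataclass
class Outcome:
    name: str  # a desired public health outcome

@dataclass
class KPI:
    name: str
    outcome: Outcome   # the outcome this indicator measures progress towards
    data_source: str   # where the evidence comes from, e.g. a follow-up survey

@dataclass
class Feature:
    name: str
    outcomes: list[Outcome] = field(default_factory=list)  # outcomes the feature promotes

@dataclass
class EvaluationStrategy:
    outcomes: list[Outcome]
    features: list[Feature]
    kpis: list[KPI]

    def unmeasured_outcomes(self) -> list[Outcome]:
        # Surface outcomes that no KPI currently measures: a gap in the strategy.
        measured = {kpi.outcome.name for kpi in self.kpis}
        return [o for o in self.outcomes if o.name not in measured]

# Invented example content:
quit_rate = Outcome("Increase smoking quit rate")
reach = Outcome("Reach underserved groups")
strategy = EvaluationStrategy(
    outcomes=[quit_rate, reach],
    features=[Feature("Quit-plan builder", outcomes=[quit_rate])],
    kpis=[KPI("4-week quit rate", quit_rate, data_source="follow-up survey")],
)
print([o.name for o in strategy.unmeasured_outcomes()])  # ['Reach underserved groups']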

The evaluation service is designed to be repeatable and scalable. We packaged it as a series of workshops, and also imagined how a digital platform could support the approach by providing guidance, materials and connections to other teams. To embed evaluation sustainably into ‘business as usual’ processes, we worked with PHE to reflect on the culture of evaluation within their organisation, and consider how to develop it in the future.

Let's talk!
Liz LeBlanc, Lead Service Designer

I’m a designer raised by engineers, with a love of data and charts, and an obsession with storytelling. I love the way design requires a pendulum swing between tiny details and the holistic, over-arching goals.
