Measurement plan – how to create one
Guidance for creating a measurement plan for DBT content.
When we produce content, we normally want users to do something as a result of using it. That action has an outcome we can measure, so we can see whether users did it. By creating a measurement plan, we can make deliberate choices about our target users and the outcomes we seek from them. Then we can see if the content is working.
How to get started
It’s never too late to start a measurement plan. You might want to see how existing content is performing, see if a modification has had the desired effect, or have some old content that’s a candidate for retirement. Or perhaps you’re creating some content from scratch, or picking up someone else’s drafts. It might be a page or collection of pages on GOV.UK, a content block in a service, a content type, or a wider initiative. Or it might be a series of content items in different services or channels that connects a user journey.
A measurement plan consists of:
- purpose of the content
- intended users and their goals (which can include user needs)
- data sources, where you’ll look for evidence of outcomes
- measures and metrics, which you’ll use to look for changes in the data sources
Later, you can use your measurement plan to write hypotheses about your content, which you can test and then share what you learn with others.
You might seek the help of a friendly performance analyst when developing a measurement plan. Or you might feel confident enough to start on your own, or with help from other content designers. You do not need to get it right first time. It’s an evolving document that will become more accurate as you learn more about your users and what they need to do with the subject matter, throughout the content lifecycle.
Run a kickoff workshop
When starting or picking up any content work, it’s good practice to talk to stakeholders, subject matter experts (SMEs) and users, to learn about the subject domain and find out what business requirements and user needs the content must fulfil. Starting work on a measurement plan is a helpful way to do this.
You can run a measurement plan kickoff workshop with stakeholders and SMEs, to get a head start. Forget the content designer’s lament that “you should’ve invited me to that meeting earlier” – this is your opportunity to hold the space and get people to come to you. A kickoff workshop with the right people can generate discussion, surface assumptions and put issues in a neutral space when you write them on sticky notes – on a physical wall, or a collaborative whiteboard like Mural.
There’s a measurement plan kickoff workshop template in DBT’s Mural space, which people with a Mural account can use. If you cannot use Mural, you can download a PDF version of the template (37KB).
Anatomy of a measurement plan
A good measurement plan has 5 main parts.
Purpose
All content has a purpose. If there’s no clear purpose, it’s a good sign the content does not need to exist. Unnecessary content makes it harder for users to find what they need, and wastes the limited resources we have to produce and maintain it.
Often, the purpose of content can appear stakeholder-driven. For example, an organisation has to publish an annual report, or a minister has requested it. But there’s usually a way to reframe this from a user’s perspective. It might get users to:
- know something. They might want to find out about government policy or understand the government’s response to a major event
- anticipate something. They might need to prepare for new regulations or tariffs that come into force next year
- do something. They might use a government service, apply for a support scheme or respond to a consultation
For example, the purpose of the Industrial Strategy consultation (October 2024) is to seek input from people and organisations who’ll be affected by related policies, so they can help shape them. The purpose of the EU mobility guidance on GOV.UK (published after freedom of movement ended in 2021) is to inform users about travel documents they need when working in the European Economic Area (EEA) and Switzerland.
When thinking about the purpose of content, it helps to be aware of the organisation’s strategic objectives and priorities. Usually, a piece of content is informing users about something or helping them to do something in service of a higher-level outcome. Knowing this can help you communicate the importance of content design to stakeholders.
For example, the EU mobility guidance serves the Secretary of State for Business and Trade (SoS)’s priority for Trade Strategy, by seeking to improve trade in services between the UK and the EU. It also supports the SoS’s Industrial Strategy priority, by focusing user research on participants from growth-driving sectors, including professional and business services. And it potentially supports the SoS’s plan for small businesses, by providing guidance that’s usually only available from in-house lawyers – or from DBT’s international trade advisers (ITAs), if the business is big enough to meet the criteria for high export potential.
Read about DBT’s strategic objectives.
Users and goals
Point 1 of the GOV.UK Service Standard is to understand users and their needs. This means focusing on the user and the problem they’re trying to solve, instead of a particular solution. This applies to any kind of content, as well as products and services.
Let’s ask 3 questions:
- What user groups are we targeting with this content? For example, is it people from specific professions or sectors?
- What do they want to accomplish? For example, do they want to export something, get funding or prepare for a new regulation? We might articulate this as tasks or user needs.
- What do we (the publishing organisation) want them to do as a result of using the content? This may be different from the user task or need, but is usually complementary. For example, apply for something.
If users can accomplish their goal as a result of using the content (whether directly or indirectly), we might describe this as an outcome. Or if the user changes their behaviour to accomplish a goal for the publishing organisation, we might also describe this as an outcome.
Data sources
A data source is where we’ll look for evidence of an outcome – for example, whether a goal has been accomplished. Data sources can be quantitative (for example, Google Analytics, service transactions), qualitative (user research, feedback forms) or a mix of both.
We have a growing library of data sources in this playbook. It’s not an exhaustive list – there are potential data sources everywhere, if you know where to look. You’ll usually need help from a data specialist (for example, a performance analyst, data analyst or data scientist) to get access to data sources and interrogate them.
At DBT, we have a central data catalogue called Data Workspace, which contains over 1,000 data sets and reports. You can access Data Workspace tools if you’ve completed your mandatory security and data protection training, and then contact information asset managers (IAMs) to request access to items in the data catalogue, if they require permission to use them.
You’ll find there are many data sources (outside Data Workspace) that contain potential evidence, but are not yet accessible, accurate or available for analysis. But you can still describe them in your measurement plan, and talk to a data specialist about doing a data capability review for future use.
It’s good practice to use different data sources when testing your content, so your results are more robust and less prone to bias. This is called triangulation.
Measures and metrics
We use measures and metrics to look for changes in data sources that show whether a thing we did (an action or intervention, for example publishing content) has had an effect.
Data sources have their own sets of measures and metrics. For example, Google Analytics will show unique page views (from users who’ve consented to cookies), service data will show completion rate, helpdesks (for example, Zendesk) might show enquiry volumes about certain topics, and user research might show the existence of pain points.
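As an illustration only (this is not part of DBT’s tooling), a metric is usually a simple calculation over a data source. This hypothetical Python sketch computes a completion rate from invented service transaction counts – the field names and figures are made up for the example:

```python
# Hypothetical example: computing a simple metric from a data source.
# The figures and field names below are invented for illustration only.

def completion_rate(started: int, completed: int) -> float:
    """Percentage of users who completed a transaction after starting it."""
    if started == 0:
        return 0.0
    return round(100 * completed / started, 1)

# Invented service data for one month
transactions = {"started": 1200, "completed": 954}

rate = completion_rate(transactions["started"], transactions["completed"])
print(f"Completion rate: {rate}%")  # prints "Completion rate: 79.5%"
```

Comparing this metric before and after a content change (for example, rewriting a start page) is one way of looking for evidence that the change had an effect.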
We have a growing library of measures and metrics in this playbook.
Hypotheses
When we publish content or launch new features, it’s often a case of throwing things at a wall to see what sticks. This is wasteful, and it means we cannot learn what did or did not work, so we can do it better and quicker next time. By writing the changes we make as testable hypotheses, we can be intentional about them and see if they have the desired effect.
A good hypothesis describes the steps for carrying out an experiment. It helps you plan what you want to do in a structured way, and then document your design decisions and share what you’ve learnt.
There are many ways to write a hypothesis. In this playbook, we describe an approach to hypothesis-driven design, which recommends a format for writing them. But the important thing is to write them down and test them, so you know if the thing you did worked and you can learn from it, as a team.
Hypotheses are a way of executing your measurement plan. So you do not need to write them up front, unless you already know what you want to test.
Document your measurement plan
There’s no prescribed format for documenting a measurement plan, as long as it’s clear and usable. It might be an MS Word document, slide deck or SharePoint page. Whatever format you choose, use the 5 main parts as subheadings.
Save your measurement plan somewhere close to the content, where everyone can access it. For example, you can save it in a shared folder on SharePoint and set the permissions so anyone in your organisation can view it. You can then link to it from a Jira ticket or from the corresponding item in the content inventory (or both).
Execute your measurement plan
Once you’ve drafted your measurement plan, the next step is to execute it. You can do this by carrying out the experiments you’ve described in your hypotheses.
Depending on the data sources, measures and metrics you’ve chosen, you will have reports and visualisations that describe the changes you’ve observed. You can add these to slide decks when presenting to colleagues and stakeholders, or doing show and tells. It’s good practice to document these in the same place as your measurement plan – for example, in a shared folder in SharePoint – so you can commit what you’ve learned to the organisation’s memory. Teams will then remember why you did what you did, and continue to build on it in the future.
Feedback
This service is in alpha. Give us your thoughts by using the feedback form (opens in new tab).