I very much support the guidance “Communicating quality, uncertainty and change”, produced by the Government Statistical Service’s Best Practice and Impact team.
This guidance sets out principles on how to communicate information about quality, uncertainty and change to users. As producers of official statistics, it is our role to explain how any limitations in our statistics feed through into the decisions that users take based on those statistics. Being clear about these issues is absolutely vital. It protects the integrity of the findings and supports the users of our numbers in drawing the correct conclusions to inform the decisions they make.
In line with the Code of Practice for Statistics, the guidance offers practical advice which can be applied to all sources of statistics, including surveys, censuses, administrative and commercial data. It includes examples of good practice, as well as suggested standard wording to be used when appropriate.
The challenge for statistical professionals in the Government Statistical Service (GSS) when communicating quality, uncertainty and change is to provide assurance to users, explaining complex concepts whilst being clear and transparent about our professional judgements. This is key to informing public debate and to making citizens aware of the strengths and limitations of our statistics.
I therefore invite you all to make use of this guidance. Communicating quality, uncertainty and change well is key to instil confidence in the statistics we produce.
John Pullinger, National Statistician, December 2018
A “Communicating uncertainty and change” guidance document was first released in November 2014. We have updated the content of this document to include some advice on reporting quality information, more up-to-date examples and to incorporate references to the second edition of the Code of Practice for Statistics.
This guidance draws heavily on the first edition, as well as the UK Statistics Authority’s guidance: Quality assurance and audit arrangements for administrative data. It is further inspired by Professor Sir David Spiegelhalter’s website Understanding uncertainty and by Michael Blastland’s and Andrew Dilnot’s book “The Tiger That Isn’t: Seeing through a world of numbers”.
As with all our guidance, we are publishing this to try and help you in your work, but we don’t regard it as the final word on this topic. Please do get in touch if you have comments, your own experience or examples that we could include.
Who is this guidance for?
This guidance is for producers of official statistics who need to write about quality, uncertainty and change, irrespective of the format used for dissemination or the data sources that are drawn upon. The document may also be helpful for other authors who need to produce and report on numbers.
What is its aim?
Our aim is to provide practical advice on how to communicate quality, uncertainty and change for different types of statistics and for a range of audiences.
Why do we need it?
Uncertainty is an inherent aspect of statistics, but the term is often misinterpreted as implying that the statistics are unusable, or simply wrong. As a result, statistics producers might have an understandable concern that pointing out limitations in the statistics could reduce users’ confidence in the published figures. This should not be the case.
Being upfront and transparent about quality, uncertainty and change helps us to protect the integrity of our findings and ensures that users do not draw conclusions that are not supported by the statistics. It reduces the risk – a potential consequence of excessive certainty – of incorrect use leading to inappropriate decisions.
What does it cover?
The guidance provides a common approach to aid the clear communication of information on quality, uncertainty and change. We did think about writing separately about uncertainty and change. However, the two concepts are deeply intertwined: an observed short-term change may be material to one user and regarded as noise by another.
The guidance can be applied to all sources of statistics, including surveys, censuses, administrative and commercial data, as well as estimates derived from a combination of these.
It includes examples of good practice, as well as some suggested standard wording to be used when appropriate.
The Code of Practice for Statistics
Official statistics inform and underpin important decisions. It is essential that statistical quality is effectively communicated so that users get the clearest picture of what the statistics show and how they might be used.
The Code of Practice for Statistics strongly emphasises the need for clear and transparent communication about quality, highlighting the need for statistics to “be based on the right data sources, with transparent judgements about definitions and methods, and judgements about the strengths and limitations of the statistics”.
Clear reporting of quality, uncertainty and change enhances the trustworthiness and value of statistics. The challenge for producers of statistics is to communicate information in a way that provides assurance and supports understanding in an insightful and accessible way. Here’s what the Quality Pillar of the code says about reporting quality, uncertainty and change:
Quality means that statistics fit their intended uses, are based on appropriate data and methods, and are not materially misleading.
Suitable data sources
Q1.5 The nature of data sources, and how and why they were selected, should be explained. Potential bias, uncertainty and possible distortive effects in the source data should be identified and the extent of any impact on the statistics should be clearly reported.
Q2.4 Relevant limitations arising from the methods and their application, including bias and uncertainty, should be identified and explained to users. An indication of their likely scale and the steps taken to reduce their impact on the statistics should be included in the explanation.
Q3.1 Statistics should be produced to a level of quality that meets users’ needs. The strengths and limitations of the statistics and data should be considered in relation to different uses, and clearly explained alongside the statistics.
Q3.3 The extent and nature of any uncertainty in the estimates should be clearly explained.
Understanding the concepts
The quality of a statistical product can be defined as the “fitness for purpose” of that product. More specifically, it is fitness for purpose measured against the European Statistical System Quality Assurance Framework.
Uncertainty helps us to describe the limitations of our statistics. It relates to a range of possible factors that can affect the accuracy of a statistic, including the impact of measurement or sampling error (related to sample surveys) and all other sources of bias and variance that exist in a data source.
A good understanding of sources and levels of uncertainty helps us to recognise how reliable numbers are. Communicating how well we can describe a specific outcome is critical to enable users to apply appropriate weight to the results of the analysis.
Change is the difference in measures of the same phenomenon between two distinct time points. Uncertainty has an impact here, in determining how well we can measure this difference and helping us to decide whether an observed change is material.
Clear presentation and language
Dealing with quality, uncertainty and change is part of the day-to-day job of a statistician, but uncertainty in numbers and what it means for their use can be difficult to grasp for many.
International studies such as “Women listen and men look? How to best communicate risk to support decision making” and “Statistical numeracy for health: a cross-cultural comparison with probabilistic national samples” have demonstrated that people generally find it difficult to interpret numerical uncertainty and risk.
The use of clear language, visualisation techniques and the right choice of communication medium help to improve the understanding of these concepts and provide insight about the information we are presenting.
How we communicate quality, uncertainty and change needs to be tailored to suit the audience. Some users, such as “expert analysts” and “technical users”, will be interested in detailed information on quality, uncertainty and change, while others, such as “inquiring citizens”, will not. We should offer users the opportunity to browse through the information according to their needs and enable them to find detailed information when they require it. Find out more about user personas.
Readily available information
To avoid misinterpretation, information about quality that is critical to allow users to assess and use statistics sensibly should be prominent, easy to find, clearly signposted and not hidden away in background appendices or footnotes. Vital messages about quality – those that have a profound impact on what can be drawn from the numbers – should be presented up front.
Communicating quality, uncertainty and change
Allow users to make a judgement
Sufficient and appropriate information should be provided to allow users to make a judgement about whether estimates are fit for their intended use and to build and maintain confidence in the statistics. As a producer, your ability to meet this requirement will be greatly improved if you have insight into the likely uses of your statistics and the decisions that they support.
To improve the understanding of quality, uncertainty and change, you should explain, with the use of contextual information, how these issues impact on the results.
You should provide an indication of:
- the quality of the statistics
- the level of uncertainty in the figures presented and how this impacts on their interpretation and appropriate use
- the direction, absolute and/or relative size of any change
- the level of uncertainty in the estimate of change and what this means for the interpretation of the reported numbers
- a longer-term view of change (e.g. trend)
Help with interpretation
Even though you will have little control over how information is likely to be used, as a professional analyst you have a role in helping with the interpretation of the findings:
- be honest and transparent about where the evidence comes from and its limitations
- try to convey a range of possible outcomes
- be neutral and impartial, letting the facts speak for themselves
- explain how uncertainty affects the interpretation of the numbers—what does it mean for the key messages?
- explain what you can and what you cannot conclude from the findings
- pre-empt any misunderstanding and overinterpretation
Illustrate ranges and confidence intervals
The illustration of ranges and confidence intervals can be a powerful tool to communicate uncertainty.
Chart 5.1 on page 35 of the Bank of England’s inflation report from November 2017 demonstrates how data visualisation can be used to support users in understanding the level of uncertainty in the statistics.
This fan chart provides ranges for possible data projections together with a line showing a central estimate (the most likely outcome) for past events. To prevent misinterpretation, the data visualisation does not provide a line for future outcomes.
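A fan chart’s shaded bands are percentile intervals of the possible outcomes, widening as the required coverage increases. As a rough sketch of the idea only, using simulated projections rather than Bank of England data:

```python
import random
import statistics

random.seed(42)

# Illustrative only: simulate 1,000 possible one-year-ahead projections
# around an assumed central value of 2.0 with an assumed spread of 0.8.
projections = [random.gauss(2.0, 0.8) for _ in range(1000)]

central = statistics.median(projections)
quantiles = statistics.quantiles(projections, n=100)  # 99 percentile cut points

def band(coverage):
    """Lower and upper bounds of a band covering `coverage`% of outcomes."""
    lower_pct = (100 - coverage) // 2          # e.g. 5 for a 90% band
    return quantiles[lower_pct - 1], quantiles[100 - lower_pct - 1]

for coverage in (50, 80, 90):
    lo, hi = band(coverage)
    print(f"{coverage}% band: {lo:.2f} to {hi:.2f} (central {central:.2f})")
```

Plotting these progressively wider bands for each future period, in progressively lighter shading, produces the familiar fan shape.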
Plain language and transparency
- Use plain language and be open and transparent about quality, uncertainty and change.
- Use words like “estimates” throughout the publication, which helps to indicate that there is uncertainty around the numbers.
- Tell people, in plain English, about the sources of data and any associated uncertainty, including any sampling and non-sampling variations, definitions, processes and systems used by data suppliers and model assumptions or errors.
- Where possible, quantify the impact of uncertainty on the statistics precisely. If this is not quantifiable, make reasoned judgements about the likely size and direction of the uncertainty and the potential impact on the statistics.
- Include high-level information on any adjustments made to the data and statistics before publication (for example, imputation or seasonal adjustment) and explain how these impact on the key messages if this is relevant.
- Describe the direction of change, its absolute size and its relative size over time. When appropriate, put the numbers into a wider context across different time periods, including the longer term.
- Describe any changes in quality made since the previous release and how this impacts on the accuracy of the statistics e.g. a description of a new methodology that produces more accurate estimates.
- For regular publications you might also provide a comparison with the previous periods (i.e. month on month, quarter on quarter change), change on the last year, plus a longer time series (e.g. five years). Is the change typical, or unusual, given the reporting period of the statistics? Is it in line with expected seasonal or other patterns?
- You might emphasise the stories in the data, and how any uncertainty fits into these.
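Describing the direction, absolute size and relative size of a change is a small calculation. This sketch borrows the road fatality figures quoted in the Department for Transport extract later in this guidance:

```python
# Reported road fatalities in two successive years (DfT example figures)
previous, latest = 1799, 1710

absolute_change = latest - previous
relative_change = absolute_change / previous * 100
direction = "decrease" if absolute_change < 0 else "increase"

print(f"{direction} of {abs(absolute_change)} ({abs(relative_change):.0f}%)")
# With these figures: decrease of 89 (5%)
```

Reporting all three elements (direction, 89 fewer deaths, about 5 per cent) lets users judge the change against both the scale of the series and its typical variation.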
When writing commentary:
- present sufficient detail on the main sources of error and bias, methods and coverage
- if available, provide quantitative measures of uncertainty (summaries of confidence intervals, standard errors, coefficients of variation, measures of coverage and completeness, editing rates) and explain what these mean for the interpretation and use of the statistics
- you could also explain whether a change is statistically significant and provide a plain English description
- be prepared to use different terms for different audiences—the terms “error” and “bias” could be misinterpreted by a non-statistical audience
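For survey-based statistics, the quantitative measures listed above can be computed in a few lines. This is a sketch only: the sample figures are invented and a simple random sample is assumed, so the standard error formula below would not apply to a complex survey design.

```python
import math

# Invented survey result: 520 of 2,000 respondents report the
# characteristic of interest (simple random sampling assumed).
n, successes = 2000, 520
p = successes / n                       # point estimate

standard_error = math.sqrt(p * (1 - p) / n)
half_width = 1.96 * standard_error      # 95% confidence interval half width
ci = (p - half_width, p + half_width)
cv = standard_error / p * 100           # coefficient of variation, %

print(f"Estimate: {p:.1%}")
print(f"95% CI: {ci[0]:.1%} to {ci[1]:.1%} (half width {half_width:.1%})")
print(f"Coefficient of variation: {cv:.1f}%")
```

The confidence interval, rather than the raw standard error, is usually the more accessible figure to quote in commentary.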
This example from the Crime Survey for England and Wales: year ending June 2017 explains how small numbers and data collection changes impact on observed trends in knife crime:
Some categories of police recorded crime relating to more serious violent and sexual offences can be broken down further by whether or not a knife or sharp instrument was involved. This information is collected separately by the Home Office from police forces and comparable data are only available from the year ending March 2011 onwards. As offences involving the use of weapons are relatively low in volume, the Crime Survey for England and Wales is not able to provide reliable trends for such incidents.
This example, from the Department for Transport, Reported road casualties in Great Britain: quarterly provisional estimates year September 2017, shows how commentary can be used to explain clearly what can and cannot be concluded from the numbers that have been reported.
What we can conclude
There has been a statistically significant decrease in the number of casualties of all severities in road traffic accidents between the years ending September 2016 and 2017. This indicates that there are a number of factors that have combined together to improve some aspects of safety on Britain’s roads.
What we cannot conclude
Although the number of people killed in road traffic accidents has decreased between years ending September 2016 and 2017, this change is small enough that it can be explained by the natural variation in deaths over time. The serious injuries figures have been substantially affected, and to a much lesser degree slight injuries, by changes in systems for severity reporting by about half of all police forces. As a result, comparisons with year ending June 2016 serious injuries, in particular, should be interpreted with caution.
- clearly indicate changes in definitions, methods of collection, editing, imputation or other issues that affect the data
- publish quantitative measures of uncertainty, where they are available. For example:
- confidence intervals, standard errors, relative standard errors and coefficients of variation
- for administrative data – measures of coverage and completeness, editing rates, imputation rates, suitable comparisons against external sources
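For administrative sources, the measures above are simple rates against the records received or expected. The counts in this sketch are invented for illustration:

```python
# Invented counts for an administrative data source
records_received = 98_400   # records supplied this period
records_expected = 100_000  # expected number, e.g. from a register
fields_complete = 95_700    # records with all mandatory fields present
records_edited = 3_150      # records changed by validation rules
records_imputed = 1_240     # records with at least one imputed value

def rate(numerator, denominator):
    """Express a count as a percentage of a base."""
    return numerator / denominator * 100

print(f"Coverage:        {rate(records_received, records_expected):.1f}%")
print(f"Completeness:    {rate(fields_complete, records_received):.1f}%")
print(f"Editing rate:    {rate(records_edited, records_received):.1f}%")
print(f"Imputation rate: {rate(records_imputed, records_received):.1f}%")
```

Published alongside the statistics, rates like these give users a quick sense of how much of the data arrived, and how much was altered, before the figures were produced.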
Data visualisations can be very effective tools for presenting quality, uncertainty and change.
- Be clear in the title that there is uncertainty associated with the statistics.
- Use different tools to demonstrate uncertainty including texture and colours.
- Highlight likely volatility in a series, placing each change in context.
- If applicable, illustrate the upper and lower bounds of the confidence intervals around the estimates and change over a range of time periods.
- Consider adding annotations directly to explain issues of uncertainty and volatility at the point where they happen.
- Consider using sidebars and breakout boxes to provide additional definitions, technical information or to flag up any quality issues.
The Understanding Uncertainty website provides a collection of helpful examples about how to use visualisation tools to make uncertainty clearer.
Figure 1.3 in the Taking Part Survey: England Adult Report, 2017/18 from the Department for Digital, Culture, Media and Sport uses confidence intervals in a bar chart to visualise uncertainty. This is accompanied by a sidebar which provides a description of a 95% confidence interval.
Section 6 of the Department for Work and Pensions’ infographic explaining how low income is measured in households below average income explains how confidence intervals help to show uncertainty. Non-technical users find visualisations like this helpful for understanding technical concepts such as confidence intervals.
Figure 2 in the Migration Statistics Quarterly Report: November 2018 from the Office for National Statistics is a graph of changes in net migration by citizenship in the UK. This graph also illustrates the uncertainty in the survey estimates. Notice also how annotations are used within the graph to help with the interpretation of the data series e.g. flagging when migration is and is not adding to the UK population.
Quality and methods
For each statistical output, information on quality and methods should be communicated to users both within the main statistical release and in Background Quality Reports (BQRs).
In the statistical release
The quality-related information in the statistical release is usually less detailed than the information in the BQR. It should provide enough detail to enable users to understand the quality implications of the statistics and how to use the statistics appropriately. You should include four categories of quality information in a statistical release. These are:
The key findings in statistical releases should include any vital messages about quality (those that have a profound impact on what can be drawn from the numbers). This enables users such as the “inquiring citizen” (who may not look at the detailed quality information in the background notes or BQR) to obtain important quality information upfront in the release.
This headline message used in a Department for Transport release communicates early on to the user that the figures are estimates and also quantifies the level of uncertainty:
Final estimates for 2016 show that between 220 and 250 people were killed in accidents in Great Britain where at least one driver or rider was over the drink-drive limit, with a central estimate of 230 deaths.
Department for Transport, Reported road casualties in Great Britain, final estimates involving illegal alcohol levels: 2016
About this release
This section should include background to the statistics and how they are collected as well as any vital information that will affect the use of the statistics (for example common pitfalls when using the data or any discontinuities).
It should help users to understand appropriate uses for the data and support them in avoiding inadvertent misuse.
An example from the Valuation Office Agency (VOA):
About this release
The data set used in this release is based on a sample of 482,170 rents recorded between 1 April 2017 and 31 March 2018. These statistics summarise rents paid for private properties in England only. The data used to generate these statistics is based on a sample of rental information, collected by Rent Officers, from landlords and letting agents. Under the current methodology the VOA does not publish a time series and users are advised not to infer trends in the rental market over time.
Valuation Office Agency: Private Rental Market Summary Statistics – April 2017 to March 2018
Critical caveats and quality warnings
Critical quality caveats or warnings should be included in the commentary alongside the points that they relate to.
This example from the Department for Transport is set out next to a map. It flags up quality issues that users should consider when using the map:
How accurate are these local estimates?
The Active Lives Survey has a standard sample size of at least 500 persons per local authority.
The data tables accompanying this release include 95% confidence interval half widths, which demonstrate the accuracy of the estimates and the likely range of values for the true value.
*Note that due to their small size, the estimate for City of London and Isles of Scilly has a higher degree of error associated with it.
Department for Transport: Walking and Cycling Statistics, England: 2016
Quality and methods section and background notes
This section should include details that will help the user decide suitable uses for the statistics and signpost to more detailed information about the methods used to create the statistics and what the statistics are used for.
Include the following in the quality and methods background note:
- A description of data collection and quality assurance processes.
- A comprehensive and detailed description of sources of data and any associated uncertainty – this might include:
- what the data should be used for
- sampling and non-sampling variations
- definitions, processes and systems used by data suppliers
- quality assurance checks or adjustments made to the data
The table at the bottom of the infographic explaining the English indices of deprivation 2019 sets out clearly and succinctly what the English Indices of Deprivation can and cannot be used for.
Information on the size and direction of uncertainty
Include detailed information on the size and direction of uncertainty and its potential impact on the statistics. Wherever possible, quantify the impact on the statistics. In particular, explain to users how uncertainty affects the appropriate use and interpretation of the numbers. Where it is not possible to quantify the impact, make reasoned judgements about the likely size and direction of the uncertainty and the potential impact on the statistics.
Explanation of indicators of uncertainty
An explanation of indicators of uncertainty when they are available. For survey data, this might include relevant information on the sample size and response rates. You could also include any relevant comparisons with other sources.
For administrative data, consider including measures of coverage and completeness, editing rates, imputation rates and any relevant comparisons against external sources.
Information on adjustments
Detailed information on adjustments made to the data before publication (for example, imputation, seasonal adjustment).
Plain English definitions
Plain English definitions of concepts like coefficient of variation, confidence interval and statistical significance, where appropriate.
Visualisations of the uncertainty
Consider how you can visualise the uncertainty in the statistics using charts, diagrams or infographics.
Signposts to more information
Signpost to more detailed quality information.
The Background Quality Report (BQR)
Background Quality Reports help our users to understand the strengths and limitations of the statistics, so that they can make the best decisions about how to use them. This helps to mitigate the risk of misusing data. The BQR should assess quality against the Quality Assurance Framework for the European Statistical System.
We would expect the “expert analyst” and “technical user” to be the main users of BQRs as they will be interested in detailed information on the quality of the statistics. If you would like help producing a BQR, please email firstname.lastname@example.org.
Examples of BQRs from departments across the GSS:
- Her Majesty’s Revenue and Customs, Quality report: Quarterly Stamp Duty Statistics
- National Health Service (NHS) Digital, Statistics on alcohol: England, 2018 – Data Quality Statement
- Her Majesty’s Land Registry, United Kingdom House Price Index: Quality and methodology
- Department for Transport, Search and rescue helicopter statistics: background quality report
- Department for Digital, Culture, Media and Sport, Quality indicators: Taking Part survey
- Ministry of Defence, Background quality report: trade, industry and contracts 2017
- NHS Digital, Statistics on Obesity, Physical Activity and Diet – England, 2018: Data Quality Statement
- Office for National Statistics, Births Quality and Methodology Information
- Ministry of Defence, Background quality report: Service Family Accommodation Statistics 2009 to 2018
Changes between periods can fall within the margins of survey error and random variation, especially for weekly, monthly or quarterly series. If so, it is important for users to understand that the change they see may not be material.
There are different approaches to try to communicate findings that are not statistically significant:
Focus on the longer term
Focus on the longer term, and what this month’s change adds to the bigger picture. If there is a long-term downward movement, is this month’s change in the same direction, or might it be an emerging indication of a turning point, for example? How far back do you need to go before the change is significant? Is it useful to draw attention to this?
Emphasise indicative changes
Emphasise that changes within the bounds of random variation are indicative, rather than definitive, and should be looked at in the context of the broader series.
Emphasise if it’s too early to make definitive statements
Use phrases like “broadly stable” to emphasise that it is too early to make definitive statements about movements in the series when the latest estimate is not significant.
Explain typical changes
Explain what the typical level of change is between measurement points in the series.
Avoid vague statements without supporting information
Phrases like “care must be taken” and “exercise caution” are not sufficient by themselves, because they do not help the reader to understand what they can and cannot do with the numbers. Support statements like this with practical advice.
This extract from a Department for Transport publication provides contextual information to explain that the changes are unlikely to be material. It also provides advice on what can be drawn from the statistics.
In the year ending June 2017, there were 1,710 reported road fatalities, a 5 per cent decrease from 1,799 in the previous year. This decrease is not statistically significant and it is likely that the natural variation in the figures explains the change.
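Judgements like the one in this extract rest on comparing the observed change with its standard error. A minimal sketch of that check, assuming the two yearly estimates are independent and using illustrative standard errors rather than any published DfT values:

```python
import math

# Illustrative estimates for two periods, each with an assumed
# standard error (these SEs are invented, not published figures).
previous, se_previous = 1799, 45.0
latest, se_latest = 1710, 44.0

change = latest - previous
se_change = math.sqrt(se_previous**2 + se_latest**2)  # independent estimates
z = change / se_change

# At the 5% level, |z| must exceed 1.96 for the change to be significant
significant = abs(z) > 1.96
print(f"Change: {change}, z = {z:.2f}, significant at 5% level: {significant}")
# With these assumed SEs, z is about -1.41: not significant
```

When the result is not significant, commentary like the extract above – which says so in plain English and attributes the movement to natural variation – serves the reader better than reporting the headline fall alone.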
Beyond statistical releases
Many users do not access our statistics via the statistical release and instead get sight of the statistics through other channels such as social media, press releases and ministerial briefings.
Producers of official statistics often have less influence over what is communicated via these channels, which means it can be more difficult to communicate information on quality, uncertainty and change.
Producers of official statistics therefore need to have appropriate sign off on what is communicated via these channels to ensure that the material provides an accurate reflection of the story resulting from the statistics.
The main social media channel used across the Government Statistical Service for communicating statistics is Twitter. Some departments have their own independent Twitter feeds which they use to tweet about all published statistical releases. Examples of departments with statistical Twitter accounts:
- Department for Transport: @DfTstats
- Scottish Government: @ScotStat
- Department for Environment, Food and Rural Affairs: @DefraStats
Given the small number of characters available, it is neither possible nor desirable to provide detailed information on quality, uncertainty and change within a tweet. However, the phrasing and graphic within the tweet can, where necessary, be used to communicate any quality issues.
A link to the release should always be included so the user can find further information if needed.
Many departments do not have statistical Twitter accounts but instead post tweets about their statistics on the main departmental Twitter feed. In this situation, statisticians should ensure that they are involved in the sign-off process for the tweet, making sure that the strengths and limitations are considered to produce a tweet that provides an accurate reflection of the story resulting from the statistics.
This tweet from the Office for National Statistics about crime statistics uses “estimates” and reference to statistical significance to convey uncertainty. The link to the release enables the user to access further quality information.
Press officers play an important role in communicating our statistics to the media. Generally, they do not want to fill products such as press releases with caveats and uncertainties around the statistics. It is therefore important that producers of official statistics work with press colleagues to ensure that the strengths and limitations of the statistics are considered when developing messages, providing an accurate reflection of the story resulting from the statistics.
Producers of official statistics should develop explanations that are accessible to the press and can easily be re-used, for example this question and answer article on inflation measurement which used material from the Office for National Statistics.
It is useful, for each statistical release, to outline a sign-off process for products produced by press colleagues to ensure the roles of statisticians and press colleagues are clearly understood.
Media handling process at the Department for Digital, Culture, Media and Sport
- Key points and trends identified by the lead statistician
- Pre-release meeting between statisticians, press office and policy, with a focus on presenting results in a broader context
- Collaborative development of material (press release or social media) for a range of target audiences
- Sign off by deputy directors (statistics, policy and press)
Tips for working with press officers
- Ensure the key findings from the statistics are clearly communicated to press colleagues – information on quality, uncertainty and change should also be provided.
- Consider sitting with press colleagues on the day of release to help answer any media queries – this has been successfully trialled in departments such as the Department for Transport and Department for Education.
- Consider running sessions with press colleagues on the importance of communicating quality, uncertainty and change.
Example of quality, uncertainty and change in a news article
There are examples of news articles which provide information to the reader on quality, uncertainty and change. This news article on crime survey statistics from the Office for National Statistics is one of these.
We sometimes need to revise our statistics. Revisions fall into two categories:
- Scheduled revisions: Planned amendments to published statistics in order to improve quality by incorporating additional data that were unavailable for the initial publication.
- Unscheduled corrections: Amendments made to published statistics following the identification of errors after their initial publication.
You should notify users as soon as is practicable about errors and revisions. The Code of Practice for Statistics emphasises the need to explain revisions alongside the statistics, being clear about the scale, nature, cause and impact of the changes (Practice Q3.4).
Your department should have a revisions policy in place which sets out how the reporting of revisions is handled. This policy should be published on the department’s statistics web page.
The revisions policy should set out:
- which statistics are subject to revisions
- what the timings will be for revisions
- why revisions might need to be made—for example because new data have become available or due to changes to methods or systems
- how revisions will be managed
Producers of official statistics should also communicate the timing of scheduled revisions to users. A revisions calendar can be helpful.
Examples of GSS revisions policies
- Office for National Statistics, Guide to statistical revisions
- Department for Transport: Statement on revisions
- Her Majesty’s Revenue and Customs: Code of Practice for Official Statistics: HMRC policy on revisions
The impact that revisions have on statistics should be quantified and clearly communicated to users. For example, the Office for National Statistics publishes real time databases for Gross Domestic Product which provide a time series of differences between previous and revised estimates. This enables users to understand the size and direction of changes resulting from revisions that are made.
The Organisation for Economic Co-operation and Development publishes guidance on performing revisions analysis and using its results. Revisions analysis should be used to help identify where improvements may be required, for example by strengthening the quality assurance processes for the statistics. The revisions analysis for United Kingdom public service productivity provides a good example of this.
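As an illustration of the kind of calculation a basic revisions analysis involves, the sketch below computes two common summary measures: the mean revision (which shows the direction of change, and may reveal bias in provisional estimates) and the mean absolute revision (which shows the typical size of change). The figures are invented for demonstration only; they are not taken from any real statistical series.

```python
# Illustrative revisions analysis with hypothetical figures.
# Each pair is (provisional estimate, final revised estimate).
provisional = [100.2, 101.5, 99.8, 102.0]
final = [100.6, 101.3, 100.4, 102.5]

# A revision is the difference between the final and provisional estimate.
revisions = [f - p for p, f in zip(provisional, final)]

# Mean revision: a persistently positive or negative value suggests the
# provisional estimates systematically under- or over-state the final figures.
mean_revision = sum(revisions) / len(revisions)

# Mean absolute revision: the typical magnitude of change, regardless of sign.
mean_abs_revision = sum(abs(r) for r in revisions) / len(revisions)

print(f"Mean revision: {mean_revision:.2f}")
print(f"Mean absolute revision: {mean_abs_revision:.2f}")
```

Publishing summary measures like these alongside the statistics helps users judge how much weight to place on provisional figures.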
For releases where scheduled revisions will be made to the figures in later publications, for example estimates based on data that were incomplete at the time of publication, you should:
Use phrasing such as “provisional results”
Phrasing such as “provisional results” should be used in titles and commentary to emphasise that the figures are subject to change in the future.
For example, the title of the Business investment real-time database from the Office for National Statistics (ONS) follows this approach.
Explain why figures are published and then revised
Provide an explanation as to why figures are published and then revised at a later date. This can often be the result of a trade-off between timeliness and accuracy.
Section 3 in the ONS publication “Introducing a new publication model for GDP” directly addresses timeliness versus accuracy.
Provide information on accuracy of provisional estimates
Provide information on the accuracy of the provisional estimates when compared with the revised final figures.
The table in the Background Information section of the Department for Transport’s publication “Provisional Road Traffic Estimates Great Britain: April 2018 – March 2019” compares provisional estimates with final estimates.
Be open about when revised results will be published
Provide details of when the revised results will be published.
This excerpt is from the Forestry Commission’s publication “Provisional UK Wood Production and Trade: 2017 provisional figures”:
Next Update: 28 September 2018 (final results for 2017 – see Forestry Facts and Figures 2018 and Forestry Statistics 2018)
16 May 2019 (Provisional results for 2018).
It is important that information on unscheduled corrections is communicated to users within the statistical release. You should explain:
- why the correction has been made
- what the impact of the correction is
This example from Sport England explains the cause and impact of a correction in their publication of results from the Active Lives Survey.
Active Lives: an update
We’ve published revised levels of activity data as part of our March 2018 Active Lives Adult Survey results, after the identification of a technical software issue. As a result of the issue, people doing exactly 150 minutes of physical activity were classified as fairly active, rather than active. This means the figures published previously under-reported the number of active people by a relatively small amount. This was a technical problem with the software used for the analysis – which is now resolved – and doesn’t impact or affect the quality of the data we collected, nor how we processed that data.
The revised statistics, covering mid-November 2015 to mid-November 2016, have been published together with the year two results. There have been small changes to the activity figures previously stated, but the overall participation patterns are unchanged.
We’re currently updating our Active Lives online analysis tool, which will allow you to explore the data and focus on your own areas of interest.