How to Establish an Effective Data Quality Policy for Your Business

January 23, 2019

By focusing on ad-hoc incident resolution, organizations struggle to identify and address recurring data quality problems in a structural way.

Too many organizations fail to implement effective data quality and risk management policies. When data comes in, they typically validate and cleanse it before distributing it more widely. The emphasis is on preventing downstream systems from receiving erroneous data. That's important, but by focusing on ad-hoc incident resolution, organizations struggle to identify and address recurring data quality problems in a structural way.

To rectify this, they need the ability to continuously analyze their data quality and report on it over time. Very few organizations across the industry are currently doing this, and that's a significant problem. After all, however much data cleansing an organization does, if it fails to track what was done in the past, it will not know how often specific data items contained gaps or suffered completeness or accuracy issues, nor understand where those issues are most heavily clustered.
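As a minimal illustration of what such tracking could look like, the sketch below logs each validation failure against the data item and quality dimension it affected, then aggregates the history so recurring problems stand out. All names here (QualityIssue, issue_frequency, the example fields) are hypothetical, not drawn from any specific product.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class QualityIssue:
    """One recorded validation failure (hypothetical schema)."""
    item: str        # the data item affected, e.g. "closing_price"
    dimension: str   # the quality dimension, e.g. "completeness"
    observed: date   # when the issue was detected

def issue_frequency(history: list[QualityIssue]) -> Counter:
    """Aggregate past issues so recurring problems become visible,
    instead of each one being fixed ad hoc and forgotten."""
    return Counter((i.item, i.dimension) for i in history)

history = [
    QualityIssue("closing_price", "completeness", date(2019, 1, 7)),
    QualityIssue("closing_price", "completeness", date(2019, 1, 14)),
    QualityIssue("coupon_rate", "accuracy", date(2019, 1, 14)),
]
for (item, dim), n in issue_frequency(history).most_common():
    print(f"{item}: {n} {dim} issue(s)")
```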

Focusing data quality efforts exclusively on day-to-day data cleansing also leaves organizations struggling to understand how often data quality mistakes are made, or how frequently quick bulk validations replace more thorough analysis. For many, the focus on day-to-day cleansing disguises the fact that they lack a clear understanding of data quality, let alone how to measure it or how to put in place a more overarching data quality policy. When firefighting comes at the expense of properly understanding the underlying quality drivers, that's a serious problem.

Especially in industries where regulation on due process and fit-for-purpose data has grown increasingly prescriptive, the risks of failing to implement a data quality policy and data risk management processes can be far-reaching.

Implementing a Framework

To address this, organizations need to put in place a data quality framework. That means identifying the critical data elements, the risks and likely errors or gaps in that data, and the data flows and controls that are in place. Very few organizations have implemented such a framework so far. They may have previously put stringent IT controls in place, but these have tended to focus on processes rather than on data quality itself.

By using a data quality framework, organizations can outline a policy that establishes a clear definition of data quality and the objectives of the approach. It also documents the data governance approach, including not just processes and procedures but also responsibilities and data ownership.
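To make this concrete, here is one possible shape for such a framework inventory: a simple registry that records, for each critical data element, its likely failure modes, the controls guarding against them, its source, and its owner. This is a hypothetical sketch of the structure, not a prescribed format; every name in it is an assumption for illustration.

```python
# Hypothetical registry tying critical data elements to risks,
# controls, data flows (sources), and ownership.
CRITICAL_DATA_ELEMENTS = {
    "instrument_price": {
        "risks": ["missing value", "stale value", "outlier"],
        "controls": ["completeness check", "staleness check", "tolerance check"],
        "source": "vendor_feed_a",     # where the data flows in from
        "owner": "market_data_team",   # accountable for its quality
    },
    "legal_entity_id": {
        "risks": ["format error", "duplicate"],
        "controls": ["format validation", "uniqueness check"],
        "source": "internal_master",
        "owner": "reference_data_team",
    },
}

def controls_for(element: str) -> list[str]:
    """Look up which controls apply to a given critical data element."""
    return CRITICAL_DATA_ELEMENTS.get(element, {}).get("controls", [])

print(controls_for("instrument_price"))
```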

The framework will also help organizations establish the dimensions of data quality: for example, that data should be accurate, complete, timely, and appropriate. For each of these dimensions, key performance indicators (KPIs) need to be put in place so the organization can measure what data quality means in each case. Key risk indicators (KRIs) need to be implemented and monitored to ensure the organization knows where its risks lie and has effective controls to deal with them. KPIs and KRIs should be shared with all stakeholders for periodic evaluation.
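As an illustration, the hypothetical KPI calculations below measure two of those dimensions, completeness and timeliness, and derive a simple KRI as a threshold breach on a KPI. The field names and the 99% threshold are assumptions made for the example.

```python
from datetime import datetime

def completeness_kpi(records: list[dict], required_fields: list[str]) -> float:
    """KPI: share of records in which every required field is populated."""
    if not records:
        return 1.0
    complete = sum(
        all(r.get(f) is not None for f in required_fields) for r in records
    )
    return complete / len(records)

def timeliness_kpi(arrivals: list[datetime], deadline: datetime) -> float:
    """KPI: share of data deliveries that arrived by the agreed deadline."""
    if not arrivals:
        return 1.0
    return sum(t <= deadline for t in arrivals) / len(arrivals)

def completeness_kri(kpi_value: float, threshold: float = 0.99) -> bool:
    """KRI: flag a risk when completeness falls below the threshold."""
    return kpi_value < threshold

records = [{"price": 101.2, "currency": "EUR"}, {"price": None, "currency": "EUR"}]
kpi = completeness_kpi(records, ["price", "currency"])
print(f"completeness = {kpi:.0%}, at risk = {completeness_kri(kpi)}")
```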

The Role of Data Quality Intelligence

A data quality framework will inevitably be focused on the operational aspects of an organization's data quality efforts. To take data quality to the next level, businesses can employ a data quality intelligence approach that provides a much broader level of insight, analysis, reporting, and alerting.

This will in turn enable the organization to capture and store historical information about data quality, including how often an item was modified and how often data was erroneously flagged: good indicators of the level of errors as well as of the quality of the validation rules. More broadly, it will enable critical analysis of these exceptions, of any data issues arising, and of the effectiveness of key data controls, alongside reporting on data quality KPIs, vendor and internal data source performance, control effectiveness, and SLAs.
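One simple way to derive those indicators is sketched below, under assumed definitions: count how often a rule flagged data and how often flagged items were actually corrected. Flagged-but-unmodified items are likely false positives, so a high false positive rate points at an overly aggressive validation rule rather than at bad data.

```python
from dataclasses import dataclass

@dataclass
class RuleStats:
    """Hypothetical tally of outcomes for one validation rule."""
    flagged: int = 0    # exceptions the rule raised
    modified: int = 0   # flagged items that were genuinely corrected

    @property
    def false_positive_rate(self) -> float:
        """Flagged but unmodified items were likely erroneous flags."""
        if self.flagged == 0:
            return 0.0
        return (self.flagged - self.modified) / self.flagged

stats = RuleStats(flagged=120, modified=90)
print(f"false positive rate: {stats.false_positive_rate:.0%}")  # 25%
```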

In short, data quality intelligence forms a layer on top of the operational data quality functionality provided by the framework, visualizing what that framework has achieved and confirming that all data controls are effective and that the organization is meeting its KPIs and KRIs. Rather than being an operational tool, it is a business intelligence solution, providing key insight into how the organization is performing against its data quality goals and targets. CEOs and chief risk officers (CROs) would benefit from this functionality, as would compliance and operational risk departments.

While the data quality framework supports the operational aspects of an organization's data quality efforts, data quality intelligence gives key decision-makers and other stakeholders insight into that approach, helping them measure its success and demonstrate that the organization complies with its own data quality policies and with relevant industry regulations.

Ultimately, the benefits of the approach are many and varied. It improves data quality in general, of course. Beyond that, it helps organizations demonstrate the accuracy, completeness, and timeliness of their data, which in turn helps them meet relevant regulatory requirements and assess compliance with their own data quality objectives.

The time is clearly ripe for businesses to knock their data quality processes into shape.

Boyke Baboelal is Director of Data Services at Asset Control.