Which Data Management KPIs Should You Measure?

Everyone agrees that the results of your data management efforts should be measured, and that the way to do so is to define some Key Performance Indicators (KPIs) that can be tracked.

But what should those KPIs be? This has been a key question (so to speak) in almost all data management initiatives I have been involved with. With the tools available today you can easily define technical indicators close to the raw data, such as the percentage of duplicate data records and the completeness of data attributes. The harder thing is to relate data management efforts to business terms and quantify the expected and achieved results in business value.
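As a minimal sketch of what such technical indicators can look like close to the raw data, here is how a duplicate rate and attribute completeness could be computed with pandas; the records and the chosen match attributes are made up for illustration:

```python
# A minimal sketch of the "easy" technical indicators: duplicate rate and
# attribute completeness. Assumes customer records in a pandas DataFrame;
# the column names and data are illustrative.
import pandas as pd

records = pd.DataFrame({
    "name":  ["Ann Lee", "Ann Lee", "Bo Hansen", "Cara Diaz"],
    "email": ["ann@x.com", "ann@x.com", None, "cara@y.com"],
    "phone": ["555-0100", "555-0100", "555-0101", None],
})

# Duplicate rate: share of rows that repeat an earlier row on key attributes.
dup_rate = records.duplicated(subset=["name", "email"]).mean()

# Completeness: share of non-missing values per attribute.
completeness = records.notna().mean()

print(f"Duplicate rate: {dup_rate:.0%}")
print(completeness.map("{:.0%}".format))
```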

A recent Gartner study points out five areas where such KPIs can be defined and measured. The aim is that data / information becomes a monetizable asset. The KPIs revolve around business impact, time to action, data quality, data literacy and risk.

Get a free copy of the Gartner report on 5 Data and Analytics KPIs Every Executive Should Track from the parsionate site here.

From Data Quality to Business Outcome

Explaining how data quality improvement will lead to business outcome has always been difficult. The challenge is that there is very seldom a case where you can say with confidence: “fix this data and you will earn x money within y days”.

Not that I have not seen such bold statements. However, they very rarely survive a reality check. On the other hand, we all know that data quality problems seriously affect the health of any business.

A reason why the world is not that simple is that there is a long stretch from data quality to business outcome. The stretch goes like this:

  • First, data quality must be translated into information quality. Raw data must be put into a business context where the impact of duplicates, incomplete records, inaccurate values and so on is quantified, qualified and related to the affected business scenarios (see the sketch after this list).
  • Next, the achieved information quality advancements must be actionable in order to cater for better business decisions. Here it is essential to look beyond the purpose for which the data was gathered in the first place and explore how a given piece of information can serve multiple purposes.
  • Finally, the decisions must enable positive business outcomes within growth, cost reduction, mitigation of risk and/or time to value. Often these goals are met through multiple chains of bringing data into context, making that information actionable and taking the right decisions based on the achieved and shared knowledge.
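As a hedged sketch of that first translation step, the snippet below weighs defect counts by how much each defect type matters in a given business scenario; all scenarios, counts and weights are illustrative assumptions, not figures from any real case:

```python
# A hedged sketch of putting raw data quality defects into a business
# context. Defect counts and per-scenario impact weights are hypothetical.
defects = {"duplicates": 120, "incomplete_records": 300, "inaccurate_values": 45}

# How much each defect type matters in a given business scenario (0..1).
scenario_weights = {
    "invoicing":        {"duplicates": 0.9, "incomplete_records": 0.5, "inaccurate_values": 1.0},
    "direct_marketing": {"duplicates": 1.0, "incomplete_records": 0.8, "inaccurate_values": 0.3},
}

for scenario, weights in scenario_weights.items():
    # The same raw data quality translates into a different information
    # quality impact depending on the business scenario.
    burden = sum(defects[d] * w for d, w in weights.items())
    print(f"{scenario}: weighted defect burden = {burden:.0f}")
```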

Stay tuned – and also look back – on this blog for observations, experiences and proven paths on how to improve data quality leading to positive business outcome.

TCO, ROI and Business Case for Your MDM / PIM / DQM Solution

Any implementation of a Master Data Management (MDM), Product Information Management (PIM) and/or Data Quality Management (DQM) solution will need a business case to tell whether the intended solution has a positive business outcome.

Prior to the solution selection you will typically have:

  • Identified the vision and mission for the intended solution
  • Nailed the pain points the solution is going to solve
  • Framed the scope in terms of the organizational coverage and the data domain coverage
  • Gathered the high-level requirements for a possible solution
  • Estimated the financial results achieved if the solution removes the pain points within the scope and adhering to the requirements

The solution selection (jump-starting with the Disruptive MDM / PIM / DQM Select Your Solution service) will then inform you about the Total Cost of Ownership (TCO) of the best fit solution(s).

From here you can, put very simply, calculate the Return on Investment (ROI) by subtracting the TCO from the estimated financial results.
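In code, that back-of-the-envelope arithmetic could look like this; the figures are placeholders, not from any real business case:

```python
# Back-of-the-envelope business case arithmetic as described above.
# All figures are placeholders.
estimated_financial_results = 2_000_000  # value of removing the pain points
tco = 1_200_000                          # total cost of ownership of the solution

roi = estimated_financial_results - tco
print(f"ROI: {roi:,} USD ({roi / tco:.0%} of the investment)")
```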


You can check out more inspiration about ROI and other business case considerations on The Disruptive MDM / PIM / DQM Resource List.

MDM Spending Might be 5 Billion USD per Year

The latest Master Data Management Landscape report from Information Difference was covered in the post Movements in the MDM Vendor Landscape 2019.

Apart from positioning some of the tool vendors on a chart, the report also estimates the size of the MDM market. Information Difference estimates that the software vendors make 1.6 B USD per year. Of this, pure license sales account for 885 M USD, maintenance fees for 273 M USD and professional services for 450 M USD per year.

In addition, the report says: “Our research shows that on average the people costs of a MDM project are four times that of the software license cost, so there is clearly a large and separate consultancy market associated with MDM”.

So, the additional spending might be in the area of 3.5 B USD (depending on how you calculate and whether that multiplier is right). These costs go to system integrators, freelance MDM consultants and internal staff. In my experience internal staff are sparsely represented in MDM implementations, so yes, there is a large consultancy market within MDM.
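Putting the reported figures together, the arithmetic behind the headline number looks roughly like this:

```python
# The market-size arithmetic from the Information Difference figures (B USD).
licenses, maintenance, services = 0.885, 0.273, 0.450
software_market = licenses + maintenance + services   # ~1.6 B USD

people_multiplier = 4                                 # people cost ~ 4x license cost
consultancy_market = people_multiplier * licenses     # ~3.5 B USD

total = software_market + consultancy_market          # ~5 B USD
print(f"Software: {software_market:.1f} B, consultancy: {consultancy_market:.1f} B, "
      f"total: {total:.1f} B")
```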

The total of 5 Billion USD spent yearly by end user organizations on MDM then looks like this:

MDM Yearly Spending 2019. Source: Information Difference

The good question that follows will of course be about the size and distribution of the business benefits achieved.

Sell more. Reduce costs.

Business outcome is the end goal of any data management activity, be that data governance, data quality management, Master Data Management (MDM) or Product Information Management (PIM).

Business outcome comes from selling more and reducing costs.

At Product Data Lake we have a simple scheme for achieving business outcome through selling more goods and reducing the costs of sharing product information between trading partners in business ecosystems.


Interested? Get in touch.

Falsus in Uno, Falsus in Omnibus

The title of this blog post is a Latin legal phrase meaning “false in one thing, false in everything”. It refers to the principle that everything a witness says may be regarded as not credible if one thing said by the witness is proven untrue. This has been part of the plot in plenty of courtroom films and TV shows.

This principle has meaning related to data quality too. An example from direct marketing would be a recipient of a direct mail saying: “If you can’t get my name right, how can I trust you to get anything right during a purchase?”

Some data quality dimensions

An example from the multi-channel world, or should we say omni-channel today, would be a shopper saying: “If you say one thing about the product in the shop and another thing on the website, how can I trust any of your product information?” Falsehood in omni-channel so to speak.
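Detecting such cross-channel inconsistencies is the easy part; a minimal sketch, assuming product attributes have already been extracted per channel (the SKUs and values below are made up), could look like this:

```python
# A hedged sketch of a cross-channel consistency check: compare the product
# attributes published in each channel and flag disagreements.
# Product data and channel names are illustrative.
shop    = {"sku-1": {"price": 99.0, "color": "red"}, "sku-2": {"price": 20.0}}
website = {"sku-1": {"price": 89.0, "color": "red"}, "sku-2": {"price": 20.0}}

for sku in shop.keys() & website.keys():
    for attr in shop[sku].keys() & website[sku].keys():
        if shop[sku][attr] != website[sku][attr]:
            print(f"Inconsistent {attr} for {sku}: "
                  f"shop={shop[sku][attr]}, web={website[sku][attr]}")
```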

Measuring the impact of such attitudes, and thereby the Return on Investment (ROI) in data quality improvement based on this principle, is very hard. We usually only have random anecdotal evidence that this happens.

But what we can say is: don’t lie in court and don’t neglect your data quality. It will hurt your credibility and, in the end, your creditworthiness.


What Should be Driving Data Quality: Fear or Greed?

Today I attended a nice little event at the British Computer Society. The event was called “Data Surgery” and had sessions with combined presentations and discussions around data management. Among presenters were Julian Schwarzenbach with his beavers and squirrels from the data zoo and Martin “Johari” Doyle of DQ Global discussing data quality.

In the data quality session I attended, the good old subject of selling data quality was touched upon, and not surprisingly the fear factor was mentioned as a way to go.

While I agree that fear of failure in the form of bad reputation and financial loss is a working concept, I have also seen that data quality initiatives based on fear don’t stick for long. Similar thoughts were expressed in the Data Quality Pro post called Taking The ‘Fear’ Factor Out Of Data Quality by Duane Smith. Herein Duane says:

“Selling your data quality initiative based on fear may have a short-term pay back, but I believe it will ultimately fail in the longer term.”

The opposite approach to relying on fear is counting on greed. That means making better profit by improving data quality. It’s a more sustainable way, I think, but predicting ROI from a data quality initiative is indeed very hard, as examined on the blog page called ROI.

So, most often we fear counting on greed and fall back to greeting the fear.


Social MDM and Matchback

In a discussion in the Social MDM group on LinkedIn the following saying came up:

“Why did 85% of the 1700 CMOs interviewed say they use social media as a communications channel and yet only 14% of them measure the ROI?”

A traditional discipline in measuring ROI from a given marketing activity is, as told in the post Matchback and Master Data Management, to figure out from which activity a new (prospect) customer was triggered.

The problem is that the trigger may be in one channel but the customer shows up in another channel.

Measuring the Return on Investment (ROI) of social media communication and social CRM also requires matchback, and in order to do this you will need social master data management, where the old systems of record are linked to the new systems of engagement.
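A minimal sketch of the matchback idea, assuming a simplistic email-based link between a social engagement event and the system of record (all names and records below are made up):

```python
# A hedged sketch of matchback: linking an identity from a system of
# engagement (a social handle) to a customer in the system of record.
# The normalisation and the sample data are deliberately simplistic.
system_of_record = {
    "C-1001": {"name": "Ann Lee",   "email": "ann.lee@example.com"},
    "C-1002": {"name": "Bo Hansen", "email": "bo@example.com"},
}
engagement_event = {"handle": "@annlee", "email": "Ann.Lee@Example.com"}

def normalise(email: str) -> str:
    return email.strip().lower()

# Match on normalised email; real matchback would add fuzzy name matching,
# address data and survivorship rules.
matches = [cid for cid, rec in system_of_record.items()
           if normalise(rec["email"]) == normalise(engagement_event["email"])]
print(matches)  # ['C-1001'] - the social identity is tied back to the record
```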

As social business has some considerations, not least around privacy, the matchback activities may very well be done by adapting Hierarchy Management in Social MDM.


Future Identities

Recently I stumbled upon a report called Future Identities in the UK. The purpose of the report is to provide the UK government with insight into how the identities of citizens will develop over the next 10 years. But the insight certainly also applies to how private companies will have to react to this development, and not just in the UK.

The report talks about three different kinds of identities:

  • Biometric identities
  • Biographical identities
  • Social identities

Applied to data quality and master data management, I think these future kinds of identities will have the following consequences:

Biometric identities relate to hardcore identity resolution as in fighting terrorism, crime investigation and physical access control, but are sometimes even used in simple commercial checks, as told in the post Real World Identity. My guess is that we will see biometrics used more as a means to achieve better data quality, but not considerably more, due to return on investment, as also examined in the post Citizen ID and Biometrics.

Biographical identities and the related attributes resemble what we often call demographic attributes, used when handling data for direct marketing and other data management purposes. Direct marketing may, as reported in the post Psychographic Data Quality, be in transition to dig deeper into big data in order to become psychographic marketing.

Social identities are the new black. As discussed on this blog, latest in the post Defining Social MDM, my guess is that social master data management is going to be big and has to be partly interwoven with the use of traditional biographical attributes and even, like it or not, biometric attributes. The art of doing that in a proper way is going to be very exciting.


Return on Investment in Big Reference Data

Currently I’m working with a cloud-based service where we are exploiting available data about addresses, business entities and consumers/citizens from all over the world.

The cost of such data varies a lot around the world.

In Denmark, where the product was born, the costs of such data are relatively low. The joys of the welfare state also apply to access to open public sector data, as reported in the post The Value of Free Address Data. You are also able to check the identity of an individual in the citizen hub. Doing it online on a green screen you will be charged (the equivalent of) 50 cents, but doing it through cloud service brokerage, like in iDQ™, it will only cost you 5 cents.

In the United Kingdom the prices for public sector data about addresses, business entities and citizens are still relatively high. The Royal Mail has a license tag on the PAF file, even for government bodies. Ordnance Survey gives the rest of AddressBase away free to the public sector, but there is a big price tag for the rest of society. The electoral roll has a price tag too, even if the data quality isn’t fit for other uses than the intended immediate purpose, as told in the post Inaccurately Accurate.

At the moment I’m looking into similar services for the United States and a lot of other countries. Generally speaking, you can get your hands on most data for a price, and the prices have come down since I last checked. There is also a tendency to lower or abandon the price for the most basic data, such as names and addresses and other identification data.

As poor data quality in contact data is a big cost for most enterprises around the world, the news of decreasing prices for big reference data is good news.

However, if you are doing business internationally, it is a daunting task to keep up with where to find the best and most cost-effective big reference data sources for contact data, and not least how to use the sources in business processes.
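A sketch of what keeping up could mean in practice: a simple catalog of reference data sources per country, picking the cheapest usable one. The sources and prices are hypothetical, except that the Danish 50 cents versus 5 cents example mirrors the figures mentioned above:

```python
# A hedged sketch of picking the cheapest usable reference data source per
# country. Sources and prices are hypothetical illustrations.
sources = [
    {"country": "DK", "source": "citizen hub via broker", "usd_per_lookup": 0.05},
    {"country": "DK", "source": "citizen hub direct",     "usd_per_lookup": 0.50},
    {"country": "GB", "source": "PAF via reseller",       "usd_per_lookup": 0.12},
]

def cheapest(country: str) -> dict:
    # Real-world selection would also weigh coverage, data quality and
    # licensing terms, not just the price per lookup.
    candidates = [s for s in sources if s["country"] == country]
    return min(candidates, key=lambda s: s["usd_per_lookup"])

print(cheapest("DK"))  # the broker route at 5 cents, as in the Danish example
```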

On Wednesday the 25th of July I’m giving a presentation, in the cloud, on how iDQ™ comes to the rescue. More information on DataQualityPro.
