Business outcome is the end goal of any data management activity, be that data governance, data quality management, Master Data Management (MDM) or Product Information Management (PIM).
Business outcome comes from selling more and reducing costs.
At Product Data Lake we have a simple scheme for achieving business outcome through selling more goods and reducing the costs of sharing product information between trading partners in business ecosystems.
The title of this blog post is a Latin legal phrase meaning “false in one thing, false in everything”. It refers to the principle that everything a witness says may be regarded as not credible if one thing said by the witness is proven to be untrue. This has been part of the plot in plenty of courtroom films and TV shows.
This principle has meaning related to data quality too. An example from direct marketing would be a recipient of a direct mail saying: “If you can’t get my name right, how can I trust you to get anything right during a purchase?”
An example from the multi-channel world, or should we say omni-channel today, would be a shopper saying: “If you say one thing about the product in the shop and another thing on the website, how can I trust any of your product information?” Falsehood in omni-channel, so to speak.
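To make the check behind that complaint concrete: below is a minimal sketch, with made-up channel feeds, SKUs and attribute names, of how one could flag product attributes that are told differently in the shop system and on the website. It illustrates the idea only, not any particular tool.

```python
# A minimal sketch with hypothetical channel feeds: flag product attributes
# where two channels disagree about the same product.
from collections import defaultdict

channel_data = {
    "in_store": {"SKU-123": {"colour": "Blue", "weight_kg": "1.2"}},
    "website":  {"SKU-123": {"colour": "Navy", "weight_kg": "1.2"}},
}

def find_channel_conflicts(channel_data):
    """Return attributes where channels disagree for the same product."""
    merged = defaultdict(lambda: defaultdict(dict))
    for channel, products in channel_data.items():
        for sku, attributes in products.items():
            for attr, value in attributes.items():
                merged[sku][attr][channel] = value
    conflicts = {}
    for sku, attrs in merged.items():
        for attr, per_channel in attrs.items():
            if len(set(per_channel.values())) > 1:
                conflicts[(sku, attr)] = per_channel
    return conflicts

print(find_channel_conflicts(channel_data))
# {('SKU-123', 'colour'): {'in_store': 'Blue', 'website': 'Navy'}}
```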
Measuring the impact of such attitudes, and thereby the Return on Investment (ROI) in data quality improvement based on this principle, is very hard. We usually only have random anecdotal evidence that this happens.
But, what we can say is: Don’t lie in court and don’t neglect your data quality. It will hurt your credibility and, in the end, your creditworthiness.
Today I attended a nice little event at the British Computer Society. The event was called “Data Surgery” and had sessions combining presentations and discussions around data management. Among the presenters were Julian Schwarzenbach with his beavers and squirrels from the data zoo and Martin “Johari” Doyle of DQ Global discussing data quality.
In the data quality session I attended, the good old subject of selling data quality was touched upon, and not surprisingly the fear factor was mentioned as a way to go.
While I agree that fear of failure in the form of bad reputation and financial loss is a working concept, I have also seen that data quality initiatives based on fear don’t stick too long. Similar thoughts were expressed in the Data Quality Pro post called Taking The ‘Fear’ Factor Out Of Data Quality by Duane Smith. Herein Duane says:
“Selling your data quality initiative based on fear may have a short-term pay back, but I believe it will ultimately fail in the longer term.”
The opposite approach to relying on fear is counting on greed. That means making better profit by improving data quality. It’s a more sustainable way, I think, but indeed predicting ROI from a data quality initiative is very hard, as examined on the blog page called ROI.
So, most often we fear counting on greed and fall back to greeting the fear.
“Why did 85% of the 1700 CMOs interviewed say they use social media as a communications channel and yet only 14% of them measure the ROI?”
A traditional discipline in measuring ROI from a certain marketing activity is, as told in the post Matchback and Master Data Management, that you try to figure out which activity triggered a new (prospect) customer.
The problem is that the trigger may be in one channel but the customer shows up in another channel.
Measuring the Return on Investment (ROI) in doing social media communication and social CRM also requires matchback, and in order to do this you will need social master data management, where the old systems of record are linked to the new systems of engagement.
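As an illustration of the matchback idea, here is a minimal sketch with invented record layouts and field names: a profile from a system of engagement is linked to a customer in the system of record on a shared key (email in this toy example), so a conversion can be attributed to the triggering campaign. Real social matching is of course far fuzzier than a single exact key.

```python
# A minimal matchback sketch with hypothetical data: link profiles from a
# system of engagement to customers in the system of record.

crm_customers = [  # system of record (hypothetical)
    {"customer_id": "C-001", "email": "jane.doe@example.com", "name": "Jane Doe"},
]
social_profiles = [  # system of engagement (hypothetical)
    {"handle": "@janed", "email": "jane.doe@example.com", "campaign": "spring-launch"},
]

def matchback(crm_customers, social_profiles):
    """Link social profiles to master customer records on a shared key."""
    by_email = {c["email"].lower(): c for c in crm_customers}
    links = []
    for profile in social_profiles:
        customer = by_email.get(profile["email"].lower())
        if customer:
            links.append({"customer_id": customer["customer_id"],
                          "handle": profile["handle"],
                          "triggering_campaign": profile["campaign"]})
    return links

print(matchback(crm_customers, social_profiles))
# [{'customer_id': 'C-001', 'handle': '@janed', 'triggering_campaign': 'spring-launch'}]
```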
Recently I stumbled upon a report called Future Identities in the UK. The purpose of the report is to provide the government in the UK with insight into how the identities of citizens will develop over the next 10 years. But the insight certainly also applies to how private companies will have to react to this development, and not just in the UK.
The report talks about three different kinds of identities: biometric, biographical and social identities.
Applied to data quality and master data management I think these future kinds of identities will have these consequences:
Biometric identities relate to hard-core identity resolution as in fighting terrorism, crime investigation and physical access control, but are sometimes even used in simple commercial checks, as told in the post Real World Identity. My guess is that we will see biometrics used more as a means to have better data quality, but not considerably more, due to return on investment, as also examined in the post Citizen ID and Biometrics.
Biographical identities and the related attributes resemble what we often also call demographic attributes, used in handling data for direct marketing and other purposes of data management. Direct marketing may, as reported in the post Psychographic Data Quality, be in transition, going deeper into big data in order to become psychographic marketing.
Social identities are the new black. As discussed on this blog, most recently in the post Defining Social MDM, my guess is that social master data management is going to be big and has to be partly interwoven with using traditional biographical attributes and even, like it or not, biometric attributes. The art of doing that in a proper way is going to be very exciting.
Currently I’m working with a cloud-based service where we are exploiting available data about addresses, business entities and consumers/citizens from all over the world.
The cost of such data varies a lot around the world.
In Denmark, where the product was born, the costs of such data are relatively low. The joys of the welfare state also apply to access to open public sector data, as reported in the post The Value of Free Address Data. Also, you are able to check the identity of an individual in the citizen hub. Doing it online on a green screen you will be charged (the equivalent of) 50 cents, but doing it with cloud service brokerage, as in iDQ™, it will only cost you 5 cents.
In the United Kingdom the prices for public sector data about addresses, business entities and citizens are still relatively high. The Royal Mail has a license price tag on the PAF file, even for government bodies. Ordnance Survey provides the rest of AddressBase free for the public sector, but there is a big price tag for the rest of society. The electoral roll has a price tag too, even if the data quality isn’t considered fit for other uses than the intended immediate purpose of use, as told in the post Inaccurately Accurate.
At the moment I’m looking into similar services for the United States and a lot of other countries. Generally speaking you can get your hands on most data for a price, and the prices have come down since I last checked. Also there is a tendency toward lowering or abandoning the price for the most basic data, such as names, addresses and other identification data.
As poor data quality in contact data is a big cost for most enterprises around the world, decreasing prices for big reference data are good news.
However, if you are doing business internationally it is a daunting task to keep up with where to find the best and most cost-effective big reference data sources for contact data, and not least how to use the sources in business processes.
On Wednesday the 25th of July I’m giving a presentation, in the cloud, on how iDQ™ comes to the rescue. More information on DataQualityPro.
The idiom turning a blind eye originates from the sea battle at Copenhagen, where Admiral Nelson ignored a signal giving permission to withdraw by raising the telescope to his blind eye and saying “I really do not see the signal”.
Nelson went on and won the battle.
As a data quality practitioner you are often amazed by how enterprises turn a blind eye to data quality challenges and, despite horrible data quality conditions, keep on and win the battle by growing as a successful business.
The evidence about how poor data quality is costing enterprises huge sums has been out there for a long time. But business successes are made over and over again despite bad data. There may be casualties, but the business goals are met anyway. So, poor data quality is just something that makes the fight harder, not impossible.
I guess we have to change the messaging about data quality improvement away from the doomsday prophecies, which make decision makers turn a blind eye to data quality challenges, and be more specific about maybe smaller but tangible wins where data quality improvement and business efficiency go hand in hand.
Master Data Management is becoming increasingly popular and so is writing books about Master Data Management.
Last month Dalton Cervo and Mark Allen published their contribution to the book selection. The book is called “Master Data Management in Practice: Achieving True Customer MDM”.
As disclosed in the first part of the title, the book emphasizes the practical aspects of implementing and maintaining Master Data Management, and as disclosed in the second part of the title, the book focuses on customer MDM, which, until now, is the most frequent and proven domain in MDM.
In my opinion the book has succeeded very well in keeping a practical view on MDM. And I think that limiting the focus to customer MDM supports the understanding of the issues discussed in a good way, though, as the authors also recognize in the final part, multi-domain MDM is becoming a trend.
Mastering customer master data is a huge subject area. In my eyes this book addresses all the important topics with a good balance, both in the sense of embracing business and technology angles with equal weight and of not presenting the issues in too simple or too complex a way.
I like how the authors are addressing the ROI question by saying: “Attempts to try to calculate and project ROI will be swag at best and probably miss the central point that MDM is really an evolving business practice that is necessary to better manage your data, and not a specific project with a specific expectation and time-based outcome that can be calculated up front”.
In the final summary the authors say: “The journey through MDM is a constantly learning, churning and maturing experience. Hopefully, we have contributed with enough insight to make your job easier”. Yep, Dalton and Mark, you have done that.
I always wanted to make the above headline, but unfortunately one of the hardest things to do is documenting the direct link between data quality improvement and competitive advantage. Apart from the classic calculation of the cost of returned direct mails, most other examples have circumstantial evidence, but there is no smoking gun.
Then yesterday I stumbled upon an example with a different angle. A travel company issued a press release saying that strict new rules require that the name on your flight ticket has to be spelled exactly the same and hold the same name elements as in your passport. So if you made a typo or missed a middle name on your self-registration, you have to make a correction. Traditional travel companies do that for free, but low-cost airlines may charge up to 100 Euros (often more than the original ticket price) for making the correction.
So traditional travel companies invoke a competitive advantage by allowing for better data quality – and the low-cost airlines are making a profit from bad data quality.
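The check behind those rules is simple enough to sketch. Assuming, purely for illustration, that the rule means comparing normalised name elements one by one:

```python
# A minimal sketch with hypothetical names: does the ticket name hold exactly
# the same name elements as the passport?

def name_elements(full_name):
    """Split a name into normalised elements (very simplified)."""
    return [part.casefold() for part in full_name.split()]

def ticket_matches_passport(ticket_name, passport_name):
    return name_elements(ticket_name) == name_elements(passport_name)

print(ticket_matches_passport("Jane Doe", "Jane Marie Doe"))        # False: missing middle name
print(ticket_matches_passport("Jane Marie Doe", "Jane Marie Doe"))  # True
```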
When discussing information quality, a frequent subject is whether we can compare quality in manufacturing (and the related methodology) with information and data quality. The predominant argument against this comparison is that raw data can be reused multiple times while raw materials can’t.
Information Economics circles around that difference as well.
The value of data is very much dependent on how the data is being used, and in many cases the value increases with the number of times the data is used.
Data quality will probably increase with multiple uses, as accuracy and timeliness are probed with each use, a new conformity requirement may be discovered and the completeness may be expanded.
The usefulness of data (as information) may also be increased by each new use as new relations to other pieces of data are recorded.
In my eyes the value of (used) data relies very much on how well you are able to capture the feedback from how data is used in business processes. This is actually the same approach as continuous quality improvement (Kaizen) in manufacturing, only there the improvement is only good for the next goods to be produced. In data management we have the chance to improve the quality and value of already used data.
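A minimal sketch of that feedback loop, with an invented record structure and field names, could look like this: each use of the data in a business process is logged against the record, and a confirmation or correction improves the already used data on the spot.

```python
# A minimal sketch with a hypothetical master data record: capture feedback
# from each use and fold it back into the record itself.
from datetime import date

record = {
    "customer_id": "C-001",
    "address": "12 High Street, Anytown",
    "last_confirmed": None,   # timeliness indicator
    "usage_feedback": [],     # history of how the data behaved in use
}

def record_usage(record, process, outcome, correction=None):
    """Log how the data performed in a business process and improve the record."""
    record["usage_feedback"].append({"process": process, "outcome": outcome, "date": date.today()})
    if outcome == "confirmed":
        record["last_confirmed"] = date.today()       # accuracy and timeliness probed
    elif outcome == "corrected" and correction:
        record["address"] = correction                # the already used data is improved
        record["last_confirmed"] = date.today()

record_usage(record, "delivery", "corrected", correction="12 High Street, Flat 2, Anytown")
print(record["address"], record["last_confirmed"])
```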