Sell more. Reduce costs.

A business outcome is the end goal of any data management activity, be that data governance, data quality management, Master Data Management (MDM) or Product Information Management (PIM).

Business outcome comes from selling more and reducing costs.

At Product Data Lake we have a simple scheme for achieving business outcome through selling more goods and reducing costs of sharing product information between trading partners in business ecosystems:


Interested? Get in touch:

Falsus in Uno, Falsus in Omnibus

The title of this blog post is a Latin legal phrase meaning “false in one thing, false in everything”. It refers to the principle that everything a witness says may be regarded as not credible if one thing said by the witness is proven not to be true. This has been a part of the plot in plenty of courtroom films and TV shows.

This principle has meaning related to data quality too. An example from direct marketing would be a receiver of a direct mail saying: “If you can’t get my name right, how can I trust you to get anything right during a purchase?”

Some data quality dimensions

An example from the multi-channel world, or should we say omni-channel today, would be a shopper saying: “If you say one thing about the product in the shop and another thing on the website, how can I trust any of your product information?” Falsehood in omni-channel so to speak.
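The consistency dimension behind that shopper’s complaint can be checked mechanically. A minimal sketch, assuming hypothetical channel extracts and field names (nothing here references a specific PIM system):

```python
# Hypothetical sketch: flag products whose attribute values differ between
# two channels. Channel layouts and field names are illustrative only.

def consistency_issues(shop_records, web_records, key="sku"):
    """Return (sku, field, shop_value, web_value) for every mismatch."""
    web_by_key = {r[key]: r for r in web_records}
    issues = []
    for shop in shop_records:
        web = web_by_key.get(shop[key])
        if web is None:
            continue  # product not published on the other channel
        for field in sorted((shop.keys() & web.keys()) - {key}):
            if shop[field] != web[field]:
                issues.append((shop[key], field, shop[field], web[field]))
    return issues

shop = [{"sku": "A1", "price": "9.99", "colour": "red"}]
web = [{"sku": "A1", "price": "8.99", "colour": "red"}]
print(consistency_issues(shop, web))  # -> [('A1', 'price', '9.99', '8.99')]
```

Every tuple returned is a “falsehood in omni-channel” candidate for a data steward to resolve.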

Measuring the impact of such attitudes, and thereby the Return on Investment (ROI) in data quality improvement based on this principle, is very hard. We usually only have random anecdotal evidence that this happens.

But, what we can say is: Don’t lie in court and don’t neglect your data quality. It will hurt your credibility and then in the end your creditworthiness.


What Should be Driving Data Quality: Fear or Greed?

Today I attended a nice little event at the British Computer Society. The event was called “Data Surgery” and had sessions with combined presentations and discussions around data management. Among presenters were Julian Schwarzenbach with his beavers and squirrels from the data zoo and Martin “Johari” Doyle of DQ Global discussing data quality.

In the data quality session I attended, the good old subject of selling data quality was touched upon, and not surprisingly the fear factor was mentioned as a way to go.

While I agree that fear of failure, in the form of bad reputation and financial loss, is a working concept, I have also seen that data quality initiatives based on fear don’t stick for long. Similar thoughts were expressed in the Data Quality Pro post called Taking The ‘Fear’ Factor Out Of Data Quality by Duane Smith. In it, Duane says:

“Selling your data quality initiative based on fear may have a short-term pay back, but I believe it will ultimately fail in the longer term.”

The opposite approach to relying on fear is counting on greed. That means making better profit by improving data quality. It’s a more sustainable way, I think, but indeed predicting ROI from a data quality initiative is very hard, as examined on the blog page called ROI.

So, most often we fear counting on greed and fall back on greeting the fear.


Social MDM and Matchback

In a discussion in the Social MDM group on LinkedIn the following saying came up:

“Why did 85% of the 1700 CMOs interviewed say they use social media as a communications channel and yet only 14% of them measure the ROI?”

A traditional discipline in measuring ROI from a given marketing activity is, as told in the post Matchback and Master Data Management, to try to figure out which activity triggered a new (prospective) customer.

The problem is that the trigger may be in one channel but the customer shows up in another channel.

Measuring the Return on Investment (ROI) of social media communication and social CRM also requires matchback, and in order to do this you will need social master data management, where the old systems of record are linked to the new systems of engagement.
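A minimal sketch of that linking step, assuming hypothetical record layouts and a naive email match (real social MDM identity resolution would use richer matching and must respect privacy constraints):

```python
# Hypothetical matchback sketch: link a new customer record (system of
# record) back to the social campaign interaction (system of engagement)
# that may have triggered it. Matching on normalised email is illustrative.

def normalise(email):
    return email.strip().lower()

def matchback(customers, interactions):
    """Map customer id -> campaign id where emails match, else None."""
    by_email = {normalise(i["email"]): i["campaign"] for i in interactions}
    return {c["id"]: by_email.get(normalise(c["email"])) for c in customers}

interactions = [{"email": "Jane@Example.com", "campaign": "spring-social"}]
customers = [
    {"id": 1, "email": "jane@example.com "},
    {"id": 2, "email": "john@example.com"},
]
print(matchback(customers, interactions))  # -> {1: 'spring-social', 2: None}
```

The `None` entries are exactly the unmeasured majority the 14% figure hints at: customers who showed up in a channel you cannot yet link back to a trigger.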

As social business has some considerations, not least around privacy, the matchback activities may very well be done by adapting Hierarchy Management in Social MDM.


Future Identities

Recently I stumbled upon a report called Future Identities in the UK. The purpose of the report is to give the UK government insight into how the identities of citizens will develop over the next 10 years. But the insight certainly also applies to how private companies will have to react to this development, and certainly not just in the UK.

The report talks about three different kinds of identities:

Identities in the UK

Applied to data quality and master data management I think these future kinds of identities will have these consequences:

Biometric identities relate to hard-core identity resolution, as in fighting terrorism, crime investigation and physical access control, but are sometimes even used in simple commercial checks, as told in the post Real World Identity. My guess is that we will see biometrics used more as a means to achieve better data quality, but not considerably more, due to return on investment, as also examined in the post Citizen ID and Biometrics.

Biographical identities and the related attributes resemble what we often call demographic attributes, used in handling data for direct marketing and other data management purposes. Direct marketing may, as reported in the post Psychographic Data Quality, be in transition to go deeper into big data in order to become psychographic marketing.

Social identities are the new black. As discussed on this blog, latest in the post Defining Social MDM, my guess is that social master data management is going to be big and has to be partly interwoven with traditional biographical attributes and even, like it or not, biometric attributes. The art of doing that in a proper way is going to be very exciting.
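To make the interweaving of the three kinds of identity concrete, a party master data record might carry them as separate attribute groups. This is a hypothetical sketch; the field names are illustrative and not from any real MDM product:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a party master data record carrying the three
# kinds of identity from the report. Field names are illustrative only.

@dataclass
class PartyRecord:
    party_id: str
    biographical: dict = field(default_factory=dict)  # name, address, demographics
    social: dict = field(default_factory=dict)        # handles in systems of engagement
    biometric: dict = field(default_factory=dict)     # references only, for privacy

party = PartyRecord(
    party_id="P-0001",
    biographical={"name": "Jane Doe", "country": "UK"},
    social={"twitter": "@janedoe"},
    biometric={"fingerprint_ref": "vault-reference"},
)
print(party.biographical["name"])  # -> Jane Doe
```

Keeping the groups separate makes it easier to apply different privacy and retention rules to each, which is part of doing the interweaving “in a proper way”.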


Return on Investment in Big Reference Data

Currently I’m working with a cloud-based service where we are exploiting available data about addresses, business entities and consumers/citizens from all over the world.

The cost of such data varies a lot around the world.

In Denmark, where the product was born, the costs of such data are relatively low. The joys of the welfare state also apply to access to open public sector data, as reported in the post The Value of Free Address Data. You are also able to check the identity of an individual in the citizen hub. Doing it online on a green screen you will be charged (the equivalent of) 50 cents, but doing it with cloud service brokerage, as in iDQ™, it will only cost you 5 cents.

In the United Kingdom the prices for public sector data about addresses, business entities and citizens are still relatively high. The Royal Mail puts a price tag on the PAF file, even for government bodies. Ordnance Survey provides AddressBase free for the public sector, but there is a big price tag for the rest of society. The electoral roll has a price tag too, even if the data quality isn’t fit for other uses than the intended immediate purpose of use, as told in the post Inaccurately Accurate.

At the moment I’m looking into similar services for the United States and a lot of other countries. Generally speaking, you can get your hands on most data for a price, and the prices have come down since I last checked. There is also a tendency to lower or abandon the price for the most basic data, such as names, addresses and other identification data.

As poor data quality in contact data is a big cost for most enterprises around the world, the news of decreasing prices for big reference data is good news.

However, if you are doing business internationally, it is a daunting task to keep up with where to find the best and most cost-effective big reference data sources for contact data, and not least how to use the sources in business processes.
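That routing problem can be sketched very simply. The source names below are made up for illustration; no real provider APIs, prices or coverage are referenced, and in practice the table would change as often as the market does:

```python
# Hypothetical sketch: route an address-verification request to a preferred
# big reference data source per country. Source names are illustrative only.

SOURCES = {
    "DK": ["open-address-register", "citizen-hub"],
    "GB": ["paf", "addressbase", "electoral-roll"],
    "US": ["commercial-provider"],
}

def pick_source(country_code, default="commercial-provider"):
    """Return the first preferred source for a country, else a default."""
    return SOURCES.get(country_code.upper(), [default])[0]

print(pick_source("dk"))  # -> open-address-register
print(pick_source("fr"))  # -> commercial-provider
```

The hard part is not the lookup but keeping the table current, which is exactly the kind of work a cloud service brokerage can centralise.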

On Wednesday the 25th of July I’m giving a presentation, in the cloud, on how iDQ™ comes to the rescue. More information on DataQualityPro.


Turning a Blind Eye to Data Quality

The idiom turning a blind eye originates from the naval Battle of Copenhagen, where Admiral Nelson ignored a signal with permission to withdraw by raising the telescope to his blind eye and saying “I really do not see the signal”.

Nelson went on and won the battle.

As a data quality practitioner you are often amazed by how enterprises turn a blind eye to data quality challenges and, despite horrible data quality conditions, keep on and win the battle by growing as a successful business.

The evidence about how poor data quality is costing enterprises huge sums has been out there for a long time. But business successes are made over and again despite bad data. There may be casualties, but the business goals are met anyway. So, poor data quality is just something that makes the fight harder, not impossible.

I guess we have to change the messaging about data quality improvement away from the doomsday prophecies, which make decision makers turn a blind eye to data quality challenges, and be more specific about maybe smaller but tangible wins, where data quality improvement and business efficiency go hand in hand.
