Data Governance in the Self-Service Age

The term self-service is used increasingly within data management. Self-service may be about people within your organization using self-service capabilities, as in self-service business intelligence. But, probably more disruptively, it may be about customer self-service and supplier self-service, meaning that people outside your organization increasingly depend on the level of data quality you can offer within your services.

Customer self-service will not succeed without you offering decent data quality related to product information, as exemplified in the post Falsus in Uno, Falsus in Omnibus. There will be more happy customer self-service events with more complete product information. Knowing your customers better helps you help them serve themselves. And in that sense it may be Time To Turn Your Customer Master Data Management Social?

Supplier self-service will not fly if you do not know your suppliers and their differences, which is quite similar to the concept of knowing your customers, as explained in the post Single Business Partner View. When it comes to approaches to data management within supplier engagement, there are several options, such as those examined in the post Sharing Product Master Data.

Do you think data governance is hard enough when dealing with the dear people within your own organization? I have news for you. It’s going to be even tougher when dealing with all the lovely people outside your organization whom you will ask to be part of your data collection and consumption workspace.

Hierarchical Single Source of Truth

Most data quality and master data management gurus, experts and practitioners agree that a “single source of truth” is a nice term, but not what data quality and master data management are really about, as expressed by Michele Goetz in the post Master Data Management Does Not Equal The Single Source Of Truth.

Even among those people, including me, who think an emphasis on real world alignment could help in getting better data and information quality, as opposed to focusing on fitness for multiple different purposes of use, there is acknowledgement that there is a “digital distance” between real world aligned data and the real world, as explained by Jim Harris in the post Plato’s Data. Also, different publicly available reference data sources that should reflect the real world for the same entity are often in disagreement.

When working with improvement of data quality in party master data, which is the master data domain where issues are most frequent and common, you encounter the same issues over and over again, like:

  • Many organizations have a considerable overlap of real world entities that are a customer and a supplier at the same time. Expanding to other party roles, this intersection is even bigger. This calls for a 360° Business Partner View.
  • Most organizations divide activities into business-to-business (B2B) and business-to-consumer (B2C). But the great majority of businesses are small companies where business and private matters are a mixed case, as told in the post So, how about SOHO homes.
  • When doing B2C, including membership administration in non-profit organizations, you often have a mix of single individuals and households in your core customer database, as reported in the post Household Householding.
  • As examined in the post Happy Uniqueness, there are a lot of good fitness for purpose of use reasons why customer and other party master data entities are deliberately duplicated within different applications.
  • Lately, doing social master data management (Social MDM) has emerged as the new leg in mastering data within multi-channel business. Embracing a wealth of digital identities will become yet another challenge in getting a single customer view and reaching for the impossible and not always desirable single source of truth.

A way of getting some kind of structure into this possible, and actually very common, mess is to strive for a hierarchical single source of truth, where the concept of a golden record is implemented as a model with golden relations between real world aligned external reference data and internal fit for purpose of use master data.
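
A hierarchical single source of truth like this can be sketched as a small data model. This is a minimal illustration, not a description of any particular product; all class and field names are my own assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class GoldenRecord:
    """Real world aligned party, anchored in external reference data."""
    party_id: str
    legal_name: str
    external_ref: str  # e.g. a business registry number


@dataclass
class ApplicationRecord:
    """A fit for purpose instance of the party in one application."""
    system: str   # e.g. "CRM" or "ERP"
    local_id: str
    role: str     # e.g. "customer" or "supplier"


@dataclass
class GoldenRelation:
    """Golden relations linking the golden record to its deliberate duplicates."""
    golden: GoldenRecord
    instances: list[ApplicationRecord] = field(default_factory=list)


# The same real world entity is both a customer and a supplier,
# yet each application keeps its own fit for purpose record.
acme = GoldenRecord("P-1", "Acme Ltd", "UK-CRN-0123456")
relation = GoldenRelation(acme, [
    ApplicationRecord("CRM", "C-42", "customer"),
    ApplicationRecord("ERP", "S-17", "supplier"),
])
```

The point is that the "truth" lives in the relations: the golden record reflects the real world as far as we can get, while each application record stays fit for its own purpose.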

Right now I’m having an exciting time doing just that as described in the post Doing MDM in the Cloud.

State of this Data Quality Blog

Today is a big day on this blog as it has been live for 3 years.

Success versus Failure

The first entry, called Qualities of Data Architecture, was a promise to talk about data quality success stories. The reason for emphasizing success stories related to data quality is a feeling that data quality improvement is too often promoted by horror stories telling how badly things may go for your business if you don’t pay attention to data quality.

The problem is that stories about failure usually aren’t taken too seriously. Jim Harris recently had a very good take on that in the post Data Quality and Chicken Little Syndrome.

So, I plan to tell even more success stories along with the inevitable stories about failure that so easily and obviously could have been avoided.

Getting Social

Using social networks to promote your blogging is quite natural.

At the same time, social networks have emerged as a new source for doing master data management (I call this Social MDM).

Exploring this new discipline over the hype peak, down through the valley of disappointment and up to the plateau of productivity will surely be a recurring subject on this blog.

People, Processes and Technology

Sometimes you see a statement like “Data Quality is not about technology, it’s all about people”.

Well, most things we can’t solve easily are not just about one thing. In my eyes the old cliché about addressing people, processes and technology surely also relates to getting data quality right.

There are many good blogs around about people and processes. On this blog I’ll try to write from my comfort zone, which is technology, without forgetting people and processes.

The Hidden Agenda

Most people blogging do so to promote their (employers’) expertise, services and tools, and I am no different.

Lately I have written a lot about a second-to-none cloud-based service for upstream data quality prevention. The wonder is called instant Data Quality.

While upstream prevention is the best approach to data quality, a lot of work must still be done every day in downstream cleansing, as told in the post Top 5 Reasons for Downstream Cleansing.

As I’m also working with a new stellar cloud-based platform for data quality improvement productivity, I will surely share some props for that in the near future.

Happy Uniqueness

When making the baseline for customer data in a new master data management hub, you often involve heavy data matching in order to de-duplicate the current stock of customer master data, so that you, so to speak, start with a cleansed, duplicate-free set of data.

I have been involved in such a process many times, and the result has never been free of duplicates. For two reasons:

  • Even with the best data matching tool and the best external reference data available, you obviously can’t settle all real world alignments with the confidence needed, and manual verification is costly and slow.
  • In order to make data fit for the business purposes, duplicates are required for a lot of good reasons.
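
The first point is why matching results are usually split into confidence bands rather than a yes/no answer. Here is a minimal sketch of that idea, where a simple string similarity stands in for a real matching tool and the thresholds are purely illustrative:

```python
from difflib import SequenceMatcher

AUTO_MERGE = 0.95  # above this: confident enough to treat as the same entity
REVIEW = 0.80      # between thresholds: queue for costly manual verification

def match_confidence(a: str, b: str) -> float:
    """Crude stand-in for a real matching engine's similarity score."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def classify(a: str, b: str) -> str:
    score = match_confidence(a, b)
    if score >= AUTO_MERGE:
        return "merge"
    if score >= REVIEW:
        return "manual review"
    return "distinct"

# A trailing dot is close, but not confident enough to auto-merge.
result = classify("Acme Ltd", "Acme Ltd.")
```

Everything landing in the middle band is exactly the costly and slow manual verification mentioned above, which is why no baseline ever ends up completely duplicate-free.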

Being able to store the full story from the result of the data matching efforts is what makes me, and the database, most happy.

The notion of a “golden record” is often not in fact a single record but a hierarchical structure that reflects both the real world entity, as far as we can get, and the instances of this real world entity in forms that are suitable for different business processes.

Some of the tricky constructions that exist in the real world and are usual suspects for multiple instances of the same real world entity are described in the blog posts:

The reasons for having business rules leading to multiple versions of the truth are discussed in the posts:

I’m looking forward to yet another party master data hub migration next week under the above conditions.

Right the First Time

Since I have just relocated (and we have just passed the new year resolution point) I have become a member of the nearby fitness club.

Guess what: They got my name, address and birthday absolutely right the first time.

Now, this could have been because the young lady at the counter is a magnificent data entry person. But I think her main competency rightfully is being a splendid fitness instructor.

What she did was ask for my citizen ID card and take the data from there. A little less privacy, yes, but surely a lot better for data quality – or data fitness (credit Frank Harland), you might say.
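
The pattern at work here is populating a record from an authoritative source instead of free-form typing. A minimal sketch, where the registry dictionary is a purely hypothetical stand-in for reading the ID card, and all names and IDs are made up:

```python
# Hypothetical stand-in for the data carried on a citizen ID card.
REGISTRY = {
    "010170-1234": {
        "name": "Jens Hansen",
        "address": "Strandvej 1, Copenhagen",
        "birthday": "1970-01-01",
    },
}

def enroll_member(citizen_id: str) -> dict:
    """Create a member record from the authoritative source, not free typing."""
    person = REGISTRY[citizen_id]  # fails loudly if the ID is unknown
    return {"citizen_id": citizen_id, **person}

member = enroll_member("010170-1234")
```

No keystrokes for name, address or birthday means no typos in name, address or birthday: right the first time.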

The Snow Queen

During the existence of this blog I have come to use two tags several times: the fairy tale author Hans Christian Andersen as an inspiration for data quality related subjects, and the tag happy databases as a counterweight to the fact that we may talk too much about all the bad data quality around.

Embracing these two tags, the fairy tale The Snow Queen also starts at the very bad end.

An evil troll makes a magic mirror that has the power to distort the appearance of things reflected in it. It fails to reflect all the good and beautiful aspects of people and things, while it magnifies all the bad and ugly aspects so that they look even worse than they really are; for example, it makes the loveliest landscapes look like “boiled spinach.” I think every child understands that metaphor.

We tend to do the same in the data quality realm. In order to make a case for data and information quality improvement, we like to tell about trainwrecks, like those on the site edited by the IAIDQ. And for the record, I am as guilty as everyone else of reading, laughing and contributing to the mobbing when someone else makes a mistake within data management.

Snowman Data Quality

Right now it is winter in the Northern Hemisphere and this year winter has come earlier than usual to Northern Europe where I live. We have already had a lot of snow.

One of the good things about snow is that you are able to build a snowman. Snowmen are beautiful pieces of art but very vulnerable. Wind and, not least, rising temperatures make the snowman ugly, and sooner or later it melts away.

Snowmen have this unfortunate fate common with many data quality initiatives.

Many articles, blog posts and so on in the data quality realm focus on this fate related to technology based initiatives. The common practice of executing downstream cleansing of data using data quality tools is often criticized. As a practitioner in this field I have to admit: yes, I am often practicing the art of building snowman data quality.

An often stated alternative to using data quality tools is improving data quality through change management, including relying on changing the attitude of people entering and maintaining data. Though it’s not my area of expertise, I have seen such initiatives too. And I am afraid that such initiatives, unfortunately, also sooner or later suffer the same fate as the snowman.

As said, I’m not the expert here. I am only the little child watching how this snowman is exposed to the changing winds in many business environments and how it finally disappears when the business climate varies over time.

Now, this is supposed to be a cheerful blog about happy databases. I am ready to get into some warm clothes and build a beautiful snowman of any kind.
