Reaching the Cloud with MDM

As reported in the post The MDM Landscape is Slowly Changing, a quote from the Information Difference MDM Landscape 2013 goes:

  • “The market is starting to dabble in cloud-based implementations…”

I have spent part of the last months on a cloud-based Master Data Management implementation, in this case using the iDQ™ MDM Edition.

Well, actually it isn’t a full cloud implementation. There is a frontend taking care of user interaction in the cloud and there is a backend taking care of integration on-premise.

I guess many other MDM implementations embracing cloud technology will look like this solution: a hybrid, where some services are based in the cloud and some services are based on-premise.
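
As a minimal sketch of how such a hybrid hand-over could work, the Python below shows a cloud frontend posting a captured party record to an on-premise integration gateway. The endpoint URL and payload shape are hypothetical illustrations, not how the iDQ™ MDM Edition actually integrates:

```python
import json
import urllib.request

# Hypothetical gateway exposed by the on-premise backend
ON_PREMISE_ENDPOINT = "https://mdm-gateway.example.internal/party"

def hand_over_to_backend(party_record: dict) -> int:
    """Hand a record captured in the cloud frontend over to the on-premise backend."""
    request = urllib.request.Request(
        ON_PREMISE_ENDPOINT,
        data=json.dumps(party_record).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status  # a 2xx status means the backend accepted the record

# Example: a party record captured and validated in the cloud, integrated on-premise
# hand_over_to_backend({"name": "Example Ltd", "country": "DK"})
```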

What about your MDM implementation(s)? Is it cloud-based, on-premise or hybrid?

[Image: Hohenzollern Castle in Southern Germany]


On Washing Rental Cars and Shared Data

Recently a tweet from Doug Laney of Gartner has been retweeted a lot:

[Image: Doug Laney's tweet]

As with most analogies, it may or may not fit depending on the perspective. Actually, rental cars are probably among the most washed cars, as the rental company washes and cleans the car between every rental.

Just as rental cars usually are quite clean, I have also found that sharing data is a powerful way to have clean data, as told on the page about Data Quality 3.0. This is also the grounding concept behind the instant Data Quality solution I'm working with, where we have just released our iDQ™ MDM Edition.


Where the Streets have Two Names

As told in the post The Art in Data Matching, a common challenge in matching names and addresses is that in some parts of the world the streets have more than one name at the same time, because more than one language is in use.

We have the same challenge when building functionality for rapid addressing: functionality that facilitates fast and quality-assured entry of addresses, supported by reference data that knows about postal codes, cities and street names.

The below example is taken from the instant Data Quality tool address form:

[Screenshot: street name suggestions in Finnish and Swedish]

The Finnish capital Helsinki also has an official name in Swedish, Helsingfors, and the streets in Helsinki/Helsingfors have both Finnish and Swedish names. So when you start typing, the suggestions may come in both Finnish and Swedish.
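
As a minimal sketch of how such suggestions could be produced, the Python below matches a typed prefix against a tiny in-memory table of genuine Helsinki/Helsingfors street name pairs; a real implementation would of course query full national postal reference data:

```python
# A few real Finnish/Swedish street name pairs from Helsinki/Helsingfors;
# real rapid addressing would use full national postal reference data.
STREETS = [
    {"fi": "Mannerheimintie", "sv": "Mannerheimvägen"},
    {"fi": "Aleksanterinkatu", "sv": "Alexandersgatan"},
    {"fi": "Bulevardi", "sv": "Bulevarden"},
]

def suggest(prefix: str) -> list[str]:
    """Return street suggestions matching the typed prefix in either language."""
    prefix = prefix.casefold()
    suggestions = []
    for street in STREETS:
        if any(name.casefold().startswith(prefix) for name in street.values()):
            # Present both names so the user sees it is the same street
            suggestions.append(f"{street['fi']} / {street['sv']}")
    return suggestions

print(suggest("manner"))  # ['Mannerheimintie / Mannerheimvägen']
```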

What challenges have you encountered with street names in multiple languages?


The Data Enrichment ABC

A popular and indeed valuable method of avoiding decay of data quality in customer master data and other master data entities is setting up data enrichment services based on third party reference data sources. Examples of such services are:

  • Relocation updates like National Change Of Address services from postal services
  • Change of name, address and a variety of status updates from business directories and in some countries citizen directories too

When using such services you will typically want to consider the following options for how to deal with the updates:

A: Automatic Update

Here your internal master data will be updated automatically when a change is received from the external reference data source.

B: Interactive Update

Here the update will require a form of manual intervention, either to be fulfilled or excluded based on a human decision.

An example would be a utility supplier receiving a relocation update for the occupier at an installation address. This will trigger/support a complex business process far beyond just changing the billing address.

C: Excluded Update

Here an automated rule will exclude the update, as there may be a range of reasons why you don't want to update certain entity segments under certain circumstances.
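
As a minimal sketch, routing incoming enrichment updates into these three pots could look like the Python below; the rules shown are hypothetical illustrations, as real rules would be configurable per update type and entity segment:

```python
from enum import Enum

class Pot(Enum):
    AUTOMATIC = "A"    # apply without human intervention
    INTERACTIVE = "B"  # queue for a data steward's decision
    EXCLUDED = "C"     # suppress the update by rule

def route_update(update: dict) -> Pot:
    """Route an incoming enrichment update to the A, B or C pot."""
    # C: hypothetical rule - never touch entity segments excluded by policy
    if update.get("entity_segment") == "frozen":
        return Pot.EXCLUDED
    # B: relocations may trigger a wider business process, as in the
    # utility example above, so a human decision is required
    if update.get("type") == "relocation":
        return Pot.INTERACTIVE
    # A: remaining simple corrections are applied automatically
    return Pot.AUTOMATIC

print(route_update({"type": "relocation", "entity_segment": "active"}))  # Pot.INTERACTIVE
```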


As explained in the post When Computer Says Maybe, we need functionality within data quality tools and Master Data Management (MDM) solutions to support data stewards in cost-effectively handling these situations, and this certainly also applies to the B pot in data enrichment.

Right now I'm working on designing such data stewardship functionality within the instant Data Quality environment.


Last Time Right

The "First Time Right" principle is a good principle for data quality, and indeed getting data right the first time is a fundamental concept in the instant Data Quality service I'm working with these days.

However, some blog posts in the data quality realm this week have pointed out that there is a life, and sometimes an end of life, after data has hopefully been captured right the first time.

In the post From Cable to Grave by Guy Mucklow on the Postcode Anywhere blog, the bad consequences of chasing debt from a customer who is no longer among us are examined.

Asset in, Garbage Out: Measuring data degradation is the title of a post by Rob Karel on Informatica Perspectives. Herein Rob goes through all the dangers data may encounter after being entered right the first time.

Some years ago I touched on the subject in the post Ongoing Data Maintenance. As told there, I'm convinced, after having seen it work, that a good approach to also getting it right the last time is to capture data in a way that makes data maintainable.

Some techniques for doing this are (see the sketch after the list):

  • Where possible collect external identifiers
  • Atomize data instead of squeezing several different elements into one attribute
  • Make the data model reflect the real world
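
Here is a minimal sketch, with hypothetical field names, of a party record applying these three techniques:

```python
from dataclasses import dataclass, field

@dataclass
class Address:
    # Atomized elements instead of one squeezed "address line" attribute
    street: str
    house_number: str
    postal_code: str
    city: str

@dataclass
class Party:
    name: str
    # External identifiers make the record matchable against maintenance feeds
    external_ids: dict = field(default_factory=dict)
    # The real world allows a party to have several locations
    addresses: list = field(default_factory=list)

customer = Party(
    name="Example Ltd",
    external_ids={"company_registry": "12345678"},  # hypothetical registry number
    addresses=[Address("Mannerheimintie", "3", "00100", "Helsinki")],
)
```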

And oh, it's neither the first time nor the last time I touch this subject. It needs constant attention.


Fact Checking by Mashing Up

A recent blog post by Andrew Grill, CEO of Kred, is called Can you spot a social media faker? Fact checking on social media is now becoming even more important.

Besides methods within the social sphere for fact checking, as described in Andrew Grill’s post, I also believe that mashing up social network profiles and traditional external reference data is a great way of getting the full picture.

As explained in the post Sharing is the Future of MDM, there are several available external options for checking the facts:

  • Public sector registries, which are getting more and more open, for example for the address part or even deeper, with due respect for privacy considerations that may differ between business entities and individuals.
  • Commercial directories, often built on top of public registries.
  • Personal data lockers like Mydex
  • Social network profiles, including credibility (or influence) services

The challenge is of course that there are plenty of external reference data sources, as many sources are national, making for 255 or so variants of each data source, just as there are plenty of social networks and some credibility (or influence) services for that matter.

Making that easy for you is exactly what we are working on in the instant Data Quality, iDQ™, concept.
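
As a minimal sketch of the mash-up idea, the Python below compares the overlapping fields of a social network profile and a traditional directory record and flags discrepancies for review; the sources and field names are hypothetical illustrations:

```python
def fact_check(social_profile: dict, directory_record: dict) -> dict:
    """Compare overlapping fields and flag discrepancies for review."""
    report = {}
    for field_name in social_profile.keys() & directory_record.keys():
        social_value = str(social_profile[field_name]).strip().casefold()
        directory_value = str(directory_record[field_name]).strip().casefold()
        report[field_name] = "match" if social_value == directory_value else "check"
    return report

# Hypothetical example: the city claimed on the profile disagrees with the registry
social = {"company": "Example Ltd", "city": "London"}
registry = {"company": "Example Ltd", "city": "Manchester", "company_no": "01234567"}
print(fact_check(social, registry))  # {'company': 'match', 'city': 'check'} (key order may vary)
```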

[Image: the iDQ™ framework]


Sharing is the Future of MDM

Over at the DataRoundtable blog Dylan Jones recently posted an excellent piece called The Future of MDM?

Herein Dylan examines how a lot of people in different organizations spend a lot of time trying to get complete, timely and unique data about customers and other business partners.

A better future for MDM (Master Data Management) could certainly be that every organization doesn't have to do the work over and over again. While self-registration by customers is a way of taking some of the burden off private enterprises and public sector bodies, we may do even better by not making the customer the data entry clerk, typing in the same information over and over again.

Today there are several available options for customer and other business partner reference data:

  • Public sector registries, which are getting more and more open, for example for the address part or even deeper, with due respect for privacy considerations that may differ between business entities and individuals.
  • Commercial directories, often built on top of public registries.
  • Personal data lockers like the Mydex service mentioned by Dylan.
  • Social network profiles.

My guess is that the future of MDM is going to be a mashup exploiting the above options.

Oh, and as representatives of such a mashup service, we at iDQ recently made sure we had accurate, complete and timely information filled in on our LinkedIn Company profile.


instant Single Customer View

Achieving a Single Customer View (SCV) is a core driver for many data quality improvement and Master Data Management (MDM) implementations.

As most data quality practitioners will agree, the best way of securing data quality is getting it right the first time. The same is true about achieving a Single Customer View. Get it right the first time. Have an instant Single Customer View.

The cloud-based solution I'm working with right now does this by (see the sketch after the list):

  • Searching external big reference data sources with information about individuals, companies, locations and properties as well as social networks
  • Searching internal master data with information already known inside the enterprise
  • Inserting really new entities or updating current entities by picking as much data as possible from external sources
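
A minimal sketch of that flow in Python could look like the below, where the two search functions are hypothetical placeholders for calls to the external reference data sources and the internal master data hub:

```python
def instant_scv(entered: dict, search_external, search_internal, master_hub: dict) -> int:
    """Resolve an entered party to a single customer view, enriched from external data."""
    external_hit = search_external(entered)  # big external reference data sources
    internal_id = search_internal(entered)   # data already known inside the enterprise
    if internal_id is not None:
        # Update the current entity, picking as much data as possible externally
        master_hub[internal_id].update(external_hit or entered)
        return internal_id
    # A really new entity: insert it, again preferring external data
    new_id = max(master_hub, default=0) + 1
    master_hub[new_id] = dict(external_hit or entered)
    return new_id

# Hypothetical usage with trivial stand-in search functions
hub = {1: {"name": "Example Ltd"}}
instant_scv(
    {"name": "Example Ltd."},
    search_external=lambda e: {"name": "Example Ltd", "country": "DK"},
    search_internal=lambda e: 1,  # pretend the error tolerant search found entity 1
    master_hub=hub,
)
print(hub[1])  # {'name': 'Example Ltd', 'country': 'DK'}
```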

[Image: instant Single Customer View]

Some essential capabilities in doing this are:

  • Searching is error tolerant, so you will find entities even if the spelling is different (see the sketch after this list)
  • The receiving data model is real world aligned. This includes:
    • Party information and location information have separate lives as explained in the post called A Place in Time
    • You may have multiple means of contact attached like many phones, email addresses and social identities
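
For the error tolerant searching, here is a minimal sketch using fuzzy matching from the Python standard library; a production service would use far more elaborate matching, along the lines of the post The Art in Data Matching:

```python
from difflib import get_close_matches

# Hypothetical internal party names to search against
KNOWN_PARTIES = ["Example Ltd", "Acme Corporation", "Nordic Utilities A/S"]

def error_tolerant_search(query: str, cutoff: float = 0.7) -> list[str]:
    """Find entities even if the spelling differs from what is stored."""
    return get_close_matches(query, KNOWN_PARTIES, n=3, cutoff=cutoff)

print(error_tolerant_search("Acme Coporation"))  # ['Acme Corporation']
```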

How do you achieve a Single Customer View?


Master Data Management in the Utility Sector

Making vertical MDM (Master Data Management) solutions, that is, MDM solutions tailored for a given industry, seems to be becoming a trend in the MDM realm.

Traditionally many MDM solutions actually are strong in a given industry or a few related industries.

This is also true for the MDM solution I’m working with right now, as this solution has gained traction in the utility sector.

So, what’s special (and not entirely special) about the utility sector?

Here are three of my observations:

Exploiting big external reference data

As examined in the post instant Data Quality at Work, the utility sector may gain much from using all the external reference data available in the party master data domain, including:

  • Consumer/citizen directories
  • Business directories
  • Address directories
  • Property directories

However, if data quality shouldn't become a joke, this means using the best national data sources available, as many of the world-wide data sources in this domain are far from providing the precision, accuracy and timeliness needed in the utility sector.
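
In practice this could be as simple as configuring the preferred source per country and only falling back to a world-wide directory when nothing better exists; a minimal sketch, where the entries are merely illustrative examples:

```python
# Illustrative mapping from country code to preferred national sources;
# the entries are examples, not a recommendation.
NATIONAL_SOURCES = {
    "DK": "CVR business registry and CPR-based citizen data",
    "GB": "Companies House and Royal Mail PAF",
    "FI": "national business and postal reference data",
}

def pick_source(country_code: str) -> str:
    """Prefer the best national source; fall back to a world-wide directory."""
    return NATIONAL_SOURCES.get(country_code, "world-wide fallback directory")

print(pick_source("DK"))
```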

Location precision

Managing locations is a big thing in the utility sector. The post called Where is the Spot explains how identifying locations isn't as simple as we are used to thinking in daily life.

This is indeed also true in the utility sector where the issue also includes managing many different locations for the same customer fulfilling different purposes at the same time.

The products

The electricity supply part of the utility sector shares a lot of issues with the telco sector when it comes to fixed installations, and in some cases the products and services are in fact the same, which as a consequence means that some organizations belong to both sectors.

This is also a danger with vertical MDM solutions, as there may be several best-of-breed options for a given organization, which eventually will result in choosing more than one platform and thereby reintroducing the silos which MDM in the first place was supposed to eliminate.

You probably won’t find the truth (and salsa) inside your firewall

In a Data Roundtable blog post published today, called Big Data in Your Kitchen, Phil Simon says:

“CXOs who believe that “data” is simply the content in their own internal databases are increasingly being seen as anachronistic. More progressive leaders understand that data is everywhere, including–and especially–external to the enterprise.”

Bringing in external data was also touched on recently by Kim Loughead of Informatica in the post Bring The Outside In: Why Integrating External Data Sources Should Be Your Next Data Integration Project.

Herein Kim emphasizes that: “Innovation is driven by data and that data largely resides outside your firewall”.

My humble work in bringing in the outside revolves around a service called instant Data Quality (iDQ™). This service is about exploiting the increasing choice of external directories holding valuable information about the individuals, companies, addresses and properties we have so much trouble reflecting in our party master data hubs.

What about you? Are you anachronistic or do you bring in the outside? Or as it will sound in Phil’s Big Data Kitchen: Will you miss salsa tonight?
