Many CRM applications have the concepts of leads, accounts and contacts for registering customers or other parties with roles in sales and customer service.
Most CRM systems have a data model suited for business-to-business (B2B) operations. In a B2B environment:
- A lead is someone who might become your customer some day
- An account is a legal entity that is, or seems likely to become, your customer
- A contact is a person who works at or otherwise represents an account
In business-to-consumer (B2C) environments there are different ways of making that model work.
The general perception is that data about a lead can be of mediocre quality, while it is of course important to have optimal data quality for accounts and contacts.
However, this approach works against the essential data quality rule of getting things right the first time.
Converting a lead into an account and/or a contact is a basic CRM process and the data quality pitfalls in that process are many. To name a few:
- Is the lead a new account or did we already have that account in the database?
- Is the contact new or did we know that person maybe at another account?
- How do we align the known data about the lead with external reference data during the conversion process?
In other words, the promise of having a 360-degree customer view is jeopardized by the design of most CRM systems.
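The duplicate pitfalls above can be sketched in code. Below is a minimal, hypothetical sketch (the function names and the similarity threshold are my own illustration, not any CRM vendor's API) of the kind of fuzzy matching a lead conversion process needs before creating a new account:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalized similarity between two name strings (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def find_candidate_accounts(lead_company, existing_accounts, threshold=0.6):
    """Return existing accounts that may be the same legal entity as the lead.

    A real conversion process would also match on address, company
    registration number and external reference data, not just the name.
    """
    return [acc for acc in existing_accounts
            if similarity(lead_company, acc) >= threshold]

accounts = ["Acme Corporation", "Globex Inc", "Initech"]
print(find_candidate_accounts("ACME Corp.", accounts))  # ['Acme Corporation']
```

Presenting such candidates to a human before creating a new account is one way of getting things right the first time rather than deduplicating afterwards.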
Every year The Information Difference publishes a report about the Master Data Management (MDM) landscape. This year's report celebrates 10 years of MDM solutions being around. Of course, the MDM industry didn't start on a certain date 10 years ago, but that was when MDM became a commonly accepted name for a branch of IT solutions within data management, and in my eyes a much needed spinoff of the data quality discipline.
A birthday is a good occasion to look ahead. The Information Difference report takes up some of the trends in the MDM solutions around, being that:
- Most MDM vendors today claim to be multi-domain MDM providers, but they are certainly at different stages, coming from different places
- Providing MDM in the cloud is slowly but steadily being adopted
- Integrating big data into MDM solutions has, in my words, reached the marketing and R&D departments at the MDM vendors and will someday also reach the professional service and accounting folks there
Read the MDM landscape Q2 2014 report from Information Difference here.
A problem in data cleansing I have come across several times is when you have some name and address registrations where it is uncertain to which country the different addresses belong.
Many address-cleansing tools and services require a country code as the first parameter in order to utilize external reference data for address cleansing and verification. Most business cases for address cleansing are indeed about a large number of business-to-consumer (B2C) addresses within a particular country. But sometimes you have a batch of typical business-to-business (B2B) addresses with no clear country registration.
The problem is that many location names apply to many different places. That is true even within a given country – which was the main driver for having postal codes in the first place. If a non-interactive tool or service has to look for a location all over the world, it gets really difficult.
For example, I'm in Richmond today. That could actually be a lot of places all over the world, as seen on Wikipedia.
I am actually in the Richmond in the London, England, UK area. If I were in the state capital of the US state of Virginia, I could have written that I'm in "Richmond, VA". If an international address-cleansing tool looked at that address, I guess it would first look for a country code, quickly find VA as a two-character country code at the end of the string and firmly conclude that I'm at something called Richmond in the Vatican City State.
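The Richmond pitfall is easy to reproduce in code. The toy sketch below (my own illustration, not a real address-cleansing API) shows how naively treating a trailing two-letter token as an ISO country code goes wrong, and how checking for a US state reading first helps:

```python
# ISO 3166-1 alpha-2 codes relevant to the example; "VA" really is
# the ISO code for the Holy See (Vatican City State).
ISO_COUNTRIES = {"VA": "Vatican City State", "GB": "United Kingdom",
                 "US": "United States"}

# US state abbreviations a smarter parser should consider first
US_STATES = {"VA": "Virginia", "CA": "California", "NY": "New York"}

def naive_country_guess(address: str) -> str:
    """Naively interpret the trailing two-letter token as a country code."""
    token = address.replace(",", " ").split()[-1].upper()
    return ISO_COUNTRIES.get(token, "unknown")

def better_country_guess(address: str, default: str = "US") -> str:
    """Prefer a US state reading when the trailing token is ambiguous."""
    token = address.replace(",", " ").split()[-1].upper()
    if token in US_STATES:
        return ISO_COUNTRIES[default]
    return ISO_COUNTRIES.get(token, "unknown")

print(naive_country_guess("Richmond, VA"))   # Vatican City State
print(better_country_guess("Richmond, VA"))  # United States
```

Real tools need far more context than this (postal code patterns, reference data, the rest of the address), but the ambiguity itself is exactly this simple.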
Have you tried using or constructing an international address cleansing process? Where did you end up?
When laying out data policies and data standards within a data governance program, one of the most important inputs is the business rules that exist within your organization.
I have often found that it is useful to divide business rules into two different types:
- External business rules, which are rules based on laws, industry regulations and other rules imposed from outside your organization.
- Internal business rules, which are rules made up within your organization in order to do business more competitively than your industry peers do.
Externally imposed business rules most often differ from country to country (or group of countries, like the EU). Internal business rules may do so too, but tend to be rules that apply worldwide within an organization.
The scope of external business rules tends to be fairly fixed, and so does the deadline for implementing the derived data policies and standards. With internal business rules you may minimize or maximize the scope and be flexible about the timetable for bringing them into force and formalizing the data governance around them. It is often a matter of prioritizing against other short-term business objectives.
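The distinction can be made concrete in the metadata you keep about each rule. The sketch below is my own illustration of how the attributes discussed above (origin, jurisdiction, deadline flexibility) might be modeled; the field names and example rules are assumptions, not a standard:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class RuleOrigin(Enum):
    EXTERNAL = "external"   # imposed by law or regulation from outside
    INTERNAL = "internal"   # made up inside the organization

@dataclass
class BusinessRule:
    name: str
    origin: RuleOrigin
    jurisdiction: Optional[str]  # e.g. "EU"; None when it applies worldwide
    deadline_fixed: bool         # external rules usually have fixed deadlines

vat_rule = BusinessRule("VAT number validation", RuleOrigin.EXTERNAL, "EU", True)
naming_rule = BusinessRule("Preferred account naming", RuleOrigin.INTERNAL,
                           None, False)

# Rules with fixed deadlines (typically external) are prioritized first;
# flexible internal rules can be weighed against other business objectives.
by_priority = sorted([naming_rule, vat_rule], key=lambda r: not r.deadline_fixed)
print([r.name for r in by_priority])
```

Recording the origin explicitly is what makes the ongoing management described below workable: you can see at a glance which data standards you are free to renegotiate and which you are not.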
The distinctions between these two kinds of business rules may not be so important in the first implementation of a data governance program, but they come very much into play in the ongoing management of data policies and data standards.
There is a famous poster, The New Yorker cover showing the view of the world from 9th Avenue. This poster perfectly illustrates the centricity we often have about the town, region or country we live in.
The same phenomenon is often seen in data management as told in the post Foreign Affaires.
If we for example work with postal addresses, we tend to think that postal addresses in our own country have a well-known structure while foreign addresses are a total mess.
In Denmark, where I was born and raised and have worked most of my life, we have two ways of expressing an address:
- The envelope way, where there is a certain range of possibilities, especially in how to spell a street name and how to write the exact unit within a high-rise building, though there is a structure more or less known to natives.
- The code way, as every street has a code too and there is a defined structure for units (known as the KVHX code). This code is used by the public sector as well as by private sectors such as financial services and utility companies, and it helps tremendously with data quality.
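To illustrate why the code way helps, here is a sketch of my own. The field layout, the street code value and the class name are illustrative assumptions, not the official KVHX specification; the point is only that coded keys compare exactly where envelope-style spellings do not:

```python
from typing import NamedTuple

class DanishAddressKey(NamedTuple):
    """Illustrative KVHX-style key: Kommune (municipality), Vej (street),
    Hus (house number) plus an eXtension for floor/door. Field names and
    widths here are my own illustration, not the official specification."""
    municipality: str  # K: municipality code, e.g. "0101" for Copenhagen
    street: str        # V: street code (hypothetical value below)
    house: str         # H: house number
    floor: str         # X: floor
    door: str          # X: door

# Two envelope-style spellings of the same street defeat naive matching ...
spelling_a = "H.C. Andersens Boulevard 12, 2. tv"
spelling_b = "Hans Christian Andersens Blvd. 12, 2. tv"
print(spelling_a == spelling_b)  # False

# ... while the code way resolves both to the same key.
coded_a = DanishAddressKey("0101", "0347", "12", "2", "tv")
coded_b = DanishAddressKey("0101", "0347", "12", "2", "tv")
print(coded_a == coded_b)  # True
```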
But around 3.5 percent of Danes, including yours truly, have a foreign address. And until now the way of registering and storing those addresses in the public sector and elsewhere has been totally random.
This is going to change. The public authorities have, with a little help from yours truly, made the first standard and governance principles for foreign addresses, as seen in this document (in Danish).
At iDQ A/S we have simultaneously developed Master Data Management (MDM) services that help utility companies, financial services and other industries get foreign addresses right as well.
One of the cleverest things ever said is, in my eyes, Parkinson's Law, which states: “Work expands so as to fill the time available for its completion”.
There is even a variant for data that says: “Data expands to fill the space available for storage”. This is why we have big data today.
Another similar law that seems to be true is Murphy’s Law saying: “Anything that can go wrong will go wrong”. The sharper version of that is Finagle’s Law that warns: “Anything that can go wrong, will—at the worst possible moment”.
When I started working with data quality, the most common trigger for a data quality improvement initiative was a perfect storm encompassing these laws, along the lines of: “The quality of data will decrease until everything goes wrong at the worst possible moment”.
Fortunately, more and more organizations are becoming proactive about data quality these days. In doing so, I recommend reversing Finagle, Murphy and Parkinson.
More and more of my work within data quality and Master Data Management (MDM) is around data governance. One side of data governance is the organizational issues and the roles of people involved.
Some of the common roles are:
Data Steward: This is a good role in my eyes, and how you select and empower data stewards is in my experience often the difference between failure and success. Data stewards are in most cases already known in the organization as data champions and subject matter experts. A successful data governance program lays out the organizational structure for the work of data stewards and supplies the means for the data stewards in their daily struggle to maintain an optimal degree of data quality.
Data Owner: I don’t like the term data owner, as told and discussed several years ago in the post Bad Word: Data Owner. The existence of data owners is unfortunately why we need data governance. Data owners are heads of data silos. Especially when it comes to master data, the problem is that data owners and data silos make it difficult to look at data as an enterprise asset.
Chief Data Officer (CDO): This is a relatively new term, but we have had the concept for many years, earlier known for example as a data czar. We need such a person because data owners are bad for the idea of data being an enterprise asset. But how long will CDOs remain in office compared to data owners? Not long, I’m afraid.