Combining Data Matching and Multidomain MDM

Two of the most frequently addressed data management topics on this blog are data matching and multidomain Master Data Management (MDM). In addition, I have founded two LinkedIn groups for people interested in one or both of these topics.

The Data Matching Group has close to 2,000 members. Here we discuss nerdy stuff such as deduplication, identity resolution, deterministic matching using match codes, algorithms, pattern recognition, fuzzy logic, probabilistic learning, and false negatives and false positives.
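To illustrate one of those nerdy topics, here is a minimal Python sketch of deterministic matching with match codes. The code-building rule (first four letters of the normalized name plus the postcode) and the sample records are made up for illustration; real match code schemes are far richer.

```python
import re

def match_code(name: str, postcode: str) -> str:
    # Made-up deterministic rule: uppercase the name, strip everything
    # but letters, keep the first four letters plus the postcode.
    letters = re.sub(r"[^A-Z]", "", name.upper())
    return f"{letters[:4]}-{postcode.strip()}"

# Two spellings of the same (fictitious) customer collapse to one code:
a = match_code("Smith & Co.", "90210")
b = match_code("SMITH-CO", "90210")
print(a, b, a == b)  # SMIT-90210 SMIT-90210 True
```

Records sharing a match code are then treated as candidate duplicates, which is exactly where false positives and false negatives enter the picture.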

Check out the LinkedIn Data Matching Group here.

The Multi-Domain MDM Group has close to 2,500 members. Here we exchange knowledge on how to encompass more than a single master data domain in an MDM initiative. In that way the group also covers the evolution of MDM, as the discipline – and the solutions – have emerged from Customer Data Integration (CDI) and Product Information Management (PIM).

Check out the LinkedIn Multi-Domain MDM Group here.

The result of combining data matching and multi-domain MDM is golden records. The golden records are the foundation of having a 360-degree / single view of parties, locations, products and assets as examined in The Disruptive MDM / PIM / DQM List blog post Golden Records in Multidomain MDM.

Welcome Reifier on the Disruptive MDM / PIM List

The Disruptive MDM / PIM List is a list of solutions in the Master Data Management (MDM), Product Information Management (PIM) and Data Quality Management (DQM) space.

The list presents both larger solutions that are also included by the analyst firms in their market reports and smaller solutions you do not hear so much about, but which may be exactly the solution that addresses the specific challenges you have.

The latest entry on the list, Reifier, is one of the latter ones.

Matching data records and identifying duplicates in order to achieve a 360-degree view of customers and other master data entities is the most frequently mentioned data quality issue. Reifier is an artificial intelligence (AI) driven solution that tackles that problem.

Read more about Reifier here.


Three Not So Easy Steps to a 360-Degree Customer View

Getting a 360-degree view (or single view) of your customers has been a quest in data management as long as I can remember.

This has been the (unfulfilled) promise of CRM applications since they emerged 25 years ago. Data quality tools have been very much about deduplication of customer records. Customer Data Integration (CDI) and the first Master Data Management (MDM) platforms were aimed at that conundrum. Now we see the notion of a Customer Data Platform (CDP) getting traction.

There are three basic steps in getting a 360-degree view of those parties that have a customer role within your organization – and these steps are not at all easy ones:


  • Step 1 is identifying the customer records that typically are scattered around in the multiple systems that make up your system landscape. You can do that (endlessly) by hand, using the varying deduplication functionality that comes with ERP, CRM and other applications, using a best-of-breed data quality tool, or using the data matching capabilities built into MDM platforms. Doing this with adequate results takes a lot, as pondered in the post Data Matching and Real-World Alignment.
  • Step 2 is finding out which data records and data elements survive as the single source of truth. This is something a data quality tool can help with, but it is best done within an MDM platform. The three main options for that are examined in the post Three Master Data Survivorship Approaches.
  • Step 3 is gathering all data besides the master data and relating those data to the master data entity that identifies and describes the real-world entity with a customer role. Today we see both CRM solution vendors and MDM solution vendors offering the technology to enable that, as told in the post CDP: Is that part of CRM or MDM?
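As a rough illustration of step 2, here is a Python sketch of field-level survivorship using a most-recent-non-empty rule. The records and field names are hypothetical, and real MDM platforms offer much richer survivorship configuration than this.

```python
from datetime import date

# Hypothetical duplicate records for the same customer, e.g. from CRM and ERP.
records = [
    {"name": "J. Smith", "email": "", "phone": "555-0101", "updated": date(2019, 1, 5)},
    {"name": "John Smith", "email": "js@example.com", "phone": "", "updated": date(2019, 6, 2)},
]

def survive(records, fields):
    # Field-level survivorship: for each field, take the non-empty value
    # from the most recently updated record that has one.
    by_recency = sorted(records, key=lambda r: r["updated"], reverse=True)
    return {f: next((r[f] for r in by_recency if r[f]), "") for f in fields}

golden = survive(records, ["name", "email", "phone"])
print(golden)
# {'name': 'John Smith', 'email': 'js@example.com', 'phone': '555-0101'}
```

Note how the golden record combines elements from both sources, which is why record-level "pick a winner" approaches often discard good data.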

Top 15 MDM / PIM Requirements in RFPs

A Request for Proposal (RFP) process for a Master Data Management (MDM) and/or Product Information Management (PIM) solution has a hard-fact side as well as softer sides, the latter covered in the post The Soft Sides of MDM and PIM RFPs.

The hard-fact side consists of the detailed requirements a potential vendor has to answer, in most cases in an Excel sheet the buying organization has prepared – often with extensive help from a consultancy.

Here are the topics I have most frequently seen included as the hard facts in such RFPs:

  • MDM and PIM: Does the solution have functionality for hierarchy management?
  • MDM and PIM: Does the solution have workflow management included?
  • MDM and PIM: Does the solution support versioning of master data / product information?
  • MDM and PIM: Does the solution allow tailoring the data model in a flexible way?
  • MDM and PIM: Does the solution handle master data / product information in multiple languages / character sets / script systems?
  • MDM and PIM: Does the solution have capabilities for (high speed) batch import / export and real-time integration (APIs)?
  • MDM and PIM: Does the solution have capabilities within data governance / data stewardship?
  • MDM and PIM: Does the solution integrate with “a specific application” – most commonly SAP, Microsoft CRM/ERP or Salesforce?
  • MDM: Does the solution handle multiple domains, for example customer, vendor/supplier, employee, product and asset?
  • MDM: Does the solution provide data matching / deduplication functionality and formation of golden records?
  • MDM: Does the solution have integration with third-party data providers for example business directories (Dun & Bradstreet / National registries) and address verification services?
  • MDM: Does the solution underpin compliance rules, for example data privacy and data protection regulations such as GDPR or other regimes?
  • PIM: Does the solution support product classification and attribution standards such as eClass and ETIM (or other industry-specific / national standards)?
  • PIM: Does the solution support publishing to popular marketplaces (a form of outgoing Product Data Syndication)?
  • PIM: Does the solution have functionality to ease the collection of product information from suppliers (incoming Product Data Syndication)?

Learn more about how I can help in the blog page about MDM / PIM Tool Selection Consultancy.


Human Errors and Data Quality

Every time there is a survey about what causes poor data quality the most ticked answer is human error. This is also the case in the Profisee 2019 State of Data Management Report where 58% of the respondents said that human error is among the most prevalent causes of poor data quality within their organization.

This topic was also examined some years ago in the post called The Internet of Things and the Fat-Finger Syndrome.

Even the Romans knew this, as Seneca the Younger said “errare humanum est”, which translates to “to err is human”. He also added “but to persist in error is diabolical”.

So, how can we not persist in having human errors in data then? Here are three main approaches:

  • Better humans: There is a whip called Data Governance. In a data governance regime you define data policies and data standards. You build an organizational structure with a data governance council (or any better name), have data stewards and data custodians (or any better title). You set up a business glossary. And then you carry on with a data governance framework.
  • Machines: Robotic Process Automation (RPA) has, besides operational efficiency, the advantage that machines, unlike humans, do not make mistakes when they are tired or bored.
  • Data Sharing: Human errors typically occur when typing in data. However, most data are already typed in somewhere. Instead of retyping data, and thereby potentially introducing your own misspellings or other mistakes, you can connect to data that are already digitalized and validated. This is especially doable for master data, as examined in the article about Master Data Share.
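To illustrate the data sharing approach, here is a hypothetical Python sketch where customer onboarding prefers a validated record from a shared directory over manually typed input. The directory, the registration number and the field names are all made up for illustration.

```python
# Hypothetical in-memory stand-in for a shared business directory,
# keyed by a national registration number.
directory = {"DK12345678": {"name": "Example ApS", "city": "Copenhagen"}}

def onboard_customer(reg_no: str, typed_name: str = "") -> dict:
    # Prefer validated shared data over manual entry; fall back to the
    # typed value only when no directory record exists.
    record = directory.get(reg_no)
    if record:
        return {"reg_no": reg_no, **record, "source": "directory"}
    return {"reg_no": reg_no, "name": typed_name, "source": "manual"}

# The typo in the typed name never reaches the master data:
print(onboard_customer("DK12345678", "Exmaple ApS"))
```

The point is that the fat-fingered name is discarded whenever a validated record exists, so the human error never propagates.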

IoT and Business Ecosystem Wide MDM

Two of the disruptive trends in Master Data Management (MDM) are the intersection of Internet of Things (IoT) and MDM and business ecosystem wide MDM (aka multienterprise MDM).

These two trends will go hand in hand.


The latest MDM market report from Forrester (the other analyst firm) was mentioned in the post Toward the Third Generation of MDM.

In here Forrester says: “As first-generation MDM technologies become outdated and less effective, improved second generation and third-generation features will dictate which providers lead the pack. Vendors that can provide internet-of-things (IoT) capabilities, ecosystem capabilities, and data context position themselves to successfully deliver added business value to their customers.”

This statement resonates with me in my current job as co-founder and CTO at Product Data Lake, as told in the post Adding Things to Product Data Lake.

In business ecosystem wide MDM, business partners collaborate around master data. This is a prerequisite for handling the asset master data involved in IoT, as there are many parties involved, including manufacturers of smart devices, operators of these devices, maintainers of the devices, owners of the devices and the data subjects these devices gather data about.

In the same way, forward-looking solution providers involved with MDM must collaborate, as pondered in the post Linked Product Data Quality.

Artificial Intelligence (AI) and Multienterprise MDM

The previous post on this blog was called Machine Learning, Artificial Intelligence and Data Quality. In it, it was examined how Artificial Intelligence (AI) is impacted by data quality and how AI can impact data quality.

Master Data Management (MDM) will play a crucial role in sustaining the needed data quality for AI and with the rise of digital transformation encompassing business ecosystems we will also see an increasing need for ecosystem wide MDM – also called multienterprise MDM.

Right now, I am working with a service called Product Data Lake, where we strive to utilize AI, including Machine Learning (ML), to understand and map the data standards and exchange formats used in product information exchange between trading partners.

The challenge in this area is that we have many different classification systems in play as told in the post Five Product Classification Standards. Besides the industry and cross sector standards we still have many homegrown standards as well.

Some of these standards (such as eClass and ETIM) also cover the attributes needed for a given product classification, but we still have plenty of homegrown standards (or no standards) for attribute requirements as well.

Add to that the different preferences for exchange methods, and we get a chaotic system where human intervention makes Sisyphus look like a lucky man. Therefore, we have great expectations about introducing machine learning and artificial intelligence in this space.
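As a toy illustration of what such automation could start from, here is a Python sketch that pairs attribute names from two hypothetical homegrown standards by string similarity. The attribute names are invented, and real mapping in this space would need far more than name similarity.

```python
from difflib import SequenceMatcher

# Invented attribute names from two trading partners' homegrown standards.
supplier_attrs = ["Prod. Height (cm)", "Colour", "Net Wt"]
retailer_attrs = ["product_height_cm", "color", "net_weight_kg"]

def best_mapping(src: list, dst: list) -> dict:
    # Naive first step toward automated mapping: normalize to lowercase
    # alphanumerics, then pair each source attribute with the most
    # similar target attribute name.
    def norm(s: str) -> str:
        return "".join(c for c in s.lower() if c.isalnum())
    return {a: max(dst, key=lambda b: SequenceMatcher(None, norm(a), norm(b)).ratio())
            for a in src}

print(best_mapping(supplier_attrs, retailer_attrs))
```

A production approach would also learn from confirmed mappings, handle units of measure and consult the classification standards themselves, which is where ML earns its keep.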


Next week, I will elaborate on the multienterprise MDM and artificial intelligence theme at the Master Data Management Summit Europe in London.

Data Matching and Real-World Alignment

Data matching is a sub-discipline within data quality management. Data matching is about establishing links between data elements and entities that do not have the same value but refer to the same real-world construct.

The most common scenario for data matching is deduplication of customer data records held across an enterprise. In this case we often see a gap between what we technically try to do and the desired business outcome from deduplication. In my experience, this misalignment has something to do with real-world alignment.


What we technically do is basically to find a similarity between data records that typically have been pre-processed with some form of standardization. This is often not enough.
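That technical core can be sketched in a few lines of Python: standardize the names, then compute a similarity ratio. The standardization rule below is made up for illustration; real tools apply far richer standardization and matching algorithms.

```python
from difflib import SequenceMatcher

def standardize(name: str) -> str:
    # Made-up pre-processing: lowercase, expand one common abbreviation,
    # collapse whitespace. Real tools apply far richer rules.
    name = name.lower().replace("ltd.", "limited").replace("ltd", "limited")
    return " ".join(name.split())

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, standardize(a), standardize(b)).ratio()

# Without standardization these two strings look rather different;
# with it, they match exactly:
print(similarity("Acme Ltd", "ACME Limited"))  # 1.0
```

The gap to the desired business outcome appears when such a score says "match" or "no match" while the real world says otherwise, which is what the following sections dig into.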

Location Intelligence

Deduplication and other forms of data matching with customer master data revolve around names and addresses.

Standardization and verification of addresses are very common elements in data quality / data matching tools. Often such a tool will use a service either from the same brand or from a third party. Unfortunately, a single service is often not enough. This is because:

  • Most services are biased towards a certain geography. They may for example be quite good for addresses in the United States but, compared to local services, very poor for other geographies. This is especially true for geographies with multiple languages in play, as exemplified in the post The Art in Data Matching.
  • There is much more to an address than the postal format. In deduplication it is for example useful to know if the address is a single-family house or a high-rise building, a nursing home, a campus or other building with lots of units.
  • Timeliness of address reference data is underestimated. I recently heard from a leader in the Gartner Magic Quadrant for Data Quality Tools that a quarterly refresh is fine. It is not, as told in the post Location Data Quality for MDM.

Identity Resolution

The overlaps and similarities between data matching and identity resolution were discussed in the post Deduplication vs Identity Resolution.

In summary, the capability to tell if two data records represent the same real-world entity will eventually involve identity resolution. And as this is very poorly supported by the data quality tools around, a lot of manual work will be involved if the business processes that rely on the data matching cannot tolerate too many, or in some cases any, false positives – or false negatives.

Hierarchy Management

Even telling whether a true positive match holds true in all circumstances is hard. The predominant examples of this challenge are:

  • Is a match between what seems to be an individual person and what seems to be the household where the person lives a true match?
  • Is a match between what seems to be a person in a private role and what seems to be the same person in a business role a true match? This is especially tricky with sole proprietors working from home, such as farmers, dentists, freelance consultants and more.
  • Is a match between two sister companies on the same address a true match? Or two departments within the same company?

We often realize that the answers to these questions differ depending on the business processes where the result of the data matching will be used.
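That context dependency can be sketched in Python as a rule that consults the consuming business process before accepting a match. The match flags, scores, thresholds and process names below are all hypothetical.

```python
# Hypothetical match result between two party records, with flags that
# identity resolution or reference data might supply.
match_result = {"score": 0.93, "same_person": True,
                "same_household": True, "private_vs_business": True}

def accept(result: dict, process: str) -> bool:
    # Whether a true positive counts as a duplicate depends on the
    # consuming business process, not only on the match score.
    if result["score"] < 0.9:
        return False
    if process == "direct_marketing":
        return True  # household-level matches are acceptable here
    if process == "credit_check":
        # A private person and their sole proprietorship must stay separate.
        return not result["private_vs_business"]
    return result.get("same_person", False)

print(accept(match_result, "direct_marketing"),
      accept(match_result, "credit_check"))  # True False
```

The same match result is thus a duplicate for one process and two distinct entities for another, which is exactly why a single merge/purge answer rarely fits all consumers.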

The solution is not simple. If we want automated and broadly usable results, the data matching functionality must be quite sophisticated in order to take advantage of what is available in the real world. And the data model where we hold the result of the data matching must be quite complex if we want to reflect the real world.