As Gartner is still postponing this year’s MDM quadrant, they may even manage to reflect this change. We are of course also waiting to see if newcomers will make it into the quadrant and bring the crowd of vendors in there back above 10. Some of the candidates will be the likes of Reltio and Semarchy.
Returning to the takeover of Orchestra by Tibco: this is not the first time Tibco has bought something in the MDM and Data Quality realm. Back in 2010 Tibco bought Netrics, a data quality tool and data matching front runner, as reported in the post What is a best-in-class match engine?
Tibco subsequently did not defend Netrics’ position in the Gartner Magic Quadrant for Data Quality Tools. The latest Data Quality Tools quadrant is, like the MDM quadrant, from 2017 and was touched on on this blog here.
So, it will be exciting to see how Tibco will defend both the existing Tibco MDM solution, which in 2017 was a sliding niche player at Gartner, and the Orchestra MDM solution, which in 2017 was a leader in the Gartner MDM quadrant.
The intersection between Artificial Intelligence (AI) and Master Data Management (MDM) – and the associated discipline Product Information Management (PIM) – is an emerging topic.
A use case close to me
In my work on setting up a service called Product Data Lake, the inclusion of AI has become an important topic. The aim of this service is to translate between the different taxonomies in use at trading partners, for example when a manufacturer shares its product information with a merchant.
In some cases the manufacturer, the provider of product information, may use the same standard for product information as the merchant. These may be deep standards such as eCl@ss and ETIM or pure product classification standards such as UNSPSC. In this case we can apply deterministic matching of the classifications and the attributes (also called properties or features).
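Deterministic matching can be as simple as a cross-reference lookup between the two classification schemes. Here is a minimal sketch; the codes and the mapping table are made-up illustrations, not real eCl@ss or UNSPSC data:

```python
# Hypothetical cross-reference table: manufacturer's eCl@ss class code
# mapped to the merchant's UNSPSC code. In practice such tables are
# large and maintained continuously.
ECLASS_TO_UNSPSC = {
    "27-37-92-01": "39121004",  # illustrative entry
    "23-30-21-90": "31161500",  # illustrative entry
}

def match_classification(eclass_code: str):
    """Deterministic lookup: either an exact mapping exists or it does not."""
    return ECLASS_TO_UNSPSC.get(eclass_code)

print(match_classification("27-37-92-01"))  # mapped code
print(match_classification("99-99-99-99"))  # None: an uncovered area
```

The `None` result is exactly the "uncovered area" situation mentioned below, where a deterministic table is not enough and human or machine assistance is needed.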
However, most often there are uncovered areas even when two trading partners share the same standard. And then again, the most frequent situation is that the two trading partners are using different standards.
As always, relying on too much human interaction is costly, time-consuming and error-prone. Therefore, we are eagerly training our machines to do this work cost-effectively, within a much shorter time frame and with a repeatable and consistent outcome, to the benefit of the participating manufacturers, merchants and other enterprises involved in exchanging products and the related product information.
Learning from others
This week I participated in a workshop around exchanging experiences and proving use cases for AI and MDM. The above-mentioned use case was one of several use cases examined there. And for sure, there is a basis for applying AI with substantial benefits for the enterprises that get this. The workshop was arranged by Camelot Management Consultants within their Global Community for Artificial Intelligence in MDM.
Enterprises are increasingly going to be part of business ecosystems where collaboration between legal entities not belonging to the same company family tree will be the norm.
This trend is driven by digital transformation, as no enterprise can possibly master all the disciplines needed in applying a digital platform to traditional ways of doing business.
Enterprises are basically selfish. This is also true when it comes to Master Data Management (MDM). Most master data initiatives today revolve around aligning internal silos of master data and surrounding processes to fit the business objectives of the enterprise as a whole. And that is hard enough.
However, in the future that is not enough. You must also be able to share master data in the business ecosystems where your enterprise will belong. The enterprises that, in a broad sense, get this first will survive. The laggards are in danger of being left out of business.
In multidomain Master Data Management (MDM) we often focus on the two most frequently addressed domains being Customer MDM and Product MDM.
However, managing the critical data elements that describe the vendors of an enterprise is increasingly on the agenda in MDM implementations.
Handling vendor master data shares a good deal of the same challenges as customer master data, as we are describing real-world entities that have a role as a second party to our enterprise. In more cases than is often acknowledged, vendors may also have a role as a customer or other business partner roles at the same time. In my eyes, we should handle vendor master data as a subset of party master data, as described in the post about How Bosch is Aiming for Unified Partner Master Data Management.
The self-service theme has also emerged in handling vendor master data, as self-service supplier portals have become the common place where vendor master data is captured and maintained. Where raising the first purchase order or receiving the first invoice was the starting point for vendor master data in the old days, this is often not the case anymore.
Gartner, the analyst firm, has a hype cycle for Information Governance and Master Data Management.
Back in 2012 there was a hype cycle for just Master Data Management. It looked like this:
I have made a red circle around the two rightmost terms: “Data Quality Tools” and “Information Exchange and Global Data Synchronization”.
Now, 6 years later, the terms included in the cycle are the following:
The two terms “Data Quality Tools” and “Information Exchange and Global Data Synchronization” are not mentioned here. I do not think it is because they ever fulfilled their purpose. I think they are being supplemented by something new. One of the terms that have emerged since 2012 is, circled in red, Multienterprise MDM.
As touched on in the post Product Data Quality, we have seen data quality tools in action for years when it comes to customer (or party) master data, but not that much when it comes to product master data.
Global Data Synchronization has revolved around the GS1 concept of GDSN (Global Data Synchronization Network) and the exchange of product data between trading partners. However, after 40 years in play this concept only covers a fraction of the products traded worldwide and only for very basic product master data. Product data syndication between trading partners, covering rich product information and related digital assets, must still be handled otherwise today.
In my eyes Multienterprise MDM comes to the rescue. This concept was examined in the post Ecosystem Wide MDM. You can gain business benefits from extending enterprise-wide product master data management to be multienterprise wide. These include:
Working with the same product classifications or being able to continuously map between different classifications used by trading partners
Utilizing the same attribute definitions (metadata around products) or being able to continuously map between different attribute taxonomies in use by trading partners
Sharing data on product relationships (available accessories, relevant spare parts, updated succession for products, cross-sell information and up-sell opportunities)
Having shared access to latest versions of digital assets (text, audio, video) associated with products.
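The second point above, mapping between attribute taxonomies, can be sketched as a translation table that renames attributes and converts units between a manufacturer's and a merchant's metadata. All attribute names and conversions below are illustrative assumptions, not taken from any real standard:

```python
# Hypothetical mapping from a manufacturer's attribute taxonomy to a
# merchant's: target attribute name plus a unit-conversion function.
ATTRIBUTE_MAP = {
    "Laenge_mm": ("length_cm", lambda v: v / 10),    # millimetres -> centimetres
    "Gewicht_g": ("weight_kg", lambda v: v / 1000),  # grams -> kilograms
    "Farbe":     ("colour",    lambda v: v),         # rename only
}

def translate_attributes(product: dict) -> dict:
    """Translate a manufacturer's attribute record into the merchant's taxonomy."""
    out = {}
    for name, value in product.items():
        if name in ATTRIBUTE_MAP:
            target, convert = ATTRIBUTE_MAP[name]
            out[target] = convert(value)
    return out

record = {"Laenge_mm": 250, "Gewicht_g": 1500, "Farbe": "blue"}
print(translate_attributes(record))
# -> {'length_cm': 25.0, 'weight_kg': 1.5, 'colour': 'blue'}
```

"Continuously map" in the list above means such tables must be maintained as both partners' taxonomies evolve, which is where machine assistance pays off.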
This is what we work for at Product Data Lake – including Machine Learning Enabled Data Quality, Data Classification, Cloud MDM Hub Service and Multienterprise Metadata Management.
The Information Difference MDM Landscape Q2 2018 is out.
The report confirms the trend of increasing uptake of cloud Master Data Management solutions as examined in the recent post called The Rise of Cloud MDM.
According to the report, the coexistence of big data and master data is another trend, and more and more MDM vendors are embracing all master data domains, although, as stated, “most vendors have their roots in either customer or product data, and their particular functionality and track record of deployment is usually deeper where the software had its roots”.
The plot of vendors looks like this:

You can read the full report here.
Cloud as a deployment method for Master Data Management (MDM) solutions is on the rise.
In the latest MDM vendor selection activities I am involved in, cloud is not an absolute must but certainly the preferred deployment method.
The MDM vendor market is responding to that trend. Some of the new players offer purely cloud-based solutions. In a recent post on this blog I wrote about Three Remarkable Observations about Reltio. The fourth is that this is a cloud-based MDM (and more) solution – called Reltio Cloud.
Another example of going the cloud path is Riversand. Their new release is put forward as a cloud-native suite of Master Data Management solutions as told in an interview by Katie Fabiszak with CEO & Founder Upen Varanasi of Riversand. The interview is posted as a guest blog post on The Disruptive MDM List. The post is called Cloud multi-domain MDM as the foundation for Digital Transformation.
When working in Master Data Management (MDM) programs, one of the main pain points always on the list is duplicates. As explained in the post Golden Records in Multi-Domain MDM, these may be duplicates in party master data (customer, supplier and other roles) as well as duplicates in product master data, assets, locations and more.
Most of the data quality technology available to solve these problems revolves around identifying duplicates. This is a very intriguing discipline where I have spent some of my best years. However, this is only a remedy for the symptoms of the problem and not a means to eliminate the root cause, as touched on in the post The Good, Better and Best Way of Avoiding Duplicates.
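At its core, duplicate identification means normalising records and scoring their similarity. A toy sketch for party names follows; real match engines use far richer rules, and the stop words and threshold here are arbitrary assumptions:

```python
from difflib import SequenceMatcher

def normalise(name: str) -> str:
    # Strip punctuation and common legal-form noise before comparing.
    name = name.lower().replace(".", "").replace(",", "")
    stop = {"inc", "ltd", "llc", "gmbh", "a/s"}
    return " ".join(w for w in name.split() if w not in stop)

def is_possible_duplicate(a: str, b: str, threshold: float = 0.85) -> bool:
    # Score the normalised names; above the threshold, flag for review.
    return SequenceMatcher(None, normalise(a), normalise(b)).ratio() >= threshold

print(is_possible_duplicate("Acme Corp. Inc.", "ACME Corp"))     # -> True
print(is_possible_duplicate("Acme Corp", "Zenith Trading Ltd"))  # -> False
```

Note that even a perfect matcher of this kind only finds duplicates after they exist, which is exactly the symptom-versus-root-cause point made above.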
The root causes are plentiful and, as with all challenges, they involve technology, processes and people.
Having an IT landscape with multiple applications where master data are created, updated and consumed is a basic problem, and remedying that is the main reason of being for Master Data Management (MDM) solutions. The challenge is to implement MDM technology in a way that the MDM solution will not just become another silo of master data but instead be a solution for sharing master data within the enterprise – and ultimately in the digital ecosystem around the enterprise.
The main enemy from a technology perspective is, in my experience, peer-to-peer system integration solutions. If you have chosen application X to support one business objective and application Y to support another, and you learn that there is an integration solution between X and Y available, this is very bad news, because short-term cost and timing considerations will make that option obvious. But in the long run it will cost you dearly if the master data involved are handled in other applications as well, because then you will have blind spots all over the place through which duplicates will enter.
The only sustainable solution is to build a master data hub through which master data are integrated and thus shared with all applications inside the enterprise and around the enterprise. This hub must encompass a shared master data model and related metadata.