Multidomain MDM has moved on from the Trough of Disillusionment and is now climbing the Slope of Enlightenment. I have been waiting for this to happen for 10 years – both in the hype cycle and in the real world – since I founded the Multi-Domain MDM Group on LinkedIn back then.
Interenterprise MDM has swapped places with Cloud MDM, so this term is now ahead of Cloud MDM. It is though hard to imagine interenterprise MDM without Cloud MDM, and MDM in the cloud will, according to Gartner, also reach the Plateau of Productivity before ecosystem-wide MDM. The promise of this is also in accordance with a poll I made, as told in the post Interenterprise MDM Will be Hot.
You can get the full report from the MDM consultancy parsionate here.
Exchange of data between enterprises – aka interenterprise data sharing – is becoming a hot topic in the era of digital transformation. As told in the post Data Quality and Interenterprise Data Sharing, this approach is the cost-effective way to ensure data quality for the fast-increasing amount of data every organization has to manage when introducing new digital services.
McKinsey Digital recently elaborated on this theme in an article with the title Harnessing the power of external data. As stated in the article: “Organizations that stay abreast of the expanding external-data ecosystem and successfully integrate a broad spectrum of external data into their operations can outperform other companies by unlocking improvements in growth, productivity, and risk management.”
The arguments against interenterprise data sharing I hear most often revolve around privacy and confidentiality concerns.
Let us have a look at this challenge within the two most common master data domains: Party data and product data.
Enforced data privacy and data protection regulations such as the GDPR must (and should) be adhered to. They set very strict limits on exchanging Personally Identifiable Information, leaving room only for the legitimate cases of data portability.
However, information about organizations can be shared not only by exploiting public third-party sources such as business directories but also through data pools between like-minded organizations. Here you must consider whether your typos in company names, addresses, and more really are that confidential.
Though the vast amount of product data is meant to become public, concerns about confidentiality also exist with product data. Trading prices are an obvious area. The timing of releasing product data is another concern.
In the Product Data Lake syndication service I work with there are measures to ensure the right level of confidentiality. This includes encryption and controlling with whom you share what and when you do it.
Data governance plays a crucial role in orchestrating interenterprise data sharing with the right approach to data privacy and confidentiality. How this is done in for example product data syndication is explained in the page about Product Data Lake Documentation and Data Governance.
When working with data quality improvement there are three kinds of data to consider:
First-party data is the data that is born and managed internally within the enterprise. This data has traditionally been the focus of data quality methodologies and tools, with the aim of ensuring that data is fit for the purpose of use and correctly reflects the real-world entity that the data is describing.
Third-party data is data sourced from external providers who offer a set of data that can be utilized by many enterprises. Examples are location directories, business directories such as the Dun & Bradstreet WorldBase, public national directories, and product data pools such as the Global Data Synchronization Network (GDSN).
Enriching first-party data with third-party data is a means to ensure better data completeness, better data consistency, and better data uniqueness.
Second-party data is data sourced directly from a business partner. Examples are supplier self-registration, customer self-registration and inbound product data syndication. Exchange of this data is also called interenterprise data sharing.
The advantage of using second-party data in a data quality perspective is that you are closer to the source, which, all things being equal, means that the data better and more accurately reflects the real-world entity it is describing.
In addition, compared to third-party data, you will have the opportunity to operate with data that exactly fits your operating model and makes you unique compared to your competitors.
Finally, second-party data obtained through interenterprise data sharing will reduce the cost of capturing data compared to first-party data, where the ever-increasing demand for more elaborate high-quality data in the age of digital transformation would otherwise overwhelm your organization.
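To make the distinction between first-party and third-party data concrete, here is a minimal sketch in Python of enriching an internally captured record from a hypothetical business directory. All field names, identifiers, and the directory itself are illustrative assumptions, not a reference to any specific product or provider.

```python
# Illustrative sketch: enriching first-party master data with
# third-party directory data. All names and values are hypothetical.

# First-party data: captured internally, possibly incomplete.
first_party = {"duns": "150483782", "name": "Acme Corp", "city": None}

# Third-party data: a business directory keyed by a shared identifier.
directory = {
    "150483782": {"name": "ACME Corporation", "city": "Copenhagen",
                  "industry_code": "3714"},
}

def enrich(record, directory):
    """Fill gaps in a first-party record from a third-party source,
    keeping internally captured values where they already exist."""
    reference = directory.get(record["duns"], {})
    enriched = dict(reference)
    enriched.update({k: v for k, v in record.items() if v is not None})
    return enriched

print(enrich(first_party, directory))
# The internally captured name is kept, while the missing city and the
# industry code are completed from the directory - improving
# completeness without overwriting first-party values.
```

The design choice sketched here (never overwriting an existing internal value) is only one possible survivorship rule; in practice, data governance decides per attribute which source wins.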
The Balancing Act
Getting the best data quality with the least effort is about balancing the use of internal and external data, where you can exploit interenterprise data sharing by combining second-party and third-party data in the way that makes the most sense for your organization.
As always, I am ready to discuss your challenge. You can book a short online session for that here.
Interenterprise Master Data Management is on the rise, as reported in the post Watch Out for Interenterprise MDM. Interenterprise MDM is about how organizations can collaborate by sharing master data with business partners in order to optimize their own master data and create new data-driven revenue models together with business partners.
One of the most obvious places to start with Interenterprise MDM is Product Data Syndication (PDS). While PDS until now has mostly been applied when syndicating product data to marketplaces, there is a huge potential in streamlining the flow of product information from manufacturers to merchants and end users.
Inbound and Outbound Product Data Syndication
There are two scenarios in interenterprise Product Data Syndication:
Outbound, where your organization, as part of a supply chain, provides product information to your range of customers. The challenge is that with no PDS functionality in between, you must cater for the many (hundreds or thousands of) different structures, formats, taxonomies, and exchange methods requested by your customers.
Inbound, where your organization, as part of a supply chain, receives product information from your range of suppliers. The challenge is that with no PDS functionality in between, you must cater for the many (hundreds or thousands of) different structures, formats, taxonomies, and exchange methods coming in.
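The scale of the challenge in both scenarios can be sketched with simple arithmetic: without PDS functionality in between, every supplier-customer pair needs its own mapping, whereas a hub in the middle means each party maps only once. The numbers below are illustrative, not drawn from any survey.

```python
# Illustrative sketch of why a PDS hub reduces mapping effort.
# Point-to-point: every supplier must support every customer's
# structures, formats, taxonomies, and exchange methods.
# Via a hub: each party maps once, to and from the hub's format.

def point_to_point(suppliers: int, customers: int) -> int:
    """Number of pairwise mappings with no PDS hub in between."""
    return suppliers * customers

def via_hub(suppliers: int, customers: int) -> int:
    """Number of mappings when everyone maps to one hub format."""
    return suppliers + customers

# Hypothetical supply chain: 200 suppliers, 500 customers.
print(point_to_point(200, 500))  # 100000 pairwise mappings
print(via_hub(200, 500))         # 700 mappings
```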
There are four main use cases for exchanging product data in supply chains:
Exchanging product data for resell products, where manufacturers and brands forward product information to the end point-of-sale at a merchant. With the rise of online sales in both business-to-consumer (B2C) and business-to-business (B2B), buying decisions are self-service based, which means a dramatic increase in the demand for product data throughput.
Exchanging product data for raw materials and packaging. Here there is a rising demand for automating the quality assurance process, blending processes in organic production and controlling the sustainability related data by data lineage capabilities.
Exchanging product data for parts used in MRO (Maintenance, Repair and Operation). As these parts are becoming components of the Industry 4.0 / Industrial Internet of Things (IIoT) wave, there will be a drastic demand for providing rich product information when delivering these parts.
Exchanging product data for indirect products, where upcoming use of Artificial Intelligence (AI) in all procurement activities also will lead to requirements for availability of product information in this use case.
In the Product Data Lake venture I am working on now, we have made a framework – and a piece of Software as a Service – that leverages the concepts of inbound and outbound PDS and enables the four mentioned use cases for product data exchange.
The framework is based on reusing popular product data classifications (such as GPC, UNSPSC, ETIM, eClass, and ISO) and attribute requirement standards (such as ETIM and eClass). Also, trading partners can use their preferred data exchange method (FTP file drop – as for example BMEcat – API, or plain import/export) on each side.
All in all, the big win is that each upstream provider (typically a manufacturer / brand) can upload one uniform product catalogue to the Product Data Lake, and each downstream receiver (a merchant or user organization) can download a uniform product catalogue covering all suppliers.
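The one-upload, one-download idea can be sketched as a small hub data structure. This is a conceptual illustration only, not the actual Product Data Lake API; the class, method names, and ETIM class codes are assumptions made for the example.

```python
# Conceptual sketch (not the real Product Data Lake API) of a catalogue
# hub: providers upload one uniform catalogue each, and each receiver
# downloads one merged catalogue covering all its accepted suppliers.
from collections import defaultdict

class CatalogueHub:
    def __init__(self):
        self._catalogues = {}              # provider -> list of products
        self._partners = defaultdict(set)  # receiver -> accepted providers

    def upload(self, provider, products):
        """A provider uploads one uniform catalogue, e.g. items
        classified with a shared standard such as ETIM or UNSPSC."""
        self._catalogues[provider] = list(products)

    def accept(self, receiver, provider):
        """A receiver agrees to receive data from a given provider."""
        self._partners[receiver].add(provider)

    def download(self, receiver):
        """A receiver gets one merged catalogue across all suppliers."""
        return [item
                for provider in sorted(self._partners[receiver])
                for item in self._catalogues.get(provider, [])]

hub = CatalogueHub()
hub.upload("brand_a", [{"sku": "A-1", "etim_class": "EC000123"}])
hub.upload("brand_b", [{"sku": "B-7", "etim_class": "EC000456"}])
hub.accept("merchant_x", "brand_a")
hub.accept("merchant_x", "brand_b")
print(len(hub.download("merchant_x")))  # 2 products in one download
```

The point of the sketch is the shape of the interaction: each brand uploads once, and the merchant issues a single download instead of handling one feed per supplier.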
Interenterprise data sharing must be leveraged through interenterprise MDM, where master data is shared between many companies, as for example in supply chains. The evolution of interenterprise MDM and the current state of the discipline was touched upon in the post MDM Terms In and Out of The Gartner 2020 Hype Cycle.
In the 00s, the evolution of Master Data Management (MDM) started with single-domain / departmental solutions dominated by Customer Data Integration (CDI) and Product Information Management (PIM) implementations. These solutions were in the best cases underpinned by third-party data sources such as business directories, for example the Dun & Bradstreet (D&B) WorldBase, and second-party product information sources such as the GS1 Global Data Synchronization Network (GDSN).
In the previous decade, multidomain MDM with enterprise-wide coverage became the norm. Here the solution typically encompasses customer, vendor/supplier, product, and asset master data. Increasingly, GDSN is supplemented by other forms of Product Data Syndication (PDS). Third-party and second-party sources are delivered in the form of Data as a Service offerings that come with each MDM solution.
In this decade we will see the rise of interenterprise MDM, where the solutions to some extent become business ecosystem wide, meaning that you will increasingly share master data and possibly the MDM solutions with your business partners – or else you will fade in the wake of the overwhelming data load you will have to handle yourself.
So, watch out for not applying interenterprise MDM.
PS: That goes for MDM end user organizations and MDM platform vendors as well.
When working with Product Information Management (PIM) and Product Master Data Management (Product MDM) one of the most important and challenging areas is how you effectively onboard product master data / product information for products that you do not produce inhouse.
There are four main scenarios for that:
Onboarding product data for resell products
Onboarding product data for raw materials and packaging
Onboarding product data for parts used in MRO (Maintenance, Repair and Operation)
Onboarding product data for indirect products
Onboarding product data for resell products
This scenario is the main scenario for distributors/wholesalers, retailers, and other merchants. However, most manufacturers also have a range of products that are not produced inhouse but are essential supplements when selling their own produced products.
The process involves getting the most complete set of product information available from the supplier in order to provide the optimal set of product information needed to support a buying decision by the end customer. With the increase in online sales, the buying decision today is often self-serviced. This has dramatically increased the demand for product information throughput.
Onboarding product data for raw materials and packaging
This scenario exists at manufacturers of products. Here the objective is to get the product information needed to do quality assurance and, in organic production, apply the right blend in order to produce a consistent finished product.
Also, the increasing demand for measures of sustainability is driving the urge for information on the provenance of the finished product and the packaging including the origin of the ingredients and circumstances of the production of these components.
Onboarding product data for parts used in MRO
Product data for parts used in Maintenance, Repair and Operation is a main scenario at manufacturers related to running the production facilities. However, most organizations have facility management around logistics facilities, offices, and other constructions where products for MRO are needed.
With the rise of the Internet of Things (IoT), these products are becoming more and more intelligent and are operated in an automatic way. For that, product information is needed to a degree unseen until now.
Onboarding product data for indirect products
Every organization needs products and services such as furniture, office supplies, travel services, and much more. The need for onboarding product data for these purchases is still minimal compared to the above-mentioned scenarios. However, a foreseeable increased use of Artificial Intelligence (AI) in procurement operations will ignite the requirement for product data onboarding in this scenario too in the coming years.
The Need for Collaborative Product Data Syndication
The sharp rise in the need for product data onboarding calls for increased collaboration between suppliers and Business-to-Business (B2B) customers. It is worth noticing here that many organizations play both roles in one or another scenario. The discipline most effectively applied to solve these challenges is Product Data Syndication. This is further explained in the post Inbound and Outbound Product Data Syndication.
A consequence of the business benefits in sharing data will be a rise in data management disciplines aiming at business ecosystem wide data sharing, where product data syndication is an obvious opportunity.
In recent years I have been working on such a solution. This one is called Product Data Lake.
Here it is said that: “The integrated network economy could represent a global revenue pool of $60 trillion in 2025 with a potential increase in total economy share from about 1 to 2 percent today to approximately 30 percent by 2025”.
This dramatic shift will in my eyes mean a change of direction in the way we see Master Data Management (MDM) as well as Product Information Management (PIM) and Data Quality Management (DQM) solutions.
360 is a magic number in the master data and data quality world. It is about having a 360-degree view of customers, suppliers, and products. This is an inside-out view. The enterprise is looking at a world revolving around the enterprise just as back then when we thought the universe revolved around the planet Earth.
By 2025 forward looking enterprises must have changed that view and directed master data, product information and data quality management into a state fit for the network economy by having a business ecosystem wide MDM (PIM and DQM) solution landscape.
There are two kinds of product data syndication services. The public kind, where everyone shares the same product information. The prominent examples are marketplaces and data pools.
The collaborative kind, where you can exchange the same product information with all your accepted trading partners but also supplement it with one-to-one product information that allows the merchant to stand out from the crowd.
When syndicating – or synchronizing – through data pools, you are limited to the consensus on the range of data elements, structure, and format enforced by those who control the data pool – which can be you and your competitors.
With a collaborative PDS solution you can get the best of both worlds. You can have the market standard that keeps you from falling behind your competitors. However, you can also have unique content coming through that puts you ahead of your competitors.
Right now, I am working with a collaborative PDS solution. This solution welcomes other (collaborative) PDS solutions as part of the product information flow. The solution will also encompass data pools in a reservoir concept. This PDS solution is called Product Data Lake.