Twenty years ago, when I started working as a contractor and entrepreneur in the data management space, data was not at the top of the agenda at many enterprises. Fortunately, that has changed.
An example is given by Schneider Electric CEO Jean-Pascal Tricoire in his recent blog post on how digitization and data can enable companies to be more sustainable. You can read it on the Schneider Electric Blog in the post 3 Myths About Sustainability and Business.
Manufacturers in the building material sector naturally emphasize sustainability. In his post Jean-Pascal Tricoire says: “The digital revolution helps answering several of the major sustainability challenges, dispelling some of the lingering myths regarding sustainability and business growth”.
One of three myths dispelled is: Sustainability data is still too costly and time-consuming to manage.
From my work with Master Data Management (MDM) and Product Information Management (PIM) at manufacturers and merchants in the building material sector, I know that managing the basic product data, trading data and customer self-service-ready product data is hard enough. Taking on sustainability data will only make that harder. So, we need to be smarter in our product data management. Smart and sustainable homes and smart, sustainable cities need smart product data management.
In his post Jean-Pascal Tricoire mentions that Schneider Electric has worked with other enterprises in its ecosystem in order to be smarter about product data related to sustainability. In my eyes, the business ecosystem theme is key in the product data smartness quest, as pondered in the post How Manufacturers of Building Materials Can Improve Product Information Efficiency.
The term data monetization is trending in the data management world.
Data monetization is about harvesting direct financial results from having access to data that is stored, maintained, categorized and made accessible in an optimal manner. Traditionally, data management & analytics has contributed indirectly to financial outcomes by aiming to keep data fit for purpose in the various business processes that produce value for the business. Today the best performers are using data much more directly to create new services and business models.
In my view there are three flavors of data monetization:
- Selling data: This is something that has been known in the data management world for years. A notable example is Dun & Bradstreet, which sells business directory data, as touched on in the post What is a Business Directory? Another example is postal services around the world selling their address directories. This is the kind of data we know as third party data.
- Wrapping data around products: If you have a product – or a service – you can add tremendous value to it and make it more sellable by wrapping data, potentially including third party data, around it. These data thus become second party data, as touched on in the post Infonomics and Second Party Data.
- Advanced analytics and decision making: You can combine third party data, second party data and first party data (your own data) to perform advanced analytics and fast operational decision making in order to sell more, reduce costs and mitigate risks.
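The third flavor can be sketched in a few lines of code. This is a minimal, hypothetical illustration: the records, field names and the decision rule are all invented for the example, not taken from any real data provider.

```python
# Combining first, second and third party data into one decision-ready view.
# All records and field names below are hypothetical illustrations.

first_party = {"customer_id": "C1", "orders_last_year": 12}        # your own data
second_party = {"customer_id": "C1", "product_usage_hours": 340}   # shared by a partner
third_party = {"customer_id": "C1", "credit_rating": "A"}          # bought from a provider

def combine(*sources):
    """Merge records about the same customer into one view."""
    merged = {}
    for source in sources:
        merged.update(source)
    return merged

def decide(record):
    """A toy operational decision: offer an upsell to active, creditworthy customers."""
    return (record["orders_last_year"] > 10
            and record["product_usage_hours"] > 100
            and record["credit_rating"] in ("A", "B"))

view = combine(first_party, second_party, third_party)
print(decide(view))  # → True
```

The point is not the trivial merge, but that the decision rule only becomes possible once all three data flavors sit in one view.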
You can learn more about data monetization by downloading a recent webinar hosted by Information Builders, featuring their expert Rado Kotorov and yours truly, here.
Sometimes you may get the impression that sales, including online sales, is driven by extremely smart sales and marketing people targeting simple-minded customers.
Let us look at an example with selling a product online. Below are two approaches:
Bigger picture is available here.
My take is that the data rich approach is much more effective than the alternative (but sadly often used) one. Some proof is delivered in the post Ecommerce Suffers without Data Quality.
In many industries, the merchant who cashes in on the sale will be the one with the best and most stringent data, because this serves the overwhelming majority of buying power: buyers who do not want to be told what to buy, but what they are buying.
So, pretending to be an extremely smart data management expert, I will argue that you can monetize product data by having the most complete, timely, consistent, conform and accurate product information in front of your customers. This approach is further explained in the piece about Product Data Lake.
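Of the quality dimensions just listed, completeness is the easiest to make concrete. Below is a hedged sketch of scoring it; the required attribute list and the sample product record are hypothetical, and a real PIM setup would score per category and per channel.

```python
# A toy completeness score: the share of required attributes that are
# present and non-empty. The attribute list and record are hypothetical.

REQUIRED_ATTRIBUTES = ["name", "description", "weight_kg", "image_url", "material"]

def completeness(product: dict) -> float:
    """Return the fraction of required attributes that are filled in."""
    filled = sum(1 for attr in REQUIRED_ATTRIBUTES if product.get(attr))
    return filled / len(REQUIRED_ATTRIBUTES)

product = {
    "name": "Insulation board 50mm",
    "description": "Mineral wool insulation board for exterior walls.",
    "weight_kg": 4.2,
    "image_url": "",          # missing — hurts the score
    "material": "mineral wool",
}

print(f"{completeness(product):.0%}")  # → 80%
```

A merchant could use such a score to decide which products are ready to be put in front of customers and which need enrichment first.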
This week I attended the Master Data Management Summit Europe 2018 and Data Governance Conference Europe 2018 in London.
Among the recurring sessions year after year at this conference and its sister conferences around the world is Aaron Zornes presenting the top MDM vendors as he (that is, the MDM Institute) sees them, as well as the top System Integrators.
Managing an ongoing list of such entities can be hard, and doing it in PowerPoint does not make the task easier, as visualized in two different shots captured via Twitter, seen below, around the Top 19 to 22 European MDM / DG System Integrators:
Bigger picture available here.
Now, the variations between these two versions of the truth and the real world are (at least):
- Red circles: Is number 17 (in alphabetical order) Deloitte – in Denmark – who bought Platon 5 years ago, or is it KPMG?
- Blue arrow and circles: Is SAP Professional Services in there or not – and if they are, there must be 21 Top 20 players, with two number 11s: Edifixio and Entity Group.
- Green arrow: Number 1 (in alphabetical order), Affecto, has been bought by number 8, CGI, during this year.
PS: Recently I started a disruptive list of MDM vendors maintained by the vendors themselves. Perhaps the analysts can be helped by a similar list for System Integrators?
Our February 2018 version of the Product Data Lake cloud service is live. New capabilities include:
- Subscriber clusters
- Put APIs
As a Product Data Lake customer, you can be a subscriber to our public cloud (www.productdatalake.com) or install the Product Data Lake software on your private cloud.
Now there is a hybrid option: being a member of a subscriber cluster. A subscriber cluster suits, for example, an affiliated group of companies, where you can share product data internally while at the same time sharing product data with trading partners outside your group using the same account.
Already existing means of feeding Product Data Lake include FTP file drops, traditional file upload from your desktop or network drives, or entering data directly into Product Data Lake. Now you can also use our APIs for system-to-system data exchange.
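To give a flavor of system-to-system exchange, here is a minimal sketch of building a Put request for one product record. The endpoint path, payload fields and token are hypothetical illustrations, not the actual Product Data Lake API; consult the real API documentation for the correct endpoints and authentication.

```python
# Sketch of pushing one product record via a hypothetical Put API.
# Endpoint, fields and token are invented for illustration.
import json
import urllib.request

API_BASE = "https://www.productdatalake.com/api"   # assumed base URL
TOKEN = "your-api-token"                           # hypothetical auth token

def put_product(product: dict) -> urllib.request.Request:
    """Build an authenticated PUT request carrying one product record."""
    body = json.dumps(product).encode("utf-8")
    return urllib.request.Request(
        url=f"{API_BASE}/products/{product['sku']}",
        data=body,
        method="PUT",
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {TOKEN}"},
    )

req = put_product({"sku": "ABC-123", "name": "Insulation board 50mm"})
print(req.get_method(), req.full_url)
# → PUT https://www.productdatalake.com/api/products/ABC-123
# urllib.request.urlopen(req) would send it — omitted to keep the sketch offline.
```

The request is built but not sent, so the sketch runs without network access.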
Get the Overview
Get the full Product Data Lake Overview here (opens a PDF file).
Back in 2015 Gartner, within a Magic Quadrant for MDM, described two different ways observed in how you may connect big data and master data management as reported in the post Two Ways of Exploiting Big Data with MDM.
In short, the two ways observed were:
- Capabilities to perform MDM functions directly against copies of big data sources such as social network data copied into a Hadoop environment. Gartner then found that there have been very few successful attempts (from a business value perspective) to implement this use case, mostly as a result of an inability to perform governance on the big datasets in question.
- Capabilities to link traditionally structured master data against those sources. Gartner then found that this use case is also sparse, but more common and more readily able to prove value. This use case is also gaining some traction with other types of unstructured data, such as content, audio and video.
In my eyes the ability to perform governance on big datasets is key. In fact, master data will tend to be more externally generated and maintained, just like big data usually is. This will change our ways of doing information governance as for example discussed in the post MDM and SCM: Inside and outside the corporate walls.
Eventually, we will see use cases at the intersection of MDM and big data. The one I am working with right now is about improving the sharing of product master data (product information) between trading partners. While this quest may serve analytical purposes, which is the stated aim of big data, this service will fundamentally serve operational purposes, which is the predominant aim of master data management.
This big data, or rather data lake, approach is about how we, by linking metadata, connect the different perceptions of product information that exist in cross-company supply chains. While it would be optimal for everyone to be on the same standard at the same time, that is quite utopian. Therefore, we must encourage pushing product information (including rich textual content, audio and video) in the provider’s standard and doing the “schema-on-read” stuff when each receiver pulls the product information for their purposes.
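The schema-on-read idea can be sketched as follows. The attribute names on both the provider and receiver side are hypothetical; the point is that the stored record stays in the provider's standard and each receiver applies its own mapping only at read time.

```python
# "Schema-on-read": the provider's record is stored as-is, and each
# receiver maps it to its own schema when pulling. Names are hypothetical.

provider_record = {           # stored in the provider's standard
    "ItemNo": "ABC-123",
    "ItemName": "Insulation board 50mm",
    "NetWeight": "4.2",
}

receiver_mapping = {          # one receiver's view of the same attributes
    "ItemNo": "sku",
    "ItemName": "product_name",
    "NetWeight": "weight_kg",
}

def read_with_schema(record: dict, mapping: dict) -> dict:
    """Apply the receiver's mapping at read time — the stored data is untouched."""
    return {mapping[key]: value for key, value in record.items() if key in mapping}

print(read_with_schema(provider_record, receiver_mapping))
# → {'sku': 'ABC-123', 'product_name': 'Insulation board 50mm', 'weight_kg': '4.2'}
```

A second receiver would simply bring its own mapping; the provider never has to transform its data for each trading partner.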
If you want to learn more about how that goes, you can follow Product Data Lake here.
Business outcome is the end goal of any data management activity, be that data governance, data quality management, Master Data Management (MDM) or Product Information Management (PIM).
Business outcome comes from selling more and reducing costs.
At Product Data Lake we have a simple scheme for achieving business outcome through selling more goods and reducing costs of sharing product information between trading partners in business ecosystems:
Interested? Get in touch:
Within the upcoming EU General Data Protection Regulation (GDPR), the term data subject is used for the persons whose privacy we must protect.
These are the persons we handle as entities within party Master Data Management (MDM).
In the figure below, the blue area covers the entity types and roles that are data subjects in the eyes of GDPR.
While GDPR is of very high importance in business-to-consumer (B2C) and government-to-citizen (G2C) activities, GDPR is also of importance for business-to-business (B2B) and government-to-business (G2B) activities.
GDPR does not cover unborn persons, which may be of interest in a few industries, for example healthcare. When it comes to minors, there are special considerations within GDPR to be aware of. GDPR does not apply to deceased persons. In some industries, like financial services and utilities, handling the estate after the death of a person is essential, and knowing about that sad event is of importance in general, as touched on in the post External Events, MDM and Data Stewardship.
One tough master data challenge in the light of GDPR will be knowing the status of your registered party master data entities. This means knowing whether an entity is a private individual, a contact at an organization, or an organization or a department thereof as such. From my data matching days, I know that heaps of databases do not hold that clarity, as reported in the post So, how about SOHO homes.
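The classification challenge can be illustrated with a toy heuristic. The field names and rules below are hypothetical; real-world data matching needs far richer logic (name parsing, reference data, deduplication) than this sketch.

```python
# A toy classification of a party master data record. Fields and rules
# are hypothetical heuristics, not a real data matching approach.

def classify_party(record: dict) -> str:
    """Classify a record as private individual, contact or organization."""
    has_person = bool(record.get("person_name"))
    has_org = bool(record.get("organization_name"))
    if has_person and has_org:
        return "contact at organization"
    if has_org:
        return "organization"
    if has_person:
        return "private individual"
    return "unknown"

print(classify_party({"person_name": "Jane Doe"}))
# → private individual
print(classify_party({"person_name": "Jane Doe", "organization_name": "Acme A/S"}))
# → contact at organization
```

The GDPR relevance is direct: only records classified as (or containing) natural persons are data subjects, so a database that cannot make this distinction cannot scope its compliance work.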
Being ready for the EU GDPR (European Union General Data Protection Regulation) is – or should be – a topic on the agenda for European businesses and international businesses operating with a European reach.
The finish date is fixed: 25th May 2018. What GDPR is about is well covered (perhaps overwhelmingly so) on the internet. But how do you get there?
Below is my template for a roadmap:
The roadmap has, as all programs should, an as-is phase, here concretely a Privacy Impact Assessment covering what should have been done if the regulation were already in force. Then comes the phase stating the needed to-be state, with an action plan that fills the gaps while absorbing business benefits as well. And then the implementation of the prioritized tasks.
GDPR is not only about IT systems, but to be honest, for most companies it mostly will be. Your IT landscape determines which applications will be involved. Most companies will have sales and marketing applications holding personal data. Human Resource Management is a given too. Depending on your business model, there will be others. Remember, this is about all kinds of personal data – that includes, for example, supplier contact data that identifies a person.
The skills needed span legal, (Master) Data Management and IT security. You may have these skills internally, or you may need interim resources of the above-mentioned kind in order to meet the fixed finish date and be sure things are done right.
By the way: My well skilled associates and I are ready to help. Get in contact:
The upcoming application of the EU General Data Protection Regulation (GDPR) is an attempt to harmonize the data protection and privacy regulations across member states in the European Union.
However, there is room for deviation in ongoing national law enforcement. Probably article 87, concerning processing of the national identification number, and article 88, dealing with processing in the context of employment, are where we will see national peculiarities.
National identification numbers are today used in different ways across the member states. In the Nordics, an all-purpose identification number covering identification of citizens from cradle to grave in public registrations (tax, health, social security, election and even transit) as well as private ones (financial, employment, telco …) has been practiced for many years, whereas more or less unlinked single-purpose identification numbers (tax, social security, health, election …) are the norm in most other places.
How you treat the workforce, and the derived ways of registering them, is also a field of major differences within the Union, and we should therefore expect to be observant of national specialties when it comes to mastering the human resource part of the data domains affected by GDPR.
Do you see other fields where GDPR will become national within the Union?