Three Flavors of Data Monetization

The term data monetization is trending in the data management world.

Data monetization is about harvesting direct financial results from data that is stored, maintained, categorized and made accessible in an optimal manner. Traditionally, data management and analytics have contributed only indirectly to financial outcomes by keeping data fit for purpose in the various business processes that produce value for the business. Today the best performers are using data much more directly to create new services and business models.

In my view there are three flavors of data monetization:

  • Selling data: This is something that has been known in the data management world for years. Notable examples are the likes of Dun & Bradstreet, who sell business directory data as touched upon in the post What is a Business Directory? Another example is postal services around the world selling their address directories. This is the kind of data we know as third party data.
  • Wrapping data around products: If you have a product – or a service – you can add tremendous value to it and make it more sellable by wrapping data, potentially including third party data, around it. These data thus become second party data as touched upon in the post Infonomics and Second Party Data.
  • Advanced analytics and decision making: You can combine third party data, second party data and first party data (your own data) to do advanced analytics and fast operational decision making in order to sell more, reduce costs and mitigate risks.

Please learn more about data monetization by downloading a recent webinar hosted by Information Builders, their expert Rado Kotorov and yours truly here.


Product Data Syndication Freedom

When working with product data syndication in supply chains, the big pain is that the data standards in use and the preferred exchange methods differ between supply chain participants.

As a manufacturer, you will have hundreds of resellers who probably have data standards different from yours and most likely want to exchange data in a different way than you do.

As a merchant, you will have hundreds of suppliers who probably have data standards different from yours and most likely want to exchange data in a different way than you do.

The aim of Product Data Lake is to take that pain away from both the manufacturer side and the merchant side. We offer product data syndication freedom by letting you as a manufacturer push product information using your data standards and your preferred exchange method, and letting you as a merchant pull product information using your data standards and your preferred exchange method.
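To illustrate the principle (and nothing more), here is a minimal sketch of how such decoupling could look. All attribute names and mappings below are hypothetical and not taken from Product Data Lake itself; the point is that each party only maps its own data standard to a shared, neutral model instead of mapping to every single trading partner.

```python
# Conceptual sketch only - attribute names and mappings are made up for
# illustration, not taken from any actual Product Data Lake interface.

# A manufacturer pushes product data using its own attribute names ...
manufacturer_record = {
    "ItemNo": "P-1001",
    "ShortText": "Cordless drill 18V",
    "NetWeightKg": 1.4,
}

# Each party only maintains a mapping between its own standard and a neutral model.
manufacturer_to_neutral = {
    "ItemNo": "product_id",
    "ShortText": "description",
    "NetWeightKg": "weight_kg",
}

merchant_from_neutral = {
    "product_id": "SKU",
    "description": "ProductName",
    "weight_kg": "Weight",
}

def push(record, mapping):
    """Translate a record from the provider's own standard into the neutral model."""
    return {mapping[key]: value for key, value in record.items() if key in mapping}

def pull(neutral_record, mapping):
    """Translate a neutral record into the receiver's own standard."""
    return {mapping[key]: value for key, value in neutral_record.items() if key in mapping}

neutral = push(manufacturer_record, manufacturer_to_neutral)
merchant_record = pull(neutral, merchant_from_neutral)
print(merchant_record)
# {'SKU': 'P-1001', 'ProductName': 'Cordless drill 18V', 'Weight': 1.4}
```

In this sketch the manufacturer never needs to know the merchant's taxonomy and vice versa; each side only cares about its own mapping to the neutral model.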

If you want to know more, get in contact here:

Avoid Duplicates by Avoiding Peer-to-Peer Integrations

When working in Master Data Management (MDM) programs, duplicates are always among the main pain points. As explained in the post Golden Records in Multi-Domain MDM, this may be duplicates in party master data (customer, supplier and other roles) as well as duplicates in product master data, assets, locations and more.

Most of the data quality technology available to solve these problems revolves around identifying duplicates. This is a very intriguing discipline where I have spent some of my best years. However, it is only a remedy to the symptoms of the problem and not a means to eliminate the root cause, as touched upon in the post The Good, Better and Best Way of Avoiding Duplicates.

The root causes are plentiful and, as with all challenges, they involve technology, processes and people.

Having an IT landscape with multiple applications where master data are created, updated and consumed is a basic problem, and remedying that is the main reason for being for Master Data Management (MDM) solutions. The challenge is to implement MDM technology in a way that the MDM solution does not just become another silo of master data but instead becomes the solution for sharing master data within the enterprise – and ultimately in the digital ecosystem around the enterprise.

The main enemy from a technology perspective is, in my experience, peer-to-peer system integration. If you have chosen application X to support one business objective and application Y to support another business objective, and you learn that an integration solution between X and Y is available, this is very bad news, because short term cost and timing considerations will make that option the obvious choice. In the long run it will cost you dearly if the master data involved are handled in other applications as well, because then you will have blind spots all over the place through which duplicates will enter.

The only sustainable solution is to build a master data hub through which master data are integrated and thus shared with all applications inside the enterprise and around the enterprise. This hub must encompass a shared master data model and related metadata.
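One way to see why the hub scales better is simply to count the interfaces, each of which is a potential blind spot where duplicates can enter. The sketch below is just that arithmetic, not tied to any particular product:

```python
# Back-of-the-envelope comparison: with peer-to-peer integrations every pair of
# applications needs its own interface, while a master data hub only needs one
# interface per application.

def peer_to_peer_interfaces(n_applications: int) -> int:
    # every pair of applications gets a dedicated integration
    return n_applications * (n_applications - 1) // 2

def hub_interfaces(n_applications: int) -> int:
    # every application integrates once, with the hub
    return n_applications

for n in (5, 10, 20):
    print(f"{n} applications: {peer_to_peer_interfaces(n)} point-to-point "
          f"interfaces vs {hub_interfaces(n)} hub interfaces")
# 5 applications: 10 point-to-point interfaces vs 5 hub interfaces
# 10 applications: 45 point-to-point interfaces vs 10 hub interfaces
# 20 applications: 190 point-to-point interfaces vs 20 hub interfaces
```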

 

Welcome Dynamicweb PIM on the Disruptive MDM and PIM List

The Disruptive Master Data Management Solutions List is a list of available:

  • Master Data Management (MDM) solutions
  • Customer Data Integration (CDI) solutions
  • Product Information Management (PIM) solutions
  • Digital Asset Management (DAM) solutions.

You can use this site as a supplement to the likes of Gartner, Forrester, MDM Institute and others when selecting an MDM / CDI / PIM / DAM solution, not least because this site includes both larger and smaller disruptive MDM, PIM and similar solutions.

The latest entry on the list is Dynamicweb PIM. This is a mature cloud-based Product Information Management (PIM) solution that can be deployed either as a stand-alone PIM implementation or as part of their combined all-in-one platform together with content management, ecommerce and marketing, tightly integrated with popular ERP and CRM solutions. This integrated approach offers a short time to value for midsized companies on a quest to ramp up online sales.

Read more about the Dynamicweb PIM solution here.


Three Remarkable Observations about Reltio

The latest entry on The Disruptive Master Data Management Solutions List is Reltio. I have been following Reltio for more than 5 years and have had the chance to do some hands-on work lately.

In doing that, I have made three observations that make the Reltio Cloud solution a remarkable MDM offering.

More than Master Data

While the Reltio solution emphasizes master data, the platform can include the data that revolves around master data as well. That means you can bring transactions and big data streams onto the platform and apply analytics, machine learning, artificial intelligence and other shiny new things, taking these disciplines from a purely analytical world to exploiting the data and capabilities in the operational world.

The thinking behind this approach is that you cannot get a 360-degree view of customers, vendors and other party roles, or a 360-degree view of products, by only having a snapshot compound description of the entity in question. You also need the raw history, the relationships between entities and access to details for various use cases.

In fact, Reltio provides not just operational MDM, but through a module called Reltio IQ also brings continuously mastered data and correlated transactions into an Apache Spark environment for analytics and machine learning. This eliminates the traditional friction of synchronizing data models between MDM and analytical environments. It also allows aggregated results to be synchronized back into the MDM profiles by storing them as analytical attributes. These attributes are then available for use in operational contexts such as marketing segmentation, sales recommendations, GDPR exposure and more.

Multiple Storing Capabilities

There is an ongoing debate in the MDM community these days about whether you should use relational database technology, NoSQL technology or graph technology. Reltio utilizes all three of them for the purposes where each approach makes the most sense.

Reference data are handled as relational data. The entities are kept in a wide column store, a technique encompassing the scalability known from pure column stores but with some of the structure known from relational databases. Finally, the relationships are handled using graph techniques, which have been a recurring subject on this blog.

Reltio calls this multi-model polyglot persistence, and they embrace the latest technologies from multiple clouds such as AWS and Google Cloud Platform (GCP) under the covers.

Survival of the Fit Enough

One thing that MDM solutions do is to make a golden record from different systems of record where the same real-world entity is described in many ways and therefore is represented by duplicate records. Identifying those records is hard enough. But then comes the task of merging the conflicting values, so the most accurate values survive in the golden record.

Reltio does that very elegantly by actually not doing it. Survivorship rules can be set up based on all the needed parameters such as recency, provenance and more, and you may also allow more than one value to survive, as touched upon in the post about the principle of Survival of the Fit Enough.

In Reltio there is no purge of the values that do not immediately survive. The golden record is not stored physically. Instead, Reltio keeps one (or even more than one) virtual golden record(s) by letting the original source records stay. Therefore, you can easily roll back or update the single view of the truth.

The Reltio platform allows survivorship rules to be customized in rulesets for an unlimited number of roles and personas, in effect supporting multiple personalized versions of the truth. In an operational MDM context this allows sales, marketing, compliance, and other teams to see the data values that they care about most, while collaborating continuously in what Reltio calls the Self-Learning Enterprise.
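To make the idea concrete, here is a minimal, purely hypothetical sketch of survivorship over retained source records. It is not Reltio's API or data model, just an illustration of how a virtual golden record can be computed per persona and re-evaluated (or rolled back) at any time, precisely because the source records are never purged.

```python
# Purely illustrative sketch of "survival of the fit enough" - NOT Reltio's API
# or data model, just a conceptual example of survivorship over source records.
from datetime import date

# The original source records are kept as-is; nothing is purged.
source_records = [
    {"source": "CRM", "updated": date(2018, 5, 1),  "phone": "+45 1111 1111", "email": "a@example.com"},
    {"source": "ERP", "updated": date(2017, 11, 3), "phone": "+45 2222 2222", "email": "a@example.com"},
    {"source": "Web", "updated": date(2018, 6, 12), "phone": "+45 3333 3333", "email": "ann@example.com"},
]

# Hypothetical rulesets: one persona trusts the most recent value (recency),
# another trusts specific sources in a given order (provenance).
rulesets = {
    "marketing":  {"strategy": "most_recent"},
    "compliance": {"strategy": "source_priority", "priority": ["ERP", "CRM", "Web"]},
}

def golden_record(records, ruleset, attributes=("phone", "email")):
    """Compute a virtual golden record on the fly; the source records stay untouched."""
    if ruleset["strategy"] == "most_recent":
        ordered = sorted(records, key=lambda r: r["updated"], reverse=True)
    else:  # source_priority
        rank = {src: i for i, src in enumerate(ruleset["priority"])}
        ordered = sorted(records, key=lambda r: rank[r["source"]])
    return {attr: ordered[0][attr] for attr in attributes}

print(golden_record(source_records, rulesets["marketing"]))
# {'phone': '+45 3333 3333', 'email': 'ann@example.com'}
print(golden_record(source_records, rulesets["compliance"]))
# {'phone': '+45 2222 2222', 'email': 'a@example.com'}
```

Because the golden record is only a view over the retained source records, switching rulesets or rolling back a survivorship decision is just a matter of re-evaluating, not of restoring purged data.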

Going beyond operational MDM

 

Ecosystem Wide Product Information Management

The concept of doing Master Data Management (MDM) not only enterprise wide but ecosystem wide was examined in the post Ecosystem Wide MDM.

As mentioned, product master data is an obvious domain where business outcomes are likely to materialize first when you stretch your digital transformation to encompass business ecosystems.

The figure below shows the core participants in the ecosystem wide Product Information Management (PIM) landscape we support at Product Data Lake:

Ecosystem Wide PIM

Your enterprise is in the centre. You may have or need an in-house PIM solution where you enrich product information and make it more competitive, as elaborated in the post Using Internal and External Product Information to Win.

At Product Data Lake we collaborate with providers of Artificial Intelligence (AI) capabilities and similar technologies in order to improve data quality and analyse product information.

As shown at the top, there may be a relevant data pool with a consensus structure available for your industry, where you exchange some of your product information with trading partners. At Product Data Lake we embrace that scenario with our reservoir concept.

Otherwise, you will need to form partnerships with individual trading partners. At Product Data Lake we make that happen with a win-win approach. This means that providers can push their product information in a uniform way with the structure and the taxonomy they have, and receivers can pull the product information in a uniform way with the structure and the taxonomy they have. This product data syndication concept is outlined in the post Sell more. Reduce costs.

Where to Buy a Magic Wand?

Sometimes you may get the impression that sales, including online sales, is driven by extremely smart sales and marketing people targeting simple-minded customers.

Let us look at an example of selling a product online. Below are two approaches:

Magic wand


My take is that the data rich approach is much more effective than the alternative (but sadly often used) one. Some proof is delivered in the post Ecommerce Suffers without Data Quality.

In many industries, the merchant who will cash in on the sale will be the one having the best and most stringent data, because this serves the overwhelming majority of the buying power: buyers who do not want to be told what to buy, but want to know what they are buying.

So, pretending to be an extremely smart data management expert, I will argue that you can monetize product data by having the most complete, timely, consistent, conformant and accurate product information in front of your customers. This approach is further explained in the piece about Product Data Lake.

MDM in The Cloud, On-Premise or Both

One of the forms of Master Data Management (MDM) is the rising cloud deployment model, as touched upon in the Disruptive MDM List blog post about 8 Forms of Master Data Management.

If we look at the MDM solution vendors, they may in that sense be divided into three kinds:

  • Cloud only, which are vendors born in the cloud age and who are delivering their service in the cloud only. Reltio is an example of that kind of MDM vendor.
  • Cloud or on-premise, which are vendors that can deliver both in the cloud and on premise, but where it makes the most sense that you as a customer choose the one that fits you best. An example is Semarchy.
  • Cloud and on-premise. Informatica is an example of an MDM vendor that embraces both deployment models (together with other data management disciplines) at the same time (called hybrid), as told in an article by Kristin Nicole of SiliconANGLE. The title goes like this: Balancing act: Informatica straddles on-prem needs with cloud data at Informatica World 2018


What is Interenterprise Data Sharing?

The term “Interenterprise Data Sharing” has been used a couple of times by Gartner, the analyst firm, during the last two decades.

Lately it has appeared in a figure accompanying a recent research document with the title Fundamentals for Data Integration Initiatives.

Data Integration (source: Gartner Inc, with red ovals added)

The term was also used back in 2001 in the piece about how Data Ownership Extends Outside the Enterprise. Here on the blog it was included in the title of the post about Interenterprise Data Sharing and the 2016 Data Quality Magic Quadrant.

In my eyes, interenterprise data sharing is closely related to how you can achieve business benefits from taking part in the ecosystem flavor of a digital business platform. Some of the data types where we will see such business ecosystem platforms flourish are product model master data and data about, and coming from, things related to the Internet of Things (IoT) theme. This is further explained in the blog page about Master Data Share.

Diversities in Civil Registration


The way governments around the world have organized their Master Data Management (MDM) is quite different from country to country. When it comes to registering citizens, the practice varies a lot, as described in the post Citizen Master Data Management.

I have lived most of my years in Denmark, where our national ID is unique and used for everything by public agencies and also a lot by private companies. Some years ago I lived in the United Kingdom, where the public agencies (and my bank) had no clue about who I was, when I came, what I did and when I left.

Recently the World Economic Forum has circulated some videos on LinkedIn showing how things are done differently around the world. The video below is about the Danish civil registry (which by the way is similar in the other Scandinavian countries):

What do you think? Would this public MDM and data quality practice work in the USA, the UK, Germany or wherever else you live?