It is Magic Quadrant Week

Earlier this week this blog featured the Magic Quadrant for Customer MDM and the Magic Quadrant for Product MDM. Today it is time to have a look at the just published Magic Quadrant for Data Quality Tools.

Last year I wondered whether we would finally see data quality tools focus on other pain points than duplicates in party data and postal address precision, as discussed in the post The Multi-Domain Data Quality Tool Magic Quadrant 2014 is out.

Well, apparently there still isn’t a market for that as the Gartner report states: “Party data (that is, data about existing customers, prospective customers, citizens or patients) remains the top priority for most organizations: Almost nine in 10 (89%) of the reference customers surveyed for this Magic Quadrant consider it a priority, up from 86% in the previous year’s survey.”

From my own experience of working predominantly with product master data during the last couple of years, there are issues and big pain points with product data too. They are just different from the main pain points with party master data, as examined in the post Multi-Domain MDM and Data Quality Dimensions.

I sincerely believe that there are opportunities in providing services to solve the specific data quality challenges for product master data, which, according to Gartner, “is one of the most important information assets an organization has; second-only, perhaps, to customer master data”. In all humbleness, my own venture is called the Product Data Lake.

Anyway, as ever, Informatica is our friend when it comes to free copies of a data management quadrant. Get a free copy of the 2015 Magic Quadrant for Data Quality Tools here.

The Perhaps Second Most Important MDM Quadrant 2015 is Out

This year the Gartner Magic Quadrant for Master Data Management of Product Data Solutions was published very shortly after the Gartner Magic Quadrant for Master Data Management of Customer Data Solutions, with only one day in between. I hope this is a sign that the two MDM quadrants eventually will merge into a (Multi-Domain) MDM Quadrant, as touched upon yesterday in my post about the Customer MDM Quadrant.

MDM Brands
This is not the quadrant, just some vendor names

The product MDM quadrant states: “Product master data is one of the most important information assets an organization has; second-only, perhaps, to customer master data”. In my humble opinion, that statement can be refined. It depends on the number of customers (or other party roles) versus the number of products you deal with: the highest number names the most important domain to start with in your organization.

As usual, Informatica seems to be the fastest MDM vendor when it comes to providing a free copy of the Gartner quadrants. Find the 2015 Product MDM Quadrant here from Informatica.

Two Ways of Exploiting Big Data with MDM

MDM Wordle
This is not the quadrant, just some vendor names

The Gartner 2015 Magic Quadrant for Master Data Management of Customer Data Solutions is out. One way of getting the report without being a Gartner customer is through this link on the Informatica site.

Successful providers of Master Data Management (MDM) solutions will sooner or later need to offer ways of connecting MDM with big data.

In the Customer MDM quadrant, Gartner, without mentioning whether this relates to customer MDM only or multi-domain MDM in general, mentions two ways of connecting MDM with big data:

  • Capabilities to perform MDM functions directly against copies of big data sources, such as social network data copied into a Hadoop environment. Gartner has found that there have been very few successful attempts (from a business value perspective) to implement this use case, mostly as a result of an inability to perform governance on the big datasets in question.
  • Capabilities to link traditionally structured master data to those sources. Gartner has found that this use case is also sparse, but more common and more readily able to prove value. This use case is also gaining some traction with other types of unstructured data, such as content, audio and video.

My take is that these ways apply to the other MDM domains (supplier, product, location, asset …) as well, just as I think Gartner sooner or later will need to make only one MDM quadrant, as pondered in the post called The second part of the Multi-Domain MDM Magic Quadrant is out.

Also, I think the ability to perform governance on big datasets is key. In fact, in my eyes master data will tend to be more externally generated and maintained, just like big data usually is. This will change our ways of doing information governance, as discussed in my previous post on this blog, which was by the way inspired by the Gartner product MDM person. The post is called MDM and SCM: Inside and outside the corporate walls.

MDM and SCM: Inside and outside the corporate walls

In my journey through the Master Data Management (MDM) landscape, I am currently working from a Supply Chain Management (SCM) perspective. SCM is very exciting as it connects the buy-side and the sell-side of a company. In that connection we will be able to understand some basic features of multi-domain MDM, as touched upon in a recent post about the MDM ancestors Customer Data Integration (CDI) and Product Information Management (PIM). The post is called CDI, PIM, MDM and Beyond.

MDM and SCM 1.0: Inside the corporate walls

Traditional Supply Chain Management deals with what goes on from when a product is received from a supplier, or vendor if you like, until it ends up at the customer.

In the distribution and retail world, the product usually stays physically the same, but from a data management perspective we struggle with having buying views and selling views on the data.

In the manufacturing world, we see the products we are going to sell transform from raw materials through semi-finished products into finished goods. One challenge here arises when companies grow through acquisitions: a given real-world product might be seen as a raw material in one plant but a finished good in another plant.

Regardless of our company’s position in the ecosystem, we also have to deal with the buy side of products such as machinery, spare parts, supplies and other goods that stay within the company.

MDM and SCM 2.0: Outside the corporate walls

SCM 2.0 is often used to describe handling the extended supply chain that is a reality for many businesses today due to business process outsourcing and other ways of collaboration within ecosystems of manufacturers, distributors, retailers, end users and service providers.

From a master data management perspective, the ways of handling supplier/vendor master data and customer master data here merge into handling business-partner master data, or simply party master data.

For product master data there are huge opportunities in sharing most of these master data within the ecosystems. Usually you will do that in the cloud.

In such environments, we have to rethink our approach to data / information governance. This challenge was, with a starting point in cloud computing, examined by Andrew White of Gartner (the analyst firm) in a blog post called “Thoughts on The Gathering Storm: Information Governance in the Cloud”.

Spectre vs James Bond and the Unique Product Identifier

The latest James Bond movie is out. It is called Spectre. Spectre is the name of a criminal organization.

In the movie, “Bond, James Bond” alias 007, in this case going by Mickey Mouse, sneaks into a Spectre meeting. At that meeting the Spectre folks report how they maliciously earn money. One way is selling falsified medicine.

Of course, Bond hits Spectre hard during the movie. And where Bond didn’t hit all the villains, data management will do so when it comes to falsified medicine.

The method is using a unique product identifier.

Usually in master data management, we describe a product to the level of unique characteristics, also called a Stock Keeping Unit (SKU). In the pharmaceutical world that will typically be a brand name, a concentration of active substances, a dosage type, a pack size and possibly a destination country.

From the electronics and machinery sectors, we know the approach of assigning each physical instance of the product a serial number. The same approach is becoming mandatory for medicine in more and more countries. The pharmaceutical manufacturers will assign a unique number to every package (and sometimes also shipping boxes) and report those to the health care authorities around the world. At the point of delivery, it is checked that the identifier matches an original product instance.

The identifier is formed by a product identifier being a Global Trade Identification Number (GTIN) or a National Drug Code (NDC) plus a randomly assigned serial number, making it hard to guess the serial number part.
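As a minimal sketch, assuming Python and the GS1 element string notation (application identifier (01) for the GTIN, (21) for the serial number), such a serialized identifier could be composed like this; the function name and serial format are my own illustration:

```python
import secrets
import string

def make_serialized_identifier(gtin: str) -> str:
    """Compose a serialized product identifier from a GTIN plus a
    randomly assigned serial number, in GS1 element string style:
    (01) = GTIN, (21) = serial number."""
    if len(gtin) != 14 or not gtin.isdigit():
        raise ValueError("expected a 14-digit GTIN")
    # A cryptographically random serial makes the serial part hard to guess.
    alphabet = string.ascii_uppercase + string.digits
    serial = "".join(secrets.choice(alphabet) for _ in range(12))
    return f"(01){gtin}(21){serial}"

sid = make_serialized_identifier("05012345678900")
```

The point of using `secrets` rather than a plain counter is exactly the one made above: a counterfeiter who knows one valid serial number should not be able to guess the next.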

The World of Reference Data

Reference Data Management (RDM) is an evolving discipline within data management. When organizations mature in the reference data management realm, we often see a shift from relying on internally defined reference data to relying on externally defined reference data. This is based on the good old saying of not reinventing the wheel, and also on the fact that externally defined reference data usually are better at fulfilling multiple purposes of use, where internally defined reference data tend to only cater for the most important purpose of use within your organization.

Then, what standard to use tends to be a matter of where in the world you are. Let’s look at three examples from the location domain, the party domain and the product domain.

Location reference data

If you read articles in English about reference data and ensuring accuracy and other data quality dimensions for location data, you often meet remarks such as “be sure to check validity against US Postal Services” or “make sure to check against the Royal Mail PAF File”. This is all great if all your addresses are in the United States or the United Kingdom. If all your addresses are in another country, there will in many cases be similar services for that country. If your addresses are spread around the world, you have to look further.

There are some Data-as-a-Service offerings for international addresses out there. When it comes to having your own copy of location reference data, the Universal Postal Union has an offering called the Universal POST*CODE® DataBase. You may also look into open data solutions such as GeoNames.
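As an illustration of how a local copy of location reference data can be put to use, here is a minimal Python sketch. The sample rows mimic the country code / postal code / place name layout of the GeoNames postal code files, but the data and function names are my own:

```python
# Illustrative reference rows in the style of the GeoNames postal
# code files: (country code, postal code, place name).
reference_rows = [
    ("DK", "2100", "København Ø"),
    ("GB", "SW1A 1AA", "London"),
    ("US", "10001", "New York"),
]

# Build a lookup index over (country, postal code) pairs.
postal_index = {(country, code) for country, code, _ in reference_rows}

def is_known_postal_code(country: str, code: str) -> bool:
    """True if the (country, postal code) pair exists in the reference set."""
    return (country.upper(), code.strip()) in postal_index

print(is_known_postal_code("dk", "2100"))  # a known Danish postal code
```

The same index approach scales from three rows to a full worldwide reference file; only the loading step changes.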

Party reference data

Within party master data management for Business-to-Business (B2B) activities, you want to classify your customers, prospects, suppliers and other business partners according to what they do. For that, there are some frequently used coding systems in areas where I have been:

  • Standard Industrial Classification (SIC) codes, the four-digit numerical codes assigned by the U.S. government to business establishments.
  • The North American Industry Classification System (NAICS).
  • NACE (Nomenclature of Economic Activities), the European statistical classification of economic activities.

As important economic activities change over time, these systems change to reflect the real world. As an example, my Danish company registration has changed NACE code three times since 1998, while I have been doing the same thing.

This doesn’t make conversion services between these systems any easier.

Product reference data

There is also a good choice of standardized classification systems for product data out there. To name a few:

  • The United Nations Standard Products and Services Code® (UNSPSC®), managed by GS1 US™ for the UN Development Programme (UNDP).
  • eCl@ss, who presents themselves as: “THE cross-industry product data standard for classification and clear description of products and services that has established itself as the only ISO/IEC compliant industry standard nationally and internationally”. eCl@ss has its main support in Germany (the home of the Mercedes E-Class).

In addition to cross-industry standards there are heaps of industry specific international, regional and national standards for product classification.


Using a Data Lake for Reference Data

TechTarget has recently published a definition of the term data lake.

In the explanation it is mentioned that the term data lake is being accepted as a way to describe any large data pool in which the schema and data requirements are not defined until the data is queried. The explanation also states that: “While a hierarchical data warehouse stores data in files or folders, a data lake uses a flat architecture to store data.”

A data lake is an approach to overcoming the known big data characteristics of volume, velocity and variety, where probably the last one, variety, is the most difficult to overcome with a traditional data warehouse approach.
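The schema-on-read idea from the definition above can be shown in a minimal Python sketch: raw records land in the lake as-is, and the fields of interest are only projected when the data is queried. The records and field names are illustrative:

```python
import json

# Raw, heterogeneous records stored as-is in the "lake": no schema is
# imposed at write time, which is how variety is absorbed.
raw_lake = [
    '{"source": "crm", "customer": "Acme", "country": "DK"}',
    '{"source": "sensor", "device": 42, "temp_c": 21.5}',
    '{"source": "crm", "customer": "Globex"}',
]

def query(lake, fields):
    """Schema-on-read: parse on demand and project only the requested fields."""
    for line in lake:
        record = json.loads(line)
        yield {f: record.get(f) for f in fields}

# The "schema" (customer, country) is only decided here, at query time.
customers = [r for r in query(raw_lake, ["customer", "country"])
             if r["customer"] is not None]
```

Note that the sensor record simply falls out of this particular query; nothing had to be decided about it when it was stored.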

If we look at traditional ways of using data warehouses, these have revolved around storing internal transaction data linked to internal master data. With the rise of big data there will be a shift to encompassing more and more external data. One kind of external data is reference data, being data that typically is born outside a given organization and has many different purposes of use.

Sharing data with the outside must be a part of your big data approach. This goes for including traditional flavours of big data such as social data and sensor data, as well as what we may call big reference data: pools of global data and bilateral data, as explained on this blog on the page called Data Quality 3.0. The data lake approach may very well work for big reference data as it may for other flavours of big data.

The BrightTalk community on Big Data and Data Management has a formidable collection of webinars and videos on big data and data management topics. I am looking forward to contributing there on 25th June 2015 with a webinar about Big Reference Data.


Data Quality: The Union of First Time Right and Data Cleansing

The other day Joy Medved aka @ParaDataGeek made this tweet:

https://twitter.com/ParaDataGeek

Indeed, upstream prevention of bad data entering our databases is surely better than downstream data cleaning. Also, real-time enrichment is better than enriching long after data has been put to work.

That said, there are situations where data cleaning has to be done. These reasons were examined in the post Top 5 Reasons for Downstream Cleansing. But I can’t think of many situations where a downstream cleaning and/or enrichment operation will be of much worth if it isn’t followed up by an approach to getting it first time right in the future.

If we go a level deeper into data quality challenges, there will be some different data quality dimensions with different importance to various data domains as explored in the post Multi-Domain MDM and Data Quality Dimensions.

With customer master data we most often have issues with uniqueness and location precision. While I have spent many happy years with data cleansing, data enrichment and data matching tools, I have during the last couple of years been focusing on a tool for getting that first time right.

Product master data are often marred by issues with completeness and (location) conformity. The situation here is that tools and platforms for mastering product data are focused on what goes on inside a given organization and not so much on what goes on between trading partners. Standardization seems to be the only hope, but that path is too long to wait for and may in some ways contradict the end purpose, as discussed in the post Image Coming Soon.

So in order to have a first time right solution for product master data sharing, I have embarked on a journey with a service called the Product Data Lake. If you want to join, you are most welcome.

PS: The product data lake also has the capability of catching up with the sins of the past.


Making a Firmographic Analysis

What demographics are to people, firmographics are to organizations.

I am currently working on starting up a Business-to-Business (B2B) service. In order to assess the market, I had to know something about how many companies are out there who could possibly be in need of such a service.

The service will work world-wide, but adhering to the sayings about thinking globally/big and starting locally/small, I have started with assessing the Danish market. Also, there is easy and inexpensive access to business directories for Denmark.

My first filter was selecting companies with at least 50 employees.

As the service is suitable for companies within ecosystems of manufacturers, distributors and retailers, I selected the equivalent range of industry codes. In this case it was NACE codes, which resemble SIC codes and other Line-Of-Business classifications used in other geographies.

There were circa 2,500 companies in my selection. However, some belong to the same company family tree. By doing a merge/purge with the largest company in a company family tree as the survivor, the list was down to circa 2,000 companies.
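The merge/purge step can be sketched in a few lines of Python, keeping the largest company in each family tree as the survivor; the field names and figures are illustrative:

```python
# Illustrative selection records: each company carries a key for its
# company family tree plus an employee count.
companies = [
    {"name": "Acme A/S",        "family_tree": "acme", "employees": 480},
    {"name": "Acme Retail A/S", "family_tree": "acme", "employees": 120},
    {"name": "Nordisk Stål",    "family_tree": "ns",   "employees": 75},
]

# Merge/purge: within each family tree, the company with the most
# employees survives; the rest are purged.
survivors = {}
for company in companies:
    key = company["family_tree"]
    if key not in survivors or company["employees"] > survivors[key]["employees"]:
        survivors[key] = company

deduplicated = list(survivors.values())
```

The survivorship rule (largest wins) is a choice; other analyses might keep the registered parent company or merge attributes across the tree instead.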

For this particular service, there are some other possibly competing approaches that are stronger for some kinds of goods than for others. For that purpose, I made a bespoke categorization:

  • Priority A: Building materials, furniture, houseware, machinery and vehicles.
  • Priority B: Electronics, books and clothes.
  • Priority C: Pharmaceuticals, food, beverage and tobacco.

Retailers that span several priorities were placed in priority B. Otherwise, for this high-level analysis, I only used the primary Line-Of-Business.

The result was as shown below:

Firmographic

So, from my firmographic analysis I know the rough size of the target market in one locality. I can assume that other markets look more or less the same, or I can do specific firmographics on other geographies. Also, I can apply the first results of dialogues with entities in the breakdown model and see if the model needs modification.


Image Coming Soon

End customer self-service has grown dramatically during the last decades due to the increasing adoption of ecommerce. When customers shop online, they need a lot of information about the product they intend to buy. One of the pieces of information they need is an image of the product. The image helps customers confirm that they are buying the intended product and helps with quickly differentiating among a range of products.

Unfortunately the most common image around on web shops is the “image coming soon”.

Image coming soon

Completeness is a huge problem in Product Information Management (PIM) as examined in my previous post called Multi-Domain MDM and Data Quality Dimensions. A missing product image is a classic completeness issue for product master data.
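A completeness issue like missing product images is also easy to measure. A minimal Python sketch, with illustrative field names, that computes the share of SKUs still lacking an image:

```python
# Illustrative catalogue records: None and "" both count as a missing image.
catalogue = [
    {"sku": "A-100", "image_url": "https://example.com/a100.jpg"},
    {"sku": "B-200", "image_url": None},
    {"sku": "C-300", "image_url": ""},
]

# SKUs that would show "image coming soon" on the web shop.
missing = [p["sku"] for p in catalogue if not p["image_url"]]

# Completeness as the share of SKUs that do have an image.
completeness = 1 - len(missing) / len(catalogue)
```

Tracking this ratio per product category over time is a simple way to see whether the on-boarding process is actually getting it first time right.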

As a web shop you can collect a product image in several ways, namely:

  • Take the image yourself
  • Get it from the manufacturer

The former approach is cumbersome and usually only used for selected products for a special purpose of use. The latter is by far the most common. When you deal with many products and constant on-boarding of new products, you want a uniform and automated approach to collecting images along with all the other product information needed for the specific product category.

A clumsy variant of the latter is scraping it from your manufacturer’s website or even your competitor’s website. Or having someone far away doing that for you.

The better way is to start sharing product data and digital assets, including product images, within the ecosystems of manufacturers, distributors, retailers and end users. Stay tuned. A service for that is coming soon 🙂
