Data Warehouse vs Data Lake, Take 2

The differences between a data warehouse and a data lake have been discussed a lot, for example here and here.

To summarize, the main point in my eyes is this: in a data warehouse, the purpose and structure of data are determined before uploading, while with a data lake, the purpose and structure of data can be determined as late as before downloading. As a consequence, a data warehouse is characterized by rigidity and a data lake is characterized by agility.
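To make that schema-on-write versus schema-on-read distinction concrete, here is a minimal Python sketch; the fixed schema and all names are hypothetical, purely for illustration:

```python
import json

# Schema-on-write (data warehouse style): the structure is fixed
# before any data is uploaded; records that do not fit are rejected.
def load_into_warehouse(record: dict, table: list) -> None:
    required = {"product_id", "name", "price"}  # hypothetical fixed schema
    if set(record) != required:
        raise ValueError(f"record does not fit the warehouse schema: {record}")
    table.append(record)

# Schema-on-read (data lake style): store the raw record as-is and
# let each consumer decide which fields matter at download time.
def load_into_lake(record: dict, lake: list) -> None:
    lake.append(json.dumps(record))  # no structure imposed on upload

def read_from_lake(lake: list, fields: list) -> list:
    # The consumer picks the structure at download time.
    return [{f: json.loads(raw).get(f) for f in fields} for raw in lake]

lake: list = []
load_into_lake({"product_id": 1, "name": "Drill", "voltage": "230 V"}, lake)
load_into_lake({"product_id": 2, "colour": "red"}, lake)  # fine in a lake
print(read_from_lake(lake, ["product_id", "colour"]))
```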

Agility is a good thing, but of course you have to put some control on top of it, as reported in the post Putting Context into Data Lakes.

Furthermore, there are great opportunities in extending the use of the data lake concept beyond the traditional use of a data warehouse. You should think beyond using a data lake within a given organization and envision how you can share a data lake within your business ecosystem. Moreover, you should consider not only using the data lake for analytical purposes but also setting out to utilize a data lake for operational purposes.

The venture I am working on right now has this second take on a data lake. The Product Data Lake exists in the context of sharing product information between trading partners in an agile and process-driven way. The providers of product information, typically manufacturers and upstream distributors, upload product information according to the data management maturity level of their organization. For now, this information may very well be stored according to traditional data warehouse principles. The receivers of product information, typically downstream distributors and retailers, download product information according to the data management maturity level of their organization. This information may very well, for now, end up in a data store organized by traditional data warehouse principles.

The other approaches I have seen for sharing product information between trading partners are built on placing a data warehouse-like solution between the partners, with a high degree of consensus around purpose and structure. Such solutions are, in my eyes, only successful when restricted narrowly to a given industry, probably within a given geography, for a given span of time.

By utilizing the data lake concept in the exchange zone between trading partners, you can share information at your own pace of maturing in data management and take advantage of data sharing where it fits in your roadmap to digitalization. The business ecosystems where you participate are great sources of data for both analytical and operational purposes, and we cannot wait until everyone agrees on the same purpose and structure. It only takes two to start the tango.


Connecting Product Information

In our current work with the Product Data Lake cloud service, we are introducing a new way to connect product information that is stored at two different trading partners.

When doing that we deal with three kinds of product attributes:

  • Product identification attributes
  • Product classification attributes
  • Product features

Product identification attributes

The most commonly used scheme for a product identification attribute today is the GTIN (Global Trade Item Number). This numbering system developed from the UPC (Universal Product Code), which is most popular in North America, and the EAN (International Article Number, formerly European Article Number).

Besides this generally used system, there are heaps of industry-specific and geography-specific product identification systems.

In principle, every product in a given product data store should have a unique value in a product identification attribute.

In practice, attributes such as the model number at a given manufacturer and the product description are used for identifying products too.
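One reason GTINs work well as identifiers is the built-in check digit. A minimal Python sketch of the standard EAN-13/GTIN-13 check digit validation:

```python
def gtin13_is_valid(gtin: str) -> bool:
    """Validate a 13-digit GTIN (EAN-13) by its check digit.

    The first 12 digits are weighted alternately 1 and 3 from the left;
    the 13th digit makes the weighted sum a multiple of 10.
    """
    if len(gtin) != 13 or not gtin.isdigit():
        return False
    digits = [int(c) for c in gtin]
    total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits[:12]))
    return (10 - total % 10) % 10 == digits[12]

print(gtin13_is_valid("4006381333931"))  # True: a valid EAN-13
print(gtin13_is_valid("4006381333932"))  # False: wrong check digit
```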

Product classification attributes

A product classification attribute says something about what kind of product we are talking about. Thus, a range of products in a given product data store will have the same value in a product classification attribute.

As with product identification, there is no single commonly used standard. Some popular cross-industry classification standards are UNSPSC (United Nations Products and Service Code®) and eCl@ss, but many other standards exist too, as told in the post The World of Reference Data.

Besides the variety of standards, a further complexity is that these standards are published in versions over time. Even if two trading partners use the same standard, they may not use the same version, and they may have used various versions depending on when each product was on-boarded.
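A small illustration of why the version matters: two trading partners may use the same standard and even the same code, yet the categories are only directly comparable when the versions align too. The code value below is a hypothetical placeholder:

```python
from typing import NamedTuple

class Classification(NamedTuple):
    standard: str   # e.g. "UNSPSC"
    version: str    # the version of the standard in use
    code: str       # the category code within that version

# The same product as classified by two trading partners:
suppliers_view = Classification("UNSPSC", "19", "27111500")
retailers_view = Classification("UNSPSC", "14", "27111500")

def directly_comparable(a: Classification, b: Classification) -> bool:
    # Identical codes only denote the same category when the standard
    # AND the version match; otherwise a version mapping is needed.
    return a.standard == b.standard and a.version == b.version

print(directly_comparable(suppliers_view, retailers_view))  # False
```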

Product features

A product feature says something about a specific characteristic of a given product. Examples are general characteristics such as height, weight and colour, and specific characteristics within a given product classification, such as voltage for a power tool.

Again, there are competing standards for how to define, name and identify a given feature.

The Product Data Lake tagging approach

In the Product Data Lake we use a tagging system to indicate the type of each product attribute. This tagging system helps with:

  • Linking products stored at two trading partners
  • Linking attributes used at two trading partners

A product identification attribute is tagged starting with =, followed by the system and optionally the variant of the system used. Examples are ‘=GTIN’ for a Global Trade Item Number and ‘=GTIN-EAN13’ for a 13-digit EAN number. An industry- and geography-specific tag could be ‘=DKVVS’ for a Danish plumbing catalogue number (VVS nummer). ‘=MODEL’ is the tag for a model number and ‘=DESCRIPTION’ is the tag for the product description.

A product classification tag starts with a #. ‘#UNSPSC’ is for a United Nations Products and Service Code where ‘#UNSPSC-19’ indicates a given main version.

A product feature is tagged with the feature id, an @ and the feature (sometimes called property) standard. ‘EF123456@ETIM’ identifies a specific feature in ETIM (an international standard for technical products), and ‘ABC123@ECLASS’ is a reference to a property in eCl@ss.
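Based purely on the tag syntax described above, a parser could look like the following minimal Python sketch. This illustrates the notation only and is not the actual Product Data Lake implementation:

```python
def parse_pdl_tag(tag: str) -> dict:
    """Split a Product Data Lake attribute tag into its parts."""
    if tag.startswith("="):                 # identification attribute
        system, _, variant = tag[1:].partition("-")
        return {"kind": "identification", "system": system,
                "variant": variant or None}
    if tag.startswith("#"):                 # classification attribute
        standard, _, version = tag[1:].partition("-")
        return {"kind": "classification", "standard": standard,
                "version": version or None}
    if "@" in tag:                          # product feature
        feature_id, _, standard = tag.partition("@")
        return {"kind": "feature", "feature_id": feature_id,
                "standard": standard}
    raise ValueError(f"unrecognized tag: {tag}")

for t in ("=GTIN-EAN13", "#UNSPSC-19", "EF123456@ETIM"):
    print(parse_pdl_tag(t))
```

Because tags like ‘=GTIN’ and ‘#UNSPSC-19’ travel with the data, two trading partners can link products and attributes without first agreeing on identical internal structures.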


Data Quality 3.0 as a stepping-stone on the path to Industry 4.0

The title of this blog post is the topic of my international keynote at the Stammdaten Management Forum 2016 in Düsseldorf, Germany on the 8th November 2016. You can see the agenda for this conference, which starts on the 7th and ends on the 9th, here.

Data Quality 3.0 is a term I have used over the years here on the blog to describe how I see data quality, along with other disciplines within data management, changing. This change is about going from focusing on internal data stores and cleansing within them to focusing on external sharing of data and using your business ecosystem and third-party data to drastically speed up data quality improvement.

Industry 4.0 is the current trend of automation and data exchange in manufacturing technologies. When we talk about big data, most will agree that success with big data exploitation hinges on proper data quality within master data management. In my eyes, the same can be said about success with Industry 4.0. The data exchange that is the foundation of automation must be secured by commonly understood master data.

So this is the promising way forward: by using data exchange in business ecosystems, you improve the data quality of master data. This improved master data then ensures successful data exchange within Industry 4.0.


Ways of Sharing Product Data in Business Ecosystems

Sharing product data within business ecosystems of manufacturers, distributors, retailers and end users has grown dramatically in recent years, driven by the increased use of e-commerce and other customer self-service sales approaches.

At Product Data Lake we recently ran a survey about how companies share product data today. The figures were as seen below:

[Figure: survey results on how companies share product data]

The result shows that there are different approaches out there. Spreadsheets still rule the world, though they are closely followed in this survey by external data portals. Direct system-to-system approaches are also present, while supplier portals seem to be not that common.

At the Product Data Lake we aim to embrace those different approaches. Well, regarding the use of spreadsheets and digital asset files via email, our embrace is meant to be that of a constrictor snake: the Product Data Lake is the solution to end the hailstorms of spreadsheets with product data within cross-company supply chains.

For external data portals, the Product Data Lake offers the concept of a data reservoir. A data reservoir in the Product Data Lake can have an industry focus or a special focus on certain data elements, for example sustainability data as described in the post Sustainability Data in PIM.

Direct system-to-system exchange can be orchestrated through the Product Data Lake, and supplier portals can be served by the Product Data Lake. In that way, existing investments in those approaches, which typically are implemented to serve basic data elements shared with your top trading partners, can be supplemented by a method that caters for exchange with all your trading partners and covers all data elements and digital assets.


Launching too early or too late

Today, the 28th August 2016, is one month away from the official launch of the Product Data Lake.

When to launch is an essential question for every start-up. Launching too early with an immature product is one common pitfall; launching too late with a complex product that does not fit the market is another.

At Product Data Lake we hope we have struck the right balance. You can see what we have chosen to put up in the cloud in this document.

Right now both the technical team at Larion in Ho Chi Minh City and the commercial team in Copenhagen are working hard to get the last details in place for the launch, which will happen as told on LinkedIn in the post Meet The Product Data Lake.

One thing we do have in place is the company’s vehicle fleet. As you can see, this is in keeping with us being both environmentally and economically responsible.

[Photo: bicycles]


Emerging Database Technologies for Master Data

The MDM Landscape Q2 2016 from Information Difference is out. MDM vendors usually celebrate these yearly analyst reports with tweets and posts about their prominent position, like Informatica trailed by Stibo Systems for being in the top right corner, and Agility Multichannel closely followed by Orchestra Networks for having the happiest customers.

But the market analysis and the trends observed are good stuff as well.

This year I noticed the trend in the underlying technology used by MDM vendors to store the master data. The report says: “Some vendors have also decided to cut their ties with the relational database platform that has traditionally been the core storage mechanism for master data. Certain types of analysis e.g. of relationships between data, can be well handled by other types of emerging databases, such as graph databases like Neo4J and NoSQL databases like MongoDB. One vendor has recently switched its underlying platform entirely away from relational, and others have similar plans.”

While we usually see graph databases and NoSQL databases as something to use for analytical purposes, the trend of moving master data platforms to these technologies implies that operational use will be based on these technologies too.

This is close to me, as the master data service I’m working with right now is based on storing data for operational purposes in MongoDB (in the cloud).
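As an illustration of why a document store suits heterogeneous product master data, here is a minimal pymongo sketch. It assumes a reachable MongoDB instance; the connection string, database and collection names, and the document layout are hypothetical and not the actual Product Data Lake schema:

```python
from pymongo import MongoClient

# Hypothetical connection and names, for illustration only.
client = MongoClient("mongodb://localhost:27017")
products = client["mdm"]["products"]

# One product per document: the sets of identifiers, classifications
# and features can differ per product without a rigid relational schema.
products.insert_one({
    "identifiers": {"=GTIN-EAN13": "4006381333931", "=MODEL": "PT-500"},
    "classifications": {"#UNSPSC-19": "27111500"},
    "features": {"EF123456@ETIM": "230 V"},
})

# Operational lookup by identifier, using dot notation on a nested field.
print(products.find_one({"identifiers.=GTIN-EAN13": "4006381333931"}))
```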


Putting Context into Data Lakes

The term data lake has become popular along with the rise of big data. A data lake is a new way of storing data that is more agile than what we have been used to in data warehouses. This is mainly based on the principle that you should not have to think through every way of consuming data before storing the data.

This agility is also the main reason for fear around data lakes. The possible lack of control and standardization leads to warnings that a data lake will quickly develop into a data swamp.

In my eyes, we need solutions built on the data lake concept if we want business agility – and we do want that. But I also believe that we need to put the data in data lakes into context.

Fortunately, there are many examples of movements in that direction. A recent article called The Informed Data Lake: Beyond Metadata by Neil Raden has a lot of good arguments around a better context driven approach to data lakes.

As reported in the post Multi-Domain MDM 360 and an Intelligent Data Lake the data management vendor Informatica is on that track too.

In all humbleness, my vision for data lakes is that a context driven data lake can serve purposes beyond analytical use within a single company and become a driver for business agility within business ecosystems like cross company supply chains as expressed in the LinkedIn Pulse post called Data Lakes in Business Ecosystems.


Choosing the Best Term to Use in MDM

Right now I am working with an MDM (Master Data Management) service for sharing product data in the business ecosystems of manufacturers, distributors, retailers and end users of product information.

One of the challenges in putting such a service to the market is choosing the best term for the entities handled by the service.

Below is the current selection with the chosen term and some recognized alternate terms that are used frequently and found in the various standards that exist for exchanging product data:

[Table: chosen terms and recognized alternate terms]

Please comment if you think there are other English (or variant-of-English) terms that deserve to be in here.

1st Party, 2nd Party and 3rd Party Master Data

Until now, much of the methodology and technology in the Master Data Management (MDM) world has been about how to optimize the use of what can be called first-party master data. This is master data already collected within your organization, and the approaches to MDM and the MDM solutions offered have revolved around federating internal silos and obtaining a single source of truth within the corporate walls.

Besides that, third-party data has been around for many years, as described in the post Third-Party Data and MDM. Use of third-party data in MDM has mainly been about enriching customer and supplier master data from business directories and, to some degree, utilizing standardized pools of product data in various solutions.

Using third-party data for customer and supplier master data seems to be a very good idea, as exemplified in the post Using a Business Entity Identifier from Day One. This is because customer and supplier master data look pretty much the same to every organization. With product master data this is not the case, and that is why third-party sources for product master data may not be fully effective.

Second-party data is data you get directly from the external source. With customer and supplier master data we see that approach in self-registration services. My recommendation is to combine self-registration and third-party data in customer and supplier on-boarding processes. With product master data, I think leaning mostly on second-party connections in business ecosystems seems like the best way forward. There is more on that in a discussion on the LinkedIn MDM – Master Data Management Group.


Takeaways from MDM Summit Europe 2016

Yesterday I popped in at the combined Master Data Management Summit Europe 2016 and Data Governance Conference Europe 2016.

This event takes place Monday to Thursday, but unfortunately I only had time and money for the Tuesday this year. Therefore, my report will only cover takeaways from Tuesday’s events. On a side note, the difficulties in doing something pan-European must have troubled the organisers of this London event, as avoiding the UK May bank holidays has ended in starting on a Monday when most of the rest of Europe had a day off, it being Pentecost Monday.


Tuesday morning’s highlight for me was Henry Peyret of Forrester shocking the audience in his Data Governance keynote by busting the myth behind the good old excuse for doing nothing: the supposed imperative of top-level management support.

Back in 2013 I wondered if graph databases would become common in MDM. Certainly, graph databases have become the talk of the town, and it was good to learn from Andreas Weber how the Germany-based figurine manufacturer Schleich has made a home-grown PIM / Product MDM solution based on graph database technology.

Ivo-Paul Tummers of Jibes presented the MDM (and beyond) roadmap for the Dutch food company Sligro. I liked the path of embracing multi-channel, then omnichannel with self-service at the end of the road, and how connect will overtake collect during this journey. This is exactly the reason for being of the Product Data Lake venture I am working on right now.
