The Rise of Business Ecosystems in Data Management

There are many signs that we are entering the age of business ecosystems. A recent example is an article from Digital McKinsey, well worth reading, called Adopting an ecosystem view of business technology.

In it, the authors emphasize the need to adapt traditional IT functions to the opportunities and challenges of emerging technologies that embrace business ecosystems. I fully support that sentiment.

In my eyes, some of the emerging technologies we see are largely misunderstood as something meant to stay behind the corporate walls. My favorite example is the data lake concept. I do not think a data lake will often be a success solely within a single company, as explained in the post Data Lakes in Business Ecosystems.

The rise of technology for business ecosystems will also affect the data management roles we know today. For example, a data steward will be far more focused on external data than before, as elaborated in the post The Future of Data Stewardship.

Encompassing business ecosystems in data management is of course a huge challenge, one we have to face while most enterprises still have not reached an acceptable maturity when it comes to internal data and information governance. However, letting the outside in will also help in getting data and information right, as told in the post Data Sharing Is The Answer To A Single Version Of The Truth.


Alternatives to Product Data Lake

Within Product Information Management (PIM) there is a growing awareness that sharing product information between trading partners is a very important issue.

So, how do we do that? On a global scale, we could do it by using:

  • 1,234,567,890 spreadsheets
  • 2,345,678 customer data portals
  • 901,234 supplier data portals

Spreadsheets are the most common means of exchanging product information between trading partners today. The typical scenario is that a receiver of product information, being a downstream distributor, retailer or large end user, will have a spreadsheet for each product group that is sent to be filled in by each supplier each time a new range of products is to be on-boarded (and potentially each time a new piece of information is needed). As a provider of product information, being a manufacturer or upstream distributor, you will receive a different spreadsheet to fill in from each trading partner each time you are to deliver a new range of products (and potentially each time they need a new piece of information).

Customer data portals are a concept a provider of product information may have, plan to have or dream about. The idea is that each downstream trading partner can go to your customer data portal, structured in your way and your format, when they need product information from you. Your trading partner will then only have to deal with your customer data portal – and the 1,234 other customer data portals in their supplier range.

Supplier data portals are a concept a receiver of product information may have, plan to have or dream about. The idea is that each upstream trading partner can go to your supplier data portal, structured in your way and your format, when they have to deliver product information to you. Your trading partner will then only have to deal with your supplier data portal – and the 567 other supplier data portals in their business-to-business customer range.

Product Data Lake is the sound alternative to the above options. Hailstorms of spreadsheets do not work. If everyone has either a passive customer data portal or a passive supplier data portal, no one will exchange anything. The solution is that you, as a provider of product information, push your data in your structure and format into Product Data Lake each time you have a new product or a new piece of product information. As a receiver, you set up pull requests that give you data in your structure and format each time you have a new range of products, need a new piece of information or your trading partner has a new piece of information.
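To make that push and pull flow a bit more concrete, here is a minimal sketch in Python of how a provider could push product data in its own structure while a receiver pulls it remapped into its own structure. The class and method names are my own illustration, not the actual Product Data Lake API, and the trading partner names and fields are made up.

```python
# A minimal sketch of the push/pull idea described above, not the actual
# Product Data Lake API. All class, method and partner names are hypothetical.

class ProductDataLake:
    """Holds product records as each provider uploaded them, plus the
    mappings a receiver has registered to get data in its own structure."""

    def __init__(self):
        self.records = []          # (provider, payload) in the provider's format
        self.subscriptions = []    # (receiver, mapping function)

    def push(self, provider, payload):
        """Provider pushes a product record in its own structure and format."""
        self.records.append((provider, payload))

    def subscribe(self, receiver, mapping):
        """Receiver sets up a pull request: a mapping from the provider's
        structure into the receiver's own structure."""
        self.subscriptions.append((receiver, mapping))

    def pull(self, receiver):
        """Receiver pulls everything relevant, already remapped."""
        for rec, mapping in self.subscriptions:
            if rec == receiver:
                return [mapping(payload) for _, payload in self.records]
        return []


lake = ProductDataLake()

# The manufacturer pushes in its own field names ...
lake.push("ACME Manufacturing", {"ItemNo": "A-100", "Desc": "Garden hose, 25 m"})

# ... and the retailer maps those fields into its own structure when pulling.
lake.subscribe("MegaRetail", lambda p: {"sku": p["ItemNo"], "name": p["Desc"]})
print(lake.pull("MegaRetail"))   # [{'sku': 'A-100', 'name': 'Garden hose, 25 m'}]
```

The point of the sketch is that neither side has to adopt the other's structure; the mapping lives in the lake between them.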

Learn more about how that works in Product Data Lake Documentation and Data Governance.

Figure: Potential number of solutions / degree of dissatisfaction / total cost of ownership

Is blockchain technology useful within MDM?

This question was raised on this blog back in January this year in the post Tough Questions About MDM.

Since then, the term blockchain has been used more and more, both in general and in relation to Master Data Management (MDM). As you know, we love fancy new terms in our otherwise boring industry.

However, there are good reasons to consider using the blockchain approach when it comes to master data. A blockchain approach can be described as distributed consensus, which can be seen as the opposite of a centralized registry. Now that the MDM discipline has been around for more than a decade, most practitioners agree that a single source of truth is not practically achievable within a given organization of a certain size. Moreover, in the age of business ecosystems, it will be even harder to achieve between trading partners.
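To show what distributed consensus over master data changes could look like in its simplest form, here is a toy Python sketch of a hash-chained list of change records that any trading partner can re-verify. It only illustrates the general blockchain idea and is in no way how Product Data Lake is implemented; the GTIN and attribute values are made up.

```python
# A toy illustration of the chained-record idea behind a blockchain approach
# to master data; not how Product Data Lake is implemented.
import hashlib
import json

def block_hash(block):
    """Hash the block's content together with the previous block's hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_change(chain, change):
    """Append a master data change, linking it to the previous block."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"change": change, "prev": prev}
    block["hash"] = block_hash({"change": change, "prev": prev})
    chain.append(block)

def chain_is_consistent(chain):
    """Any trading partner can recompute the hashes and detect tampering."""
    prev = "0" * 64
    for block in chain:
        recomputed = block_hash({"change": block["change"], "prev": block["prev"]})
        if block["prev"] != prev or block["hash"] != recomputed:
            return False
        prev = block["hash"]
    return True

chain = []
# Example GTIN and attribute values are made up.
append_change(chain, {"gtin": "05712345678900", "attribute": "net weight", "value": "1.2 kg"})
append_change(chain, {"gtin": "05712345678900", "attribute": "colour", "value": "blue"})
print(chain_is_consistent(chain))  # True
```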

This way of thinking is the backbone of the MDM venture called Product Data Lake that I am working with right now. Yes, we love buzzwords. As if cloud computing, social network thinking, big data architecture and preparing for the Internet of Things were not enough, we can add the blockchain approach as a predicate too.

In Product Data Lake this approach is used to establish consensus about the information and digital assets related to a given product and, where it makes sense, each instance of that product (physical asset or thing). If you are interested in how that develops, why not follow Product Data Lake on LinkedIn?


Approaches to Sharing Product Information in Business Ecosystems

One of the most promising aspects of digitalization is sharing information in business ecosystems. In the Master Data Management (MDM) realm, we will in my eyes see a dramatic increase in sharing product information between trading partners, as touched upon in the post Data Quality 3.0 as a stepping-stone on the path to Industry 4.0.

Standardization (or standardisation)

A challenge in doing that is how we link the different ways of handling product information within each organization in a business ecosystem. While everyone agrees that a common standard is the best answer, we must on the other hand accept that using a common standard for every kind of product and every piece of information needed is quite utopian. We do not even have a common, uniquely spelled term for it in English.

Also, we must foresee that one organization will mature at a different pace than another organisation in the same business ecosystem.

Product Data Lake

These observations are the reasons behind the launch of Product Data Lake. In Product Data Lake we encompass the use of (in prioritized order, with a small matching sketch after the list):

  • The same standard in the same version
  • The same standard in different versions
  • Different standards
  • No standards
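As a small illustration of that prioritized order, the sketch below picks the best way two trading partners can exchange product data based on the standards and versions each of them has tagged their product information with. The function is my own example, and the standard name and version numbers are used for illustration only.

```python
# A minimal sketch of the prioritised matching described above; the function
# and the standard names/versions are examples of mine, not Product Data Lake code.

def exchange_mode(provider_standards, receiver_standards):
    """Pick the best exchange mode from lists of (standard, version) pairs,
    following the order: same standard and version, same standard in
    different versions, different standards, no standards."""
    shared_versions = set(provider_standards) & set(receiver_standards)
    if shared_versions:
        return ("same standard, same version", shared_versions.pop())
    provider_names = {name for name, _ in provider_standards}
    receiver_names = {name for name, _ in receiver_standards}
    shared_names = provider_names & receiver_names
    if shared_names:
        return ("same standard, different versions", shared_names.pop())
    if provider_standards and receiver_standards:
        return ("different standards, mapping required", None)
    return ("no standards, manual linking required", None)


# e.g. the provider tags with a newer version of a standard than the receiver runs
print(exchange_mode([("ETIM", "7")], [("ETIM", "6")]))
# ('same standard, different versions', 'ETIM')
```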

In order to link the product information, formats and structures at two trading partners, we support the following approaches (a simple dispatch sketch follows the list):

  • Automation based on product information tagged with a standard as explained in the post Connecting Product Information.
  • Ambassadorship, which is a role taken by a product information professional, who collaborates with the upstream and downstream trading partner in linking the product information. Read more about becoming a Product Data Lake ambassador here.
  • Upstream responsibility. Here the upstream trading partner makes the linking in Product Data Lake.
  • Downstream responsibility. Here the downstream trading partner makes the linking in Product Data Lake.
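The sketch below expresses the four approaches as a simple dispatch: automation links attributes that carry the same standard tag, while the other approaches hand the curated mapping over to an ambassador, the upstream partner or the downstream partner. The data structures and the tag value are placeholders of mine and not taken from Product Data Lake.

```python
# An illustrative sketch of the four linking approaches as a simple dispatch.
# Structures and the standard tag shown are placeholders, not Product Data Lake code.

def link_attributes(link, upstream_attrs, downstream_attrs):
    """Return attribute pairs (upstream name, downstream name) for one partnership."""
    if link["approach"] == "automation":
        # Both sides tagged their attributes with the same standard,
        # so identical tags can be linked without human involvement.
        return [(u["name"], d["name"])
                for u in upstream_attrs for d in downstream_attrs
                if u.get("standard_tag") and u["standard_tag"] == d.get("standard_tag")]
    if link["approach"] == "ambassadorship":
        return link["ambassador_mapping"]    # curated by a product information professional
    if link["approach"] == "upstream":
        return link["upstream_mapping"]      # maintained by the provider of product information
    if link["approach"] == "downstream":
        return link["downstream_mapping"]    # maintained by the receiver of product information
    raise ValueError(f"Unknown linking approach: {link['approach']}")


# Automation example: both partners tagged the attribute with the same (made up) standard tag.
upstream = [{"name": "Desc", "standard_tag": "TAG-0040"}]
downstream = [{"name": "ProductName", "standard_tag": "TAG-0040"}]
print(link_attributes({"approach": "automation"}, upstream, downstream))
# [('Desc', 'ProductName')]
```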

Data Governance

Regardless of the mix of the above approaches, you will need a cross company data governance framework to control the standards used and the rules that apply to the exchange of product information with your trading partners. Product Data Lake has established a partnership with one of the most recommended authorities in data governance: Nicola Askham – the Data Governance Coach.

For a quick overview please have a look at the Cross Company Data Governance Framework.

Please request more information here.


Cultured Freshwater Pearls of Wisdom

One of my current engagements is within jewelry – or is it jewellery? The use of these two words, respectively US English and British English, is a constant data quality issue when we try to standardize – or is it standardise? – on a common set of reference data and a business glossary within an international organization – or is it organisation?

Looking to international standards often does not settle the case. For example, a shop that sells this kind of bijouterie may be classified with a SIC code labelled “Jewelry store” or a NACE code labelled “Retail sale of watches and jewellery in specialised stores”.

A pearl is a popular gemstone. Natural pearls, meaning pearls that have occurred spontaneously in the wild, are very rare. Instead, most are farmed in fresh water and must therefore, by regulation in many countries, be referred to as cultured freshwater pearls.

My pearls of wisdom – or rather cultured freshwater pearls of wisdom – for building a business glossary and finding the commonly accepted wording for reference data to be used within your company are:

  • Start looking at international standards and pick what makes sense for your organization. If you can live with only that, you are lucky.
  • If not, grow the rest of the content for your business glossary and reference data by imitating the international or national standards for your industry, and use your own better wording and additions that make the most sense across your company, as sketched in the example below.
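As a small illustration, a business glossary entry could record the preferred company-wide term, the accepted variants and where the wording comes from. The structure below is my own sketch and is not tied to any particular glossary tool.

```python
# A sketch of business glossary entries holding the preferred term, the accepted
# variants and the origin of the wording; illustrative only, not a specific tool.

GLOSSARY = {
    "jewellery": {
        "preferred": "jewellery",                       # the spelling chosen company-wide
        "variants": ["jewelry", "jewellery"],           # US and British English
        "source": "imitated from the NACE wording",     # picked from an international standard
    },
    "cultured freshwater pearl": {
        "preferred": "cultured freshwater pearl",
        "variants": ["freshwater pearl", "cultured pearl"],
        "source": "own wording",                        # grown in-house where the standards stop
    },
}

def preferred_term(term):
    """Map any accepted variant to the company's preferred wording."""
    for entry in GLOSSARY.values():
        if term.lower() == entry["preferred"] or term.lower() in entry["variants"]:
            return entry["preferred"]
    return None

print(preferred_term("Jewelry"))   # 'jewellery'
```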

And oh, I know that pearls of wisdom are often used to imply the opposite of wisdom 🙂


Takeaways from MDM Summit Europe 2016

Yesterday I popped in at the combined Master Data Management Summit Europe 2016 and Data Governance Conference Europe 2016.

This event runs from Monday to Thursday, but unfortunately I only had time and money for the Tuesday this year. Therefore, my report covers only takeaways from Tuesday’s sessions. On a side note, the difficulties in doing something pan-European must have troubled the organisers of this London event, as avoiding the UK May bank holidays ended up with a start on a Monday when most of the rest of Europe had a day off, it being Pentecost Monday.


Tuesday morning’s highlight for me was Henry Peyret of Forrester shocking the audience in his data governance keynote by busting the myth behind the good old excuse for doing nothing: that top-level management support is an absolute imperative.

Back in 2013 I wondered whether graph databases would become common in MDM. Graph databases have certainly become the talk of the town, and it was good to learn from Andreas Weber how the Germany-based figurine manufacturer Schleich has built a home-grown PIM / Product MDM solution based on graph database technology.

Ivo-Paul Tummers of Jibes presented the MDM (and beyond) roadmap for the Dutch food company Sligro. I liked the path of embracing multi-channel, then omnichannel with self-service at the end of the road, and how connect will overtake collect during this journey. This is exactly the reason for being of the Product Data Lake venture I am working on right now.


It is not all about People or Processes or Technology

When following the articles, blog posts and other inspirational material in the data management realm, you frequently stumble upon sayings about a single angle that explains what it is all about, like:

  • It is all about people, meaning that if you can change and control the attitude of the people involved in data management, everything will be just fine. The problem is that people have been around for thousands of years and we have not nailed that one yet – and probably will not do so in isolation within the data management realm. But sure, a lot of consultancy fees will still go down that drain.
  • It is all about processes. Yes, it is. The only problem is that processes depend on people and technology.
  • It is all about technology. Well, no one actually says so. However, relying on that sentiment – and that shit does happen – is a frequent reason why data management initiatives go wrong.

The trick is to find a balance between a worthwhile people-focused approach, a heartfelt process-oriented way of going forward and a solid methodology for exploiting technology in the good cause of better data management, all aligned with achieving business benefits.

How hard can it be?


The ups and downs with anecdotal evidence in data management

Anecdotes are powerful when working to raise awareness of opportunities in the data quality, data governance and Master Data Management (MDM) realm. Such anecdotes are most often either external or internal data and information train wrecks, while success stories are rarer – at least until now in my experience.

Using anecdotal evidence is useful when identifying major pain points with potential for improvement and indispensable when striving for a common understanding of the issues to be solved.

However, within data management as in all other disciplines, it is dangerous to jump to conclusions based on anecdotal evidence. We need more scientific evidence to nail down the collection of issues and the prioritization of proper solutions.

The anecdotal evidence carrying the most weight is that included according to the HiPPO (Highest Paid Person’s Opinion) principle, as examined in the post When Rhino Hunt and the HiPPO Principle makes a Perfect Storm. Here we may have a clash between getting executive sponsorship and support for a given programme and actually doing the right things within the programme, based on scientific evidence.

What are your experiences and lessons learned? How have you managed to balance anecdotal evidence and scientific evidence in data management?
