The title of this blog post is the title of, in my rapid eye movements, one of the best albums ever: Automatic for the People by R.E.M., which came out 25 years ago in 1992.
It began in manufacturing
Automation began in the manufacturing industry. Since then, automation has spread to most other industries. Not least within Information Technology, automation is part of the promise in almost every initiative.
When automating stuff, we should always be careful not to simply automate old, bad processes. Taken to the extreme, as Michael Hammer said back in 1990: Don’t Automate, Obliterate.
However, some of the most successful companies today were born in the information age and, by working intensively with information technology, deliver services that to a high degree automate processes of value to their customers.
How can we close the loop and bring that kind of modern automation back to where it began: in the manufacturing industry? The challenges of doing that were examined by Harri Juntunen in a guest blog post called Data Born Companies and the Rest of Us.
IT will come back to manufacturing
In all humbleness, we want to be part of that endeavor at Product Data Lake. Therefore, we are setting up a Product Data Push solution for manufacturers in order to solve one of the most severe issues for manufacturers today: a dysfunctional flow of product information out to whoever is managing the point of sales for the produced goods.
Automation is the end goal. But in order to get started, we accept upload of product information in whatever format, structure and state it is available in. We will then get it in shape to be pulled by retailers, etailers and other trading partners. We will use manual workforce for that and we will use Artificial Intelligence for that too. And in the end, it will be automatic for the people.
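To illustrate the "upload in whatever format, then get it in shape" idea, here is a minimal sketch of normalising product uploads from mixed formats into one common structure. The field names, aliases and the `normalise` function are hypothetical illustrations, not Product Data Lake's actual schema or API.

```python
# Minimal sketch: map CSV or JSON product uploads onto one canonical
# record shape. All field names and aliases here are made up.
import csv
import io
import json


def normalise(payload: str, fmt: str) -> list[dict]:
    """Turn a CSV or JSON upload into a list of canonical product records."""
    if fmt == "json":
        records = json.loads(payload)
    elif fmt == "csv":
        records = list(csv.DictReader(io.StringIO(payload)))
    else:
        raise ValueError(f"unsupported format: {fmt}")
    # Whatever the source called its columns, expose one canonical key set.
    aliases = {"sku": "sku", "item_no": "sku", "name": "name", "title": "name"}
    return [
        {aliases[k.lower()]: v for k, v in rec.items() if k.lower() in aliases}
        for rec in records
    ]


csv_upload = "item_no,title\nA-100,Mascara Black\n"
json_upload = '[{"sku": "A-100", "name": "Mascara Black"}]'
print(normalise(csv_upload, "csv") == normalise(json_upload, "json"))  # True
```

The point of the sketch is only that two structurally different uploads can end up as the same pull-ready records; in practice this mapping step is where the manual workforce and the Artificial Intelligence mentioned above come in.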
This survey points to the main reason why this does not yet take place: manufacturers need to mature in handling and consolidating product information internally before they are confident in sharing the detailed data elements (in an automated way) with their downstream partners. This subject was elaborated in the post Product Information Sharing Issue No 1: We Need to Mature Internally.
Issue no 3 is the apparent absence of a good solution for sharing product information with trading partners that suits the whole business ecosystem. I guess it is needless to say to regular readers of this blog that, besides being able to support issue no 1 and issue no 2, that solution is Product Data Lake.
“Organisations need architectural thinking beyond their organisational boundaries” and “The days of Enterprise Architecture taking a castle and moat approach are over”.
The end of the castle and moat thinking in Enterprise Architecture (and Business Information Architecture) is also closely related to the diminished importance of the brick and mortar ways of selling, being increasingly overtaken by eCommerce.
However, I have noticed some figures indicating that the brick and mortar way resists the decline, sustained by lingering castle and moat thinking:
Merchants, distributors and manufacturers need to move on from the castle and moat thinking in Enterprise Architecture and Business Information Architecture and start interacting effectively in their business ecosystems with product information.
The most votes in the current standing have gone to this answer:
We must first mature in handling our product information internally
Solving this issue is one of the things we do at Liliendahl.com. Besides being an advisory service in the Master Data Management (MDM) and Product Information Management (PIM) space, we have a developing collaboration with companies providing consultancy, cleansing and, when you come to that step, specialized technology for inhouse MDM and PIM. Take a look at Our Business Ecosystem.
If you are a manufacturer with a limited need for scaling the PIM technology part and already have much of your needs covered by an ERP and/or Product Lifecycle Management (PLM) solution, you may also fulfill your inhouse PIM capabilities and the external sharing needs in one go by joining Product Data Lake.
In this post Shamanth, exemplified with mascara products, discusses how PIM (Product Information Management) as an enterprise solution helps with effective data management, cutting down new product introduction timelines, multi-channel content management, adhering to regulations and facilitating advanced data analytics.
I agree with all the goodness gained from an enterprise PIM solution for these matters. PIM is the new bacon.
However, in the end Shamanth mentions PIM vendor portals: “The Vendor portal automates the product onboarding process and significantly cuts down operating costs by allowing Vendors to upload complete and curated product data, in bulk, into the system.”
I am sorry to say that I think that using a PIM vendor (or supplier) portal is like lipstick on a pig.
The concept looks tempting at first glance. But it is a flawed concept. The problem is that it is hostile to your trading partners. Your upstream trading partner may have hundreds of downstream trading partners, and if every one of these offers their own vendor (supplier) portal, that partner will have to learn and update hundreds of different portals.
All these portals will have a different look and feel coming from many different PIM solution providers.
The opposite concept, having suppliers provide customer product data portals, has the same flaw, just the other way around.
The best solution is having a PIM vendor neutral hub sitting in the product information exchange zone. This is the idea behind Product Data Lake.
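The arithmetic behind the hub idea can be sketched in a few lines. The participant counts below are made up purely for illustration: with portals on every side, each supplier-retailer pair needs its own integration, while a neutral hub needs only one link per party.

```python
# Back-of-the-envelope illustration (numbers are made up): portals on
# every side force one integration per trading pair; a neutral hub
# needs one integration per participant.
suppliers, retailers = 200, 500

point_to_point = suppliers * retailers   # every pair integrates directly
via_hub = suppliers + retailers          # each party integrates once, with the hub

print(point_to_point)  # 100000
print(via_hub)         # 700
```

However the real participant counts turn out, the N × M versus N + M shape of the problem is why a vendor neutral hub in the exchange zone scales where portals do not.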
Multi-Domain Master Data Management (MDM) and Product Information Management (PIM) are two interrelated disciplines within information management.
While we may see Product Information Management as the ancestor or sister to Product Master Data Management, we will in my eyes gain much more from Product Information Management if we treat this discipline in conjunction with Multi-Domain Master Data Management.
Party and product are the most commonly handled domains in MDM. I see their intersections as shown in the figure below:
Your company is not an island. You are part of a business ecosystem, where you may be:
Upstream as the maker of goods and services. For that you need to buy raw materials and indirect goods from the parties being your vendors. In a data driven world you also need to receive product information for these items. You need to sell your finished products to the midstream and downstream parties being your B2B customers. For that you need to provide product information to those parties.
Midstream as a distributor (wholesaler) of products. You need to receive product information from upstream parties being your vendors, perhaps enrich and adapt the product information and provide this information to the parties being your downstream B2B customers.
Downstream as a retailer or large end user of product information. You need to receive product information from upstream parties being your vendors and enrich and adapt the product information so you will be the preferred seller to the parties being your B2B customers and/or B2C customers.
Knowledge about who the parties being your vendors and/or customers are, and how they see product information, is essential to how you must handle product information. How you handle product information is essential to your trading partners.
The term infonomics does not yet run unmarked through my English spellchecker, but there is some information available on Wikipedia about infonomics. Infonomics is closely related to the often-mentioned phrase in data management about seeing data / information as an asset.
Much of what I have read about infonomics and seeing data / information as an asset is related to what we call first party data. That is data that is stored and managed within your own company.
Some information is also available in relation to third party data. That is data we buy from external parties in order to validate, enrich or even replace our own first party data. An example is a recent paper co-authored by infonomics guru Doug Laney of Gartner (the analyst firm). Fittingly, this paper has a high value if you want to buy it, as seen here.
Anyway, the relationship between data as an asset and the value of data is obvious when it comes to third party data, as we pay a given amount of money for data when acquiring third party data.
Second party data is data we exchange with our trading and other business partners. One example that has been close to me during the recent years is product information that follows exchange of goods in cross company supply chains. Here the value of the goods increasingly depends on the quality (completeness and other data quality dimensions) of the product information that follows the goods.
In my eyes, we will see an increasing focus on infonomics when it comes to exchanging goods – and the related second party data – in the future. Two basic factors will be:
Completeness of product information. The more (accurate, conformant and consistent) information that follows the goods, the more total value, as touched upon in the post Ecommerce Suffers without Data Quality.
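Completeness is the most straightforward of these data quality dimensions to measure. Here is a minimal sketch of scoring one product record against a required attribute list; the attribute names are illustrative assumptions, not a standard.

```python
# Minimal sketch: completeness as the share of required attributes
# that are present and non-empty. Attribute names are made up.
REQUIRED = ["sku", "name", "description", "weight_kg", "image_url"]


def completeness(record: dict) -> float:
    """Return the fraction of REQUIRED attributes filled in the record."""
    filled = sum(1 for attr in REQUIRED if record.get(attr) not in (None, ""))
    return filled / len(REQUIRED)


item = {"sku": "A-100", "name": "Mascara Black", "description": ""}
print(completeness(item))  # 0.4 (2 of 5 required attributes filled)
```

In practice the required attribute list varies by product category and by the receiving trading partner, which is exactly why sharing such requirements across the business ecosystem matters.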
A supplier portal (or vendor portal if you like) is usually an extension to a Product Information Management (PIM) solution. The idea is that the suppliers of products, and thus providers of product information, to you as a downstream participant (distributor or retailer) in a supply chain, can upload their product information into your PIM solution, thus relieving you of doing that. This process usually replaces the work of receiving spreadsheets from suppliers in the many situations where data pools are not relevant.
In my opinion and experience, this is a flawed concept, because it is hostile to the supplier. The supplier may have hundreds of downstream receivers of products and thus product information. If all of them introduced their own supplier portal, the supplier would have to learn and maintain hundreds of them. Only if you are bigger than your supplier and make up a substantial part of their business will they go along with you.
Another concept, which is the opposite, is also emerging. This is manufacturers and upstream distributors establishing PIM customer portals, where customers can fetch product information. This concept is in my eyes flawed in exactly the opposite way.
And then let us imagine that every provider of product information had their own PIM customer portal and every receiver had their own PIM supplier portal. Then no data would flow at all.
The below figure shows the cross border data flows on this planet. There are data flows within the worldwide regions and there are flows between them:
Now, a small part of this data will be product data exchanged between trading partners participating in global business ecosystems. While I have no data on whether product data is distributed in the same proportions as data in general, it is a qualified guess that the picture looks somewhat the same.
Exchanging product data across borders has some challenges:
Language is an issue. Product data will eventually have to be translated into the language of the end buyer if that is not the language in which the product data is originally provided. The definitions (metadata) of product data will also be subject to translation. Even the language of the transmission tools will not be English everywhere.
Regulations around product data are different from country to country.
The cultural content of the optimal data describing a product in structured data elements and related digital assets differs between countries and regions.