Five Product Classification Standards

When working with Product Master Data Management (MDM) and Product Information Management (PIM), one important facet is the classification of products. You can use your own internal classification(s), meaning product grouping and hierarchy management within your organization, and/or you can use one or several external classification standards.

Five External Standards

Some of the external standards I have come across are:

UNSPSC

The United Nations Standard Products and Services Code® (UNSPSC®), managed by GS1 US™ for the UN Development Programme (UNDP), is an open, global, multi-sector standard for classification of products and services. This standard is often used in public tenders and at some marketplaces.

GPC

GS1 has created a separate classification standard named GPC (Global Product Classification), which is used within its data synchronization network, the Global Data Synchronization Network (GDSN).

Commodity Codes / Harmonized System (HS) Codes

Commodity codes, nowadays harmonized worldwide through the Harmonized System (HS), represent the key classifier in international trade. They determine customs duties, import and export rules and restrictions, as well as documentation requirements. National statistical bureaus may require these codes from businesses doing foreign trade.

eClass

eCl@ss is a cross-industry product data standard for the classification and description of products and services, with an emphasis on being an ISO/IEC compliant industry standard both nationally and internationally. The classification guides the eCl@ss standard for product attributes (called properties in eCl@ss) that are needed for a product with a given classification.

ETIM

ETIM develops and manages a worldwide uniform classification for technical products. This classification guides the ETIM standard for product attributes (called features in ETIM) that are needed for a product with a given classification.

The Competition and The Neutral Hub

If you click on the links to some of these standards you may notice that they are actually competing against each other in the way they represent themselves.

At Product Data Lake we are the neutral hub in the middle of everyone. We cover the mapping of your internal grouping and tagging to any external standard. Our roadmap includes closer integration with the various external standards, embracing both product classification and product attribute requirements in multiple languages where provided. We do that with the aim of letting you exchange product information with your trading partners, who probably classify products differently from you.
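As a simple illustration of what such a mapping can look like, here is a hypothetical Java sketch; the internal group key, class names and classification codes are invented for the example and are not taken from any of the standards or from Product Data Lake:

```java
import java.util.List;
import java.util.Map;

/** Hypothetical sketch: relating an internal product group to external classification codes. */
public class ClassificationMappingSketch {

    // One classification assignment in an external standard (codes below are illustrative only).
    record ExternalClassification(String standard, String code, String label) {}

    public static void main(String[] args) {
        // Internal group key -> its classifications in external standards.
        Map<String, List<ExternalClassification>> mapping = Map.of(
            "LAPTOP-15-INCH", List.of(
                new ExternalClassification("UNSPSC", "43211500", "Computers (illustrative code)"),
                new ExternalClassification("GPC",    "10001111", "Computing devices (illustrative code)"),
                new ExternalClassification("ETIM",   "EC000000", "Notebook (illustrative code)")
            )
        );

        // A trading partner who classifies by UNSPSC asks for that view of the internal group:
        mapping.get("LAPTOP-15-INCH").stream()
               .filter(c -> c.standard().equals("UNSPSC"))
               .forEach(c -> System.out.println(c.code() + " " + c.label()));
    }
}
```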

Plug and Play – The Future for Data

What does the future for data and the need for power when travelling have in common? A lot, as Ken O’Connor explains in today’s guest blog post:

Bob Lambert wrote an excellent article recently summarising the New Direction for Data set out at Enterprise Data World 2017 (#EDW17).  As Bob points out “Those (organisations) that effectively manage data perform far better than organisations that don’t”. A key theme from #EDW17 is for data management professionals to “be positive” and to focus on the business benefits of treating data as an asset.  On a related theme, Henrik on this blog has been highlighting the emergence and value to be derived from business ecosystems and digital platforms.  

Building on Bob and Henrik’s ideas, I believe we need a paradigm shift in the way we think and talk about data.  We need to promote the business benefits of data sharing via “Plug and Play Data”.

When we travel, we expect to be able to use our mobile devices anywhere in the world. We do this by using universal adaptors that convert country-specific plug shapes and power levels for us.

We need to apply the same concept to data. To enable data to be more easily reused across and between enterprises, we need to create “plug and play data”.         

How can organisations create “plug and play data”?

In the past, organisations could simply verify that the data they create / capture / ingest and share conforms to the business rules for their own organisation. That “silo-based” approach is no longer tenable. In today’s world, as Henrik points out, organisations increasingly play a role within a business ecosystem, as part of a data supply chain. Hence they need to exchange data with business partners. To do this, they need to apply a “Data Sharing Concept” within a “Common Data Architecture” as set out by Michael Brackett in his excellent books “Data Resource Simplexity” and “Data Resource Integration”. Michael describes a “Data Sharing Medium”, which is similar in concept to the universal adaptor above. For data sharing, this involves organisations within a given business ecosystem agreeing a “preferred form” for data sharing.

[Figure: Data sharing via the data sharing medium]

I quote Michael: “The Common Data Architecture provides a construct for readily sharing data. When the source data are not in the preferred form, the source organisation must translate those non-preferred data to the preferred form before being shared over the data sharing medium. Similarly, when the target organisation uses the preferred data, they can be readily received from the data sharing medium. When the target organisation does not use preferred data, they must translate the preferred data to their non-preferred form. The “data sharing concept” states that shared data are transmitted over the data sharing medium as preferred data. Any organisation, whether source or target, that does not have or use data in the preferred form is responsible for translating the data.”
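As a minimal sketch of this idea (the class and method names below are invented for illustration and are not taken from Michael Brackett’s books), each organisation only ever translates between its own form and the agreed preferred form, never directly between every pair of partner formats:

```java
/** Hypothetical sketch of the "preferred form" data sharing concept. */
public class DataSharingSketch {

    // The preferred form agreed within the business ecosystem, e.g. ISO country codes.
    record PreferredCustomer(String name, String isoCountryCode) {}

    // A source organisation holding data in its own non-preferred form.
    record SourceCustomer(String fullName, String countryName) {}

    // Source side: translate non-preferred data to the preferred form before sharing.
    static PreferredCustomer toPreferred(SourceCustomer c) {
        String iso = switch (c.countryName()) {   // simplistic lookup for the example
            case "Ireland" -> "IE";
            case "Denmark" -> "DK";
            default -> "??";
        };
        return new PreferredCustomer(c.fullName(), iso);
    }

    // Target side: translate preferred data to the target's own non-preferred form.
    static String fromPreferred(PreferredCustomer c) {
        return c.name() + " (" + c.isoCountryCode() + ")";   // the target's internal layout
    }

    public static void main(String[] args) {
        // Data travels over the data sharing medium in the preferred form.
        PreferredCustomer shared = toPreferred(new SourceCustomer("Ken O'Connor", "Ireland"));
        System.out.println(fromPreferred(shared));
    }
}
```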

In conclusion:

We Data Management Professionals need to educate both Business and IT on the need for, and the benefits of “plug and play data”. We need to help business leaders to understand that data is no longer used by just one business process. We need to explain that even tactical solutions within Lines of Business need to consider Enterprise and business ecosystem demands for data such as:

  1. Data feeds into regulatory systems
  2. Data feeds to and from other organisations in the supply chain
  3. Ultimate replacement of the application with a newer generation system

We must educate the business on the increasingly dynamic information requirements of the Enterprise and beyond – which can only be satisfied by creating “plug and play data” that can be easily reused and interconnected.

Ken O’Connor is an independent consultant with extensive experience helping multi-national organisations satisfy the Data Quality / Data Governance requirements of regulatory compliance programmes such as GDPR, Solvency II, BASEL II/III, Anti-Money Laundering, Anti-Fraud, Anti-Terrorist Financing and BCBS 239 (Risk Data Aggregation and Reporting).

Ken’s “Data Governance Health Check” provides an independent, objective assessment of your organisation’s internal data management processes to help you to identify gaps you may need to address to comply with regulatory requirements.

Ken is a founding board member of the Irish Data Management Association (DAMA) chapter. He writes a popular industry blog that regularly focuses on a wide range of data management issues faced by modern organisations: (Kenoconnordata.com).

You may contact Ken directly by emailing: Ken@Kenoconnordata.com

Ecosystems are The Future of Digital and MDM

A recent blog post by Dan Bieler of Forrester argues that you should Power Your Digital Ecosystems with Business Platforms.

In his post, Dan Bieler explains that such business platforms support:

  • The infrastructure that connects ecosystem participants. Business platforms help organizations transform from local and linear ways of doing business toward virtual and exponential operations.

  • A single source of truth for ecosystem participants. Business platforms become a single source of truth for ecosystems by providing all ecosystem participants with access to the same data.

  • Business model and process transformation across industries. Platforms support agile reconfiguration of business models and processes through information exchange inside and between ecosystems.

A single source of truth (or trust) for ecosystem participants is something that rings a bell for every Master Data Management (MDM) practitioner. The news is that the single source will not be a single source within a given enterprise, but a single source that encompasses the business ecosystem of trading partners.

[Figure: Gartner digital platforms]

Gartner, the other analyst firm, has also recently been advocating digital platforms, where the ecosystem type is the one in the top right. As stated by Gartner: Ecosystems are the future of digital.

I certainly agree. This is why all of you should get involved at Master Data Share.

 

Multi-Domain MDM and PIM, Party and Product

Multi-Domain Master Data Management (MDM) and Product Information Management (PIM) are two interrelated disciplines within information management.

While we may see Product Information Management as the ancestor or sister to Product Master Data Management, we will, in my eyes, gain much more from Product Information Management if we treat this discipline in conjunction with Multi-Domain Master Data Management.

Party and product are the most commonly handled domains in MDM. I see their intersections as shown in the figure below:

[Figure: Multi-Side MDM]

Your company is not an island. You are part of a business ecosystem, where you may be:

  • Upstream as the maker of goods and services. For that you need to buy raw materials and indirect goods from the parties being your vendors. In a data-driven world you also need to receive product information for these items. You need to sell your finished products to the midstream and downstream parties being your B2B customers. For that you need to provide product information to those parties.
  • Midstream as a distributor (wholesaler) of products. You need to receive product information from upstream parties being your vendors, perhaps enrich and adapt the product information and provide this information to the parties being your downstream B2B customers.
  • Downstream as a retailer or large end user of product information. You need to receive product information from upstream parties being your vendors and enrich and adapt the product information so you will be the preferred seller to the parties being your B2B customers and/or B2C customers.

Knowing who the parties being your vendors and/or customers are, and how they see product information, is essential to how you must handle product information. How you handle product information is, in turn, essential to your trading partners.

You can apply party and product interaction for business ecosystems as explained in the post Party and Product: The Core Entities in Most Data Models.

3 Old and 3 New Multi-Domain MDM Relationship Types

Master Data Management (MDM) has traditionally been mostly about party master data management (including not least customer master data management) and product master data management. Location master data management has been the third domain, and then asset master data management is seen as the fourth – or forgotten – domain.

With the rise of the Internet of Things (IoT), the asset – seen as a thing – is seriously entering the MDM world. In buzzword language, these things are smart devices that produce big data we can use to gain much more insight about parties (in customer roles), products, locations and the things themselves.

In the old MDM world with party, product and location we had 3 types of relationships between entities in these domains. With the inclusion of asset/thing we have 3 more exciting relationship types.

[Figure: Multi-Domain MDM relations]

The Old MDM World

1: Handling the relationship between a party and its location(s) is one of the core capabilities of a proper party MDM solution. The good old customer table is just not good enough, as explained in the post A Place in Time.

2: Managing the relationship between parties and products is essential in supplier master data management and tracking the relationship between customers and products is a common use case as exemplified in the post Customer Product Matrix Management.

3:  Some products are related to a location as told in the post Product Placement.

The New MDM World

4: We need to be aware of who owns, operates, maintains and holds other party roles for any smart device that is part of the Internet of Things.

5: In order to make sense of the big data coming from fixed or moving smart devices we need to know the location context.

6: Further, we must include the product information for the product model of the smart devices.
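A minimal data model sketch of these six relationship types (the entity and field names are invented for illustration):

```java
import java.time.Instant;

/** Hypothetical sketch of the six multi-domain MDM relationship types. */
public class MultiDomainRelationsSketch {

    record Party(String partyId, String name) {}
    record Product(String productId, String description) {}
    record Location(String locationId, String address) {}
    record Thing(String thingId, String serialNumber) {}           // a smart device / IoT asset

    // The old MDM world:
    record PartyLocation(Party party, Location location, String role, Instant validFrom) {}  // 1
    record PartyProduct(Party party, Product product, String role) {}                        // 2 supplier/customer of
    record ProductLocation(Product product, Location location) {}                            // 3 placed/stocked at

    // The new MDM world:
    record PartyThing(Party party, Thing thing, String role) {}     // 4 owns / operates / maintains
    record ThingLocation(Thing thing, Location location) {}         // 5 the location context of the device
    record ThingProduct(Thing thing, Product product) {}            // 6 the product model of the device

    public static void main(String[] args) {
        Party owner = new Party("P1", "Acme Facility Services");
        Thing sensor = new Thing("T1", "SN-0001");
        Location site = new Location("L1", "1 Example Street");
        Product model = new Product("PR1", "Temperature sensor, model X (illustrative)");

        // Relationship types 4, 5 and 6 for one smart device:
        System.out.println(new PartyThing(owner, sensor, "operates"));
        System.out.println(new ThingLocation(sensor, site));
        System.out.println(new ThingProduct(sensor, model));
    }
}
```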

Expanding to Business Ecosystems

In my eyes, it is hard to handle the 3 old relationship types separately within a given enterprise. When including things and the 3 new relationship types, expanding master data management to the business ecosystems you have with trading partners becomes imperative, as elaborated in the post Data Management Platforms for Business Ecosystems.

How MDM, PIM and DAM Stick Together

When working with product data I usually put the data into this five-level model:

[Figure: Five product data levels]

The model is explained in the post Five Product Data Levels.

A recent post by Simon Walker of Gartner, the analyst firm, outlined the possible system landscape. The post is called Creating the 360-Degree view of Product.


Simon defines these three kinds of platforms for managing a 360-degree product data view:

  • MDM of product master data solutions help manage structured product data for enterprise operational and analytical use cases
  • PIM solutions help extend structured product data through the addition of rich product content for sales and marketing use cases
  • DAM solutions help users create and manage digital multimedia files for enterprise, sales and marketing use cases

These two models fit quite well together:

[Figure: MDM, PIM and DAM combined]

And oh, when it comes to creating a business ecosystem digital platform for exchanging product data with trading partners, the best model looks like this:

[Figure: MDM, PIM, DAM and Product Data Lake]

Learn more about Product Data Lake here.

5 Data Management Mistakes to Avoid during Data Integration Projects


I am very pleased to welcome today’s guest blogger. Canada-based Maira Bay de Souza of Product Data Lake Technologies shares her view on data integration and the mistakes to avoid when doing it:

Throughout my 5 years of working with Data Integration, Data Migration and Data Architecture, I’ve noticed some common (but sometimes serious) mistakes related to Data Management and Software Quality Management. I hope that by reading about them you will be able to avoid them in your future Data Integration projects.

 1 Ignoring Data Architecture

Defining the Data Architecture in a Data Integration project is the equivalent of defining the Requirements in a normal (non-data-oriented) software project. A normal software application is (most of the time) defined by its actions and interactions with the user. That’s why, in the first phase of software development (the Requirements Phase), one of the key steps is creating Use-Cases (or User Stories). On the other hand, a Data Integration application is defined by its operations on datasets. Interacting with data structures is at the core of its functionality. Therefore, we need to have a clear picture of what these data structures look like in order to define what operations we will do on them.

It is widely accepted in normal software development that having well-defined requirements is key to success. The common saying “If you don’t know where you’re going, any road will get you there” also applies to Data Integration applications. When ETL developers don’t have a clear definition of the Data Architecture they’re working with, they will inevitably make assumptions. Those assumptions might not always be the same as the ones you made or, worse, the ones your customer made.

(see here and here for more examples of the consequences of not finding software bugs early in the process due to badly defined requirements)

Simple but detailed questions like “can this field be null or not?” need to be answered. If the wrong decision is made, it can have serious consequences. Most senior Java programmers like me are well aware of the infamous “Null Pointer Exception”. If you feed a null value to a variable that doesn’t accept null (but you don’t know that that’s the case because you’ve never seen any architecture specification), you will get that error message. Because it is a vague message, it can be time-consuming to debug and find the root cause (especially for junior programmers): you have to open your ETL in the IDE, go to the code view, find the line of code that is causing the problem (sometimes you might even have to run the ETL yourself), then find where that variable is located in the design view of your IDE, add a fix there, test it to make sure it’s working and then deploy it in production again. That also means that normally, this error causes an ETL application to stop functioning altogether (unless there is some sort of error handling). Depending on your domain, that can have serious, life-threatening consequences (for example, healthcare or aviation), or lead to major financial losses (for example, e-commerce).
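To make the point concrete, here is a contrived Java snippet (not taken from any real ETL tool) showing how an undocumented nullable field stops a transformation, and how making the nullability explicit leads to a deliberate handling decision instead:

```java
import java.util.HashMap;
import java.util.Map;

/** Contrived example: a nullable source field crashing a transformation step. */
public class NullFieldExample {

    public static void main(String[] args) {
        Map<String, String> sourceRecord = new HashMap<>();
        sourceRecord.put("customerName", "Acme Inc");
        sourceRecord.put("postalCode", null);          // nobody documented that this field can be null

        // Crashes with a NullPointerException because trim() is called on a null value:
        // String cleaned = sourceRecord.get("postalCode").trim();

        // Defensive version once the data architecture makes the nullability explicit:
        String raw = sourceRecord.get("postalCode");
        String cleaned = (raw == null) ? "" : raw.trim();
        System.out.println("Cleaned postal code: '" + cleaned + "'");
    }
}
```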

 Knowing the format, boundaries, constraints, relationships and other information about your data is imperative to developing a high quality Data Integration application. Taking the time to define the Data Architecture will prevent a lot of problems down the road.

2 Doing Shallow Data Profiling

Data profiling is another key element to developing good Data Integration applications.

 When doing data profiling, most ETL developers look at the current dataset in front of them, and develop the ETL to clean and process the data in that dataset. But unfortunately that is not enough. It is important to also think about how the dataset might change over time.

For example, let’s say we find a customer in our dataset with the postal code in the city field. We then add an instruction in the ETL that, when it finds that specific customer’s data, extracts the postal code from the city field and puts it in the postal code field. That works well for the current dataset. But what if next time we run the ETL another customer has the same problem? (It could be because the postal code field only accepts numbers and now we are starting to have Canadian customers, who have numbers and letters in their postal codes, so the user started putting the postal code in the city field.)

Not thinking about future datasets means your ETL will only work for the current dataset. However, we all know that data can change over time (as seen in the example above) – and if it is inputted by the user, it can change unpredictably. If you don’t want to be making updates to your ETL every week or month, you need to make it flexible enough to handle changes in the dataset. You should use data profiling not only to analyse current data, but also to deduce how it might change over time.
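As a sketch of the difference (the regular expression assumes Canadian-style postal codes and is only an example), the cleaning rule below is written against the recurring pattern rather than against the one customer found during profiling:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Sketch: a cleaning rule written for the pattern, not for one observed record. */
public class PostalCodeCleaningSketch {

    // Canadian postal code pattern, e.g. "K1A 0B1" (example pattern only).
    private static final Pattern CA_POSTAL =
        Pattern.compile("[A-Za-z]\\d[A-Za-z]\\s?\\d[A-Za-z]\\d");

    /** If a postal code has been typed into the city field, move it to the postal code field. */
    static String[] clean(String city, String postalCode) {
        Matcher m = CA_POSTAL.matcher(city);
        if ((postalCode == null || postalCode.isBlank()) && m.find()) {
            String found = m.group();
            return new String[] { city.replace(found, "").trim(), found.toUpperCase() };
        }
        return new String[] { city, postalCode };
    }

    public static void main(String[] args) {
        // Works for any customer who made the same mistake, not just the one seen during profiling.
        String[] fixed = clean("Ottawa K1A 0B1", "");
        System.out.println("city='" + fixed[0] + "', postalCode='" + fixed[1] + "'");
    }
}
```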

Doing deep data profiling in the beginning of your project means you will spend less time making updates to the Data Cleaning portion of your ETL in the future.

 3 Ignoring Data Governance

 This point goes hand-in-hand with my last one.

A good software quality professional will always think about the “what if” situations when designing their tests (as opposed to writing tests just to “make sure it works”). In my 9 years of software testing experience, I can’t tell you how many times I asked a requirements analyst “what if the user does/enters [insert strange combination of actions/inputs here]?” and the answer was almost always “the user will never do that”. But the reality is that users are unpredictable, and there have been several times when the user did what they “would never do” with the applications I’ve tested.

The same applies to data being inputted into an ETL. Thinking that “data will never come this way” is similar to saying “the user will never do that”. It’s better to be prepared for unexpected changes in the dataset instead of leaving them to be fixed later on, when the problem has already spread across several different systems and data stores. For example, it’s better to add validation steps to make sure that a postal code is in the right format, instead of doing no validation and later finding provinces in the postal code field. Depending on your data structures, how dirty the data is and how widespread the problem is, the cost to clean it can be prohibitive.
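For example, a simplified validation gate (again assuming Canadian postal codes; the method and class names are invented) can flag non-conforming values at the point of entry rather than letting them spread downstream:

```java
import java.util.ArrayList;
import java.util.List;

/** Simplified validation gate: flag bad postal codes before they are loaded. */
public class PostalCodeValidationSketch {

    static boolean isValidCaPostalCode(String value) {
        // Accepts e.g. "K1A 0B1" or "K1A0B1"; anything else (such as a province name) is flagged.
        return value != null && value.toUpperCase().matches("[A-Z]\\d[A-Z]\\s?\\d[A-Z]\\d");
    }

    public static void main(String[] args) {
        List<String> incoming = List.of("K1A 0B1", "Ontario", "V6B3K9");
        List<String> rejected = new ArrayList<>();

        for (String value : incoming) {
            if (!isValidCaPostalCode(value)) {
                rejected.add(value);               // route to a data steward instead of loading it
            }
        }
        System.out.println("Flagged for review: " + rejected);
    }
}
```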

This also relates to my first point: a well-defined Data Architecture is the starting point to implementing Data Governance controls.

When designing a high quality Data Integration application, it’s important to think of what might go wrong, and imagine how data (especially if it’s inputted by a human) might be completely different from what you expect. As demonstrated in the example above, designing a robust ETL can save hours of expensive manual data cleaning in the future.

 4 Confusing Agile with Code-And-Fix

A classic mistake in startups and small software companies (especially those run by people without a comprehensive education or background in Software Engineering) is rushing into coding and leaving design and documentation behind. That’s why the US Military and CMU created the CMMI: to measure how (dis)organized a software company is, and to help them move from amateur to professional software development. However, the compliance requirements for a high maturity organization are impractical for small teams. So things like XP, Agile, Scrum, Lean, etc. have been used to make small software teams more organized without getting slowed down by compliance paperwork.

Those techniques, along with iterative development, proved to be great for startups and innovative projects due to their flexibility. However, they can also be a slippery slope, especially if managers don’t understand the importance of things like design and documentation. When the deadlines are hanging over a team’s head, the tendency is always to jump into coding and leave everything else behind. With time, managers start confusing agile and iterative development with code-and-fix.

 Throughout my 16 years of experience in the Software Industry, I have been in teams where Agile development worked very well. But I have also been in teams where it didn’t work well at all – because it was code-and-fix disguised as Agile. Doing things efficiently is not the same as skipping steps.

Unfortunately, in my experience this is no different in ETL development. Because it is such a new and less widespread discipline (compared to, for example, web development), there aren’t a lot of software engineering tools and techniques around it. ETL design patterns are still in their infancy, still being researched and perfected in the academic world. So the slippery slope from Agile to code-and-fix is even more tempting.

 What is the solution then? My recommendation is to use the proven, existing software engineering tools and techniques (like design patterns, UML, etc) and adapt them to ETL development. The key here is to do something. The fact that there is a gap in the industry’s body of knowledge is no excuse for skipping requirements, design, or testing, and jumping into “code-and-fix disguised as Agile“. Experiment, adapt and find out which tools, methodologies and techniques (normally used in other types of software development) will work for your ETL projects and teams.

5 Not Paying Down Your Technical Debt

The idea of postponing parts of your to-do list until later because you only have time to complete a portion of them now is not new. But unfortunately, with the popularization of agile methodologies and incremental development, Technical Debt has become an easy way out of running behind schedule or budget (and of masking the root cause of the problem, which was an unrealistic estimate).

As you might have guessed, I am not the world’s biggest fan of Technical Debt. But I understand that there are time and money constraints in every project. And even the best estimates can sometimes be very far from reality – especially when you’re dealing with a technology that is new for your team. So I am ok with Technical Debt, when it makes sense.

However, some managers seem to think that technical debt is a magic box where we can place all our complex bugs, and somehow they will get less complex with time. Unfortunately, in my experience, what happens is the exact opposite: the longer you owe technical debt (and the more you keep adding to it), the more complex and patchy the application becomes. If you keep developing on top of – or even around – an application that has a complex flaw, it is very likely that you will only increase the complexity of the problem. Even worse, if you keep adding other complex flaws on top of – or again, even around – it, the application becomes exponentially complex. Your developers will want to run away each time they need to maintain it. Pretty soon you end up with a piece of software that looks more like a Frankenstein monster than a clean, cohesive, elegant solution to a real-world problem. It is then only a matter of time (usually very short time) before it stops working altogether and you have no choice but to redesign it from scratch.

This (unfortunately) frequent scenario in software development is already a nightmare in regular (non-data-oriented) software applications. But when you are dealing with Data Integration applications, the impact of dirty data or ever-changing data (especially if it’s inputted by a human), combined with the other 4 Data Management mistakes I mentioned above, can quickly escalate this scenario into a catastrophe of epic proportions.

So how do you prevent that from happening? First of all, you need to have a plan for when you will pay down your technical debt (especially if it is a complex bug). The more complex the required change or bug is, the sooner it should be dealt with. If it impacts a lot of other modules in your application or ecosystem, it is also important to pay it off sooner rather than later. Secondly, you need to understand why you had to go into technical debt, so that you can prevent it from happening again. For example, if you had to postpone features because you didn’t get to them, then you need to look at why that happened. Did you under-estimate another feature’s complexity? Did you fail to account for unknown unknowns in your estimate? Did sales or your superior impose an unrealistic estimate on your team? The key is to stop the problem in its tracks and make sure it doesn’t happen again. Technical Debt can be helpful at times, but you need to manage it wisely.

 I hope you learned something from this list, and will try to avoid these 5 Data Management and Software Quality Management mistakes on your next projects. If you need help with Data Management or Software Quality Management, please contact me for a free 15-min consultation.

Maira holds a BSc in Computer Science, 2 software quality certifications and over 16 years of experience in the Software Industry. Her open-mindedness and adaptability have allowed her to thrive in a multidisciplinary career that includes Software Development, Quality Assurance and Project Management. She has taken senior and consultant roles at Fortune 20 companies (IBM and HP), as well as at medium and small businesses. She has spent the last 5 years helping clients manage and develop software for Data Migration, Data Integration, Data Quality and Data Consistency. She is a Product Data Lake Ambassador & Technology Integrator through her startup Product Data Lake Technologies.