The Disruptive MDM/PIM/DQM List 2022: Datactics

A major rework of The Disruptive MDM/PIM/DQM List is in the making, as the number of visitors keeps increasing and so does the number of requests for individual solution lists.

It is good to see that some of the most innovative solution providers commit to being part of the list next year as well.

One of those is Datactics.

Datactics is a veteran data quality solution provider that is constantly innovating in this space. This year Datactics was one of the few new entries in The Gartner Magic Quadrant for Data Quality Solutions 2021.

It will be exciting to follow the ongoing development at Datactics, which operates under the slogan: “Democratising Data Quality”.

You can learn more about what their self-service data quality and matching solution looks like here.

Core Datactics capabilities

Five Pairs of Data Quality Dimensions

Data quality dimensions are among the most used terms when explaining why data quality is important, what data quality issues can look like and how you can measure data quality. Ironically, we sometimes use the same data quality dimension term for two different things or use two different data quality dimension terms for the same thing. Some of the troubling terms are:

Validity / Conformity – same same but different

Validity is most often used to describe whether data filled into a data field obeys a required format or is among a list of accepted values. Databases are usually good at enforcing this, for example ensuring that an entered date follows the required day-month-year sequence and is a real calendar date, or cross-checking data values against another table to see if the value exists there.

The problems arise when data is moved between databases with different rules and when data is captured in textual forms before being loaded into a database.

Conformity is often used to describe whether data adheres to a given standard, such as an industry or international standard. Due to complexity and other circumstances, this standard may not, or may only partly, be implemented as database constraints or by other means. Therefore, a given piece of data may be a valid database value but still not be in compliance with a given standard.

Sometimes conformity is linked to the geography in question. For example, whether a postal code conforms depends on the country the address is in. Therefore, the postal code 12345 conforms in Germany, but not in the United Kingdom.
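To illustrate the difference, here is a minimal Python sketch, assuming simplified postal code patterns (the patterns below are illustrative and do not fully implement the national standards): a value can pass as a valid database string yet fail a country-specific conformity check.

```python
import re

# Simplified postal code patterns per country - illustrative only,
# not full implementations of the national standards.
POSTAL_PATTERNS = {
    "DE": r"^\d{5}$",                             # Germany: exactly five digits
    "GB": r"^[A-Z]{1,2}\d[A-Z\d]? ?\d[A-Z]{2}$",  # UK: simplified outward/inward format
}

def conforms(postal_code: str, country: str) -> bool:
    """Check whether a postal code conforms to the given country's format."""
    pattern = POSTAL_PATTERNS.get(country)
    return bool(pattern and re.match(pattern, postal_code.upper()))

print(conforms("12345", "DE"))  # True  - conforms in Germany
print(conforms("12345", "GB"))  # False - does not conform in the United Kingdom
```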

Accuracy / Precision – true, false or not sure

The difference between accuracy and precision is a well-known statistical subject.

In the data quality realm, accuracy is most often used to describe whether a data value corresponds correctly to a real-world entity. If, for example, we have the postal address of the person “Robert Smith” as “123 Main Street in Anytown”, this data value may be accurate because this person (for the moment) lives at that address.

But if “123 Main Street in Anytown” has 3 different apartments, each with its own mailbox, the value does not, for a given purpose, have the required precision.

If we work with geocoordinates, we have the same challenge. A given accurate geocode may have sufficient precision to tell the direction to the nearest supermarket, but not be precise enough to know in which apartment the out-of-milk smart refrigerator is.
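As a rule of thumb, each decimal place in a decimal-degree coordinate buys roughly a factor of ten in ground precision. A small Python sketch with approximate figures (measured at the equator; the mapping to use cases is illustrative):

```python
# Approximate ground precision per decimal place of a latitude/longitude
# value in decimal degrees, at the equator (illustrative figures).
PRECISION_METRES = {
    1: 11_100,  # ~11 km - city or district
    2: 1_110,   # ~1 km  - village or neighbourhood
    3: 111,     # ~111 m - a large street block
    4: 11.1,    # ~11 m  - a building entrance
    5: 1.11,    # ~1 m   - roughly a single apartment door
}

def rounding_error_metres(decimals: int) -> float:
    """Worst-case error introduced by rounding a coordinate to `decimals` places."""
    return PRECISION_METRES[decimals] / 2

# Accurate but rounded to 3 decimals: good enough to find the supermarket,
# far too imprecise to locate the out-of-milk smart refrigerator.
print(rounding_error_metres(3))  # 55.5 metres
```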

Timeliness / Currency – when time matters

Timeliness is most often used to state if a given data value is present when it is needed. For example, you need the postal address of “Robert Smith” when you want to send a paper invoice or when you want to establish his demographic stereotype for a campaign.

Currency is most often used to state if the data value is accurate at a given time – for example if “123 Main Street in Anytown” is the current postal address of “Robert Smith”.
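A hedged sketch of such a currency check, assuming the dataset records a validity period for each address (the record structure and field names are hypothetical):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AddressRecord:
    person: str
    address: str
    valid_from: date
    valid_to: Optional[date]  # None means the address is believed to be current

def is_current(record: AddressRecord, as_of: date) -> bool:
    """Currency check: is the value believed to be accurate at the given time?"""
    started = record.valid_from <= as_of
    not_ended = record.valid_to is None or as_of <= record.valid_to
    return started and not_ended

robert = AddressRecord("Robert Smith", "123 Main Street, Anytown",
                       valid_from=date(2018, 6, 1), valid_to=None)
print(is_current(robert, date.today()))  # True - still the current address
```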

Uniqueness / Duplication – positive or negative

Uniqueness is the positive term and duplication the negative term for the same issue.

We strive to have uniqueness by avoiding duplicates. In data quality lingo duplicates are two (or more) data values describing the same real-world entity. For example, we may assume that

  • “Robert Smith at 123 Main Street, Suite 2 in Anytown”

is the same person as

  • “Bob Smith at 123 Main Str in Anytown”
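Detecting such duplicates relies on fuzzy matching rather than exact comparison. A minimal Python sketch of the idea, assuming a tiny nickname table and simple string similarity (real matching engines use far richer rules and reference data):

```python
from difflib import SequenceMatcher

# Tiny illustrative nickname table - real matching engines use far larger ones.
NICKNAMES = {"bob": "robert", "rob": "robert", "bill": "william"}

def normalise(name: str) -> str:
    """Lowercase, expand known nicknames and common street abbreviations."""
    tokens = [NICKNAMES.get(t, t) for t in name.lower().replace(",", " ").split()]
    tokens = ["street" if t in ("str", "st") else t for t in tokens]
    return " ".join(tokens)

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, normalise(a), normalise(b)).ratio()

a = "Robert Smith at 123 Main Street, Suite 2 in Anytown"
b = "Bob Smith at 123 Main Str in Anytown"
print(round(similarity(a, b), 2))  # ~0.9 - a likely duplicate candidate
```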

Completeness / Existence – to be, or not to be

Completeness is most often used to tell to what degree all required data elements are populated.

Existence can be used to tell whether the data elements needed for a given purpose are defined in a given dataset at all.

So “Bob Smith at 123 Main Str in Anytown” is complete if we need name, street address and city, but only 75 % complete if we need name, street address, city and preferred colour, given that preferred colour is an existing data element in the dataset.
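A minimal sketch of that completeness calculation, using the example above (the field names are invented for illustration):

```python
record = {
    "name": "Bob Smith",
    "street_address": "123 Main Str",
    "city": "Anytown",
    "preferred_colour": None,  # existing data element, but not populated
}

def completeness(record: dict, required: list[str]) -> float:
    """Share of required data elements that are actually populated."""
    populated = sum(1 for field in required if record.get(field) not in (None, ""))
    return populated / len(required)

print(completeness(record, ["name", "street_address", "city"]))
# 1.0 - complete for this purpose
print(completeness(record, ["name", "street_address", "city", "preferred_colour"]))
# 0.75 - only 75 % complete for this purpose
```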

Data Quality Management 

Master Data Management (MDM) solutions and specialized Data Quality Management (DQM) tools have capabilities to assess data quality against these dimensions and to improve data quality within each of them.

Check out the range of the best solutions to cover this space on The Disruptive MDM & PIM & DQM List.

Opportunities on The Data Quality Tool Market

The latest Information Difference Data Quality Landscape is out. This is a generic ranking of major data quality tools on the market.

You can see the previous data quality landscape in the post Congrats to Datactics for Having the Happiest DQM Customers.

There are no significant changes in the relative positioning of the vendors. The only notable change is that Syncsort has been renamed to Precisely.

As stated in the report, much of the data quality industry is focused on name and address validation. However, there are many opportunities for data quality vendors to spread their wings and better tackle problems in other data domains, such as product, asset and inventory data.

One explanation of why this is not happening is probably the interwoven structure of the joint Master Data Management (MDM), Product Information Management (PIM) and Data Quality Management (DQM) markets and disciplines. For example, a predominant data quality issue such as completeness of product information is addressed in PIM solutions and even better in Product Data Syndication (PDS) solutions.

Here, there are opportunities for pure play vendors within each speciality to work together, as well as for the larger vendors to offer both a truly integrated overall solution and contextual solutions for each issue with a reasonable cost/benefit ratio.

Data Quality and Interenterprise Data Sharing

When working with data quality improvement there are three kinds of data to consider:

First-party data is the data that is born and managed internally within the enterprise. This data has traditionally been the focus of data quality methodologies and tools, with the aim of ensuring that data is fit for the purpose of use and correctly reflects the real-world entity that the data is describing.

Third-party data is data sourced from external providers who offer a set of data that can be utilized by many enterprises. Examples are location directories, business directories such as the Dun & Bradstreet Worldbase and public national directories, and product data pools such as the Global Data Synchronization Network (GDSN).

Enriching first-party data with third-party data is a means to ensure better data completeness, better data consistency, and better data uniqueness.

Second-party data is data sourced directly from a business partner. Examples are supplier self-registration, customer self-registration and inbound product data syndication. Exchange of this data is also called interenterprise data sharing.

The advantage of using second-party data from a data quality perspective is that you are closer to the source, which, all things being equal, means that the data better and more accurately reflects the real-world entity it describes.

In addition to that, compared to third-party data, you will also have the opportunity to operate with data that exactly fits your operating model and makes you unique compared to your competitors.

Finally, second-party data obtained through interenterprise data sharing will reduce the costs of capturing data compared to first-party data, where the ever-increasing demand for more elaborate, high-quality data in the age of digital transformation would otherwise overwhelm your organization.

The Balancing Act

Getting optimal data quality with the least effort is about balancing the use of internal and external data, where you can exploit interenterprise data sharing by combining second-party and third-party data in the way that makes the most sense for your organization.

As always, I am ready to discuss your challenge. You can book a short online session for that here.

The Most Annoying Way of Presenting Data

Polls are popular on LinkedIn, and I have been a sinner too, having made a few recently.

One was about which way of presenting data (which data format) is the most annoying.

There were four formats to choose from: the MM/DD/YYYY date format, the 12-hour clock, imperial units of measure, and the Fahrenheit temperature scale.

The MM/DD/YYYY date format is in use practically only in the United States. In the rest of the world, either the DD/MM/YYYY format or the ISO recommended YYYY-MM-DD format is the chosen one. The data quality challenge appears when you see a date like 03/02/2021 in an international context, because this can be either March 2nd or February 3rd.
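A small Python sketch of the ambiguity, using only the standard library: the very same string parses to two different dates depending on the assumed format, which is exactly what the ISO format avoids.

```python
from datetime import datetime

ambiguous = "03/02/2021"

us_style = datetime.strptime(ambiguous, "%m/%d/%Y").date()  # March 2nd
eu_style = datetime.strptime(ambiguous, "%d/%m/%Y").date()  # February 3rd

print(us_style.isoformat())  # 2021-03-02
print(eu_style.isoformat())  # 2021-02-03
# The ISO YYYY-MM-DD format is unambiguous by construction.
```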

The 12-hour clock with the AM and PM suffix is more commonly in use around the world. But obviously the 12-hour clock is not as well thought out as the 24-hour clock. We need some digital transformation here.

Imperial units of measure like inch, foot, yard, pound, and more are far less logical and structured compared to the metric system. Only 3 countries around the world – the United States, Myanmar and Liberia – have not adopted the metric system. And then there is the United Kingdom, which has adopted the metric system in theory, but not in practice.

The Fahrenheit temperature scale is used practically only in the United States, as opposed to Celsius (centigrade), which is used everywhere else. When someone writes that it is 30 degrees outside, that could be quite cold or rather hot if no unit of measure is applied.

Another example of international trouble mentioned in the comments to the poll is the decimal point. In English writing you use a dot as the decimal separator, while in many other cultures you use a comma.
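The same digits can therefore denote very different numbers. A minimal sketch, assuming we already know which convention the text was written in:

```python
text = "1.234"

# English convention: the dot is the decimal separator.
as_english = float(text)  # 1.234

# Many continental European conventions: the dot groups thousands,
# the comma is the decimal separator.
as_continental = float(text.replace(".", "").replace(",", "."))  # 1234.0

print(as_english, as_continental)
```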

Most of these annoyances are handled by mature software having settings where you can set your preferences. The data quality issues arise when these data are part of a text, including when software must convert a text into a number, date or time.

If you spot some grey colour (or is it color) in my hair, I blame varying data formats in CSV files, SQL statements, emails and more.

From Data Quality to Business Outcome

Explaining how data quality improvement will lead to business outcome has always been difficult. The challenge is that there is very seldom a case where you can say with confidence: “fix this data and you will earn x money within y days”.

Not that I have not seen such bold statements. However, they very rarely survive a reality check. On the other hand, we all know that data quality problems seriously affect the healthiness of any business.

A reason why the world is not that simple is that there is a long stretch from data quality to business outcome. The stretch goes like this:

  • First, data quality must be translated into information quality. Raw data must be put into a business context where the impact of duplicates, incomplete records, inaccurate values and so on is quantified, qualified and related within affected business scenarios.
  • Next, the achieved information quality advancements must be actionable in order to cater for better business decisions. Here it is essential to look beyond the purpose for which the data was gathered in the first place and explore how a given piece of information can serve multiple purposes of action.
  • Finally, the decisions must enable positive business outcomes within growth, cost reductions, mitigation of risks and/or time to value. Often these goals are met through multiple chains of bringing data into context, making that information actionable and taking the right decisions based on the achieved and shared knowledge.

Stay tuned – and also look back – on this blog for observations and experiences on proven paths to improving data quality that leads to positive business outcomes.

The Start of the History of Data and Information Quality Management

I am sad to hear that Larry English has passed away, as I learned from this LinkedIn update by C. Lwanga Yonke.

As said there: “When the story of Information Quality Management is written, the first sentence of the first paragraph will include the name Larry English”.

Larry pioneered the data quality – or information quality, as he preferred to coin it – discipline.

He was an inspiration to many data and information quality practitioners back in the ’90s and ’00s, including me, and he paved the way for bringing this topic to the level of awareness that it has today.

In his teaching, Larry emphasized the simple but powerful concepts that are the foundation of data quality and information quality methodologies:

  • Quantify the costs and lost opportunities of bad information quality
  • Always look for the root cause of bad information quality
  • Observe the plan-do-check-act cycle when solving information quality issues

Let us roll up our sleeves and continue what Larry started.

B2B2C in Data Management

The Business-to-Business-to-Consumer (B2B2C) scenario is increasingly important in Master Data Management (MDM), Product Information Management (PIM) and Data Quality Management (DQM).

This scenario is usually seen in manufacturing including pharmaceuticals as examined in the post Six MDMographic Stereotypes.

One challenge here is how to extend the capabilities of MDM / PIM / DQM solutions that are built for Business-to-Business (B2B) and Business-to-Consumer (B2C) use cases. Doing B2B2C requires a Multidomain MDM approach with solid PIM and DQM elements, either as one solution, a suite of solutions or a wisely assembled set of best-of-breed solutions.

In the MDM sphere, a key challenge with B2B2C is that you probably must encompass more surrounding applications and ensure a 360-degree view of party, location and product entities, as they have varying roles with varying purposes at varying times tracked by these applications. You will also need to cover a broader range of data types that goes beyond what is traditionally seen as master data.

In DQM you need data matching capabilities that can identify and compare real-world persons, organizations and the grey zone of persons in professional roles. You need DQM of a deep hierarchy of location data, and you need to profile product data completeness for both professional use cases and consumer use cases.

In PIM the content must be suitable for both the professional audience and the end consumers. The issues in achieving this stretch from having a flexible in-house PIM solution to having a comprehensive outbound Product Data Syndication (PDS) setup.

As the middle B in B2B2C supply chains, you must have a strategic partnership with your suppliers/vendors, with a comprehensive inbound Product Data Syndication (PDS) setup and increasingly also a framework for sharing customer master data that takes into account the privacy and confidentiality aspects of this.

This emerging MDM / PIM / DQM scope is also referred to as Multienterprise MDM.

TCO, ROI and Business Case for Your MDM / PIM / DQM Solution

Any implementation of a Master Data Management (MDM), Product Information Management (PIM) and/or Data Quality Management (DQM) solution will need a business case to tell if the intended solution has a positive business outcome.

Prior to the solution selection you will typically have:

  • Identified the vision and mission for the intended solution
  • Nailed the pain points the solution is going to solve
  • Framed the scope in terms of the organizational coverage and the data domain coverage
  • Gathered the high-level requirements for a possible solution
  • Estimated the financial results achieved if the solution removes the pain points within the scope while adhering to the requirements

The solution selection (jump-starting with the Disruptive MDM / PIM / DQM Select Your Solution service) will then inform you about the Total Cost of Ownership (TCO) of the best-fit solution(s).

From here you can, put very simply, calculate the Return on Investment (ROI) by subtracting the TCO from the estimated financial results.
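As a worked example with purely hypothetical figures:

```python
# Hypothetical figures, purely for illustration.
estimated_financial_results = 1_250_000  # estimated benefits over the solution's horizon
tco = 800_000                            # Total Cost of Ownership over the same horizon

roi_absolute = estimated_financial_results - tco  # 450_000
roi_percent = roi_absolute / tco * 100            # 56.25 %

print(f"ROI: {roi_absolute:,} ({roi_percent:.1f} % of the TCO)")
```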


You can check out more inspiration about ROI and other business case considerations on The Disruptive MDM / PIM / DQM Resource List.