The Trouble with Data Quality Dimensions

Data Quality Dimensions

Data quality dimensions are among the most used terms when explaining why data quality is important, what data quality issues look like and how you can measure data quality. Ironically, we sometimes use the same dimension term for two different things, or two different dimension terms for the same thing. Some of the troubling terms are:

Validity / Conformity – same same but different

Validity is most often used to describe whether data filled into a data field obeys a required format or is among a list of accepted values. Databases are usually good at enforcing this, for example ensuring that an entered date follows the required day-month-year sequence and is a real calendar date, or cross-checking data values against another table to see if the value exists there.

The problems arise when data is moved between databases with different rules and when data is captured in textual form before being loaded into a database.

Conformity is often used to describe whether data adheres to a given standard, such as an industry or international standard. Due to complexity and other circumstances, such a standard may not be implemented as database constraints or by other means, or only partly so. Therefore, a given piece of data may be a valid database value while still not complying with the standard.

For example, the colour code “0,255,0” may have the accepted format, with all elements in the accepted range between 0 and 255 for an RGB colour code. But the standard for a given product colour may only allow the value “Green” and other common colour names, and “0,255,0”, when translated, ends up as “Lime” or “High green”.
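
To make the distinction concrete, here is a minimal sketch in Python. The regular expression and the allowed colour list are my own illustrations, not taken from any actual product standard:

    import re

    # Validity: the value must match the format and each element must be 0-255.
    RGB_PATTERN = re.compile(r"^(\d{1,3}),(\d{1,3}),(\d{1,3})$")
    # Conformity: a hypothetical product standard allowing only colour names.
    ALLOWED_PRODUCT_COLOURS = {"Green", "Red", "Blue"}

    def is_valid_rgb(value: str) -> bool:
        match = RGB_PATTERN.match(value)
        return bool(match) and all(0 <= int(g) <= 255 for g in match.groups())

    def conforms_to_standard(colour_name: str) -> bool:
        return colour_name in ALLOWED_PRODUCT_COLOURS

    print(is_valid_rgb("0,255,0"))        # True: a perfectly valid value...
    print(conforms_to_standard("Lime"))   # False: ...that fails the standard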

Accuracy / Precision – true, false or not sure

The difference between accuracy and precision is a well-known statistical subject.

In the data quality realm, accuracy is most often used to describe whether a data value corresponds correctly to a real-world entity. If, for example, the postal address of the person “Robert Smith” is “123 Main Street in Anytown”, this data value may be accurate because this person (for the moment) lives at that address.

But if “123 Main Street in Anytown” has three different apartments, each with its own mailbox, the value does not, for a given purpose, have the required precision.

If we work with geocoordinates, we have the same challenge. A given accurate geocode may have sufficient precision to tell the direction to the nearest supermarket, but not be precise enough to tell in which apartment the out-of-milk smart refrigerator is.
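
As a rough back-of-the-envelope sketch of that challenge: one degree of latitude is roughly 111 kilometres, so each decimal place in a coordinate narrows the location by a factor of ten:

    # Approximate ground resolution of a latitude given its decimal places.
    METRES_PER_DEGREE_LATITUDE = 111_000  # rough average

    def latitude_resolution_metres(decimal_places: int) -> float:
        return METRES_PER_DEGREE_LATITUDE / (10 ** decimal_places)

    print(latitude_resolution_metres(2))  # ~1110 m: fine for finding a supermarket
    print(latitude_resolution_metres(6))  # ~0.11 m: fine for finding a refrigerator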

Timeliness / Currency – when time matters

Timeliness is most often used to state whether a given data value is present when it is needed. For example, you need the postal address of “Robert Smith” when you want to send a paper invoice or when you want to establish his demographic stereotype for a campaign.

Currency is most often used to state whether the data value is accurate at a given time – for example, whether “123 Main Street in Anytown” is the current postal address of “Robert Smith”.
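
One common way to support both dimensions is to keep validity periods on the records, so currency can be checked at any point in time. A minimal sketch, where the field names are my own:

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class AddressRecord:
        party: str
        address: str
        valid_from: date
        valid_to: Optional[date]  # None means still current

        def is_current(self, on: date) -> bool:
            ended = self.valid_to is not None and on > self.valid_to
            return self.valid_from <= on and not ended

    record = AddressRecord("Robert Smith", "123 Main Street, Anytown",
                           date(2015, 3, 1), None)
    print(record.is_current(date.today()))  # True until an end date is set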

Uniqueness / Duplication – positive or negative

Uniqueness is the positive term, while duplication is the negative term for the same issue.

We strive for uniqueness by avoiding duplicates. In data quality lingo, duplicates are two (or more) data values describing the same real-world entity. For example, we may assume that

  • “Robert Smith at 123 Main Street, Suite 2 in Anytown”

is the same person as

  • “Bob Smith at 123 Main Str in Anytown”
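
A minimal sketch of how such a pair could be caught programmatically: normalise the most common variations first, then compare with a similarity ratio. The nickname and abbreviation tables are tiny illustrative assumptions:

    from difflib import SequenceMatcher

    NICKNAMES = {"bob": "robert"}         # illustrative, far from complete
    ABBREVIATIONS = {"str": "street"}     # illustrative, far from complete

    def normalise(text: str) -> str:
        words = text.lower().replace(",", "").split()
        return " ".join(NICKNAMES.get(w, ABBREVIATIONS.get(w, w)) for w in words)

    def looks_like_duplicate(a: str, b: str, threshold: float = 0.85) -> bool:
        return SequenceMatcher(None, normalise(a), normalise(b)).ratio() >= threshold

    print(looks_like_duplicate(
        "Robert Smith at 123 Main Street, Suite 2 in Anytown",
        "Bob Smith at 123 Main Str in Anytown"))  # True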

Completeness / Existence – to be, or not to be

Completeness is most often used to tell to what degree all required data elements are populated.

Existence can be used to tell whether the data elements needed for a given purpose are defined in a given dataset at all.

So “Bob Smith at 123 Main Str in Anytown” is complete if we need name, street address and city, but only 75% complete if we need name, street address, city and preferred colour, and preferred colour is a data element that exists in the dataset.
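
Measured as a simple ratio of populated required fields, that calculation could look like this (a minimal sketch; the field names are my own):

    def completeness(record: dict, required_fields: list) -> float:
        populated = sum(1 for field in required_fields if record.get(field))
        return populated / len(required_fields)

    bob = {"name": "Bob Smith", "street": "123 Main Str", "city": "Anytown"}
    print(completeness(bob, ["name", "street", "city"]))                      # 1.0
    print(completeness(bob, ["name", "street", "city", "preferred_colour"]))  # 0.75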

What is in a business directory?

When working with Party Master Data Management, one approach to ensuring accuracy, completeness and other data quality dimensions is to onboard new business-to-business (B2B) entities and enrich existing ones via a business directory.

While this could seem a straightforward mechanism, unfortunately it usually is not that easy peasy.

Let us take an example featuring the most widely used business directory around the world: The Dun & Bradstreet Worldbase. And let us take my latest registered company: Product Data Lake.

(Screenshot: Product Data Lake at Dun & Bradstreet)

On this screen showing the basic data elements, there are a few obstacles:

  • The address is not formatted well
  • The country code system is not a widely used one
  • The industry sector code system shown is one among others

Address Formatting

In our address, D&B has put the word “sal”, which is Danish for floor. This is not incorrect, but addresses in Denmark are usually not written with that word, as the number following the house number in the addressing standard is understood to be the floor.

Country Codes

D&B has its own 3-digit country code system. You may convert to the more widely used ISO 2-character country codes. I do, however, remember a lot of fun from my data matching days when dealing with the United Kingdom, where D&B uses 4 different codes for England, Wales, Scotland and Northern Ireland, as well as mapping back and forth between the United States and Puerto Rico. It had to be done very despacito.
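
The core of the problem is that such a conversion is many-to-one and therefore not reversible. A minimal sketch, where the numeric codes are placeholders and not actual D&B Worldbase codes:

    # Four source codes collapse into one ISO code; going back is ambiguous.
    SOURCE_TO_ISO = {
        "785": "GB",  # England (placeholder code)
        "786": "GB",  # Wales (placeholder code)
        "787": "GB",  # Scotland (placeholder code)
        "788": "GB",  # Northern Ireland (placeholder code)
    }

    def to_iso(source_code: str) -> str:
        return SOURCE_TO_ISO[source_code]

    print(to_iso("787"))  # 'GB' - but which of the four was it originally?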

Industry Sector Codes

The screen shows a SIC code: 7374 = Computer Processing and Data Preparation and Processing Services

This must have been converted from the NACE code by which the company has been registered: 63.11:(00) = Data processing, hosting and related activities.

The two codes do, by the way, correspond to the NAICS code 518210 = Data processing, hosting and related activities.
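
Keeping such correspondences usable in practice means maintaining a cross-walk between the code systems. A minimal sketch holding just the single example above; real concordance tables are much larger and not always one-to-one:

    from typing import Optional

    CROSSWALK = [
        {"nace": "63.11", "sic": "7374", "naics": "518210",
         "label": "Data processing, hosting and related activities"},
    ]

    def nace_to_naics(nace_code: str) -> Optional[str]:
        for row in CROSSWALK:
            if row["nace"] == nace_code:
                return row["naics"]
        return None

    print(nace_to_naics("63.11"))  # '518210'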

The challenges in embracing the many standards for reference data were examined in the post The World of Reference Data.

What’s in an Address (and a Product)?

Our company Product Data Lake has relocated again. Our new address, in local language and format, is:

Havnegade 39
1058 København K
Danmark

If our address were spelled and formatted as in England, where the business plan was drafted, it would look like this:

The Old Seed Office
39 Harbour Street
Copenhagen, 1058 K
Danelaw

Across the pond, a sunny address could look like this:

39 Harbor Drive
Copenhagen, CR 1058
U.S. Virgin Islands
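
Behind examples like these sit country-specific layout rules. A minimal sketch of per-country address templates, simplified from the three layouts above:

    ADDRESS_TEMPLATES = {
        "DK": "{street} {house}\n{postal_code} {city}",
        "GB": "{house} {street}\n{city}, {postal_code}",
        "US": "{house} {street}\n{city}, {region} {postal_code}",
    }

    def format_address(country: str, **parts: str) -> str:
        return ADDRESS_TEMPLATES[country].format(**parts)

    print(format_address("DK", street="Havnegade", house="39",
                         postal_code="1058", city="København K"))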

Now, the focal point of Product Data Lake is not the exciting world of address data quality, but product data quality.

However, the issues of local and global linguistics and standardization – or should I say standardisation – are the same.

Our lovely city Copenhagen has many names. København in Danish. Köpenhamn in Swedish. Kopenhagen in German. Copenhague in French.

So have all the nice products in the world. Their classifications and related taxonomies are in many languages too. Their features can be spelled in many languages or depend on the country where they are to be sold. The documents that must accompany a product by regulation are subject to diversity too.

Handling all this diversity stuff is a core capability for product data exchange between trading partners in Product Data Lake.

What Will You Complicate in the Year of the Rooster?

Today is the first day of the new year – the year of the rooster according to the Lunar Calendar observed in East Asia. One of the characteristics of the year of the rooster is that in this year, people will tend to complicate things.

People usually like to keep things simple. The KISS principle – Keep It Simple, Stupid – has many fans. But not me. Not that I do not like to keep things simple. I do. But only as simple as it should be, as Einstein probably said. Sometimes KISS is the shortcut to getting it all wrong.

When working with data quality, I have come across the three examples below of striking the right balance between making things a bit complicated and not too simple:

Deduplication

One of the most frequent data quality issues around is duplicates in party master data: customer, supplier, patient, citizen, member and many other roles of legal entities and natural persons, where the same real-world entity is described more than once with different values in our databases.

In solving this challenge, we can use methods such as match codes and edit distance to detect duplicates. However, these methods, often called deterministic, are far too simple to really automate the remedy. We can also use advanced probabilistic methods. These methods are better, but have the downside that the matching done is hard to explain, repeat and reuse in other contexts.

My best experience is to use something in between these approaches. Not too simple and not too overcomplicated.
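
As a minimal sketch of that middle road: a cheap deterministic match code narrows the candidate pairs, and an edit-distance-style ratio scores each surviving pair. The match code recipe here is my own illustration:

    from difflib import SequenceMatcher

    def match_code(name: str, postal_code: str) -> str:
        """A simple blocking key: first three letters of the name plus postal code."""
        return (name[:3] + postal_code).upper()

    def similarity(a: str, b: str) -> float:
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    a = ("Robert Smith", "1058")
    b = ("Robbert Smith", "1058")
    if match_code(*a) == match_code(*b):  # deterministic filter
        print(similarity(a[0], b[0]))     # ~0.96: candidate for review or merge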

Address verification

You can make a good algorithm for verifying the postal and visiting addresses in a database when the addresses come from one country. However, if you try the same algorithm on addresses from another country, it often fails miserably.

Making an algorithm for addresses from all over the world would be very complicated. I have not yet seen one that works.

My best experience is to accept the complication of having almost as many algorithms as there are countries on this planet.
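
In code, that acceptance tends to look like a registry of country-specific verifiers behind one entry point. A minimal sketch, where the two rules are drastically simplified illustrations of the real national formats:

    import re

    def verify_dk(address: dict) -> bool:
        """Danish postal codes are four digits (simplified)."""
        return bool(re.fullmatch(r"\d{4}", address.get("postal_code", "")))

    def verify_gb(address: dict) -> bool:
        """UK postcodes are alphanumeric, e.g. 'SW1A 1AA' (simplified)."""
        return bool(re.fullmatch(r"[A-Z]{1,2}\d[A-Z\d]? ?\d[A-Z]{2}",
                                 address.get("postal_code", "")))

    VERIFIERS = {"DK": verify_dk, "GB": verify_gb}  # one per country, over time

    def verify(country: str, address: dict) -> bool:
        return VERIFIERS[country](address)

    print(verify("DK", {"postal_code": "1058"}))      # True
    print(verify("GB", {"postal_code": "SW1A 1AA"}))  # True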

Product classification

Classification of products controls a lot of the data quality dimensions related to product master data. The most prominent example is completeness of product information. Whether you have complete product information depends on the classification of the product. Some attributes will be mandatory for one product but make no sense at all for another product with a different classification.

If your product classification is too simple, your completeness measurement will not be realistic. A too granular or otherwise complicated classification system is very hard to maintain and will probably seem like overkill for many purposes of product master data management.

My best experience is that you have to maintain several classification systems and keep a linking between them, both inside your organization and between your trading partners.
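
A minimal sketch of classification-driven completeness, where the classes and their mandatory attributes are my own illustrations:

    MANDATORY_BY_CLASS = {
        "refrigerator": ["brand", "colour", "energy_class", "volume_litres"],
        "milk":         ["brand", "fat_percentage", "volume_litres"],
    }

    def completeness(product: dict) -> float:
        required = MANDATORY_BY_CLASS[product["class"]]
        populated = sum(1 for attr in required if product.get(attr))
        return populated / len(required)

    fridge = {"class": "refrigerator", "brand": "Acme",
              "colour": "Green", "volume_litres": 300}
    print(completeness(fridge))  # 0.75 - energy_class is missing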

Happy New Lunar Year

Did You Mean Potato or Potahto?

As told in the post Where the Streets have Two Names, one aspect of address validation is the fact that in some parts of the world, a given postal address can be presented in more than one language.

I experienced that today when using Google Maps for directions to a Master Data Management (MDM) conference in Helsinki, Finland. When typing in the address I got this message:

(Screenshot: Google Maps proposing two spellings of the same Helsinki address)

The case is that the two addresses proposed by Google Maps are exactly the same address, just spelled in Swedish and Finnish, the two official languages used in this region.

I think Google Maps is an example of a splendid world-wide service. But even the best world-wide services sometimes don’t match locally tailored services. This is, in my experience, the case when it comes to address management solutions such as address validation and assistance, whether they come as an integrated part of a Master Data Management (MDM) solution, a stand-alone data quality tool or a general service like Google Maps.

Bringing the Location to Multi-Domain MDM

When we talk about multi-domain Master Data Management (MDM) we often focus on the two dominant MDM domains being customer (or rather party) MDM and product (or maybe things) MDM.

The location domain is the third major domain within MDM. Location management can be more or less complex depending on the industry vertical we are looking at. In the utility and telco sectors, location management is a big thing. Handling installations, assets and networks is typically supported by a Geographical Information System (GIS).

Master Data Management is very much about ensuring that different applications can have a unified view of the same core business entities. Therefore, in the utility and telco sectors, a challenge is to bring the GIS application portfolio into step with other applications that also use locations, as explained in the post Sharing Big Location Reference Data.

Over the last couple of days I have enjoyed taking part in the Nordic user conference for a leading GIS solution in the utility and telco sector. This solution is called Smallworld.

It is good to see that at least one forward-looking organization in the utility and telco sector is working on how location master data management can be shared between business functions and applications and aligned with party master data management and product master data management.

Reading the right Reading

In order to have all my travel arrangements in one place, I use a service called TripIt. When I receive email confirmations from airlines, hotels, train planners and so on, I simply forward them to plans@tripit.com, and within seconds they build or amend an itinerary for me that is available in an app.

Today I noticed a slight flaw though. I was going by train from London, UK up to the Midlands via a large town in the UK called Reading.

The strange thing in the itinerary was that the interchanges in Reading were placed in the chronology after arriving at and leaving the final destination.

A closer look at the data revealed two strange issues:

  • Reading was spelled Reading, PA
  • The time zone for the interchange was set to EST

Hmmm… There must be a town called Reading in Pennsylvania across the pond. TripIt must, when automatically reading the email, have chosen the US Reading for this ambiguous town name and thereby attached the US Eastern time zone to the interchange.

Picking the right Reading for me in the plan made the itinerary look much more sensible.

Cleansing International Addresses

A problem in data cleansing I have come across several times is having name and address registrations where it is uncertain to which country the different addresses belong.

Many address-cleansing tools and services require a country code as the first parameter in order to utilize external reference data for address cleansing and verification. Most business cases for address cleansing are indeed about a large number of business-to-consumer (B2C) addresses within a particular country. But sometimes you have a batch of typical business-to-business (B2B) addresses with no clear country registration.

The problem is that many location names apply to many different places. That is true even within a given country – which was the main driver for having postal codes in the first place. If a non-interactive tool or service has to look for a location all over the world, that gets really difficult.

For example, I’m in Richmond today. That could actually be a lot of places all over the world, as seen on Wikipedia.

I am actually in the Richmond in the London, England, UK area. If I were in the capital of the US state of Virginia, I could have written that I’m in “Richmond, VA”. If an international address-cleansing tool looked at that address, I guess it would first look for a country code, quickly find VA as a two-character country code at the end of the string and firmly conclude that I’m at something called Richmond in the Vatican City State.
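
A minimal sketch of that naive guessing trap, with only the two relevant codes included for illustration:

    ISO_COUNTRIES = {"VA": "Vatican City State", "GB": "United Kingdom"}  # excerpt
    US_STATES = {"VA": "Virginia"}  # excerpt

    def naive_guess_country(address: str) -> str:
        last_token = address.rsplit(",", 1)[-1].strip()
        return ISO_COUNTRIES.get(last_token, "unknown")

    print(naive_guess_country("Richmond, VA"))  # 'Vatican City State'
    # A smarter guesser would have to weigh US_STATES against ISO_COUNTRIES,
    # for example by the expected volume of addresses from each place.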

Have you tried using or constructing an international address cleansing process? Where did you end up?

Foreign Addresses

There is a famous poster called The New Yorker. This poster perfectly illustrates the centricity we often have about the town, region or country we live in.

The same phenomenon is often seen in data management as told in the post Foreign Affaires.

If we for example work with postal addresses, we tend to think that postal addresses in our own country have a well-known structure while foreign addresses are a total mess.

In Denmark, where I was born and raised and have worked most of my life, we have two ways of expressing an address:

  • The envelope way, where there is a certain range of possibilities, especially in how to spell a street name and how to write the exact unit within a high-rise building, though there is a structure more or less known to native people.
  • The code way, as every street has a code too, and there is a defined structure for units (known as the KVHX code), as sketched below. This code is used by the public sector as well as by private sectors such as financial services and utility companies, and this helps tremendously with data quality.
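
A minimal sketch of such a code key; the field lengths and semantics are simplified assumptions for illustration, so consult the Danish addressing standard for the authoritative definition:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class KVHX:
        kommune: str    # municipality code
        vej: str        # street code
        husnummer: str  # house number
        x: str = ""     # unit extension, e.g. floor and door

        def key(self) -> str:
            return f"{self.kommune}-{self.vej}-{self.husnummer}-{self.x}"

    # Two spellings of the same envelope address resolve to one code key,
    # which is what makes matching and data quality work so much easier.
    print(KVHX("0101", "1234", "39", "2-tv").key())  # '0101-1234-39-2-tv'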

But around 3.5 percent of Danes, including yours truly, have a foreign address. And until now, the way of registering and storing those addresses in the public sector and elsewhere has been totally random.

This is going to change. The public authorities have, with a little help from yours truly, made the first standard and governance principles for foreign addresses, as seen in this document (in Danish).

At iDQ A/S we have simultaneously developed Master Data Management (MDM) services that help utility companies, financial services and other industries get foreign addresses right as well.

There is Open Data in the Air

It is spring in Europe, and the good news in Europe this week is that from December next year we will finally see the end of paying exorbitant fees for mobile data access outside WiFi when in another EU country, as told by the BBC here. As a person travelling a lot between EU countries, I find this, though years too late, fantastic news.

Being too late was unfortunately also the case as examined in the article Sale of postcodes data was a ‘mistake’ say Committee – in News from UK Parliament. When the UK Royal Mail was privatised last year, the address directory, known as the PAF file, was part of the deal. It would have been a substantially better deal for society as a whole if the address data had been set free. This calculation is backed up by figures from experiences in Denmark, as reported in the post The Value of Free Address Data.

Next week I’m looking forward to being part of an innovation camp arranged by the Danish authorities as a step in an initiative to exploit open public sector data in the private sector. Here, public data owners, IT students, enterprise data consumers, and IT tool and service vendors including iDQ A/S will meet openly and challenge each other to develop the most powerful ideas for new ways to create valuable knowledge based on open public sector data.
