Royal Data Quality

The intersection of royalty and data quality was touched upon on this blog in the post Royal Exceptions five years ago, when the Queen of Denmark turned 70 years old. Now that Her Majesty has just rounded the 75-year mark, it is time to revisit the subject.

As always when a Royal event is around, the debate about the raison d'être of a Royal House stirs up. Putting the historical and emotional arguments aside, let us have a look at the economic arguments.

In Denmark there are two main arguments in favor of having a Royal House:

  • Having a president instead of a Royal House will cost the same anyway
  • The costs of the Royal House are less than the gains in brand value when exporting goods and services

Cost of having a president versus a Royal House

The idea of an expensive presidency is probably founded in looking at the amount of money countries like the United States and France put into the safety and glory of their presidencies.

On the other hand, countries may make their own choice about the level of costs for a presidency. If you look at countries like Ireland and Finland, with populations of similar size to Denmark's, their costs for the presidency are only a fraction of the costs of the Danish Royal House.

Brand Value of the Royal House

Even high-ranking executives in large Danish companies often make the argument that a high brand value is attached to the Royal House. However, I doubt they have checked with their own business intelligence department.

In fact, not a single publicly available study has been made on the matter, and I doubt any business school researcher will risk their career on doing one.

There was a comic situation some years ago when it was touted that there was a correlation between Denmark getting a crown princess from Australia and a sharp rise in Danish exports to Australia. It was called the Mary effect. Sadly, for royalists at least, a sense check revealed that Norway and Sweden had the same development without importing a crown princess from Australia.

Conclusion

I hope the above examples are Royal Exceptions and that most other decisions around us are taken based on carefully considered facts.

Four Flavors of Big Reference Data

In the post Five Flavors of Big Data, the last flavor mentioned is “big reference data”.

The typical example of a reference data set is a country table. This is of course a very small data set with around 250 entities. But even that can be complicated, as told in the post The Country List.
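To make the complication concrete, here is a minimal sketch in Python of how even a small country table gets messy once real-world spellings arrive. The sample entries are illustrative only, not a complete or authoritative list:

    # Why a ~250-entity country table still needs a synonym layer:
    # the same country arrives under many names, abbreviations and codes.

    COUNTRY_SYNONYMS = {
        "denmark": "DK",
        "danmark": "DK",           # local (endonym) spelling
        "uk": "GB",                # common abbreviation differs from the ISO code
        "great britain": "GB",
        "england": "GB",           # strictly a part, often used for the whole
        "holland": "NL",           # strictly two provinces, often used for the whole
        "the netherlands": "NL",
    }

    def to_iso_code(raw):
        """Resolve a raw country string to an ISO 3166-1 alpha-2 code, if known."""
        return COUNTRY_SYNONYMS.get(raw.strip().lower())

    print(to_iso_code("Holland"))  # NL
    print(to_iso_code("U.K."))     # None - punctuation variants need handling too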

Reference data can be much bigger. Some flavors of big reference data are:

  • Third-party data sources
  • Open government data
  • Crowd sourced open reference data
  • Social networks

Third-party data sources

The use of third-party data within Master Data Management is discussed in the post Third-Party Data and MDM. These data may also have wider use within the enterprise, not least within business intelligence.

Examples of such data sets are business directories, where the Dun & Bradstreet World Base, probably the best known one today, counts over 200 million business entities from all over the world. Another example is address and property directories.

Open government data

The above-mentioned directories are often built on top of public sector data, which are becoming more and more open around the world. So an alternative is to dig directly into the government data.

Crowd sourced open reference data

There are plenty of initiatives around where directories similar to the commercial and government directories are collected by crowd-sourcing and shared openly.

Social networks

In social networks, profile data are maintained by the entities in question themselves, which is a great advantage in terms of the timeliness of the data.

London Big Data Meet-up

If you are in London, please join the TDWI UK and IRM UK complimentary London meet-up on big data on 19 February 2014, where I will elaborate on the four flavors of big reference data.

What’s so Special About MDM?

In a blog post from yesterday, one of my favorite bloggers, Loraine Lawson, writes:

“Take master data management, for instance. Oh sure, experts preach that it’s a discipline, not “just” a technology, but come on. Did anybody ever hear about MDM before MDM solutions were created?”

The post is called: Let’s Talk: Do You Really Need an Executive Sponsor for MDM?

And yes, we do need an executive sponsor. We also need a business case, we must avoid doing it big bang style, we need to establish metrics for measuring success, and so on.

All wise things, just as they are wise sayings about data quality improvement initiatives, business intelligence (BI) implementations, customer relationship management (CRM) system roll-outs and almost any other kind of technology-enabled project.

I touched on this subject some years ago in the post Universal Pearls of Wisdom.

So let’s talk:

  • Is an executive sponsor more important for Master Data Management (MDM) than for Business Intelligence (BI)?
  • Is a business case more important for Master Data Management (MDM) than for Supply Chain Management (SCM)?
  • Is big bang style more dangerous for Master Data Management (MDM) than for Service Oriented Architecture (SOA)?

And oh, don’t just tell me that I can’t compare apples and pears.

Data Quality Does Matter!

The title of this blog post is the title of a seminar about data quality and data matching taking place in Copenhagen:

Data Quality Does Matter

The seminar is hosted by Affecto, a data management consultancy firm with a strong presence in the Nordic and Baltic countries, and Informatica, a leading data management tool vendor worldwide.

There will be three sessions at the seminar:

  • First you will learn about steps for working with a data quality platform to improve BI and master data management solutions.
  • Then you will see a walkthrough of the architecture and capabilities of the Informatica Data Quality platform.
  • And finally you shouldn’t miss the session with yours truly on data matching, based on an Informatica Perspectives blog post called Five Future Data Matching Trends.

Hope to see you in Copenhagen, København, Köpenhamn, Kopenhagen, Copenhague, Copenaghen, Hafnia or whatever name you use for that place, as told in the post about data matching and Diversity in City Names.
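As a teaser for the data matching session, here is a minimal sketch in Python of the synonym-table approach to exactly that problem; the entries are illustrative examples of mine, not the seminar material:

    # Resolving diverse spellings of one city to a single canonical entity.

    CITY_SYNONYMS = {
        "copenhagen": "Copenhagen",  # English
        "københavn": "Copenhagen",   # Danish endonym
        "köpenhamn": "Copenhagen",   # Swedish
        "kopenhagen": "Copenhagen",  # German / Dutch
        "copenhague": "Copenhagen",  # French / Spanish
        "copenaghen": "Copenhagen",  # Italian
        "hafnia": "Copenhagen",      # Latin
    }

    def canonical_city(name):
        """Return the canonical city name, or None for an unknown spelling."""
        return CITY_SYNONYMS.get(name.strip().lower())

    assert canonical_city("KØBENHAVN") == "Copenhagen"
    assert canonical_city("Hafnia") == "Copenhagen"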

Data Quality at Terminal Velocity

Recently the investment bank Saxo Bank made a marketing gimmick with a video showing a BASE jumper trading foreign currency with the bank’s mobile app at terminal velocity (i.e. the maximum speed when free falling).

Today business decisions have to be taken faster and faster in the quest to stay ahead of the competition.

When making business decisions you rely on data quality.

Traditionally, data quality improvement has been done by downstream cleansing, meaning that data are corrected a long time after data capture. There may be some good reasons for that, as explained in the post Top 5 Reasons for Downstream Cleansing.

But most data quality practitioners will say that data quality prevention upstream, at data capture, is better.

I agree; it is better.  Also, it is faster. And it supports faster decision making.

The most prominent domain for data quality improvement has always been data quality related to customer and other party master data. In this quest, too, we need instant data quality, as explained in the post Reference Data at Work in the Cloud.
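To illustrate the upstream approach, here is a minimal sketch in Python. The postal code table is a toy stand-in for a real cloud reference data service, and all names are made up for the example:

    # Validate and standardize a party record at data capture,
    # so nothing is left over for downstream cleansing.

    KNOWN_POSTAL_CODES = {"1050": "København K", "2100": "København Ø"}  # toy reference data

    def capture_party(record):
        errors = []
        if not record.get("name", "").strip():
            errors.append("name is required")
        city = KNOWN_POSTAL_CODES.get(record.get("postal_code", ""))
        if city is None:
            errors.append("unknown postal code")
        elif record.get("city") != city:
            record["city"] = city  # standardize against reference data right now
        if errors:
            raise ValueError("; ".join(errors))  # reject at capture, not months later
        return record

    print(capture_party({"name": "Example ApS", "postal_code": "2100", "city": "Copenhagen"}))
    # The city is standardized to 'København Ø' before the record ever hits the database.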

Data Quality and Decision Intelligence

“The substitute for Business Intelligence is called Decision Intelligence” was the headline of an article on the Danish IT site Version2 last month. The article was an interview with Michael Borges, head of the Copenhagen-based data management system integrator Platon. The article is introduced in English on Platon’s Australian site.

The term Decision Intelligence as a successor to Business Intelligence (BI) has been around for a while. In an article from 2008, Claudia Imhoff and Colin White explain what Decision Intelligence does that Business Intelligence doesn’t. Very simplified, it embraces and integrates operational Business Intelligence, traditional Data Warehouse based Business Intelligence and (Business) Content Analytics.

The article says: “This, of course, has implications for both data integration and data quality. This aspect of decision intelligence will be covered in a future article.” I haven’t been able to find that future article. Maybe it’s still pending.

Anyway, certainly this – call it Decision Intelligence or something else – has implications for data quality.

The operational BI side is about supporting, and maybe having the systems make, decisions based on events taking place here and now, drawing on incoming transactions and related master data. This calls for data quality prevention at data collection time, as opposed to downstream data cleansing, which may have served well for informed decisions in traditional Data Warehouse based BI.

The content analytics side, which according to the Imhoff/White article includes information expertise, makes me think of the ever-recurring discussion in the data quality realm about the difference between data quality and information quality. Maybe we will reach an intelligent decision on that one when Business Intelligence is succeeded by Decision Intelligence.

New Eyes on Iceland

This eighth Data Quality World Tour blog post is about Iceland.

Patronymics

Rather than using family names, Icelanders use patronymics. This means that the first Icelandic President, Sveinn Björnsson, must have been the son of Björn, and I guess the current Prime Minister, Jóhanna Sigurðardóttir, is the daughter of a Sigurður. This must create some havoc for well-proven algorithms for finding households, as the sketch below shows. (Add to that that the Prime Minister is in a same-sex marriage.)
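As a crude illustration (the names, the address and the matching rule are all made up for the example), here is a common householding heuristic in Python that keys on shared address and last name, and how it fares on an Icelandic family:

    # A naive householding rule: same address + same last name = same household.

    people = [
        {"name": "Björn Jónsson",      "address": "Laugavegur 1"},  # father
        {"name": "Anna Sveinsdóttir",  "address": "Laugavegur 1"},  # mother
        {"name": "Sveinn Björnsson",   "address": "Laugavegur 1"},  # son of Björn
        {"name": "Helga Björnsdóttir", "address": "Laugavegur 1"},  # daughter of Björn
    ]

    def naive_household_key(person):
        last_name = person["name"].split()[-1]
        return (person["address"], last_name)

    print(len({naive_household_key(p) for p in people}))
    # 4 - the rule splits one real household into four, since patronymics
    # give every family member a different "last name".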

Volcanoes

In the good old days, air traffic wasn’t concerned with the recurring volcanic eruptions on Iceland. Today they seem to be a repeating cause of travel havoc. A bit like how poor data quality wasn’t taken seriously in the good old days, but today dirty data creates havoc in business intelligence implementations.

Previous Data Quality World Tour blog posts: