When looking out of the windows from Product Data Lake global headquarters (well, that is also our home office) we see our neighbour, which is the global headquarters of Maersk, a major worldwide operating shipping company.
In all humbleness, we are in very parallel businesses. Maersk is good at moving goods. We are going to move data about those goods: product data, or product information if you like.
The reason for being of a shipping company is that it would be very inefficient if each manufacturer of goods had to arrange and carry out the transportation of their manufactured goods to every distributor around the world. It would be equally inefficient if each distributor had to arrange and carry out the transportation of their range of goods to every reseller or large end buyer.
Until now, this inefficiency has unfortunately been the case when it comes to exchanging data about the goods. Manufacturers are asked by their distributors to provide product information in a different way for each one – most often meaning in a different spreadsheet. And the same craziness repeats itself when it comes to exchanging data between distributors, resellers and large end users of product information.
Inadequate data quality is the enemy of any business. That this holds for e-commerce too was shown in a recent survey from the Danish E-commerce Association (FDIH). Over 7,000 respondents were asked whether they would turn away from a webshop if the product information was incomplete or the product image was bad.
52 % answered that they totally agreed, a further 29 % agreed, 12 % were not sure, 4 % disagreed and 3 % totally disagreed.
The importance of maintaining and publishing adequate product information in order to support self-service sales approaches has been pondered on this blog many times, for example in the post Self-service Ready Product Data.
Having product images of good quality is part of that, and, in addition, you often see missing product images, as reported in the post Image Coming Soon.
By the way: The root cause of incomplete product information and images is the lack of agile, process-driven sharing of this data within business ecosystems. The remedy is the Product Data Lake, and we will be at the Danish E-Commerce Association event in Copenhagen on 13 October 2016. More information about this event here.
Master Data Management (MDM) is a bit more than 10 years old, as told in last year's post Happy 10 Years Birthday MDM Solutions. MDM has developed from two disciplines: Customer Data Integration (CDI) and Product Information Management (PIM). For example, the MDM Institute was originally called The Customer Data Integration Institute and still has this website: http://www.tcdii.com/.
Today Multi-Domain MDM is about managing customer, or rather party, master data together with product master data and other master data domains as visualized in the post A Master Data Mind Map.
You may argue that PIM (Product Information Management) is not the same as Product MDM. This question was examined in the post PIM, Product MDM and Multi-Domain MDM. In my eyes the benefits of keeping PIM as part of Multi-Domain MDM outweigh the benefits of separating PIM and MDM. It is about expanding MDM across both the sell-side and the buy-side of the business, eventually by enabling wide use of customer self-service and supplier self-service.
The external self-service theme will in my eyes be at the centre of where MDM is going in the future. In going down that path there will be consequences for how we see data governance, as discussed in the post Data Governance in the Self-Service Age. Another aspect of how MDM is going to be seen from the outside in is the increased use of third party reference data and the link between big data and MDM, as touched upon in the post Adding 180 Degrees to MDM.
Besides Multi-Domain MDM and the links between MDM and big data, a much-mentioned future trend in MDM is doing MDM in the cloud. The latter is, in my eyes, a natural consequence of the external self-service themes and the increased use of third party reference data.
This weekend I’m in Copenhagen where, unlike in London, I enjoy a bicycle ride.
In the old days I had a small cycle computer that gave you a few key performance indicators about your ride, such as riding time, distance covered, and average and maximum speed. Today you can use an app on your smartphone and have current figures displayed on your smartwatch along the way.
As explained in the post American Exceptionalism in Data Management, the first thing I do when installing an app is to change Fahrenheit to Celsius, the date format to a usable one and, in this context not least, miles to kilometers.
The cool thing is that the user interface on my smartwatch reports my usual speed in kilometers per hour as miles per hour, making me 60 % faster than I used to be. So next year I will join the Tour de France, making Jens Voigt (aka Der Alte) look like a youngster.
Using such an app is also a good example of why we have big data today. The app tracks a lot of data, such as the detailed route on a map with x, y and z coordinates, split speed per kilometer and other useful stuff. Analyzing these data tells me the Tour de France maybe isn’t a good idea. After what I thought was 100 miles, but was actually 100 kilometers, my speed went from slow to grandpa.
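The "60 % faster" above is just the miles-to-kilometers ratio in action. A minimal sketch makes the arithmetic explicit (the speed value is made up for illustration):

```python
# One mile is 1.609344 km, so a speed measured in km/h but read as mph
# looks about 61 % faster than it really is.
KM_PER_MILE = 1.609344

def apparent_speedup(actual_kmh: float) -> float:
    """Relative 'speed gain' when a km/h reading is taken at face value as mph."""
    apparent_kmh = actual_kmh * KM_PER_MILE  # the mislabelled reading, converted back
    return apparent_kmh / actual_kmh - 1.0

print(f"{apparent_speedup(25.0):.0%}")  # roughly 61 % faster on paper
```

The gain is independent of the actual speed, which is why the watch flatters every ride equally.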
That’s a bit like IT projects, by the way. Regardless of timeframe, they slow down in progress after 80 % of the plan has been covered.
Using the royal we is usually reserved for majestic people, but as a person with a being in two countries at the same time, I do sometimes feel that I am we.
So, this morning we once again found our way to London Heathrow Airport for one of our many trips between London and Copenhagen. We have lived in the United Kingdom for the last couple of years but still have many business and private ties with the Kingdom of Denmark, where we (is that was or were?) born, raised and worked, and from where we still hold a passport.
Most public sector and private sector business processes and master data management implementations simply don’t cope with fast-evolving globalization. Reflecting on this, flying over Doggerland, we recall situations where:
We, as a prospect or customer of a global brand, are stored as a duplicate record for each country, as told in the post Hello Leading MDM Vendor.
You as an employee in a multi-national firm have a duplicate record for each country you have worked in.
People moving between countries are still treated as an exception not covered by adequate business rules and data capture procedures. Most things are sorted out eventually, but it always takes a whole lot more trouble than if you are simply born, raised and stay in the same country.
When we landed in Copenhagen this morning we (is that was or were?) able to use the new local smart travel card to travel on with public transit. But getting the card wasn’t easy, we remember. With a foreign address you can’t apply online. So we had to queue up at the Central Station, fill in a form and explain that we don’t have an official document with our address in the UK – and we avoided explaining the shocking fact that in the UK your electricity bill is your premier proof of almost anything related to your identity.
What about you? Do you have a being in several countries? Any war stories experienced related to your going back and forth?
A variant of the saying “Know Your Customer” for a football club would be “Know Your Fan”, and indeed fans are customers when they buy tickets. If they can.
FC Copenhagen sailed into stormy waters when they apparently cancelled all purchases for the upcoming Champions League (European soccer’s paramount club tournament) clashes against Real Madrid, Juventus and Galatasaray if the purchaser didn’t have a Danish-sounding name. The reason was to prevent mixing fans of the different clubs, but surely this poorly thought-out screening method wasn’t received well among the FC Copenhagen fans not called Jensen, Nielsen or Sørensen.
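A crude sketch shows why surname screening like this goes wrong. Both the suffix rule and the sample names below are invented for illustration, not taken from any actual system:

```python
# Hypothetical 'Danish-sounding name' screen: flag anyone whose surname
# doesn't end in a common Danish patronymic suffix.
DANISH_SUFFIXES = ("sen",)

def looks_danish(surname: str) -> bool:
    """Naive screen: accept only surnames ending in a Danish suffix."""
    return surname.lower().endswith(DANISH_SUFFIXES)

buyers = ["Jensen", "Nielsen", "Schmidt", "Özil", "Petersen", "Laudrup"]
rejected = [name for name in buyers if not looks_danish(name)]
print(rejected)  # ['Schmidt', 'Özil', 'Laudrup'] - loyal locals blocked by a crude rule
```

Note that even an unmistakably Danish football name like Laudrup fails the check, which is exactly the kind of false positive that made the screening backfire.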
The title of this blog post is the title of a seminar about data quality and data matching taking place in Copenhagen:
The seminar is hosted by Affecto, a data management consultancy firm with a strong presence in the Nordic and Baltic countries, and Informatica, a leading data management tool vendor worldwide.
There will be three sessions on the seminar:
First you will learn about steps for working with a data quality platform to improve BI and master data management solutions.
Then you will see a walkthrough of the architecture and capabilities of the Informatica Data Quality platform.
And finally you shouldn’t miss the session with yours truly on data matching, based on an Informatica Perspectives blog post called Five Future Data Matching Trends.
Hope to see you in Copenhagen, København, Köpenhamn, Kopenhagen, Copenhague, Copenaghen, Hafnia or whatever name you use for that place as told in the post about data matching and Diversity in City Names.
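One common way to handle such city-name diversity in data matching is a simple alias table mapping known variants to a canonical form. A minimal sketch, with an alias table and function name that are illustrative rather than from any real product:

```python
# Hypothetical alias table for the many names of Copenhagen.
CITY_ALIASES = {
    "copenhagen": "Copenhagen",
    "københavn": "Copenhagen",
    "köpenhamn": "Copenhagen",
    "kopenhagen": "Copenhagen",
    "copenhague": "Copenhagen",
    "copenaghen": "Copenhagen",
    "hafnia": "Copenhagen",
}

def canonical_city(name: str) -> str:
    """Map a city-name variant to its canonical form; pass unknown names through."""
    return CITY_ALIASES.get(name.strip().lower(), name)

print(canonical_city("København"))  # Copenhagen
print(canonical_city("Berlin"))     # Berlin (unknown names pass through)
```

In practice such tables are backed by reference data sources rather than hand-maintained, but the lookup-then-canonicalize pattern is the same.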