The Internet of Things and the Fat-Finger Syndrome

When coining the term “the Internet of Things”, Kevin Ashton said:

“The problem is, people have limited time, attention and accuracy—all of which means they are not very good at capturing data about things in the real world.”

Indeed, many data quality flaws are due to a human typing the wrong thing. We usually don’t do that intentionally. We do it because we are human.

Typographical errors, and their sometimes dramatic consequences, are often referred to as the “fat-finger syndrome”.
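Tooling can catch many of these slips before they spread. Below is a minimal sketch in Python of one common technique: comparing a newly typed value against existing reference values with edit distance and flagging entries that are only a keystroke or two away. The customer names and the threshold are illustrative assumptions, not anything from this post.

```python
# A minimal sketch of catching fat-finger typos by comparing new entries
# against existing reference values with edit distance. The name list and
# threshold below are illustrative assumptions.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

known_customers = ["Acme Corporation", "Globex Inc", "Initech"]

def looks_like_typo(entry: str, max_distance: int = 2) -> str | None:
    """Return an existing value the new entry is suspiciously close to, if any."""
    for known in known_customers:
        distance = edit_distance(entry.lower(), known.lower())
        if 0 < distance <= max_distance:
            return known
    return None

print(looks_like_typo("Acme Coproration"))  # -> "Acme Corporation"
```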

As reported in the post Killing Keystrokes, avoiding typing is one way forward, for example by sharing data instead of typing in the same data (in slightly different ways) within every organization.
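As a rough illustration of what “sharing data instead of typing it in” can look like, the sketch below pulls a company’s master record from a shared reference service by its registration number rather than re-keying name and address in yet another system. The service URL and field names are hypothetical, not a real service.

```python
# A minimal sketch of onboarding from a shared reference source instead of
# re-keying the data locally. The endpoint and field names are hypothetical.

import json
from urllib.request import urlopen

REFERENCE_SERVICE = "https://reference.example.com/companies/"  # hypothetical

def onboard_company(registration_number: str) -> dict:
    """Fetch the shared golden record instead of typing the details again."""
    with urlopen(REFERENCE_SERVICE + registration_number) as response:
        record = json.load(response)
    # Keep only the fields needed locally; the shared source stays the system of record.
    return {
        "registration_number": registration_number,
        "legal_name": record["legal_name"],
        "address": record["address"],
    }
```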

The Internet of Things, providing common access to data from a huge number of well-defined devices, is another development that helps avoid typos.

It’s not that data coming from these devices can’t be flawed. As debated in the post Social Data vs Sensor Data, there may be challenges in sensor data due to errors made by the humans setting up the sensors.

Also, misunderstandings by humans when combining sensor data for analytics and predictions may cause consequences as bad as those from the traditional fat-finger syndrome.
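One way to catch such setup errors early is to run plausibility rules on the readings as they arrive. The sketch below shows simple range checks of that kind; the field names and ranges are illustrative assumptions, not taken from this post.

```python
# A minimal sketch of plausibility checks on incoming sensor readings, the
# kind of rule that can catch a sensor configured with the wrong unit or
# placed in the wrong location. Ranges and field names are assumptions.

PLAUSIBLE_RANGES = {
    "temperature_c": (-40.0, 60.0),   # outdoor air temperature in Celsius
    "humidity_pct": (0.0, 100.0),
}

def validate_reading(reading: dict) -> list[str]:
    """Return a list of data quality issues found in one sensor reading."""
    issues = []
    for field, (low, high) in PLAUSIBLE_RANGES.items():
        value = reading.get(field)
        if value is None:
            issues.append(f"missing {field}")
        elif not low <= value <= high:
            issues.append(f"{field}={value} outside plausible range [{low}, {high}]")
    return issues

# A Fahrenheit value sent where Celsius was expected stands out immediately.
print(validate_reading({"temperature_c": 98.6, "humidity_pct": 45.0}))
```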

All in all, I guess we won’t see a decrease in the need to address data quality in the future; we will just need different approaches, methodologies and tools to fight bad data and information quality.

Are you interested in what all this will be about? Why not join the Big Data Quality group on LinkedIn?


2 thoughts on “The Internet of Things and the Fat-Finger Syndrome”

  1. Richard Ordowich 26th May 2013 / 12:30

    There is an assumption that the new ICD-10 codes (International Statistical Classification of Diseases) being adopted in US healthcare will improve healthcare. We are transitioning from approximately 18,000 codes to 140,000. Some have stated these codes will make staff “devote more time and energy toward coding, which may detract from patient care.” However, setting those arguments aside and focusing on data quality: imagine maintaining the quality of 140,000 codes!

    We begin with the metadata for these codes: the naming, definitions and meanings of each code. This metadata was created by humans, with all their frailties. Then imagine the instances of these codes: the doctor diagnosing, the nurse diagnosing and selecting the codes, the clerical staff transposing the codes, the billing department “adjusting” the codes, the insurance company interpreting these codes, and so on. I would imagine billions of recorded instances of these codes. What is the probability of errors?

    This is an example of data created and reported by humans. I don’t think the results will be of high quality. Sensors in and on the human body that report the data directly will yield better-quality data. Until then we will spend an inordinate amount of money and resources manipulating the 140,000 codes. I guess that will help with unemployment for a while.

  2. AFFÄRSBLOGGEN – Bernt-Olof Hellgren 27th May 2013 / 09:29

    Typing errors are why companies should automate the onboarding of new customers and vendors through connections to quality-assured external reference databases. It does not solve all problems, but it helps a lot.
