When coining the term “the Internet of Things”, Kevin Ashton said:
“The problem is, people have limited time, attention and accuracy—all of which means they are not very good at capturing data about things in the real world.”
Indeed, many data quality flaws are due to a human typing the wrong thing. We usually don’t do that intentionally. We do it because we are human.
Typographical errors, and their sometimes dramatic consequences, are often referred to as “fat-finger syndrome”.
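As a minimal sketch of one tool-based countermeasure, the following Python snippet flags entries that look like fat-finger variants of already trusted values. The city list and the similarity cutoff are invented for illustration; a real system would match against its own reference data:

```python
import difflib

# Hypothetical reference data: values we already trust.
known_cities = ["Copenhagen", "Stockholm", "Oslo", "Helsinki"]

def flag_fat_finger(entry, known_values, cutoff=0.8):
    """Return the likely intended value if 'entry' looks like a typo,
    or None if the entry is exact or too far from anything known."""
    if entry in known_values:
        return None  # exact match, nothing to flag
    matches = difflib.get_close_matches(entry, known_values, n=1, cutoff=cutoff)
    return matches[0] if matches else None

# A transposed letter is caught and the probable intention suggested:
print(flag_fat_finger("Copenhagne", known_cities))  # -> "Copenhagen"
print(flag_fat_finger("Copenhagen", known_cities))  # -> None (correct entry)
```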
As reported in the post Killing Keystrokes, avoiding typing is a way forward, for example by sharing data between organizations instead of each organization typing in the same data (a little bit differently).
That said, data coming from sensors and other connected devices can be flawed too. As debated in the post Social Data vs Sensor Data, there may be challenges in sensor data due to errors made by the humans setting the sensors up.
Also, human misunderstandings when combining sensor data for analytics and predictions may have consequences just as bad as those of the traditional fat-finger syndrome.
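To make that concrete, here is a minimal sketch of the kind of automated plausibility check that could catch a sensor set up wrongly, say reporting Fahrenheit where Celsius is expected. The sensor types and ranges are invented for the example; a real system would load them from configuration:

```python
# Hypothetical plausibility ranges per sensor type.
PLAUSIBLE_RANGES = {
    "indoor_temp_c": (-10.0, 50.0),   # degrees Celsius
    "humidity_pct": (0.0, 100.0),
}

def check_reading(sensor_type, value):
    """Flag readings outside the plausible range for the sensor type,
    e.g. a sensor misconfigured to report Fahrenheit instead of Celsius."""
    low, high = PLAUSIBLE_RANGES[sensor_type]
    if not (low <= value <= high):
        return f"suspect {sensor_type} reading: {value} outside [{low}, {high}]"
    return None

# 72 looks fine in Fahrenheit but is implausible indoors in Celsius:
print(check_reading("indoor_temp_c", 72.0))  # -> flagged as suspect
print(check_reading("humidity_pct", 45.0))   # -> None, plausible
```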
All in all, I guess we won’t see a decrease in the need to address data quality in the future; we will just need different approaches, methodologies and tools to fight bad data and poor information quality.
Are you interested in where all this is heading? Why not join the Big Data Quality group on LinkedIn?