Big Data and the cloud are putting supercomputer capabilities into everyone’s hands. But what’s getting lost in the mix is that the tools we use to interpret and apply this tidal wave of information often have a fatal flaw, CNN reports.
Much of the data analysis we do rests on flawed models, so mistakes are inevitable. And when our expectations outstrip what those models can actually deliver, the consequences can be dire.
This wouldn’t be such a problem if Big Data wasn’t so very, very big. But the amount of data that we have access to is enabling us to use even flawed models to produce what are often useful results. The trouble is that we’re frequently confusing those results for omniscience.
We’re falling in love with our own technology, and when the models fail it can get ugly, because the mistakes scale right along with the data that produces them.
Part of the issue is that the models these computer programs are built on are oversimplified, not that the programs themselves contain errors.
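The distinction matters: a program can execute its model perfectly and still be wrong about the world. A minimal sketch of the idea, assuming a toy scenario where the real process is quadratic but our model assumes a straight line (all names and numbers here are hypothetical, purely for illustration):

```python
# Hypothetical sketch: an oversimplified model can look fine on the data
# it was built from yet fail badly elsewhere, no matter how much data we
# collect. The true process below is quadratic; the model assumes a line.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b (no bugs, just a bad model)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

true_process = lambda x: x * x           # the world is nonlinear
xs = [k / 10 for k in range(101)]        # observed range: 0..10
ys = [true_process(x) for x in xs]

a, b = fit_line(xs, ys)

# Inside the observed range the straight line looks acceptable...
err_inside = abs((a * 5 + b) - true_process(5))
# ...but apply it beyond that range, and the error explodes.
err_outside = abs((a * 100 + b) - true_process(100))

print(err_inside < err_outside)  # prints True: the flaw grows with the stakes
```

The code contains no programming error; the least-squares arithmetic is exact. The failure comes entirely from the modeling assumption, and adding more points from the same range would not fix it.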