As Forbes reports, “big data”—the large data sets that can be managed and analyzed only by increasingly powerful and sophisticated tools—is an expansive and rapidly evolving field.
For now, I’m not going to talk about internally generated information that companies mine primarily for operational and financial efficiency, or about the growing volume of data that machines generate to signal that they need servicing, that they are out of an item, and so on. This is fascinating stuff, but beyond the scope of this blog.
Yet even “limiting” ourselves to customer-oriented data barely shrinks the field. So for my inaugural post, my colleagues and I have created a taxonomy to help us get beyond generalities like big data, zero in on the most useful information, and point out how it can help companies get to new insights.
Why is this important? Because the amount of data generated by digitization will always exceed our ability to store, process, and make sense of it. Don’t just take my word for it.
The celebrated statistician Nate Silver notes that we produce the equivalent of the Library of Congress’s entire collection roughly three times per second, every day. Most of it is irrelevant noise, so unless non-technical businesspeople are clear about the kinds of data being gathered and how to put them to practical use, they will be overwhelmed.