Most raw data, particularly big data, offers little value in its unprocessed state. But by applying the right set of tools, we can pull powerful insights from this stockpile of bits, InformationWeek reports.
In any big data setup, the first step is to capture lots of digital information, “which there’s no shortage of,” said Dr. Michael Wu, chief scientist of San Francisco-based Lithium Technologies, which develops social customer experience management software for businesses.
With data in hand, you can begin doing analytics. But where do you begin? And which type of analytics is most appropriate for your big data environment?
In a phone interview with InformationWeek, Wu explained how descriptive, predictive, and prescriptive analytics differ, and how they provide value to organizations.
“Once you have enough data, you start to see patterns,” he said. “You can build a model of how these data work. Once you build a model, you can predict.”
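The pattern-model-predict flow Wu describes can be sketched in a few lines. This is an illustrative example only: the data points are hypothetical, and the "model" here is a simple least-squares line rather than anything Wu or Lithium specifically uses.

```python
# Sketch of the flow Wu describes: observe a pattern in data,
# fit a model to it, then use the model to predict.
# Data and the least-squares model are illustrative assumptions.

def fit_line(xs, ys):
    """Fit y = slope * x + intercept by ordinary least squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical observed pattern: a roughly linear trend over four periods
xs, ys = [1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8]

slope, intercept = fit_line(xs, ys)

# With a model in hand, we can predict the unobserved fifth period
predicted = slope * 5 + intercept
print(round(predicted, 2))
```

The point is the progression, not the particular model: enough data reveals a pattern, the pattern supports a model, and the model supports prediction.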
The first step: descriptive analytics.
In a March 2013 blog series on this topic, Wu called descriptive analytics “the simplest class of analytics,” one that allows you to condense big data into smaller, more useful nuggets of information.
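That condensing step can be illustrated with a toy aggregation. The records, field names, and channels below are entirely hypothetical; the sketch just shows how descriptive analytics reduces many raw records to a handful of useful summary numbers.

```python
# Illustrative sketch of descriptive analytics: reducing raw event
# records to per-group summary statistics. All data is hypothetical.
from statistics import mean

# Raw "big data": one record per customer interaction
events = [
    {"channel": "forum",   "response_minutes": 42},
    {"channel": "forum",   "response_minutes": 18},
    {"channel": "twitter", "response_minutes": 5},
    {"channel": "twitter", "response_minutes": 9},
]

def summarize(records):
    """Condense raw records into count and average per channel."""
    by_channel = {}
    for r in records:
        by_channel.setdefault(r["channel"], []).append(r["response_minutes"])
    return {ch: {"count": len(vals), "avg_minutes": mean(vals)}
            for ch, vals in by_channel.items()}

print(summarize(events))
# Four raw records become two summary rows: the "smaller nuggets"
```

At production scale the same reduction is typically done with SQL GROUP BY or a dataframe library, but the principle is identical: descriptive analytics summarizes what happened.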