3) Inferred Content Data: “How well does a piece of content ‘perform’ across a group, or for any one subgroup, of students? What measurable student proficiency gains result when a certain type of student interacts with a certain piece of content? How well does a question actually assess what it intends to?” Ferreira wrote. “Efficacy data on instructional materials isn’t easy to generate — it requires algorithmically normed assessment items. However, it’s possible now for even small companies to ‘norm’ small quantities of items.”
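Ferreira doesn’t explain on the blog how Knewton norms its items, but the classical statistics a small company might start with are simple to compute. The Python sketch below (toy data and hypothetical numbers throughout, not anything from Knewton) derives two standard ones for each question: difficulty, the share of students who answer correctly, and point-biserial discrimination, how well the item tracks overall performance.

```python
# A minimal sketch of classical item analysis ("norming").
# All response data here is made up for illustration.
import numpy as np

# rows = students, columns = items; 1 = correct, 0 = incorrect
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
])

total_scores = responses.sum(axis=1)

for item in range(responses.shape[1]):
    # Difficulty: proportion of students answering this item correctly.
    difficulty = responses[:, item].mean()
    # Discrimination: point-biserial correlation between the item and the
    # rest of the test (item removed to avoid correlating with itself).
    rest = total_scores - responses[:, item]
    discrimination = np.corrcoef(responses[:, item], rest)[0, 1]
    print(f"item {item}: p = {difficulty:.2f}, r_pb = {discrimination:.2f}")
```

Items that nearly everyone gets right (or wrong), or whose scores barely correlate with the rest of the test, are the ones a norming pass would flag for revision.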

4) System-Wide Data: “Rosters, grades, disciplinary records, and attendance information are all examples of system-wide data,” he wrote on Knewton’s blog. “Assuming you have permission (e.g. you’re a teacher or principal), this information is easy to acquire locally for a class or school. But it isn’t very helpful at small scale because there is so little of it on a per-student basis.”

“At very large scale it becomes more useful, and inferences that may help inform system-wide recommendations can be teased out. But even a lot of these inferences are tautological … or unactionable. So these data sets — which are extremely wide but also extremely shallow on a per-student basis — should only be used with many grains of salt,” Ferreira added.

5) Inferred Student Data: “Exactly what concepts does a student know, at exactly what percentile of proficiency? Was an incorrect answer due to a lack of proficiency, or forgetfulness, or distraction, or a poorly worded question, or something else altogether? What is the probability that a student will pass next week’s quiz, and what can she do right this moment to increase it?” he wrote.
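The post doesn’t disclose how Knewton computes these probabilities, and its models are proprietary. One widely taught technique for this kind of inference is Bayesian Knowledge Tracing, sketched below with illustrative parameters: it updates the probability that a student knows a concept after each answer, and from that belief can predict the chance her next answer is correct.

```python
# A minimal Bayesian Knowledge Tracing (BKT) sketch. The parameter values
# are illustrative guesses, not Knewton's.
P_SLIP, P_GUESS, P_LEARN = 0.1, 0.2, 0.15

def bkt_update(p_know, correct):
    """Posterior probability the student knows the concept after one answer."""
    if correct:
        num = p_know * (1 - P_SLIP)               # knew it and didn't slip
        den = num + (1 - p_know) * P_GUESS        # ...or guessed correctly
    else:
        num = p_know * P_SLIP                     # knew it but slipped
        den = num + (1 - p_know) * (1 - P_GUESS)  # ...or truly didn't know
    posterior = num / den
    # Allow for learning between practice opportunities.
    return posterior + (1 - posterior) * P_LEARN

p_know = 0.3  # prior belief that the student knows the concept
for answer in [True, False, True, True]:
    p_know = bkt_update(p_know, answer)

# A crude stand-in for "what is the probability she passes next week's quiz?":
# the chance her next answer on this concept is correct.
p_next_correct = p_know * (1 - P_SLIP) + (1 - p_know) * P_GUESS
print(f"P(knows concept) = {p_know:.2f}; P(next correct) = {p_next_correct:.2f}")
```

A model like this also speaks to Ferreira’s question about why an answer was wrong: the slip parameter explicitly separates lapses from genuine lack of proficiency.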

