This post was first published on the Next Gen Learning blog.
With A’s doled out in almost half of all undergraduate courses, compared with only 15 percent in 1961, have grades become meaningless?
Ten years ago, Princeton University began limiting A-range grades in each course to 35 percent of students. Now the university appears likely to reverse its efforts to curb grade inflation and instead allow academic departments to set their own grading standards. Such policy shifts, however, are answers to the wrong question.
Grade inflation policies simply underscore the inadequacy of grades as proxies for student learning. With learning standards that vary, professor by professor, across every course at the approximately 4,700 degree-granting institutions in the U.S., it is no wonder that grades are poor indicators of student learning. With no agreed-upon, standardized unit of learning, there is no useful metric that translates across institutions, state borders, and employers.
Much of this comes down to the interdependent architecture of postsecondary institutions: each facet of a college is designed to work only within the brick-and-mortar campus itself. Credits are not easily transferable between institutions; moreover, colleges lack the financial incentives to adopt innovations that would speed up degree completion and lessen the wages students forgo in pursuit of their degrees. Evolving from interdependence to modularity is a complicated transition, one with which companies in all industries grapple, and traditional institutions have so far failed to make it.