Many outcomes/standards are really broad statements of policy: promissory notes for what will be taught. As written, they cannot be used directly for assessment. Look critically at each one and break it down into smaller parts as needed, so that each is expressed in clear language others will understand and can observe. These become the basis for your system rubrics and for the selection of key assessments.
A good provider will roll up their sleeves and dive into this work with you. They should also advise you on how best to set up the system to yield valid data (data that statistically measures what it claims to measure) within a sound research design.
3) The past is past
This is a tough one. Chuck most of your extant rubrics. Too often, schools drive faculty into a semester's worth of committee work to collect or generate rubrics and then map them to as many outcomes as possible. This has two effects. First, after all that labor, you will feel obligated to use these rubrics.
The second, bigger problem is that these rubrics do not usually assess competency over time in a way that yields high inter-rater reliability. They were built to produce a course grade, normed for the rubric author's own students, and they contain many criteria that have nothing directly to do with the outcome, even if the assignment is "holistically" related.
On close examination, you will find that many of the criteria in these rubrics have nothing to do with the actual outcomes to which they are linked. They are task-specific but not competency-relevant. Put another way, by mixing a lot of unrelated data with data that does measure the outcome, you end up not measuring what you say you measure. Instead, leave your faculty alone to teach, improve instruction, and collaborate on developing more authentic assessments.
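If "inter-rater reliability" feels abstract, it is typically quantified with an agreement statistic such as Cohen's kappa: two raters score the same student work against the same criterion, and you check how much their agreement exceeds chance. The sketch below is purely illustrative; the rater names and scores are invented, not data from any school.

```python
# Minimal sketch (assumed example, not from the original post): Cohen's kappa
# for two raters scoring the same ten student artifacts on one rubric criterion.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement beyond chance between two raters over the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in categories) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical 1-4 rubric scores from two raters on the same ten artifacts.
rater_1 = [3, 4, 2, 3, 3, 1, 4, 2, 3, 4]
rater_2 = [3, 4, 2, 2, 3, 1, 4, 2, 3, 3]
print(round(cohens_kappa(rater_1, rater_2), 2))  # values near 1.0 indicate strong agreement
```

Rubrics written only to justify a course grade rarely produce values anywhere near the top of that scale when different faculty apply them.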
4) Avoid the big bucket o’data effect
Holistic linking (see above) invariably leads to the "big bucket effect." Some schools arrive with initial system designs that call for more than 2,500 criterion links to ONE standard set. This happens when people confuse "coverage" (curriculum mapping) with "direct measurement of competency" (assessment). The former is very helpful for planning instruction, but not for planning what to assess, when, and how. Such designs can tell you nothing specific about what a student can or cannot do.
In a strategically structured, systematic design, the number of linked criteria would be closer to 130-160 for a single program's core standard set. Anyone looking at a criterion link should see a clear semantic relationship between what was measured (the criterion) and the goal of measurement (the outcome) to which it is linked. If not, the validity of the entire system is placed in doubt.
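One way to keep a design honest is to audit the proposed links before building anything. The sketch below is an assumption of mine, not the author's system or any vendor's tool: it simply tallies criterion links per standard set and flags any set that has ballooned past the focused range suggested above.

```python
# Illustrative sketch only (data structure, names, and threshold are assumptions):
# tally criterion links per standard set and flag "big buckets."
from collections import defaultdict

# Hypothetical links: (criterion id, standard set it is mapped to).
links = [
    ("ENG101-essay-thesis",    "Program Core Standards"),
    ("ENG101-essay-citations", "Program Core Standards"),
    ("BIO205-lab-analysis",    "Program Core Standards"),
    # ... one tuple per criterion link in the proposed design
]

def links_per_standard_set(link_list):
    """Count how many criterion links point at each standard set."""
    counts = defaultdict(int)
    for _criterion, standard_set in link_list:
        counts[standard_set] += 1
    return dict(counts)

def big_buckets(link_list, upper_bound=160):
    """Return standard sets whose link count exceeds the suggested upper bound."""
    return {s: n for s, n in links_per_standard_set(link_list).items() if n > upper_bound}

print(links_per_standard_set(links))                      # e.g. {'Program Core Standards': 3}
print(big_buckets(links) or "No oversized buckets found")  # empty dict means the design stays focused
```

A count alone cannot judge whether each link is semantically sound, but a set carrying thousands of links is a reliable sign that coverage mapping has been mistaken for measurement.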