
6 tips for launching new assessments


Industry expert shares six important strategies for launching a successful assessment platform

Launching an assessment platform is a huge endeavor, and it can be a redefining event for an institution if kicked off correctly.

By following a few best practices during a new launch, assessment teams can create valuable goals, define objectives, and set a program up for success.

1) Establish a small leadership team

Create and announce a small, three-to-five-person team of designers for the assessment system and process, and assign someone the role of Lead Administrator to liaise with the provider’s personnel. This team works first to create a vision for the system, articulating the proactive, positive, and valuable goals it is meant to serve.

Accreditation, while very important, will not fly as a core vision. This team will also later approach selected faculty for advice about key assessments. Your provider should be able to help you get organized and advise you about optimal approaches given an emerging vision.

2) Forget the technology and focus on clarity first

Develop a clear understanding of what you want to measure and how. DO think of it as research. The vision and methodology drive the design of your system. This same maxim applies to your next stop: outcomes.


Many outcomes/standards are really broad statements of policy, promissory notes for what will be taught. As written, they cannot be used directly for assessment. Look at each one critically and break it down into smaller parts as needed, so that each is expressed in clear language that others will understand and can observe more easily. These become the basis for your system rubrics and the selection of key assessments.

A good provider will roll up their sleeves and dive into this work with you. They should also advise you on how best to set up the system to get valid data (data that measures what it claims to measure) within a sound research design.

3) The past is past

This is a tough one. Chuck most of your extant rubrics. Too often, schools drive faculty into a semester’s worth of committee work to collect/generate rubrics and then to map them to as many outcomes as possible. This has two effects. First, you will feel you must use these rubrics after all that labor.

This creates a second, bigger problem: such rubrics do not usually assess competency over time in a way that yields high inter-rater reliability. They were built to produce a course grade, normed for the rubric author’s own students, and they contain many criteria that have nothing directly to do with the outcome, even if the assignment is “holistically” related.

Therefore, on close examination, you find that many of the criteria in these rubrics have nothing to do with the actual outcomes to which they are linked. They are task-specific but not competency-relevant. Put another way, you end up not measuring what you say you measure, because you mix a lot of unrelated data in with the data that is relevant. Instead, leave your faculty alone to teach, improve instruction, and collaborate on developing more authentic assessments.

4) Avoid the big bucket o’data effect

Holistic linking (see above) invariably leads to the “big bucket effect.” Some schools arrive with initial system designs that call for more than 2,500 criterion links to ONE standard set. This happens when people confuse “coverage” (curriculum mapping) with “direct measurement of competency” (assessment). The former is very helpful for planning instruction, but it is not useful for planning what to assess, when, and how. Such designs can tell you nothing specific about what a student can or cannot do.

A better number of linked criteria in a strategically structured, systematic design would be closer to 130-160 for a single program’s core standard set. Anyone looking at the criterion links should see a clear semantic relationship between what was measured (the criterion) and the goal of measurement (the outcome) to which it is linked. If not, the validity of the entire system is called into question.

5) Gain local control of the machine

Face-to-face training is a must. Arrange for about three days of customized, face-to-face training for the Assessment Design Team. It’s a good idea to add someone to the team who will implement the design. Use these days, cloistered somewhere on campus with cell phones off, to learn how to take charge of the system so that you can confidently implement the design and workflows you have planned. Insist on focusing only on the parts of the system you need right now rather than everything it can do.

6) Communicate just-in-time

Keep faculty periodically up to date, in broad-brush strokes only, not the details. Before you “go live” and do formal training, develop an orientation presentation of about 20-30 minutes for faculty. Demonstrate exactly what a student and a faculty member will do. If you picked the right system, the steps in the workflow will be very easy.

The goal here is to take the mystery out of the process. Develop customized Quick Start manuals and resources and show them to faculty during the orientation. It is crucial to tell everyone how to get help, from whom and when.

Geoff Irvine is CEO of Chalk & Wire. A version of this article originally appeared on the Chalk & Wire blog.



