In practice, we have found that the opacity of higher ed silos can complicate these questions.
We work with universities to scale a single definition of success and to map the flow of related student actions and decisions across the whole life cycle. This work is foundational to delivering proactive micro-guidance before each desired action. Over the past few years, we’ve collected examples that illustrate the value of each of the four questions.
Tackling the first question unearths the inherent complexity and assumptions that surface when forcing transparency across different silos in one system. The student flow map must be built from unambiguous, trackable student action points (applied/did not apply, showed/did not show, turned in first assignment on time/did not, scheduled appointment with career office/did not, etc.). And each point must be linked to higher yield, retention, and the ultimate win of being employed.
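The structure described above — binary, trackable action points, each linked to an institutional outcome — can be sketched in code. This is a minimal illustration, not a product implementation; all names (`ActionPoint`, `StudentFlow`, the example action labels) are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: each flow point is an unambiguous, binary student
# action linked to the outcome it is believed to drive.
@dataclass
class ActionPoint:
    name: str            # e.g. "first_assignment_on_time"
    completed: bool      # did / did not — no ambiguous middle ground
    linked_outcome: str  # e.g. "yield", "retention", "employment"

@dataclass
class StudentFlow:
    student_id: str
    points: list = field(default_factory=list)

    def record(self, name: str, completed: bool, linked_outcome: str) -> None:
        self.points.append(ActionPoint(name, completed, linked_outcome))

    def incomplete(self) -> list:
        # The not-yet-completed points are the candidates for
        # proactive micro-guidance before the desired action.
        return [p.name for p in self.points if not p.completed]

flow = StudentFlow("S-1001")
flow.record("applied", True, "yield")
flow.record("attended_orientation", True, "retention")
flow.record("first_assignment_on_time", False, "retention")
print(flow.incomplete())  # -> ['first_assignment_on_time']
```

The point of the binary constraint is that every entry is trackable without human judgment, which is what makes automated, proactive guidance possible downstream.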
Starting out, we learned to expect multiple answers within and across silos for the same question. Getting to the correct answer is a journey that unearths important sources of misinformation, outdated assumptions, and “data deserts” within the institution. Usually, the correct answer is found in the weeds and at the frontlines. However, this bottom-up fix requires a top-down endorsement from the start. Another important lesson came when our engineers proposed we adapt Agile management principles to make this process more efficient.
Example flow point: Let’s assume success securing financial aid is linked to higher re-enrollment rates. Often, our request for basics such as financial aid deadlines initially yields multiple, conflicting answers. Finding and charting that flow point (in a way that can be updated without a repeat scavenger hunt) creates a new trackable, proactive decision/action point for retention in the flow map.
This example represents a mutual win. The student improves his or her financial options, and the system benefits by realizing higher retention. Further, as students apply for aid, additional reward loops could increase the frequency of applications and thus the probability of achieving the goal and receiving aid.
The second and third questions get at the nature of current system-wide rewards and whether students perceive the intended rewards as rewards at all. One of the most ubiquitous reward practices in higher ed is the personalized acceptance letter or call; however, this practice is not a reward in the “active” sense. The next iconic reward is the frameable degree, the culmination of years of action. What lies in between?
One possible example: Many students set alarms to dive into re-enrolling at the first possible second to score the desired courses. In that moment, an effective visual could indicate that, with the courses selected, the student is x% closer to the next level [insert added power and control here]. The inserted power and control could align to a student’s specific motivation for getting to graduation, such as employability, or to a specific experience sequenced into the student flow, such as an anticipated junior year abroad.
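The “x% closer” indicator described above amounts to simple arithmetic over credits. The sketch below is one hypothetical way to compute it; the function name, the credit-based measure, and the 120-credit degree are all illustrative assumptions, not a prescribed design.

```python
# Hypothetical sketch: the "x% closer" progress shown at re-enrollment,
# counting the just-selected courses toward the degree requirement.
def progress_after_selection(completed_credits: int,
                             selected_credits: int,
                             required_credits: int) -> float:
    """Percent of the requirement reached once selected courses count."""
    total = min(completed_credits + selected_credits, required_credits)
    return round(100 * total / required_credits, 1)

# A student with 60 of 120 credits selects 15 more credits of courses.
before = progress_after_selection(60, 0, 120)   # 50.0
after = progress_after_selection(60, 15, 120)   # 62.5
print(f"These courses put you {after - before:.1f}% closer to graduation.")
```

The same percentage could be computed against a nearer milestone (e.g., eligibility for a junior year abroad) so the indicator aligns with whichever motivation the student has expressed.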
All of this can be automated without disrupting any existing process or technologies, or requiring system staff to learn new skills. And, unlike risk-flagging methods, the moment and the solution are resolved together. Risk flagging involves some determination of recourse or intervention by personnel and generally serves only 10 percent or so of any population. This last point highlights why a technique that scales personal success without burdening or requiring adoption by operators and faculty is worthy of consideration.