You don’t have to look far to understand that data is arguably an organization’s most valuable asset. The Economist declared that “The world’s most valuable resource is no longer oil, but data,” while Facebook is being scrutinized over its handling of data and how it may have been used to influence the 2016 U.S. presidential election. However, many higher education institutions fail to recognize the value of the data they hold beyond their day-to-day operational needs.

In 2015, Indiana University embarked on the Decision Support Initiative (DSI). Our goal was to improve decision making at all levels of the university by dramatically enhancing the availability of timely, relevant, and accurate information to support decision makers.

Higher ed institutions produce an abundance of data from our systems of record (financial, student, HR, learning management, etc.) that can inform decision makers, but the data is often not accessible in a timely manner to the people who need it. Decision makers rely on analysts who "know the data": where to find it and how to massage it for the question of the day. The result? One-time solutions that must be recreated many times, built on the personal knowledge individual analysts have acquired, with varying results. DSI strives to put information directly in the hands of decision makers, using well-curated and documented data sources, so they can make data-informed decisions.

Cultural change
Indiana University has historically had a distributed structure and culture for decision support. In most cases, units have done their own decision support using self-service capabilities against a central data warehouse. Data has been amassed from our systems of record and modeled with the hope that all the institution’s questions can be answered. In practice, a large portion of our data warehouse is never used and only those analysts with appropriate privileges are granted access, forcing us to rely on those who “know the data.”

The long cycle times and coordination costs associated with our traditional methods for gathering data and making it available to decision makers were no longer acceptable. Moreover, we had no visibility into whether our data was actually being used for better decision making. These challenges drove us to rethink our processes and how to better support our decision makers.

Know the decisions you’re supporting
While it may seem obvious to start with defining the decisions to be supported, we typically hear, “I need a report that shows….” These requests end with a deliverable that may be what a user wants, but not what they need to make better decisions. We start by defining the decisions to be supported. For example:

  • "As a department chair, I need to decide the number of courses to offer to meet student demand given staffing limitations."
  • "As a department chair, I need to decide when to offer courses to accommodate student need while making effective use of classroom space."

Knowing the questions you want to answer as a first step is critical for success.

Accelerating outcomes through Agile Business Intelligence (BI)
Agile software development methodologies like Scrum and Extreme Programming have become the new norm for software development. The adoption of Agile principles within the BI domain, while now more common, has lagged compared to traditional software development.

Through Agile BI, we focus on developing a solution that addresses business questions. Past practices concentrated on delivering data into a central repository based on an analyst's requests and usually required significant data modeling and transformation. By focusing on the specific business questions, we narrow the scope of the data we need, allowing us to progress more quickly to using tools to explore and visualize data. From there we iterate, based on feedback, keeping the scope narrow with the intent of adding value with each iteration.

Tools for better decision making
Historically, our approach was:

  1. Create a copy of the production database
  2. Build a data warehouse data model conducive to end-user reporting
  3. Build data transformation scripts to load the data into the warehouse

Using this approach, it took many weeks or even months to make data available in the data warehouse, depending on the complexity of the source system. With our data increasingly residing in the cloud, that timeline gets even longer.
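The three steps above can be sketched in miniature. This is a hypothetical illustration, not IU's actual pipeline; the table names, columns, and data are invented, and in-memory SQLite stands in for both the production copy and the warehouse:

```python
import sqlite3

# Step 1: a (tiny) stand-in for the copy of the production database.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE enrollments (student_id INT, course TEXT, term TEXT)")
src.executemany("INSERT INTO enrollments VALUES (?, ?, ?)",
                [(1, "CS101", "FA18"), (2, "CS101", "FA18"), (3, "BIO200", "FA18")])

# Step 2: a warehouse model reshaped for end-user reporting,
# here one row per course per term.
wh = sqlite3.connect(":memory:")
wh.execute("CREATE TABLE course_demand (course TEXT, term TEXT, headcount INT)")

# Step 3: a transformation script that aggregates source rows
# and loads them into the warehouse table.
rows = src.execute(
    "SELECT course, term, COUNT(*) FROM enrollments GROUP BY course, term"
).fetchall()
wh.executemany("INSERT INTO course_demand VALUES (?, ?, ?)", rows)

print(wh.execute("SELECT * FROM course_demand ORDER BY course").fetchall())
# e.g. [('BIO200', 'FA18', 1), ('CS101', 'FA18', 2)]
```

Even in this toy form, the pattern shows why the approach is slow at scale: every new question can mean a new warehouse table and a new load script.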

By using data virtualization software, we are able to quickly connect to disparate data sources wherever they reside. Data virtualization lets you easily centralize, standardize, and secure data across the enterprise, no matter the location, design, or platform of the data source.
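The contrast with the warehouse approach can be sketched as a federated query that joins two sources at read time, with no load step. This is a conceptual illustration only, using pandas and invented data rather than a real data virtualization product, which would connect to the live sources directly:

```python
import sqlite3
import pandas as pd

# Source A: an on-premises student system (hypothetical schema and data).
student_db = sqlite3.connect(":memory:")
student_db.execute("CREATE TABLE courses (course TEXT, dept TEXT, seats INT)")
student_db.executemany("INSERT INTO courses VALUES (?, ?, ?)",
                       [("CS101", "CS", 120), ("BIO200", "BIO", 60)])

# Source B: a cloud HR feed, arriving as a DataFrame (also hypothetical).
hr = pd.DataFrame({"dept": ["CS", "BIO"], "instructors": [14, 9]})

# The "virtual view": join the sources on demand; nothing is copied
# into a central warehouse ahead of time.
courses = pd.read_sql("SELECT * FROM courses", student_db)
view = courses.merge(hr, on="dept")
print(view[["course", "seats", "instructors"]])
```

The design point is the same one data virtualization tools make: the integration logic lives in the view definition, so a new source or a new question changes a query, not a load pipeline.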

Good enough for decision making
Finally, it is important that decision makers understand the data they are using. Data provided to decision makers must be well defined in business terms so that decision makers are well informed. While we strive for high data quality, our goal is "good enough for decision making," not necessarily perfect. Understanding the decisions being supported by our data, along with the tolerance for error, can eliminate countless hours of research that often yields few meaningful results.

About the Author:

Aaron Neal leads the enterprise software team at Indiana University (IU) and is responsible for decision support, enterprise systems integration, database and server administration, identity management and storage, and server infrastructure. These systems and infrastructure provide the foundation for the enterprise systems used at IU.