In what seems like just a few short years, analytics has become one of the most talked-about and celebrated technology innovations to hit higher education. While much of this discussion has highlighted efforts to gain better insight into areas such as retention or student learning, far less attention has been paid to how and why analytics initiatives come about and how prepared institutions are to respond to the forces that drive them.

Rather than growing out of an organic, internally developed strategic initiative to improve student success, analytics initiatives at many (if not most) of the institutions we’ve observed begin as a response to external factors: a state mandate, new legislation, an accreditation review, or bad press. We’ve also observed that these pressures can create a vise-like environment, forcing institutions to gather and analyze data without enough time or preparation and leaving them with an inaccurate or incomplete analytics strategy.

As we reflect on the causes of the analytics movement and prepare for our upcoming analytics panel at the Eduventures Summit on June 9th, we thought it was worth revisiting two stories we heard at the Blackboard Symposium in Austin several weeks ago about initiatives at the California Community Colleges (CCC) and Indian River State College (IRSC).

The California Community Colleges: Understanding Which Metrics Matter

In 1990, the federal Student Right-to-Know Act required all institutions to track a cohort of certificate-, degree-, and transfer-seeking first-time, full-time students over a three-year period and publicly report their completion and transfer rates. Under this legislation, the first report for the California Community College System (CCCS) in 1995 showed an unflattering 34% completion rate and a 23% transfer rate.

Amid some rather bad press, CCCS decided to ask what “success” really meant for its students. In examining the methodology of the Act, CCCS asserted that the definitions of “degree-seeking,” “full-time,” and “transfer” were flawed. These definitions and their corresponding metrics seemed better suited to four-year institutions than to two-year ones, so applying them painted an inaccurate picture of CCCS students’ success.

CCCS therefore decided to develop a different set of definitions and metrics, the Student Progress and Attainment Rate (SPAR), that it believed were much more attuned to the particular nature of community college students. The resulting picture of student success at CCC was much different, with graduation and transfer rates over 50%. Although CCCS’s efforts did not change the Act itself, they did create a different culture across its 113 colleges, all of which now leverage their data and SPAR to find areas where they can improve student success.
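
The methodological point at the heart of the CCCS story is that a reported rate depends on who is counted in the cohort and what counts as success. The sketch below illustrates that point in Python using hypothetical cohort rules and field names; it is not the actual Student Right-to-Know or SPAR methodology, which we do not reproduce here.

    from dataclasses import dataclass

    @dataclass
    class Student:
        first_time: bool
        full_time: bool
        degree_seeking: bool
        completed_award: bool     # earned a certificate or degree within the window
        transferred: bool         # transferred to a four-year institution
        transfer_prepared: bool   # e.g., completed transfer-level coursework

    def narrow_rate(students):
        """Rate over first-time, full-time, degree-seeking students, counting only
        completion or transfer as success (a rough stand-in for the federal cohort)."""
        cohort = [s for s in students if s.first_time and s.full_time and s.degree_seeking]
        successes = [s for s in cohort if s.completed_award or s.transferred]
        return len(successes) / len(cohort) if cohort else 0.0

    def broad_rate(students):
        """Rate over a broader cohort (part-time students included) with a broader
        definition of success (a rough stand-in for a SPAR-style measure)."""
        cohort = [s for s in students if s.first_time and s.degree_seeking]
        successes = [s for s in cohort
                     if s.completed_award or s.transferred or s.transfer_prepared]
        return len(successes) / len(cohort) if cohort else 0.0

    # Toy example: a part-time student who becomes transfer-prepared counts as a
    # success under the broad measure but is excluded from the narrow cohort.
    s = Student(first_time=True, full_time=False, degree_seeking=True,
                completed_award=False, transferred=False, transfer_prepared=True)
    print(narrow_rate([s]), broad_rate([s]))  # 0.0 1.0

The same roster of students can produce very different rates from these two functions, which is exactly the kind of gap CCCS identified between the federal figures and its own.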

Indian River State College: Engaging a Cross-Functional Team

Like CCC, Indian River State College (IRSC) faced a common question: how do we know whether our instruction contributes to student learning? And like CCC, IRSC encountered problems amplified by legislation. In IRSC’s case, two pieces of state legislation, FL Senate Bill 1720 and FL Senate Bill 524, increased the difficulty of answering this question. The former allowed students to opt out of placement tests, and the latter implemented performance-based funding for Florida higher education institutions. As a result, IRSC needed to develop instructional programs for students performing at different academic levels, both to ensure their achievement and to ensure that IRSC wouldn’t suffer in its performance-based funding allocation.

To overcome both constraints, IRSC undertook four main steps. It first established an overall master online course strategy based on the Quality Matters Course Design Standards. As part of this strategy, IRSC then worked to understand which metrics would best capture student success, such as student engagement (e.g., learning management system clicks). Next, IRSC implemented a set of analytics software tools to capture and analyze grades, course and program completion, outcomes, and other data. Finally, and most interestingly, IRSC engaged a broad cross-section of stakeholders (deans, faculty, department chairs, vice presidents, and instructional design teams) to periodically review the outputs of the analyses and regularly intervene at the student and instructional level to refine the instructional model.
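
As a rough illustration of the kind of engagement metric described above, the sketch below counts LMS activity per student per week and flags low-activity cases for review. The event format, threshold, and flagging rule are hypothetical; they are not IRSC’s actual model or tooling.

    from collections import Counter

    def flag_low_engagement(roster, weeks, events, threshold=5):
        """roster: student ids; weeks: week numbers; events: (student_id, week)
        tuples drawn from LMS activity logs. Returns (student, week) pairs whose
        activity falls below the threshold, including students with no activity."""
        counts = Counter(events)
        return [(s, w) for s in roster for w in weeks if counts[(s, w)] < threshold]

    # Toy data: s1 clicks twice in week 1; s2 clicks five times in week 1 and never in week 2.
    events = [("s1", 1), ("s1", 1)] + [("s2", 1)] * 5
    print(flag_low_engagement(["s1", "s2"], [1, 2], events))
    # [('s1', 1), ('s1', 2), ('s2', 2)]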

Suggestions for Your Institution

There may come a time when your institution must analyze its data to meet an external demand. You can head off potential pitfalls in advance by keeping the following in mind:

  • Create a cross-functional team: One key takeaway from these examples is that you should consider forming a cross-functional team to determine metrics, perform the analyses, review the output, and, if needed, take corrective action to ensure that your analytics approach meets your needs.
  • Understand your metrics for student success: Likewise, these examples show that institutions respond best to external demands when they already understand which student success metrics will support their strategy. Without this understanding, institutions run the risk of pulling data from different sources only to find that it isn’t helpful at all.
  • Understand your data inventory: Finally, a metric is only as useful as its supporting data. It is critical, therefore, to know what data you collect and where you store it. Knowing this in advance of any external demand will make it easier to leverage data to help your team respond; a simple starting point is sketched after this list.
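
As one way to make that inventory concrete, the sketch below maps each student-success metric to the systems that hold its supporting data and the office that owns it, then flags gaps. The metrics, source systems, owners, and refresh cadences are hypothetical examples, not a prescribed catalog.

    data_inventory = {
        "course_completion_rate": {
            "sources": ["student information system"],
            "owner": "Registrar",
            "refresh": "end of term",
        },
        "lms_engagement": {
            "sources": ["learning management system activity logs"],
            "owner": "Instructional Design",
            "refresh": "nightly",
        },
        "transfer_rate": {
            "sources": ["student information system", "National Student Clearinghouse"],
            "owner": "Institutional Research",
            "refresh": "annual",
        },
    }

    def inventory_gaps(inventory):
        """Flag any metric that lacks an identified data source or owner."""
        return [metric for metric, details in inventory.items()
                if not details.get("sources") or not details.get("owner")]

    print(inventory_gaps(data_inventory))  # [] once every metric has a source and an owner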

 
