How Analytics 3.0 Will Subvert the Dominant Paradigm


  • Susan Grajek, VP Communities and Research, EDUCAUSE
  • Vince Kellen, CIO, UCSD
  • Jenn Stringer, Deputy CIO & AVP, UC Berkeley (not present)
  • John Suess, VP of IT and CIO, University of Maryland, Baltimore County


JS: We’ve been rethinking and swapping out a lot of our technology recently; what does this mean for our analytics?

VK: my background has been in data since my 20s. I’ve been in a range of institutions and verticals since then, but higher ed for some time now. The way we go about analytics is archaic! Scale-out streaming technology has come a really long way recently. It’s so much faster than the traditional way of moving flat files around. We’ve got low-cost storage and serverless technologies. Virtually all of our mental models for manufacturing analytics are inherited from the 20th century. Here are some rules I’d like to put forth:

  1. we need to conceptualize things as verbs
  2. express things with maximum semantic complexity
  3. build provisionally
  4. design for the speed of thought
  5. waste is good
  6. democratize the data

Data analysis should be viewed as proximal to the business rules; sharing is power. It requires organizational incentives to make it happen, though. I believe that to compete in the 21st century, we need to follow the rules above.
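The contrast Kellen draws between moving flat files around and scale-out streaming can be sketched in miniature. This is a hypothetical illustration (not UCSD’s actual pipeline): a batch computation must wait for the whole “file” to land, while a streaming computation yields an up-to-date answer after every event.

```python
def batch_average(records):
    """20th-century style: wait for the whole flat file, then compute once."""
    return sum(records) / len(records)

def streaming_average(stream):
    """Streaming style: update the running answer as each event arrives."""
    count, total = 0, 0.0
    for value in stream:
        count += 1
        total += value
        yield total / count  # an answer is available after every event

# Hypothetical event values, e.g. quiz scores arriving one at a time.
events = [70, 80, 90, 100]
print(batch_average(events))                 # one answer, after all data lands
print(list(streaming_average(events)))       # answers available along the way
```

Both approaches reach the same final number; the difference is that the streaming version never had to wait, which is what makes it feel so much faster than nightly flat-file transfers.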

JS: talk for a moment about your research; is analytics going through a paradigm change?

SG: I think it is; I think higher ed is going through a paradigm change. It’s going to be an unevenly distributed change, though. How do you do it proactively and affordably? Many institutions are purchasing predictive analytics tools (29%), and the two-year change in this trend’s influence on IT strategy is significant.

JS: when you look at the stats on these things, you’ll notice that there are a lot of us who are doing them. Our institutions have mountains of data, but much of the data we’ve got has not been incorporated into vendor models. We’ve been using Caliper learning analytics data running in AWS, allowing us to build a 360-degree view of students. When classes make heavy use of an LMS, this is great, but when they don’t, not so much. Capturing and processing student data depends on the systems feeding them (i.e. attendance systems feeding early warning systems). Unfortunately, these projects typically can’t be done in less than six-month stints.
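For context, Caliper (the 1EdTech/IMS Global learning analytics standard mentioned above) models learning activity as JSON events built around an actor/action/object triple. A minimal sketch of that shape, with all identifiers and values made up for illustration:

```python
# Hypothetical Caliper-style learning event, sketched as a Python dict.
# The actor/action/object/eventTime fields follow the general Caliper
# pattern; every identifier and value below is invented for illustration.
caliper_event = {
    "type": "NavigationEvent",
    "actor": {"id": "https://example.edu/users/12345", "type": "Person"},
    "action": "NavigatedTo",
    "object": {
        "id": "https://example.edu/courses/bio101/modules/week3",
        "type": "DigitalResource",
    },
    "eventTime": "2019-10-15T14:05:00.000Z",
}

def has_caliper_core_fields(event):
    """Check for the minimal actor/action/object/eventTime shape."""
    required = {"type", "actor", "action", "object", "eventTime"}
    return required.issubset(event)

print(has_caliper_core_fields(caliper_event))  # True
```

The point of the 360-degree view is that events of this shape can flow in from many systems, not just the LMS, which is why coverage suffers when a class doesn’t generate them.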

SG: the three pillars of Dx are culture, workforce, technology. It’s not just the technology. How do you put together a revolutionary analytics approach that will work with your institution’s values?

VK: if you try to get buy-in all up-front, you’ll never get anywhere. How many of you have experienced data wars in your institutions? (a lot of hands raised). Segregation of duties is job one…the CIO must be able to control stewardship of data. You need to have the inspiration to build the infrastructure to support it. You also need an IT organization to build this without the shackles of the 20th century. I needed to learn my own language!

JS: I didn’t have the same level of authority as Vince. I had to figure out how to work closely with my provost. We needed this to support the provost and academic affairs directly. If you can’t say you own the data, you need to know how to work with everyone to make things work properly.

SG: we have an opportunity to rethink the relationship with our data science academic colleagues…how do we avoid making the same mistakes we made with computer science departments in years past?

VK: tap into the motivations of your faculty. Support graduate students, support grants, find those common interests or you’re not going to get very far.

JS: about three years ago, we started our internal data science efforts with our undergraduate students. We partnered with the departments and started working on a couple of problems. The students were getting experience with real-world problems, like predicting who was going to come to our university so that we could better allocate our marketing dollars toward the students our efforts would influence. We’ve also been incorporating Jupyter notebooks into our work. We now partner with our CS department and are leaning into the research side.

SG: let’s talk about the money. How does this become affordable to most institutions?

VK: for us, it’s clear we need to do more research with, pound for pound, fewer administrators than we’re used to. We should be running much of this using AI so we can spend more of our money on educating students, which is what taxpayers want. We need to increase the productivity of individuals.

JS: we’ve historically spent a lot of money on tools, but usage has not been as strong as we’d like. More open source and more cloud give us a more flexible platform that will allow us to invest in staffing more appropriately. If you make a dent in student success, it can serve as a significant payback mechanism.

Question: how do you grow your own capabilities? Most vendors say “give us your stuff and we’ll figure out your problems for you.”

JS: we’re trying to move away from that model.

VK: if you’re improving instruction, great! If not, we’ll reverse engineer your work pretty quickly and share our work widely.

Question: how do you put an end to the data wars/data hoarders?

VK: when you unleash a system with a breadth of information that people find useful, it takes on a self-correcting role. There’s nothing more corrosive than a very powerful person who has something that they need to make come true.

Question: this conversation is about the institution, not the student.

VK: we engage our students liberally. It’s a balance of concerns; we can learn a lot about this from our peers in the medical industry.

JS: through degree audit systems, we’re providing some windows into student data for students.