Adaptive Learning in Higher Education Is Here – Maybe

The “Black Swan” moment for personalized learning is happening around us, after decades of growing sophistication in computing and years of tinkering by academics and the private sector. As the metaphor suggests, the arrival of real, effective individualized learning systems in education may come as a surprise. But realizing their true potential and promise will take R&D collaboration among the research community, higher education, and the private sector to push us over the tipping point. That collaboration is essential, so I’ll use this first post to set up the context and start the conversation. More will follow here in the coming weeks.

Often called personalized learning, and increasingly powered by machine learning, the idea that algorithms, data, and processors can optimize knowledge acquisition for individual learning styles has been around for some time. Interest intensified with the gradual adoption of learning management systems (LMS) in the early 2000s; those systems were highly successful in enabling the administration of learning, but less so in enabling learning itself. And the initial design point for the LMS had fundamentally been undergraduate education.

But with the explosion of online content for formal and informal learning, undergraduates now represent only a small percentage of the total learner market. With the rapid globalization and consumerization of digital learning, non-traditional education consumers have become the vast majority. In a world of rapidly developing data science, the education market, and the increasingly diversified learners it serves, has a prime opportunity to innovate in profoundly new ways.

The ultimate goal of digital teaching and learning is a learning experience in which learners of differing skill sets, backgrounds, and abilities can all succeed at their own pace, at their own level of understanding, and within their individual learning capacity. From K-12 through adult education, neuroscience tells us there is no such thing as an average brain. Yet the world of education has for centuries revolved around the law of averages, and technology-enabled learning, like the textbook before it, is designed for an average, one-size-fits-all learner. The vexing question we need to address: with new technology advancements backed by deep data science, why should we be satisfied with the status quo? Where do we go next?

In actuality, adaptive learning is a concept already realized, or in active development, all around us today. Systems with adaptive learning components are highly visible in algorithms that touch our daily lives: managing our personal finances, discovering new medications, and finding us jobs, books, and dates. These applications carefully learn and organize complex information from the trails of data we leave behind, in real time, in an increasingly digital world.

We have all witnessed the power of the Google search engine and the sophisticated math that serves up results, products, and advertisements pertaining to an individual’s search history and online persona. Another example is Amazon’s recent patent application for “anticipatory shipping,” a system designed to shorten delivery times by predicting what buyers want before they buy it and shipping products to the customer’s general geographic area, and possibly even right to their door, well before the customer actually hits the “buy” button.

These groundbreaking developments in data science have enormous implications for education, and for solving the perennial problem of the inequity ratio: one faculty member responsible for teaching 30 or more students in a classroom. There is tremendous power in looking beyond the concept of “Big Data,” which expresses the scale of our digital environment, to how these machine learning advances can harness the power of “Small Data,” which is about causality. Small data is the individualized signal set that lets a system reach an adaptive state, with a unique and scalable capability to make predictions.
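To make that concrete, here is a minimal sketch of one classic per-learner model, Bayesian Knowledge Tracing. To be clear, this post doesn’t prescribe any particular algorithm, and the parameter values below are purely illustrative; the point is that a single learner’s own answer stream, their small data, is what drives the prediction.

```python
# Illustrative sketch only: Bayesian Knowledge Tracing (BKT), one classic way
# an adaptive system turns a learner's "small data" (their own answer stream)
# into a per-learner mastery prediction. All parameter values are hypothetical.

def bkt_update(p_known, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """Update the estimated probability that this learner has mastered a skill."""
    if correct:
        # Posterior: knew it and didn't slip, vs. guessed without knowing.
        evidence = p_known * (1 - p_slip) + (1 - p_known) * p_guess
        posterior = p_known * (1 - p_slip) / evidence
    else:
        # Posterior: knew it but slipped, vs. genuinely didn't know.
        evidence = p_known * p_slip + (1 - p_known) * (1 - p_guess)
        posterior = p_known * p_slip / evidence
    # Allow for the chance the learner acquired the skill on this step.
    return posterior + (1 - posterior) * p_learn

# One learner's answer stream updates that learner's model; no averages involved.
p = 0.3  # prior mastery estimate
for answer in [True, False, True, True]:
    p = bkt_update(p, answer)
    print(f"estimated mastery: {p:.2f}")
```

An adaptive system can then route each learner to the next activity based on that individual estimate rather than on how the class performs on average.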

The notion of solving one of education’s most vexing challenges, a single teacher or faculty member having to successfully educate 30 or more different nervous systems in a classroom, is powerful. It has the potential to put a teacher at every table, and it is a compelling reason to invest heavily in research and development, in strategic, evolutionary phases. It is also important context that much of the work in adaptive learning aims at Bloom’s 2 Sigma Problem: Benjamin Bloom’s research found that the average student tutored one-to-one using mastery learning techniques performed two standard deviations better than students taught through conventional group instruction.[1]
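As a quick back-of-the-envelope reading of what “two standard deviations” means, and assuming normally distributed scores (my simplifying assumption here, not a claim from Bloom’s paper), a student two sigma above the mean outperforms roughly 98 percent of conventionally taught peers:

```python
# Rough arithmetic on the "2 sigma" effect, assuming normally distributed scores.
from statistics import NormalDist

percentile = NormalDist().cdf(2.0)  # share of peers scoring below mean + 2*sigma
print(f"{percentile:.1%}")  # ~97.7%
```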

Our industry now faces the opportunity, and the challenge, of embracing experimentation and research to move beyond the law of averages. While our current evaluation systems have inherited the theory of norms, rapid advances in data science now give us the basis to develop learning environments that accurately map an individual’s learning potential. To get there, higher education needs to shift its posture toward the private sector in two ways: adopt development processes similar to those of the private sector and consumer markets, and hold commercial entities in the education sector accountable to a greater degree.

More on that, and the transformative power of adaptive learning when done right, in my next post.

###

_____________________

[1] Benjamin S. Bloom, “The 2 Sigma Problem: The Search for Methods of Group Instruction as Effective as One-to-One Tutoring,” Educational Researcher 13, no. 6 (1984): 4–16.