Developing Digital Learning Resources with the Minimally Viable Product Model
Educators are just starting to appreciate the full potential of big data. For example, big data can be analyzed to create a picture of an individual learner’s course of learning, not just the level of proficiency attained but the way the learner allocated his or her time and used system resources to attain that proficiency.
It can also provide portraits of different learner types in a particular classroom or school or at a district, state, national, and even global level. Shared with individual learners, such findings can enhance their understanding of how they learn and where and how they could most profitably spend additional study time. The findings can give educators insight into the concepts students struggle with and individual student differences. Detailed information about variation across learners can be used to create alternative learning paths and supports that lead to more personalized learning, defined as instructional methods and pace tailored to the needs, preferences, and interests of different learners (U.S. Department of Education 2010a).
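To make this kind of analysis concrete, the sketch below shows how logged interaction events might be aggregated into a per-learner profile of time allocation and proficiency. The event fields, resource names, and scoring rule are illustrative assumptions, not a schema from any particular system described in this report.

```python
from collections import defaultdict

# Hypothetical clickstream events from a digital learning system.
# The field names (learner, resource, seconds, correct) are
# illustrative assumptions, not a schema defined in the report.
events = [
    {"learner": "A", "resource": "video", "seconds": 300, "correct": None},
    {"learner": "A", "resource": "quiz",  "seconds": 120, "correct": True},
    {"learner": "A", "resource": "quiz",  "seconds": 90,  "correct": False},
    {"learner": "B", "resource": "quiz",  "seconds": 60,  "correct": True},
]

def learner_profile(events, learner):
    """Summarize how one learner allocated time and performed."""
    time_by_resource = defaultdict(int)
    attempts, correct = 0, 0
    for e in events:
        if e["learner"] != learner:
            continue
        time_by_resource[e["resource"]] += e["seconds"]
        if e["correct"] is not None:
            attempts += 1
            correct += e["correct"]
    proficiency = correct / attempts if attempts else None
    return {"time_by_resource": dict(time_by_resource),
            "proficiency": proficiency}

print(learner_profile(events, "A"))
# {'time_by_resource': {'video': 300, 'quiz': 210}, 'proficiency': 0.5}
```

Profiles of this sort, computed for every learner, are the raw material for the learner-type portraits and alternative learning paths described above.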
Education researchers can use big data to test the applicability of principles of instruction derived from laboratory-based learning research in new, more authentic contexts and with more learners than ever before.
The growing availability and adoption of sophisticated digital learning systems are changing the nature of learning resources and who develops them, in addition to redrawing familiar development and distribution models. For example, technology developers from disciplines other than education, such as search, gaming, mobile, and social technologies, are imagining and developing new digital learning resources that compete with print-based textbooks and other learning materials. They also bring R&D and evidence approaches and practices that differ from those of the established academic and government R&D communities. These approaches should be considered in the effort to create innovative learning resources more rapidly and to expand the evidence approaches used to make decisions about which resources to adopt and how to improve them over time.
The most widely accepted model today for determining the impact of a learning resource or intervention consists of three stages of research: small investigations testing the principles behind a resource or intervention, somewhat larger studies testing its efficacy under ideal conditions, and effectiveness studies—large-scale multisite randomized controlled trials (RCTs) that test how the intervention works in the real world. Positive findings from each R&D stage are generally a prerequisite for the next.
In learning environments powered by technology, there is both the need and the opportunity to create more, and more timely, guidance for developing, purchasing, and using digital learning resources. An important factor in leveraging this opportunity is accepting that the strongest level of causal evidence is not necessary for every learning resource decision. If an idea has never been tried, justifying high confidence that it will produce positive outcomes is difficult. Yet if digital learning resources are implemented only when confidence levels are high, technology innovation may never occur in education.
To introduce innovations to users in a timely way in the commercial world, industry has evolved an R&D model in which an early-stage innovation—“a minimally viable product” (Ries 2011)—is launched and used on a massive scale, with data collection and analysis occurring simultaneously with widespread adoption rather than before.
The minimally viable product model involves specifying a product, building out its core idea and enough of its features to be useful, and deploying it to see how users react. As users engage with it, the product collects massive amounts of data about user interactions, which are then analyzed for insights into how to continuously refine and improve the product. This model transforms R&D into an iterative process with rapid design cycles and built-in feedback loops as opposed to a linear process with stages.
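A minimal sketch of this build-measure-learn cycle might look like the following. The feature variants, the simulated user responses, and the completion-rate metric are all hypothetical stand-ins; a real product would log actual learner interactions rather than simulating them.

```python
import random

# A toy build-measure-learn loop. The "product" is reduced to a
# per-variant probability that a learner completes a lesson; in a
# real system these numbers would come from logged interactions,
# not simulation. Variant names and rates are hypothetical.
COMPLETION_PROB = {"hints_off": 0.55, "hints_on": 0.62}

def deploy_and_log(variant, n_users=2000):
    """Release a variant and log whether each user completed the lesson."""
    return [random.random() < COMPLETION_PROB[variant] for _ in range(n_users)]

def completion_rate(logs):
    return sum(logs) / len(logs)

# Measure: run the current build and a proposed refinement side by side.
random.seed(1)
rates = {v: completion_rate(deploy_and_log(v)) for v in COMPLETION_PROB}

# Learn: keep whichever variant performed better, then iterate again.
best = max(rates, key=rates.get)
print(rates, "-> deploy:", best)
```

Each pass through such a loop is one of the rapid design cycles described above, with the logged data serving as the built-in feedback.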
When used to develop digital learning resources, this model severs the link between the maturity of an innovation and the scale at which it can be implemented and studied. The model frees early-stage digital products from having to be kept small scale. Because data collection can be embedded in the technology and data analysis can be partially automated, researchers can handle much larger datasets than was possible in the past. This enables them to ask and answer more and different types of questions about learning outcomes and how to improve the product.
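Because collection is embedded in the product, analyses such as estimating item difficulty can run automatically over the full event stream rather than over a hand-gathered sample. The sketch below processes events one at a time in constant memory per item, so it scales to logs far larger than memory; the (item, correct) event format is an assumption for illustration.

```python
from collections import defaultdict

def item_difficulty(event_stream):
    """Stream over logged attempts and compute per-item error rates.

    Uses constant memory per item, so it can consume event logs far
    larger than would fit in memory at once. The (item, correct)
    tuple format is an illustrative assumption.
    """
    attempts = defaultdict(int)
    errors = defaultdict(int)
    for item, correct in event_stream:
        attempts[item] += 1
        errors[item] += not correct
    return {item: errors[item] / attempts[item] for item in attempts}

stream = [("frac_add", True), ("frac_add", False), ("frac_mul", False)]
print(item_difficulty(stream))   # {'frac_add': 0.5, 'frac_mul': 1.0}
```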
This model has advantages when used to develop digital learning resources. When a resource is intended for use as part of formal education, however, educators and developers must be concerned with more than what learners do when using the product. They must also consider whether the learning demonstrated inside the product can also be observed in learners’ actions outside the product—for example, in an independent performance assessment or in performing some new task requiring the same understanding or skill. This is necessary because a student who demonstrates what appears to be understanding of fractions in a digital game may not demonstrate that understanding in another situation. The ability to transfer what one has learned is a challenge in digital learning just as it is in face-to-face learning.
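One simple check for transfer is to pair each learner’s in-product mastery estimate with a score from an independent assessment taken outside the product and examine how strongly the two agree. The sketch below does this with made-up paired scores; a weak correlation would suggest that apparent in-product learning is not carrying over to new tasks.

```python
from statistics import correlation  # Pearson correlation, Python 3.10+

# Made-up paired scores: in-product mastery estimates vs. scores on
# an independent performance assessment taken outside the product.
in_product = [0.9, 0.8, 0.7, 0.6, 0.4]
external   = [0.7, 0.8, 0.5, 0.6, 0.3]

# A low correlation would suggest that what looks like understanding
# inside the product is not transferring to other situations.
print(f"transfer correlation: {correlation(in_product, external):.2f}")
```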
Educators and developers also need to be concerned about disentangling the multiple potential sources of observed learning differences. If the best math teachers gravitate toward a new technology-based resource for instruction, the strong performance of their students is not necessarily caused by that new resource but instead may be the result of the teachers’ skills. To determine whether it enhances student outcomes, a digital learning system must be subjected to research designs in which outcome data are collected outside the system or in which other variables related to student learning, such as teacher skills, are carefully controlled.
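The following sketch illustrates this confounding problem and one statistical adjustment for it, using made-up data in which teacher skill drives both adoption of the resource and student outcomes. The variable names and the linear model are assumptions; ordinary least squares with teacher skill as a covariate recovers the resource’s effect, while the naive comparison overstates it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Made-up data: skilled teachers are more likely to adopt the new
# resource AND their students score higher, so a naive comparison
# of adopters vs. non-adopters is confounded.
teacher_skill = rng.normal(size=n)
uses_resource = (teacher_skill + rng.normal(size=n) > 0).astype(float)
outcome = 2.0 * teacher_skill + 0.5 * uses_resource + rng.normal(size=n)

# Naive estimate: difference in mean outcomes between groups.
naive = outcome[uses_resource == 1].mean() - outcome[uses_resource == 0].mean()

# Adjusted estimate: OLS with teacher skill included as a covariate.
X = np.column_stack([np.ones(n), uses_resource, teacher_skill])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"naive effect:    {naive:.2f}")   # inflated by teacher skill
print(f"adjusted effect: {coef[1]:.2f}") # close to the true 0.5
```

Statistical adjustment of this kind is one option; collecting outcome data outside the system, or randomly assigning the resource to classrooms, addresses the same problem by design.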
U.S. Department of Education, Office of Educational Technology, Expanding Evidence Approaches for Learning in a Digital World, Washington, D.C., 2013.