Do real-time assessment analytics exert looping effects on learners?

Ben Williamson, University of Stirling

The emergence of computer-based assessment, twinned with the rise of big data collection in education, means that international assessment data can increasingly be collected and analysed automatically and in real time. These big data developments in international assessment are largely being led by major commercial organizations. Pearson plc, for example, has established a Center for Digital Data, Analytics, and Adaptive Learning that is intended to ‘make sense of learning in the digital age.’ It has produced a report on the impacts of big data on education that envisions education systems where ‘teaching and learning becomes digital’ and ‘data will be available not just from once-a-year tests, but also from the wide-ranging daily activities of individual students.’ In this context, Pearson researchers are developing a range of digital methods for the collection, analysis and visualization of ‘real-time’ assessment data. These include data analytics methods of pattern detection and recognition, visual analytics that display the data graphically to teachers and students, and machine learning techniques of predictive analysis that anticipate students’ future progress on the basis of their current assessment performance.

Pearson has also published a report calling for a revolution in educational assessment and the related policy process. It calls for a shift in the focus from the governance of education through the institution of the school to ‘the student as the focus of educational policy and concerted attention to personalising learning.’ In particular, the report promotes ‘the application of data analytics and the adoption of new metrics to generate deeper insights into and richer information on learning and teaching,’ as well as adaptive ‘online intelligent learning systems’ to provide ‘ongoing feedback to personalize instruction and improve learning and teaching.’ Pearson’s aim is to shift away from large-scale testing to less obtrusive methods of performance data collection and analysis. Consonant with the wider potentials of data analytics, these approaches combine real-time data tracking of the individual with synchronous feedback and adaptive pedagogic recommender systems.

Ultimately, the data analytics being developed at Pearson anticipate a new form of real-time and ‘up-close’ digital education governance. These analytics capacities complement existing large-scale database techniques of governance conducted at discrete temporal intervals through large-scale testing such as PISA, but also, to some extent, short-circuit those techniques. The deployment of big data practices in assessment is intended to accelerate the timescales of ‘governing by numbers’. It makes the collection of enumerable educational data, its processes of calculation, and its consequences into a real-time and recursive process, one materialized and operationalized up close within the classroom and regulated at a distance by new centers of expertise in digital data analytics, visualization and statistical calculation.

Are there implications for the students themselves who participate in these new kinds of computer-based assessment? Pearson claims that through pattern detection techniques it is able to identify evidence of learning processes that have so far not been theorized or modelled. Indeed, it claims that new forms of data and experience will create a theory gap between the growing volume of data-based results and the theory base available to integrate them, and will lead to the production of new generalizable models of learning processes and progressions. New theoretical understandings and models of learning might then be folded back into the kind of pedagogic resources that Pearson itself produces and promotes to schools, particularly its personalized and adaptive learning applications. This could exert a kind of ‘looping effect’, as Ian Hacking has described it, whereby the data-derived model acts to shape and ‘make up’ the people it purports to measure and represent. In other words, Pearson’s big data-based assessment analytics could become highly consequential to the formation of new models of learning, and thereby to ‘making up’ students as new ‘kinds of people’ who are understood in terms of their data and encouraged, through the pedagogic apparatus of the adaptive classroom, to relate to their own learning in novel ways.

Ben Williamson is a lecturer in the School of Education at the University of Stirling. His research focuses on software, code and data in the governance of education, and he is leading the ESRC-funded project Code Acts in Education.