PIAAC as Coded Agency?

By Cormac O’Keeffe.

Decisions about educational policy-making and spending are frequently informed by data produced by large-scale standardised assessment programmes. The Programme for the International Assessment of Adult Competencies (PIAAC) is the less well-known companion to the Programme for International Student Assessment (PISA). While PISA is designed to assess specific abilities among 15-16 year olds, PIAAC, or ‘PISA for everybody else’, attempts to assess the literacy, numeracy and IT skills of people between the ages of 16 and 65.

One of the things that makes PIAAC so interesting is not so much what it assesses but how it assesses. PIAAC is the first international comparative assessment to have replaced traditional ‘paper and pen’ tests with a digitalised version. Taken separately, neither e-assessment nor the literacy assessment technologies employed within PIAAC are new. Yet together, PIAAC is a novel instantiation of assessment technologies within a larger assemblage of calculative and visualisation techniques.

The PIAAC consortium recognises the innovations and considerable effort that went into digitalising PIAAC. However, the implications of the digital assessment experience are yet to be fully appreciated. Beyond delivering more test items in less time and generally optimising data ‘collection’, performing PIAAC on a laptop was not always recognised as significantly different from the paper and pen versions. The emphasis in the research literature on digital assessment is on equivalence. However, there is emerging evidence that digital assessments are a new and important area of enquiry.

Performing PIAAC

As a household survey, PIAAC’s e-assessment events took place by sending interviewers out into the field in 23 different countries. Once a respondent (a randomly selected citizen) had been contacted by an interviewer and both had settled on a time and place, the assessment events could begin. Interviewers were, however, far from alone: they were accompanied by a variety of software packages.

One of these was called TAO, or Testing Assisté par Ordinateur (Computer Assisted Testing), a workflow engine that managed interactions between test-takers and items during the computer-based assessment. It selected and scored questions drawn from a question databank according to pre-defined rules. Another was called Blaise, a type of Computer Assisted Personal Interview (CAPI) software that managed and coordinated the background questionnaire.
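To make the idea of rule-driven item selection and scoring concrete, here is a minimal sketch, not TAO's actual implementation: the item bank, the routing rule and all identifiers below are hypothetical, invented only to illustrate how pre-defined rules can choose and score questions without human intervention.

```python
import random

# Hypothetical item bank: each item carries a domain, a difficulty
# band and a keyed correct answer (all invented for illustration).
ITEM_BANK = [
    {"id": "LIT-01", "domain": "literacy", "difficulty": "easy", "answer": "B"},
    {"id": "LIT-07", "domain": "literacy", "difficulty": "hard", "answer": "D"},
    {"id": "NUM-03", "domain": "numeracy", "difficulty": "easy", "answer": "A"},
    {"id": "NUM-09", "domain": "numeracy", "difficulty": "hard", "answer": "C"},
]

def next_item(domain, previous_correct):
    """A pre-defined routing rule: offer a harder item after a
    correct response, an easier one after an incorrect response."""
    band = "hard" if previous_correct else "easy"
    candidates = [item for item in ITEM_BANK
                  if item["domain"] == domain and item["difficulty"] == band]
    return random.choice(candidates)

def score(item, response):
    """Automatic scoring against the keyed answer: 1 or 0."""
    return 1 if response == item["answer"] else 0
```

The point of the sketch is that both the sequencing and the scoring are decided entirely by code and data, which is precisely why such software acts as more than a passive tool in the assessment event.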

These software actants did more than assist the interviewer; TAO and Blaise worked more as managers. Blaise scrupulously recorded and timestamped the interviewers’ interactions throughout the e-assessment event and, through the use of branching rules, supervised a workflow that ensured a highly scripted questioning-and-answering routine between interviewer and respondent.
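The combination of branching rules and timestamped logging can be sketched as follows. This is an assumption-laden illustration, not Blaise's actual behaviour: the questionnaire fragment, question texts and routing below are invented to show how CAPI-style software can both steer the interview and record every exchange.

```python
from datetime import datetime, timezone

# Hypothetical fragment of a scripted background questionnaire:
# each question names the follow-up question that each answer leads to.
QUESTIONNAIRE = {
    "Q1": {"text": "Are you currently in paid work?",
           "branch": {"yes": "Q2", "no": "Q3"}},
    "Q2": {"text": "How many hours do you work per week?", "branch": {}},
    "Q3": {"text": "Have you looked for work in the last month?", "branch": {}},
}

def run_interview(answers, start="Q1"):
    """Walk the branching rules, timestamping every exchange the way
    CAPI software logs interviewer-respondent interactions."""
    log, current = [], start
    while current:
        response = answers[current]
        log.append({"question": current,
                    "response": response,
                    "timestamp": datetime.now(timezone.utc).isoformat()})
        # The branch table, not the interviewer, decides what comes next.
        current = QUESTIONNAIRE[current]["branch"].get(response)
    return log

# A respondent who answers "no" is routed from Q1 straight to Q3,
# never seeing Q2.
log = run_interview({"Q1": "no", "Q3": "yes"})
```

Even in this toy form, the software decides the route through the questionnaire and leaves a timestamped trace of the encounter, which is what makes it a manager of the event rather than a mere recording device.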

That assessment events are rule-bound and carefully controlled is hardly surprising. What however is of note here is the interplay between human and non-human agency. Maddox (2014) describes assessment events as ‘unpredictable and unstable processes of “translation” as various actors and objects come together to form a network that constitutes particular settings and events’.

In the case of e-assessment in PIAAC, that interaction involved a triad of agents: the interviewer, the test-taker and the computer. This suggests that the production of data did not happen without certain transformations. TAO and Blaise worked together to script and manage the assessment, keeping the interviewer and test-taker on task.

However, the nature of that interaction requires more research and patient observation if we are to understand the role of coded agency in assessment. Similarly, the role of computer hardware is not without significance: it influences both the software (crashes, data loss) and the test-taker (trackpad versus mouse, screen size, power loss, and so on). The lashing together of human, hardware and software actants creates permutable combinations of interactions and agency.

As the reach and scope of international assessments increase, so too does the role of digitally mediated and distributed interactions. E-assessment events are sites rich in unexpected and uncontrolled-for interactions. Rather than ignoring them, or even attempting to obviate their existence, our understanding of what it means to assess and to be assessed is made fuller and more accurate by following and at least attempting to understand them. While a great deal of attention is devoted to the data produced by these survey-test hybrids, much less is given to how those data are produced.

Cormac O’Keeffe is co-founder and director of teaching and learning at YES ’N’ YOU and associate member of Lancaster University’s Centre for Technology Enhanced Learning. His research interests include e-assessment, e-learning and digital ethnography.

Image copyright: Creative Commons BY 3.0, made by Freepik at www.flaticon.com. XML code sourced from TAO.

 
