National Assessments in the Age of Global Metrics


An international symposium hosted by the Education Governance and Policy program of REDI, in collaboration with the Laboratory of International Assessment Studies

Over the last two decades, international large-scale assessments (ILSAs) such as PISA and TIMSS have been exerting considerable influence on national education policies globally. Notable among these influences is the rise of national assessments of various kinds. More emphasis on national testing and monitoring is advocated on the grounds of transparency and accountability, and as a way of monitoring the ‘state of the nation’ through education outcomes. Regular national testing is seen as a way of gathering useful data about student learning to improve teaching and to provide evidence to satisfy demands for progress toward the UN’s Sustainable Development Goals.

How are national and sub-national assessments evolving in the age of global metrics? What is the relationship between national assessments and ILSAs? What effects are they having? What can we learn from the experiences of the past 20 years?

On 4 and 5 December 2017, a group of education scholars, policy sociologists, policy makers, psychometricians, and other experts from Australia, Canada, France, Kenya, Norway, South Africa, Sweden, and the UK reflected on these critical issues at the international symposium National Assessments in the Age of Global Metrics, jointly convened by Deakin University’s Strategic Centre Research for Educational Impact (REDI) and the Laboratory of International Assessment Studies.

The program opened with a keynote by Professor Ray Adams, Director of the Centre for Global Education Monitoring at the Australian Council for Educational Research, titled ‘The implications of the SDGs: Do we have to harmonise assessment?’ This presentation detailed work currently being undertaken by Ray’s team to develop a universal ‘Learning Progression’ that would act as a translation tool, converting student achievement on any national assessment into universal Learning Progression units. In this way, countries could run their own culturally and politically appropriate tests and still compare their results with those of other countries, without having to participate in international assessments such as PIRLS or TIMSS. A version of his talk can be accessed here. Ray’s keynote focused conversation throughout the seminar on the viability and desirability of such a single harmonising measure, and this was reflected in the comments made in the final plenary.

In his response, Sam Sellar argued that the strength of the Learning Progression tool is its function as a reporting framework, rather than a new test. He raised the debate beyond the economic costs and benefits, and beyond epistemological debates, to ask about the ethics of large-scale assessments. Rational policy approaches presuppose that better knowledge is needed to act ethically, but we can invert this logic to ask whether there might be something better than pursuing more and more information about educational performance.

Sara Ruto’s keynote addressed another key issue for the seminar, the possibilities for wider participation in the decision-making processes of educational accountability. Our attention was immediately captivated by the story with which she began, which introduced us to the numeracy situation in her country, Kenya, and to UWEZO, the citizen-led assessments that are conducted across the nation. She went on to explore the benefits and politics of citizen engagement in education reform and accountability.

Responding to Sara’s keynote, Keita Takayama noted that UWEZO illustrates the ideas in UNESCO’s Education 2030 Incheon Declaration, which states the need for greater citizen-led accountability in education. He said that even though UWEZO emerged in response to the particular context and needs of the global south, it offers a refreshing alternative way of designing and using metrics for bottom-up accountability purposes in the global north as well.

A very engaging panel discussion was convened and moderated by Mary Hamilton, with Hans Wagemaker, Anil Kanjee (via Skype), Sara Ruto, Sue Thomson and Barry McGaw. With such a knowledgeable panel, and with an audience hand-picked for its expertise in the field, the questions were extremely productive and interesting. Anil offered important insights into the assessment situation in South Africa. NAPLAN, Australia’s national assessment program, became a hot topic. The discussion focused on whether national assessments were indeed required, and on particular features – such as making the results public and publishing them in comparative formats – and their ramifications. The ways in which national assessments have played out in different countries were also discussed.

The seminar ended with a plenary convened by Fazal Rizvi, where participants reflected on the issues raised and their key “takeaway” points. While everyone agreed that the conversation among diverse viewpoints had been valuable, they also recognised that reaching consensus is difficult: who should be involved in designing measurement instruments and procedures? Should we be aiming for a single measure of learning progress? A variety of “collateral effects” of international and national assessment were identified, and speakers asserted the need to widen the conversation and to recalibrate both participation and the purposes of learning metrics. Fazal also reminded us that educational indicators are dependent on wider discourses.

This conversation will continue in November this year when Gustavo Fischman and Iveta Silova organize a symposium at Arizona State University. Some of the people who were at this seminar will be there as well, and others will be brought into that conversation.

Written by Lab Team