The dangers of oversimplifying discussions on International Large Scale Educational Assessments
By Gustavo E. Fischman (Arizona State University), Amy Topper (independent researcher) and Iveta Silova (Arizona State University)
While analyzing publications for our study on the uses of International Large Scale Educational Assessments (ILSEAs) in policymaking, we found that, despite obvious stylistic differences, there was a strong tendency to use ILSEAs’ results as evidence in support of the user’s preferred solution to various educational problems. Given the continuous back and forth and the heated tone of the debates, it often seems that “evidence” is simply one more tool that comes to hand in the combat of education politics, invoked only when it serves an author’s interests. As Camilla Addey has discussed in a previous post, despite ILSEAs’ popularity, there is little consensus among stakeholders about their policy value and relevance. Are the results of ILSEAs being used by policymakers to revise, plan, and execute educational reforms? What changes in national education policies and practices, if any, have been made in countries as a result of ILSEAs?
To answer these questions, we analyzed over a hundred research articles that explicitly explored these questions and surveyed 90 professionals[1] asking the same questions. Our findings show that it is almost impossible to establish any causal or direct relationship between ILSEAs and changes in educational policies. Nonetheless, we found very strong arguments made by researchers, academics, and policymakers asserting the existence of a direct relationship, although a caveat is needed: some of the studies found a positive or beneficial relationship between ILSEAs and changes in educational policies, while others saw a negative one.
ILSEAs are “the answer” in search of the right problem
Our exploratory review of the ILSEA literature found that policymakers appear to be using these assessments as tools to legitimize existing or new educational reforms, although there is little evidence of any positive or negative causal relationship between ILSEA participation and reform implementation. That is, educational reform efforts have often already been proposed or underway, and policymakers use ILSEA results as they become available to argue for or against new or existing legislation.
At the same time, results from our surveys with experts, policymakers, and educators (see Figure 1) pointed to a growing perception among respondents that ILSEAs are having an effect on national educational policies, with 38% of all survey respondents stating that ILSEAs were generally misused in national policy contexts. Interestingly, experts are generally more critical in their assessment of ILSEAs compared to non-experts, with 43% arguing that ILSEAs are often being misused (see Figure 2). Experts explain that policymakers have little understanding of ILSEAs and use them for “ceremonial effects,” while at the same time arguing that these assessments are too broad and decontextualized to be used meaningfully in national contexts. Based on their professional and personal experiences, respondents were divided over whether ILSEAs actually contribute to or hinder national and global education reform efforts (see Figures 1 & 3).
Figure 1. Survey respondents’ perceptions of the contribution and/or hindrance of ILSEAs to national education reform efforts
Note: Includes responses from both the expert and non-expert surveys.
Figure 2. Survey respondents’ perceived impact of ILSEAs on national and global education policies
Figure 3. Survey respondents’ perceptions of the contribution and/or hindrance of ILSEAs to global reform efforts
Note: Includes responses from both the expert and non-expert surveys.
Perhaps the most significant finding associated with the use of ILSEAs in the literature we reviewed is the way in which new conditions for educational comparison have been made possible at the national, regional, and global levels. These new conditions have strengthened or given rise to a few educational legends that are often taken as true and accurate descriptions, such as: a) the presumed poor performance of all public schools is due to teacher (in)effectiveness; b) the relevance of ILSEAs rests on the existence of a causal link between results on those exams and economic growth; and c) in more general terms, avoiding ILSEAs is also a factor in the impending educational “crisis” worldwide.[2] These new conditions also create an assumption that a single, globally applicable “best practice” or “best policy” exists, one that can uniformly inform policymaking and improve education in local contexts.
From our perspective, the challenge is to avoid the illusion of certainty that any quantitative measure, such as an ILSEA result, provides. Granted, the challenge is not easy to overcome because, as Nobel Laureate Simon Kuznets (1934, pp. 5-7) affirmed:
With quantitative measurements especially, the definiteness of the result suggests, often misleadingly, a precision and simplicity in the outlines of the object measured. Measurements of national income [and we can add, of education] are subject to this type of illusion and resulting abuse, especially since they deal with matters that are the center of conflict of opposing social groups where the effectiveness of an argument is often contingent upon oversimplification.
Our research shows that, on the one hand, ILSEAs have the potential to provide governments and education stakeholders with useful and relevant modes of comparison that purportedly allow for the assessment of educational achievement both within cities, states, and regions, and between countries. On the other hand, it is dangerous to use narrowly constructed frames to analyze ILSEAs’ results (the good, the bad, and the ugly) without considering the strong influence of unequal educational opportunities in various contexts, or without acknowledging the broader political or economic agendas driving the production and use of ILSEAs in education. The most dangerous form of oversimplification is generating oversimplified narratives from ILSEAs that disregard different contexts and multiple obstacles, show a lack of concern for the educational opportunities and rights of millions of children, and focus one’s energies on justifying one’s own opinions while quickly discarding any counterevidence that does not serve one’s interests and benefits.
The best defense against educational oversimplification? We have already discovered it, discussed it, experimented with it, assessed it, and considered the evidence: avoid exclusive reliance on single-framed measures in determining education outcomes, shift attention away from short-term strategies designed to quickly climb the ILSEA rankings, and implement proven strategies to reduce inequalities in opportunities in order to improve long-term outcomes. Above all, stop pushing for education reforms based on a single, narrow yardstick of quality.
More than ever, educational stakeholders need to consider multiple types and sources of data and explore more meaningful ways of reporting comparative data. We need to recognize the importance of the civic and public purposes of education, and we need to involve our diverse communities—parents, educators, administrators, community leaders, and students—in a public dialogue about what education is and ought to be about. Overcoming educational oversimplifications would thus entail realistic opportunities to focus on larger and more relevant educational questions than how a country ranks on international large-scale assessments.
References
Fischman, G. E., Topper, A., Silova, I., Goebel, J., & Holloway, J. (2018). Examining the influence of Large Scale Educational Assessments on National School Reform Policies. Journal of Education Policy. http://dx.doi.org/10.1080/02680939.2018.1460493
Fischman, G. E. (2016). The simplimetrification of educational research. Education in Crisis, blog of Education International, published Thursday, 12 May 2016.
Kuznets, S. (1934). National Income, 1929-32. U.S. Congress.
To reference this blog: Fischman, G. E., Topper, A., & Silova, I. (2018). The dangers of oversimplifying discussions on International Large Scale Educational Assessments. Laboratory of International Assessment Studies Blogs. Accessed at http://wordpress-ms.deakin.edu.au/international-assessments/the-dangers-of-oversimplifying-discussions-on-international-large-scale-educational-assessments/
The authors of the blog
Gustavo E. Fischman is a professor of education policy at the Mary Lou Fulton Teachers College, Arizona State University. He has authored over 100 articles, chapters, and books. He has been the lead editor of Education Policy Analysis Archives and is the editor of Education Review.
Amelia Marcetti Topper, PhD, is an independent researcher and education scholar. Her research draws on critical and human development frameworks to examine the conceptualization and measurement of student learning and outcomes at the institutional, national, and global levels.
Iveta Silova is a professor and director of the Center for Advanced Studies in Global Education at Mary Lou Fulton Teachers College, Arizona State University. Her research focuses on the study of globalization, post-socialist transformations, and knowledge production and transfer in education.
Footnotes
[1] Our respondents were grouped as: a) “experts,” professionals identified as having extensive experience researching or working with ILSEAs/GLMs; and b) “non-experts,” professionals less experienced with ILSEAs/GLMs but interested in the issue, such as educators and policymakers. Non-experts were identified among registrants of the Inaugural Symposium of the Comparative and International Education Society held November 10-11, 2016, at Arizona State University.
[2] See Rappleye and Komatsu’s recent commentary about “flawed statistics” and new “truths” in education policymaking.