Dr Brenda Tay-Lim


Dr Brenda Tay-Lim is a Programme Specialist at the UNESCO Institute for Statistics, with extensive experience working on educational assessments. Brenda is also a member of the Expert Advisory Group of the Laboratory of International Assessment Studies. We asked Brenda about her views on educational assessment and her involvement in the field.


Interview with Brenda Tay-Lim, Expert Advisor to the Lab

How did you become involved in educational assessment?

I got into educational assessment at graduate school, through a fellowship at the University of Pittsburgh’s Learning Research and Development Centre. I worked there as a Data and Analysis Manager for the New Standards Reference Assessment project, which developed new frameworks for Reading, Mathematics and Science assessments. The assessments were adopted in several states in the United States. Then, in my final year as a PhD student (1999), I was recruited by Educational Testing Service (ETS).

At ETS, I worked as a Psychometrician (Measurement Statistician) on the National Assessment of Educational Progress (NAEP) project, also known as The Nation’s Report Card, at the Centre for Large-Scale Assessment. I managed several cycles of the NAEP pilot test, field test and main assessment in various subject areas – Reading, Mathematics, U.S. History and Geography – at grades 4, 8 and 12. I conducted quality assurance and data modelling to ensure that data were reliable, valid and comparable across years for trend reporting.

Later (in 2005), I moved to the UNESCO Institute for Statistics (UIS) to work on adult literacy assessment. The Literacy Assessment and Monitoring Programme (LAMP) built on the frameworks of the International Adult Literacy Survey (IALS) and the Adult Literacy and Life-skills Survey (ALL) and developed tools for low- and middle-income countries. I also worked on educational attainment data to develop methodology and produce mean years of schooling figures for the United Nations Development Programme’s Human Development Index.

Could you add something about your experience with LAMP and what you and UIS learned from that experience?

The work in LAMP provided excellent experience in understanding the capacity needs of low- and middle-income countries, in improving understanding among donors and engaging technical experts, and in recognizing the limitations of international assessment. Through working with a range of countries, I learnt that many countries have good skills but need proper guidance. Donors need a good understanding of the assessment implementation process, and good data, to make informed decisions on resource allocation. Technical experts need an open mind and an understanding of each country’s background to help devise pragmatic analysis plans that produce relevant data. Depending on financial resources and countries’ needs, assessment designs and processes need to be pragmatic and innovative.

What is the position of UIS in relation to International Large-Scale Assessments and National Large-Scale Assessments?

With the Sustainable Development Goal for Education (SDG 4), the international community has agreed on an ambitious education agenda to “Ensure inclusive and equitable quality education and promote lifelong learning opportunities for all”. As a result, the measurement of relevant learning outcomes is essential for tracking progress towards the education targets. Given my technical background, I contribute to the coordination and development of global methodological and reporting frameworks under the auspices of the Global Alliance to Monitor Learning (GAML). The ‘Education 2030 Framework for Action’ (FFA) gives UIS an important role as the official source of cross-nationally comparable data on education.

Most countries have their own national or regional educational assessments. What additional benefits do governments gain from participation in global educational assessments and rankings?

Keep in mind that reporting for SDG 4 does not require countries to participate in global educational assessments. The UIS, with technical partners, is developing methodology to align countries’ existing national and regional educational assessments to achieve comparability for reporting. Countries can continue to do what they are doing now: conduct their own national educational assessments and/or participate in regional educational assessments. The UIS will look at what countries have submitted and prepare the data for reporting. The benefit of having the data on a comparable scale is that countries can learn from neighbouring countries with similar social and economic structures in order to develop relevant policies for improvement.

So how is UIS contributing to the benchmarking of SDG 4?

The UIS established GAML in 2016 to generate solutions to the technical challenges inherent in globally comparable learning targets. GAML operates as a platform for dialogue among the diverse stakeholders in SDG 4 measurement.

GAML builds on a set of working principles. The first principle recognizes the importance of national-level measurement to guide country action: national data on learning provide a source of information that can support global monitoring. The second principle recognizes the importance of equity in education. This requires innovation in measurement to ensure that all groups are represented and, in addition, that assessment content is fair and reflects a range of skills and competencies. The third principle recognizes that there are many viable alternatives for measuring learning, so it is essential to connect and fully leverage the wealth of expertise found in a range of organizations across all countries and regions. The last principle involves the promotion of knowledge sharing and exchange in the design and implementation of measurement strategies.

To ensure that indicators are reported and used well, exchange and dialogue between the international education community and country experts are essential. That dialogue will help to identify problems in the measurement of learning and to find and agree on solutions.

Could you give us one or two examples?

For example, to support the first principle of recognizing ‘the importance of national-level measurement to guide country action’, the UIS is working with the International Bureau of Education (IBE) to develop a construct mechanism for indicator 4.1.1. The mechanism helps countries understand their national curriculum and national assessment objectives and identify gaps where they exist. In other words, this alignment will improve the understanding of the national curriculum (inputs used to structure students’ learning) and the national assessment (outputs on the expectation/performance of students’ learning) for better policy development. As a second example, relating to the last principle of promoting ‘knowledge sharing and exchange in the design and implementation of measurement strategies’, the UIS has conducted two independent global consultations for indicator 4.1.1, covering the Reading and Mathematics domains, gathering inputs from regional organizations and country experts to inform the finalization of the global content framework. This bottom-up approach promotes inclusiveness in indicator development.

What other ways of evaluating learning outcomes does UIS support? What benefits and new challenges have ILSAs created for the monitoring of education progress?

There are ambitious demands on countries to generate learning-related indicators for different targets, so GAML has an important role to play. The demand for globally comparable data on learning requires new ways of working with countries, regional bodies and global organizations to produce the required indicators for monitoring. Each GAML thematic task force group has identified two critical measurement issues that represent the highest priority for obtaining technical solutions; these include global comparability and the definition of ‘minimum proficiency levels’. There is still more work to be done, and GAML welcomes collaboration from the international education community.

Thanks, Brenda. Finally, can you tell us which of your professional accomplishments or publications you are most proud of, and why? Also, if you have any stories, examples or personal experiences that you would like to add to the interview, that would make it fascinating.

The publication that I am proudest of is the report I co-authored with John Mazzeo and Edward Kulick in the NAEP technical and research report series, ‘Technical report for the 2000 Market-Basket Study in Mathematics’. In this report, I researched and analysed the application of various innovative methodologies for reporting. Based on this work, I produced a series of conference papers studying the factors that affect reporting outcomes. I learned a lot from this research and I attribute much of my professional growth to it. With regard to personal experience, I would say the field trips to Mongolia, for both the field testing and the main survey, are the most memorable. They opened my eyes to how each implementation process affects the next, and how these processes collectively affect the outcome of data production. The most important lesson I learnt is that there is a lot of capacity at the country level, and countries are eager to learn. Methodology development needs to take into account countries’ perspectives and contexts.