The table below provides a systematic overview of key features of international and regional learning assessments.
Name | Purposes | Origins and evolution | Measurement objective: learning domains and contextual information | Target population | Sampling method | Assessment administration | Reporting and dissemination | More information |
---|---|---|---|---|---|---|---|---|
ASER (Annual Status of Education Report) |
To obtain reliable estimates of the status of children’s schooling and basic learning in reading and mathematics at the district level; and to measure the change in these basic learning and school statistics over time. |
Developed by NGO Pratham. First implemented in 2005. Implemented by the ASER centre (a unit in the Pratham network) since 2008. ASER is part of the PAL network. |
Basic skills in reading and arithmetic are assessed. Contextual information is collected about households, one government school in each sampled village, and the conditions of sampled villages. |
Children living in rural areas in India, ages 3–16 for enrolment and background information, ages 5–16 for assessment. |
Two-stage sampling. Stage 1: a panel of 30 villages per rural district is replenished each year (10 new villages selected with PPS; 10 villages selected two years before are removed). Stage 2: 20 households sampled from each village on-site by test administrator. All children in the target population in sampled households are assessed. |
Oral and one-on-one, administered in homes annually. |
Results are reported at the national and state levels. Comparisons of results across states are also provided. Achievement reported as percentages of children successfully completing each task. |
Release of the annual report is widely publicised in India. Press releases and other supplementary documentation are produced. Reports, data from summary tables and other documentation are available for download from the ASER Centre website (https://www.asercentre.org/). |
EGMA (Early Grade Mathematics Assessment) |
To assess the acquisition of basic mathematics skills in students in developing countries. Specifically, EGMA can: inform educational policies; highlight the impact of curricular reforms or programs; and evaluate instructional practices or interventions. |
Instruments developed by RTI and USAID. First implemented in Kenya in 2009. Revised in 2011 and renamed the Core EGMA. Implemented in at least 22 countries to date. |
Basic skills in mathematics are assessed. Collecting contextual information is optional, and it can be obtained from students, teachers, school principals and parents, or by classroom observation. |
Students in Grades 1–3. |
Typically three-stage sampling, such as: Stage 1: Regions or zones selected. Stage 2: Schools selected from sampled regions/zones. Stage 3: Students selected from sampled schools. Sample size varies. |
Oral and one-on-one, administered in school. Administration cycle varies. |
Instruments and results documented by implementing countries. Achievement levels typically reported for each sub-task with mean percentages of correct answers, mean numbers of correct answers per minute, or percentages of students scoring zero. Within-country dissemination strategy varies for each EGMA application. |
Reports are available from EdData II’s website: https://www.edu-links.org/resources/early-grade-math-assessment-egma |
EGRA (Early Grade Reading Assessment) |
To assess children’s acquisition of basic literacy skills in developing countries. In addition to system-level monitoring, EGRA can guide the content of an instructional programme and help evaluate programmes. |
Instruments developed by RTI, USAID and the World Bank. First implemented in the Gambia and Senegal in 2007. Implemented in more than 70 countries in 100 languages to date. |
Basic skills in reading are assessed. Collecting contextual information is optional, and it can be obtained from students, teachers, school principals and parents, or by classroom observation. |
Students in Grades 1–3. |
Typically three-stage sampling, such as: Stage 1: Schools selected. Stage 2: Classes selected from sampled schools. Stage 3: Students selected from sampled classes. Sample size varies. |
Oral and one-on-one, administered in school. Administration cycle varies. |
Instruments and results documented by implementing countries. Achievement levels typically reported for each sub-task with mean scores, percentage of children scoring zero, or percentage of children achieving benchmarks. Within-country dissemination strategy varies for each EGRA application. |
Reports are available from EdData II’s website: https://www.edu-links.org/resources |
ICCS (International Civic and Citizenship Education Study) |
To assess young people's preparedness as adult citizens. |
The ICCS was first implemented in 2009 with a follow-up cycle in 2016 and one in progress for 2022. ICCS built on the previous IEA studies of civic education, namely the Civic Education Study in 1971 and the CIVED study in 1999. |
Civic knowledge, beliefs and attitudes, in the context of social media, sustainable development and changing patterns of democratic and civic engagement. Organization and content of civic and citizenship education in the curriculum. Contextual information is collected from students, teachers, school principals and parents. |
Students in their eighth year of schooling. |
Two-stage stratified sampling to obtain representative sample. Stage 1: Schools sampled with PPS. Stage 2: One intact class per grade randomly selected from each sampled school. |
Administered in schools. Student questionnaires in print format; teacher questionnaires in print or computer-based format. Administration cycle is irregular. |
Results are reported for each participating country in terms of means and distributions of student achievement. A four-level scale is used, providing numerical proficiency scores and detailed proficiency descriptions. Reports include separate chapters focused on school contexts and student characteristics. Results are reported in international and regional reports. |
Reports, data and more information can be downloaded from the project webpage via the IEA: https://www.iea.nl/studies/iea/iccs |
ICILS (International Computer and Information Literacy Study) |
To assess preparedness for study, work and life in the digital age. |
The first cycle of ICILS was in 2013, with 21 education systems participating. This was followed in 2018 by 14 education systems (12 countries and two benchmarking entities). ICILS builds on previous studies on computer literacy conducted by the IEA. |
Computer literacy. Information literacy. Contextual information is collected from students, teachers, school principals and parents. |
Students in their eighth year of schooling. |
Two-stage stratified sampling to obtain representative sample. Stage 1: Schools sampled with PPS. Stage 2: A systematic simple random sampling approach: (a) students enrolled in the target grade within participating schools, and (b) teachers teaching the target grade within participating schools. |
Student assessment is computer-based. Teacher questionnaires are online or paper-based. |
Results are reported for each participating country in terms of means and distributions of student achievement. A four-level scale is used, providing numerical proficiency scores and detailed proficiency descriptions. Reports include separate chapters focused on contexts and student characteristics. Results are reported in international and regional reports. |
Reports, data and more information can be downloaded from the project webpage via the IEA: https://www.iea.nl/studies/iea/icils |
IELS (International Early Learning and Child Well-Being Study) |
To identify key factors that drive or hinder the development of early learning. |
Initiated in 2016 by the OECD. The first cycle was in 2018, with an intention for future cycles. |
Emergent literacy and numeracy. Self-regulation. Social and emotional skills. |
Children aged between 5 and 6 years old. |
Two-stage design. Centres/schools as primary sampling units and children as secondary sampling units. Centres/schools are selected with systematic random sampling and probabilities proportional to size. Within all sampled centres/schools, at least 15 children are sampled. |
Child assessment is tablet-based. Questionnaires are online or paper-based. |
Results are reported for each participating country in terms of means, percentages above and below average, distributions and proficiency levels. Associations between results across the learning domains are investigated. Relationships between contextual factors and child development are also investigated. Results are reported in national reports and an international report. |
Reports and more information are available via the OECD: https://www.oecd.org/education/school/early-learning-and-child-well-being-study/ |
LLECE (Latin American Laboratory for Assessment of the Quality of Education) |
To provide information about the quality of education in Latin America and guide decision-making in public education policies. |
Created in 1994 by the Ministers of Education in the region and coordinated by UNESCO’s regional office. First implemented in 13 countries in 1997. Four assessments implemented to date (PERCE in 1997, SERCE in 2006, TERCE in 2013 and ERCE in 2019). |
Curriculum-based assessment in reading, mathematics, science and writing. Contextual information is collected from students, teachers, school principals and parents. |
Students in Grade 3 (reading, maths and writing). Students in Grade 6 (the above three subjects plus science). |
A representative sample of students is tested in each country. Two-stage stratified sampling to obtain representative sample. Stage 1: Schools sampled with PPS within two strata (school location and type). Stage 2: One intact class per grade randomly selected from each sampled school. |
Written format, group administration in school. |
Regional reports comparing countries' performance using a single scale. Countries are compared by their overall performance and the percentage of students located in each of the performance levels defined for each grade and subject. Within-country dissemination strategy depends on each country. |
LLECE produces regional results and reports detailing technical aspects of the project. All reports are available from the website of UNESCO’s regional office: |
PASEC (Programme for the Analysis of Education Systems) |
To inform member countries of the French-speaking community about the evolution of their education systems, to stimulate discussion of topics of common interest and reforms, and to facilitate dialogue between ministers and experts to support policy development in education. Also to inform more efficient and equitable education systems, including by identifying a hierarchy of potential educational interventions. |
PASEC was established in 1991, with the first cycle involving three African countries. It later expanded to 24 countries in Africa, the Indian Ocean and Southeast Asia. Following an impact study in 2011, PASEC was substantially revised and launched its first competency-based international assessment in 2014 in 10 African countries. Subsequent cycles will broaden participation to more CONFEMEN countries. |
Reading, writing and numeracy are assessed. Contextual information is collected from students, teachers and school principals. |
Students in early (Grade 2) and late (Grade 6) primary school. |
Two-stage stratified sampling to obtain a representative sample. Stage 1: Schools sampled with PPS within defined strata (administrative division and school type). Stage 2: 15–20 students selected randomly from one Grade 2 class and one Grade 6 class (if there is more than one Grade 2 or Grade 6 class, one class is sampled randomly prior to sampling students). |
Written format, group administration in school twice for both target grades, at the beginning and at the end of the school year. |
The findings are presented in country reports, and an international report, which compares results across countries. Competency scales, means, distribution and trends are presented. Associations between contextual factors and achievement are also explored. | Country reports are available from CONFEMEN’s website: www.confemen.org |
PILNA (Pacific Islands Literacy and Numeracy Assessment) Go to pamphlet |
To support the improvement of education outcomes of students in the Pacific Islands. |
PILNA grew from a partnership between UNESCO, the Educational Quality and Assessment Programme (EQAP) of the Pacific Community and 15 Pacific Islands countries. The first cycle was in 2012, followed by 2015 and 2018. |
Literacy and numeracy are assessed. Contextual information is collected from students, teachers and school principals. |
Students who have completed approximately 4 and 6 years of formal schooling. |
Two-stage sampling. Stage 1: Schools selected from each country, with the 10 smallest countries conducting a census. Stage 2: 25 students in each of the year levels selected from each school to participate. |
Written format, group administration in school. |
Three kinds of PILNA reports are produced: regional, sub-regional and individual country reports. The regional report combines data from all participating countries. The sub-regional report contains results for the five small island states as a whole. The individual country reports present the results of each participating country. Results are presented with means and distributions across proficiency scales, disaggregated by gender. Trends are also reported. The reports are disseminated to the relevant educational stakeholders: policy-makers, educators and the broader community. |
More information can be found on the Pacific Community website: |
PIRLS (Progress in International Reading Literacy Study) |
To assess the reading achievement of young students. ‘PIRLS Literacy’ is a similar but simpler version of the assessment for developing countries. |
PIRLS has been conducted every five years since 2001. PIRLS was a follow-up to the IEA’s 1991 Reading Literacy Study. The number of participating countries has grown from 35 in the first cycle to 50 in the fifth cycle, with a further 11 benchmarking entities. |
Reading achievement within two overarching purposes: reading for literary experience; and reading to acquire and use information. |
Students in their fourth year of schooling. |
Two-stage stratified sampling to obtain representative sample. Stage 1: schools sampled with PPS. Stage 2: one or more intact classes selected from each sampled school. Typical sample size 150 schools, approx. 4000 students per target population. |
School, teacher and student questionnaires, as well as the Learning to Read Survey, are completed by students, parents or caregivers. In 2016, electronic forms were introduced, with half of the countries applying this method. |
Quantitative reports produced to summarise the international achievement and questionnaire results. Trends in achievement can be reported. Participants also prepare national-level reports and conduct their own multi-level dissemination activities. |
Data, reports and other documentation can be downloaded from the TIMSS and PIRLS International Study Centre website: |
PISA (Programme for International Student Assessment) and PISA-D (PISA for Development, for developing countries) |
To provide empirical information on how well education systems are preparing students near the end of compulsory schooling to enter adult life. |
A triennial survey launched by the OECD in 1997. First implemented in 43 countries in 2000, with over 90 countries having participated since its inception. Seven cycles completed to date (2000, 2003, 2006, 2009, 2012, 2015 and 2018). |
Literacy in reading, mathematics and science is assessed. PISA also collects information on student attitudes and motivations, formally assesses skills such as collaborative problem-solving, and is investigating opportunities to assess other important competencies, such as global competence. Contextual information is collected from students and school principals. |
15-year-old students enrolled in Grade 7 or higher. |
Two-stage stratified sampling to obtain representative sample. Stage 1: schools sampled with PPS within country-defined strata. Stage 2: eligible students randomly selected within schools. |
Written format, group administration in school. |
Results reported on a continuous proficiency scale for each domain. Each scale is divided into proficiency levels describing what students typically know and can do. Trends in achievement also reported. Background data linked with average country performance. Based on PISA results, recommendations for education policy are provided. International report released by the OECD. Participating countries prepare their national reports and define the dissemination strategy locally. |
Databases, international reports and other in-depth reports further investigating PISA results can be downloaded from the OECD-PISA website: |
SACMEQ (Southern and Eastern Africa Consortium for Monitoring Educational Quality) |
To assess the conditions of schooling and performance levels of learners and teachers in literacy and numeracy in Southern and Eastern Africa. |
The SACMEQ consortium was established in 1997. First assessment implemented in seven countries in 1995–1999. Four cycles completed to date at five- to six-year intervals. |
Reading and mathematics are assessed. Contextual information is collected from students, teachers and school principals. |
Students in Grade 6. |
Two-stage stratified sampling to obtain representative sample. Stage 1: schools sampled with PPS within country-defined strata. Stage 2: students randomly selected from each school onsite by test administrator. The minimum number of students per selected school is 25. |
Written format, group administration in school. |
Within-country and cross-country analyses of results are documented. Achievement levels are reported using a single scale and defined levels of competency. Each SACMEQ country convenes a research results dissemination forum for different groups of stakeholders. |
Reports and data files are available from SACMEQ’s website: www.sacmeq.org/ |
TIMSS (Trends in International Mathematics and Science Study) |
To collect data to assist participants to make informed choices about improving mathematics and science teaching and learning. TIMSS Numeracy is a simpler version of the assessment for developing countries. |
First implemented by IEA in 45 countries in 1995. Six administrations completed to date (1995, 1999, 2003, 2007, 2011 and 2015). The seventh cycle of data collection was conducted in 2019 with results available in late 2020. |
Curriculum-based assessment of mathematics and science. Contextual information is collected about students, teachers, schools, the curriculum and the education system overall. |
Students in Grades 4 and 8. TIMSS Advanced: students in Grade 11 or 12. TIMSS Numeracy: students in Grade 4, 5 or 6. |
Two-stage stratified sampling to obtain representative sample. Stage 1: schools sampled with PPS. Stage 2: one or more intact classes selected from each sampled school. Typical sample size 150 schools, approx. 4000 students per target population. |
Written format, group administration in schools. In 2019, electronic forms were introduced, with half of the countries applying this method. |
Results reported as means and distributions of achievement. Trends in achievement, cohort comparisons, achievement differences by gender and trends in achievement differences by gender also reported. Results reported against benchmarks for different achievement levels. Background data linked with average achievement scores. Participants also prepare national-level reports and conduct their own multi-level dissemination activities. |
Data, reports and other documentation can be downloaded from the TIMSS and PIRLS International Study Centre website: |
UWEZO |
To contribute to the improvement of education quality in Kenya, Tanzania and Uganda by measuring the basic literacy and numeracy competencies of school-age children. |
Started in 2009, with methodology based on that of ASER. Implemented annually from 2009 to 2018, with potential for extension. UWEZO is part of the PAL Network. |
Basic literacy and numeracy are assessed. The assessments are aligned with the Grade 2 curriculum. Contextual information is collected about children, households, villages and schools. |
Children aged 6–16 years. |
Two-stage sampling. Stage 1: 30 enumeration areas (EAs) selected per district with probability proportional to size. Stage 2: 20 households selected in each of the 30 EAs. All available children in the target population are tested. One primary school per EA is selected for a survey of school resources; the school selected is the one attended by the largest proportion of children residing in the EA. |
Oral and one-on-one, administered in homes annually. Adaptive administration, where assessment starts at a middle-difficulty task and moves up or down depending on whether or not the child successfully completes that initial task. |
Within countries, national results are reported as the percentage of children reaching each competency level, corresponding to the tasks assessed by each country. The regional report compares countries' performance based on a 'pass' rate. Children are considered to have 'passed' the test if they are able to successfully complete all the tasks they were asked to perform. Results are released with national press conferences and communicated through printed reports in English and Kiswahili, newspapers, radio talk shows, and TV adverts and news. |
Reports are available from UWEZO’s website: |
[1] PPS = Probability Proportional to Size
Note: in some assessments, some of the key features have changed over time since the first implementation (for example, measurement objectives or target population). The information presented in this summary table is based on the latest cycle of the assessment administration. For more details, please refer to the PDF pamphlet about each assessment.
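Several of the assessments above (for example ASER, LLECE, PISA, SACMEQ and TIMSS) describe a two-stage design in which schools, villages or enumeration areas are first drawn with probability proportional to size and students or households are then sampled within them. The Python sketch below is purely illustrative of that general approach; the data and function names are hypothetical, it is not taken from any programme's actual sampling procedures, and it assumes for simplicity that no single school is larger than the sampling interval.

```python
import random


def pps_systematic_sample(units, sizes, n):
    """Select n units with probability proportional to size (PPS),
    using systematic selection along the cumulative size scale."""
    total = sum(sizes)
    interval = total / n
    start = random.uniform(0, interval)
    points = [start + i * interval for i in range(n)]
    selected, cumulative, next_point = [], 0.0, 0
    for unit, size in zip(units, sizes):
        cumulative += size
        while next_point < n and points[next_point] <= cumulative:
            selected.append(unit)
            next_point += 1
    return selected


def two_stage_sample(frame, n_schools, n_students):
    """Stage 1: PPS sample of schools.
    Stage 2: simple random sample of students within each sampled school
    (all students are taken if the school has fewer than n_students)."""
    names = list(frame)
    sizes = [len(frame[name]) for name in names]
    sampled_schools = pps_systematic_sample(names, sizes, n_schools)
    return {
        name: random.sample(frame[name], min(n_students, len(frame[name])))
        for name in sampled_schools
    }


if __name__ == "__main__":
    # Hypothetical sampling frame: school name -> list of eligible students.
    frame = {
        f"School {i}": [f"S{i}-{j}" for j in range(random.randint(20, 200))]
        for i in range(1, 201)
    }
    sample = two_stage_sample(frame, n_schools=10, n_students=25)
    for school, students in sorted(sample.items()):
        print(school, len(students))
```

Real sampling designs add elements omitted here, such as explicit stratification, school replacement rules and sampling weights.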