Measuring the continuum of literacy skills among adults: educational testing and the LAMP experience

International Review of Education

Abstract

The field of educational testing has become increasingly important for providing different stakeholders and decision-makers with information. This paper discusses basic standards for methodological approaches used in measuring literacy skills among adults. The authors address the increasing interest in skills measurement, the discourses on how this should be done with scientific integrity, and UNESCO’s experience with the Literacy Assessment and Monitoring Programme (LAMP). The increase in interest is due to the evolving notion of literacy as a continuum, whose recognition in surveys and data collection is ensured in the first commitment under section 11 of the Belém Framework for Action. The discourse on how measurements should be carried out concerns the need to find valid yet parsimonious approaches, their relevance in different institutional, cultural and linguistic contexts, and issues of ownership and sustainability. Finally, UNESCO’s experience with LAMP shows how important it is to address these different issues in order to equip countries with an approach that is fit for purpose.

Notes

  1. In 1958, UNESCO’s General Conference approved a series of recommendations concerning the standardisation of educational statistics. These included a simple definition of literacy (see UNESCO 1958) that was later echoed by the United Nations Statistical Division (UNSD 1997, 2.145). These form the basis on which population censuses and household surveys have structured the questions they pose. In this context, literacy rates are defined as the “total number of literate persons in a given age group, expressed as a percentage of the total population in that age group” (UIS 2010, p. 269). Nevertheless, the notion of literacy has evolved, as shown in different UNESCO documents (UNESCO 1978, 2004, 2005), and this evolution was also echoed by the United Nations Statistical Division in the most recent revision of the above-mentioned document: “Literacy has historically been defined as the ability both to read and to write, distinguished between ‘literate’ and ‘illiterate’ people. A literate person is one who can both read and write a short, simple statement on his or her everyday life. (…) However, new understanding referring to a range of levels, of domains of application, and of functionality is now widely accepted. (…) Nevertheless, administering a literacy test to all household members in the course of enumeration may prove impractical and affect participation, therefore limiting the utility of the results. Countries have regularly used simple self-assessment questions within a census to provide an indication of literacy rates at the small area level. An evaluation of the quality of statistics should be provided with census statistics on literacy” (UNSD 2008, pp. 147–148).
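
To make the quoted indicator concrete, the sketch below (illustrative only; the function name and figures are hypothetical, not UIS code) spells out the arithmetic of a literacy rate for an age group.

```python
def literacy_rate(literate_persons: int, population: int) -> float:
    """Literacy rate for an age group: literate persons expressed as a
    percentage of the total population in that age group (cf. UIS 2010, p. 269)."""
    if population <= 0:
        raise ValueError("population must be positive")
    return 100.0 * literate_persons / population

# e.g. 4.2 million literate adults in an age group of 6.0 million people
print(round(literacy_rate(4_200_000, 6_000_000), 1))  # 70.0
```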

  2. At the international level, the most important and pioneering efforts have been conducted by the International Association for the Evaluation of Educational Achievement (IEA, www.iea.nl) since the late 1950s, followed by the Organisation for Economic Co-operation and Development (OECD) since the late 1990s, as well as by regional initiatives in Latin America and Africa. In any case, these international efforts rely on and/or promote national capacities, usually organised in educational testing units within Ministries of Education.

  3. Educational attainment is usually measured as the number of years of schooling attended (excluding years spent repeating the same grade), the highest level attended or completed, or the highest certification acquired.

  4. This framework expresses countries’ commitments endorsed by delegations (UNESCO’s 144 Member States, representatives of civil society organisations, social partners, United Nations agencies, intergovernmental agencies and the private sector) at the Sixth International Conference on Adult Education (CONFINTEA VI, held in Belém, Brazil, in December 2009). In the above-mentioned section (11a), countries committed to “ensuring that all surveys and data collection recognise literacy as a continuum”.

  5. For additional information on LAMP see UIS (2009).

  6. This wording is the title of chapter V in Postlethwaite (2004).

  7. Postlethwaite’s text focuses mainly on educational testing as conducted in schools. Measuring the literacy skills of the youth and adult population entails some specific characteristics, given that it has to be conducted using a household survey platform and that the actual subject matter is grounded in a theoretical discussion about literacy rather than, as is usually the case in school-based testing, in the prescriptions of a given curriculum.

  8. This word of caution is usually attributed to the British economist Ronald Coase: “If you torture the data long enough, it will confess.”

  9. One such example is a recent literacy survey (Bangladesh Bureau of Statistics 2008) that reported results on one scale with four levels (non-literate, semi-literate, literate-initial and literate-advanced). The scale merged information from four domains: (i) reading (oral reading of five isolated words and a passage made up of short and simple sentences); (ii) writing (five exercises writing isolated words); (iii) numeracy (12 purely algorithmic exercises, one subtraction embedded in a passage and one simple series) and (iv) general knowledge (nine visual/oral exercises not requiring any ability to read, write or compute). Each section accounted for one quarter of the overall score; at the same time, each level represented one fourth in the range of possible scores. Thus, if for instance an individual got all the points in the “general knowledge” section and only one more in any of the others (a total score of 26/100), that individual would be classified as semi-literate, i.e. someone with the “ability to recognize and write some simple words, to count objects, and numbers at a very basic level” (op. cit., p. xiv).
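
The scoring logic described above can be sketched as follows (an illustrative reconstruction, not the Bangladesh Bureau of Statistics’ actual procedure): four domain scores of up to 25 points each are summed, and the 0–100 total is cut into four equal bands.

```python
LEVELS = ["non-literate", "semi-literate", "literate-initial", "literate-advanced"]

def classify(reading: float, writing: float, numeracy: float, general: float) -> str:
    """Each domain contributes up to 25 points; each level spans one quarter
    of the 0-100 range of possible total scores."""
    total = reading + writing + numeracy + general
    return LEVELS[min(int(total // 25), 3)]

# The case discussed above: full marks in "general knowledge" plus a single
# point in any other section (26/100) is classified as semi-literate.
print(classify(1, 0, 0, 25))  # semi-literate
```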

  10. “PISA seeks to measure how well young adults, at age 15 and therefore approaching the end of compulsory schooling, are prepared to meet the challenges of today's knowledge societies” (OECD n.d.).

  11. Quoted from the first paragraph of “What PISA is” on the OECD website for PISA, available at http://www.pisa.oecd.org/pages/0,3417,en_32252351_32235907_1_1_1_1_1,00.html. Italics added by the authors.

  12. See Dept et al. (2010).

  13. Inter-scorer agreement deals with the degree to which two scorers, working independently of each other, arrive at the same score for a given answer provided by a respondent. Exact agreement is simply the proportion of agreements between the two scorers, while Cohen's kappa (κ) coefficient is another, more sophisticated measure used for categorical items. It is generally thought to be more robust than the simple percent-agreement calculation, since kappa takes into account the agreement that is expected to occur by chance.
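
To make the two statistics concrete, the following minimal sketch (illustrative only, not the scoring software of any particular study) computes exact agreement and Cohen's kappa from two scorers' categorical scores.

```python
from collections import Counter

def exact_agreement(scores_a, scores_b):
    """Proportion of answers to which both scorers assigned the same score."""
    return sum(a == b for a, b in zip(scores_a, scores_b)) / len(scores_a)

def cohens_kappa(scores_a, scores_b):
    """Agreement corrected for the agreement expected to occur by chance."""
    n = len(scores_a)
    p_observed = exact_agreement(scores_a, scores_b)
    freq_a, freq_b = Counter(scores_a), Counter(scores_b)
    p_chance = sum(freq_a[c] * freq_b[c] for c in freq_a.keys() | freq_b.keys()) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)

# Two scorers rating ten answers as correct (1) or incorrect (0)
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
b = [1, 1, 0, 0, 0, 1, 1, 0, 1, 1]
print(exact_agreement(a, b), round(cohens_kappa(a, b), 2))  # 0.8 0.58
```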

  14. Item Response Theory (IRT) models have been developed to compute scores by using a set of item characteristics, such as the difficulty and discriminatory power of each individual item used in a test (in the two-parameter logistic model, or 2PL). In some cases (especially for multiple-choice or true/false questions, as opposed to open-ended questions), an element of pseudo-guessing is also factored into the model (the three-parameter logistic model, or 3PL). Thus, each score is a mathematical function that combines individual ability with the characteristics of the items included in a test, expressed on a specific scale.
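
The item response functions just described can be sketched as follows (standard IRT notation; an illustration rather than LAMP's actual scaling procedure): in the 2PL the probability of a correct answer depends on the respondent's ability and on the item's difficulty and discrimination, while the 3PL adds a pseudo-guessing floor.

```python
import math

def p_correct(theta: float, a: float, b: float, c: float = 0.0) -> float:
    """Probability that a respondent of ability theta answers an item correctly.
    a: discrimination, b: difficulty, c: pseudo-guessing
    (c = 0 gives the 2PL model; c > 0 gives the 3PL model)."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# A hard, discriminating multiple-choice item (b = 1.5, a = 2.0) with a
# 20% guessing floor, answered by a respondent of average ability (theta = 0):
print(round(p_correct(0.0, a=2.0, b=1.5, c=0.2), 2))  # 0.24
```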

  15. This is the major topic surrounding the discussion on the use of a set of “plausible values” for each individual respondent (see IERI 2009).
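
As a rough sketch of the idea (a simplification of the methodology discussed in IERI 2009): instead of a single point estimate, each respondent's proficiency is represented by several random draws from an estimated posterior distribution, and population statistics are computed once per draw and then combined.

```python
import random

def draw_plausible_values(posterior_mean: float, posterior_sd: float, m: int = 5):
    """Draw m plausible values for one respondent, assuming the posterior of
    his or her proficiency is approximated by a normal distribution
    (a simplification; operational studies also condition on background variables)."""
    return [random.gauss(posterior_mean, posterior_sd) for _ in range(m)]

# Each respondent carries several plausible values rather than one score.
print(draw_plausible_values(250.0, 35.0))
```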

  16. See for instance: Global Campaign for Education (2005) and UNESCO (2005).

  17. While by definition reading and writing always refer to written materials, this is not the case for numeracy. Computations can be performed in fully oral situations, or by simply relying on graphical resources. In that sense, it is possible to suggest that the only numeracy tasks included in this definition of literacy are those that require written responses, that provide written questions or stimuli, or both.

  18. The use of these three comparative adjectives in relation to testing literacy skills is present in one academic paper (Wagner 2003).

  19. This idea has been present in the philosophy of science at least since William of Ockham (the expression Occam’s razor refers precisely to the need to suppress unnecessary complexity). Albert Einstein (in his Herbert Spencer lecture at Oxford in 1933) felt it necessary to stress that simplification should not go so far as to compromise the whole effort (things should be simple, but not simpler or oversimplified).

  20. These tests are distributed across different instruments, so every single respondent is exposed to a smaller number of items ranging from 35 to 49.
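
A minimal sketch of such a rotated design (the pool size, block sizes and booklet layout below are hypothetical, not LAMP's actual instruments): the item pool is split into blocks, and each instrument combines a subset of blocks, so that every item is administered to part of the sample while no respondent takes the full pool.

```python
from itertools import combinations

item_pool = [f"item_{i:02d}" for i in range(1, 61)]        # hypothetical 60-item pool
blocks = [item_pool[i:i + 15] for i in range(0, 60, 15)]    # four blocks of 15 items

# Every pair of blocks forms one instrument: 6 booklets of 30 items each,
# and each item appears in 3 of the 6 booklets.
booklets = [a + b for a, b in combinations(blocks, 2)]
print(len(booklets), [len(b) for b in booklets])            # 6 [30, 30, 30, 30, 30, 30]
```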

  21. El Salvador, Jordan, Niger, Morocco, Mongolia, Occupied Palestinian Territory, Paraguay and Vietnam.

  22. Anguilla, India, Jamaica, Laos and Namibia.

  23. While the OECD studies (the International Adult Literacy Survey – IALS – and the Adult Literacy and Life Skills Survey – ALL) were conducted in more than 20 countries and 15 languages, these languages were all European (13 Indo-European and two Uralic), and all of them use the Roman alphabet and Western Arabic numerals.

  24. For instance Darville (1999), Hamilton (2001) and Hamilton and Barton (2000).

References

  • Bangladesh Bureau of Statistics. (2008). Literacy Assessment Survey 2008. Dhaka: Bangladesh Bureau of Statistics.

  • Cooper, B., & Dunne, M. (1998). Anyone for tennis? Social class differences in children’s responses to national curriculum mathematics testing. The Sociological Review, 46(1), 115–148.

  • Darville, R. (1999). Knowledges of adult literacy: Surveying for competitiveness. International Journal of Educational Development, 19(4–5), 273–285.

  • Dept, S., Ferrari, A., & Wäyrynen, L. (2010). Developments in translation verification procedures in three multilingual assessments: A plea for an integrated translation and adaptation monitoring tool. In J. Harkness, M. Braun, B. Edwards, T. Johnson, L. Lyberg, P. Ph. Mohler, B.-E. Pennell, & T. Smith (Eds.), Survey methods in multinational, multiregional, and multicultural contexts. Wiley series in Survey Methodology. New Jersey: Wiley.

  • Global Campaign for Education. (2005). Writing the wrongs: International benchmarks on adult literacy. London, Johannesburg: Global Campaign for Education and ActionAid International.

  • Hamilton, M. (2001). Privileged literacies: Policy, institutional process and the life of the IALS. Language and Education, 15(2&3), 178–196.

  • Hamilton, M., & Barton, D. (2000). The International Adult Literacy Survey: What does it really measure? International Review of Education, 46(5), 377–389.

  • IERI (IEA-ETS Research Institute). (2009). Issues and methodologies in large-scale assessments. IERI Monograph Series Vol. 2. Hamburg: IEA-ETS Research Institute, IERI.

  • Mahony, P., & Hextall, I. (2000). Reconstructing teaching: Standards, performance and accountability. Falmer: Routledge.

  • OECD (n.d.). PISA – The OECD Programme for International Student Assessment. PISA Brochure. Paris: OECD.

  • Postlethwaite, T. N. (2004). Monitoring educational achievement. Fundamentals of Educational Planning series. Paris: UNESCO International Institute for Educational Planning.

  • Power, M. (1997). The audit society: Rituals of verification. Oxford: Oxford University Press.

  • Revell, P. (2005). The professionals: Better teachers, better schools. Stoke on Trent: Trentham Books.

  • UIL (UNESCO Institute for Lifelong Learning) (2010). Belém Framework for Action. Harnessing the power and potential of adult learning and education for a viable future. Hamburg: UNESCO Institute for Lifelong Learning.

  • UIS (UNESCO Institute for Statistics) (2009). The next generation of literacy statistics: Implementing the Literacy Assessment and Monitoring Programme (LAMP). Montreal: UNESCO Institute for Statistics.

  • UIS (UNESCO Institute for Statistics) (2010). Global Education Digest. Montreal: UNESCO Institute for Statistics.

  • UNESCO (1958). Recommendation concerning the International Standardization of Educational Statistics. In Records of the general conference. Tenth Session. Paris: UNESCO.

  • UNESCO (1978). Revised Recommendation concerning the International Standardization of Educational Statistics. In Records of the general conference. Twentieth Session. Paris: UNESCO.

  • UNESCO (2004). The plurality of literacy and its implications for policies and programmes. Paris: UNESCO.

  • UNESCO (2005). Aspects of literacy assessment. Topics and issues from the UNESCO expert meeting, Paris, 10–12 June 2003. Paris: UNESCO.

  • UNSD (United Nations Statistical Division) (1997). Principles and recommendations for population and housing censuses. New York: United Nations Statistical Division.

  • UNSD (United Nations Statistical Division) (2008). Principles and recommendations for population and housing censuses. Revision 2. New York: United Nations Statistical Division.

  • Wagner, D. (2003). Smaller, quicker, cheaper: Alternative strategies for literacy assessment in the UN Literacy Decade. International Journal of Educational Research, 39, 293–309.

Author information

Corresponding author

Correspondence to Cesar Guadalupe.

Additional information

Both authors work for the UNESCO Institute for Statistics (UIS) at the unit responsible for the Literacy Assessment and Monitoring Programme (LAMP). The opinions expressed in this essay are the exclusive responsibility of the authors.

About this article

Cite this article

Guadalupe, C., Cardoso, M. Measuring the continuum of literacy skills among adults: educational testing and the LAMP experience. Int Rev Educ 57, 199–217 (2011). https://doi.org/10.1007/s11159-011-9203-2
