How valid is Monolingual Language Testing in Multilingual Contexts?
By Dr Bryan Maddox, Associate Professor in Educational Assessment at the University of East Anglia and Executive Director of Assessment Micro-Analytics
The question of valid and authentic assessment in multilingual contexts was recently brought home to me on a visit to Senegal in West Africa, where I observed field trials of the OECD's 'PISA for Development' (PISA-D) reading and mathematics assessment for out-of-school youths. The PISA-D assessment in Senegal offered participants a choice between French and Wolof. The provision of that choice seems reasonable and progressive: French is used in the national education system, whereas Wolof is widespread in Senegal, where it operates as a lingua franca spoken by some 80% of the population. However, as I learned in Senegal, language choice in assessment is not so straightforward.
Multilingualism in Senegal reflects the country's religious, economic and colonial history. As a result, as in other parts of West Africa, French is used in state activities, including secular education, while national languages such as Wolof are widely used in other domains (Scribner and Cole, 1981; Brenner, 2001; Lüpke and Bao Diop, 2014). Wolof is written in the Arabic-derived Ajami 'Wolofol' script, and occasionally in the Garay and Roman scripts. It is widely spoken in most day-to-day activities, and occasionally used as a written language in letter writing, accounting and religion (Lüpke and Bao Diop, 2014). However, Wolof tends to be ignored by secular educationalists and development organisations (ibid., p. 88).
In practice, code switching and code mixing between Wolof and French are an everyday part of language use in Senegal. The idea of monolingual assessment (even with language choice) does not easily transfer to such a richly multilingual context, where different languages and scripts are used for different purposes, and where proficiency in those languages is complex and unequally distributed. The PISA-D assessments that I observed were multilingual encounters. The test takers usually chose to take the assessment in French, but these out-of-school youths often had limited competence in French.
Image: A test administrator and test taker discuss the test item in PISA-D field trials in Senegal.
In their interaction, the test administrator and the test taker often switched between Wolof and French as they completed the assessments, and the administrator provided encouragement, as the transcript below illustrates.
Administrator: Le menu est là, riz au haricots, ce sont les mêmes prix. Maintenant, quel est le prix du riz au poulet ? Riz au haricots, riz au bœuf, riz au poulet, riz au poisson, thé ou café, jus frais. Appuie,… mu dem. [FRA: The menu is here, rice with beans, it’s the same prices. Now, what’s the price of the rice with chicken? Rice with beans, rice with beef, rice with chicken, rice with fish, tea or coffee, fresh juice. Press… WOL: let’s go] Waw, mu xool. Bal–… non, teggal ci mu xool, .. fi. Voilà. Ñaata lay jar? [WOL and FRA: Yes, let’s see. No, put it in, let’s see… there. How much is it?]
Test taker: Hm?
Administrator: Ca coûte combien ? [FRA: How much is this?] Bindal ci dal. [WOL: Write it] Voilà. Mu xool. [FRA and WOL: There you go. Let’s see]. Okay. Voilà. Gis nga, réponse bi moo nekk fi [FRA and WOL: Alright. There you go, the response is here]. Maintenant on continue. Allez-y, continue! Voilà. [FRA: Now let’s continue. Come on! There you go.]
What are the implications of such code switching for the way we design and conduct assessments of literacy and numeracy in multilingual contexts? It suggests the need to shift assessment away from a preoccupation with 'pure' languages, toward the idea of everyday linguistic mixing and multilingual impurity that better reflects many linguistic contexts. In some richly multilingual contexts, perhaps a more authentic assessment experience should promote a mix of languages and scripts in assessment texts? With tablet-based assessments, can we also enable in-test switching between languages? If so, the question of language choice in assessment might need some rethinking.
References:
Brenner, L. (2001). Controlling Knowledge: Religion, Power, and Schooling in a West African Muslim Society. Indiana University Press.
Scribner, S. & Cole, M. (1981). The Psychology of Literacy. Harvard University Press.
Lüpke, F. & Bao Diop, S. (2014). 'Beneath the Surface? Contemporary Ajami Writing in West Africa Exemplified through Wolofol'. In Bondarev, D., Juffermans, K., Asfaha, Y. M., & Abdelhay, A. (Eds.), African Literacies: Ideologies, Scripts, Education. Cambridge Scholars Publishing, pp. 88-117.
About Dr Bryan Maddox
Dr Bryan Maddox is Associate Professor in Educational Assessment at the University of East Anglia and Executive Director of Assessment Micro-Analytics. He specialises in small-scale observational studies of interaction in testing situations. His edited book, 'International Large-Scale Assessments in Education', is published by Bloomsbury (2018).
Dr Maddox on LinkedIn
About Assessment Micro-Analytics
Assessment Micro-Analytics works to improve test quality with fine-grained observational data. We specialise in studies of user experience and assessment response processes, using eye tracking technology and video-ethnography to inform next generation assessment design.