The Center for Languages & Intercultural Communication at Rice University invites you to submit a proposal for inclusion in an upcoming special volume to be published in 2021. We hope that you will be a part of this ambitious project.
Beyond Validity: Social and Ethical Consequences of Assessment
This edited volume, based upon the Rice University Center for Languages & Intercultural Communication’s conference of the same name held in April 2019, will draw together research that critically examines the social and ethical consequences of assessment. It will focus on the analysis of test (mis)uses in societies and on the development and implementation of tests that conceptualize language use in multilingual contexts, rather than treating language use as the mere manipulation of linguistic components in laboratory settings.
Test validity is never value-free or context-free; test interpretation, uses, and consequences for individuals, programs, institutions, and societies are also aspects of test validation (Cronbach, 1988; Messick, 1989). Accordingly, construct validity is only part of the validity evidence, and the claims implied by, and decisions made from, proposed test interpretations and uses should be specified and examined (Kane, 2006). The International Language Testing Association (ILTA) has created a Code of Ethics for language testers to ensure that test uses meet certain social and cultural values, such as justice, respect for autonomy, and respect for civil society (ILTA, 2018). There are, however, many issues that deserve more attention. For example, tests have been used to create an “elite” group because people receiving higher scores are regarded as possessing more “merits” than those receiving lower scores (Fulcher, 2015). It is therefore necessary to examine the claims about why and how certain attributes, values, or merits are deemed more important in a given context than others. Echoing Fulcher’s call to examine the relationship between language assessment and meritocracy, Shohamy (2011) argues for valuing multilingual language use in testing and suggests moving away from a monolingual approach to language assessment.
Overall, there is a need to validate assessment along its social dimensions (e.g., Larsen-Freeman, 2018; McNamara & Roever, 2006; Plough, Banerjee, & Iwashita, 2018; Shohamy, 2011). The present volume will address the ways in which existing testing models could better account for the social aspects of test validation. We welcome a wide variety of theoretical and critical perspectives and aim to include chapters focusing on a range of assessment and testing settings, from small classroom achievement tests to large-scale norm-referenced proficiency tests and general assessment frameworks such as the CEFR and ACTFL.
A number of guiding themes are particularly relevant for this volume:
Call for chapter proposals:
Contributors may expand upon the guiding themes above. Both empirical studies and theoretical analyses are welcome.
To help us select the best publisher and secure publication rights as early as possible, please confirm by June 15, 2019 that you plan to submit a proposal. We will follow the timeline below:
The abstract should be no more than 500 words and should be submitted to Heather Lazare (firstname.lastname@example.org). We will provide the formal submission guidelines for manuscripts once we receive a response from the publisher. The review process will begin following the February deadline.