
The Purposeful and Strategic Use of Technology in Language Testing and Assessment

By Ahmet Dursun, Director, University of Chicago Office of Language Assessment

In testing or assessing a person’s language ability, the key goal is to provide an accurate and valid interpretation of that person’s language skills in the real-life context for which the test is developed. At the same time, the key challenge is to develop and administer authentic, practical assessment tasks that measure relevant knowledge, skills, and abilities in situations representative of the target language use context. It is at this juncture that the purposeful and strategic use of technology can be most helpful.

Recent technological developments have shaped both how technology mediates language communication in general and the methods available to develop, deliver, and score authentic language assessments. This dual role of technology has significant implications for how assessment tasks should utilize technological resources. For instance, email has become an indispensable part of daily professional communication, and it has its own characteristics as a genre in which technology both mediates and shapes language use. Therefore, when we intend to measure someone’s email communication ability in a specific context, we need to simulate the technology both in terms of how it mediates email communication and as a delivery method. By contrast, if we measure this ability through a paper-and-pencil test, test takers suffer not only from an inauthentic task environment but also from the lack of technological resources available in most email applications, such as spelling-error identification, auto-correction, and more recent predictive typing features.

Assessment tasks, then, should make real-life, language-related technological resources available to test takers during the test. In that sense, technology should be used to help narrow the gap between the test and the real world, potentially contributing to a positive testing experience and eliminating non-language factors that affect test takers’ performance and scores. For example, the University of Chicago’s Academic Reading Comprehension Assessment (ARCA™) has been designed to measure graduate students’ ability to conduct academic research by reading in a secondary research language. In this assessment, students receive a discipline-specific academic text in a second language required by their department. They read, annotate, and take notes with the help of a print dictionary. The text is then taken away, and students write a summary protocol, reproducing its central arguments in their own words in the primary research language (English). Initially, students had to handwrite their responses, creating a discrepancy between the assessment task and how such work is actually performed in real life. When we transitioned to computer technology and gave students the opportunity to type their responses, we found that they wrote longer and better responses, and the consistency of scoring increased markedly. Similarly, we have been exploring technologies that allow students to read the text on screen and access annotation and note-taking tools within a secure testing environment, since these are by default the resources they have when reading academic texts in PDF format. These examples show that technology does not need to be “cool” or innovative to bring assessment tasks closer to real-life language practices; purposeful and strategic implementation is what matters.

In addition to this role in designing richer, more authentic assessment tasks, technology has also created new opportunities in how language tests can be delivered. There is an increasing need to administer language tests in an anytime, anywhere fashion that does not require test takers to visit a proctored testing center. We faced this challenge at the University of Chicago as a result of a mandate to test prospective international students’ and scholars’ academic English oral proficiency before they arrive in the US. We developed the Academic English Proficiency Assessment (AEPA™), an authentic, highly structured English conversation with simulated academic role-play tasks, intended to elicit the test taker’s best possible sample of functional speaking ability in an academic context. We needed a technology solution that could handle the complex, live nature of this test while remaining user-friendly and accessible without restrictions across the world. After a long investigation and piloting process, we chose the Zoom (https://www.zoom.us) video conferencing platform because it provides the required technological resources, such as content sharing, real-time co-annotation, digital whiteboarding, session recording, end-to-end encryption, and accessibility for all test takers.

Handling test security concerns becomes more complex for language tests that are delivered online. Fortunately, technological developments have started to address some of these concerns. Among these solutions are user authentication, webcam recording, keystroke logging, browser control, screen recording, and, more recently, live monitoring (online proctoring). As these security features become more common and are built into online testing and survey platforms, more institutions will be able to handle their test administration in an anytime, anywhere fashion more practically.
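
To make two of these safeguards concrete, here is a minimal, hypothetical sketch of how a browser-based test client might implement keystroke logging and basic browser control. The DOM events used are standard, but the batching logic and the /api/proctor-events endpoint are illustrative assumptions, not features of any particular platform.

```typescript
// Hypothetical sketch of client-side proctoring signals in a web test client.
// The audit endpoint and payload shape are illustrative assumptions.

interface ProctorEvent {
  type: "keydown" | "blur" | "visibility";
  detail: string;
  timestamp: number; // milliseconds since epoch
}

const eventLog: ProctorEvent[] = [];

function record(type: ProctorEvent["type"], detail: string): void {
  eventLog.push({ type, detail, timestamp: Date.now() });
}

// Keystroke logging: record which keys are pressed (not the full response text),
// useful for detecting paste shortcuts or unusual input patterns.
document.addEventListener("keydown", (e: KeyboardEvent) => {
  record("keydown", e.key);
});

// Browser control: flag when the test window loses focus or the tab is hidden,
// which may indicate the test taker switched to another application.
window.addEventListener("blur", () => record("blur", "window lost focus"));
document.addEventListener("visibilitychange", () => {
  record("visibility", document.visibilityState);
});

// Periodically ship the buffered events to a (hypothetical) audit endpoint.
setInterval(() => {
  if (eventLog.length === 0) return;
  const batch = eventLog.splice(0, eventLog.length);
  void fetch("/api/proctor-events", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(batch),
  });
}, 5000);
```

A production platform would pair such client-side signals with server-side authentication and secure-browser lockdown, since client-side logging alone is easy to circumvent.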

Along with the implementations mentioned above, other exciting developments in technology-mediated language testing are on the horizon. As a result of the “big data” and learning trajectories provided by learning management systems and online language learning platforms, as well as unprecedented developments in artificial intelligence, we have started to observe a trend toward personalized, content-based language instruction that provides each learner with a customized learning experience. Initially, this will eliminate one-size-fits-all approaches in classroom assessment practices, and we will begin to see each language test uniquely tailored to each learner. Eventually, these developments will provide solid ground for standardized language proficiency testing to move in the same direction and develop custom tests designed for specific purposes and contexts. Such a testing approach will help the language testing community reach its goal of providing a more accurate and valid interpretation of a person’s language skills in the real-life language use context.
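
One established mechanism behind such tailoring is computerized adaptive testing. The sketch below is an illustrative toy, not a description of any existing product: it assumes a simple Rasch (one-parameter logistic) model, a hypothetical item pool, and a basic stochastic-approximation update, just to show how an ability estimate can drive item selection for each individual learner.

```typescript
// Toy sketch of adaptive item selection under a Rasch (1PL) model.
// Item pool, step size, and update rule are illustrative assumptions.

interface Item {
  id: string;
  difficulty: number; // Rasch difficulty parameter, in logits
}

// Probability of a correct response under the Rasch model.
function pCorrect(ability: number, difficulty: number): number {
  return 1 / (1 + Math.exp(-(ability - difficulty)));
}

// Choose the unanswered item whose difficulty is closest to the current
// ability estimate -- the most informative item at that level.
function nextItem(ability: number, pool: Item[], answered: Set<string>): Item | undefined {
  const remaining = pool.filter((i) => !answered.has(i.id));
  return remaining.sort(
    (a, b) => Math.abs(a.difficulty - ability) - Math.abs(b.difficulty - ability)
  )[0];
}

// Simple stochastic-approximation update after each response: the estimate
// moves up after a correct answer and down after an incorrect one.
function updateAbility(ability: number, item: Item, correct: boolean, step = 0.5): number {
  const expected = pCorrect(ability, item.difficulty);
  return ability + step * ((correct ? 1 : 0) - expected);
}
```

A real tailored assessment would, of course, use a richer item pool, model, and stopping rules; the point is only to show the loop that makes each test unique to each learner.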
