
Sparks at the Edges of Technology and Assessment

By Martha A. Kalnin Diede, Director, Syracuse University

Multiple forces have made assessment the “a” word in higher education. Learning management systems, other powerful technologies, the demand for data, and the concerns of faculty and students have created some fascinating edges. Self-checking, adaptability, and grit spark against cheating and unequal access. Assessment data that enable self-checking and accreditation rub against oversight that stifles creativity, enforcing homogeneity instead of engaging the benefits of diversity.

Technology gives new form to the assessment of student learning. Faculty can use clickers or polling technology to determine quickly how learners perceive an issue or whether they can answer questions related to the day’s class. Students with technology access can complete tests outside of class meeting time. Those exams can be adaptive, so that learners who grapple with test-taking see questions chosen according to how well or how poorly they answered the previous ones. In this way, technology individualizes assessment of student learning. Learners who answer a question correctly may face a more challenging question, whereas learners who struggle may encounter an easier question that reduces anxiety and helps them regain confidence. Adapted this way, student learning assessment becomes a way to help learners develop grit and build confidence.
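To make that adaptive pattern concrete, here is a minimal Python sketch in which difficulty steps up after a correct answer and down after an incorrect one. The question bank, difficulty levels, and step rule are illustrative assumptions, not any particular testing product’s algorithm.

```python
# A minimal sketch of adaptive question selection: difficulty rises
# after a correct answer and falls after a miss. The bank and the
# one-step rule are hypothetical, for illustration only.
import random

QUESTION_BANK = {
    1: ["Define 'assessment'.", "Name one LMS feature."],
    2: ["Contrast formative and summative assessment."],
    3: ["Critique an adaptive-testing design for equity."],
}  # difficulty level -> questions (toy data)

def next_difficulty(current: int, was_correct: bool) -> int:
    """Step difficulty up after a correct answer, down after a miss,
    clamped to the levels available in the bank."""
    step = 1 if was_correct else -1
    return max(min(current + step, max(QUESTION_BANK)), min(QUESTION_BANK))

def pick_question(level: int) -> str:
    return random.choice(QUESTION_BANK[level])

# Example walk: a correct answer at level 2 yields a level-3 question;
# an incorrect answer afterwards drops the learner back to level 2.
level = 2
for was_correct in (True, False):
    level = next_difficulty(level, was_correct)
    print(level, pick_question(level))
```

Real adaptive engines estimate ability statistically rather than stepping one level at a time, but the clamped step rule captures the behavior the paragraph describes.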

"Assessment technologies show great promise to support learning, especially when adaptively supporting learners and increasing persistence and equity through testing"

Assessment technologies can also help learners do better work and support learners who confront challenges not of their making in pursuing higher education. By embedding short self-tests in their course sites or by creating rubrics with self-check options, faculty can help learners assess their understanding of specific concepts or estimate how they might perform on specific assignments. Exercises and quizzes from course software can provide additional practice as students progress in their studies. By harnessing these capabilities, faculty can also offer students feedback before the final grade, so that learners can anticipate their likely final performance.

Technology and testing also rub against each other around advantage and authenticity. Authentic assessments allow students to demonstrate their learning and do not require mind-reading. Such assessments are difficult to design, fit poorly into traditional testing technologies, and may require new technology designs capable of assessing multifaceted products. For smaller classes (fewer than 30 students), these assessments represent a significant investment of faculty time; for larger classes (more than 75), they exceed the time available. Thus, finding ways to assess student learning authentically remains a challenging edge.

Of course, assessment technologies give cheating new forms, a fascinating friction between testing and perceived success. With smartphones and tablets, students can open a test on one device while using another to look up the information required to succeed, and most learners have access to five or more devices at any given time. In online classes, students can hire others to take their tests. Certainly, institutions can implement integrity technologies such as lock-down browsers, but those lock-downs work on only one device. Even identity-verification software does not prevent students from hiring others to take entire classes for them, a phenomenon already being researched for its educational impact.

Assessments are not solely for learners, however; faculty, too, can derive great value from assessment technologies. Using technology, faculty can see patterns in assessment responses, for example, most students answering a specific test question incorrectly. Treating these patterns as improvement data points, faculty can review specific questions to determine what changes are needed to ensure that students have indeed learned the material. By examining patterns even in self-tests, faculty can also identify areas of challenge for their learners and address them.
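As a concrete illustration of that pattern-spotting, the short Python sketch below flags questions missed by more than half the class. The response records and the 50 percent threshold are invented for illustration; they stand in for whatever an LMS gradebook export would provide.

```python
# A hedged sketch of item-level pattern spotting: given per-student
# response records, flag questions most students missed. All data
# below is toy data; real records would come from an LMS export.
from collections import defaultdict

# (student, question_id, answered_correctly)
responses = [
    ("s1", "Q1", True), ("s1", "Q2", False), ("s1", "Q3", False),
    ("s2", "Q1", True), ("s2", "Q2", False), ("s2", "Q3", True),
    ("s3", "Q1", False), ("s3", "Q2", False), ("s3", "Q3", True),
]

totals = defaultdict(lambda: [0, 0])  # question_id -> [misses, attempts]
for _student, qid, correct in responses:
    totals[qid][1] += 1
    if not correct:
        totals[qid][0] += 1

# Flag questions missed by more than half the class for faculty review;
# the 0.5 cutoff is an assumption, not a standard.
for qid, (misses, attempts) in sorted(totals.items()):
    rate = misses / attempts
    flag = "  <- review this item" if rate > 0.5 else ""
    print(f"{qid}: {misses}/{attempts} missed ({rate:.0%}){flag}")
```

Run on the toy records, this flags Q2, missed by every student, which is exactly the kind of question a faculty member would revisit for wording or reteaching.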

Programs, departments, colleges, and entire institutions can also leverage assessment technologies to demonstrate the return and value on investment of their degrees. By feeding assessment data into larger systems, institutions can build data warehouses that help guide decision-making. For example, assessment data can help define optimal class size or space use, or highlight patterns in student behavior. Constructed carefully, assessments can help reveal effective andragogical practices. Graduates can also become part of assessment data: their professional trajectories may help prospective students determine whether they, too, want to pursue a program that provides a similar return and value on investment.

Testing and assessment also challenge faculty-institution relations. Institutions can use assessment data to support faculty, but this kind of data may not tell the whole story of a faculty member’s work. Some learning management systems allow faculty to identify course goals and to link each goal to student artifacts demonstrating its achievement. Framed positively, this process sounds like easy data collection for the re-accreditation of programs, schools, and colleges, and for regional accreditation. Regrettably, the same data can suppress teaching innovation: trying a new practice brings a significant risk of failure. Moreover, these data could contribute to bias against faculty members who speak with an accent or who face an ability challenge. These data may then become evidence used to block professional progress or to dismiss a faculty member simply for being different. Thus, giving institutions access to what might be intermediate data in a course or program may damage the learning environment and drive faculty to teach to assessments rather than toward metacognitive practices.

Clearly, technology and assessment chafe against each other. Assessment technologies show great promise to support learning, especially when adaptively supporting learners and increasing persistence and equity through testing. For individual feedback and self-assessment, these technologies prove especially useful to students and faculty. At varying institutional levels, assessment data can inform decision-making, giving administrators additional evidence for what and how to build or renovate, which programs need funding and development, and which faculty might benefit from additional support to do their best work. Concurrently, assessment technologies create new challenges: potentially disadvantaging the very learners who need the most support, reducing assessment complexity and equity, and generating new forms of cheating. Institutionally, collecting data for assessment purposes may be necessary, but those data may also be used to target faculty members inappropriately. At this point of generative friction, technology and assessment create a spark to imagine new futures.
