Abstract:
The hurried pace of the modern classroom does not permit formative feedback on writing assignments at the frequency or quality recommended by the research literature. One way to increase individual feedback to students is to incorporate some form of computer-generated assessment. This study explores automated assessment of student writing in a content-specific context (history) on both traditional and non-traditional tasks. Four classrooms of middle school history students completed two projects, one culminating in an essay and the other in a digital documentary. From the completed projects, approximately 70 essays and 70 digital documentary scripts were scored by human raters and by an automated evaluation system. The essays were used to compare human and computer-generated feedback in the context of history education, and the digital documentary scripts were used to test feedback on a non-traditional task. The results were encouraging, with very high correlation and reliability coefficients within and across both sets of documents, suggesting the possibility of new forms of formative assessment of student writing for content-area instruction in a variety of emerging formats.