This was a thesis I completed for my honors in Computer Science at Williams College. Read the full document here.
Reading is an irreplaceable part of our everyday lives. We read to relax, to satiate our imaginations, and to enrich our understanding of the world around us. In the educational setting, we read above all to gain new knowledge. This information acquisition process does not always proceed smoothly, however. When encountering new material, we may be hindered by gaps in understanding and consequently fall into a state of confusion. To continue learning effectively, it is therefore critical for readers to recognize this confusion and resolve it in a timely manner.
Confusion, however, is often difficult to recognize and quantify, especially for readers themselves, who may slip into boredom and passivity instead of confronting it. A prolonged stay in this state of apathy can lead to frustration and burnout, eventually contributing to full disengagement from the reading material. Avoiding this outcome motivates the need for automatic confusion detection.
In this thesis, I built a confusion detection application to help students recognize confusion in real time during the reading process. The application achieved this by collecting and analyzing reading annotations: traces of the students’ thought processes that reflect their interpretations and understanding of the text. Using language and discourse features, I trained a support vector machine (SVM) classifier on data from Lacuna Stories, a social annotation platform, to predict the confusion level of each annotation on a 1-to-5 Likert scale. The reader could then review these classifications and gain a better grasp of their understanding of different parts of the reading.
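To make the pipeline concrete, here is a minimal sketch of annotation-level confusion classification using scikit-learn. The annotations and labels below are hypothetical stand-ins for the Lacuna Stories data, and the bag-of-words features are only a simplification of the language and discourse features used in the thesis:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Hypothetical reading annotations labeled with confusion
# on a 1-5 Likert scale (1 = not confused, 5 = very confused).
annotations = [
    "This argument makes perfect sense to me.",
    "I think the author is building on the earlier example.",
    "I'm not sure what this term means here.",
    "What is the connection between these two paragraphs?",
    "I am completely lost in this section.",
    "Clear summary of the main claim.",
]
confusion = [1, 2, 4, 4, 5, 1]

# TF-IDF language features feeding a linear-kernel SVM classifier.
model = make_pipeline(TfidfVectorizer(), SVC(kernel="linear"))
model.fit(annotations, confusion)

# Predict a confusion level for a new annotation.
pred = model.predict(["I don't understand this passage at all."])[0]
print(pred)
```

In practice, the trained model would run over each annotation a student makes, and the predicted levels would be surfaced back to the reader as described above.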
I then conducted a pilot user study using the application to identify trends in how the accuracy of confusion prediction influenced a reader’s actual learning, reported perceived learning, and reported perceptions of self-predicted and machine-predicted confusion. When the users’ self-reported confusion levels were taken as ground truth (perceived accuracy), the trained classifier performed no better than random. However, after re-coding the annotations for confusion using the same standards applied to the Lacuna training set (actual accuracy), the SVM model achieved much higher accuracy, suggesting that future iterations need to unify users’ definitions of confusion. The study also suggested that predictions users perceived as “more accurate” improved actual learning, did not affect reported perceived learning, and widened the gap between reported perceived self-predicted confusion and reported perceived machine-predicted confusion.