Zhou, T., Sheng, H., & Howley, I. (2020). Assessing Post-hoc Explainability of the BKT Algorithm. In AAAI Conference on Artificial Intelligence, Ethics, and Society.
As machine intelligence is increasingly incorporated into educational technologies, it becomes imperative for instructors and students to understand the potential flaws of the algorithms their systems rely on. This paper describes the design and implementation of an interactive post-hoc explanation of the Bayesian Knowledge Tracing algorithm, which is implemented in learning analytics systems used across the United States. After a user-centered design process to resolve interaction design difficulties, we ran a controlled experiment to evaluate whether the interactive or the ‘static’ version of the explainable led to greater learning. Our results reveal that how much users learn about an algorithm through an explainable depends on their educational background. Consequently, designers of post-hoc explainables must consider their users’ educational background to determine how best to empower more informed decision-making with AI-enhanced systems.
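For readers unfamiliar with Bayesian Knowledge Tracing: the algorithm maintains a probability that a student has mastered a skill and updates it after each observed answer, using guess, slip, and learning-rate parameters. Below is a minimal sketch of the standard BKT update rule; the parameter values and function name are illustrative, not taken from the paper or any particular learning analytics system.

```python
def bkt_update(p_know, correct, p_guess=0.2, p_slip=0.1, p_learn=0.15):
    """Return the updated P(skill known) after observing one answer.

    Illustrative parameter values: p_guess is the chance of answering
    correctly without the skill, p_slip the chance of erring despite
    having it, and p_learn the chance of acquiring the skill per step.
    """
    if correct:
        likelihood = p_know * (1 - p_slip)
        evidence = likelihood + (1 - p_know) * p_guess
    else:
        likelihood = p_know * p_slip
        evidence = likelihood + (1 - p_know) * (1 - p_guess)
    posterior = likelihood / evidence  # Bayes' rule on the observation
    # Account for the chance the skill was learned on this step.
    return posterior + (1 - posterior) * p_learn

# A short answer sequence drives the mastery estimate up and down.
p = 0.3  # prior P(skill known)
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
print(round(p, 3))
```

A tutoring system typically flags a skill as mastered once this estimate crosses a threshold (0.95 is a common choice), which is precisely the kind of hidden decision rule a post-hoc explainable aims to surface for instructors and students.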