Evaluating a General Model of Adaptive Tutorial Dialogues


Amali Weerasinghe, Antonija Mitrovic, David Thomson, Pavle Mogin and Brent Martin

Paper type: 
Full paper


Tutorial dialogues are considered one of the critical factors contributing to the effectiveness of human one-on-one tutoring. We discuss how we evaluated the effectiveness of a general model of adaptive tutorial dialogues in both an ill-defined and a well-defined task. The first study involved dialogues in database design, an ill-defined task. The control group participants received non-adaptive dialogues regardless of their knowledge level and explanation skills, while the experimental group participants received adaptive dialogues customised on the basis of their student models. The performance on pre- and post-tests indicates that the experimental group participants learned significantly more than their peers. The second study involved dialogues in data normalization, a well-defined task. The performance of the experimental group increased significantly between pre- and post-test, whereas the improvement of the control group was not significant. The studies show that the model is applicable to both ill- and well-defined tasks, and that the resulting dialogues support learning effectively.


tutorial dialogues, constraint-based tutors, ill-defined tasks, well-defined tasks