Tutorial: Empirical Evaluation of User Modeling Systems

Given by David Chin

This tutorial will introduce UM researchers to the fundamental techniques of empirical evaluation of user modeling systems. The target audience is new UM researchers who want to learn how to evaluate their UM systems. No background in statistics is assumed. Topics will include experiment design, running experiments, and experiment analysis, with an emphasis on statistical techniques for analysis. After this tutorial, attendees will be well prepared to participate in the follow-on afternoon tutorial, "Evaluation of Adaptive Systems".

This tutorial will address the following learning objectives:

  • Understand the difference between independent and dependent variables
  • Be familiar with covariate variables commonly correlated with UM variables and which tests to use for them
  • Understand the common sources of nuisance variables (individual differences and environmental influences) and how to control them to improve significance
  • Know when to use between-subjects vs. within-subjects experiment designs
  • Be able to calculate sensitivity, power, and effect size, and understand how to use them and why they are important
  • Be familiar with factorial experiment designs
  • Recognize the threats to experiment validity, both internal and external, and how to control for them
  • Be familiar with common parametric and non-parametric statistical tests for analyzing the statistical significance of data
  • Know about the assumptions of ANOVA and how to test for them
  • Be able to use ANCOVA to improve significance by factoring out variance due to covariates
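To give a flavor of the analysis techniques listed above, the following is a minimal sketch (not part of the tutorial materials) of how one might compute an effect size, check ANOVA assumptions, and fall back to a non-parametric test, using NumPy and SciPy on hypothetical task-completion-time data; the group names and parameters are illustrative assumptions, not data from any real study.

```python
import numpy as np
from scipy import stats

def cohens_d(a, b):
    """Effect size for two independent groups, using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

# Hypothetical data: task-completion times (seconds) under two interface conditions
rng = np.random.default_rng(0)
control = rng.normal(loc=30.0, scale=5.0, size=40)
adaptive = rng.normal(loc=27.0, scale=5.0, size=40)

# Check ANOVA assumptions before trusting the parametric result:
_, p_norm_control = stats.shapiro(control)        # normality of each group
_, p_norm_adaptive = stats.shapiro(adaptive)
_, p_equal_var = stats.levene(control, adaptive)  # homogeneity of variance

# Parametric test (one-way ANOVA) and a non-parametric fallback
f_stat, p_anova = stats.f_oneway(control, adaptive)
u_stat, p_mwu = stats.mannwhitneyu(control, adaptive)

print(f"Cohen's d = {cohens_d(control, adaptive):.2f}")
print(f"Shapiro p-values: {p_norm_control:.3f}, {p_norm_adaptive:.3f}; Levene p = {p_equal_var:.3f}")
print(f"ANOVA p = {p_anova:.4f}; Mann-Whitney U p = {p_mwu:.4f}")
```

If the Shapiro-Wilk or Levene p-values are small, the ANOVA assumptions are suspect and the non-parametric Mann-Whitney result is the safer one to report; these trade-offs are exactly the kind of decision the tutorial's objectives cover.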