Lerot: an Online Learning to Rank Framework
Living Labs workshop at CIKM'13, San Francisco, USA. Nov 1, 2013.
Summary
This paper presents Lerot, an open-source framework for evaluating online learning to rank algorithms using simulated user interactions, addressing the fact that academic researchers rarely have access to real user feedback data. The framework lets researchers test learning algorithms against configurable user models by simulating clicks, and it ships with multiple interleaving methods (team draft, balanced, probabilistic, etc.) and click models (cascade, dependent click, federated click) for evaluating ranking performance. This gives the academic community a standardized testbed for learning to rank research, allowing systematic evaluation of algorithms without requiring real user data or risking harm to actual users.
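To make the simulation idea concrete, here is a minimal sketch of two of the components the summary mentions: a cascade-style click model and team-draft interleaving. This is an illustration only, not Lerot's actual API; all function names, parameters, and the probability values are hypothetical, and a cascade model is simplified to "click with a relevance-dependent probability, maybe stop after a click".

```python
import random

def simulate_cascade_clicks(ranking, relevance, p_click=(0.2, 0.8),
                            p_stop=(0.0, 0.5), rng=None):
    """Cascade-style user model (illustrative, not Lerot's implementation):
    the user scans the ranking top-down, clicks a document with a
    relevance-dependent probability, and may stop examining after a click.
    relevance maps doc id -> 0 (non-relevant) or 1 (relevant)."""
    rng = rng or random.Random()
    clicks = []
    for rank, doc in enumerate(ranking):
        rel = relevance.get(doc, 0)
        if rng.random() < p_click[rel]:
            clicks.append(rank)
            if rng.random() < p_stop[rel]:
                break  # user is satisfied and stops scanning
    return clicks

def team_draft_interleave(ranking_a, ranking_b, rng=None):
    """Team-draft interleaving (textbook version): the ranker with the
    smaller team picks next (coin flip on ties), contributing its
    highest-ranked document not yet in the interleaved list."""
    rng = rng or random.Random()
    all_docs = set(ranking_a) | set(ranking_b)
    interleaved, team_a, team_b = [], [], []
    while len(interleaved) < len(all_docs):
        pick_a = (len(team_a) < len(team_b) or
                  (len(team_a) == len(team_b) and rng.random() < 0.5))
        source, team = (ranking_a, team_a) if pick_a else (ranking_b, team_b)
        for doc in source:
            if doc not in interleaved:
                interleaved.append(doc)
                team.append(doc)
                break
    return interleaved, team_a, team_b
```

In an online evaluation loop, the interleaved list would be shown to the (simulated) user, clicks would be generated by the click model, and the ranker whose team collected more clicks wins the comparison.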
Slides
Links
Related Publications
Lerot: an Online Learning to Rank Framework
Anne Schuth, Katja Hofmann, Shimon Whiteson, and Maarten de Rijke.
In Proceedings of Living Labs for Information Retrieval Evaluation workshop at CIKM'13, 2013.