Authors
Guido Zuccon
Teerapong Leelanupab
Stewart Whiting
Joemon Jose
Publication date
2011
Publisher
Dublin City University
Description
The TREC evaluation paradigm, developed from the Cranfield experiments, typically considers the effectiveness of information retrieval (IR) systems when retrieving documents for an isolated query. A step towards a more robust evaluation of interactive information retrieval systems has been taken by the TREC Session Track, which aims to evaluate the retrieval performance of systems over query sessions. Its evaluation protocol consists of artificially generated reformulations of initial queries extracted from other TREC tasks, together with relevance judgements made by NIST assessors. This procedure is adopted mainly because session logs are difficult to access and interactive experiments are expensive to conduct.