Evaluation is a central aspect of research and development in information retrieval systems. In academia, the quantitative evaluation of such systems is mostly known under the term Cranfield paradigm. This research method has been established for more than 25 years in international evaluation campaigns such as the Text REtrieval Conference (TREC) or the Conference and Labs of the Evaluation Forum (CLEF). Industrial research, meanwhile, has taken a different approach: many companies have access to a large number of users and their interactions, which can be recorded and analyzed. These infrastructures enable alternative evaluation methods such as large-scale A/B experiments or other online approaches. In recent years, various approaches that go beyond TREC-style evaluations have emerged to close this gap and to bring academic and industrial evaluation together.
We are calling for articles that report on novel evaluation efforts, such as:
- Living Labs
- Evaluation as a Service
- Large-scale A/B tests
- Interactive retrieval evaluation
- Session-based evaluation
- User-centered evaluation
- Counterfactual evaluation
- Novel evaluations in application domains such as cultural heritage, digital libraries, social media, expert search, health information, etc.
- Other evaluations that go beyond TREC
Expected length of the paper: 8–10 pages, double-column (cf. the author guidelines at www.springer.com/13222).
Contributions in either German or English are welcome.
- Deadline for submissions: October 1, 2019
- Issue delivery: DASP-1-2020 (March 2020)
- Philipp Schaer, Technische Hochschule Köln, firstname.lastname@example.org
- Klaus Berberich, Hochschule für Technik und Wirtschaft des Saarlandes, email@example.com