2nd Call for Papers and Abstracts
=================

PERSPECTIVES 2021: Workshop on the Perspectives on the Evaluation of Recommender Systems 
Co-located with ACM RecSys 2021.
The workshop will take place online!

Workshop website: https://perspectives-ws.github.io/2021/
Paper submission: https://easychair.org/conferences/?conf=perspectives2021 

In this workshop, we aim to bring together researchers and practitioners from academia and industry to critically discuss the different, and perhaps even diverging or contradictory, perspectives on the evaluation of recommender systems.

Important Dates
	• Paper submission deadline: ***July 29th, 2021***
	• Author notification: August 21st, 2021
	• Camera-ready version deadline: September 4th, 2021
	• Workshop: Saturday, September 25th, 2021 (as part of RecSys 2021)


Objectives
=================

Evaluation is essential when conducting rigorous research on recommender systems (RS). It may span from the evaluation of early ideas and approaches up to elaborate systems in operation, and it may target a wide spectrum of aspects. Naturally, we do (and have to) take various perspectives on the evaluation of RS. The term “perspective” may, for instance, refer to the various purposes of an RS, the various stakeholders affected by an RS, or the potential risks that ought to be minimized. Further, we have to consider that different methodological approaches and experimental designs represent different perspectives on evaluation. The perspective on the evaluation of RS may also be substantially shaped by the available resources: access to resources will likely differ between, for instance, PhD students and established researchers in industry.

Acknowledging that there are various perspectives on the evaluation of RS, we want to put up for discussion whether there is a “golden standard” for the evaluation of RS and, if so, whether it is indeed “golden” in any sense. We postulate that the various perspectives are valid and reasonable, and we aim to reach out to the community to discuss and reason about them.

The goal of the workshop is to capture the current state of evaluation and to gauge whether there is, or should be, a different target that RS evaluation should strive for. The workshop will address the question of where we as a community should go from here, and it aims to come up with concrete steps for action.

We are particularly committed to inviting and integrating researchers at the beginning of their careers, and we equally want to involve established researchers and practitioners from industry and academia alike. It is our particular concern to give a voice to the various perspectives involved.

Topics of Interest
=================
Topics of interest include, but are not limited to, the following:

	• Case studies of difficult, hard-to-evaluate scenarios
	• Evaluations with contradicting results
	• Showcasing (structural) problems in RS evaluation
	• Integration of offline and online experiments
	• Multi-stakeholder evaluation
	• Divergence between evaluation goals and what is actually captured by the evaluation
	• Nontrivial and unexpected experiences from practitioners

We deliberately solicit papers reporting problems and (negative) experiences regarding RS evaluation, as we consider the reflection on unsuccessful, inadequate, or insufficient evaluations a fruitful source for yet another perspective on RS evaluation, one that can spark discussions at the workshop. Accordingly, submissions may also address the following themes:
(a) “lessons learned” papers drawing on the successful application of RS evaluation or on “post mortem” analyses of evaluation strategies that failed to uncover decisive elements,
(b) “overview papers” analyzing patterns of challenges or obstacles to evaluation,
(c) “solution papers” presenting solutions for specific evaluation scenarios, and
(d) “visionary papers” discussing novel and future evaluation aspects.


Submissions
=================
We solicit two forms of contributions. First, we call for paper submissions that will undergo peer review; accepted papers will be published and presented at the workshop. Second, we offer the opportunity to present ideas without a paper submission; in this case, we call for abstracts, which will be reviewed by the workshop organizers. Accepted abstracts will be presented at the workshop, but not published.

1) Paper Submissions

We solicit papers of 4 to 10 pages. In line with this year’s call for papers for the main conference, we do not distinguish between full and short (or position) papers.

Submissions will undergo single-blind peer review by at least three program committee members and will be selected based on quality, novelty, clarity, and relevance. Authors of accepted papers will be invited to present their work during the workshop; accepted papers will be published as open-access workshop proceedings via ceur-ws.org. At least one author of each accepted paper must attend the workshop and present the work.


2) Abstract Submissions
We solicit abstracts of 200-350 words, to be submitted via EasyChair. The workshop organizers will select abstracts based on quality, clarity, relevance, and their potential to spark interesting discussions during the workshop. Authors of accepted abstracts will be invited to present their work during the workshop.


Workshop Organizers
=================
	• Eva Zangerle (Universität Innsbruck, Austria)
	• Christine Bauer (Utrecht University, The Netherlands)
	• Alan Said (University of Gothenburg, Sweden)


