PhD position in Explainable Recommender Systems

When:
01/04/2020 – 02/04/2020 all-day

Announcement related to an Action/Network: none

Laboratory/Company: ETIS / CY Cergy Université
Duration: 3 years
Contact: aikaterini.tzompanaki@cyu.fr
Publication deadline: 2020-04-01

Context:
A recommender helps the user explore the set of items in a system and find the items most relevant to them. The two basic recommender categories are the content- and score-based ones. The former exploits the characteristics of users and items, while the latter relies on the item scores (ratings) given by users. Traditional recommenders are implemented with TF-IDF and nearest-neighbor techniques, while more recent ones follow machine learning approaches such as matrix factorization and neural networks. A natural issue that comes with recommendations is whether a user, or even the system designer, understands the results of the recommender. This problem has given rise to so-called explainable recommenders.
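To make the score-based category concrete, the following is a minimal sketch (illustrative only, not part of the proposal; the rating matrix and all hyperparameters are assumptions) of matrix factorization trained by stochastic gradient descent on a toy user-item rating matrix:

```python
import numpy as np

# Toy rating matrix: rows = users, columns = items, 0 = unobserved.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

def factorize(R, k=2, steps=5000, lr=0.01, reg=0.02, seed=0):
    """Learn user factors P and item factors Q so that P @ Q.T
    approximates R on the observed (non-zero) entries."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = rng.standard_normal((n_users, k)) * 0.1
    Q = rng.standard_normal((n_items, k)) * 0.1
    users, items = np.nonzero(R)  # indices of observed ratings
    for _ in range(steps):
        for u, i in zip(users, items):
            err = R[u, i] - P[u] @ Q[i]
            p_u = P[u].copy()  # keep the pre-update user factors
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * p_u - reg * Q[i])
    return P, Q

P, Q = factorize(R)
pred = P @ Q.T  # predicted scores, including unobserved cells
# Recommend, for each user, the highest-predicted unseen item.
for u in range(R.shape[0]):
    unseen = np.where(R[u] == 0)[0]
    best = unseen[np.argmax(pred[u, unseen])]
    print(f"user {u}: recommend item {best}")
```

The predicted scores for the unobserved cells are what the recommender ranks; in an explainable recommender, the learned factors P and Q are also the natural starting point for explanations.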

Subject:
Explainable recommendation helps improve the transparency, persuasiveness, effectiveness, trustworthiness, and satisfaction of recommendation systems. It also helps system designers debug the system. So far, research in explainable recommendation has focused on the Why question: “Why is an item recommended?”. Solutions either treat the recommendation system as a black box, trying to reveal relationships among users and items or the importance of different features with respect to the predicted value, or delve into the intrinsic characteristics of the recommendation system in order to truly explain it. What has not yet been studied, though, is the Why-Not aspect of a recommendation: “Why is a specific item not recommended?”. We argue that explaining why certain items or categories of items are not recommended can be as valuable as explaining why items are recommended. Why-Not questions have recently gained the attention of the research community in multiple settings, e.g., for relational databases. In machine learning, Why-Not questions have been shown to improve the intelligibility of predictions but remain largely unexplored.
In this thesis we aim to explore Why-Not explanations for machine-learning-based recommenders. In a second phase, we aim to extend the recommenders so that they can leverage these Why-Not explanations for auto-tuning.
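As a toy illustration of the kind of question targeted here (the score matrix and the scoring logic below are hypothetical assumptions, not the proposal's method), a first, naive Why-Not answer can simply report how far a missing item's predicted score falls below the top-k cutoff:

```python
import numpy as np

# Hypothetical predicted relevance scores: pred[u, i] for user u, item i.
pred = np.array([
    [0.9, 0.2, 0.7, 0.4],   # user 0
    [0.1, 0.8, 0.3, 0.6],   # user 1
])

def top_k(pred, u, k=2):
    """Indices of the k items with the highest predicted score for user u."""
    return np.argsort(pred[u])[::-1][:k]

def why_not(pred, u, item, k=2):
    """Explain why `item` is absent from user u's top-k recommendations:
    report its rank and the score gap to the weakest recommended item."""
    recs = top_k(pred, u, k)
    if item in recs:
        return f"item {item} IS recommended to user {u}"
    rank = int(np.sum(pred[u] > pred[u, item])) + 1
    gap = pred[u, recs[-1]] - pred[u, item]
    return (f"item {item} ranks {rank} for user {u}: its score "
            f"{pred[u, item]:.2f} is {gap:.2f} below the top-{k} cutoff")

print(why_not(pred, u=0, item=1))
```

Such a score-gap answer is only a starting point; the thesis would go further, e.g., tracing the gap back to the model internals that produce the scores.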
More information at : https://perso-etis.ensea.fr/tzompanaki/phd_proposal.html

Candidate profile:
The candidate should hold an MSc degree in a field related to Computer Science, Machine Learning, or Applied Mathematics/Statistics. She/he should have solid knowledge of data management, algorithms, and programming. Knowledge of and previous experience with machine learning, recommender systems, or explainability are a plus. She/he should master the English language (oral and written); knowledge of French is not obligatory. She/he must have strong analytical skills, be proactive and self-driven, and be capable of collaborating with a group of international researchers.

Required education and skills:
The candidate should hold an MSc degree in a field related to Computer Science, Machine Learning, or Applied Mathematics/Statistics.

Employment address:
CY Cergy Paris Université
Site Saint Martin, 2 av. Adolphe Chauvin, Pontoise 95000 France

Attached document: