PhD proposal on fairness in machine learning — NAVER LABS Europe / LIG

When: 30/06/2019 – 01/07/2019, all day

Announcement linked to an Action/Network: none

Laboratory/Company: NAVER LABS Europe / LIG
Duration: 3 years
Contact: patrick.loiseau@univ-grenoble-alpes.fr
Application deadline: 2019-06-30

Context:
Recommendation is a prominent machine learning task, used across a variety of platforms ranging from news aggregators and webtoon providers to ad publishers, online dating applications, and job marketplaces. At the heart of recommendation lies a ranking algorithm that orders the content presented to a user. Because recommendation platforms affect users in many important ways, it is crucial to make them fair, yet what constitutes a fair ranking remains unclear.

Algorithmic fairness has recently received great attention from the machine learning and data mining communities. A number of mathematical definitions of fairness have been proposed (demographic parity, equal opportunity, etc.), and researchers have devised various methods to build learning algorithms that respect those constraints. However, this line of work is currently limited in two directions. First, most of it considers classification, whereas very little exists for ranking/recommendation (where it is arguably more complex to define and satisfy fairness). Second, it always considers one-sided fairness notions, taking the point of view of either content producers (e.g., news providers) or content consumers (e.g., users) in isolation. Recommendation platforms, on the other hand, act as mediators between these two actors and need to consider fairness from both points of view simultaneously. Naturally, whether a ranking is fair depends on the stakeholder's perspective: intuitively, producers expect fairness in the exposure of their content, while consumers expect fairness in the variety of items they are exposed to. These (possibly contradictory) objectives raise the crucial question of how to define fairness in multi-stakeholder recommendation settings and how to build algorithms that satisfy the defined notion.
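To make one of the cited definitions concrete, here is a minimal illustrative sketch (not from the offer; function and variable names are hypothetical) of how demographic parity is typically measured for a binary classifier: it compares the rate of positive predictions across groups defined by a sensitive attribute.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between groups 0 and 1."""
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive rate in group 0
    rate_1 = y_pred[group == 1].mean()  # positive rate in group 1
    return abs(rate_0 - rate_1)

# Toy data: group 0 receives positives at rate 0.75, group 1 at rate 0.25.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(y_pred, group))  # 0.5
```

A gap of 0 means the classifier satisfies demographic parity exactly; equal opportunity would instead condition the same comparison on the true label being positive.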

Topic:
The PhD student will conduct research on fairness in multi-stakeholder recommendation platforms, with three main objectives. First, we will empirically study one such platform, using the example of Naver's news and webtoon recommendation platforms, and work in particular on empirically quantifying unfairness. This will help us better understand the multi-stakeholder fairness issue from a data-driven perspective and formalize fairness notions for this setting. Second, we will design ranking algorithms that provide fair recommendations by design. This will involve theoretical work to prove that the designed algorithms satisfy the identified fairness properties, as well as characterizing the trade-off between the fairness of the different stakeholders. Finally, we will test the algorithms in practice and design methods to audit the results, so as to demonstrate to a third party that an algorithm respects the fairness properties. This raises questions such as how to measure fairness, which data is needed to show that fairness is respected on a particular run, for how long, etc.
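As a hedged sketch of what "empirically quantifying unfairness" on the producer side might look like (the discount model and names below are illustrative assumptions, not part of the offer): a common approach models position bias with a logarithmic discount, as in DCG, so a producer's exposure is the sum of discounts over the positions its items occupy in a ranking.

```python
import math

def exposure(ranking, producer_of):
    """Total position-discounted exposure per producer, using a DCG-style
    discount 1/log2(position + 1) for the item shown at each position."""
    totals = {}
    for pos, item in enumerate(ranking, start=1):
        p = producer_of[item]
        totals[p] = totals.get(p, 0.0) + 1.0 / math.log2(pos + 1)
    return totals

# Toy ranking of three items from two producers.
producer_of = {"a1": "A", "a2": "A", "b1": "B"}
print(exposure(["a1", "b1", "a2"], producer_of))
# A: 1/log2(2) + 1/log2(4) = 1.5; B: 1/log2(3) ≈ 0.631
```

Comparing such exposure totals against each producer's share of relevance (or of the catalog) is one way to turn "fair exposure" into a measurable quantity; the thesis would need to formalize which comparison is appropriate in the multi-stakeholder setting.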

Candidate profile:
Candidates should hold (or be about to obtain) an MSc degree in computer science, applied mathematics, or a related field and have:
• a strong background in mathematics (at least in probability/statistics) and some background in machine learning;
• programming skills to perform data-driven empirical studies;
• an interest in the societal impact of machine learning and in the research area of algorithmic bias (no prior experience in this area is required).

Education and required skills:
See the candidate profile above.

Employment address:
NAVER LABS Europe
6 Chemin de Maupertuis
38240 Meylan

Attached document: PhDoffer_CifreNaverLIG_FairnessRecommendation.pdf