Explain to Learn – Learn to Explain

When:
20/06/2020 – 21/06/2020 all-day

Offer linked to the Action/Network: – — –/– — –

Laboratory/Company: Université Côte d'Azur / Inria Sophia Méditerranée
Duration: 3 years
Contact: precioso@unice.fr
Publication deadline: 2020-06-20

Context:
In the last few years, the explosion of interest in deep learning has dramatically improved the performance of intelligent systems in a remarkable number of different fields. However, recent critical analyses provide evidence of their limitations. In machine learning, one is typically concerned with communication protocols whose purpose is to explain the task to be learned. However, interest is growing in richer, human-like interactions capable of supporting a sort of Learn to Explain and Explain to Learn (L2EE2L) protocol.
In this PhD project the student is expected to explore a constraint-based modeling of the environment that makes it possible to unify learning and inference within the same mathematical framework. The unification is based on the abstract notion of constraint, which provides a representation of knowledge granules gained from the interaction with the environment. The agents are based on deep neural network architectures, whose learning and inferential processes are driven by different schemes for enforcing the environmental constraints.
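As a toy illustration of this unified view, the sketch below treats both the supervision and an extra behavioral requirement as penalty terms of a single objective and minimizes them jointly. The data, the linear softmax model, and the low-entropy "confidence" constraint are all assumptions made for illustration, not the project's actual framework.

```python
import numpy as np

# Illustrative sketch: supervised labels and a domain requirement are both
# expressed as soft constraints (penalty terms) over one objective.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                 # toy inputs
y = (X[:, 0] > 0).astype(float)              # toy binary labels

def forward(W, X):
    z = X @ W                                # logits for 2 classes
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)  # softmax -> valid distribution

def objective(W):
    p = forward(W, X)
    # Supervision as a constraint: prob. of the true class should be ~ 1.
    fit = np.mean((p[np.arange(len(y)), y.astype(int)] - 1.0) ** 2)
    # Extra "environmental" constraint: predictions should be confident
    # (low entropy), enforced as a soft penalty.
    ent = -np.mean(np.sum(p * np.log(p + 1e-9), axis=1))
    return fit + 0.1 * ent

# Minimize with finite-difference gradient descent (enough for a sketch).
W = np.zeros((3, 2))
eps = 1e-5
for _ in range(200):
    g = np.zeros_like(W)
    for i in np.ndindex(*W.shape):
        Wp = W.copy()
        Wp[i] += eps
        g[i] = (objective(Wp) - objective(W)) / eps
    W -= 1.0 * g

acc = np.mean(forward(W, X).argmax(axis=1) == y)
```

Both the data fit and the confidence requirement are handled by the same mechanism (a penalty on constraint violation), which is the sense in which learning and inference share one mathematical framework.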

Subject:
In this PhD we plan to address this core question for the AI field by including logic constraints into machine learning models from both directions:
– Top-Down: Logic constraints from a relational knowledge graph will be translated into real-valued functions arising from the adoption of suitable t-norms. Computational models such as Graph Neural Networks (GNNs) will be incorporated into the proposed framework thanks to the expression of structured domains by constraints. Building on such neural architectures, as in Deep Logic Models, we should be able to strengthen and enrich the existing knowledge (for instance by predicting links between concepts in knowledge graphs).
– Bottom-Up: The architecture of a deep network trained on a given dataset can be related to the underlying knowledge between the concepts represented in this dataset. Thus, the knowledge graph between the considered concepts and the deep neural network can strengthen and improve each other through reasoning and the preservation of semantic relational consistency.

Candidate profile:
The candidate should hold a Master's degree in Computer Science with a major in Artificial Intelligence, or in Applied Mathematics with a specialization in Machine Learning.

Required background and skills:
Symbolic learning / learning with constraints
Knowledge representation
Sub-symbolic learning / machine learning / deep learning

Job location:
Mainly at Inria Sophia Méditerranée, with some visits to the University of Siena.

Attached document: 202005081642_3IA-PhDProposal-Learn to Explain – Explain to Learn.pdf