Reasoning and Provenance on Neural Networks

When: 2024-02-11 (all day)

Laboratory/Company: LIG, Université Grenoble Alpes & Inria
Duration: 6 months
Contact: silviu.maniu@univ-grenoble-alpes.fr
Publication deadline: 2024-02-11

Context:
Artificial intelligence, and neural networks in particular, has brought unprecedented progress in recent years in important areas such as language, vision, and control. However, two important challenges remain. First, some of the most fundamental traits of human intelligence, such as generalization and basic logical reasoning, remain difficult to realize and integrate, because neural architectures do not allow logic rules to be added to their optimization. Second, there is no sound and generic way to integrate explanations into these architectures, or to track where and how their outputs were computed.

This lack of understanding, reasoning, and traceability translates into a fundamental weakness of AI in terms of explainability and accountability. As a result, AI-based methods are commonly used as “black boxes”: it is difficult to evaluate or identify why a particular network, or part of a network, performs well or poorly on a given task, because the knowledge being processed (relations, concepts) is never made explicit. **Neuro-symbolic AI** is an area of research that has become particularly active in bridging this gap, studying methods for **combining symbolic knowledge representation and reasoning with deep learning**. An important challenge is the combination of two completely different worlds: Euclidean spaces for learning, and symbolic logic for reasoning. This implies moving from symbolic logic with a Boolean interpretation to fuzzy or probabilistic interpretations, by integrating probabilities into the logic.
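
To make this concrete, here is a minimal sketch, assuming the product t-norm as the fuzzy relaxation of Boolean logic, of how logic rules can become differentiable objectives in PyTorch. The rule and the tensor names below are illustrative, not taken from the project.

```python
import torch

# Fuzzy connectives under the product t-norm: truth values live in
# [0, 1] instead of {0, 1}, so logic rules become differentiable.
def fuzzy_and(a, b):
    return a * b                      # t-norm (conjunction)

def fuzzy_or(a, b):
    return a + b - a * b              # t-conorm (disjunction)

def fuzzy_not(a):
    return 1.0 - a                    # negation

def fuzzy_implies(a, b):
    return fuzzy_or(fuzzy_not(a), b)  # a -> b rewritten as (not a) or b

# Hypothetical example: the network believes "bird" strongly but
# "can_fly" weakly; the rule "bird -> can_fly" has a truth degree
# whose violation can be added to the training loss.
p_bird = torch.tensor(0.9, requires_grad=True)
p_can_fly = torch.tensor(0.2, requires_grad=True)
rule = fuzzy_implies(p_bird, p_can_fly)
loss = 1.0 - rule   # penalize violations of the rule
loss.backward()     # gradients flow through the logic itself
```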

Going further, neural architectures (neuro-symbolic or otherwise) would benefit greatly from the ability to explain the results of their reasoning. This can be achieved by **annotating the parts of the neural computation graph**. In this manner, one can track what was used to answer a query, or how the data was transformed; this is known as **provenance** or **lineage**.
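
As a small illustration of the idea (not code from the project), the following sketch annotates each value in a toy computation with the set of source facts it depends on, in the spirit of provenance semirings (Green et al., PODS 2007); all names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Traced:
    value: float        # the numeric result of the computation
    sources: frozenset  # identifiers of the inputs this value depends on

    def __add__(self, other):
        return Traced(self.value + other.value, self.sources | other.sources)

    def __mul__(self, other):
        return Traced(self.value * other.value, self.sources | other.sources)

# A toy "computation graph": out = (a * b) + c
a = Traced(2.0, frozenset({"a"}))
b = Traced(3.0, frozenset({"b"}))
c = Traced(1.0, frozenset({"c"}))
out = a * b + c
print(out.value)    # 7.0
print(out.sources)  # frozenset({'a', 'b', 'c'}): the lineage of the output
```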

Subject:
The proposed internship aims to cover at least one of the following two objectives:

1. To investigate theoretical and practical methods for querying data structures built from noisy and incomplete data, i.e., to develop approaches that tolerate noise and missing data while enabling reasoning capabilities beyond the reach of current sub-symbolic systems (neural networks).

2. To extend the probabilistic annotations used in neuro-symbolic computing with provenance annotations, in order to also provide explanations for the output and the reasoning. This can be achieved by extending previous work on graph queries and provenance; a sketch of the idea follows this list.
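
A hypothetical sketch of what such an extension could look like: each annotation pairs a probability with a symbolic provenance expression, so an output carries both its likelihood and the expression explaining its derivation. The class name and the independence/disjointness assumptions behind the probability arithmetic are ours for illustration, not the project's.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProbProv:
    prob: float  # probabilistic annotation
    why: str     # symbolic provenance expression over source-fact ids

    def __mul__(self, other):  # conjunction of (assumed) independent facts
        return ProbProv(self.prob * other.prob,
                        f"({self.why} * {other.why})")

    def __add__(self, other):  # (assumed) disjoint alternative derivations
        return ProbProv(self.prob + other.prob,
                        f"({self.why} + {other.why})")

x = ProbProv(0.9, "x")
y = ProbProv(0.5, "y")
z = ProbProv(0.3, "z")
out = x * y + z
print(out.prob)  # 0.75
print(out.why)   # ((x * y) + z)
```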

Candidate profile:
We are interested in students able to produce _working implementations_, possibly directly in popular frameworks such as PyTorch or TensorFlow, and to evaluate them on _real-world_ datasets.

The host laboratory belongs to a ZRR (zone à régime restrictif, a restricted-access research zone), so special access permissions are required. The internship can take place only if these permissions are granted.

Required education and skills:
Master's student (M1 or M2) in Computer Science or a data-related field.

Programming skills (Python, etc.) required.

Work address:
Laboratoire d’Informatique de Grenoble, UMR 5217
Bâtiment IMAG – 150 place du Torrent
Domaine universitaire de Saint-Martin-d’Hères