PhD position – Object Detection from Few Multispectral Examples

Offer related to the Action/Network: – — –/PhD students

Laboratory/Company: IRISA/ATERMES
Duration: 36 months
Contact: minh-tan.pham@irisa.fr
Publication deadline: 2024-05-15

Context:
Please find the full PhD topic here: http://www-obelix.irisa.fr/files/2024/04/PhD_Cifre2024_IRISA_ATERMES.pdf

ATERMES is an international mid-sized company based in Montigny-le-Bretonneux, with strong expertise in high technology and system integration, from upstream design to the long-life maintenance cycle. It specializes in system solutions for border surveillance. Its flagship product BARIER™ (“Beacon Autonomous Reconnaissance Identification and Evaluation Response”) provides a ready-to-deploy application for the temporary protection of strategic sites or of ill-defined border regions in mountainous or remote terrain, where fixed surveillance modes are impracticable or overly expensive to deploy. As another example, SURICATE is the first optronic ground “RADAR” of its class; it covers a very wide field efficiently, with automatic classification of intruders thanks to multispectral deep learning detection.

Subject:
The project aims to provide deep learning-based methods to detect objects in outdoor environments using multispectral data in a low-supervision context, e.g., learning from few examples to detect scarcely observed objects. The data consist of RGB and IR (infrared) images, which are frames from calibrated and aligned multispectral videos.
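
To illustrate what working with such aligned multispectral inputs can look like, below is a minimal PyTorch sketch of early fusion, where the RGB and IR channels are simply concatenated before entering a toy convolutional backbone. The module, tensor shapes, and fusion strategy are illustrative assumptions, not the project’s actual architecture.

```python
# Minimal sketch of early-fusion multispectral input, assuming aligned
# RGB and IR frames of the same spatial size (hypothetical setup).
import torch
import torch.nn as nn

class EarlyFusionBackbone(nn.Module):
    """Toy backbone that consumes a 4-channel RGB+IR stack."""
    def __init__(self, out_channels: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, stride=2, padding=1),  # 4 = 3 RGB + 1 IR
            nn.ReLU(inplace=True),
            nn.Conv2d(32, out_channels, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, rgb: torch.Tensor, ir: torch.Tensor) -> torch.Tensor:
        x = torch.cat([rgb, ir], dim=1)  # channel-wise concatenation (early fusion)
        return self.features(x)

rgb = torch.randn(2, 3, 256, 256)  # batch of RGB frames
ir = torch.randn(2, 1, 256, 256)   # batch of aligned IR frames
feats = EarlyFusionBackbone()(rgb, ir)
print(feats.shape)  # torch.Size([2, 64, 64, 64])
```

Mid- or late-fusion variants (separate backbones per modality, merged at the feature level) are equally plausible design choices; early fusion is shown only because it is the shortest to write down.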

Few-shot learning [1][2], semi-supervised learning [3][4] and continual learning [5][6] are among the most widely used frameworks to tackle this task. The first approach, few-shot object detection (FSOD), has recently relied on meta-learning or transfer-learning techniques [1]. Yet, in realistic settings involving scarce objects, a domain shift may exist that makes the task more challenging. The second approach, semi-supervised learning, exploits a large amount of unlabeled data during training to strengthen the representation capacity of deep models, thereby improving object detection performance from a small amount of labeled samples. The third approach, continual learning [5], aims to maintain the performance of deep models on old categories and to avoid the “catastrophic forgetting” phenomenon when learning new object categories. It has also been integrated into FSOD [7] to ensure that few-shot object detectors can learn new object concepts without forgetting previously learned categories that still appear at prediction time.

Last but not least, given the rapid evolution of AI research, another challenge to tackle is the investigation of modern AI models, more specifically foundation models built on multimodal transformers [8][9]. These large machine learning models, trained on vast quantities of data at scale, are designed to be adapted to a wide range of downstream tasks, including object detection; see for instance UniDetector [10] or CLIP2 [11]. Such models, which enable zero-shot object detection, could well be the ultimate answer for achieving true scene understanding.
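
To make the few-shot idea concrete, here is a hedged sketch of a prototype-based few-shot classification head in the spirit of the meta-learning literature cited above. All names, shapes, and the episode layout are illustrative assumptions, not the methods of [1][2].

```python
# Hedged sketch of a prototype-based few-shot classifier head
# (prototypical-network style); purely illustrative, not the
# project's actual FSOD method.
import torch
import torch.nn.functional as F

def prototype_logits(support_feats, support_labels, query_feats, n_classes):
    """Score query embeddings by distance to per-class support prototypes."""
    protos = torch.stack([
        support_feats[support_labels == c].mean(dim=0)  # prototype = mean embedding
        for c in range(n_classes)
    ])                                        # (n_classes, d)
    dists = torch.cdist(query_feats, protos)  # Euclidean distance to each prototype
    return -dists                             # closer prototype -> higher logit

# Toy 2-way 5-shot episode with 16-dimensional embeddings
support = torch.randn(10, 16)
labels = torch.arange(2).repeat_interleave(5)  # five support examples per class
query = torch.randn(4, 16)
probs = F.softmax(prototype_logits(support, labels, query, n_classes=2), dim=1)
print(probs.shape)  # torch.Size([4, 2])
```

In an FSOD pipeline, such a head would typically classify region features produced by a detector backbone; the distance-based scoring is what lets new categories be added from only a handful of labeled boxes.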

Candidate profile:
MSc or Engineering degree with an excellent academic record and proven research experience in the following fields: computer science, applied mathematics, signal processing, and computer vision;

European nationality required

Required education and skills:
Experience with machine learning, in particular deep learning;

Skills and proven experience in programming (Python is mandatory; knowledge of frameworks such as PyTorch is a real plus);

Excellent communication skills (spoken/written English) are required;

Ambition to publish at the highest level in the computer vision community (CVPR, ICCV, TPAMI, …) during the thesis.

Job location:
IRISA, Université Bretagne Sud, 56000 Vannes

Attached document: 202404161424_PhD_Cifre2024_IRISA_ATERMES.pdf