ICPRAI 2026: Special Session on Frontiers of Artificial Intelligence for Medical Applications: Models, Reasoning, Perception, and Interaction

Date: 2026-01-15
Location: Montréal, Canada

Special Session on
Frontiers of Artificial Intelligence for Medical Applications: Models, Reasoning, Perception, and Interaction
ICPRAI 2026, Montréal, Québec, Canada, June 15-18, 2026
https://icprai2026.com/

Scope
Artificial intelligence has seen remarkable progress in digital healthcare across areas such as learning algorithms, knowledge representation, perception, and interaction. From deep learning advances in vision and language, to reinforcement learning for control, to symbolic and neuro-symbolic hybrids for robust reasoning, modern AI systems are increasingly capable of understanding complex data, adapting to new tasks, and interacting with humans and environments. This Special Session invites original contributions that explore foundational models, architectures, and applications spanning the AI spectrum, including but not limited to learning and representation; planning and decision-making; perception and multimodal understanding; human-AI collaboration; and trustworthy, explainable, and ethical AI. We welcome theoretical analyses, novel algorithms, system implementations, and domain-specific studies that illustrate state-of-the-art techniques and future directions in AI, with special encouragement for work in large language models, multimodal vision-language modeling, and impactful applications in healthcare and medical domains such as medical imaging, autism detection, neurodegenerative disorders (e.g., Alzheimer’s and Parkinson’s), and diabetes management.

Topics of Interest
Topics of interest include, but are not limited to:
• Large Language Models (LLMs) and foundation models
• Embeddings, representation learning, and transfer learning
• Generative modeling: GANs, VAEs, diffusion and beyond
• Multimodal integration: vision-language, audio-visual, sensor fusion
• Symbolic, neuro-symbolic, and hybrid reasoning systems
• Automated planning, scheduling, and decision-theoretic frameworks
• Reinforcement learning, multi-agent systems, and control
• Human-AI collaboration, mixed-initiative interfaces, and personalization
• Explainability, interpretability, causal analysis, and fairness
• Robustness, safety, trustworthiness, and privacy-preserving AI
• Healthcare and medical AI applications: imaging, diagnostics, chronic care

Important Dates
• Full paper submission deadline: January 15, 2026
• Notification of acceptance: March 7, 2026
• Camera-ready deadline: March 13, 2026
• Early registration deadline: April 30, 2026
• Special Session at ICPRAI: June 15-18, 2026

Submission Guidelines
Please follow the standard ICPRAI submission instructions on the conference website. When uploading your paper, select “Frontiers of Artificial Intelligence for Medical Applications: Models, Reasoning, Perception, and Interaction” as the target session.

Organizers
• Prof. Ghazaleh Khodabandelou, University Paris-Est, France
ghazaleh.khodabandelou@u-pec.fr
• Prof. Mounîm A. El Yacoubi, Institut Polytechnique de Paris, France
mounim.el_yacoubi@telecom-sudparis.eu

Our website: www.madics.fr
Follow us on Twitter: @GDR_MADICS

Internship: Representation of physical quantities on the Semantic Web

Offer linked to Action/Network: RECAST

Laboratory/Company: LIMOS, UMR 6158 / Mines Saint-Étienne
Duration: 4-6 months
Contact: maxime.lefrancois@emse.fr
Posting deadline: 2025-12-15

Context:
Physical quantities form an important part of what is represented in scientific data, medical data, industry data, open data, and to some extent, various private data.

Whether it is distances, speeds, and payloads in transportation; concentrations, masses, and moles in chemistry; powers, intensities, and voltages in the energy sector; or dimensions of furniture, people's weights and heights, durations, and many others in health, there is a need to represent physical quantities, to store them, to process them, and to exchange them between information systems, potentially on a global scale, often over the Internet and via the Web.

Subject:
In this internship, we seek to precisely define a way to unambiguously represent physical quantities for the Web of Data. More precisely, we will study the proposals made to encode physical quantities in the standard data model of the Semantic Web, RDF. We will be particularly interested in the use of a data type dedicated to this encoding, probably adapted from the proposal of Lefrançois & Zimmermann (2018) based on the UCUM standard.

Having established a rigorous definition of the data type (and possibly of its variants, if relevant), we will focus on implementing a module that can read, write, and process physical quantities and their operations within RDF data manipulation APIs, for managing, querying, and reasoning over knowledge graphs containing physical quantities.
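To make this concrete, here is a minimal, hypothetical Python sketch of the parsing and comparison core such a module could expose, assuming the "number, space, UCUM unit" lexical form of Lefrançois & Zimmermann (2018); the function names and the tiny unit table are invented for illustration:

```python
import re
from fractions import Fraction

# Hypothetical sketch: parse a cdt:ucum-style literal of the form
# "<decimal number> <UCUM unit>" and compare two length quantities after
# normalizing them to metres. The three-entry unit table is purely
# illustrative; a real module would implement the full UCUM grammar.
_LEXICAL = re.compile(r"^\s*(-?\d+(?:\.\d+)?)\s+(\S+)\s*$")
_TO_METRE = {"m": Fraction(1), "cm": Fraction(1, 100), "mm": Fraction(1, 1000)}

def parse_quantity(lexical: str):
    """Split a lexical form such as '1.85 m' into (exact value, unit code)."""
    match = _LEXICAL.match(lexical)
    if match is None:
        raise ValueError(f"not a valid quantity literal: {lexical!r}")
    return Fraction(match.group(1)), match.group(2)

def same_length(a: str, b: str) -> bool:
    """Compare two length literals exactly, after conversion to metres."""
    (va, ua), (vb, ub) = parse_quantity(a), parse_quantity(b)
    return va * _TO_METRE[ua] == vb * _TO_METRE[ub]
```

With this sketch, `same_length("1.85 m", "185 cm")` holds, which is exactly the kind of unit-aware equality a query or reasoning engine would need when filtering or joining on quantity values.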

The ambition is that, on the one hand, the specification will become a de facto standard within a few years, and perhaps later a de jure standard; and that, on the other hand, the implementation will serve as the reference against which the compliance of future implementations can be measured.

This study should lead to the publication of a scientific paper in a high-impact scientific journal.

References
Maxime Lefrançois and Antoine Zimmermann (2018). The Unified Code for Units of Measure in RDF: cdt:ucum and other UCUM Datatypes. In The Semantic Web: ESWC 2018 Satellite Events, Heraklion, Crete, Greece, June 3-7, 2018, Revised Selected Papers, volume 11155 of Lecture Notes in Computer Science, pp. 196-201, Springer.
Gunther Schadow and Clement J. McDonald (2017). The Unified Code for Units of Measure. Technical report, Regenstrief Institute, Inc., November 21, 2017.

Candidate profile:
Master 2 students in computer science

To apply, please submit by email or via an online file repository your CV, motivation letter, university transcripts, and optionally letters of recommendation. The motivation letter must explain why you are interested in this topic and why you are qualified to work on it.

Required education and skills:
Equivalent of an M2 level in computer science, with knowledge of Semantic Web technologies. The candidate must also have either very good programming skills in Java or a very good aptitude for formal and abstract thinking.

Work address:
Mines Saint-Étienne, Institut Henri Fayol, 29 rue Pierre et Dominique Ponchardier, 42100 Saint-Étienne, France

Argumentative Graph-RAG for Participatory Democracy

Offer linked to Action/Network: none

Laboratory/Company: LIP6, Sorbonne University
Duration: 5-6 months
Contact: rafael.angarita@lip6.fr
Posting deadline: 2026-04-30

Context:

Subject:
Participatory democracy platforms (Make, Decidim, Cap Collectif, Consul) enable thousands of citizens to propose and discuss ideas for public policies. However, the large volume of textual contributions produces severe information overload: citizens struggle to identify similar or opposing proposals, while decision-makers face difficulty in detecting consensus or disagreement.

Recent research at LIP6 has shown that Natural Language Processing (NLP) can detect argumentative relations between citizen proposals (equivalence, contradiction, neutrality). These relations can be structured into argumentative graphs, which help organize debates and improve navigation within large participatory datasets.

This internship aims to extend these ideas using Graph Retrieval-Augmented Generation (Graph-RAG). By combining graph-based retrieval with language generation, the project seeks to build intelligent tools capable of summarizing debates, identifying conflicting or redundant proposals, and assisting citizens in writing balanced contributions.
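As a rough sketch of the retrieval half of such a Graph-RAG pipeline (the proposals, relation labels, and context format below are invented for illustration; the generation step is left to whichever LLM the project adopts):

```python
# Hypothetical toy argumentative graph: nodes are citizen proposals, edges
# carry an argumentative relation (equivalence / contradiction / neutrality),
# as produced by NLP relation-detection models. All content is invented.
PROPOSALS = {
    "p1": "Make city-centre public transport free.",
    "p2": "Abolish fares on buses and trams downtown.",
    "p3": "Raise fares to fund more bus lines.",
}
EDGES = [
    ("p1", "p2", "equivalence"),
    ("p1", "p3", "contradiction"),
]

def neighbours(pid):
    """Return (other_proposal_id, relation) pairs adjacent to `pid`."""
    result = []
    for a, b, rel in EDGES:
        if a == pid:
            result.append((b, rel))
        elif b == pid:
            result.append((a, rel))
    return result

def build_context(pid):
    """Graph-retrieval step of Graph-RAG: gather a proposal and its
    argumentative neighbourhood as textual context to prepend to an LLM
    prompt (debate summarization, redundancy detection, writing aid)."""
    lines = [f"Proposal: {PROPOSALS[pid]}"]
    for other, rel in neighbours(pid):
        lines.append(f"- {rel} with: {PROPOSALS[other]}")
    return "\n".join(lines)
```

Retrieving the argumentative neighbourhood rather than plain nearest neighbours is what lets the generator see explicitly labeled agreement and disagreement instead of raw similar text.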

Candidate profile:
Master 2 / final-year engineering student

Required education and skills:
– Programming: Python, PyTorch or TensorFlow

– NLP / ML: Experience with large language models, embeddings, or NLP tasks

– Data Science: Text preprocessing, vector representations, evaluation metrics

– Research: Ability to conduct literature reviews, design small experiments, and analyze results

– Participatory democracy: Interest in participatory democracy or computational argumentation

Work address:
Sorbonne University, 4 place Jussieu, 75005 Paris.

Attached document: 202511121059_Stage_LIP6_2025_2026.pdf

Efficient self-supervised learning using dataset distillation

Offer linked to Action/Network: none

Laboratory/Company: LIPADE
Duration: 6 months
Contact: ayoub.karine@u-paris.fr
Posting deadline: 2026-04-30

Context:
The performance of supervised deep learning methods in computer vision heavily depends on the availability of labeled data, whose annotation is time-consuming and requires expert knowledge. To overcome this limitation, Self-Supervised Learning (SSL) has emerged as a promising alternative.
In this paradigm, models learn from unlabeled data by generating their own supervisory signals. The resulting pre-
trained models can then be fine-tuned on various downstream tasks such as image classification, object detection, and
semantic segmentation. However, achieving performance comparable to supervised learning often requires large-scale
datasets and high training costs, which significantly increase computational and storage demands. This internship
aims to alleviate these constraints by exploring data distillation techniques to make SSL training more efficient.

Subject:
Dataset Distillation (DD) [1] aims to condense a large-scale training dataset into a much smaller synthetic one
such that models trained on the distilled data achieve performance comparable to those trained on the original
dataset (see figure 1). Most existing DD methods are designed for efficient supervised learning and can be broadly
classified into three main categories [2]: (1) Performance Matching, which minimizes the loss on the synthetic
dataset by aligning the performance of models trained on real and synthetic data, (2) Parameter Matching, which
trains two neural networks respectively on real and synthetic data and encourages similarity in their parameters and
(3) Distribution Matching, which generates synthetic data that closely mimics the distribution of the original dataset.
In this internship, we will focus on the Parameter Matching approach. Building upon the work of Cazenavette et al.
[3], the authors of [4] extended this concept to SSL using knowledge distillation [5, 6, 7], particularly employing SSL
methods such as Barlow Twins and SimCLR. In the same vein, this internship will explore the DINO (self-DIstillation
with NO labels, MetaAI) SSL method [8], which naturally produces teacher–student parameter trajectories that can
be leveraged for Parameter Matching. The different steps of the internship are:
▷ Step 1 – Literature review: Review recent dataset distillation methods applied to computer vision, with a focus on parameter matching and SSL-based approaches.
▷ Step 2 – Trajectory Observation: Analyze and visualize the teacher–student parameter trajectories generated by DINO during SSL training.
▷ Step 3 – Integration into Data Distillation Frameworks: Design a trajectory matching loss based on DINO's teacher–student dynamics and train a student model on synthetic data guided by these trajectories.
▷ Step 4 – Test on downstream computer vision tasks: Assess the effectiveness of the proposed approach on tasks such as image classification.
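To fix ideas, the trajectory-matching principle can be caricatured on a one-parameter model. The loss below follows the spirit of Cazenavette et al. [3] (student parameters after a few steps on synthetic data, compared to a later expert checkpoint, normalized by the expert's own displacement); all data and hyperparameters are invented for illustration:

```python
# Toy, one-parameter illustration of trajectory matching: a scalar model
# y = w * x trained by gradient descent. All numbers are invented.
REAL = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # "real dataset": y = 2x

def train(w, data, lr=0.05, steps=1):
    """Gradient descent on the mean squared error of y = w * x."""
    for _ in range(steps):
        g = sum(2.0 * (w * x - y) * x for x, y in data) / len(data)
        w = w - lr * g
    return w

# Expert trajectory: teacher parameters recorded while training on REAL.
expert = [0.0]
for _ in range(10):
    expert.append(train(expert[-1], REAL))

def trajectory_loss(synthetic, t=0, n=3, m=3):
    """Trajectory-matching loss in the spirit of [3]: start a student at
    expert checkpoint t, train it n steps on the synthetic data, and compare
    with expert checkpoint t + m, normalized by the expert's displacement."""
    w_student = train(expert[t], synthetic, steps=n)
    return (w_student - expert[t + m]) ** 2 / (expert[t] - expert[t + m]) ** 2

# A synthetic point consistent with the real rule reproduces the expert
# trajectory far better than an inconsistent one.
good = trajectory_loss([(2.0, 4.0)])
bad = trajectory_loss([(2.0, 1.0)])
```

Dataset distillation would then optimize the synthetic points (and possibly the learning rate) to minimize this loss; here the comparison of `good` and `bad` already shows the loss preferring distillation-friendly data.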
Bibliography
[1] Tongzhou Wang et al. "Dataset distillation". arXiv preprint arXiv:1811.10959 (2018).
[2] Ruonan Yu, Songhua Liu, and Xinchao Wang. "Dataset distillation: A comprehensive review". IEEE Transactions on Pattern Analysis and Machine Intelligence 46.1 (2023), pp. 150-170.
[3] George Cazenavette et al. "Dataset distillation by matching training trajectories". In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (2022), pp. 4750-4759.
[4] Siddharth Joshi, Jiayi Ni, and Baharan Mirzasoleiman. "Dataset Distillation via Knowledge Distillation: Towards Efficient Self-Supervised Pre-training of Deep Networks". In: The Thirteenth International Conference on Learning Representations (2025). URL: https://openreview.net/forum?id=c61unr33XA.
[5] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. "Distilling the knowledge in a neural network". arXiv preprint arXiv:1503.02531 (2015).
[6] Ayoub Karine, Thibault Napoléon, and Maher Jridi. "I2CKD: Intra- and inter-class knowledge distillation for semantic segmentation". Neurocomputing 649 (Oct. 2025), p. 130791. URL: https://hal.science/hal-05144692.
[7] Ayoub Karine, Thibault Napoléon, and Maher Jridi. "Channel-spatial knowledge distillation for efficient semantic segmentation". Pattern Recognition Letters 180 (Apr. 2024), pp. 48-54. URL: https://hal.science/hal-04488459.
[8] Oriane Siméoni et al. "DINOv3". arXiv preprint arXiv:2508.10104 (2025).

Candidate profile:
The ideal candidate should have knowledge of deep learning, computer vision, and Python programming, and an interest in efficient machine/deep learning.

Required education and skills:
Master 2 student, final-year MSc student, or engineering-school student in computer science.

Work address:
45 rue des Saints-Pères, 75006 Paris

Attached document: 202511111324_2025_Internship_DD_SSL.pdf

Knowledge Distillation from Large Vision Foundation Models for Efficient Dense Prediction

Offer linked to Action/Network: none

Laboratory/Company: LIPADE
Duration: 6 months
Contact: ayoub.karine@u-paris.fr
Posting deadline: 2026-04-30

Context:
Recently, several Large Vision Foundation Models (LVFMs) have been proposed in the literature [1]. They are
trained through a Self-Supervised Learning (SSL) paradigm on large-scale unlabeled datasets and evaluated on small
labeled datasets (fine-tuning). These models have achieved state-of-the-art performance across a wide range of
downstream computer vision tasks, including both non-dense tasks (e.g., image classification, image retrieval) and
dense tasks (e.g., semantic segmentation, object detection). However, the growing size and computational demands of
the LVFMs significantly constrain their applicability on resource-limited devices (e.g., drones, smartphones). For instance,
CLIP (Contrastive Language–Image Pretraining, OpenAI) [2] comprises up to 0.4 billion parameters, DINOv3 (self-
DIstillation with NO labels, MetaAI) [3] includes models with up to 7 billion parameters, and the SAM 2 (Segment
Anything Model, Meta AI) [4] exceeds 224 million parameters. To reduce the computational demands of such massive
architectures, this internship will focus on investigating knowledge distillation techniques.

Subject:
The knowledge distillation (KD) technique [5, 6, 7] transfers knowledge from a powerful teacher network to a
smaller student model, enabling the student to achieve significantly improved performance with lower computational
cost. In this process, the student is trained on the same dataset as the teacher, allowing it to directly leverage the
teacher’s learned representations. However, directly applying KD to LVFMs presents several challenges. First, the
most performant LVFMs are developed by large tech companies, and their training datasets are often not publicly
available. Second, these LVFMs typically employ Vision Transformer (ViT) architectures [8] as encoders, whereas
convolutional neural networks (CNNs) are generally lighter and more computationally efficient, making them strong
candidates for student models on edge devices. Third, there are significant discrepancies in capacity between LVFMs
and smaller edge models. The latter two challenges are partially addressed by Lee et al. [9], who propose a method
to customize the well-generalized features of LVFMs for a given student model. Despite promising results, this work
does not thoroughly address the issues of unavailable source datasets and cross-architecture knowledge transfer.
Additionally, only the image classification task is considered. In this internship, we aim to tackle these challenges by
investigating state-of-the-art methods for cross-architecture KD [10], data-free KD [11] and adaptive KD [12]. As
illustrated in figure 1, we will focus on two dense downstream tasks: semantic segmentation and object detection.
The different steps of the internship are:
▷ Step 1 – Literature review on KD from foundation models.
▷ Step 2 – Compare different methods of cross-architecture KD, data-free KD, and adaptive KD: The teacher will be an LVFM such as CLIP, DINOv3, or SAM 2. The student encoder should be a CNN such as ResNet-18.
▷ Step 3 – Test the student model on different semantic segmentation and object detection datasets: A comparison is to be made with classical KD methods dedicated to dense prediction.
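The classic distillation objective of Hinton et al. [5], on which these KD variants build, fits in a few lines; the toy logits below are invented, and a real pipeline would add a cross-entropy term on ground-truth labels:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax: larger T flattens the distribution."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(teacher_logits, student_logits, T=4.0):
    """Hinton-style distillation term: KL(teacher_T || student_T), scaled
    by T^2 so gradient magnitudes stay comparable across temperatures."""
    p = softmax(teacher_logits, T)  # soft targets from the frozen teacher
    q = softmax(student_logits, T)  # current student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return T * T * kl

# Invented logits: a student that mimics the teacher's ranking is penalized
# far less than one that swaps the top classes.
aligned = kd_loss([5.0, 1.0, 0.5], [4.0, 0.8, 0.3])
mismatched = kd_loss([5.0, 1.0, 0.5], [0.3, 4.0, 0.8])
```

For dense prediction, the same idea is applied per pixel or per feature map rather than per image, which is what the cited I2CKD and channel-spatial variants [6, 7] elaborate on.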
Bibliography
[1] Muhammad Awais et al. "Foundation models defining a new era in vision: a survey and outlook". IEEE Transactions on Pattern Analysis and Machine Intelligence (2025).
[2] Alec Radford et al. "Learning transferable visual models from natural language supervision". In: International Conference on Machine Learning. PMLR, 2021, pp. 8748-8763.
[3] Oriane Siméoni et al. "DINOv3". arXiv preprint arXiv:2508.10104 (2025).
[4] Nikhila Ravi et al. "SAM 2: Segment anything in images and videos". arXiv preprint arXiv:2408.00714 (2024).
[5] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. "Distilling the knowledge in a neural network". arXiv preprint arXiv:1503.02531 (2015).
[6] Ayoub Karine, Thibault Napoléon, and Maher Jridi. "I2CKD: Intra- and inter-class knowledge distillation for semantic segmentation". Neurocomputing 649 (Oct. 2025), p. 130791. DOI: 10.1016/j.neucom.2025.130791. URL: https://hal.science/hal-05144692.
[7] Ayoub Karine, Thibault Napoléon, and Maher Jridi. "Channel-spatial knowledge distillation for efficient semantic segmentation". Pattern Recognition Letters 180 (Apr. 2024), pp. 48-54. DOI: 10.1016/j.patrec.2024.02.027. URL: https://hal.science/hal-04488459.
[8] Alexey Dosovitskiy et al. "An Image is Worth 16×16 Words: Transformers for Image Recognition at Scale". In: International Conference on Learning Representations (2021). URL: https://openreview.net/forum?id=YicbFdNTTy.
[9] Jungsoo Lee et al. "CustomKD: Customizing large vision foundation for edge model improvement via knowledge distillation". In: Proceedings of the Computer Vision and Pattern Recognition Conference (2025), pp. 25176-25186.
[10] Weijia Zhang et al. "Cross-Architecture Distillation Made Simple with Redundancy Suppression". In: Proceedings of the IEEE/CVF International Conference on Computer Vision (2025), pp. 23256-23266.
[11] Qianlong Xiang et al. "DKDM: Data-free knowledge distillation for diffusion models with any architecture". In: Proceedings of the Computer Vision and Pattern Recognition Conference (2025), pp. 2955-2965.
[12] Yichen Zhu and Yi Wang. "Student customized knowledge distillation: Bridging the gap between student and teacher". In: Proceedings of the IEEE/CVF International Conference on Computer Vision (2021), pp. 5057-5066.

Candidate profile:
The ideal candidate should have knowledge of deep learning, computer vision, and Python programming, and an interest in efficient deep learning.

Required education and skills:
Master 2 student, final-year MSc student, or engineering-school student in computer science.

Work address:
45 rue des Saints-Pères, 75006 Paris

Attached document: 202511111320_2025_Internship_KD_LVFM.pdf

Transformer-based methods for cluster detection in astronomical images

Offer linked to Action/Network: none

Laboratory/Company: LIPADE & APC
Duration: 6 months
Contact: ayoub.karine@u-paris.fr
Posting deadline: 2026-04-30

Context:

Subject:
Deep learning techniques have revolutionized artificial intelligence. Their application to astrophysics and cosmology lets us analyze the large quantity of data obtained from current surveys and expected from future surveys, with the aim of improving our understanding of the cosmological model.
The internship is in the context of the data acquired by the Vera Rubin Observatory (https://www.lsst.org/about) LSST (Legacy Survey of Space and Time), in particular within the Dark Energy (DESC) and Galaxies Rubin Science Collaborations (https://rubinobservatory.org/for-scientists/science-collaborations), and of the Euclid space mission (https://sci.esa.int/web/euclid). Galaxy clusters are powerful probes of cosmological models. LSST and Euclid will reach unprecedented depths and therefore require highly complete and pure cluster catalogs with a well-defined selection function. In this internship, we will focus on analysing astronomical images through deep learning. Our team has developed a new cluster detection algorithm, YOLO for CLuster detection (YOLO-CL), a modified version of the state-of-the-art object detection deep convolutional network You Only Look Once (YOLO) optimized for the detection of galaxy clusters [1,2]. The YOLO approach is convolution-based and primarily captures local features. In this internship, we aim to investigate transformer-based methods that model global relationships across entire astronomical images. Such models can capture spatial and contextual interactions between multiple objects, which is expected to improve detection performance over YOLO in our target application. In this context, we focus on the Detection Transformer (DETR) framework [3], an end-to-end architecture that employs a transformer encoder–decoder network.
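A toy view of the set-based matching at the heart of DETR [3]: predicted queries are assigned one-to-one to ground-truth objects by minimizing a pairwise cost. The centre-distance cost and the brute-force search below are simplifications (real DETR mixes class and box terms and uses the Hungarian algorithm), and the scene is invented:

```python
from itertools import permutations

def pair_cost(pred, gt):
    """Toy matching cost: L1 distance between predicted and ground-truth box
    centres. Real DETR combines class probability, L1 box distance, and
    generalized IoU."""
    return abs(pred[0] - gt[0]) + abs(pred[1] - gt[1])

def best_assignment(preds, gts):
    """Set-based matching at the heart of DETR: assign predictions to
    ground-truth objects one-to-one so the total cost is minimal. Brute
    force over permutations here; DETR uses the Hungarian algorithm."""
    best, best_cost = None, float("inf")
    for perm in permutations(range(len(preds)), len(gts)):
        cost = sum(pair_cost(preds[p], gts[g]) for g, p in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best, best_cost

# Invented toy scene: three query predictions, two true clusters.
preds = [(0.9, 0.9), (0.1, 0.2), (0.5, 0.5)]
gts = [(0.1, 0.1), (0.8, 0.9)]
assignment, total = best_assignment(preds, gts)  # unmatched query -> "no object"
```

This matching is what removes the need for the anchor boxes and non-maximum suppression that YOLO-style detectors rely on.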
Bibliography
[1] Grishin, Kirill, Simona Mei, and Stéphane Ilić. “YOLO–CL: Galaxy cluster detection in the SDSS with deep machine learning.” Astronomy & Astrophysics 677 (2023): A101.
[2] Grishin, Kirill, Simona Mei, Stephane Ilic, Michel Aguena, Dominique Boutigny, and Marie
Paturel. “YOLO-CL cluster detection in the Rubin/LSST DC2 simulations.” Astronomy & Astrophysics 695 (2025): A246.
[3] Carion, Nicolas, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. “End-to-end object detection with transformers.” In European conference on computer vision, pp. 213-229. Cham: Springer International Publishing, 2020.

Candidate profile:
The ideal candidate should have knowledge of deep learning, computer vision, and Python programming, and an interest in handling astronomical images. Funding for the internship has already been secured for 3-6 months.

Required education and skills:
Master 2 students, final-year MSc students, or engineering-school students in computer science.

Work address:
10 rue A. Domon et Léonie Duquet, 75205 Paris, and/or 45 rue des Saints-Pères, 75006 Paris

Attached document: 202511111316_2025_Internship_Transformer-ClusterDetection.pdf

Developing a Super-Resolution Benchmark for Remote Sensing Downstream Applications

Offer linked to Action/Network: none

Laboratory/Company: IRISA
Duration: 4-6 months
Contact: charlotte.pelletier@univ-ubs.fr
Posting deadline: 2025-01-15

Context:
The spatial resolution of freely available multispectral sensors such as Sentinel-2 (10 m at best) remains a limiting factor for many Earth observation tasks, particularly those involving fine-scale spatial structures such as the delineation of crop boundaries, the mapping of urban trees, or the identification of individual buildings. Deep learning-based super-resolution (SR) techniques have emerged as an attractive solution to synthetically enhance the spatial detail of such imagery [4]. While numerous SR methods, ranging from convolutional neural networks to transformers and generative models [3], have been proposed, their evaluation typically relies on reconstruction and perceptual metrics. These measures, though common, are tailored to SR models trained on natural images and overlook the challenges caused by cross-sensor SR from satellite images [2]. More importantly, they do not indicate whether the super-resolved data improve the performance, robustness, or interpretability of downstream models used for Earth monitoring [5].

Subject:
Objectives of this work. This internship aims to bridge this gap by developing a comprehensive benchmark of SR models for downstream learning applications in Earth observation. The goal is to quantify how the reconstruction of fine details in SR imagery impacts the performance of subsequent analysis tasks. The focus will be on Copernicus data, in particular Sentinel-2 imagery, which is freely available and provides global coverage with acquisitions every five days at the equator. The benchmark will include both standard image-based metrics and newly proposed task-aware evaluation criteria tailored to the selected applications.
Work Plan
To address the aforementioned objectives, a tentative work plan is outlined below:
• Literature review: Survey recent SR models and their evaluation in downstream applications using Sentinel-2 or similar optical data.
• Benchmark design: Identify suitable datasets combining Sentinel-2 imagery and higher-resolution references (e.g., PlanetScope, WorldView, or aerial data) for multiple domains such as agriculture, forestry, maritime [1], and urban monitoring.
• Metric development: Explore and propose new metrics that go beyond classical reconstruction or segmentation scores. The objective is to assess how SR influences application-level outcomes, e.g., boundary delineation [6], small-object detection, or vegetation index preservation.
• Experimental benchmarking: Implement and compare several SR models within a unified experimental setup, evaluating their performance using both conventional and newly defined task-aware metrics.
The expected outcomes include a benchmark framework enabling the community to evaluate SR models on a range of downstream applications, as well as a research paper submitted to a top-tier journal.
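One possible shape for such a task-aware criterion, sketched on toy binary masks (the `task_gain` function and the masks are invented for illustration; a real evaluation would use actual downstream model outputs on SR versus native imagery):

```python
def iou(a, b):
    """Intersection over union of two binary masks (nested 0/1 lists)."""
    inter = sum(x & y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    union = sum(x | y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    return inter / union if union else 1.0

def task_gain(mask_sr, mask_lr, mask_ref):
    """Hypothetical task-aware criterion: how much closer the downstream
    prediction gets to the reference when the model sees SR rather than
    native-resolution input. Positive values mean SR helped the task."""
    return iou(mask_sr, mask_ref) - iou(mask_lr, mask_ref)

# Invented 3x3 masks standing in for downstream segmentation outputs.
ref = [[1, 1, 0], [1, 1, 0], [0, 0, 0]]  # reference footprint
sr  = [[1, 1, 0], [1, 0, 0], [0, 0, 0]]  # prediction from SR imagery
lr  = [[1, 0, 0], [0, 0, 0], [0, 0, 0]]  # prediction from 10 m imagery
```

Unlike PSNR or SSIM computed on the images themselves, a metric of this kind directly measures whether super-resolution changed the application-level outcome.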
References
[1] Katerina Kikaki, Ioannis Kakogeorgiou, Paraskevi Mikeli, Dionysios E Raitsos, and Konstantinos Karantzalos. MARIDA: A benchmark for marine debris detection from Sentinel-2 remote sensing data. PloS one, 17(1):e0262247, 2022.
[2] Julien Michel, Ekaterina Kalinicheva, and Jordi Inglada. Revisiting remote sensing cross-sensor single image super-resolution: the overlooked impact of geometric and radiometric distortion. IEEE Transactions on Geoscience and Remote Sensing, 2025.
[3] Aimi Okabayashi, Nicolas Audebert, Simon Donike, and Charlotte Pelletier. Cross-sensor super-resolution of irregularly sampled sentinel-2 time series. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 502–511, 2024.
[4] Peijuan Wang, Bulent Bayram, and Elif Sertel. A comprehensive review on deep learning based remote sensing image super-resolution methods. Earth-Science Reviews, 232:104110, 2022.
[5] Piper Wolters, Favyen Bastani, and Aniruddha Kembhavi. Zooming out on zooming in: Advancing super-resolution for remote sensing. arXiv preprint arXiv:2311.18082, 2023.
[6] Quentin Yeche, Dino Ienco, and Raffaele Gaetano. Field by field: moving from area-based metrics to instance-level agricultural parcel assessment. 2025.

Candidate profile:
We are looking for a candidate:
• enrolled in a Master 2, École d’Ingénieur, or equivalent program in computer science, data science, or geoinformatics;
• with a strong background in data science, and/or computer vision;
• proficient in Python programming and familiar with at least one deep learning framework (preferably PyTorch);
• with experience in remote sensing or a strong motivation to apply AI to Earth observation;
• with excellent communication skills in French or English;
• and a keen interest in research and scientific publication.

Required education and skills:
We are looking for a candidate enrolled in a Master 2, École d'Ingénieur, or equivalent program in computer science, data science, or geoinformatics.

Work address:
Université Bretagne Sud
Campus de Tohannic
56000 Vannes

Attached document: 202511101259__2025__Master_2_SR_downstream_applications.pdf

On importance sampling for probability estimation of high-dimensional rare events with finite intrinsic dimensions

Offer linked to Action/Network: none

Laboratory/Company: ISAE-SUPAERO (Toulouse)
Duration: 3.6 years
Contact: florian.simatos@isae.fr
Posting deadline: 2026-01-05

Context:

Subject:
See the attached PDF.

Candidate profile:

Required education and skills:

Work address:
Toulouse

Attached document: 202511080545_high-dimensional-IS-Mai-Simatos.pdf

Neural-network approaches to modeling the deformations of an organ observed with dynamic MRI

Offer linked to Action/Network: none

Laboratory/Company: Laboratoire d'Informatique et Systèmes (LIS), UMR 7020
Duration: 5-6 months
Contact: marc-emmanuel.bellemare@univ-amu.fr
Posting deadline: 2026-01-31

Context:
The internship will take place in Marseille, mainly at the Laboratoire d'Informatique et Systèmes (LIS), in the Images & Modèles team on the St Jérôme campus (https://im.lis-lab.fr).
LIS (UMR 7020) brings together more than 375 members. Its research is organized into groups (computing, data science, systems analysis and control, signal and image) and centered on computer science, control, signal, and image processing. Deep learning in particular is a cross-cutting theme, and LIS has a dedicated platform, a cluster of GPU nodes, and the necessary support staff.

Sujet :
La modélisation des déformations des organes abdominaux revêt une importance cruciale pour la santé des patients et pour de nombreuses applications cliniques, telles que la planification de la radiothérapie adaptative, le suivi de la progression des maladies ou encore l’analyse biomécanique des tissus. L’imagerie par résonance ma- gnétique (IRM) peut offrir une visualisation spatiale et en coupe des déformations d’organes in vivo. Cependant, l’état de l’art actuel présente plusieurs limitations, notamment en termes de résolution et de reconstruction fidèle de l’évolution tridimensionnelle et dynamique des organes. L’objectif de ce stage est de proposer des solutions innovantes pour pallier ces limites.
Dans le cadre d’un projet de recherche mené en collaboration avec l’AP-HM, nous nous intéressons au suivi des déformations des principaux organes pelviens. L’approche actuelle [1, 4] consiste à détecter un contour sur une série d’images 2D, puis à effectuer un échantillonnage spatial de ce contour initial. Les contours suivants sont ensuite estimés de manière récurrente à l’aide d’un modèle de transport optimal, la déformation finale étant calculée à partir de la distance entre les points d’échantillonnage obtenus. Cependant, cette méthode présente plusieurs faiblesses : la construction de l’échantillonnage est souvent arbitraire, le transport optimal peut introduire des biais difficiles à maîtriser, et la définition même de la distance utilisée reste discutable. Ces éléments limitent la robustesse et la généralisabilité de l’approche, malgré son intérêt scientifique certain.
Afin de dépasser ces limitations, ce stage vise à exploiter des modèles et méthodes récents capables d’apporter plus de cohérence et de précision à la modélisation des déformations. Le premier axe d’amélioration concerne la discrétisation : celle-ci peut être évitée grâce aux représentations neuronales implicites (Implicit Neural Representations, INRs). Ces dernières reposent sur le principe d’approximation universelle des réseaux de neurones, leur permettant de représenter n’importe quelle forme continue. Ainsi, le contour précédemment échantillonné sera alors directement modélisé par un réseau neuronal.
To estimate the deformations, we propose to use Physics-Informed Neural Networks (PINNs). The idea is to incorporate constraints derived from the mechanical equations of deformation in order to estimate both the deformation field and the parameters of the tissue constitutive laws.
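To make the PINN idea concrete, here is a toy loss in NumPy combining a data term with a physics residual. The 1D equation E·u'' + f = 0, the finite-difference residual, the network, and the synthetic data are all stand-in assumptions; the actual project would use the continuum-mechanics equations and automatic differentiation.

```python
import numpy as np

def mlp(params, x):
    # Tiny tanh MLP u_theta(x): the candidate displacement field
    h = x
    for W, b in params[:-1]:
        h = np.tanh(h @ W + b)
    W, b = params[-1]
    return h @ W + b

def pinn_loss(params, E, x_data, u_data, x_coll, f=1.0, h=1e-3):
    # Data term: fit observed displacements (e.g. extracted from MRI contours)
    data_loss = np.mean((mlp(params, x_data) - u_data) ** 2)
    # Physics term: residual of a toy equation E * u'' + f = 0, with u''
    # estimated by central finite differences at collocation points
    u_pp = (mlp(params, x_coll + h) - 2 * mlp(params, x_coll)
            + mlp(params, x_coll - h)) / h**2
    phys_loss = np.mean((E * u_pp + f) ** 2)
    return data_loss + phys_loss

rng = np.random.default_rng(1)
sizes = [1, 16, 16, 1]
params = [(rng.normal(0, 0.5, (sizes[i], sizes[i + 1])), np.zeros(sizes[i + 1]))
          for i in range(len(sizes) - 1)]
x_data = rng.uniform(0, 1, (20, 1))
u_data = 0.5 * x_data * (1 - x_data)          # synthetic observations
x_coll = np.linspace(0, 1, 50).reshape(-1, 1)  # collocation points
loss = pinn_loss(params, E=1.0, x_data=x_data, u_data=u_data, x_coll=x_coll)
print(loss >= 0)  # True
```

Treating E as a trainable scalar alongside the network weights is what turns this into the inverse-problem setting: the same loss then identifies a tissue parameter and the deformation field jointly.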
In summary, the internship aims to combine implicit neural representations and physics-informed neural networks to model organ deformations aligned with MRI data, thereby providing a more continuous, accurate, and physically consistent account of the observed organ dynamics.

Work plan
The main objective of this internship is to develop and evaluate organ deformation models, building on deep learning and physical modeling approaches.
The key steps and objectives are:
— Develop an implicit neural representation (INR) of bladder deformation contours in 2D + time.
— Evaluate the performance of this representation in terms of accuracy and temporal continuity.
— Design an approach based on physics-informed neural networks to reconstruct the deformation field and estimate the mechanical parameters of the tissue behavior.
— Evaluate the performance of the PINN from two possible perspectives:
— as a hybrid model, combining experimental data with constraints derived from the physical equations, in order to steer learning toward solutions consistent with the laws of mechanics;
— or as an inverse problem, aiming to identify the physical parameters (e.g. mechanical tissue properties) and the spatial deformations from the observed data, while satisfying the equations of continuum mechanics.
— Possible extension to 3D + time.

Data
The project will build on a dataset of dynamic bladder MRI collected from 50 patients. Temporal sampling of the dynamic sagittal sequences, at one image per second, yields 12 images per patient. Bladder contours were extracted manually or semi-automatically on all images of each dynamic sequence. In total, 600 contours were thus obtained, forming the database used for training and for evaluating the model's performance.
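One possible (purely hypothetical) way to organize such a dataset is sketched below, with a patient-wise train/test split so that all frames of a given patient stay on the same side; the array layout, point count, and 40/10 split are illustrative assumptions.

```python
import numpy as np

N_PATIENTS, N_FRAMES = 50, 12  # from the dataset description
rng = np.random.default_rng(42)

# Hypothetical layout: one (N_FRAMES, n_points, 2) array of 2D contour
# points per patient
contours = {p: rng.normal(size=(N_FRAMES, 100, 2)) for p in range(N_PATIENTS)}
assert sum(c.shape[0] for c in contours.values()) == 600  # 50 x 12 contours

# Patient-wise split: all frames of a patient end up on the same side,
# avoiding leakage between train and test
patients = rng.permutation(N_PATIENTS)
train_ids, test_ids = patients[:40], patients[40:]
print(len(train_ids) * N_FRAMES, len(test_ids) * N_FRAMES)  # 480 120
```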

Candidate profile:
The candidate should be interested in a multidisciplinary field spanning image analysis, applied mathematics, and deep learning, in a medical context.
Knowledge of partial differential equations (PDEs) and of finite-element solution methods is an additional asset for this internship.

Education and required skills:
Master's-level (Bac+5) training in a program involving image processing. Programming experience in the Python environment is a prerequisite; familiarity with the JAX library would be a plus.
The internship will last 4 to 6 months, with the standard gratuity (around €600 per month).

Workplace address:
Laboratoire d’Informatique et Systèmes – LIS – UMR CNRS 7020 – Aix-Marseille Université
Campus scientifique de St Jérôme – Av. Escadrille Normandie Niemen -13397 Marseille Cedex 20
www.lis-lab.fr

Attached document: 202511071339_M2_stage_LIS_PINN.pdf

Multiplanar MRI segmentation using deep neural networks


Laboratory/Company: Laboratoire d'Informatique et Systèmes LIS – UMR
Duration: 5 to 6 months
Contact: marc-emmanuel.bellemare@lis-lab.fr
Publication deadline: 2026-01-31

Context:
The internship will take place in Marseille, mainly at the Laboratoire d'Informatique et des Systèmes (LIS), in the Images & Modèles team on the St Jérôme campus (https://im.lis-lab.fr).
LIS UMR 7020 brings together more than 375 members. Its research is organized into divisions (computing, data science, systems analysis and control, signal and image) and focuses on computer science, control, signal, and image processing. Deep learning in particular is a cross-cutting theme there, and LIS has a dedicated platform, a cluster of GPU nodes, and the necessary support staff.

Subject:
The intern will work on segmenting images acquired during dynamic MRI observation of pelvic organ deformations, in order to produce 3D reconstructions of the moving surfaces.
Pelvic floor disorders are a public health issue. They encompass a set of pathologies combining a loss of the normal anatomical relationships of the pelvic organs with a dramatic deterioration of patients' quality of life. These pathologies are disabling to varying degrees, but their pathophysiology remains poorly understood, which complicates their management. Within a collaboration with the digestive surgery department of the AP-HM, new MRI acquisitions, combined with a suitable reconstruction, have enabled 3D visualization of organs in motion. Convincing results were recently obtained and published for the observation of the bladder (Figure), and the aim is now to address the other pelvic organs. Multiplanar acquisitions were performed in non-standard planes, which complicates organ recognition. Segmenting the main organs involved is therefore an essential but difficult step. The clinical partners have produced manual segmentations of the organs on these planes, providing a ground truth. We plan to propose a new network model adapted to the configuration of the acquisition planes.
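With manual segmentations available as ground truth, a standard way to score a predicted mask is the Dice similarity coefficient, 2|A∩B| / (|A| + |B|). The sketch below is a generic illustration of that metric, not the project's actual evaluation protocol.

```python
import numpy as np

def dice(pred, truth, eps=1e-7):
    # Dice similarity coefficient between two binary masks:
    # 2 |A ∩ B| / (|A| + |B|), the usual overlap metric for segmentation
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

# Toy masks: perfect overlap gives 1.0, disjoint masks give ~0.0
a = np.zeros((8, 8), dtype=bool); a[2:5, 2:5] = True
print(round(dice(a, a), 3))  # 1.0
b = np.zeros((8, 8), dtype=bool); b[5:8, 5:8] = True
print(round(dice(a, b), 3))  # 0.0
```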
The registration, segmentation, and 3D modeling problems at the heart of the project will be addressed according to the intern's skills and preferences.

Candidate profile:
The candidate, at Master's level (Bac+5) in a program covering image processing, should be interested in a multidisciplinary project and in medical imaging. The areas involved include deep neural networks, MRI segmentation, and 3D reconstruction.
The internship will last 4 to 6 months, with the standard gratuity (around €600).

Education and required skills:
Python programming skills are a prerequisite.
Skills in applied mathematics will be particularly appreciated. Experience with the PyTorch environment would be a plus.

Workplace address:
Laboratoire d’Informatique et Systèmes – LIS – UMR CNRS 7020 – Aix-Marseille Université
Campus scientifique de St Jérôme – Av. Escadrille Normandie Niemen -13397 Marseille Cedex 20
www.lis-lab.fr

Attached document: 202511071329_Sujet_Master2_DL&SegmentationMultiPlan.pdf