Organ detection in multi-modality medical images via deep domain adaptation

Announcement linked to an Action/Network: none

Laboratory/Company: Creatis – INSA-Lyon
Duration: 6 months
Contact: razmig.kechichian@creatis.insa-lyon.fr
Publication deadline: 2018-12-31

Context:
Organ detection and localization in medical images are important tasks, both in clinical procedures and as an intermediate step in image analysis algorithms such as image segmentation. Multi-modality methods are of particular interest for robust organ detection in the heterogeneous datasets stored in the PACS systems of healthcare and medical research centers. Such datasets are often large and diverse in content, which makes efficient organ detection challenging.

Subject:
We seek a fast multi-modality object detection method capable of localizing up to two dozen thoracic and abdominal organs in 3D radiological images (CT and MRI). Recent deep learning-based object detection methods [2-4] have proven very effective in the supervised setting, where hundreds of annotated training examples are available for each object class. In medical imaging, such large annotated datasets are rare and annotations are expensive; supervised deep learning methods, which must estimate millions of network parameters, are therefore prone to failure.
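For concreteness, the sketch below illustrates the kind of fully supervised training loop such detectors rely on, which is where the scarce annotations enter; the model and the (volumes, targets) dataset interface are hypothetical placeholders, not the methods of [2-4].

    # Minimal sketch of a supervised detector training loop (PyTorch).
    # `model` and `dataset` are hypothetical: any 3D detector returning a
    # dict of loss terms from (volumes, targets) pairs would fit here.
    import torch
    from torch.utils.data import DataLoader

    def train_supervised(model, dataset, epochs=10, lr=1e-4, device="cuda"):
        model = model.to(device).train()
        loader = DataLoader(dataset, batch_size=2, shuffle=True)
        optim = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            for volumes, targets in loader:  # targets: organ boxes + labels
                losses = model(volumes.to(device), targets)
                loss = sum(losses.values())  # detection + classification terms
                optim.zero_grad()
                loss.backward()
                optim.step()
        return model

Every iteration consumes annotated pairs, which is exactly what large PACS datasets lack.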

Data augmentation techniques, both transformation-based [8,12] and, more recently, based on generative adversarial networks (GANs) [9-11], can help alleviate the lack of annotated data by generating additional examples similar to those in the available training sets.

On the other hand, annotations are often available, and more abundant, for certain image modalities such as contrast-enhanced CT. Organ detectors learned on these source images could be transferred or adapted, via domain adaptation methods [1], to target images comprising similar anatomies, such as MRI. Existing domain-adaptive object detection methods, such as [5], often adapt a learned classification and detection model by fine-tuning deep network parameters. Recent adversarial approaches offer particularly interesting alternatives. In [7], for example, a convolutional neural network (CNN) detector learned on a source domain is adapted to the target domain through GAN-generated examples that resemble the target domain while carrying source labels, complemented by pseudo-labels in the target domain. In [6], the supervised CNN detector is extended with two adversarial pathways to tackle image-level and instance-level shift in the target domain.
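As a concrete illustration of the transformation-based augmentation mentioned above, a minimal sketch for 3D volumes and their organ bounding boxes follows; the (D, H, W) axis order and the (z1, y1, x1, z2, y2, x2) box layout are illustrative assumptions, not conventions taken from [8,12].

    # Sketch: transformation-based augmentation of a (D, H, W) volume with
    # organ boxes in (z1, y1, x1, z2, y2, x2) layout (assumed convention).
    import numpy as np

    def flip_volume(volume, boxes, axis=2):
        # Flip the volume along one axis and mirror the box coordinates.
        flipped = np.flip(volume, axis=axis).copy()
        size = volume.shape[axis]
        boxes = boxes.copy()
        lo, hi = axis, axis + 3  # paired min/max coordinates for this axis
        boxes[:, [lo, hi]] = size - boxes[:, [hi, lo]]
        return flipped, boxes

    def jitter_intensity(volume, shift=0.1, scale=0.1, rng=None):
        # Random linear intensity perturbation, e.g. to mimic contrast
        # variability across scanners; leaves box annotations unchanged.
        rng = rng or np.random.default_rng()
        a = 1.0 + rng.uniform(-scale, scale)
        b = rng.uniform(-shift, shift) * volume.std()
        return a * volume + b

Applied on the fly during training, such transformations multiply the effective number of annotated examples without any additional manual work.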

The aim of this project is therefore to study and propose an efficient cross-modality organ detection method for medical images, capable of adapting supervised detectors learned on a source modality to a target modality, possibly relying on data augmentation to counter the lack of annotated data, and possibly proceeding in an adversarial manner.
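To make the adversarial option concrete, below is a minimal sketch of image-level feature alignment through a gradient reversal layer, in the spirit of the image-level pathway of [6]; the shared backbone, module names and feature shapes are illustrative assumptions, not the architecture of the cited papers.

    # Sketch: adversarial domain alignment via gradient reversal (PyTorch).
    # A discriminator learns to tell source from target features while the
    # reversed gradient pushes the backbone to make them indistinguishable.
    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x, lamb):
            ctx.lamb = lamb
            return x.view_as(x)  # identity in the forward pass

        @staticmethod
        def backward(ctx, grad):
            return -ctx.lamb * grad, None  # flip the gradient sign

    class DomainDiscriminator(nn.Module):
        # Predicts source (0) vs. target (1) from pooled backbone features.
        def __init__(self, feat_dim):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                     nn.Linear(256, 1))

        def forward(self, feats, lamb=1.0):
            return self.net(GradReverse.apply(feats, lamb))

    # Per training step, with a hypothetical `backbone` shared by the detector:
    #   src = backbone(src_vol).mean(dim=(2, 3, 4))  # globally pooled features
    #   tgt = backbone(tgt_vol).mean(dim=(2, 3, 4))
    #   logits = disc(torch.cat([src, tgt]))
    #   labels = torch.cat([torch.zeros(len(src), 1), torch.ones(len(tgt), 1)])
    #   loss = detection_loss_on_source + bce(logits, labels)

Only the source branch needs organ annotations; the target branch contributes through the domain loss alone.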

Candidate profile:
We are looking for a motivated collaborator capable of critical thinking, able to work autonomously as well as in a team, with an interest in medical imaging and a good sense of responsibility (and humor ;). The candidate should be studying towards a master's degree in computer science or a related engineering field, and should have a solid background in applied mathematics, image processing and computer science, in addition to good programming skills, preferably in Python. A working knowledge of deep learning methods is necessary.

Education and required skills:
See above.

Employment address:
CREATIS – INSA-Lyon, B. Pascal building
7 avenue Jean Capelle, 69100 Villeurbanne

Attached document: sujet-stage-kechichian.pdf