DAMoS: Deep Analysis of Motor Symptoms for Dementia with Lewy Bodies

When:
27/05/2021 – 28/05/2021 all-day

Offer related to the Action/Network: DOING

Laboratory/Company: ICube
Duration: 3 years
Contact: seo@unistra.fr
Publication deadline: 2021-05-27

Context:
The diagnosis of dementia with Lewy bodies (DLB), a disease associated with abnormal deposits of synuclein (a specific protein) in the brain, can be challenging. Its early symptoms are often confused with similar symptoms found in other brain diseases, such as Alzheimer’s disease, or in psychiatric disorders, such as schizophrenia. The most common DLB signs and symptoms are changes in cognition, movement, and behavior. In this thesis, we will develop a learning-based approach to model face and eye movements from video input, with a specific focus on modeling the motor symptoms of DLB. Our primary objective will be to model and detect facial motor symptoms, such as reduced facial expression, as well as facial expressions indicative of behavioral symptoms (depression, apathy, or agitation). Other motor symptoms related to cognitive symptoms will also be considered, which requires analyzing eye movements: disturbances or unpredictable changes in visual attention, executive, visual, and spatial abilities (e.g., judging distance and depth, or misidentifying objects), and movements revealing cognitive fluctuations or visual hallucinations.
Robust three-dimensional reconstruction, analysis, and characterization of the shape and motion of individuals or groups of people from one or more video images have been open problems for decades, with many exciting application areas such as early abnormality detection in predictive clinical analysis. A common way to acquire the necessary 3D data and models is to use calibrated multi-view passive cameras and merge a sparse or dense set of reconstructed depth images into a single mesh, but the size and cost of such multi-view systems prevent their use in consumer applications. In more unconstrained and ambiguous settings, such as monocular images or video, priors in the form of a template or a parametric model derived from a large dataset are often used, which constrain the problem significantly. While generative methods reconstruct the moving geometry by optimizing the alignment between the projected model and the image data, regressive methods train deep neural networks to infer the shape parameters of a parametric body model from a single image. Despite remarkable progress, the reconstruction and analysis of facial models from video has not been fully addressed yet, with most existing algorithms operating on ‘normal’ faces and in a frame-by-frame manner. In this study, we will (1) address a problem and data regime that have received little attention, namely abnormal face and eye movements, and (2) incorporate the temporal aspect of facial movement into a learned model, which has not been done before.
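To make the generative (optimization-based) approach mentioned above concrete, here is a minimal sketch, assuming PyTorch, of fitting a linear parametric face model to detected 2D landmarks by minimizing reprojection error under a weak-perspective camera. The inputs (landmarks_2d, mean_shape, shape_basis) and the function itself are hypothetical placeholders, not part of any specific library or of the project's actual pipeline.

import torch

def fit_parametric_face(landmarks_2d, mean_shape, shape_basis, n_iters=200, lr=1e-2):
    """
    landmarks_2d : (K, 2) detected 2D facial landmarks for one frame
    mean_shape   : (K, 3) mean 3D positions of the corresponding model vertices
    shape_basis  : (B, K, 3) linear shape/expression basis (e.g., 3DMM-style)
    Returns optimized shape coefficients and a weak-perspective camera (R, t, scale).
    """
    coeffs = torch.zeros(shape_basis.shape[0], requires_grad=True)   # shape parameters
    rot = torch.zeros(3, requires_grad=True)                         # axis-angle head pose
    trans = torch.zeros(2, requires_grad=True)                       # 2D translation
    log_scale = torch.zeros(1, requires_grad=True)                   # log scale, for stability
    opt = torch.optim.Adam([coeffs, rot, trans, log_scale], lr=lr)

    for _ in range(n_iters):
        opt.zero_grad()
        # Reconstruct 3D landmarks from the linear model
        shape_3d = mean_shape + torch.einsum('b,bkc->kc', coeffs, shape_basis)
        # Rodrigues' formula for the rotation matrix (kept differentiable)
        theta = torch.sqrt((rot ** 2).sum() + 1e-8)
        k = rot / theta
        zero = torch.zeros(())
        K = torch.stack([torch.stack([zero, -k[2], k[1]]),
                         torch.stack([k[2], zero, -k[0]]),
                         torch.stack([-k[1], k[0], zero])])
        R = torch.eye(3) + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)
        # Weak-perspective projection: rotate, drop depth, scale, translate
        proj = log_scale.exp() * (shape_3d @ R.T)[:, :2] + trans
        # Alignment term plus a small prior keeping the coefficients near zero
        loss = ((proj - landmarks_2d) ** 2).mean() + 1e-3 * (coeffs ** 2).mean()
        loss.backward()
        opt.step()
    return coeffs.detach(), R.detach(), trans.detach(), log_scale.exp().detach()

The regression-based alternative described in the same paragraph would instead train a network to predict these coefficients directly from the image, trading per-frame optimization for a single forward pass.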

Subject:
We will deploy recent deep learning techniques to address the challenging problem of detecting and analyzing facial motor symptoms of DLB from video. Dedicated learning-based models will be developed for face and eye movements, which will then be integrated into a DLB diagnoser. In both cases, we aim to build our models in 3D, meaning that (1) a 2D-to-3D reconstruction step will precede the facial modeler, and (2) the 2D visual stimuli for the eye-tracking tests will be generated from 3D models, with the resulting 2D fixation maps back-projected to 3D. The work is articulated in three parts:
1. Face movement modeler. A model-based DNN (deep neural network) will be developed to jointly regress the 3D facial shape and movement (head pose and pose-dependent shape changes) from monocular video input. Following our recent success with DNN-based facial animation modeling, a recurrent neural network will be adopted, as it has been shown to achieve promising results in modeling sequential, time-series data (a sketch of such a temporal regressor follows this list).
2. Eye movement modeler. A model will be trained to analyze sequences of saccades and fixations on observed visual stimuli, which we will acquire using an eye tracker (a sketch of such a sequence model also follows this list).
3. DLB diagnoser. Both of the aforementioned modelers will be integrated into a DLB diagnoser capable of detecting some of the known motor, cognitive, and behavioral symptoms.
Observation data from patients and from normal aged populations will be collected in collaboration with the university hospital and complemented by other publicly available resources.
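As a rough illustration of parts 1 and 3, the following is a minimal sketch, assuming PyTorch and torchvision, of a temporal regressor: a per-frame CNN encoder (here a ResNet-18 backbone) feeds a recurrent network that regresses per-frame face parameters (shape plus head pose), with a sequence-level head that could serve as a starting point for symptom detection. All dimensions, heads, and class counts are illustrative assumptions, not the project's actual design.

import torch
import torch.nn as nn
from torchvision.models import resnet18

class TemporalFaceRegressor(nn.Module):
    def __init__(self, n_shape=50, n_pose=6, hidden=256, n_symptom_classes=2):
        super().__init__()
        backbone = resnet18(weights=None)          # per-frame image encoder (random init here)
        backbone.fc = nn.Identity()                # keep the 512-d feature vector
        self.encoder = backbone
        self.gru = nn.GRU(input_size=512, hidden_size=hidden,
                          num_layers=1, batch_first=True)
        self.param_head = nn.Linear(hidden, n_shape + n_pose)     # per-frame 3D face parameters
        self.symptom_head = nn.Linear(hidden, n_symptom_classes)  # sequence-level label

    def forward(self, frames):
        # frames: (batch, time, 3, H, W) monocular video clip
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1))   # (b*t, 512)
        feats = feats.view(b, t, -1)
        hidden_seq, last_hidden = self.gru(feats)    # temporal modeling across frames
        params = self.param_head(hidden_seq)         # (b, t, n_shape + n_pose)
        logits = self.symptom_head(last_hidden[-1])  # (b, n_symptom_classes)
        return params, logits

# Example usage on a random clip of 16 frames
model = TemporalFaceRegressor()
clip = torch.randn(2, 16, 3, 224, 224)
params, logits = model(clip)
print(params.shape, logits.shape)   # torch.Size([2, 16, 56]) torch.Size([2, 2])

One design choice worth noting: the recurrent layer operates on per-frame features rather than raw frames, which keeps memory usage manageable for clips of a few seconds.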
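In the same spirit, part 2 could start from a sequence model over eye-tracking events. In the sketch below, each time step is a hypothetical feature vector (e.g., gaze x/y, fixation duration, saccade amplitude); the actual feature set and labels would be defined by the project's eye-tracking protocol.

import torch
import torch.nn as nn

class EyeMovementModel(nn.Module):
    def __init__(self, n_features=4, hidden=64, n_classes=2):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, events):
        # events: (batch, time, n_features) sequence of fixation/saccade descriptors
        _, last_hidden = self.gru(events)
        return self.head(last_hidden[-1])   # (batch, n_classes)

# Example: classify a batch of 30-event scanpaths
model = EyeMovementModel()
scanpaths = torch.randn(8, 30, 4)
print(model(scanpaths).shape)   # torch.Size([8, 2])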

Candidate profile:
— Master’s degree in Computer Science, Electrical Engineering or Applied Mathematics

Required education and skills:
— Solid programming skills in Python/MATLAB
— Solid knowledge of deep learning, with programming experience in TensorFlow or PyTorch
— Working knowledge of geometry modeling and statistics
— Good communication skills

Employment address:
MLMS team, ICube laboratory
Bâtiment Clovis Vincent
5 rue Kirschleger,
67085 Strasbourg Cedex FRANCE

Attached document: 202101221313_SEO-BLANC_Sujet de these IA.pdf