Perception of multi-scale synchronization during movement and music

When:
01/10/2021 – 02/10/2021 all-day


Laboratory/Company: Euromov D.H.M
Duration: 3 years
Contact: patrice.guyot@mines-ales.fr
Posting deadline: 2021-10-01

Context:
A 3-year fully funded PhD scholarship is offered by the doctoral school (ED I2S) in Alès / Montpellier under the supervision of Patrice Guyot (PhD, sound analysis), Pierre Slangen (Pr, motion capture), and Benoît Bardy (Pr, embodied cognition).
The successful applicant will join a dynamic research environment within the newly created multidisciplinary joint research center EUROMOV Digital Health in Motion (landing theme: Perception in Action & Synchronization – PIAS).

See this offer on the Euromov D.H.M website:
https://dhm.euromov.eu/wp-content/uploads/2021/06/Ph.D_MovementMusicSync.pdf

As a PhD student, you will be responsible for:
– Independently carrying out research and completing a PhD dissertation within three years,
– Recruiting participants and organizing experiments in our labs,
– Collecting and synthesizing motion data and music,
– Developing algorithms and methods to analyze motion and music data,
– Reporting the results in international peer-reviewed scientific journals and conferences.

Start date: October 1st, 2021 (to September 2024).
Net remuneration of around €1,400 per month (including social security and health benefits).

Subject:
Synchronized group activities, such as dancing, singing, or certain sports, strengthen human attachment and improve individual well-being [Lau16]. In a physical activity such as tai chi chuan, synchronization within the group is based on the a priori knowledge of individuals and their perception of the movements of the other participants. In the context of a fitness or capoeira class, synchronization is also based on the common perception of the rhythms of the music.

In general, synchronies between individuals are based on predictive abilities that are fed by visual and auditory perception [Bar20, Tra18]. However, the way in which sound and visual information interact in the perception and production of synchronization, whether intentional or spontaneous, is still poorly understood [Ips17].

The ability to synchronize movements and music is primarily analyzed through very simple tasks such as finger tapping. In more complex situations, such as walking, synchronization can be analyzed through the impact of the steps and their correspondence with the strong beats of the piece [Dec18]. However, human movements, like music, are composed of cycles at multiple levels, presenting complex rhythmic relationships (e.g., between exhalation and foot impact in running) or hierarchical structures (binary or ternary alternation of strong and weak beats in music).
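The step-to-beat correspondence described above can be quantified with a simple circular statistic. The sketch below is a minimal illustration only, not part of the project's actual toolchain: given lists of foot-impact times and beat times, it computes a phase-locking value, where 1 means a perfectly consistent phase relationship and values near 0 mean no consistent relationship.

```python
import numpy as np

def phase_locking(step_times, beat_times):
    """Phase-locking value between discrete events (e.g., foot impacts)
    and a beat grid: 1.0 = perfectly consistent phase, ~0 = unrelated."""
    beat_times = np.asarray(beat_times)
    phases = []
    for t in step_times:
        i = np.searchsorted(beat_times, t) - 1   # enclosing beat interval
        if 0 <= i < len(beat_times) - 1:
            period = beat_times[i + 1] - beat_times[i]
            phases.append(2 * np.pi * (t - beat_times[i]) / period)
    # Length of the mean resultant vector of the phase angles
    return np.abs(np.mean(np.exp(1j * np.array(phases))))

beats = np.arange(0.0, 10.0, 0.5)   # a 120 BPM metronome
steps = beats[1:-1]                 # steps landing exactly on the beats
print(round(phase_locking(steps, beats), 3))  # → 1.0
```

Note that a constant offset between steps and beats still yields a value of 1, since the statistic measures phase consistency rather than on-beat accuracy; the multi-scale analyses envisioned in the project would of course go well beyond such a single-level measure.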

Beyond classical approaches to detect simple cycles such as the downbeats of pieces (e.g., with neural networks [Jia19]), recent work on automatic analysis converges toward multi-scale modeling of musical content. In this context, conditional random fields have been proposed for leveraging multi-scale information for computational rhythm analysis [Fue19].

In the context of sports practice, coaches need to perceive these complex multi-scale synchronization patterns. Research has shown that humans synchronize better with auditory or multimodal stimuli than with visual-only stimuli [El10]. These results can be exploited within the framework of synchronization perception to produce visualization and sonification tools. Such tools could facilitate the coach's task during online sports sessions, for instance by enhancing group synchronization and by allowing rapid identification of people in difficulty in order to offer individualized coaching. Applied to motion capture data, they could also be used in a medical setting to illustrate loss of stability in movement polyrhythms.

In this thesis, we propose to analyze multi-scale synchronization patterns between individual movements, group movements, and musical rhythms. We will produce data from motion capture of individuals and groups, in the laboratory as well as in more natural settings, and from sound synthesis of multi-scale rhythmic content. These data will be analyzed using different approaches from artificial intelligence, including neural networks and probabilistic graphical models. Experiments will also be carried out on the perception of synchronization via the visualization and/or sonification of the results of these analyses, with the aim of developing software building blocks that facilitate human evaluation of synchronization.

A better understanding of synchronization mechanisms, and their integration into software tools, may improve collaborative virtual environments as well as the rehabilitation of patients with social disorders [Slo17] or Parkinson’s disease [Dec18].

References

– [Lau16] Launay, Jacques, Bronwyn Tarr, and Robin IM Dunbar. “Synchrony as an adaptive mechanism for large‐scale human social bonding.” Ethology 122.10 (2016): 779-789.
– [Bar20] Bardy, Benoît G., et al. “Moving in unison after perceptual interruption.” Scientific reports 10.1 (2020): 1-13.
– [Tra18] Tranchant, Pauline. “Synchronisation rythmique déficiente chez l’humain: bases comportementales.” Diss. Université de Montréal (2018).
– [Ips17] Ipser, Alberta, et al. “Sight and sound persistently out of synch: stable individual differences in audiovisual synchronisation revealed by implicit measures of lip-voice integration.” Scientific Reports 7.1 (2017): 1-12.
– [Dec18] De Cock, V. Cochen, et al. “Rhythmic abilities and musical training in Parkinson’s disease: do they help?” NPJ Parkinson’s disease 4.1 (2018): 1-8.
– [Jia19] Jia, Bijue, Jiancheng Lv, and Dayiheng Liu. “Deep learning-based automatic downbeat tracking: a brief review.” Multimedia Systems 25.6 (2019): 617-638.
– [Fue19] Fuentes, Magdalena. “Multi-scale computational rhythm analysis: a framework for sections, downbeats, beats, and microtiming”. Diss. Université Paris-Saclay, 2019.
– [Chu16] Chung, Junyoung, Sungjin Ahn, and Yoshua Bengio. “Hierarchical multiscale recurrent neural networks.” arXiv preprint arXiv:1609.01704 (2016).
– [Tav19] Tavanaei, Amirhossein, et al. “Deep learning in spiking neural networks.” Neural Networks 111 (2019): 47-63.
– [El10] Elliott, Mark T., Alan M. Wing, and Andrew E. Welchman. “Multisensory cues improve sensorimotor synchronisation.” European Journal of Neuroscience 31.10 (2010): 1828-1835.
– [Slo17] Słowiński, Piotr, et al. “Unravelling socio-motor biomarkers in schizophrenia.” npj Schizophrenia 3.1 (2017): 1-10.

Candidate profile:
Applicants should have (or anticipate having) an MSc and a research background related to computer science, audio/signal processing, or computational movement science. Knowledge of music (theoretical and practical) will be valued. French is not mandatory, but the candidate must be willing to learn French during their PhD and must be able to communicate in English.

Applications should include a cover letter discussing your interest in the position, a detailed CV, academic results (grades, averages, and ranking during the initial degree and MSc), and two reference letters. The deadline is July 5, 2021. Interviews will be conducted via Zoom on Tuesday, July 13 and Thursday, July 15.

Education and required skills:
Applicants should have (or anticipate having) an MSc and a research background related to computer science, audio/signal processing, or computational movement science.

Job location:
Euromov D.H.M
IMT Mines Alès / University of Montpellier

Attached document: 202106221434_Ph.D_MovementMusicSync.pdf