Balancing Privacy and Biodiversity Acoustic Monitoring in Urban Environments

Offre en lien avec l’Action/le Réseau : – — –/– — –

Laboratoire/Entreprise : CEA Grenoble
Durée : 3 ans
Contact : marielle.malfante@gmail.com
Date limite de publication : 2024-06-30

Contexte :
Biodiversity is increasingly taken into account in urban planning and development projects, and is becoming a standard feature of urban planning. Indeed, monitoring biodiversity in urban environments is a crucial issue for species conservation, sustainable urban management and even the well-being of residents.
Passive acoustics is a promising solution for biodiversity monitoring. Analysis of the soundscape can yield valuable insights regarding the urban environment, including information about the wildlife present, along with their distribution and phenological patterns. This data can also help assess both the positive and negative effects of human activity on local biodiversity (Sordello et al., 2020; Darras et al., 2019). Moreover, examination of overall soundscape characteristics can provide important indications related to public health concerns and contribute to evaluating the comfort level experienced by city dwellers (Thompson, 2022).
In addition, acoustic recording and analysis technologies are increasingly powerful and affordable, making passive acoustics accessible to a wide range of users. For example, artificial intelligence (AI) is increasingly used to analyze acoustic recordings, thanks to its ability to handle large datasets, extract complex patterns and transient signals, and make accurate inferences. It is already used to automatically detect, classify and quantify animal calls (Chalmers et al., 2021).
However, the deployment of passive listening systems in urban environments raises ethical and legal issues relating to the privacy of citizens. It is therefore essential to develop embedded systems capable of removing human voices from recordings while preserving sounds relevant to biodiversity detection.
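As a first, purely illustrative idea of the kind of processing involved (not the embedded system to be developed in the thesis), the sketch below flags short-time frames whose energy is concentrated in the typical human-voice band (roughly 300-3400 Hz) and attenuates that band in those frames, while leaving the rest of the recording untouched. The thresholds, band limits and the suppress_speech_band helper are assumptions made for the example.

```python
# Minimal, illustrative sketch (not the project's actual method): attenuate STFT
# frames whose energy is dominated by the human-voice band, keep the rest unchanged.
import numpy as np
from scipy.signal import stft, istft

def suppress_speech_band(audio, sr, band=(300.0, 3400.0), ratio_threshold=0.6, attenuation=0.05):
    """Attenuate the speech band in frames whose energy is concentrated in `band`."""
    f, t, Z = stft(audio, fs=sr, nperseg=1024)
    power = np.abs(Z) ** 2
    in_band = (f >= band[0]) & (f <= band[1])
    # Fraction of each frame's energy that falls inside the speech band.
    band_ratio = power[in_band].sum(axis=0) / (power.sum(axis=0) + 1e-12)
    speech_frames = band_ratio > ratio_threshold
    # Attenuate only the speech band of the flagged frames.
    Z[np.ix_(in_band, speech_frames)] *= attenuation
    _, cleaned = istft(Z, fs=sr, nperseg=1024)
    return cleaned[: len(audio)], speech_frames

# Example (synthetic): 5 s of noise at 22.05 kHz standing in for a field recording.
if __name__ == "__main__":
    sr = 22050
    recording = np.random.randn(5 * sr).astype(np.float32)
    cleaned, flags = suppress_speech_band(recording, sr)
    print(f"{flags.mean():.0%} of frames flagged as speech-like")
```

In the actual system, a learned speech detector or source-separation model would replace this band-energy heuristic.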

Sujet :
Objectives
The aim of this thesis project is to develop and evaluate an on-board passive listening system for monitoring biodiversity in urban environments. The system will have to meet the following requirements:
– Be capable of recording and analyzing ambient sounds.
– Remove or blur human voices from recordings, while preserving sounds relevant to biodiversity detection.
– Respect the privacy of citizens by guaranteeing the confidentiality of the data collected.
This project is part of a rapidly expanding scientific and technological context. It will enable the development of methodological and technological innovations for monitoring biodiversity in urban environments, and contribute to the protection of biodiversity and the sustainable management of cities.
Expected tasks
– Bibliographic synthesis covering computational bioacoustics, privacy issues in audio recordings containing speech, and embedded audio systems
– Adaptation of existing models to meet the requirements of this thesis project
– Implementation of real-time processing on portable edge devices
– Field work and associated analyses to validate the methods
– Scientific valorization of the research: patents, research papers, participation in scientific conferences, and writing and defense of the PhD manuscript

Profil du candidat :
We are looking for a candidate with the following profile:
– Machine Learning, Signal Processing, Speech processing
– Embedded Systems, edge computing, hardware integration
– Experience with the Python language will be a plus
– An interest in bioacoustics and ecology
– An interest in field work.

If in doubt regarding your profile, please contact us for further discussion. The proposed PhD project is multidisciplinary but does not necessarily require experience in all the branches.

Formation et compétences requises :

Adresse d’emploi :
This PhD project is a CIFRE project between BioPhonia & CEA Grenoble.
Time will be shared between CEA Grenoble and Biophonia in Lyon/Saint-Etienne. Home office is partially allowed.
Field work in different geographic areas is also planned during the PhD to validate the different developed methods.

Document attaché : 202404091450_Thèse.pdf

Deep Graph Representation Learning on non-uniform 3D objects

Offre en lien avec l’Action/le Réseau : – — –/Doctorants

Laboratoire/Entreprise : CRIL
Durée : 3
Contact : wissem.inoubli@univ-artois.fr
Date limite de publication : 2024-04-25

Contexte :
Machine learning involves leveraging data to extract mathematical models capable of generalizing or describing this data according to predefined objectives. This data comes in various forms, ranging from well-defined structures like images and matrices to semi-structured formats such as text and graphs. However, dealing with entirely unstructured data, such as non-uniform 3D objects, poses a challenge for traditional methods that primarily focus on geometric analysis.

The development of artificial intelligence, particularly deep learning, has greatly improved performance compared to conventional learning methods, especially for textual data, images, graphs, sequences, and more. However, learning on non-uniform 3D objects remains a significant challenge. This field is attracting increasing interest in various applications, such as predicting molecular properties from their 3D structures rather than from textual features [2]. In the field of bioinformatics, the annotation of proteins based on their 3D interactions is one example [1], as is the use of 3D structures in physics to simulate objects [3] or body parts in order to analyze their behavior. These applications demonstrate the utility of analyzing and learning on non-uniform 3D objects, sparking considerable interest within the scientific community.

Sujet :
This thesis focuses on deep learning, with an emphasis on learning graph representations. Graphs are widely used in many applications and provide a versatile representation for non-regular objects, including 3D meshes, as an alternative to traditional methods such as CNNs or image segmentation models like U-Net. This thesis explores graph neural networks (GNNs) for modeling non-regular 3D objects such as 3D meshes. Unlike CNNs, GNNs are designed to handle graph-structured data, making them more suitable for representing 3D meshes. They have demonstrated superior performance in modeling such data, offering a promising alternative to existing methods. However, despite their effectiveness, GNNs face scalability challenges, especially with complex meshes. This thesis proposes solutions to overcome these challenges by exploring mesh-specific pooling methods and other strategies to simplify learning. It also considers approaches for constructing graphs from 3D meshes to enhance learning efficiency. In addition to the static aspect of the data, this thesis addresses the application of GNNs to data with temporal patterns or features. It explores their use in domains such as fluid simulation, weather modeling, and 3D medical imaging, as well as in the physical simulation of 3D meshes, highlighting the evolution of meshes in both space and time.
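To make the mesh-as-graph view concrete, here is a minimal sketch in plain PyTorch (an illustration under assumptions, not a proposed contribution of the thesis): mesh vertices become nodes, graph edges are derived from the triangular faces, and a single mean-aggregation message-passing layer updates each vertex feature from its neighbours. The toy tetrahedron, feature dimensions and layer design are placeholders.

```python
# Illustrative sketch: a 3D mesh as a graph and one mean-aggregation GNN layer.
import torch
import torch.nn as nn

def mesh_to_edges(faces):
    """Turn triangular faces (F, 3) into an undirected edge list of shape (2, E)."""
    e = torch.cat([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]], dim=0)
    e = torch.cat([e, e.flip(1)], dim=0)           # both directions
    return torch.unique(e, dim=0).t()               # duplicates removed

class MeanConv(nn.Module):
    """x_i' = relu(W1 x_i + W2 * mean_{j in N(i)} x_j): a basic message-passing layer."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.self_lin = nn.Linear(in_dim, out_dim)
        self.neigh_lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, edge_index):
        src, dst = edge_index                        # messages flow src -> dst
        agg = torch.zeros_like(x)
        agg.index_add_(0, dst, x[src])               # sum of neighbour features
        deg = torch.zeros(x.size(0), 1).index_add_(0, dst, torch.ones(dst.size(0), 1))
        agg = agg / deg.clamp(min=1)                 # mean aggregation
        return torch.relu(self.self_lin(x) + self.neigh_lin(agg))

# Toy example: a tetrahedron with vertex coordinates as input features.
verts = torch.tensor([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
faces = torch.tensor([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
edge_index = mesh_to_edges(faces)
layer = MeanConv(in_dim=3, out_dim=8)
print(layer(verts, edge_index).shape)                # torch.Size([4, 8])
```

Mesh-specific pooling methods, as studied in the thesis, would then coarsen this graph between such layers.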

References
[1] Laveglia, V., Giachetti, A., Sala, D., Andreini, C., & Rosato, A. (2022). Learning to Identify Physiological and Adventitious Metal-Binding Sites in the Three-Dimensional Structures of Proteins by Following the Hints of a Deep Neural Network. Journal of Chemical Information and Modeling, 62(12), 2951-2960.
[2] Yang, Y., Yao, K., Repasky, M. P., Leswing, K., Abel, R., Shoichet, B. K., & Jerome, S. V. (2021). Efficient exploration of chemical space with docking and deep learning. Journal of Chemical Theory and Computation, 17(11), 7106-7119.
[3] Atz, K., Grisoni, F., & Schneider, G. (2021). Geometric deep learning on molecular representations. Nature Machine Intelligence, 3(12), 1023-1032.
[4] Cao, Y., Chai, M., Li, M., & Jiang, C. (2023, July). Efficient learning of mesh-based physical simulation with bi-stride multi-scale graph neural network. In International Conference on Machine Learning (pp. 3541-3558). PMLR.
[5] Fahim, G., Amin, K., & Zarif, S. (2022). Enhancing single-view 3D mesh reconstruction with the aid of implicit surface learning. Image and Vision Computing, 119, 104377.

Profil du candidat :
Ideally, the recruited person will hold a Master’s degree in computer science and have theoretical and practical knowledge in deep learning. Experience of machine learning on graphs is also desirable but not essential. The candidate must demonstrate:
● Programming skills, such as proficiency in Python
● Experience in deep learning and data mining
● Synthesis and writing skills allowing for clear and effective reporting of the work done

Formation et compétences requises :

Adresse d’emploi :
Computer science Research Institute of Lens (CRIL), Lens, France

Document attaché : 202404091446_Deep Graph Representation Learning on non-uniform 3D objects.pdf

3rd IJCAI Workshop on Spatio-Temporal Reasoning and Learning (STRL 2024)

Date : 2024-08-05
Lieu : Collocated with the IJCAI 2024 conference, Jeju Island, South Korea

Call For Papers

The 3rd International Workshop on Spatio-Temporal Reasoning and Learning (STRL 2024) will take place in Jeju, South Korea, collocated with IJCAI 2024.

Website: https://www.lirmm.fr/strl2024/

Introduction

Opposing the false dilemma of logical reasoning vs machine learning, we argue for a synergy between these two paradigms in order to obtain hybrid AI systems that will be robust, generalizable, and transferable.

Indeed, it is well-known that machine learning only includes statistical information and, therefore, is not inherently able to capture perturbations (interventions or changes in the environment), or perform reasoning and planning. Ideally, (the training of) machine learning models should be tied to assumptions that align with physics and human cognition to allow for these models to be re-used and re-purposed in novel scenarios.

On the other hand, it is also the case that logic in itself can be brittle too, and logic further assumes that the symbols with which it can reason are already given.

It is becoming ever more evident in the literature that modular AI architectures should be prioritized, in which the knowledge about the world and the reality we operate in is decomposed into independent and recomposable pieces; such an approach should increase the chances that these systems behave in a causally sound manner.

You may find details about previous editions of this workshop via the links below:

Objective

The aim of this workshop is to formalize such a synergy between logical reasoning and machine learning that will be grounded on spatial and temporal knowledge.

We argue that the calculi associated with the spatial and temporal reasoning community, be it qualitative or quantitative, naturally build upon physics and human cognition, and could therefore form a module that would be beneficial towards causal representation learning. A (symbolic) spatio-temporal knowledge base could provide a dependable causal seed upon which machine learning models could generalize, and exploring this direction from various perspectives is the main theme here.

Topics

In this workshop, we invite the research community in artificial intelligence to submit works related to the proposed integration of spatial and temporal reasoning with machine learning, revolving around the following topic areas:

  • Real-world problems / applications involving spatio-temporal data
  • Spatial, temporal, and spatio-temporal knowledge graphs
  • Spatio-temporal data mining / analysis
  • Space and time in narratives
  • Declarative spatial reasoning
  • Spatial and temporal language understanding with and without additional modalities (e.g., vision)
  • Neuro-symbolic approaches for spatio-temporal reasoning and learning
  • Probabilistic world models for spatio-temporal reasoning and learning
  • Probabilistic inference for spatio-temporal reasoning and learning
  • Datasets for spatio-temporal reasoning and learning
  • Metrics for assessing spatio-temporal reasoning and learning methods
  • Limitations in machine learning for spatio-temporal reasoning and learning; how far can machine learning go?
  • Relation between causal reasoning and spatial and temporal reasoning
  • Research and teaching challenges in spatio-temporal reasoning and learning

The list above is by no means exhaustive, as the aim is to foster the debate around all aspects of the suggested integration.

Application domains being addressed include, but are not limited to:

  • Autonomous Vehicles and Drones
  • Cognitive Robotics
  • Spatial Computing for Design
  • Computational Art
  • (Cognitive) Vision
  • Geographic Information Systems
  • Smart Environments
  • Healthcare

Submission

The submission link is available at: https://easychair.org/conferences/?conf=strl2024

Guidelines

Papers should be formatted according to the CEUR-ART style formatting guidelines here and submitted as a single PDF file.

We welcome submissions across the full spectrum of theoretical and practical work including research ideas, methods, tools, simulations, applications or demos, practical evaluations, and surveys.

Submissions that are 2 pages long (excluding references and appendices) will be considered for a short presentation, and submissions that are between 4 and 7 pages long (again, excluding references and appendices) will be considered for a regular presentation.

All papers will be peer-reviewed in a single-blind process and assessed based on their novelty, technical quality, potential impact, clarity, and reproducibility (when applicable).

Important Dates

Be mindful of the following dates:

  • April 26, 2024: Workshop paper submission deadline
  • May 31, 2024: Paper acceptance/rejection notification date
  • June 7, 2024: Camera-ready submission deadline
  • August 5, 2024: Workshop Date

Note: all deadlines are AoE (Anywhere on Earth).

Proceedings

The accepted papers will appear on the workshop website. We also intend to publish the workshop proceedings with CEUR-WS.org; this option will be discussed with the authors of accepted papers and is subject to the CEUR-WS.org preconditions. We note that, as STRL 2024 is a workshop, not a conference, submission of the same paper to conferences or journals is acceptable from our standpoint.

Workshop Organizers


Deep Learning and Knowledge Integration for Temporal Relations Extraction

Offre en lien avec l’Action/le Réseau : – — –/– — –

Laboratoire/Entreprise : LIFO Université d’Orléans
Durée : 3 ans
Contact : anais.halftermeyer@univ-orleans.fr
Date limite de publication : 2024-05-04

Contexte :
The recruited person will work at LIFO, University of Orléans (Campus de la Source, Orléans). They will be integrated into the Contraintes et Apprentissage team of LIFO (https://www.univ-orleans.fr/lifo/equipes/CA/).
The thesis will start in October 2024, and funding will last for three years.
Supervisors:
Anaïs Lefeuvre-Halftermeyer (anais.halftermeyer@univ-orleans.fr) LIFO, U. Orléans
Thi Bich Hanh Dao (thi-bich-hanh.dao@univ-orleans.fr) LIFO, U. Orléans
Remuneration:
Remuneration follows current legislation (gross salary of 2,100 euros), see https://www.enseignementsup-recherche.gouv.fr/fr/le-financement-doctoral-46472

Sujet :
We propose to work within the framework of temporal information extraction, which associates with a natural-language text a synthetic representation of the events it describes. A classical representation of such data is a graph of temporal relations between the described events and/or between temporal expressions [1].
Recent advances in the language capabilities of deep learning models lead us to question human superiority on natural language processing tasks. These models have increasingly complex architectures and are increasingly demanding in terms of computing power and training data. However, they remain insufficient, since general knowledge about temporal relations is not exploited to better guide and explain the results. Within this thesis topic, we propose to explore the integration of knowledge into a deep learning system, based on a language model, to solve temporal reasoning tasks.
A preliminary system [3] proposed to construct a temporal graph from medical texts by leveraging BERT and by using rules expressed in probabilistic logic both during the model learning phase and during the global inference phase. This hybrid work opened research avenues on the considerable contribution that temporal knowledge, expressed as rules, could make. In order to make such systems more efficient, another study [4] successfully exploited syntactic analysis of the input. In line with [2], we propose to leverage temporal knowledge representation to enhance system performance and explainability.
We are interested in integrating knowledge into these models, via constraint expression, in order to best solve temporal reasoning tasks and to:
• Leverage the best of both worlds: constraints and language models acquired by deep learning
• Propose partly explainable hybrid models
• Base our systems on controlled computing power combined with a reproducible methodology of knowledge injection
Concretely, given a deep learning system based on a language model trained to translate text into a temporal graph representing the events narrated in the input text, injecting knowledge via constraint expression will modify the system’s outputs (a toy illustration is sketched after the list below). We aim to incrementally inject knowledge to guide our system while controlling:
• The size of our model
• The size of our training data
• The complexity of our constraints
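As a toy illustration of constraint injection (a sketch under assumptions, not the system to be built), the snippet below takes confidence-scored BEFORE predictions for event pairs and enforces the acyclicity implied by the transitivity and antisymmetry of temporal precedence, greedily dropping the least confident edge of every detected cycle. The event names, scores and repair strategy are hypothetical.

```python
# Illustrative sketch: repair a predicted BEFORE-graph so that it satisfies the
# transitivity/antisymmetry of temporal precedence (i.e. no directed cycles).

def find_cycle(edges):
    """Return a list of nodes forming a directed cycle, or None if acyclic."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
        graph.setdefault(b, [])
    state, stack = {}, []

    def dfs(node):
        state[node] = "open"
        stack.append(node)
        for nxt in graph[node]:
            if state.get(nxt) == "open":
                return stack[stack.index(nxt):]      # cycle found on the stack
            if nxt not in state:
                cyc = dfs(nxt)
                if cyc:
                    return cyc
        stack.pop()
        state[node] = "done"
        return None

    for node in graph:
        if node not in state:
            cyc = dfs(node)
            if cyc:
                return cyc
    return None

def enforce_consistency(predictions):
    """predictions: {(e1, e2): confidence} meaning BEFORE(e1, e2).
    Greedily drop the least confident edge of each cycle until acyclic."""
    kept = dict(predictions)
    while True:
        cycle = find_cycle(kept.keys())
        if cycle is None:
            return kept
        cycle_edges = list(zip(cycle, cycle[1:] + cycle[:1]))
        weakest = min((e for e in cycle_edges if e in kept), key=lambda e: kept[e])
        del kept[weakest]

# Hypothetical model outputs: "admission BEFORE surgery BEFORE discharge", plus a
# low-confidence contradictory edge that the consistency constraint removes.
preds = {("admission", "surgery"): 0.93, ("surgery", "discharge"): 0.88,
         ("discharge", "admission"): 0.41}
print(enforce_consistency(preds))
```

In the thesis, such constraints would instead be expressed declaratively and combined with the model scores during training or global inference rather than applied as a post-hoc repair.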
References
[1] T. Knez and S. Žitnik. Event-centric temporal knowledge graph construction: A survey. Mathematics, 11(23), 2023.
[2] B. Zhang and L. Li. Piper: A logic-driven deep contrastive optimization pipeline for event temporal reasoning. Neural Networks, 164:186–202, 2023.
[3] Y. Zhou, Y. Yan, R. Han, J. H. Caufield, K.-W. Chang, Y. Sun, P. Ping, and W. Wang. Clinical temporal relation extraction with probabilistic soft logic regularization and global inference. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 14647–14655, 2021.
[4] L. Zhuang, H. Fei, and P. Hu. Syntax-based dynamic latent graph for event relation extraction. Information Processing & Management, 60(5):103469, 2023.

Profil du candidat :
Ideally, the recruited person will hold a Master’s degree in computer science and have theoretical and practical knowledge in deep learning. An interest in language and its automatic processing would be appreciated but is not a prerequisite for recruitment.

Formation et compétences requises :
The candidate must demonstrate:
• Programming skills, such as proficiency in Python, for example
• Experience in Machine Learning, data mining, or applied mathematics
• Synthesis and writing skills allowing for clear and effective reporting of work done
• Ability to communicate in French or English, both orally and in writing

An audition will take place before the MIPTIS doctoral school jury on June 12 to finalize the selection process.

Adresse d’emploi :
LIFO – Bâtiment IIIA
Rue Léonard de Vinci
B.P. 6759
F-45067 ORLEANS Cedex 2

Document attaché : 202404080944_Sujet_these_FR_EN.pdf

Green CNN: Multi-objective optimization of convolutional neural network architectures to maximize performance and reduce energy consumption

Offre en lien avec l’Action/le Réseau : – — –/Doctorants

Laboratoire/Entreprise : LISTIC
Durée : 3 ans
Contact : Khadija.arfaoui@univ-smb.fr
Date limite de publication : 2024-05-04

Contexte :

Sujet :
Neural Architecture Search (NAS) has revolutionized machine learning by automating the design of neural architectures, outperforming classical learning methods on tasks such as image classification, object detection and semantic segmentation. As part of the AutoML field, NAS overlaps with hyperparameter optimization and meta-learning. Classified along three dimensions, NAS methods require an effective definition of the search space, advanced search algorithms and appropriate evaluation techniques. Neural network architectures, in particular CNNs, have become predominant, but their performance is closely tied to the configuration of their parameters, which calls for a systematic exploration of the parameter space. However, hardware challenges such as computational complexity, model size, hardware heterogeneity and energy consumption persist. In this context, multi-objective optimization approaches for neural network architectures, aiming to improve performance while reducing energy consumption, become crucial. This thesis project intends to explore these aspects by using evolutionary approaches to evaluate the impact of different parameter configurations and by focusing on compact architectures to reduce the energy consumption of CNNs. Validation of this work on concrete use cases with real datasets will demonstrate the relevance and applicability of these advances in the field of neural network architecture optimization for CNNs.
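To make the envisaged search loop concrete, the sketch below shows a minimal multi-objective evolutionary search over a toy CNN configuration space, keeping the Pareto front of configurations that trade off a surrogate accuracy against a parameter-count proxy for energy. The search space, the accuracy surrogate and the energy proxy are illustrative assumptions; in the thesis they would be replaced by actual training runs and energy measurements on target hardware.

```python
# Illustrative sketch: tiny evolutionary multi-objective search over CNN
# hyperparameters, with placeholder objectives (real work would train and profile).
import random

SEARCH_SPACE = {"depth": [2, 4, 6, 8], "width": [16, 32, 64, 128], "kernel": [3, 5, 7]}

def sample():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def mutate(cfg):
    key = random.choice(list(SEARCH_SPACE))
    return {**cfg, key: random.choice(SEARCH_SPACE[key])}

def objectives(cfg):
    # Placeholder surrogates: accuracy saturates with capacity, energy is proxied
    # by a rough parameter count.
    params = cfg["depth"] * cfg["width"] * cfg["kernel"] ** 2
    accuracy = 1.0 - 1.0 / (1.0 + 0.001 * params)
    return accuracy, params                      # maximize accuracy, minimize params

def dominates(a, b):
    return a[0] >= b[0] and a[1] <= b[1] and (a[0] > b[0] or a[1] < b[1])

def pareto_front(population):
    scored = [(cfg, objectives(cfg)) for cfg in population]
    return [c for c, s in scored
            if not any(dominates(s2, s) for _, s2 in scored if s2 != s)]

population = [sample() for _ in range(20)]
for _ in range(30):                              # evolutionary loop
    front = pareto_front(population)
    children = [mutate(random.choice(front)) for _ in range(10)]
    population = front + children

for cfg in pareto_front(population):
    acc, params = objectives(cfg)
    print(f"{cfg}  accuracy~{acc:.3f}  params~{params}")
```

In practice, NSGA-II-style selection and energy measured on the target device would replace this survivors-plus-children loop and its proxies.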

Profil du candidat :
The ideal candidate for this thesis topic should have in-depth knowledge of machine learning and optimization, with a command of techniques such as evolutionary algorithms and reinforcement learning. Solid programming experience, in particular with languages such as Python and libraries such as TensorFlow or PyTorch, is essential for the practical implementation of the proposed methods. In addition, a thorough understanding of neural network architectures, in particular CNNs, as well as of the associated hyperparameters and their impact on model performance, is required. The ability to work with real datasets and to analyze results in a statistically sound manner is also important. Finally, knowledge of hardware challenges and of energy efficiency in the context of implementing neural network architectures would be an advantage.

Formation et compétences requises :

Adresse d’emploi :
LISTIC (Laboratoire d’Informatique, Systèmes, Traitement de l’Information et de la Connaissance), 5 Chem. de Bellevue, 74940 Annecy

Document attaché : 202404080618_sujet de thèse.pdf

Information Rating and Analysis of Knowledge Dynamics. Application to the Temporal Monitoring of the Reliability of Bibliographic Information on Insects as Vectors of Plant Pathogens.

Offre en lien avec l’Action/le Réseau : – — –/– — –

Laboratoire/Entreprise : MaIAGE – INRAE et AgroParisTech Saclay
Durée : 36 months
Contact : claire.nedellec@inrae.fr
Date limite de publication : 2024-06-30

Contexte :
We are looking for candidates who have proven knowledge of NLP and Machine Learning. Deadline is June 30th.
You will be affiliated with the Computer Science Graduate School at Paris-Saclay University. You will be employed by INRAE.
We offer a motivating research environment with many opportunities for in-house, national and international collaborations and access to GPU computing resources and state-of-the-art research equipment. The gross salary per month for the three-year contract is €2,100 (in 2024) to €2,300 (in 2026), including the social security package (healthcare, pension, unemployment benefits).

Sujet :
## Research subject
The PhD will take place within the framework of a research project on NLP for insect monitoring in plant health. The central aim of the PhD project is to develop original approaches for assessing the reliability of textual information, by integrating linguistic dimensions and knowledge graphs (NLP, language models) and dynamic dimensions (time series). The quality and relevance of the extracted information will be derived from the documents collected over time and from the existing knowledge base.
For plant health and risk management, the biological interaction between insect vectors, pathogens, and host plants is of primary interest for anticipating contamination and reducing pesticide use.


Profil du candidat :
## Requirements
A successful candidate will have:
– An MSc or equivalent in Artificial Intelligence
– Proven experience in applying natural language processing
– An interest in learning about biology or bioinformatics
– A high level of academic English or French, both written and spoken
– Good programming skills in Python or Java (preferably with experience of deep learning tools)
– The capacity to work as part of a team in a multidisciplinary framework
Experience of applied research in the Life Sciences is an asset.

Formation et compétences requises :

Adresse d’emploi :
Location: Paris-Saclay University campus, mainly at the MaIAGE lab [1] and MIA-Paris-Saclay [2]; you will also visit PHIM [3] at the INRAE research center in Montpellier (South of France).
## Application
The closing date is June 30th, 2024.
Interested candidates should send their application files to Claire Nédellec (claire.nedellec@inrae.fr), Vincent Guigue, Nicolas Sauvion (Nicolas.sauvion@inrae.fr), and Robert Bossy (Robert.bossy@inrae.fr).
It should comprise:
– a CV (max 5 pages) with transcripts (Master), diplomas and internships
– a cover letter
– the names and contact details of two referees for reference letters
[1] https://maiage.inrae.fr/en/bibliome
[2] https://vguigue.github.io/
[3] https://umr-phim.cirad.fr/en/recherche/comprendre-les-epidemies-dans-les-champs-prism/equipe-forisk

Document attaché : 202404041723_ADUM INRAE 2024-english version.pdf

GeoTextAI: Spatial information and Artificial Intelligence in the analysis of textual data for predicting food and health crises

Offre en lien avec l’Action/le Réseau : – — –/– — –

Laboratoire/Entreprise : TETIS – Montpellier
Durée : 3 ans
Contact : maguelonne.teisseire@teledetection.fr
Date limite de publication : 2024-06-30

Contexte :
In the context of monitoring and early-warning systems for food and health crises, extracting and representing the spatio-temporal characteristics associated with textual information is essential for identifying and modeling events and their impacts. Taking into account the complexity of spatial information, including elements such as hierarchy and proximity relations, is a current challenge for which few satisfactory solutions exist.

Sujet :
The objective of this thesis is to develop a methodology that integrates spatial knowledge graphs into Artificial Intelligence models, explicitly taking spatial and temporal information into account.
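As a minimal illustration of the kind of spatial knowledge graph involved (an assumed toy example, not the thesis methodology), the sketch below represents places as nodes with typed "part_of" (hierarchy) and "near" (proximity) edges, and shows how a place mention extracted from a text can be expanded to the administrative units that contain it and to nearby places.

```python
# Illustrative sketch: a tiny spatial knowledge graph with hierarchy and
# proximity relations, queried to enrich a place mention extracted from text.
import networkx as nx

kg = nx.DiGraph()
# Hierarchy: child --part_of--> parent  (toy example, assumed data)
for child, parent in [("Montpellier", "Hérault"), ("Hérault", "Occitanie"),
                      ("Occitanie", "France"), ("Nîmes", "Gard"), ("Gard", "Occitanie")]:
    kg.add_edge(child, parent, relation="part_of")
# Proximity between neighbouring places
kg.add_edge("Montpellier", "Nîmes", relation="near")
kg.add_edge("Nîmes", "Montpellier", relation="near")

def containing_units(place):
    """All administrative units that (transitively) contain `place`."""
    units, frontier = [], [place]
    while frontier:
        node = frontier.pop()
        for _, parent, data in kg.out_edges(node, data=True):
            if data["relation"] == "part_of" and parent not in units:
                units.append(parent)
                frontier.append(parent)
    return units

def nearby(place):
    return [v for _, v, d in kg.out_edges(place, data=True) if d["relation"] == "near"]

# A mention of "Montpellier" in a news item can now be linked to its containing
# regions (for aggregation) and to nearby places (for the spatial spread of an event).
print(containing_units("Montpellier"))   # ['Hérault', 'Occitanie', 'France']
print(nearby("Montpellier"))             # ['Nîmes']
```

Such relations could then be injected as features or constraints into the language-model-based extraction pipeline studied in the thesis.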

Detailed subject and application procedure:
https://nubes.teledetection.fr/index.php/s/mL98yCJakigZiMM

Profil du candidat :
The candidate should have experience in textual data processing and machine learning. General knowledge of the issues involved in training and applying language models and of knowledge graphs is recommended, as well as an interest in thematic applications.

Formation et compétences requises :
Master’s degree in computer science, data science, natural language processing, or any related field.

Adresse d’emploi :
https://nubes.teledetection.fr/index.php/s/mL98yCJakigZiMM

Implicit deep learning of priors for inverse problems. A case study in radio astronomy.

Offre en lien avec l’Action/le Réseau : BigData4Astro/– — –

Laboratoire/Entreprise : Laboratoire des Signaux et Systèmes
Durée : 3 ans
Contact : francois.orieux@l2s.centralesupelec.fr
Date limite de publication : 2024-06-30

Contexte :

Sujet :
https://pro.orieux.fr/assets/thesis-dnn-orieux-l2s.pdf

Context
=======

Processing instrumental measurements often requires using the data model, or forward model, within the method. For example, the measurements are affected by noise or blur, or live in a different space from that of the unknowns (Fourier coefficients *versus* an image in the case of MRI or interferometry).

While the forward model is stable and well-posed (data can be generated from the parameters), the inverse problem is most often unstable and ill-posed.

The project is part of the international SKA project, the *Square Kilometre Array*. SKA is a radio astronomy observatory that will produce a considerable volume of data in order to generate images at unmatched spatial and spectral resolution. Its antennas are spread across Australia and South Africa, making it the largest radio interferometer to date. The team is involved in the project through the ANR Dark-Era project and the LabCom ECLAT (ATOS, IETR, INRIA, …). The work will be carried out *in collaboration with N. Gac from SATIE*, coordinator of the ANR Dark-Era project, who will also contribute his expertise on algorithm-architecture co-design for inverse problems.

Subject
=======

Techniques for solving inverse problems have evolved considerably in recent years with the arrival of new machine learning techniques. One can mention the unrolling of iterative algorithms (*unrolling*), *plug-and-play* approaches, RED (*regularization by denoising*), and data-driven priors.

The work will proceed in several stages.

– First, the PhD student will carry out a state-of-the-art review of data-driven and statistical learning methods for solving inverse problems, focusing on a new learning approach, *Implicit Deep Learning*.

– Then, building on this literature review, the approaches based on generative networks such as VAEs or invertible networks will have to be understood and implemented. These approaches rely on the minimization of a composite criterion $$J(x) = \|y - Hx\|_2^2 + R(x)$$ where the data-fidelity term uses the known observation model $H$ (blur, inpainting, denoising, …) and the regularization term $R(x)$ is learned from data. The solution is then defined as $$\hat{x} = \arg\min_{x} J(x).$$

– The contributions and limitations of this approach for inverse problems will have to be identified, and possible ways around the bottlenecks encountered will have to be proposed. In particular, fixed-point algorithms other than the classical gradient descent algorithm will be considered.

– We will implement this new method, and the results will have to be compared with those obtained with classical approaches (Wiener filter, sparsity, …) for which codes are available.

– The application will be a Fourier synthesis problem for radio astronomy within the framework of SKA.

– The work will be carried out on a workstation equipped with an Nvidia 3080 or 4090 GPU card running Linux, TensorFlow and Python, or on the Ruche computing cluster of Université Paris-Saclay.

This work offers innovations on two levels: methodologically, on the use of learning for inverse problems, and also through the proposal of new, more efficient algorithms for Fourier synthesis in radio astronomy. The prospect of faster algorithms, thanks to *unrolling*, for processing the massive data produced by SKA is an important challenge.
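As a minimal illustration of the composite criterion above (a toy deconvolution setting, not the targeted SKA Fourier-synthesis pipeline), the sketch below minimizes $J(x) = \|y - Hx\|_2^2 + \lambda \|x\|_2^2$ by gradient descent; in the thesis, this quadratic regularizer would be replaced by a prior learned with generative or implicit networks, and other fixed-point schemes would be compared. The blur kernel, noise level and step size are assumptions made for the example.

```python
# Illustrative toy inverse problem: minimize J(x) = ||y - Hx||^2 + lam*||x||^2
# by gradient descent. H is a simple blur; the learned prior R(x) studied in the
# thesis is replaced here by a quadratic regularizer for the sake of the sketch.
import numpy as np

rng = np.random.default_rng(0)
n = 128
x_true = np.zeros(n); x_true[30] = 1.0; x_true[80] = -0.7       # sparse ground truth

# Forward model H: convolution with a small symmetric blur kernel.
kernel = np.array([0.25, 0.5, 0.25])
def H(x):  return np.convolve(x, kernel, mode="same")
def Ht(y): return np.convolve(y, kernel[::-1], mode="same")      # adjoint (symmetric kernel)

y = H(x_true) + 0.01 * rng.standard_normal(n)                    # noisy measurements

lam, step = 0.05, 0.4
x = np.zeros(n)
for _ in range(500):                                             # gradient descent on J
    grad = 2 * Ht(H(x) - y) + 2 * lam * x
    x -= step * grad

print("residual ||y - Hx||:", np.linalg.norm(y - H(x)))
print("reconstruction error:", np.linalg.norm(x - x_true))
```

Unrolling this loop for a fixed number of iterations, with learned components inside, is the starting point of the *unrolling* approaches mentioned above.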

Profile and skills
==================

The candidate should have an engineering or Master's (M2) level background in signal or image processing, *data science* or *machine learning*. They should have knowledge of applied mathematics or programming. Skills in estimation and statistics will be appreciated.

Profil du candidat :

Formation et compétences requises :

Adresse d’emploi :
Laboratoire des Signaux et Systèmes
3 rue Joliot-Curie
91190 Gif-sur-Yvette

Document attaché : 202404041508_thesis-dnn-orieux-l2s.pdf

Towards a knowledge-based DIGItal Twin for a tOMato production system – DIGITOM

Offre en lien avec l’Action/le Réseau : DOING/– — –

Laboratoire/Entreprise : Institut de Recherche en Horticulture et Semences
Durée : 36 months
Contact : julie.bourbeillon@institut-agro.fr
Date limite de publication : 2024-05-17

Contexte :
The Institute for Horticulture and Seed Research – IRHS (UMR1345) is seeking a Ph.D. student within the framework of a research project on ontology-based multiscale modelling of tomato, financed by Institut Agro Rennes-Angers (1.10.2024 – 30.9.2027) and starting in October 2024.
Context and background
In the context of challenges such as climate change, scarcity of workforce, pressure from new pests and diseases, and regulations concerning the use of pesticides, the production of horticultural crops has become a difficult endeavour. There is a real need to develop new production systems that overcome these problems. At the same time, enormous progress has been made recently at the frontiers of information science, artificial intelligence and sensor technology. 3D plant models representing plant architectural and physiological development in space and over time at different resolutions (scales) are now available, putting the creation of a horticultural digital twin within reach. Such a digital twin (i.e. a multi-scale model able to update its parameters automatically) would be a powerful tool enabling us to rapidly optimize existing production systems, and to propose novel ones, in silico.
A digital twin consists of multiscale models with a multitude of parameters. The question is how best to interconnect these models and how to reason about simplifications at the scale of the digital twin. We therefore need to automate the exploration of these different scales. This can be achieved thanks to a formal representation of the multi-dimensional landscape of scales and parameters through an ontology. The aim of this thesis is to navigate the ontology to determine what is relevant by comparing simulated with real data. The challenge is to carry out such a comparison by developing a method for automatically moving from one scale to another without losing essential information.

Sujet :
What you will do
• Characterizing the multidimensional landscape of scales and parameters: Inventory of photosynthesis and biomass production models (especially for tomato), characterize the key parameters to create an ontology describing the parameter landscape of each model.
• Building the integration system: Define how to transfer data between ecophysiological models and scales, and represent them in the ontology for the tomato crop case. Exploit the information to describe how to use the output of one model in another.
• Greenhouse trials: Define how to measure the environment and the plants at the desired level of detail for the model(s) under consideration, based on the results of the system (output from point 2).
• Refining the integration system: Compare the experimental results with the integration system to improve the representation. A second set of experimental data may be used to validate the corrections made. Data analysis, parameterization, calibration and validation of the model.
Generally, you will conduct a bibliographical comparison and an analysis of the code of various models, then propose a (re)coding of the models (Functional-Structural Plant Model, Process-Based Model, or 3D model of the greenhouse) based on an ontology to be created. This work will be followed by a sensitivity analysis, optimization studies, simulation of scenarios, and validation using the GroIMP and R platforms. Validation will be provided by experiments planned in a greenhouse located on the campus.
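To make the notion of a machine-readable parameter landscape concrete, the sketch below (plain Python, with invented model, scale and variable names; not the DIGITOM ontology itself) describes each model by its scale, inputs and outputs, and lists which outputs of one model could feed the inputs of another across scales; this is the kind of coupling the ontology is meant to expose.

```python
# Illustrative sketch: a toy "parameter landscape" of plant models and a routine
# that finds candidate couplings between them. Model, scale and variable names
# are invented placeholders, not the DIGITOM ontology itself.
from dataclasses import dataclass, field

@dataclass
class ModelDescription:
    name: str
    scale: str                       # e.g. "leaf", "plant", "greenhouse"
    inputs: set = field(default_factory=set)
    outputs: set = field(default_factory=set)

models = [
    ModelDescription("photosynthesis", "leaf",
                     inputs={"PAR", "leaf_temperature", "CO2"},
                     outputs={"net_assimilation"}),
    ModelDescription("biomass_allocation", "plant",
                     inputs={"net_assimilation", "air_temperature"},
                     outputs={"fruit_biomass", "leaf_area"}),
    ModelDescription("greenhouse_climate", "greenhouse",
                     inputs={"outside_radiation", "ventilation"},
                     outputs={"PAR", "air_temperature", "CO2"}),
]

def candidate_couplings(models):
    """Yield (producer, variable, consumer) triples where one model's output
    can serve as another model's input, possibly across scales."""
    for producer in models:
        for consumer in models:
            if producer is consumer:
                continue
            for var in producer.outputs & consumer.inputs:
                yield producer.name, var, consumer.name

for producer, var, consumer in candidate_couplings(models):
    print(f"{producer} --[{var}]--> {consumer}")
```

An actual ontology (for example in OWL) would add units, scale definitions and conversion rules on top of this simple matching.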

Profil du candidat :
Your profile
You should have sound skills in at least two of the following domains: bioinformatics, data science, computer science or plant sciences. You must be at ease with programming (knowledge of the Java language would be a plus), should have a strong interest in agronomy (or plant science) and be ready to carry out experiments in interaction with agronomists. Applicants with degrees in both data science and plant sciences will be appreciated. Your ability to communicate in English, both orally and in writing, is essential. (Basic) knowledge of the French language (or a willingness to learn it) will be a strong asset, as you will have to communicate with technical staff.

Formation et compétences requises :

Adresse d’emploi :
IRHS, 42 Rue Georges Morel, 49070 Beaucouzé

Document attaché : 202404040828_DoctoralPositionAngers.pdf

Post-doctoral position in Artificial Intelligence for Disaster Science

Offre en lien avec l’Action/le Réseau : – — –/– — –

Laboratoire/Entreprise : LIMOS
Durée : 24 mois
Contact : julien.ah-pine@sigma-clermont.fr
Date limite de publication : 2024-08-31

Contexte :

Sujet :

Dear colleagues,

LIMOS (Laboratoire d’Informatique, de Modélisation et d’Optimisation des Systèmes) and CERDI (Centre d’Études et de Recherches sur le Développement International) are pleased to announce a post-doctoral opportunity in the field of Artificial Intelligence for Disaster Science, within the framework of the DLISCES project.

We are looking for a researcher who has recently obtained, or is about to complete, a PhD in Computer Science or Applied Mathematics, in the field of AI / Deep Learning / Computer Vision. The successful candidate will join our multidisciplinary team to address the crucial challenges of natural disaster risk reduction in the Global South.

—————————————
Desired skills
—————————————
– Solid expertise in deep learning and machine learning, in particular for computer vision applications.
– Experience in satellite image processing, with a focus on disaster-related data, would be a plus.
– Demonstrated ability to conduct high-level research and to publish in peer-reviewed international conferences and journals.
– A keen interest in interdisciplinary collaboration and in impactful research at the intersection of artificial intelligence and disaster science.
– Proficiency in English or French.

—————————————
Scientific context
—————————————
The DLISCES project aims to exploit advanced Artificial Intelligence techniques to analyze satellite images, socio-economic data and environmental information in order to map vulnerability indicators in the context of risks associated with climate hazards. By combining advanced AI methods with socio-economic perspectives, we aim to improve our understanding of vulnerability and to contribute to informed public policy decisions.

—————————————
Position details
—————————————
– Position: Post-doctoral researcher (2-year contract).
– Starting date: September 2024 or earlier.
– Salary: between €31,500 and €34,000 per year, depending on experience.
– Application deadline: May 31, 2024.
– Detailed job description: https://limos.fr/news_job/59 and https://cerdi.uca.fr/version-francaise/unite/nous-rejoindre/projet-dlisces-recrutement-dun-e-postdoctorant-e#/admin

—————————————
How to apply
—————————————
To apply, please send your CV, a cover letter and the contact details of two referees to:
– Julien Ah-Pine (julien.ah-pine@sigma-clermont.fr) and
– Pascale Phélinas (pascale.phelinas@ird.fr).

Review of applications will begin immediately and will continue until the position is filled.

Best regards,

Julien Ah-Pine
Associate Professor in Data Science
UCA/LIMOS

Profil du candidat :

Formation et compétences requises :

Adresse d’emploi :
LIMOS
Campus Universitaire des Cézeaux
1 rue de la Chebarde
TSA 60125
CS 60026
63178 AUBIERE CEDEX – FRANCE

Document attaché : 202404031544_Job Opening PostDoc LIMOS-CERDI.pdf