Temporal domain adaptation for land cover mapping from multi-modal remote sensing data

When:
30/11/2021 – 01/12/2021 all-day

Offer related to the Action/Network: MACLEAN

Laboratory/Company: UMR TETIS
Duration: 6 months
Contact: dino.ienco@inrae.fr
Publication deadline: 2021-11-30

Context:
Nowadays, numerous satellite missions continuously collect remotely sensed images of the Earth's surface via various modalities (e.g. SAR or optical) and at different spatial and temporal scales. The same study area can therefore be covered by rich, multi-faceted and diverse information. Such information is of paramount importance for monitoring spatio-temporal phenomena and producing land cover maps that support sustainable agriculture as well as public policy decisions. In recent years, the remote sensing research community has turned its attention towards deep learning (DL) approaches to integrate the complementary sensor acquisitions available over the same study area [1], with the aim of leveraging as much as possible the interplay between input sources with different spectral and spatial content in order to improve the resulting maps. Unfortunately, DL models require a considerable amount of data to be trained and, in real-world scenarios, it is difficult to acquire enough ground truth information each time a land cover map of a specific study area must be produced. Acquiring ground truth data over a study area requires time-consuming (3 or 4 months) and labour-intensive field campaigns (costs include travel to and from the study area for a team of at least 4 or 5 people, access to the area, etc.).

While a number of research studies have addressed how to combine multi-source remote sensing information for land cover mapping in a standard supervised learning setting [2,3], limited effort has been devoted to understanding how well the trained machine learning models transfer from one time period to the next (over the same study area) in order to reduce the cost of acquiring new ground truth data [4].

Environment: UMR TETIS (a joint research unit involving INRAE, CIRAD, AgroParisTech and CNRS) is an interdisciplinary laboratory that brings together people with different backgrounds (agronomy, ecology, remote sensing, signal processing, data science). It has consolidated experience in developing machine learning approaches (CNN, RNN, GraphCNN, attention mechanisms) to deal with the high complexity of remote sensing data in many environmental and agricultural application studies: land cover mapping, biophysical variable estimation (e.g. soil moisture), yield prediction, biodiversity characterization, forest monitoring, etc.

Subject:
The objective of this internship is the study and development of a methodological framework, based on deep learning approaches (convolutional and/or recurrent neural networks), to cope with the transferability (temporal transfer learning) of a multi-source land cover mapping model from one time period to a successive one (e.g. from one year to the next) over the same study area. To this end, the intern will investigate recent trends and methods in the field of Unsupervised Domain Adaptation (UDA) [5], exploiting state-of-the-art techniques from computer vision and signal processing [6,7].
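As an illustration of the kind of UDA technique that could be investigated, the minimal PyTorch sketch below follows the domain-adversarial training idea of [6]: a gradient reversal layer forces the encoder of a satellite image time series to produce features that a domain classifier cannot use to distinguish the source year from the target year, while a label classifier is trained on the labelled source year only. The pixel-level setting, tensor shapes and layer sizes are illustrative assumptions, not specifications of the framework to be developed.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass, gradient reversal (scaled by lamb) in the backward pass.
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

class TemporalEncoder(nn.Module):
    # GRU encoder for a pixel-level multi-variate satellite image time series (assumed setting).
    def __init__(self, n_bands=10, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_bands, hidden, batch_first=True)

    def forward(self, x):              # x: (batch, timesteps, bands)
        _, h = self.gru(x)
        return h.squeeze(0)            # (batch, hidden)

class DannModel(nn.Module):
    def __init__(self, n_bands=10, hidden=64, n_classes=8):
        super().__init__()
        self.encoder = TemporalEncoder(n_bands, hidden)
        self.label_clf = nn.Linear(hidden, n_classes)   # land cover classes
        self.domain_clf = nn.Linear(hidden, 2)          # source year vs. target year

    def forward(self, x, lamb=1.0):
        feat = self.encoder(x)
        return self.label_clf(feat), self.domain_clf(GradReverse.apply(feat, lamb))

# One illustrative training step: class labels exist only for the source year.
model = DannModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x_src = torch.randn(32, 20, 10)        # source year: 32 pixels, 20 dates, 10 bands (toy data)
y_src = torch.randint(0, 8, (32,))     # land cover labels, available for the source year only
x_tgt = torch.randn(32, 20, 10)        # target year: unlabelled pixels

cls_src, dom_src = model(x_src, lamb=0.5)
_, dom_tgt = model(x_tgt, lamb=0.5)
dom_pred = torch.cat([dom_src, dom_tgt])
dom_true = torch.cat([torch.zeros(32, dtype=torch.long), torch.ones(32, dtype=torch.long)])

loss = criterion(cls_src, y_src) + criterion(dom_pred, dom_true)
optimizer.zero_grad()
loss.backward()
optimizer.step()

In this setting the target-year samples contribute only to the domain loss, which is what makes the adaptation unsupervised on the target side; alternatives such as the discriminative adversarial scheme of [7] could be explored in the same spirit.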

The intern will work in close connection with a team of research scientists (INRAE/CIRAD researchers and a PhD student) in the general fields of Unsupervised Domain Adaptation [5], multi-source remote sensing data [1,2,3] and multivariate time series analysis [8]. The missions of the internship will be the following:
– A detailed bibliographic study of recent trends in multi-modal/multi-source Unsupervised Domain Adaptation;
– Preprocessing of multi-source/multi-modal (remote sensing) image data to organize it for the subsequent machine learning analysis;
– Study, design and development of a deep learning framework for multi-modal Unsupervised Domain Adaptation (a minimal illustrative sketch is given after this list);
– Experimental evaluation of the proposed framework against competing methods (implementing the competing approaches or reusing code available in public repositories);
– Quantitative and qualitative analysis of the obtained results in order to identify the strengths and weaknesses of the proposed framework;
– Release of the produced code on an open-source platform (e.g. GitHub, GitLab) together with the data employed.
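As a purely illustrative companion to the preprocessing and framework-design points above, the sketch below shows a two-branch multi-modal encoder in the spirit of multi-source fusion architectures such as [2]: a small CNN summarizes an optical patch while a GRU summarizes a SAR time series, and the two representations are concatenated before classification. The sensors, patch size, number of dates and layer sizes are assumptions made for the example only, not specifications of the internship work.

import torch
import torch.nn as nn

class OpticalBranch(nn.Module):
    # Small CNN over an optical patch (e.g. a Sentinel-2-like chip); sizes are illustrative.
    def __init__(self, n_bands=4, out_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_bands, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, out_dim), nn.ReLU(),
        )

    def forward(self, x):              # x: (batch, bands, height, width)
        return self.net(x)

class SarBranch(nn.Module):
    # GRU over a SAR backscatter time series (e.g. Sentinel-1-like VV/VH); sizes are illustrative.
    def __init__(self, n_feat=2, out_dim=64):
        super().__init__()
        self.gru = nn.GRU(n_feat, out_dim, batch_first=True)

    def forward(self, x):              # x: (batch, timesteps, features)
        _, h = self.gru(x)
        return h.squeeze(0)

class FusionClassifier(nn.Module):
    # Concatenates the per-modality representations before the land cover classifier.
    def __init__(self, n_classes=8):
        super().__init__()
        self.opt_branch = OpticalBranch()
        self.sar_branch = SarBranch()
        self.head = nn.Linear(64 + 64, n_classes)

    def forward(self, x_opt, x_sar):
        fused = torch.cat([self.opt_branch(x_opt), self.sar_branch(x_sar)], dim=1)
        return self.head(fused)

# Toy forward pass with random tensors standing in for preprocessed patches and time series.
model = FusionClassifier()
x_opt = torch.randn(16, 4, 9, 9)       # 16 samples, 4 optical bands, 9x9 patches
x_sar = torch.randn(16, 30, 2)         # 16 samples, 30 SAR dates, VV/VH backscatter
logits = model(x_opt, x_sar)           # (16, 8) class scores

Such a fused representation is also a natural place to plug the domain-adversarial head sketched earlier, so that the year-invariance constraint acts on the joint multi-modal features.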

[1] D. Hong, L. Gao, N. Yokoya, J. Yao, J. Chanussot, Q. Du, B. Zhang: More Diverse Means Better: Multimodal Deep Learning Meets Remote-Sensing Imagery Classification. IEEE Trans. Geosci. Remote. Sens. 59(5): 4340-4354 (2021).

[2] P. Benedetti, D. Ienco, R. Gaetano, K. Ose, R. G. Pensa, S. Dupuy: M3Fusion: A Deep Learning Architecture for Multiscale Multimodal Multitemporal Satellite Data Fusion. IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens. 11(12): 4939-4949 (2018).

[3] Y. J. E. Gbodjo, O. Montet, D. Ienco, R. Gaetano, S. Dupuy: Multi-sensor land cover classification with sparsely annotated data based on Convolutional Neural Networks and Self-Distillation. IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens. (2021).

[4] B. Tardy, J. Inglada, J. Michel: Assessment of Optimal Transport for Operational Land-Cover Mapping Using High-Resolution Satellite Images Time Series without Reference Data of the Mapping Period. Remote. Sens. 11(9): 1047 (2019).

[5] S. Zhao, X. Yue, S. Zhang, B. Li, H. Zhao, B. Wu, R. Krishna, J. E. Gonzalez, A. L. Sangiovanni-Vincentelli, S. A. Seshia, K. Keutzer: A Review of Single-Source Deep Unsupervised Visual Domain Adaptation. CoRR abs/2009.00155 (2020).

[6] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, V. S. Lempitsky: Domain-Adversarial Training of Neural Networks. J. Mach. Learn. Res. 17: 59:1-59:35 (2016).

[7] E. Tzeng, J. Hoffman, K. Saenko, T. Darrell: Adversarial Discriminative Domain Adaptation. CVPR 2017: 2962-2971.

[8] H. Ismail Fawaz, G. Forestier, J. Weber, L. Idoumghar, P.-A. Muller: Deep learning for time series classification: a review. Data Min. Knowl. Discov. 33(4): 917-963 (2019).

Candidate profile:
The ideal candidate is a student at Master 2 level or in the final year of an engineering school.

Required education and skills:
A good background in signal processing/image processing and machine learning, together with good programming skills in Python (numpy, pandas, scikit-image, scikit-learn). Prior experience with a deep learning library (PyTorch or TensorFlow) is a plus.

Work address:
500, Rue Jean François Breton, 34093 Montpellier