
Learning to translate land-cover maps: several multi-dimensional context-wise solutions

  • Baudoux, Luc
Publication Date
Jan 30, 2023


The description of the bio-physical coverage of the Earth's surface, termed land cover, has become of utmost importance in recent decades in many areas, ranging from urban planning to climate studies and food security. Historically produced by hand, land-cover maps now take advantage of the recent boom in satellite imagery and computer vision techniques to gain accuracy and higher update frequency. However, they still suffer from two limitations. On the one hand, a land-cover map's spatial resolution is fixed: a map at 10-meter resolution is not suitable for analysing large-scale phenomena, nor can it capture objects smaller than 10 meters. On the other hand, a map's nomenclature is chosen to meet a specific need and does not necessarily suit another user's needs. For instance, one nomenclature may group under "built-up areas" elements such as "roads" and "dwellings" that other nomenclatures classify separately.

Current approaches aim to adapt these nomenclatures and spatial resolutions. They are mainly based on purely semantic translation methods (LCCS...) applied at the nomenclature level by comparing class definitions. In doing so, they neglect that two objects of the same class can be translated differently depending, for instance, on their spatial context or temporal evolution.

This thesis addresses this interleaved problem by proposing context-wise translation methods to increase re-use possibilities and enable new land-cover map generation. First, we propose several strategies, mainly based on convolutional neural networks, that learn to translate a source map into a target map context-wisely. In particular, we show, on multiple translation cases, the crucial importance of taking spatial and geographical context into account (a forest in the mountains is probably occupied by conifers).
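The context-wise idea above can be sketched as a per-pixel classifier that scores target classes from a window of source labels rather than from a single pixel in isolation. Everything below (class counts, window size, random weights) is purely illustrative, not the thesis's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

N_SRC, N_TGT = 4, 3          # hypothetical nomenclature sizes
WIN = 3                      # 3x3 spatial context window

# Illustrative "learned" weights: one linear filter per target class,
# scoring a WIN x WIN one-hot neighbourhood of source labels.
weights = rng.normal(size=(N_TGT, N_SRC, WIN, WIN))

def translate(src_map):
    """Translate a 2-D source label map to target labels, context-wise:
    each output pixel depends on the whole window around the input pixel."""
    h, w = src_map.shape
    onehot = np.eye(N_SRC)[src_map]                       # (h, w, N_SRC)
    pad = WIN // 2
    padded = np.pad(onehot, ((pad, pad), (pad, pad), (0, 0)))
    scores = np.zeros((h, w, N_TGT))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + WIN, j:j + WIN]          # (WIN, WIN, N_SRC)
            # broadcast to (N_TGT, N_SRC, WIN, WIN), sum over context
            scores[i, j] = (weights * patch.transpose(2, 0, 1)).sum(axis=(1, 2, 3))
    return scores.argmax(-1)

src = rng.integers(0, N_SRC, size=(8, 8))
tgt = translate(src)   # an (8, 8) map in the target nomenclature
```

A purely semantic (class-definition-level) translation would instead be a fixed lookup table from source class to target class; the window is precisely what lets two pixels of the same source class be translated differently.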
Secondly, based on the observation that multilingual translation models outperform models trained to translate from a single source language to a single target language, we propose a multi-map translation framework that obtains several target nomenclatures from a single source map. We show that this model yields more robust results than models trained on a single translation, especially on maps with limited training samples.

Thirdly, we experiment with different multi-modal fusion configurations merging satellite images (optical and radar) and elevation data with land-cover maps.

Finally, we define the concept of, and propose a method to build, a semantic representation space common to all land-cover maps: translation is no longer the transformation from one discrete representation space with n classes (a nomenclature) to another, but a simple change of interpretation of a continuous semantic representation space shared by all nomenclatures. We propose the first application of this common semantic representation space to translation, focusing on the translation of source maps unseen during the translation model's training. The code and datasets (France-wide, six land-cover maps, satellite imagery, and hand-annotated ground truth) produced during this thesis are also accessible for reproducibility and comparison purposes.
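The common-space idea can be sketched as follows: each nomenclature maps its classes to prototype vectors in one shared continuous space, and translating a map is encoding it into that space and then reinterpreting the embedding under the target nomenclature by nearest prototype. All names, dimensions, and the random prototypes here are hypothetical, chosen only to make the mechanics concrete:

```python
import numpy as np

rng = np.random.default_rng(1)

DIM = 8  # dimensionality of the shared semantic space (illustrative)

# Hypothetical class prototypes: each nomenclature embeds its classes
# into the *same* continuous semantic space.
proto_a = rng.normal(size=(5, DIM))   # nomenclature A: 5 classes
proto_b = rng.normal(size=(3, DIM))   # nomenclature B: 3 classes

def encode(label_map, prototypes):
    """Lift a discrete label map into the shared semantic space."""
    return prototypes[label_map]                          # (h, w, DIM)

def interpret(embedding, prototypes):
    """Read a continuous embedding under a target nomenclature by
    nearest-prototype assignment."""
    dists = np.linalg.norm(embedding[..., None, :] - prototypes, axis=-1)
    return dists.argmin(-1)                               # (h, w)

# Translation A -> B needs no pairwise A-to-B model:
src = rng.integers(0, 5, size=(4, 4))
tgt = interpret(encode(src, proto_a), proto_b)
```

The practical appeal is that adding a new nomenclature only requires embedding its classes into the shared space, after which it can be interpreted against every existing map, including maps unseen when the model was trained.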


