A survey on deep multimodal learning for computer vision: advances, trends, applications, and datasets

Authors
  • Bayoudh, Khaled
  • Knani, Raja
  • Hamdaoui, Fayçal
  • Mtibaa, Abdellatif
  • University of Monastir
Type
Published Article
Journal
The Visual Computer
Publisher
Springer Berlin Heidelberg
Publication Date
Jun 10, 2021
Pages
1–32
Identifiers
DOI: 10.1007/s00371-021-02166-7
PMID: 34131356
PMCID: PMC8192112
Source
PubMed Central
Disciplines
  • Survey
License
Unknown

Abstract

Research in multimodal learning has progressed rapidly over the last decade in several areas, especially in computer vision. The growing potential of multimodal data streams and deep learning algorithms has contributed to the growing ubiquity of deep multimodal learning. This involves the development of models capable of processing and analyzing multimodal information uniformly. Unstructured real-world data can inherently take many forms, also known as modalities, often including visual and textual content. Extracting relevant patterns from this kind of data remains a motivating goal for researchers in deep learning. In this paper, we seek to improve the understanding of key concepts and algorithms of deep multimodal learning for the computer vision community by exploring how to build deep models that consider the integration and combination of heterogeneous visual cues across sensory modalities. In particular, we summarize six perspectives from the current literature on deep multimodal learning, namely: multimodal data representation, multimodal fusion (i.e., both traditional and deep learning-based schemes), multitask learning, multimodal alignment, multimodal transfer learning, and zero-shot learning. We also survey current multimodal applications and present a collection of benchmark datasets for solving problems in various vision domains. Finally, we highlight the limitations and challenges of deep multimodal learning and provide insights and directions for future research.
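
The abstract mentions multimodal fusion in both traditional and deep learning-based forms. As a rough illustration only, and not code from the surveyed paper, the PyTorch sketch below contrasts two common fusion schemes: early (feature-level) fusion by concatenation and late (decision-level) fusion by averaging per-modality predictions. All class names, feature dimensions, and the 0.5 averaging weight are illustrative assumptions.

```python
# Minimal sketch of early vs. late multimodal fusion (illustrative only).
import torch
import torch.nn as nn


class EarlyFusionClassifier(nn.Module):
    """Concatenate per-modality features, then classify them jointly."""

    def __init__(self, img_dim=512, txt_dim=300, num_classes=10):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(img_dim + txt_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, img_feat, txt_feat):
        # Feature-level (early) fusion: join modalities before prediction.
        fused = torch.cat([img_feat, txt_feat], dim=-1)
        return self.classifier(fused)


class LateFusionClassifier(nn.Module):
    """Classify each modality separately, then average the logits."""

    def __init__(self, img_dim=512, txt_dim=300, num_classes=10):
        super().__init__()
        self.img_head = nn.Linear(img_dim, num_classes)
        self.txt_head = nn.Linear(txt_dim, num_classes)

    def forward(self, img_feat, txt_feat):
        # Decision-level (late) fusion: combine per-modality predictions.
        return 0.5 * (self.img_head(img_feat) + self.txt_head(txt_feat))


if __name__ == "__main__":
    img = torch.randn(4, 512)   # e.g., image embeddings from a CNN backbone
    txt = torch.randn(4, 300)   # e.g., averaged word-embedding text features
    print(EarlyFusionClassifier()(img, txt).shape)  # torch.Size([4, 10])
    print(LateFusionClassifier()(img, txt).shape)   # torch.Size([4, 10])
```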
