Improving meta-learning model via meta-contrastive loss

Authors
  • Tian, Pinzhuo1
  • Gao, Yang1
  • 1 Nanjing University, Jiangsu 210023, China
Type
Published Article
Journal
Frontiers of Computer Science
Publisher
Higher Education Press
Publication Date
Jan 08, 2022
Volume
16
Issue
5
Identifiers
DOI: 10.1007/s11704-021-1188-9
Source
Springer Nature
Disciplines
  • Special Section on Meta-learning: Theories, Algorithms and Applications

Abstract

Recently, addressing the few-shot learning problem within the meta-learning framework has achieved great success. Regularization is a powerful technique widely used to improve machine learning algorithms, yet little research has focused on designing appropriate meta-regularizations to further improve the generalization of meta-learning models in few-shot learning. In this paper, we propose a novel meta-contrastive loss that can be regarded as a regularization to fill this gap. The motivation for our method is that the limited data in few-shot learning is only a small sample drawn from the whole data distribution, and different samples can lead to differently biased representations of that distribution. Consequently, the models trained on the few training data (support set) and the test data (query set) may become misaligned in the model space, so that the model learned on the support set cannot generalize well to the query data. The proposed meta-contrastive loss is designed to align the models of the support and query sets to overcome this problem, thereby improving the performance of the meta-learning model in few-shot learning. Extensive experiments demonstrate that our method can improve the performance of different gradient-based meta-learning models on various learning problems, e.g., few-shot regression and classification.
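To make the alignment idea concrete, the sketch below is a minimal illustration, not the authors' actual formulation: in a MAML-style setup, the shared initialization is adapted once on the support set and once on the query set, and a hypothetical alignment penalty between the two adapted parameter vectors is added to the usual meta-objective as a regularizer. All function names, the single-step adaptation, and the L2 form of the penalty are assumptions for illustration only.

import torch
import torch.nn.functional as F


def adapt(params, x, y, lr=0.01):
    """One gradient step of task adaptation from the shared initialization."""
    w, b = params
    loss = F.mse_loss(x @ w + b, y)
    grads = torch.autograd.grad(loss, [w, b], create_graph=True)
    return [p - lr * g for p, g in zip([w, b], grads)]


def meta_loss_with_alignment(params, support, query, reg_weight=0.1):
    xs, ys = support
    xq, yq = query
    # Task model adapted on the support set (the usual MAML-style adaptation).
    w_s, b_s = adapt(params, xs, ys)
    # Hypothetical second adaptation on the query set, used only to form the
    # alignment regularizer between the two task models.
    w_q, b_q = adapt(params, xq, yq)
    # Standard meta-objective: support-adapted model evaluated on query data.
    task_loss = F.mse_loss(xq @ w_s + b_s, yq)
    # Assumed alignment penalty: L2 distance between support- and query-adapted parameters.
    align = F.mse_loss(torch.cat([w_s.flatten(), b_s]),
                       torch.cat([w_q.flatten(), b_q]))
    return task_loss + reg_weight * align


# Usage on a toy few-shot regression task.
w0 = torch.randn(5, 1, requires_grad=True)
b0 = torch.zeros(1, requires_grad=True)
xs, ys = torch.randn(5, 5), torch.randn(5, 1)    # support set
xq, yq = torch.randn(15, 5), torch.randn(15, 1)  # query set
loss = meta_loss_with_alignment([w0, b0], (xs, ys), (xq, yq))
loss.backward()  # gradients flow back to the shared initialization (w0, b0)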
