Looking inside self-organizing map ensembles with resampling and negative correlation learning

Authors
Journal
Neural Networks
ISSN
0893-6080
Publisher
Elsevier
Publication Date
Volume
24
Issue
1
Identifiers
DOI: 10.1016/j.neunet.2010.08.004
Keywords
  • Ensemble Learning
  • Self-Organizing Maps
  • Negative Correlation Learning
  • Regression
  • Bagging
  • Random Subspace Method
Disciplines
  • Computer Science

Abstract

In this work, we address the problem of training ensembles or, more generally, sets of self-organizing maps (SOMs). In light of recent theory on ensemble learning, in particular negative correlation learning (NCL), the question arises whether SOM ensemble learning can benefit from non-independent learning, in which the individual learning stages are interlinked by a term penalizing correlated errors. We show that SOMs with a small number of neurons are well suited as weak ensemble components. Using our approach, we obtain efficiently trained SOM ensembles that outperform other reference learners. Owing to the transparency of SOMs, we give insights into the interrelation between diversity and sublocal accuracy inside SOMs, and we shed light on the diversity arising from a combination of several factors: explicit versus implicit, and inter- versus intra-diversity. NCL fully exploits the potential of SOM ensemble learning when the individual neural networks cooperate at the highest level and stability is satisfied. The reported quantified diversities correlate strongly with prediction performance.
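To make the abstract's central idea concrete, the following is a minimal sketch of the NCL penalty for a regression ensemble. It is not the authors' exact training procedure for SOM ensembles; the function name, the λ parameter, and the simplification of treating the ensemble mean as a constant during differentiation are illustrative assumptions, following the standard NCL formulation.

```python
import numpy as np

def ncl_penalty_gradients(preds, target, lam=0.5):
    """Per-member gradients under negative correlation learning (NCL).

    Each ensemble member i minimises
        E_i = (f_i - y)^2 + lam * p_i,
    where the penalty
        p_i = (f_i - f_bar) * sum_{j != i} (f_j - f_bar) = -(f_i - f_bar)^2
    pushes members away from the ensemble mean f_bar, so that their
    errors become negatively correlated rather than independent.
    """
    preds = np.asarray(preds, dtype=float)
    f_bar = preds.mean()
    # dE_i/df_i, treating f_bar as constant (the common NCL simplification):
    # 2(f_i - y) from the squared error, minus 2*lam*(f_i - f_bar) from p_i.
    return 2.0 * (preds - target) - 2.0 * lam * (preds - f_bar)
```

With `lam=0` this reduces to independent training of each member; increasing `lam` strengthens the decorrelation pressure that, per the abstract, interlinks the individual learning stages.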
