Improving the Performance of Feedforward Neural Networks by Noise Injection into Hidden Neurons

Authors
  • Hammadi, Nait Charif1
  • Ito, Hideo2
  • 1 Chiba University, Department of Information & Computer Sciences, Graduate School of Science & Technology, 1-33 Yayoi-cho, Chiba 263, Japan
  • 2 Chiba University, Department of Information & Computer Sciences, Faculty of Engineering, 1-33 Yayoi-cho, Chiba 263, Japan
Type
Published Article
Journal
Journal of Intelligent & Robotic Systems
Publisher
Springer-Verlag
Publication Date
Feb 01, 1998
Volume
21
Issue
2
Pages
103–115
Identifiers
DOI: 10.1023/A:1007965819848
Source
Springer Nature

Abstract

The generalization ability of feedforward neural networks (NNs) depends on the size of the training set and the features of the training patterns. Theoretically, the best classification performance is obtained when all possible patterns are used to train the network, which is practically impossible. In this paper a new noise injection technique is proposed: noise injection into the hidden neurons at the summation level. Assuming that the test patterns are drawn from the same population used to generate the training set, we show that noise injection into hidden neurons is equivalent to training with noisy input patterns (i.e., with a larger training set). Simulation results indicate that networks trained with the proposed technique and networks trained with noisy input patterns have almost the same generalization and fault-tolerance abilities. The learning time required by the proposed method is considerably less than that required by training with noisy input patterns, and is almost the same as that required by standard backpropagation with noise-free input patterns.
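The core idea of the abstract, injecting zero-mean noise into the hidden neurons' summations (pre-activations) during training, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the network size, task (XOR), learning rate, and noise level are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-4-1 MLP trained on XOR with backpropagation.
# During training, Gaussian noise is added to the hidden-layer
# summations, as the abstract describes; all hyperparameters here
# are assumptions for the sake of a runnable sketch.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=1.0, size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(scale=1.0, size=(4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mse(W1, b1, W2, b2):
    # Noise-free forward pass for evaluation.
    p = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
    return float(np.mean((p - y) ** 2))

lr, noise_std = 0.5, 0.1
initial_loss = mse(W1, b1, W2, b2)

for _ in range(5000):
    # Forward pass: noise injected at the summation level of hidden units.
    s1 = X @ W1 + b1 + rng.normal(scale=noise_std, size=(4, 4))
    h = sigmoid(s1)
    out = sigmoid(h @ W2 + b2)

    # Backpropagation of squared error.
    d2 = (out - y) * out * (1 - out)
    d1 = (d2 @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d2
    b2 -= lr * d2.sum(axis=0)
    W1 -= lr * X.T @ d1
    b1 -= lr * d1.sum(axis=0)

final_loss = mse(W1, b1, W2, b2)
pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
```

Because the injected noise is zero-mean and applied only during training, evaluation uses the ordinary noise-free forward pass; the abstract's equivalence argument is that this training procedure behaves like backpropagation on an enlarged, noisy input set.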
