
Construction of RBF classifiers with tunable units using orthogonal forward selection based on leave-one-out misclassification rate

Authors
S. Chen, C.J. Harris, X. Hong
Publisher
IEEE Computational Intelligence Society
Disciplines
  • Computer Science

Abstract

[IJCNN1219]

S. Chen, C.J. Harris
School of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, U.K.
E-mails: {sqc,[email protected]

X. Hong
Department of Cybernetics, University of Reading, Reading RG6 6AY, U.K.
E-mail: [email protected]

An orthogonal forward selection (OFS) algorithm based on the leave-one-out (LOO) misclassification rate is proposed for the construction of radial basis function (RBF) classifiers with tunable units. Each stage of the construction process determines an RBF unit, namely its centre vector, diagonal covariance matrix and weight, by minimising the LOO statistic. This OFS-LOO algorithm is computationally efficient and capable of constructing parsimonious RBF classifiers that generalise well. Moreover, the algorithm is fully automatic: the user does not need to specify a termination criterion for the construction process. The effectiveness of the proposed RBF classifier construction procedure is demonstrated on three classification benchmark examples.

I. INTRODUCTION

A basic principle in nonlinear data modelling is to seek the smallest possible model that explains the training data. This parsimonious principle is particularly relevant to the construction of radial basis function (RBF) classifiers. The key questions in constructing an RBF classifier are how many RBF units to use, and what the positions (centres) and shapes (variances or covariance matrices) of the RBF nodes should be. The objective is to obtain sparse RBF classifiers that generalise well, i.e. achieve a small misclassification rate on data unseen in training.
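The staged construction described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: a random candidate search stands in for the paper's continuous optimisation of each unit's centre and widths, the LOO rate is computed by explicit refitting rather than the paper's efficient recursion, and all function names are illustrative.

```python
import numpy as np

def gaussian(X, c, s):
    # Response of one Gaussian RBF unit with centre c and per-dimension widths s
    return np.exp(-np.sum(((X - c) / s) ** 2, axis=1))

def loo_error(Phi, y):
    # Leave-one-out misclassification rate for a linear-in-weights classifier
    # y_hat = sign(Phi w), estimated by refitting with each sample held out.
    n = len(y)
    errs = 0
    for i in range(n):
        mask = np.arange(n) != i
        w, *_ = np.linalg.lstsq(Phi[mask], y[mask], rcond=None)
        if np.sign(Phi[i] @ w) != y[i]:
            errs += 1
    return errs / n

def ofs_loo(X, y, max_units=10, n_candidates=50, rng=None):
    # Greedy orthogonal-forward-selection-style loop: at each stage, add the
    # tunable RBF unit (centre + widths from random candidates) that most
    # reduces the LOO misclassification rate.  Construction stops automatically
    # when the LOO rate no longer improves -- no user termination criterion.
    rng = np.random.default_rng(rng)
    cols = [np.ones(len(y))]                 # bias column
    best = loo_error(np.column_stack(cols), y)
    for _ in range(max_units):
        stage_best = None
        for _ in range(n_candidates):
            # Candidate unit: centre perturbed from a training point,
            # widths drawn uniformly (illustrative ranges).
            c = X[rng.integers(len(X))] + 0.1 * rng.standard_normal(X.shape[1])
            s = rng.uniform(0.3, 2.0, size=X.shape[1])
            phi = gaussian(X, c, s)
            e = loo_error(np.column_stack(cols + [phi]), y)
            if stage_best is None or e < stage_best[0]:
                stage_best = (e, phi)
        if stage_best[0] >= best:            # LOO rate stopped improving: terminate
            break
        best = stage_best[0]
        cols.append(stage_best[1])
    return np.column_stack(cols), best
```

On a well-separated two-class problem this loop typically terminates after one or two units, illustrating how the LOO statistic provides both the selection criterion and the stopping rule.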
A popular approach for constructing RBF classifiers is to consider the training input data points a
