Simulation of weight pruning process in backpropagation neural network for pattern classification: A self-running threshold approach

Computer Methods in Applied Mechanics and Engineering
DOI: 10.1016/s0045-7825(98)00072-3


Abstract Neural network minimization is a process in which non-contributing elements (processing elements and weights) are deleted, leaving a network with a minimal configuration whose remaining elements all contribute positively. The aim of minimization is a network that solves a given task optimally without loss of performance, so the study of network minimization becomes essential for complicated tasks. The present study investigates automatic thresholding methods for the weight-deleting process, minimizing the network and enhancing its performance in classifying six control chart patterns. To this end, two weight-deleting processes are developed: dynamic and static. The automatic weight-deleting mechanism introduced here is based on the variation of the RMS error: once the RMS error drops to a desired level, weight deletion starts, and the optimal threshold is then selected automatically at the point where the RMS error becomes minimum. The study is extended to two sets of data structures, namely the first and second data kinds. In the first data kind, both the random variables and the coefficients of the equations governing the chart patterns are varied; in the second, only the random variables are changed. This allows a comparison of the generalization capacity of networks trained on each data kind separately. It is found that the dynamic deleting process yields improved performance, that a late start of the deleting process benefits the learning rate, and that the first data kind gives the network better generalization capacity.
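The mechanism described above, pruning that begins only once the RMS error has fallen to a desired level, can be sketched in code. The abstract does not give the pruning criterion, trigger value, or network architecture, so everything below (magnitude-based deletion, the toy data, the threshold constants) is an illustrative assumption rather than the authors' exact method:

```python
import numpy as np

# Hedged sketch of RMS-error-triggered weight pruning ("dynamic deleting").
# The trigger level, magnitude criterion, and network size are assumptions
# made for illustration; the paper's actual settings are not in the abstract.

rng = np.random.default_rng(0)

# Toy two-class data (a stand-in for the six control chart patterns).
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# One hidden layer with sigmoid activations.
W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))
mask1 = np.ones_like(W1)          # 1 = weight kept, 0 = weight deleted
mask2 = np.ones_like(W2)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr, trigger, prune_thr = 0.5, 0.35, 0.05   # assumed constants

for epoch in range(2000):
    h = sigmoid(X @ (W1 * mask1))
    out = sigmoid(h @ (W2 * mask2))
    err = y - out
    rms = np.sqrt(np.mean(err ** 2))

    # Dynamic pruning: once the RMS error is low enough, delete small
    # weights every epoch; deleted weights stay deleted (mask is monotone).
    if rms < trigger:
        mask1 *= (np.abs(W1) > prune_thr)
        mask2 *= (np.abs(W2) > prune_thr)

    # Plain backpropagation; gradients are masked so deleted weights
    # never receive updates.
    d_out = err * out * (1 - out)
    d_h = (d_out @ (W2 * mask2).T) * h * (1 - h)
    W2 += lr * (h.T @ d_out) / len(X) * mask2
    W1 += lr * (X.T @ d_h) / len(X) * mask1

kept = int(mask1.sum() + mask2.sum())
total = mask1.size + mask2.size
print(f"final RMS error: {rms:.3f}, weights kept: {kept}/{total}")
```

Delaying the trigger (a smaller `trigger` value) corresponds to the "late start" of the deleting process that the abstract reports as beneficial: the network first settles into a low-error region before any weights are removed.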
