Abstract
Many authors treat neural network (NN) supervised training as an optimization process in which the weights are iteratively adjusted to minimize an error (cost) function representing the difference between the obtained and the desired output. The error function surface is usually nonconvex and can be highly convoluted, with many plateaux, long narrow troughs, saddle points, and local minima (LM). Because backpropagation (BP), a widely used method for supervised learning, relies on local optimization, it can get stuck in LM. This can make learning very difficult and can sometimes render convergence to an optimal solution impossible. In this paper we propose a stochastic method for global optimization (GO) that makes use of a uniformly distributed LPτ sequence of points. The developed technique is tested on common benchmark problems and applied to neural network supervised learning. The conducted tests show that the proposed method can be successfully used for optimal supervised training of small-size NNs.
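The abstract does not spell out the algorithm, but the core idea, probing the NN weight space at the points of a uniformly distributed LPτ (Sobol) sequence and keeping the best candidate, can be sketched as follows. Everything in this sketch (the 2-2-1 network, the XOR task, the [-5, 5] weight box, and plain best-of-sample selection) is an illustrative assumption rather than the authors' exact procedure, and SciPy's `scipy.stats.qmc.Sobol` generator stands in for the paper's LPτ generator.

```python
# Minimal sketch (assumed setup: 2-2-1 net, XOR task, [-5, 5] weight box;
# not the authors' exact algorithm) of a stochastic global search over NN
# weights using a uniformly distributed LP-tau / Sobol sequence of points.
import numpy as np
from scipy.stats import qmc

# XOR benchmark: 4 input patterns, binary targets (illustrative task).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mse(w):
    # Unpack a flat 9-parameter vector for a 2-2-1 network:
    # 2x2 hidden weights, 2 hidden biases, 2 output weights, 1 output bias.
    W1, b1 = w[0:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)
    return np.mean((output - y) ** 2)

dim = 9
sobol = qmc.Sobol(d=dim, scramble=True, seed=0)  # randomized LP-tau points
unit = sobol.random_base2(m=12)                  # 2**12 points in [0, 1)^9
weights = qmc.scale(unit, [-5.0] * dim, [5.0] * dim)

errors = np.array([mse(w) for w in weights])
best = weights[errors.argmin()]
print(f"best MSE over {len(weights)} LP-tau points: {errors.min():.4f}")
```

A natural follow-up, consistent with the paper's framing of GO for supervised training, would be to refine `best` with a local method such as BP gradient descent once the low-discrepancy sweep has located a promising basin.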
Original language | English |
---|---|
Pages (from-to) | 27-32 |
Number of pages | 6 |
Journal | Biomedical Sciences Instrumentation |
Volume | 36 |
Publication status | Published - 2000 |
Keywords
- Global optimization
- Neural network learning
- Stochastic methods