Optimal design of connectivity in neural network training

Ivan Jordanov*, Robert Brown

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Many authors consider neural network (NN) supervised training as an optimization process during which the weights are iteratively adjusted in order to minimise an error (cost) function representing the difference between the obtained and the desired output. The error function surface is usually nonconvex, can be highly convoluted with many plateaux and long narrow troughs, and may contain many saddle points and local minima (LM). Because backpropagation (BP), a widely used method for supervised learning, relies on local optimization, it can get stuck in LM. This can make learning very difficult and can sometimes prevent convergence to an optimal solution. In this paper we propose a stochastic method for global optimization (GO) which makes use of a uniformly distributed LPτ sequence of points. The developed technique is tested on common benchmark problems and applied to neural network supervised learning. The conducted tests show that the proposed method can be successfully used for optimal supervised training of small-size NNs.
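The LPτ sequences named in the abstract are the low-discrepancy sequences introduced by Sobol'. The sketch below is only an illustration of the general idea, not the authors' algorithm: it samples candidate weight vectors for a small 2-2-1 network on the XOR problem from a Sobol' sequence over a fixed box and keeps the vector with the smallest sum-of-squares error. The network size, the search box [-10, 10]^9, and the sample count are assumptions made for the example.

```python
# Illustrative sketch (assumed details, not the paper's exact method):
# quasi-random global search over the weights of a tiny 2-2-1 network,
# using a Sobol' (LP-tau) low-discrepancy sequence of candidate points.
import numpy as np
from scipy.stats import qmc  # Sobol' sampler; requires SciPy >= 1.7

def forward(w, X):
    """2-2-1 feedforward net with sigmoid units; w packs all 9 parameters."""
    W1 = w[:4].reshape(2, 2)   # input -> hidden weights
    b1 = w[4:6]                # hidden biases
    W2 = w[6:8]                # hidden -> output weights
    b2 = w[8]                  # output bias
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def sse(w, X, t):
    """Sum-of-squares error between network output and target."""
    return np.sum((forward(w, X) - t) ** 2)

# XOR training set
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 1, 1, 0], dtype=float)

dim = 9                                  # number of trainable parameters
sampler = qmc.Sobol(d=dim, scramble=True, seed=0)
points = sampler.random_base2(m=12)      # 2**12 = 4096 points in [0, 1)^9
points = qmc.scale(points,               # map onto the assumed search box
                   np.full(dim, -10.0), np.full(dim, 10.0))

errors = np.array([sse(w, X, t) for w in points])
best = points[np.argmin(errors)]
print(f"best SSE over {len(points)} Sobol' points: {errors.min():.4f}")
```

Low-discrepancy points cover the search box more evenly than pseudorandom samples, which is what makes them attractive for global search; in practice such a search would typically be followed by a local refinement (e.g. a few BP or gradient-descent steps) started from the best point found.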

Original language: English
Pages (from-to): 27-32
Number of pages: 6
Journal: Biomedical Sciences Instrumentation
Volume: 36
Publication status: Published - 2000

Keywords

  • Global optimization
  • Neural network learning
  • Stochastic methods
