Neural network training and stochastic global optimization

I. Jordanov*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper investigates the local minima problem in supervised backpropagation learning of neural networks (NN). The proposed training algorithm employs a stochastic optimization technique based on so-called low-discrepancy sequences. The learning process is treated as an unconstrained optimization problem: once the parameter space (defined by the NN weights) and the objective function are defined, the proposed method searches for a global optimum. First, regions of attraction are identified as candidates for local minima; second, each region is searched to locate its minimum and subsequently to find the global minimum. The algorithm is initially tested on multimodal mathematical functions and then on common benchmark problems for NN training. Finally, the results are discussed and compared with those obtained by backpropagation and other methods.
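The abstract outlines a two-phase search: low-discrepancy sampling of the parameter space to find candidate regions of attraction, followed by local minimization within each region. The following is a minimal Python sketch of that general strategy, assuming a hand-rolled Halton sequence, the Rastrigin function as a stand-in for the paper's multimodal tests, and SciPy's Nelder-Mead for the local phase; the function names, parameter choices, and candidate-selection heuristic are illustrative assumptions, not the paper's exact method.

    # Hypothetical sketch of the two-phase strategy described in the abstract:
    # (1) cover the search box with a low-discrepancy (Halton) sequence and
    #     keep the best samples as candidate regions of attraction,
    # (2) refine each candidate with a local minimizer, return the best result.
    import numpy as np
    from scipy.optimize import minimize

    def halton(n, dim):
        """First n points of a Halton low-discrepancy sequence in [0, 1)^dim."""
        primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
        pts = np.empty((n, dim))
        for d in range(dim):
            base = primes[d]
            for i in range(n):
                f, x, k = 1.0, 0.0, i + 1
                while k > 0:       # radical-inverse of (i + 1) in the given base
                    f /= base
                    x += f * (k % base)
                    k //= base
                pts[i, d] = x
        return pts

    def rastrigin(x):
        """A standard multimodal benchmark (global minimum 0 at the origin)."""
        x = np.asarray(x)
        return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

    def global_search(f, lower, upper, n_samples=512, n_candidates=8):
        lower, upper = np.asarray(lower, float), np.asarray(upper, float)
        # Phase 1: scan the box with low-discrepancy samples and keep the
        # best few as candidate regions of attraction.
        pts = lower + halton(n_samples, lower.size) * (upper - lower)
        values = np.array([f(p) for p in pts])
        candidates = pts[np.argsort(values)[:n_candidates]]
        # Phase 2: local refinement started from each candidate.
        results = [minimize(f, c, method="Nelder-Mead") for c in candidates]
        return min(results, key=lambda r: r.fun)

    best = global_search(rastrigin, lower=[-5.12] * 2, upper=[5.12] * 2)
    print(best.x, best.fun)   # should land near the origin with value ~ 0

With enough samples, the uniform coverage of the Halton scan typically places at least one candidate inside the global basin, which the local step then refines; this is the usual rationale for preferring low-discrepancy points over pseudo-random ones in the exploration phase.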

Original language: English
Title of host publication: Proceedings of the 9th International Conference on Neural Information Processing, 2002. ICONIP '02
Editors: Jagath C. Rajapakse, Xin Yao, Lipo Wang, Kunihiko Fukushima, Soo-Young Lee
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 488-492
Number of pages: 5
ISBN (Electronic): 9810475241, 9789810475246
Publication status: Published - 5 Jun 2003
Event: 9th International Conference on Neural Information Processing, ICONIP 2002 - Singapore, Singapore
Duration: 18 Nov 2002 - 22 Nov 2002

Conference

Conference: 9th International Conference on Neural Information Processing, ICONIP 2002
Country/Territory: Singapore
City: Singapore
Period: 18/11/02 - 22/11/02
