Supervised neural network training with a hybrid global optimization technique

Antoniya Georgieva*, Ivan Jordanov

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


A novel hybrid global optimization method for the supervised learning of feedforward neural networks (NN) is investigated. The network weights are determined by minimizing the traditional mean-square error function. The optimization technique, called GLPτS, combines a novel global heuristic search based on low-discrepancy sequences of points, called LPτ Optimization (LPτO), with a Genetic Algorithm and a Simplex local search. The proposed method is initially tested on 10 multimodal mathematical functions of 30 and 100 dimensions. Subsequently, it is applied to training moderate-size NN for function fitting and for solving benchmark classification problems, such as the parity problems (XOR and 4-Parity), the Iris dataset, and a medical diagnosis problem (Diabetes). The investigated technique is also tested on predicting the continuous output of a mechanical system dataset (Servo). Finally, the results are analysed, discussed, and compared with others.
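The pipeline described above (a global search over a low-discrepancy point set, followed by local simplex refinement of the best candidate) can be sketched roughly as follows. This is an illustrative sketch only, not the authors' implementation: it uses SciPy's Sobol sequence as a stand-in for the LPτ sequence, omits the Genetic Algorithm stage, and trains a tiny 2-2-1 network on XOR by minimizing the mean-square error.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import qmc

# XOR training data for a tiny 2-2-1 feedforward network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

DIM = 9  # 2x2 hidden weights + 2 hidden biases + 2 output weights + 1 output bias

def mse(w):
    """Mean-square error of the network with flattened weight vector w."""
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(X @ W1 + b1)        # hidden layer
    out = np.tanh(h @ W2 + b2)      # scalar output per sample
    return np.mean((out - y) ** 2)

# Global stage: evaluate the error on a low-discrepancy (Sobol) point set
# spread over a weight box, then keep the best candidate.
sampler = qmc.Sobol(d=DIM, scramble=True, seed=0)
pts = qmc.scale(sampler.random_base2(m=10),       # 2**10 = 1024 candidates
                np.full(DIM, -5.0), np.full(DIM, 5.0))
best = pts[np.argmin([mse(p) for p in pts])]

# Local stage: polish the best candidate with Nelder-Mead simplex search.
res = minimize(mse, best, method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-10})
print(f"global-stage MSE: {mse(best):.6f}, after simplex: {res.fun:.6f}")
```

In the paper's full method the candidates surviving the global stage would also be recombined by a Genetic Algorithm before the local search; the two-stage version here only illustrates the global-then-local division of labour.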

Original language: English
Title of host publication: The 2006 IEEE International Joint Conference on Neural Network Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 8
ISBN (Print): 0780394909, 9780780394902
Publication status: Published - 30 Oct 2006
Event: International Joint Conference on Neural Networks 2006, IJCNN '06 - Vancouver, BC, Canada
Duration: 16 Jul 2006 - 21 Jul 2006

Publication series

Name: Proceedings of IEEE International Conference on Neural Networks
ISSN (Print): 2161-4393
ISSN (Electronic): 2161-4407


Conference: International Joint Conference on Neural Networks 2006, IJCNN '06
City: Vancouver, BC


Keywords

  • Genetic algorithms
  • Global optimization
  • Hybrid methods
  • Low-discrepancy sequences
  • Simplex search
  • Supervised NN learning


