A new combinatorial optimization technique and its applications
Abstract
In this thesis, a new global optimization technique is presented and discussed, together with its applications, in particular to neural networks, and its implementation on parallel computers. The algorithm is compared to other optimization algorithms such as gradient descent, genetic algorithms, and simulated annealing. Its accuracy of convergence, speed of convergence, and ease of use make it worthy of further study. The technique's advantages can be summarized as follows: the function being optimized does not have to be continuous or differentiable; no random mechanism is used, so the algorithm does not inherit the slow convergence of random searches; no fine-tuning parameters (such as the step size of gradient descent or the temperature of simulated annealing) are needed; and the algorithm can be implemented on parallel computers to offset the growth in execution time as the number of dimensions increases.
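The abstract does not specify the thesis algorithm itself, so as a hedged illustration of the class of search it describes (derivative-free, deterministic, with no user-tuned schedule), the sketch below uses a standard compass (pattern) search on a non-differentiable objective. The method and all names here are illustrative stand-ins, not the author's technique.

```python
# Illustrative only: a deterministic, derivative-free compass (pattern) search.
# This is NOT the thesis algorithm (which the abstract does not describe); it
# merely demonstrates the properties the abstract claims: no gradients, no
# randomness, and no hand-tuned step rates or temperature schedules.

def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    """Minimize f over R^n by probing +/- step along each coordinate axis."""
    x = list(x0)
    fx = f(x)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = x.copy()
                trial[i] += delta
                ft = f(trial)
                if ft < fx:           # deterministic accept: strict improvement only
                    x, fx = trial, ft
                    improved = True
        if not improved:
            step /= 2.0               # self-adjusting step; no user-tuned schedule
        it += 1
    return x, fx

# A non-differentiable test objective (absolute-value kinks at the optimum),
# which a gradient-based method could not handle directly.
f = lambda x: abs(x[0] - 1.0) + abs(x[1] + 2.0)
print(compass_search(f, [0.0, 0.0]))  # converges to (1.0, -2.0)
```

Because each coordinate probe is independent of the others within a sweep, the inner loop also parallelizes naturally across dimensions, consistent with the parallel-implementation claim above.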
Degree
Ph.D.
Advisors
Ersoy, Purdue University.
Subject Area
Electrical engineering