Masters Theses
Date of Award
12-1993
Degree Type
Thesis
Degree Name
Master of Science
Major
Computer Science
Major Professor
Bruce Whitehead
Committee Members
Alfonso Pujol, Dinesh Mehta
Abstract
Neural networks that use the Least Mean Squares (LMS) learning rule or the Generalized Delta Rule require the proper selection of a learning rate parameter to ensure good convergence during training. This thesis discusses algorithms that modify the learning rate as the network is being trained without sacrificing convergence. Radial basis function networks and backpropagation networks were used for the development and testing of these adaptive algorithms. This research shows that modifying the learning rate for gradient descent techniques based upon the history of the normalized error can eliminate the guesswork required to select a good static learning rate. Additionally, it was found that for a given number of training epochs, an adaptive learning rate algorithm can improve a neural network's convergence toward the global minimum of its error surface when compared to a static learning rate algorithm.
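The abstract does not give the thesis's specific update rules, so the following is only a minimal sketch of the kind of technique described: an LMS learner whose rate is adapted from the history of the normalized error, here using a bold-driver-style heuristic (grow the rate while the error falls, cut it back when the error rises). The growth/shrink factors, the variance-based normalization, and all data are assumptions for illustration.

import numpy as np

# Synthetic linear problem for an LMS learner (illustrative only)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                   # inputs
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=200)     # noisy linear targets

w = np.zeros(3)        # LMS weights
lr = 0.001             # initial learning rate (arbitrary starting guess)
prev_err = np.inf

for epoch in range(100):
    grad = -2 * X.T @ (y - X @ w) / len(y)      # gradient of mean squared error
    w -= lr * grad

    # Normalized error: MSE divided by target variance (one plausible choice)
    err = np.mean((y - X @ w) ** 2) / np.var(y)

    # Adapt the rate from the error history (bold-driver-style heuristic);
    # the full heuristic also reverts the step on an error increase
    lr *= 1.05 if err < prev_err else 0.5
    prev_err = err

print(f"learned weights: {w}, final lr: {lr:.5f}")

Run as written, the rate climbs automatically from a deliberately small initial value, which is the practical point the abstract makes: the user no longer has to guess a good static rate in advance.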
Recommended Citation
Ealy, Derek Michael, "Adaptive learning rate techniques for neural networks." Master's Thesis, University of Tennessee, 1993.
https://trace.tennessee.edu/utk_gradthes/11873