Masters Theses
Date of Award
12-1994
Degree Type
Thesis
Degree Name
Master of Science
Major
Computer Science
Major Professor
Bruce Whitehead
Committee Members
Alfonso Pujol, Dinesh Mehta
Abstract
Other researchers have recently trained recurrent artificial neural networks with gradient-descent techniques to infer simple abstract machines. This thesis examines the effects of a dimensionality-reduction technique on the performance of such a network trained to recognize the odd-parity language. The network used is a second-order recurrent network similar to that of Giles et al. (1990), trained with the same complete-gradient method. The differences are: 1) a hidden layer (the encoding layer) inserted between the input layer and the original second layer (the state layer), and 2) a separate output layer--a single processing element (PE) with first-order connections from the state layer. The encoding layer has first-order connections from the input layer. The hypothesis is that the encoding layer should improve the performance of the network.
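The architecture described above can be summarized in a short sketch. The following Python (NumPy) code is a minimal illustration of the forward pass only, under assumptions the abstract does not state: a one-hot encoding of the two input symbols, logistic activations throughout, a fixed initial state, and a 0.5 acceptance threshold on the output PE. The complete-gradient training procedure is not shown, and the weight names are illustrative rather than taken from the thesis.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class EncodedSecondOrderRNN:
    # Sketch of the architecture in the abstract: input -> first-order
    # encoding layer -> second-order state layer -> single output PE.
    def __init__(self, n_inputs, n_encode, n_state, seed=0):
        rng = np.random.default_rng(seed)
        # First-order weights: input layer -> encoding layer
        self.V = rng.normal(scale=0.5, size=(n_encode, n_inputs))
        # Second-order weights: (state, encoding) pairs -> next state
        self.W = rng.normal(scale=0.5, size=(n_state, n_state, n_encode))
        # First-order weights: state layer -> single output PE
        self.u = rng.normal(scale=0.5, size=n_state)
        self.n_state = n_state

    def classify(self, string):
        # Run one binary string through the network and return the output
        # PE's activation after the final symbol (accept if > 0.5; the
        # threshold is an assumption).
        s = np.zeros(self.n_state)
        s[0] = 1.0                          # assumed fixed initial state
        for symbol in string:
            x = np.eye(2)[symbol]           # one-hot input (assumption)
            e = sigmoid(self.V @ x)         # encoding layer (first-order)
            # Second-order update: s_j <- g( sum_{i,k} W[j,i,k] * s[i] * e[k] )
            s = sigmoid(np.einsum('jik,i,k->j', self.W, s, e))
        return sigmoid(self.u @ s)

# Untrained example run on a few strings of the odd-parity language
net = EncodedSecondOrderRNN(n_inputs=2, n_encode=2, n_state=4)
for bits in ([1], [1, 0, 1], [1, 1, 0]):
    print(bits, float(net.classify(bits)))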
According to two measures of performance--the fraction of correctly classified strings and the total epoch error--the network performed better without encoding PEs. The network converged 70% of the time with zero encoding PEs (i.e., without any compression method) and did not converge at all when encoding PEs were used. The error and classification-accuracy plots, taken as a function of the number of encoding PEs, reveal a pattern whose general shape is not heavily dependent on the data set used for testing. Thus, the accuracy and error information may be useful in characterizing network behavior, with or without convergence.
Recommended Citation
Ferrell, Michael Peter, "Neural grammatical inference using network-adapted encoding." Master's Thesis, University of Tennessee, 1994.
https://trace.tennessee.edu/utk_gradthes/11534