Masters Theses

Date of Award

12-1994

Degree Type

Thesis

Degree Name

Master of Science

Major

Computer Science

Major Professor

Jens Gregor

Committee Members

Michael Thomason, Bruce MacLennan

Abstract

The research reported within this thesis focuses on applying the methods of piecewise linear classification to artificial neural networks. To some extent, artificial neural networks employ a black-box strategy, in that classification is treated purely from an input/output viewpoint. Organization and training are handled by the network, producing results that, while quite possibly correct for the training set, are difficult to predict and understand. Several choices must be made in order to use feedforward backpropagation neural networks effectively. One of the key choices is network topology, with respect to the number of layers and the number of units in each layer. Recent work within the artificial neural network community has sought to develop networks that dynamically allocate units during training and create their own structure. This thesis describes a neural network that configures itself by dynamically allocating units based on the techniques of piecewise linear classification.

Piecewise linear classification methods are based on the idea that if nonlinearly separable regions could be divided into subregions consisting of linearly separable patterns, then each subregion could be learned by a linear discriminant function. The dynamically allocating piecewise linear (DAPL) neural network uses the techniques developed in piecewise linear classification to decide when to divide nonlinearly separable regions. Division of the resulting subregions continues until only subregions consisting of linearly separable patterns remain. A single-layer artificial neural subnetwork is generated each time a region needs to be split into subregions. Through adjustments to the training set, a single-sided decision surface is generated within the subnetwork for linearly separating the region. The resulting set of subnetworks produces hyperplanes that linearly separate the training space. On top of the subnetworks, an overseer network is built that combines the hyperplanes into a decision surface. The resulting network not only configures itself dynamically; its units also produce a more predictable decision surface.
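The architecture described above can be illustrated with a minimal sketch. This is not the thesis's DAPL algorithm itself, but an assumed simplification of the same idea: the XOR problem is not linearly separable, so two single-unit subnetworks are each trained on an adjusted (linearly separable) version of the training set, yielding two hyperplanes, and an overseer perceptron is then trained on the subnetwork outputs to combine those hyperplanes into the final decision surface. The adjusted label vectors `y1` and `y2` are chosen by hand here for illustration:

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=1.0):
    """Train a single linear discriminant (perceptron) on a
    linearly separable pattern set; returns weights and bias."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            if pred != yi:
                w += lr * (yi - pred) * xi
                b += lr * (yi - pred)
                errors += 1
        if errors == 0:  # region is linearly separated
            break
    return w, b

def predict(w, b, X):
    return (X @ w + b > 0).astype(int)

# XOR: not linearly separable as a whole.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# Adjusted training sets, each linearly separable:
# subnetwork 1 separates (0,0) from the rest,
# subnetwork 2 separates (1,1) from the rest.
y1 = np.array([0, 1, 1, 1])
y2 = np.array([1, 1, 1, 0])
w1, b1 = train_perceptron(X, y1)
w2, b2 = train_perceptron(X, y2)

# Overseer network: combines the two hyperplanes. Its input
# (h1, h2) is linearly separable (a logical AND), so a single
# perceptron suffices.
H = np.column_stack([predict(w1, b1, X), predict(w2, b2, X)])
wo, bo = train_perceptron(H, y)

def combined_predict(x):
    h = np.array([predict(w1, b1, x[None])[0],
                  predict(w2, b2, x[None])[0]])
    return 1 if h @ wo + bo > 0 else 0

print([combined_predict(x) for x in X])  # → [0, 1, 1, 0]
```

In the DAPL network the analogues of `y1` and `y2` would be derived automatically by recursively splitting nonlinearly separable regions rather than chosen by hand; the sketch only shows how hyperplane-producing subnetworks and an overseer layer can jointly form a decision surface.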

When creating the DAPL network approach, emphasis was placed on developing a self-configuring network that would rely less on a black-box strategy while still being able to generalize. The resulting network compares favorably in training time and classification accuracy to existing self-configuring networks.
