Masters Theses

Date of Award

12-2009

Degree Type

Thesis

Degree Name

Master of Science

Major

Computer Engineering

Major Professor

Itamar Arel

Committee Members

Gregory Peterson, Hairong Qi

Abstract

Vision-based machine learning agents are tasked with making decisions based on high-dimensional, noisy input, placing a heavy load on available resources. Moreover, observations typically provide only partial information with respect to the environment state, necessitating robust state inference by the agent. Reinforcement learning provides a framework for decision making with the goal of maximizing long-term reward. This thesis introduces a novel approach to vision-based reinforcement learning through the use of a consolidated actor-critic model (CACM). The approach takes advantage of artificial neural networks as non-linear function approximators and the reduced computational requirements of the CACM scheme to yield a scalable vision-based control system. In this thesis, a comparison between the actor-critic and CACM is made. Additionally, the effect that observation prediction and correlated exploration have on the agent's performance is investigated.
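For readers unfamiliar with the consolidated actor-critic idea, the sketch below illustrates one common reading of it: a single neural network with a shared hidden layer whose outputs carry both the policy (actor) and the state-value estimate (critic), so only one set of shared features must be computed per observation. This is a minimal illustrative sketch, not the architecture or code from the thesis; the layer sizes, two-head layout, and all names are assumptions.

import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_hidden, n_actions = 64, 32, 4      # illustrative sizes for a small feature vector

# One shared trunk plus two output heads that reuse the same hidden features.
W_hidden = rng.normal(scale=0.1, size=(n_hidden, n_inputs))
W_policy = rng.normal(scale=0.1, size=(n_actions, n_hidden))
W_value  = rng.normal(scale=0.1, size=(1, n_hidden))

def forward(observation):
    """Single forward pass producing both actor and critic outputs."""
    h = np.tanh(W_hidden @ observation)         # shared non-linear features
    logits = W_policy @ h                       # actor head: action preferences
    policy = np.exp(logits - logits.max())
    policy /= policy.sum()                      # softmax action probabilities
    value = float(W_value @ h)                  # critic head: state-value estimate
    return policy, value

obs = rng.normal(size=n_inputs)                 # stand-in for a processed image frame
pi, v = forward(obs)
action = rng.choice(n_actions, p=pi)
print("action probabilities:", np.round(pi, 3), "value:", round(v, 3), "action:", action)

Because the actor and critic share one forward pass, the per-step cost is roughly that of a single network evaluation rather than two, which is the computational saving the abstract refers to.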
