Masters Theses
Date of Award
5-1991
Degree Type
Thesis
Degree Name
Master of Science
Major
Electrical Engineering
Major Professor
Mohan M. Trivedi
Committee Members
M. A. Abidi, James A. Euler
Abstract
The successful implementation of sensor-based robots in dynamic environments will depend largely upon the immunity of the system to incomplete and erroneous sensory information. This thesis introduces a six-module, 3D model-based robot vision system that utilizes 3D geometric models of the objects expected to appear in a scene and can tolerate incomplete and noisy image features. Unique attributes of the work presented include: 1) a matching strategy that is robust in the presence of incomplete and noisy image features, 2) a procedure for calculating object pose from arbitrary viewing perspectives, 3) the development of a powerful Geometric Modeling and Sensor Simulation System, and 4) the design and development of an integrated 3D geometric model-based robot vision system.
Object identification and localization are independent of the particular robot pose and object pose, as long as the object is within view of the sensor. The system effectively utilizes topology during the matching phase to significantly reduce the number of mappings from the domain of image features to that of the object features (object models). These mappings are represented by the maximal cliques of an association graph. In the context of our system, the maximal cliques represent the topologically best object feature-image feature mappings. Geometric information is then employed during pose determination to calculate the object's unconstrained pose. Our pose determination procedure borrows theories and techniques from camera calibration and automated cartography to generate pose vectors defining the object's pose relative to the robot. Object pose vectors are derived by solving the perspective-three-point problem for all object feature-image feature triplets. Using these pose vectors, an iterative K-means clustering procedure finds the correct object pose. We verify the performance of the system through experimentation involving several objects viewed from arbitrary positions and orientations, including images containing multiple objects with occlusion.
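As an illustration of the matching stage described above, the sketch below (hypothetical Python, not the thesis implementation) builds an association graph whose nodes are candidate object feature-image feature pairings and whose edges join pairings that are topologically consistent; its maximal cliques, found here with a basic Bron-Kerbosch recursion, correspond to the candidate mappings from which pose hypotheses would then be generated. The predicates compatible and consistent stand in for the topological tests used in the thesis.

    from itertools import combinations

    def association_graph(object_feats, image_feats, compatible, consistent):
        # Nodes are plausible (object feature, image feature) pairings; edges join
        # pairs of pairings that do not reuse a feature and pass the topology test.
        nodes = [(o, i) for o in object_feats for i in image_feats if compatible(o, i)]
        adjacency = {n: set() for n in nodes}
        for a, b in combinations(nodes, 2):
            if a[0] != b[0] and a[1] != b[1] and consistent(a, b):
                adjacency[a].add(b)
                adjacency[b].add(a)
        return adjacency

    def maximal_cliques(adjacency):
        # Basic Bron-Kerbosch enumeration of all maximal cliques.
        cliques = []
        def expand(grown, candidates, excluded):
            if not candidates and not excluded:
                cliques.append(grown)
                return
            for v in list(candidates):
                expand(grown | {v}, candidates & adjacency[v], excluded & adjacency[v])
                candidates = candidates - {v}
                excluded = excluded | {v}
        expand(set(), set(adjacency), set())
        return cliques

    # The largest maximal cliques are the topologically best mappings; each would
    # then feed the perspective-three-point pose computation and pose clustering.
    adj = association_graph(["fA", "fB", "fC"], ["e1", "e2", "e3"],
                            compatible=lambda o, i: True,
                            consistent=lambda a, b: True)
    print(max(maximal_cliques(adj), key=len))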
The initial utility of the Geometric Modeling and Sensor Simulation System was twofold. First, the geometric models contain the object features employed during matching and pose determination. Second, the simulation of edge-based intensity imagery is necessary for proper (and efficient) object pose verification. However, other important uses for geometric modeling and sensor simulation naturally developed (for example, the testing of multisensor robotic vision systems), so the sensor simulation capabilities were extended. The sensor simulation routines generate synthetic data from each of the available simulated sensors, given a geometrically modeled environment, the sensor specifications, and the sensor locations. The five simulated sensory modalities are laser range imagery, point laser range, ultrasonic range, proximity, and edge-based intensity imagery.
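To make the sensor simulation idea concrete, the following sketch (hypothetical Python, under the simplifying assumption that the modeled environment is reduced to sphere primitives; the thesis models are richer than this) simulates a single point laser range reading by casting a ray from the sensor location and returning the distance to the nearest modeled surface, or the maximum range if nothing is hit.

    import math
    from dataclasses import dataclass

    @dataclass
    class Sphere:
        center: tuple   # (x, y, z) in the world frame
        radius: float

    def point_laser_range(origin, direction, scene, max_range=10.0):
        # Distance along `direction` to the nearest modeled surface, or max_range.
        dx, dy, dz = direction
        norm = math.sqrt(dx*dx + dy*dy + dz*dz)
        d = (dx/norm, dy/norm, dz/norm)
        best = max_range
        for s in scene:
            # Ray-sphere intersection: with unit direction d, solve t^2 + b t + c = 0.
            oc = tuple(o - c for o, c in zip(origin, s.center))
            b = 2.0 * sum(di * oci for di, oci in zip(d, oc))
            c = sum(oci * oci for oci in oc) - s.radius ** 2
            disc = b * b - 4.0 * c
            if disc >= 0.0:
                t = (-b - math.sqrt(disc)) / 2.0
                if 0.0 < t < best:
                    best = t
        return best

    # Example: a sphere of radius 0.5 m centered 3 m in front of the sensor
    # along +x yields a range reading of about 2.5 m.
    print(point_laser_range((0, 0, 0), (1, 0, 0), [Sphere((3, 0, 0), 0.5)]))

The other modalities would be driven by the same scene description, varying only the ray-casting pattern and the sensor specification (field of view, resolution, beam width, and so on).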
It is clear that the design and integration of a complete robot vision system is not trivial. One must engineer a system of cooperating subsystems that, when merged, work efficiently and robustly. Although any one component of this system by itself may be of little utility, the cooperation of all the components as a system is what gives it value. Indeed, we are making strides in this direction: building an integrated, multi-component 3D model-based robot vision system.
Recommended Citation
Bidlack, Clint Robert, "A robot vision system for object identification, localization, and manipulation using 3D geometric models." Master's Thesis, University of Tennessee, 1991.
https://trace.tennessee.edu/utk_gradthes/12345