Masters Theses

Date of Award

5-1991

Degree Type

Thesis

Degree Name

Master of Science

Major

Computer Science

Major Professor

Jesse H. Poore

Committee Members

Jean Blair, Clement Wilson

Abstract

"Global" measures of software quality are generally not used by practitioners because they have not been calibrated for local operating environments. In a University of Tennessee research program on local measures of software quality, a participative process for defining and measuring local software quality is being refined and tested in field applications. The thesis of this research is that measures of software quality are most valid and most credible when they are based on local experience with the particular parameters that define the local computing environment. The group process for defining local software quality involves having a "jury" of seasoned practitioners evaluate a sample of modules from the organization's inventory of code. Good practices are distinguished from bad practices as modules are rank ordered through a structured group process. One of the outcomes of the process is a "software quality rule set" that reflects the best practices currently in use in the organization. In this study, the results of four field applications of the process were tested in formal validation experiments to determine whether the obtained rule set is a valid representation of the organization's sense of quality. Nonorganizational programmers were asked to rank the same set of modules that were ranked by the organization's jury, using either the locally-derived rules, generic (textbook) rules, placebo (nonhelpful) rules, or no rules. In three of four tests, programmers using the local rule set ranked the modules in a way that correlated more strongly with the jury's ranking than any other group. The results of the fourth test would have been consistent with the others but for a single individual's results. The overall correlations between the test groups and juries were .579 for local rules, .479 for generic rules, .446 for placebo rules, and .483 for no rules. The correlation between the local rules group and the jury was statistically significant at p < .0005; all other correlations were significant at p < .005. The results support a conclusion that the group process for defining local software quality is reliable in producing valid rule sets.
