Date of Award

3-2001

Document Type

Thesis

Degree Name

Master of Science

Department

Department of Operational Sciences

First Advisor

Kenneth W. Bauer, Jr., PhD

Abstract

This research investigates current practices in test and evaluation of classification algorithms, and recommends improvements. We scrutinize the evaluation of automatic target recognition algorithms and rationalize the potential for improvements in the accepted methodology. We propose improvements through the use of an experimental design approach to testing. We demonstrate the benefits of the improvements by simulating algorithm performance data and using both methodologies to generate evaluation results. The simulated data are varied to test the sensitivity of the benefits across a broad set of outcomes. The opportunities for improvement are threefold. First, the current practice of one-at-a-time factor variation (only one factor is varied in each test condition) fails to capture the effects of multiple factors. Next, the coarse characterization of data misses the opportunity to reduce the estimate of noise in the test through the observation of uncontrolled factors. Finally, the lack of advanced data reduction and analysis tools renders analysis and reporting tedious and inefficient. This research addresses these shortcomings and recommends specific remedies through factorial testing, detailed data characterization, and logistic regression. We show how these innovations improve the accuracy and efficiency of automatic target recognition performance evaluation.
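The contrast the abstract draws, between one-at-a-time factor variation and factorial testing analyzed with logistic regression, can be illustrated with a short sketch. The example below is not drawn from the thesis; the factor names (target range, depression angle), effect sizes, and trial counts are illustrative assumptions. It simulates detection outcomes over a full two-level factorial design and fits a logistic regression that estimates both main effects and the interaction, which a one-at-a-time test cannot separate.

```python
# Minimal sketch (assumed factors and effect sizes, not the thesis data):
# simulate ATR hit/miss outcomes over a 2x2 factorial design and fit a
# logistic regression with an interaction term.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

ranges = np.array([1.0, 3.0])     # assumed low/high target range (km)
angles = np.array([10.0, 30.0])   # assumed low/high depression angle (deg)
trials_per_cell = 200             # assumed images per test condition

rows = []
for r in ranges:
    for a in angles:
        # Assumed "true" model: log-odds of correct identification depend
        # on both factors and their interaction.
        logit = 3.0 - 0.8 * r - 0.05 * a + 0.02 * r * a
        p = 1.0 / (1.0 + np.exp(-logit))
        for y in rng.binomial(1, p, size=trials_per_cell):
            rows.append((r, a, y))

data = np.array(rows)
X = sm.add_constant(
    np.column_stack([data[:, 0], data[:, 1], data[:, 0] * data[:, 1]])
)
y = data[:, 2]

# Logistic regression over the factorial data recovers main effects and
# the range-by-angle interaction from a single designed test.
fit = sm.Logit(y, X).fit(disp=False)
print(fit.params)  # [intercept, range, angle, range*angle]
```

Because every combination of factor levels is observed, the interaction coefficient is estimable; varying one factor at a time would leave it confounded with the main effects.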

AFIT Designator

AFIT-GOR-ENS-01M-08

DTIC Accession Number

ADA391257
