Date of Award

9-1-2008

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Department of Operational Sciences

First Advisor

Kenneth W. Bauer, Jr., PhD

Abstract

There is no universally accepted methodology for determining how much confidence one should have in a classifier output. This research proposes a framework for determining the level of confidence in an indication from a classifier system whose output is, or can be transformed into, a posterior probability estimate. This theoretical framework attempts to unite the viewpoints of the classification system developer (or engineer) and the classification system user (or war-fighter). The paradigm rests on the assumptions that system confidence acts like, or can be modeled as, a value and that indication confidence can be modeled as a function of the posterior probability estimates. Introducing the possibility of non-declaration induces a higher-level value model that weighs the contributions of engineering confidence and the associated non-declaration rate; the task then becomes choosing the threshold that maximizes this overarching value function. The paradigm is developed in a setting that considers only in-library problems, but it is applied to out-of-library problems as well, and introducing out-of-library problems requires expanding the overarching value model. This confidence measure is a direct link between traditional decision analysis techniques and traditional pattern recognition techniques. The methodology is applied to multiple data sets, and experimental results show the behavior that would be expected of a rational confidence paradigm.
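To make the thresholding idea in the abstract concrete, the following is a minimal Python sketch, not the dissertation's actual value model: indication confidence is taken as a simple function of the posterior estimates (here, the maximum posterior), declarations below a threshold are withheld, and an assumed overarching value function (a weighted trade-off with hypothetical weight w) is maximized over candidate thresholds.

```python
import numpy as np

# Illustrative sketch only. The confidence function and the value model below
# (a weighted difference with hypothetical weight w) are assumptions for
# illustration, not the value model developed in the dissertation.

def indication_confidence(posteriors):
    """Model indication confidence as the maximum posterior estimate."""
    return posteriors.max(axis=1)

def overarching_value(posteriors, tau, w=0.5):
    """Assumed value: mean confidence on declarations minus a penalty
    proportional to the non-declaration rate."""
    conf = indication_confidence(posteriors)
    declared = conf >= tau                    # declare only when confident enough
    non_declaration_rate = 1.0 - declared.mean()
    mean_conf = conf[declared].mean() if declared.any() else 0.0
    return mean_conf - w * non_declaration_rate

def best_threshold(posteriors, taus, w=0.5):
    """Choose the threshold that maximizes the overarching value function."""
    values = [overarching_value(posteriors, t, w) for t in taus]
    return taus[int(np.argmax(values))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    raw = rng.random((500, 3))                # simulated 3-class outputs
    posteriors = raw / raw.sum(axis=1, keepdims=True)  # rows sum to 1
    taus = np.linspace(0.34, 0.95, 50)
    print(f"chosen threshold: {best_threshold(posteriors, taus):.3f}")
```

Raising w in this sketch penalizes non-declarations more heavily and pushes the chosen threshold down; the dissertation's framework makes this trade-off explicit through its higher-level value model rather than this simple weighted difference.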

AFIT Designator

AFIT-DS-ENS-08-02

DTIC Accession Number

ADA485329
