Feature Saliency Measures

Document Type

Article

Publication Date

4-1997

Abstract

This paper presents a survey of feature saliency measures used in artificial neural networks. Saliency measures can be used for assessing a feature's relative importance. In this paper, we contrast two basic philosophies for measuring feature saliency or importance within a feed-forward neural network. One philosophy is to evaluate each feature with respect to relative changes in either the neural network's output or the neural network's probability of error. We refer to this as a derivative-based philosophy of feature saliency. Using the derivative-based philosophy, we propose a new and more efficient probability of error measure. A second philosophy is to measure the relative size of the weight vector emanating from each feature. We refer to this as a weight-based philosophy of feature saliency. We derive several unifying relationships that exist among the derivative-based feature saliency measures, as well as between the derivative-based and the weight-based feature saliency measures. We also report experimental results for a target recognition problem using a number of derivative-based and weight-based saliency measures. Abstract © Elsevier
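
The two philosophies described in the abstract can be contrasted with a short sketch. The following is a minimal illustration, not the paper's exact measures: it assumes a toy single-hidden-layer network with random weights, takes the derivative-based score to be the mean magnitude of the output's gradient with respect to each input over a sample, and takes the weight-based score to be the norm of the first-layer weight vector emanating from each input feature.

```python
# Minimal sketch contrasting derivative-based and weight-based feature
# saliency. The network, weights, and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy feed-forward network: 4 inputs -> 3 hidden (tanh) -> 1 output (sigmoid).
W1 = rng.normal(size=(3, 4))   # column j of W1 "emanates" from input feature j
b1 = rng.normal(size=3)
W2 = rng.normal(size=(1, 3))
b2 = rng.normal(size=1)

def output_gradient(x):
    # Analytic gradient of the scalar network output with respect to the inputs.
    h = np.tanh(W1 @ x + b1)
    y = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))
    dy_dh = (y * (1 - y)) * W2           # shape (1, 3)
    dh_dx = (1 - h**2)[:, None] * W1     # shape (3, 4)
    return (dy_dh @ dh_dx).ravel()       # shape (4,)

X = rng.normal(size=(200, 4))            # illustrative input sample

# Derivative-based saliency: average sensitivity of the output to each feature.
deriv_saliency = np.mean([np.abs(output_gradient(x)) for x in X], axis=0)

# Weight-based saliency: L2 norm of the weight vector leaving each feature.
weight_saliency = np.linalg.norm(W1, axis=0)

print("derivative-based:", np.round(deriv_saliency, 3))
print("weight-based:    ", np.round(weight_saliency, 3))
```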

Comments

The "Link to Full Text" button on this page loads the open access article version of record, hosted at Elsevier. The publisher retains permissions to re-use and distribute this article.

DOI

10.1016/S0898-1221(97)00059-X

Source Publication

Computers and Mathematics with Applications
