Document Type

Article

Publication Date

3-2008

Abstract

In security-related areas there is concern over novel “zero-day” attacks that penetrate system defenses and wreak havoc. The best methods for countering these threats are to recognize “nonself,” as in an Artificial Immune System, or to recognize “self” through clustering. In either case, the concern remains that something appearing similar to self could be missed. Given this situation, one could incorrectly assume that preferring a tighter fit to self over generalizability is important for reducing false positives in this type of learning problem. This article confirms that in anomaly detection, as in other forms of classification, a tight fit, although important, does not supersede model generality. This is shown using three systems, each with a different geometric bias in the decision space. The first two use spherical and ellipsoidal clusters with a k-means algorithm modified to work on the one-class/blind classification problem. The third wraps the self points with a multidimensional convex hull (polytope) algorithm capable of learning disjunctive concepts via a thresholding constant. All three algorithms are tested on the Voting dataset from the UCI Machine Learning Repository, the MIT Lincoln Labs intrusion detection dataset, and the lossy-compressed steganalysis domain.
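
As a rough illustration of the one-class clustering idea the abstract describes, the sketch below fits spherical clusters to "self" data only and flags any point falling outside every cluster's radius as "nonself." This is not the paper's implementation; the cluster count, the 95th-percentile radius rule, and the use of scikit-learn's KMeans are assumptions made for the example.

import numpy as np
from sklearn.cluster import KMeans

def fit_self_model(self_data, k=5, radius_percentile=95):
    # Cluster only the "self" (normal) observations.
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(self_data)
    dists = km.transform(self_data)          # distance of each point to every center
    labels = km.labels_
    # Radius of each sphere: 95th-percentile member distance (illustrative choice).
    radii = np.array([np.percentile(dists[labels == c, c], radius_percentile)
                      for c in range(k)])
    return km, radii

def is_anomalous(points, km, radii):
    # "Nonself" = outside every spherical cluster of self.
    dists = km.transform(points)             # shape (n_points, k)
    return ~(dists <= radii).any(axis=1)

# Toy usage: train on synthetic "self" data, then score a mix of self-like and distant points.
rng = np.random.default_rng(0)
self_data = rng.normal(0.0, 1.0, size=(500, 4))
km, radii = fit_self_model(self_data)
test = np.vstack([rng.normal(0.0, 1.0, size=(3, 4)),    # self-like
                  rng.normal(8.0, 1.0, size=(3, 4))])   # far from self
print(is_anomalous(test, km, radii))        # expected: first three False, last three True

A tighter fit corresponds to shrinking the radii (or using ellipsoids or a convex hull around self); the article's point is that such tightening, while important, should not come at the expense of model generality.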

Comments

Copyright © 2007, Springer-Verlag London Limited.

AFIT Scholar furnishes the accepted manuscript version of this article. The published version of record appears in Knowledge and Information Systems and is available by subscription through the DOI link in the citation below.

Article accepted Feb 4, 2007.

DOI

10.1007/s10115-007-0072-8

Source Publication

Knowledge and Information Systems
