Authors
Marko Grobelnik,
Publication date
1999
Publisher
Citeseer
Description
This paper describes an approach to feature subset selection that takes into account problem specifics and learning algorithm characteristics. It is developed for the Naive Bayesian classifier applied to text data, since it combines well with the addressed learning problems. We focus on domains with many features that also have a highly unbalanced class distribution and asymmetric misclassification costs given only implicitly in the problem. By asymmetric misclassification costs we mean that one of the class values is the target class value for which we want to get predictions, and we prefer false positives over false negatives. Our example problem is automatic document categorization using machine learning, where we want to identify documents relevant for the selected category. Usually, only about 1%-10% of examples belong to the selected category. Our experimental comparison of eleven feature scoring measures shows that considering domain and algorithm characteristics significantly improves the results of classification.
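
The description refers to scoring-based feature subset selection for a Naive Bayesian classifier on highly unbalanced text data. As an illustration only, not code from the paper, the sketch below shows one common feature scoring measure for this setting: a smoothed log odds ratio computed per word against the rare target class, with the top-scoring words kept before training a classifier. The function names, the smoothing scheme, and the toy documents are assumptions made for this example.

```python
import math
from collections import Counter

def odds_ratio_scores(docs, labels, smoothing=1.0):
    """Score each word by its smoothed log odds ratio between the target
    (positive) class and the rest; higher scores favour words indicative
    of the rare target category."""
    pos_docs = [d for d, y in zip(docs, labels) if y == 1]
    neg_docs = [d for d, y in zip(docs, labels) if y == 0]
    vocab = {w for d in docs for w in d}

    def doc_freq(doc_set):
        # Document frequency: in how many documents each word appears.
        df = Counter()
        for d in doc_set:
            df.update(set(d))
        return df

    df_pos, df_neg = doc_freq(pos_docs), doc_freq(neg_docs)
    n_pos, n_neg = len(pos_docs), len(neg_docs)
    scores = {}
    for w in vocab:
        # Laplace-style smoothing keeps probabilities away from 0 and 1.
        p_pos = (df_pos[w] + smoothing) / (n_pos + 2 * smoothing)
        p_neg = (df_neg[w] + smoothing) / (n_neg + 2 * smoothing)
        scores[w] = math.log(p_pos * (1 - p_neg)) - math.log((1 - p_pos) * p_neg)
    return scores

def select_top_k(scores, k):
    """Keep only the k highest-scoring words as the feature subset."""
    return {w for w, _ in sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]}

if __name__ == "__main__":
    docs = [["cheap", "pills", "offer"], ["meeting", "agenda"],
            ["offer", "discount"], ["project", "agenda", "notes"]]
    labels = [1, 0, 1, 0]  # 1 = target category (the rare class of interest)
    print(select_top_k(odds_ratio_scores(docs, labels), 3))
```

The paper itself compares eleven scoring measures; this sketch only demonstrates the general pattern of ranking features by a class-conditional score and truncating the vocabulary before learning.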