It is a common problem: a trained filter miscategorizes something and you'd like to know why. Which feature really puts the most weight into the wrong category? Or sometimes you'd like to know which features put another object into the right category - maybe they could somehow be added to the misclassified object? In AI::NaiveBayes::Classification there is a method called
find_predictors
that finds the features that weigh most for and against the category that a given object is eventually classified under. The simple algorithm assumes that there are only two categories - but it should be possible to extend it to more. The returned numbers are hard to interpret in absolute terms - what matters is how big they are compared with the other numbers in the result.
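Here is a minimal sketch of how this can be used. The training and classification calls follow the module's documented interface; the exact shape of the find_predictors return value - assumed here to be a list of [feature, weight] pairs sorted by descending absolute influence - is my reading of it, so check the module's documentation before relying on it:

    use strict;
    use warnings;
    use AI::NaiveBayes;

    # Train a classifier with exactly two categories, as find_predictors requires.
    my $classifier = AI::NaiveBayes->train(
        { attributes => { free => 2, offer => 1, pills => 1 },     labels => ['spam'] },
        { attributes => { meeting => 1, agenda => 2, notes => 1 }, labels => ['ham'] },
    );

    my $result = $classifier->classify( { free => 1, meeting => 1, offer => 1 } );
    print 'Best category: ', $result->best_category, "\n";

    # Assumption: find_predictors returns [feature, weight] pairs sorted by
    # descending absolute weight; the sign says whether a feature pushed the
    # object towards (+) or away from (-) the winning category.
    for my $pair ( $result->find_predictors ) {
        my ( $feature, $weight ) = @$pair;
        printf "%-10s %+.4f\n", $feature, $weight;
    }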
We use the classifier for spam detection - whenever a post gets misclassified, I check which words (or other features) push it into the wrong category and decide what to do: should we improve the training examples, add more post features, or maybe just ignore the case? When improving the filter by adding or removing examples, I can check how that changes the classification and also how exactly it changes the influence of each important feature on the result - see the sketch below.
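That before/after comparison could look like the following sketch. Everything here is illustrative - compare_predictors is a hypothetical helper, not part of the module, and the return format of find_predictors is the same assumption as above:

    use strict;
    use warnings;
    use AI::NaiveBayes;
    use List::Util qw(first);

    # Hypothetical helper: show how each feature's influence changed after
    # retraining. Both arguments are arrayrefs of [feature, weight] pairs.
    sub compare_predictors {
        my ( $before, $after ) = @_;
        for my $pair (@$after) {
            my ( $feature, $new ) = @$pair;
            my $old_pair = first { $_->[0] eq $feature } @$before;
            my $old = $old_pair ? $old_pair->[1] : 0;
            printf "%-12s %+.4f -> %+.4f\n", $feature, $old, $new;
        }
    }

    my @examples = (
        { attributes => { free => 2, offer => 1, pills => 1 },     labels => ['spam'] },
        { attributes => { meeting => 1, agenda => 2, notes => 1 }, labels => ['ham'] },
    );
    my $old_classifier = AI::NaiveBayes->train(@examples);

    # Retrain with one extra ham example that also mentions 'free'.
    my $new_classifier = AI::NaiveBayes->train( @examples,
        { attributes => { free => 1, agenda => 1 }, labels => ['ham'] } );

    my $post   = { free => 1, meeting => 1, offer => 1 };
    my @before = $old_classifier->classify($post)->find_predictors;
    my @after  = $new_classifier->classify($post)->find_predictors;
    compare_predictors( \@before, \@after );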