Why is the Mahalanobis Distance Effective for Anomaly Detection?

Abstract

The Mahalanobis distance-based confidence score, a recently proposed anomaly detection method for pre-trained neural classifiers, achieves state-of-the-art performance on both out-of-distribution (OoD) and adversarial example detection. This work analyzes why the method exhibits such strong performance in practical settings despite resting on an implausible assumption: namely, that the class-conditional distributions of pre-trained features have a tied covariance. Although the Mahalanobis distance-based method is claimed to be motivated by classification prediction confidence, we find that its superior performance stems from information that is not useful for classification. This suggests that the accepted explanation for why the Mahalanobis confidence score works so well is mistaken, and that the score makes use of different information from ODIN, another popular OoD detection method based on prediction confidence. This perspective motivates us to combine the two methods; the combined detector exhibits improved performance and robustness. These findings provide insight into the behavior of neural classifiers in response to anomalous inputs.
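
Below is a minimal sketch (not the authors' implementation) of the Mahalanobis distance-based confidence score described in the abstract: class-conditional Gaussians with a tied (shared) covariance are fit to pre-trained features, and a test input is scored by its distance to the nearest class mean. The function names and the random stand-in data are hypothetical and purely illustrative.

```python
import numpy as np

def fit_gaussians(features, labels, num_classes):
    """Estimate per-class means and a single tied covariance over pre-trained features."""
    means = np.stack([features[labels == c].mean(axis=0) for c in range(num_classes)])
    centered = features - means[labels]          # subtract each sample's class mean
    cov = centered.T @ centered / len(features)  # one covariance shared by all classes
    precision = np.linalg.pinv(cov)
    return means, precision

def mahalanobis_score(x, means, precision):
    """Confidence score: negative Mahalanobis distance to the closest class mean.
    More negative values indicate more anomalous inputs."""
    diffs = means - x                            # shape (num_classes, dim)
    dists = np.einsum("cd,de,ce->c", diffs, precision, diffs)
    return -dists.min()

# Hypothetical usage with random stand-in features:
rng = np.random.default_rng(0)
train_feats = rng.normal(size=(1000, 16))
train_labels = rng.integers(0, 10, size=1000)
means, precision = fit_gaussians(train_feats, train_labels, num_classes=10)
print(mahalanobis_score(rng.normal(size=16), means, precision))
```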

Publication
arXiv preprint arXiv:2003.00402
Ryo Kamoi
Ph.D. Student

My research interests are in improving the reliability of natural language processing systems. Ph.D. student at Penn State (2023-), MS at UT Austin, BE at Keio University.