Multi-Attribute Spaces: Calibration for Attribute Fusion and Similarity Search
Walter Scheirer, Neeraj Kumar, Peter Belhumeur, and Terrance Boult
[IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012]
Abstract. Recent work has shown that visual attributes are a powerful approach for applications such as recognition, image description and retrieval. However, fusing multiple attribute scores – as required during multi-attribute queries or similarity searches – presents a significant challenge. Scores from different attribute classifiers cannot be combined in a simple way; the same score for different attributes can mean different things. In this work, we show how to construct normalized “multi-attribute spaces” from raw classifier outputs, using techniques based on the statistical Extreme Value Theory. Our method calibrates each raw score to a probability that the given attribute is present in the image. We describe how these probabilities can be fused in a simple way to perform more accurate multi-attribute searches, as well as enable attribute-based similarity searches. A significant advantage of our approach is that the normalization is done after-the-fact, requiring neither modification to the attribute classification system nor ground truth attribute annotations. We demonstrate results on a large data set of nearly 2 million face images and show significant improvements over prior work. We also show that perceptual similarity of search results increases by using contextual attributes.
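To make the calibration idea concrete, here is a minimal Python sketch of EVT-style score calibration: fit a Weibull distribution to the extreme tail of a classifier's decision scores and use its CDF as the calibrated probability, then fuse calibrated attributes by a simple product. The `tail_size` parameter, the tail-shift constant, and the product fusion rule are illustrative assumptions, not necessarily the paper's exact settings.

```python
import numpy as np
from scipy.stats import weibull_min

def calibrate_scores(raw_scores, tail_size=20):
    """Map raw attribute-classifier scores to [0, 1] probabilities.

    Hedged sketch of EVT-based calibration: fit a Weibull to the
    top `tail_size` scores (the extreme tail) and use the fitted
    CDF as the probability that the attribute is present.
    `tail_size` is an illustrative choice, not the paper's setting.
    """
    scores = np.asarray(raw_scores, dtype=float)
    # Take the largest scores as the extreme tail of the distribution.
    tail = np.sort(scores)[-tail_size:]
    # Fix the location just below the tail so all tail points lie in
    # the Weibull's support, then fit shape and scale by MLE.
    shape, loc, scale = weibull_min.fit(tail, floc=tail.min() - 1e-6)
    # Calibrated probability: CDF of the fitted extreme-value model.
    return weibull_min.cdf(scores, shape, loc=loc, scale=scale)

def fuse(prob_lists):
    # Simple fusion for a multi-attribute query: combine the
    # per-attribute calibrated probabilities by product (an assumed
    # rule for illustration; the paper describes its own fusion).
    return np.prod(np.vstack(prob_lists), axis=0)
```

Because every attribute now lives on a common [0, 1] probability scale, scores from different classifiers can be combined directly, which is exactly what raw SVM-style decision values do not allow.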
After reading and accepting our LICENSE for non-commercial use, you can download source code for improving recognition systems using our proposed Extreme Value Theory Meta-Recognition approach. The library is in C and C++ with a Python/SWIG wrapper. A MATLAB/MEX version will be available soon. Note that the build process requires CMake > 2.8. Click here to get the license and start the download process.
Click here to download the full pseudocode for the multi-attribute space similarity algorithm described in Sec. 4 of the paper.
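As a rough illustration of what a similarity search in a multi-attribute space looks like, the sketch below ranks database images by Euclidean distance to a query image's vector of calibrated attribute probabilities. This is a hypothetical simplification for intuition only; the full algorithm is given in the Sec. 4 pseudocode above.

```python
import numpy as np

def similarity_rank(query_probs, db_probs):
    """Rank database images by closeness to a query in a
    multi-attribute space of calibrated probabilities.

    Assumed setup: each row of `db_probs` is one image's calibrated
    attribute probabilities; images are returned nearest-first by
    Euclidean distance. The paper's Sec. 4 algorithm differs in
    detail (e.g. its choice of contextual attributes).
    """
    dists = np.linalg.norm(np.asarray(db_probs) - np.asarray(query_probs),
                           axis=1)
    return np.argsort(dists)  # indices of database images, nearest first
```

For example, a query with probabilities [0.85, 0.15] over two attributes ranks a database image with [0.9, 0.1] ahead of one with [0.1, 0.9].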