TR2017-174

Localization-Aware Active Learning for Object Detection


    •  Kao, C.-C., Lee, T.-Y., Sen, P., Liu, M.-Y., "Localization-Aware Active Learning for Object Detection", arXiv, November 2017.
      BibTeX:
        @techreport{MERL_TR2017-174,
          author = {Kao, C.-C. and Lee, T.-Y. and Sen, P. and Liu, M.-Y.},
          title = {Localization-Aware Active Learning for Object Detection},
          institution = {MERL - Mitsubishi Electric Research Laboratories},
          address = {Cambridge, MA 02139},
          number = {TR2017-174},
          month = nov,
          year = 2017,
          url = {http://www.merl.com/publications/TR2017-174/}
        }
  • Research Areas: Computer Vision, Machine Learning


Active learning - a class of algorithms that iteratively searches for the most informative samples to include in a training dataset - has been shown to be effective at annotating data for image classification. However, the use of active learning for object detection is still largely unexplored, as determining the informativeness of an object-location hypothesis is more difficult. In this paper, we address this issue and present two metrics for measuring the informativeness of an object hypothesis, which allow us to leverage active learning to reduce the amount of annotated data needed to achieve a target object-detection performance. Our first metric measures the "localization tightness" of an object hypothesis, based on the overlap ratio between the region proposal and the final prediction. Our second metric measures the "localization stability" of an object hypothesis, based on the variation of predicted object locations when input images are corrupted by noise. Our experimental results show that by augmenting a conventional active-learning algorithm designed for classification with the proposed metrics, the amount of labeled training data required can be reduced by up to 25%. Moreover, on the PASCAL 2007 and 2012 datasets our localization-stability method achieves an average relative improvement of 96.5% and 81.9%, respectively, over the baseline method using classification only.
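Both metrics reduce to comparing bounding boxes by their overlap. The sketch below illustrates the idea behind each: tightness as the intersection-over-union (IoU) between a region proposal and the detector's final box, and stability as the average IoU between the detection on a clean image and the matched detections on noise-corrupted copies. This is a minimal illustration under assumed conventions (corner-coordinate box format, hypothetical function names, and noisy detections already matched to the clean one), not the authors' released code:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def localization_tightness(proposal, prediction):
    # Tightness: how much the final predicted box overlaps the region
    # proposal it was refined from; values near 1.0 mean the detector
    # made little adjustment, suggesting a confident localization.
    return iou(proposal, prediction)

def localization_stability(reference_box, noisy_boxes):
    # Stability: average overlap between the detection on the clean
    # image and the detections on its noise-corrupted copies; low
    # values flag images whose localizations are uncertain and hence
    # informative to annotate.
    overlaps = [iou(reference_box, b) for b in noisy_boxes]
    return sum(overlaps) / len(overlaps)
```

In an active-learning loop, images scoring low on either metric would be prioritized for annotation, since their object hypotheses are the least certain.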