Spatio-Temporal Ranked-Attention Networks for Video Captioning

•  Cherian, A., Wang, J., Hori, C., Marks, T.K., "Spatio-Temporal Ranked-Attention Networks for Video Captioning", IEEE Winter Conference on Applications of Computer Vision (WACV), DOI: 10.1109/WACV45572.2020.9093291, February 2020, pp. 1606-1615.
TR2020-016
@inproceedings{Cherian2020feb,
  author = {Cherian, Anoop and Wang, Jue and Hori, Chiori and Marks, Tim K.},
  title = {Spatio-Temporal Ranked-Attention Networks for Video Captioning},
  booktitle = {IEEE Winter Conference on Applications of Computer Vision (WACV)},
  year = 2020,
  pages = {1606--1615},
  month = feb,
  publisher = {IEEE},
  doi = {10.1109/WACV45572.2020.9093291}
}
  • Research Areas: Artificial Intelligence, Computer Vision, Machine Learning


Generating video descriptions automatically is a challenging task that involves a complex interplay between spatio-temporal visual features and language models. Given that videos consist of spatial (frame-level) features and their temporal evolutions, an effective captioning model should be able to attend to these different cues selectively. To this end, we propose a Spatio-Temporal and Temporo-Spatial (STaTS) attention model which, conditioned on the language state, hierarchically combines spatial and temporal attention to videos in two different orders: (i) a spatio-temporal (ST) sub-model, which first attends to regions that have temporal evolution, then temporally pools the features from these regions; and (ii) a temporo-spatial (TS) sub-model, which first selects a single frame to attend to, then applies spatial attention within that frame. We propose a novel LSTM-based temporal ranking function, which we call ranked attention, for the ST model to capture action dynamics. Our entire framework is trained end-to-end. We provide experiments on two benchmark datasets, MSVD and MSR-VTT. Our results demonstrate the synergy between the ST and TS modules, outperforming recent state-of-the-art methods.
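The two attention orders described above can be illustrated with a minimal sketch. The code below is a hypothetical simplification, not the paper's implementation: it uses plain dot-product attention conditioned on the language state `h` in place of the learned attention MLPs and the LSTM-based ranked-attention pooling, and the function names (`st_attention`, `ts_attention`) and shapes are illustrative assumptions. Its only purpose is to show how the same region features `feats` of shape (frames T, regions R, dims D) are reduced to a single context vector in spatial-then-temporal versus temporal-then-spatial order.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along a given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def st_attention(feats, h):
    """ST order: spatial attention within each frame, then temporal pooling.

    feats: (T, R, D) region features per frame; h: (D,) language state.
    (The paper pools temporally with a learned LSTM ranking function;
    soft temporal attention stands in for it here.)
    """
    spatial_w = softmax(feats @ h, axis=1)                   # (T, R) per-frame region weights
    per_frame = (spatial_w[..., None] * feats).sum(axis=1)   # (T, D) spatially attended frames
    temporal_w = softmax(per_frame @ h, axis=0)              # (T,) frame weights
    return (temporal_w[:, None] * per_frame).sum(axis=0)     # (D,) context vector

def ts_attention(feats, h):
    """TS order: temporal attention selects a frame, then spatial attention within it."""
    frame_repr = feats.mean(axis=1)                          # (T, D) coarse frame summaries
    temporal_w = softmax(frame_repr @ h, axis=0)             # (T,) frame weights
    frame = (temporal_w[:, None, None] * feats).sum(axis=0)  # (R, D) soft-selected frame
    spatial_w = softmax(frame @ h, axis=0)                   # (R,) region weights in that frame
    return (spatial_w[:, None] * frame).sum(axis=0)          # (D,) context vector
```

Both functions return a D-dimensional context vector; in a full captioning model these two vectors would be combined (e.g., by a learned gate) and fed to the language decoder at each word-generation step.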