Towards To-a-T Spatio-Temporal Focus for Skeleton-Based Action Recognition


Graph Convolutional Networks (GCNs) have been widely used to model the high-order dynamic dependencies for skeleton-based action recognition. Most existing approaches do not explicitly embed the high-order spatio-temporal importance into the joints' spatial connection topology and intensity, and they do not have direct objectives on their attention modules to jointly learn when and where to focus in the action sequence. To address these problems, we propose the To-a-T Spatio-Temporal Focus (STF), a skeleton-based action recognition framework that utilizes the spatio-temporal gradient to focus on relevant spatio-temporal features. We first propose the STF modules with learnable gradient-enforced and instance-dependent adjacency matrices to model the high-order spatio-temporal dynamics. Second, we propose three loss terms defined on the gradient-based spatio-temporal focus to explicitly guide the classifier on when and where to look, distinguish confusing classes, and optimize the stacked STF modules. STF outperforms the state-of-the-art methods on the NTU RGB+D 60, NTU RGB+D 120, and Kinetics Skeleton 400 datasets in all 15 settings over different views, subjects, setups, and input modalities, and STF also shows better accuracy in scarce-data and dataset-shift settings.
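To make the idea of instance-dependent adjacency concrete, the following is a minimal NumPy sketch of a skeleton graph convolution whose adjacency matrix combines a fixed skeletal topology, a shared learnable offset, and an instance-dependent affinity computed from the input features. All sizes, parameter names, and the exact parameterization here are illustrative assumptions, not the paper's STF module.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def graph_conv(x, a_fixed, a_learn, w_q, w_k, w_out):
    """One graph convolution over skeleton joints.

    x: (N, V, C) batch of N skeletons with V joints and C channels.
    a_fixed: (V, V) fixed skeletal topology; a_learn: (V, V) learnable offset.
    """
    # Instance-dependent adjacency: joint-to-joint affinity from embedded features
    # (a common attention-style construction; hypothetical parameterization).
    q, k = x @ w_q, x @ w_k                      # (N, V, d) embeddings
    a_inst = softmax(q @ k.transpose(0, 2, 1))   # (N, V, V) per-sample affinity
    a = a_fixed + a_learn + a_inst               # combined adjacency
    return a @ x @ w_out                         # aggregate neighbors, project

rng = np.random.default_rng(0)
N, V, C, d, C_out = 2, 25, 3, 8, 16              # 25 joints as in NTU RGB+D
x = rng.standard_normal((N, V, C))
a_fixed = np.eye(V)                               # placeholder skeletal graph
a_learn = np.zeros((V, V))                        # learnable offset (init 0)
w_q, w_k = rng.standard_normal((C, d)), rng.standard_normal((C, d))
w_out = rng.standard_normal((C, C_out))
y = graph_conv(x, a_fixed, a_learn, w_q, w_k, w_out)
print(y.shape)  # (2, 25, 16)
```

In training, `a_learn` and the instance-dependent term would be updated by gradients from the classification and focus losses, which is what lets the topology adapt per action rather than staying fixed to the physical skeleton.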


  • Related Publication

  •  Ke, L., Peng, K.-C., Lyu, S., "Towards To-a-T Spatio-Temporal Focus for Skeleton-Based Action Recognition", arXiv, December 2021.
    BibTeX:

    @article{Ke2021dec,
      author = {Ke, Lipeng and Peng, Kuan-Chuan and Lyu, Siwei},
      title = {Towards To-a-T Spatio-Temporal Focus for Skeleton-Based Action Recognition},
      journal = {arXiv},
      year = {2021},
      month = dec,
      url = {}
    }