TR2006-053

Generative Process Tracking for Audio Analysis


Abstract:

The problem of generative process tracking involves detecting and adapting to changes in the underlying generative process that creates a time series of observations. It has been widely used in visual background modelling to adaptively track the generative process that produces the pixel intensities. In this paper, we extend this idea to audio background modelling and show its applications in the surveillance domain. We adaptively learn the parameters of the generative audio background process and detect foreground events. We test the effectiveness of the proposed algorithms on synthetic time series data and demonstrate their performance on elevator audio surveillance.
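The abstract does not spell out the model, but in visual background modelling the generative process behind each pixel is commonly tracked with an online-updated Gaussian model whose parameters adapt slowly to the observed data, while observations that fit the model poorly are flagged as foreground. The sketch below applies that general idea to a one-dimensional audio feature stream; the feature choice (per-frame log-energy), the learning rate alpha, and the threshold k are illustrative assumptions and are not the specific algorithm or parameters of this paper.

    import numpy as np

    def track_audio_background(features, alpha=0.02, k=2.5):
        """Online background tracking over a 1-D audio feature stream.

        features : sequence of per-frame scalars (e.g., frame log-energy).
        alpha    : learning rate for the running background estimate (assumed value).
        k        : number of standard deviations beyond which a frame is
                   flagged as a foreground event (assumed value).
        Returns a boolean array, True where a frame is foreground.
        """
        mean, var = float(features[0]), 1.0      # initialise from the first frame
        foreground = np.zeros(len(features), dtype=bool)
        for t, x in enumerate(features):
            d2 = (x - mean) ** 2
            if d2 > (k ** 2) * var:
                foreground[t] = True             # frame deviates from the background model
            else:
                # adapt the background parameters only on background-matched frames
                mean = (1 - alpha) * mean + alpha * x
                var = (1 - alpha) * var + alpha * d2
        return foreground

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        quiet = rng.normal(0.0, 1.0, 500)        # steady background hum
        event = rng.normal(6.0, 1.0, 30)         # brief loud foreground event
        signal = np.concatenate([quiet, event, quiet])
        print(track_audio_background(signal).sum(), "frames flagged as foreground")

Updating the mean and variance only on background-matched frames keeps short foreground events (a shout, a door slam) from being absorbed into the background model, while the learning rate controls how quickly slow changes in the acoustic environment are adapted to.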


  • Related News & Events

    •  NEWS    ICASSP 2006: 3 publications by Ajay Divakaran and others
      Date: May 14, 2006
      Where: IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)
      Brief
      • The papers "Generative Process Tracking for Audio Analysis" by Radhakrishnan, R. and Divakaran, A., "Latent Dirichlet Decomposition for Single Channel Speaker Separation" by Raj, B., Shashanka, M.V.S. and Smaragdis, P. and "Secure Sound Classification: Gaussian Mixture Models" by Shashanka, M.V.S. and Smaragdis, P. were presented at the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP).