TR2014-122

Phase Processing for Single Channel Speech Enhancement: History and Recent Advances


    Gerkmann, T., Krawczyk, M., Le Roux, J., "Phase Processing for Single Channel Speech Enhancement: History and Recent Advances", IEEE Signal Processing Magazine, DOI: 10.1109/MSP.2014.2369251, Vol. 32, No. 2, pp. 55-66, March 2015.
    @article{Gerkmann2015mar,
      author = {Gerkmann, T. and Krawczyk, M. and {Le Roux}, J.},
      title = {Phase Processing for Single Channel Speech Enhancement: History and Recent Advances},
      journal = {IEEE Signal Processing Magazine},
      year = 2015,
      volume = 32,
      number = 2,
      pages = {55--66},
      month = mar,
      publisher = {IEEE},
      doi = {10.1109/MSP.2014.2369251},
      issn = {1053-5888},
      url = {https://www.merl.com/publications/TR2014-122}
    }
Research Areas: Artificial Intelligence, Speech & Audio

Abstract:

With the advancement of technology, both assisted listening devices and speech communication devices are becoming more portable and also more frequently used. As a consequence, the users of devices such as hearing aids, cochlear implants, and mobile telephones expect their devices to work robustly anywhere and at any time. This holds in particular for challenging noisy environments like a cafeteria, a restaurant, a subway, a factory, or in traffic. One way of making assisted listening devices robust to noise is to apply speech enhancement algorithms. To improve the corrupted speech, one can exploit spatial diversity through a constructive combination of microphone signals (so-called beamforming), as well as the different spectro-temporal properties of speech and noise. Here, we focus on single channel speech enhancement algorithms which rely on spectro-temporal properties. On the one hand, these algorithms can be employed when the miniaturization of devices only allows for the use of a single microphone. On the other hand, when multiple microphones are available, single channel algorithms can be employed as a postprocessor at the output of a beamformer. To exploit the short-term stationary properties of natural sounds, many of these approaches process the signal in a time-frequency representation, most frequently the short-time discrete Fourier transform (STFT) domain. In this domain, the coefficients of the signal are complex-valued and can therefore be represented by their absolute value (referred to in the literature both as STFT magnitude and STFT amplitude) and their phase. While the modeling and processing of the STFT magnitude has been the center of interest in the past three decades, the phase has been largely ignored. In this survey, we review the role of phase processing for speech enhancement in the context of assisted listening and speech communication devices. We explain why most of the research conducted in this field used to focus on estimating spectral magnitudes in the STFT domain, and why phase processing has recently been attracting increasing interest in the speech enhancement community. Furthermore, we review both early and recent methods for phase processing in speech enhancement. We aim at showing that phase processing is an exciting field of research with the potential to make assisted listening and speech communication devices more robust in acoustically challenging environments.

This work was supported by grant GE2538/2-1 of the German Research Foundation (DFG).
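
To make the STFT magnitude/phase decomposition described above concrete, the following minimal Python sketch performs magnitude-only enhancement and reuses the noisy phase at resynthesis. It is not taken from the paper; the toy signal, the noise estimate from the first frames, and the Wiener-type gain rule are illustrative assumptions.

    # Minimal illustrative sketch (not the authors' method): classic magnitude-only
    # enhancement in the STFT domain. The toy signal, the noise estimate, and the
    # Wiener-type gain are assumptions chosen for illustration.
    import numpy as np
    from scipy.signal import stft, istft

    fs = 16000                                              # assumed sampling rate
    rng = np.random.default_rng(0)
    clean = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)    # 1 s toy tone standing in for speech
    noisy = clean + 0.3 * rng.standard_normal(fs)           # additive white noise

    # STFT coefficients are complex: magnitude * exp(j * phase)
    f, t, Y = stft(noisy, fs=fs, nperseg=512)
    mag, phase = np.abs(Y), np.angle(Y)

    # Crude noise power estimate from the first frames (assumed noise-only)
    noise_psd = np.mean(mag[:, :5] ** 2, axis=1, keepdims=True)

    # Wiener-type gain applied to the magnitude only; the phase is left untouched,
    # which is exactly the long-standing practice the survey revisits
    snr = np.maximum(mag ** 2 / noise_psd - 1.0, 1e-3)
    gain = snr / (snr + 1.0)
    enhanced_mag = gain * mag

    # Resynthesis combines the enhanced magnitude with the unmodified noisy phase
    _, enhanced = istft(enhanced_mag * np.exp(1j * phase), fs=fs, nperseg=512)

Phase-aware approaches surveyed in the article would additionally re-estimate or modify the phase before the inverse STFT, rather than reusing the noisy phase as above.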