NEWS    MERL co-organizes the 2023 Sound Demixing (SDX2023) Challenge and Workshop

Date released: October 19, 2023


  • Date:

    January 23, 2023 - November 4, 2023

  • Where:

    International Society for Music Information Retrieval Conference (ISMIR)

  • Description:

    MERL Speech & Audio team members Gordon Wichern and Jonathan Le Roux co-organized the 2023 Sound Demixing Challenge along with researchers from Sony, Moises AI, Audioshake, and Meta.

    The SDX2023 Challenge was hosted on the AIcrowd platform and offered a prize pool of $42,000 distributed to the winning teams across two tracks: Music Demixing and Cinematic Sound Demixing. A unique aspect of this challenge was that the audio source separation models developed by participants were evaluated on non-public songs from Sony Music Entertainment Japan for the music demixing track and on movie soundtracks from Sony Pictures for the cinematic sound demixing track. The challenge ran from January 23 to May 1, 2023, and drew 884 participants across 68 teams, who submitted 2,828 source separation models. The winners will be announced at the SDX2023 Workshop, which will take place as a satellite event of the International Society for Music Information Retrieval Conference (ISMIR) in Milan, Italy, on November 4, 2023.

    MERL’s contribution to SDX2023 focused mainly on the cinematic demixing track. In addition to sponsoring the prizes awarded to the winning teams for that track, MERL provided the baseline system and initial training data, based on its Cocktail Fork separation model and Divide and Remaster (DnR) dataset, respectively. MERL researchers also contributed to a Town Hall kicking off the challenge, co-authored a scientific paper describing the challenge outcomes, and co-organized the SDX2023 Workshop.
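
    For readers curious what the baseline's three-stem separation (dialogue, music, and sound effects) looks like in code, the sketch below shows one minimal way such a separator could be wrapped, assuming a torch.nn.Module that maps a (batch, channels, samples) mixture to (batch, 3, channels, samples) stems. That interface, the function name, and the stem ordering are illustrative assumptions and do not describe MERL's released implementation or the challenge submission API.

        # Minimal sketch of three-stem soundtrack separation in the spirit of
        # the Cocktail Fork baseline (dialogue / music / sound effects).
        # The model interface assumed here -- any torch.nn.Module mapping a
        # (batch, channels, samples) mixture to (batch, 3, channels, samples)
        # stems -- is an illustrative assumption, not MERL's released API.
        import torch
        import torchaudio

        STEM_NAMES = ["dialogue", "music", "effects"]  # assumed stem ordering

        def separate_three_stems(model: torch.nn.Module,
                                 mixture_path: str,
                                 out_prefix: str = "") -> None:
            """Split a soundtrack mixture into three stems, one WAV file per stem."""
            mixture, sample_rate = torchaudio.load(mixture_path)  # (channels, samples)
            model.eval()
            with torch.no_grad():
                stems = model(mixture.unsqueeze(0)).squeeze(0)    # (3, channels, samples)
            for name, stem in zip(STEM_NAMES, stems):
                torchaudio.save(f"{out_prefix}{name}.wav", stem, sample_rate)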


  • External Link:

    https://www.aicrowd.com/challenges/sound-demixing-challenge-2023

  • MERL Contacts:

    Gordon Wichern; Jonathan Le Roux

  • Research Areas:

    Artificial Intelligence, Machine Learning, Speech & Audio

    •  Petermann, D., Wichern, G., Subramanian, A.S., Wang, Z.-Q., Le Roux, J., "Tackling the Cocktail Fork Problem for Separation and Transcription of Real-World Soundtracks", IEEE/ACM Transactions on Audio, Speech, and Language Processing, DOI: 10.1109/TASLP.2023.3290428, Vol. 31, pp. 2592-2605, September 2023.
      BibTeX TR2023-113 PDF
      @article{Petermann2023sep,
        author = {Petermann, Darius and Wichern, Gordon and Subramanian, Aswin Shanmugam and Wang, Zhong-Qiu and Le Roux, Jonathan},
        title = {Tackling the Cocktail Fork Problem for Separation and Transcription of Real-World Soundtracks},
        journal = {IEEE/ACM Transactions on Audio, Speech, and Language Processing},
        year = 2023,
        volume = 31,
        pages = {2592--2605},
        month = sep,
        doi = {10.1109/TASLP.2023.3290428},
        issn = {2329-9304},
        url = {https://www.merl.com/publications/TR2023-113}
      }
    •  Petermann, D., Wichern, G., Wang, Z.-Q., Le Roux, J., "The Cocktail Fork Problem: Three-Stem Audio Separation for Real-World Soundtracks", IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), DOI: 10.1109/ICASSP43922.2022.9746005, April 2022, pp. 526-530.
      BibTeX TR2022-022 PDF Software
      @inproceedings{Petermann2022apr,
        author = {Petermann, Darius and Wichern, Gordon and Wang, Zhong-Qiu and Le Roux, Jonathan},
        title = {The Cocktail Fork Problem: Three-Stem Audio Separation for Real-World Soundtracks},
        booktitle = {IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)},
        year = 2022,
        pages = {526--530},
        month = apr,
        doi = {10.1109/ICASSP43922.2022.9746005},
        url = {https://www.merl.com/publications/TR2022-022}
      }