Complexity Scaling for Speech Denoising
Accepted by ICASSP 2024
Authors: Hangting Chen, Jianwei Yu, Chao Weng
Tencent AI Lab, Audio and Speech Signal Processing Oteam
Email: chenhangting17@mails.ucas.ac.cn
Abstract: Computational complexity is critical when deploying deep learning-based speech denoising models for on-device applications.
Most prior research has focused on designing network architectures that satisfy a specific computational budget, often producing a distinct architecture for each complexity constraint.
This study conducts complexity scaling for speech denoising tasks, aiming to consolidate models with various complexities into a unified architecture.
We present a Multi-Path Transform-based (MPT) architecture that handles both low- and high-complexity scenarios. A series of MPT networks achieves strong performance across a wide range of computational complexities on the DNS challenge dataset.
Moreover, inspired by scaling experiments in natural language processing, we explore the empirical relationship between model performance and computational cost on the denoising task.
As the computational complexity, measured in multiply-accumulate operations (MACs), is scaled from 50M/s to 15G/s across MPT networks, we observe that PESQ-WB and SI-SNR increase linearly with the logarithm of MACs, an observation that may inform the understanding and application of complexity scaling in speech denoising.
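To make the reported log-linear trend concrete, below is a minimal sketch of fitting a scaling law of the form metric = a * log10(MACs) + b. The (MACs, PESQ-WB) operating points are hypothetical placeholders for illustration, not results from the paper.

```python
# Minimal sketch of the log-linear scaling fit: metric ~ a * log10(MACs) + b.
# The data points below are hypothetical placeholders, NOT numbers from the paper.
import numpy as np

# Hypothetical operating points: MACs per second and corresponding PESQ-WB scores.
macs_per_s = np.array([50e6, 200e6, 1e9, 4e9, 15e9])
pesq_wb = np.array([2.6, 2.8, 3.0, 3.2, 3.4])  # placeholder values

# Fit a line in log-complexity space, matching the observed linear relation
# between PESQ-WB (or SI-SNR) and the logarithm of computational cost.
a, b = np.polyfit(np.log10(macs_per_s), pesq_wb, deg=1)
print(f"slope: {a:.3f} PESQ-WB per decade of MACs, intercept: {b:.3f}")

# Extrapolate to an unseen complexity budget, e.g. 8G MACs/s.
predicted = a * np.log10(8e9) + b
print(f"predicted PESQ-WB at 8G MACs/s: {predicted:.2f}")
```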
Paper link: https://arxiv.org/pdf/2309.07757v1.pdf