DualSep: A Light-weight Dual-Encoder Convolutional Recurrent Network For Real-Time In-Car Speech Separation

1 Audio, Speech and Language Processing Group (ASLP@NPU), School of Computer Science, Northwestern Polytechnical University, Xi’an, China
2 Huawei Cloud

Abstract

The advancements in deep learning and voice-activated technologies have driven the development of human-vehicle interaction. Distributed microphone arrays are commonly used to better capture passengers' voices across different speech zones, resulting in an excessive number of audio channels. However, the limited computational resources of the in-car environment, coupled with the need for low latency, make in-car multi-channel speech separation a challenging problem. To mitigate this problem, we propose a lightweight framework that cascades digital signal processing (DSP) and neural networks (NN). We utilize beamforming and independent vector analysis (IVA) to reduce computational costs and provide spatial priors. We employ dual encoders for dual-branch modeling, with the spatial encoder capturing spatial cues and the spectral encoder preserving spectral information, facilitating spatial-spectral fusion. Our proposed system supports both streaming and non-streaming modes. Experimental results demonstrate the superiority of the proposed system across various metrics. With only 0.83M parameters and a real-time factor (RTF) of 0.39 on an Intel Core i7 (2.6 GHz) CPU, it effectively separates speech into distinct speech zones.
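To make the dual-branch idea in the abstract concrete, below is a minimal PyTorch sketch of a dual-encoder convolutional recurrent separator: one convolutional encoder ingests spectral features, a second ingests spatial features, a 1x1 convolution fuses the two branches, and a unidirectional GRU models time so the network can run frame by frame. Everything here is an illustrative assumption: the class name DualEncoderSeparator, the layer widths, and the 6-channel/257-bin feature shapes are hypothetical and do not reproduce the actual DualSep architecture or its 0.83M-parameter budget.

```python
import time
import torch
import torch.nn as nn

class DualEncoderSeparator(nn.Module):
    """Minimal sketch of a dual-encoder convolutional recurrent separator.

    NOTE: hypothetical layer sizes for illustration only; this is not
    the DualSep network described in the paper.
    """

    def __init__(self, n_zones=6, n_freq=257, hidden=16):
        super().__init__()
        # Spectral branch: preserves per-channel spectral detail.
        self.spec_enc = nn.Conv2d(n_zones, hidden, kernel_size=3, padding=1)
        # Spatial branch: captures inter-channel (speech-zone) cues.
        self.spat_enc = nn.Conv2d(n_zones, hidden, kernel_size=3, padding=1)
        # 1x1 conv fuses the two branches into one representation.
        self.fuse = nn.Conv2d(2 * hidden, hidden, kernel_size=1)
        # A unidirectional GRU keeps the model streamable (frame by frame).
        self.rnn = nn.GRU(hidden * n_freq, 128, batch_first=True)
        # One time-frequency mask per speech zone.
        self.mask = nn.Linear(128, n_zones * n_freq)

    def forward(self, spec, spat):
        # spec, spat: (batch, channels, frames, freq_bins)
        z = self.fuse(torch.cat([self.spec_enc(spec), self.spat_enc(spat)], dim=1))
        b, h, t, f = z.shape
        z, _ = self.rnn(z.permute(0, 2, 1, 3).reshape(b, t, h * f))
        masks = torch.sigmoid(self.mask(z))          # (batch, frames, zones*freq)
        return masks.view(b, t, -1, f).permute(0, 2, 1, 3)

# Toy forward pass plus a crude real-time-factor estimate
# (RTF = processing time / audio duration; a 10 ms STFT hop is assumed).
model = DualEncoderSeparator().eval()
spec = torch.randn(1, 6, 200, 257)   # ~2 s of frames, 6 beamformed channels
spat = torch.randn(1, 6, 200, 257)
with torch.no_grad():
    start = time.perf_counter()
    masks = model(spec, spat)
    rtf = (time.perf_counter() - start) / (200 * 0.01)
print(masks.shape, f"RTF ~ {rtf:.3f}")
```

A recurrent backbone like the GRU here is one common way to serve both modes mentioned in the abstract: its state is carried across frames, so the same weights can process a whole utterance offline or one frame at a time in streaming inference.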

Speech samples separated by the proposed DualSep

Sample 1 with only one speaker talking in zone 6 -- easily done 😊

Mono mixture of the 6-ch signals obtained by fixed beamforming from the 24-ch distributed mic-array recording (see the delay-and-sum sketch at the bottom of this page)

Separated speech in zone 1

Separated speech in zone 2

Separated speech in zone 3

Separated speech in zone 4

Separated speech in zone 5

Separated speech in zone 6

Sample 2 with speakers talking in zones 1, 3, and 5 simultaneously -- it's fine 🧐

Mono mixture of the 6-ch signals obtained by fixed beamforming from the 24-ch distributed mic-array recording

Separated speech in zone 1

Separated speech in zone 2

Separated speech in zone 3

Separated speech in zone 4

Separated speech in zone 5

Separated speech in zone 6

Sample 3 with speakers in all six speech zones talking simultaneously -- challenging but OK 😎

Mono mixture of the 6-ch signals obtained by fixed beamforming from the 24-ch distributed mic-array recording

Separated speech in zone 1

Separated speech in zone 2

Separated speech in zone 3

Separated speech in zone 4

Separated speech in zone 5

Separated speech in zone 6
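The mixture captions above refer to a fixed-beamforming front end that folds the 24 distributed microphones into 6 zone channels before separation. As a rough illustration of what such a stage can look like (not the actual DSP used in the paper), here is a minimal delay-and-sum sketch in NumPy; the function name fixed_delay_and_sum, the 4-mics-per-zone grouping, and the integer sample delays are all assumptions.

```python
import numpy as np

def fixed_delay_and_sum(x, delays, groups):
    """Hypothetical fixed delay-and-sum beamformer.

    x:      (n_mics, n_samples) time-domain recording (e.g. 24 channels)
    delays: (n_mics,) integer sample delays steering each mic toward
            its zone (assumed precomputed from the car geometry)
    groups: list of mic-index lists, one group per speech zone
    Returns one beamformed channel per zone, e.g. 24 mics -> 6 channels.
    """
    n_mics, n_samples = x.shape
    aligned = np.zeros_like(x)
    for m in range(n_mics):
        d = delays[m]
        aligned[m, d:] = x[m, : n_samples - d]   # delay mic m by d samples
    # Average the aligned mics within each zone group.
    return np.stack([aligned[g].mean(axis=0) for g in groups])

# Toy example: 24 mics grouped 4-per-zone into 6 zone channels.
x = np.random.randn(24, 16000)
delays = np.zeros(24, dtype=int)        # zero delays = plain averaging
groups = [list(range(4 * z, 4 * z + 4)) for z in range(6)]
y = fixed_delay_and_sum(x, delays, groups)
print(y.shape)  # (6, 16000)
```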