Advances in deep learning and voice-activated technologies have driven the development of human-vehicle interaction. Distributed microphone arrays are commonly used to better capture passengers' voices across different speech zones, resulting in a large number of audio channels. However, the limited computational resources of the in-car environment, coupled with the need for low latency, make in-car multi-channel speech separation a challenging problem. To mitigate the problem, we propose a lightweight framework that cascades digital signal processing (DSP) and neural networks (NN). We utilize beamforming and independent vector analysis (IVA) to reduce computational costs and provide spatial priors. We employ dual encoders for dual-branch modeling, with the spatial encoder capturing spatial cues and the spectral encoder preserving spectral information, facilitating spatial-spectral fusion. Our proposed system supports both streaming and non-streaming modes. Experimental results demonstrate the superiority of the proposed system across various metrics. With only 0.83M parameters and a real-time factor (RTF) of 0.39 on an Intel Core i7 (2.6GHz) CPU, it effectively separates speech into distinct speech zones.
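To make the DSP-NN cascade concrete, below is a minimal PyTorch sketch of the dual-branch neural stage, assuming the DSP front-end (fixed beamforming plus IVA) has already reduced the 24-channel recording to a 6-channel complex spectrogram. All module names, layer sizes, and the choice of a unidirectional GRU for causal (streaming-capable) temporal modeling are illustrative assumptions, not the paper's actual architecture.

```python
# Sketch of the dual-branch spatial-spectral separator. Assumes the
# DSP front-end (fixed beamforming + IVA) has produced a 6-channel
# complex STFT; all hyperparameters here are illustrative.
import torch
import torch.nn as nn

class DualBranchSeparator(nn.Module):
    """Spatial branch over all beamformed channels, spectral branch
    over a reference channel, fused before per-zone mask estimation."""

    def __init__(self, n_fft=512, n_ch=6, n_zones=6, hidden=128):
        super().__init__()
        n_bins = n_fft // 2 + 1
        # Spatial encoder: sees real+imag parts of every channel,
        # so it can learn inter-channel (spatial) cues.
        self.spatial_enc = nn.Conv1d(2 * n_ch * n_bins, hidden, 1)
        # Spectral encoder: sees the magnitude of one reference
        # channel, preserving spectral detail.
        self.spectral_enc = nn.Conv1d(n_bins, hidden, 1)
        # Fusion + temporal modeling; a unidirectional GRU keeps the
        # model causal, i.e. usable in a streaming mode.
        self.rnn = nn.GRU(2 * hidden, hidden, batch_first=True)
        self.mask_head = nn.Conv1d(hidden, n_zones * n_bins, 1)
        self.n_zones = n_zones

    def forward(self, spec):
        # spec: complex STFT of the beamformed signals, (B, C, F, T)
        B, C, F, T = spec.shape
        ri = torch.view_as_real(spec)                    # (B, C, F, T, 2)
        spatial_in = ri.permute(0, 1, 4, 2, 3).reshape(B, 2 * C * F, T)
        spectral_in = spec[:, 0].abs()                   # (B, F, T)
        h = torch.cat([self.spatial_enc(spatial_in),
                       self.spectral_enc(spectral_in)], dim=1)
        h, _ = self.rnn(h.transpose(1, 2))               # (B, T, hidden)
        masks = torch.sigmoid(self.mask_head(h.transpose(1, 2)))
        masks = masks.view(B, self.n_zones, F, T)
        # Apply each zone's mask to the reference channel.
        return masks * spec[:, :1]                       # (B, zones, F, T)

model = DualBranchSeparator()
mix = torch.randn(1, 6, 257, 100, dtype=torch.complex64)
zones = model(mix)  # (1, 6, 257, 100): one complex spectrogram per zone
```

In this sketch, non-streaming inference simply processes the full utterance at once, while streaming inference would feed the GRU frame by frame while carrying its hidden state; the mask-based output per zone is one common design choice, not necessarily the authors'.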
[Audio demos: three examples. Each example presents the mono mixture of the 6-ch signals obtained by fixed beamforming from the 24-ch distributed mic-array recording, followed by the separated speech for zones 1 through 6.]