Multi-Stream LSTM-HMM Decoding and Histogram Equalization for Noise Robust Keyword Spotting / Woellmer, M.; Marchi, E.; Squartini, S.; Schuller, B. - In: COGNITIVE NEURODYNAMICS. - ISSN 1871-4080. - Vol. 5, Issue 3 (2011), pp. 253-264. [DOI: 10.1007/s11571-011-9166-9]
Multi-Stream LSTM-HMM Decoding and Histogram Equalization for Noise Robust Keyword Spotting
Squartini, Stefano
2011
Abstract
Highly spontaneous, conversational, and potentially emotional and noisy speech is known to be a challenge for today's automatic speech recognition (ASR) systems, which highlights the need for advanced algorithms that improve speech features and models. Histogram equalization is an efficient method to reduce the mismatch between clean and noisy conditions by normalizing all moments of the probability distribution of the feature vector components. In this article, we propose to combine histogram equalization and multi-condition training for robust keyword detection in noisy speech. To better cope with conversational speaking styles, we show how contextual information can be effectively exploited in a multi-stream ASR framework that dynamically models context-sensitive phoneme estimates generated by a long short-term memory (LSTM) neural network. The proposed techniques are evaluated on the SEMAINE database, a corpus containing emotionally colored conversations with a cognitive system for "Sensitive Artificial Listening".
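For illustration, the histogram equalization step described above can be sketched in a few lines of Python: each feature dimension's empirical distribution is mapped onto a reference distribution via quantile mapping. The standard-normal reference and all names below are assumptions for this sketch, not details taken from the paper.

import numpy as np
from scipy.stats import norm

def histogram_equalize(features):
    """Map each feature dimension onto a standard-normal reference.

    features: array of shape (num_frames, num_dims), e.g. MFCC vectors.
    Returns an array of the same shape whose per-dimension empirical
    distribution approximates N(0, 1).
    """
    n_frames, n_dims = features.shape
    equalized = np.empty_like(features, dtype=float)
    for d in range(n_dims):
        # Rank each frame's value within its dimension to obtain the
        # empirical CDF, offset by 0.5 to stay strictly inside (0, 1).
        ranks = np.argsort(np.argsort(features[:, d]))
        ecdf = (ranks + 0.5) / n_frames
        # Invert the reference (Gaussian) CDF at the empirical quantiles.
        equalized[:, d] = norm.ppf(ecdf)
    return equalized

Unlike cepstral mean and variance normalization, which matches only the first two moments, this quantile mapping aligns the full distribution, which is what "normalizing all moments" refers to above.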
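The multi-stream LSTM-HMM idea can likewise be sketched as a per-frame, log-linear combination of two observation streams during decoding. The fixed stream weight lam and the function below are hypothetical simplifications of the general multi-stream technique; the paper's framework models the LSTM's context-sensitive phoneme estimates dynamically within the decoder rather than with a static weight.

import numpy as np

def combine_streams(hmm_loglik, lstm_posteriors, lam=0.5):
    """Weighted log-linear combination of two observation streams.

    hmm_loglik:      (num_frames, num_states) acoustic log-likelihoods.
    lstm_posteriors: (num_frames, num_states) framewise LSTM phoneme
                     posteriors, assumed aligned to the same state set.
    Returns combined per-frame, per-state scores for Viterbi decoding.
    """
    eps = 1e-10  # guard against log(0) for near-zero posteriors
    lstm_logscore = np.log(lstm_posteriors + eps)
    return lam * hmm_loglik + (1.0 - lam) * lstm_logscore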