Browsing by Author "Žďánský Jindřich"
Now showing 1 - 16 of 16
- Adaptive blind audio source extraction supervised by dominant speaker identification using x-vectors (IEEE, 2020) Janský Jakub; Málek Jiří; Čmejla Jaroslav; Kounovský Tomáš; Koldovský Zbyněk; Žďánský Jindřich
- An Approach to Online Speaker Change Point Detection Using DNNs and WFSTs (ISCA, 2019) Matějů Lukáš; Červa Petr; Žďánský Jindřich
- Blind extraction of moving audio source in a challenging environment supported by speaker identification via X-vectors (IEEE, 2021) Málek Jiří; Janský Jakub; Kounovský Tomáš; Koldovský Zbyněk; Žďánský Jindřich
- Compensation of Nonlinear Distortions in Speech for Automatic Recognition (Institute of Electrical and Electronics Engineers Inc., 2015) Málek Jiří; Silovský Jan; Červa Petr; Koldovský Zbyněk; Nouza Jan; Žďánský Jindřich
- Investigation into the use of deep neural networks for LVCSR of Czech (IEEE, 2015) Matějů Lukáš; Červa Petr; Žďánský Jindřich
- Investigation into the Use of WFSTs and DNNs for Speech Activity Detection in Broadcast Data Transcription (Springer Verlag, 2017) Matějů Lukáš; Červa Petr; Žďánský Jindřich
- Multilingual Multimedia Monitoring and Analyzing Platform (2017) Nouza Jan; Červa Petr; Žďánský Jindřich; Čihák Stanislav; Bureš Kamil
- Optical Character Recognition for Audio-Visual Broadcast Transcription System (IEEE, 2020) Chaloupka Josef; Paleček Karel; Červa Petr; Žďánský Jindřich
- Robust Automatic Recognition of Speech with Background Music (Institute of Electrical and Electronics Engineers Inc., 2017) Málek Jiří; Žďánský Jindřich; Červa Petr
  This paper addresses the task of Automatic Speech Recognition (ASR) with music in the background, where recognition accuracy may deteriorate significantly. To improve the robustness of ASR in this task, e.g. for broadcast news transcription or subtitle creation, we adopt two approaches: 1) multi-condition training of the acoustic models and 2) denoising autoencoders followed by acoustic model training on the preprocessed data. In the latter case, two types of autoencoders are considered: a fully connected and a convolutional network (a minimal sketch of such an autoencoder appears after this list). The presented experimental results show that all of the investigated techniques significantly improve the recognition of speech distorted by music. For example, in the case of artificial mixtures of speech and electronic music at a low Signal-to-Noise Ratio (SNR) of 0 dB, we achieved an absolute accuracy improvement of 35.8%. For real-world broadcast news at a high SNR (about 10 dB), we achieved an improvement of 2.4%. An important advantage of the studied approaches is that they do not deteriorate the accuracy in scenarios with clean speech (the decrease is about 1%).
- Robust Recognition of Conversational Telephone Speech via Multi-Condition Training and Data Augmentation (Springer Verlag, 2018) Málek Jiří; Žďánský Jindřich; Červa Petr
- Robust Recognition of Speech with Background Music in Acoustically Under-Resourced Scenarios (IEEE, 2018) Málek Jiří; Žďánský Jindřich; Červa Petr
- Speech Activity Detection in Online Broadcast Transcription Using Deep Neural Networks and Weighted Finite State Transducers (Institute of Electrical and Electronics Engineers Inc., 2017) Matějů Lukáš; Červa Petr; Žďánský Jindřich; Málek Jiří
  In this paper, a new approach to online Speech Activity Detection (SAD) is proposed. This approach is designed for use in a system that carries out 24/7 transcription of radio/TV broadcasts containing a large amount of non-speech segments, such as advertisements or music. To improve the robustness of detection, we adopt Deep Neural Networks (DNNs) trained on artificially created mixtures of speech and non-speech signals at desired levels of signal-to-noise ratio (SNR). An integral part of our approach is an online decoder based on Weighted Finite State Transducers (WFSTs); this decoder smooths the output of the DNN (a simplified smoothing sketch appears after this list). The employed transduction model is context-based, i.e., both speech and non-speech events are modeled using sequences of states. The presented experimental results show that our approach yields state-of-the-art results on the standardized QUT-NOISE-TIMIT dataset for SAD and, at the same time, is capable of a) operating with low latency and b) reducing the computational demands and error rate of the target transcription system.
- Study on the use of deep neural networks for speech activity detection in broadcast recordings (SciTePress, 2016) Matějů Lukáš; Červa Petr; Žďánský Jindřich
- Unique Software Technological Platform for Re-Transcription of Archives of Historical and Contemporary Broadcasts of ČRo (Czech Radio) and Their Opening Up via the Web (2014) Nouza Jan; Červa Petr; Žďánský Jindřich; Blavka Karel; Boháč Marek; Silovský Jan; Chaloupka Josef; Kuchařová Michaela; Málek Jiří
- Very Fast Keyword Spotting System with Real Time Factor below 0.01 (Springer Nature Switzerland, 2020) Nouza Jan; Červa Petr; Žďánský Jindřich
- Voice-activity and overlapped speech detection using x-vectors (Springer Nature Switzerland, 2020) Málek Jiří; Žďánský Jindřich
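The abstract of "Robust Automatic Recognition of Speech with Background Music" above describes a denoising-autoencoder front end that maps features of music-corrupted speech to estimates of the clean-speech features before acoustic model training. The following is a minimal PyTorch sketch of such a convolutional autoencoder; the layer sizes, feature shapes, and training settings are illustrative assumptions, not the configuration published in the paper.

```python
# Minimal sketch of a convolutional denoising autoencoder for speech
# features. All sizes below are illustrative guesses, not the paper's setup.
import torch
import torch.nn as nn

class ConvDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress the (mel x frame) feature map.
        self.encode = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Decoder: reconstruct clean-speech features of the same shape.
        self.decode = nn.Sequential(
            nn.Conv2d(32, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, noisy):                 # noisy: (batch, 1, mels, frames)
        return self.decode(self.encode(noisy))

model = ConvDenoiser()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on random stand-in data: the target is the
# clean-speech feature map that corresponds to the music-corrupted input.
noisy = torch.randn(8, 1, 40, 100)            # speech + music features
clean = torch.randn(8, 1, 40, 100)            # matching clean features
optimizer.zero_grad()
loss = loss_fn(model(noisy), clean)
loss.backward()
optimizer.step()
```

In the approach described by the abstract, the acoustic model is then trained on the autoencoder's output rather than on the raw corrupted features.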
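The SAD abstract above pairs frame-level DNN posteriors with a WFST-based online decoder that smooths the label sequence. As a simplified, offline stand-in for that decoder, the sketch below smooths two-class speech/non-speech posteriors with a plain two-state Viterbi pass; the transition probability and the toy posterior track are assumptions chosen for illustration, not values from the paper.

```python
# Simplified stand-in for the WFST smoothing: a two-state Viterbi pass over
# frame-level DNN posteriors for [non-speech, speech].
import numpy as np

def viterbi_smooth(posteriors: np.ndarray, stay_prob: float = 0.99) -> np.ndarray:
    """posteriors: (n_frames, 2) DNN outputs for [non-speech, speech].
    Returns the most likely smoothed 0/1 label sequence."""
    log_trans = np.log(np.array([[stay_prob, 1 - stay_prob],
                                 [1 - stay_prob, stay_prob]]))
    log_obs = np.log(posteriors + 1e-12)
    n_frames = len(posteriors)
    score = np.zeros((n_frames, 2))
    back = np.zeros((n_frames, 2), dtype=int)
    score[0] = log_obs[0]
    for t in range(1, n_frames):
        trans_scores = score[t - 1][:, None] + log_trans   # (prev, cur)
        back[t] = trans_scores.argmax(axis=0)
        score[t] = trans_scores.max(axis=0) + log_obs[t]
    labels = np.zeros(n_frames, dtype=int)
    labels[-1] = score[-1].argmax()
    for t in range(n_frames - 2, -1, -1):                  # backtrace
        labels[t] = back[t + 1, labels[t + 1]]
    return labels

# Example: a posterior track with a one-frame dip that plain thresholding
# would mislabel as non-speech; the Viterbi pass keeps the segment intact.
p_speech = np.array([0.9, 0.85, 0.2, 0.9, 0.88, 0.1, 0.05])
post = np.stack([1 - p_speech, p_speech], axis=1)
print(viterbi_smooth(post))   # -> [1 1 1 1 1 0 0]
```

The high self-transition probability is what penalizes rapid speech/non-speech flips; the paper's context-based WFST model achieves a similar effect with sequences of states and supports low-latency online operation, which this offline toy does not attempt to reproduce.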