Albert Zeyer
Human Language Technology and Pattern Recognition Group, RWTH Aachen University
Verified email at cs.rwth-aachen.de - Homepage
Title · Cited by · Year
Improved Training of End-to-end Attention Models for Speech Recognition
A Zeyer, K Irie, R Schlüter, H Ney
Proc. Interspeech 2018, 7-11, 2018
298 · 2018
RWTH ASR Systems for LibriSpeech: Hybrid vs Attention – w/o Data Augmentation
C Lüscher, E Beck, K Irie, M Kitza, W Michel, A Zeyer, R Schlüter, H Ney
arXiv preprint arXiv:1905.03072, 2019
294 · 2019
A comprehensive study of deep bidirectional LSTM RNNs for acoustic modeling in speech recognition
A Zeyer, P Doetsch, P Voigtlaender, R Schlüter, H Ney
2017 IEEE international conference on acoustics, speech and signal …, 2017
215 · 2017
A comparison of Transformer and LSTM encoder decoder models for ASR
A Zeyer, P Bahar, K Irie, R Schlüter, H Ney
IEEE Automatic Speech Recognition and Understanding Workshop, Sentosa, Singapore, 2019
211 · 2019
Language modeling with deep transformers
K Irie, A Zeyer, R Schlüter, H Ney
arXiv preprint arXiv:1905.04226, 2019
192 · 2019
RETURNN as a generic flexible neural toolkit with application to translation and speech recognition
A Zeyer, T Alkhouli, H Ney
arXiv preprint arXiv:1805.05225, 2018
86 · 2018
RETURNN: The RWTH extensible training framework for universal recurrent neural networks
P Doetsch, A Zeyer, P Voigtlaender, I Kulikov, R Schlüter, H Ney
2017 IEEE International Conference on Acoustics, Speech and Signal …, 2017
81 · 2017
Generating synthetic audio data for attention-based speech recognition systems
N Rossenbach, A Zeyer, R Schlüter, H Ney
ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and …, 2020
78 · 2020
Towards Online-Recognition with Deep Bidirectional LSTM Acoustic Models
A Zeyer, R Schlüter, H Ney
Interspeech, 3424-3428, 2016
62 · 2016
A new training pipeline for an improved neural transducer
A Zeyer, A Merboldt, R Schlüter, H Ney
arXiv preprint arXiv:2005.09319, 2020
54 · 2020
On using specaugment for end-to-end speech translation
P Bahar, A Zeyer, R Schlüter, H Ney
arXiv preprint arXiv:1911.08876, 2019
53 · 2019
The RWTH/UPB/FORTH system combination for the 4th CHiME challenge evaluation
T Menne
Deutsche Nationalbibliothek, 2016
52 · 2016
CTC in the Context of Generalized Full-Sum HMM Training
A Zeyer, E Beck, R Schlüter, H Ney
INTERSPEECH, 944-948, 2017
50 · 2017
Training language models for long-span cross-sentence evaluation
K Irie, A Zeyer, R Schlüter, H Ney
2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU …, 2019
46 · 2019
Investigating methods to improve language model integration for attention-based encoder-decoder ASR models
M Zeineldeen, A Glushko, W Michel, A Zeyer, R Schlüter, H Ney
arXiv preprint arXiv:2104.05544, 2021
42 · 2021
Bidirectional decoder networks for attention-based end-to-end offline handwriting recognition
P Doetsch, A Zeyer, H Ney
2016 15th International Conference on Frontiers in Handwriting Recognition …, 2016
40 · 2016
Librispeech transducer model with internal language model prior correction
A Zeyer, A Merboldt, W Michel, R Schlüter, H Ney
arXiv preprint arXiv:2104.03006, 2021
27 · 2021
Why does CTC result in peaky behavior?
A Zeyer, R Schlüter, H Ney
arXiv preprint arXiv:2105.14849, 2021
23 · 2021
An Analysis of Local Monotonic Attention Variants.
A Merboldt, A Zeyer, R Schlüter, H Ney
Interspeech, 1398-1402, 2019
20 · 2019
A comprehensive analysis on attention models
A Zeyer, A Merboldt, R Schlüter, H Ney
Universitätsbibliothek der RWTH Aachen, 2019
20 · 2019