Zhiyun Lu
Verified email at apple.com
Title
Cited by
Year
Learning compact recurrent neural networks
Z Lu, V Sindhwani, TN Sainath
Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International …, 2016
Cited by 109 · 2016
Speech sentiment analysis via pre-trained features from end-to-end asr models
Z Lu, L Cao, Y Zhang, CC Chiu, J Fan
ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and …, 2020
Cited by 81 · 2020
How to scale up kernel methods to be as good as deep neural nets
Z Lu, A May, K Liu, AB Garakani, D Guo, A Bellet, L Fan, M Collins, ...
arXiv preprint arXiv:1411.4000, 2014
Cited by 75* · 2014
Kernel approximation methods for speech recognition
A May, AB Garakani, Z Lu, D Guo, K Liu, A Bellet, L Fan, M Collins, D Hsu, ...
The Journal of Machine Learning Research 20 (1), 2121-2156, 2019
Cited by 34 · 2019
A large scale speech sentiment corpus
E Chen, Z Lu, H Xu, L Cao, Y Zhang, J Fan
Proceedings of the Twelfth Language Resources and Evaluation Conference …, 2020
Cited by 28 · 2020
Improving streaming automatic speech recognition with non-streaming model distillation on unsupervised data
T Doutre, W Han, M Ma, Z Lu, CC Chiu, R Pang, A Narayanan, A Misra, ...
ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and …, 2021
Cited by 26 · 2021
Exploring targeted universal adversarial perturbations to end-to-end asr models
Z Lu, W Han, Y Zhang, L Cao
arXiv preprint arXiv:2104.02757, 2021
Cited by 19 · 2021
Selecting β-Divergence for Nonnegative Matrix Factorization by Score Matching
Z Lu, Z Yang, E Oja
Artificial Neural Networks and Machine Learning–ICANN 2012: 22nd …, 2012
Cited by 19 · 2012
E2E Segmenter: Joint Segmenting and Decoding for Long-Form ASR
W Ronny Huang, S Chang, D Rybach, R Prabhavalkar, TN Sainath, ...
arXiv e-prints, arXiv: 2204.10749, 2022
Cited by 18* · 2022
Hyper-parameter tuning under a budget constraint
Z Lu, CK Chiang, F Sha
arXiv preprint arXiv:1902.00532, 2019
Cited by 16 · 2019
Less is more: Removing text-regions improves clip training efficiency and robustness
L Cao, B Zhang, C Chen, Y Yang, X Du, W Zhang, Z Lu, Y Zheng
arXiv preprint arXiv:2305.05095, 2023
Cited by 15 · 2023
Improving the fusion of acoustic and text representations in RNN-T
C Zhang, B Li, Z Lu, TN Sainath, S Chang
ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and …, 2022
Cited by 15 · 2022
A comparison between deep neural nets and kernel acoustic models for speech recognition
Z Lu, D Guo, AB Garakani, K Liu, A May, A Bellet, L Fan, M Collins, ...
2016 IEEE International Conference on Acoustics, Speech and Signal …, 2016
Cited by 14 · 2016
Uncertainty estimation with infinitesimal jackknife, its distribution and mean-field approximation
Z Lu, E Ie, F Sha
arXiv preprint arXiv:2006.07584, 2020
Cited by 12 · 2020
Unsupervised data selection via discrete speech representation for asr
Z Lu, Y Wang, Y Zhang, W Han, Z Chen, P Haghani
arXiv preprint arXiv:2204.01981, 2022
Cited by 10 · 2022
Input length matters: Improving RNN-T and MWER training for long-form telephony speech recognition
Z Lu, Y Pan, T Doutre, P Haghani, L Cao, R Prabhavalkar, C Zhang, ...
arXiv preprint arXiv:2110.03841, 2021
Cited by 10 · 2021
Mean-field approximation to Gaussian-softmax integral with application to uncertainty estimation
Z Lu, E Ie, F Sha
arXiv preprint arXiv:2006.07584, 2020
Cited by 7 · 2020
Apple intelligence foundation language models
T Gunter, Z Wang, C Wang, R Pang, A Narayanan, A Zhang, B Zhang, ...
arXiv preprint arXiv:2407.21075, 2024
Cited by 5 · 2024
Direct large language model alignment through self-rewarding contrastive prompt distillation
A Liu, H Bai, Z Lu, X Kong, S Wang, J Shan, M Cao, L Wen
arXiv preprint arXiv:2402.11907, 2024
Cited by 5 · 2024
Input length matters: An empirical study of RNN-T and MWER training for long-form telephony speech recognition
Z Lu, Y Pan, T Doutre, L Cao, R Prabhavalkar, C Zhang, T Strohman
arXiv preprint arXiv:2110.03841, 2021
Cited by 5 · 2021
Articles 1–20