Frank Qiaochu Zhang
Title | Cited by | Year
Transformer-based acoustic modeling for hybrid speech recognition
Y Wang, A Mohamed, D Le, C Liu, A Xiao, J Mahadeokar, H Huang, ...
ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and …, 2020
Cited by 243 | 2020
Emformer: Efficient memory transformer based acoustic model for low latency streaming speech recognition
Y Shi, Y Wang, C Wu, CF Yeh, J Chan, F Zhang, D Le, M Seltzer
ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and …, 2021
Cited by 158 | 2021
Streaming transformer-based acoustic models using self-attention with augmented memory
C Wu, Y Wang, Y Shi, CF Yeh, F Zhang
arXiv preprint arXiv:2005.08042, 2020
Cited by 67 | 2020
Improving RNN transducer based ASR with auxiliary tasks
C Liu, F Zhang, D Le, S Kim, Y Saraf, G Zweig
2021 IEEE Spoken Language Technology Workshop (SLT), 172-179, 2021
Cited by 44 | 2021
Deja-vu: Double feature presentation and iterated loss in deep transformer networks
A Tjandra, C Liu, F Zhang, X Zhang, Y Wang, G Synnaeve, S Nakamura, ...
ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and …, 2020
Cited by 43 | 2020
Improved language identification through cross-lingual self-supervised learning
A Tjandra, DG Choudhury, F Zhang, K Singh, A Conneau, A Baevski, ...
ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and …, 2022
Cited by 41 | 2022
Faster, Simpler and More Accurate Hybrid ASR Systems Using Wordpieces
F Zhang, Y Wang, X Zhang, C Liu, Y Saraf, G Zweig
Interspeech 2020, 2020
Cited by 29 | 2020
Weak-attention suppression for transformer based speech recognition
Y Shi, Y Wang, C Wu, C Fuegen, F Zhang, D Le, CF Yeh, ML Seltzer
arXiv preprint arXiv:2005.09137, 2020
Cited by 27 | 2020
Multilingual graphemic hybrid ASR with massive data augmentation
C Liu, Q Zhang, X Zhang, K Singh, Y Saraf, G Zweig
arXiv preprint arXiv:1909.06522, 2019
Cited by 26 | 2019
Contextualizing ASR lattice rescoring with hybrid pointer network language model
DR Liu, C Liu, F Zhang, G Synnaeve, Y Saraf, G Zweig
arXiv preprint arXiv:2005.07394, 2020
Cited by 23 | 2020
Benchmarking LF-MMI, CTC and RNN-T criteria for streaming ASR
X Zhang, F Zhang, C Liu, K Schubert, J Chan, P Prakash, J Liu, CF Yeh, ...
2021 IEEE spoken language technology workshop (SLT), 46-51, 2021
Cited by 21 | 2021
Scaling ASR improves zero and few shot learning
A Xiao, W Zheng, G Keren, D Le, F Zhang, C Fuegen, O Kalinli, Y Saraf, ...
arXiv preprint arXiv:2111.05948, 2021
Cited by 19 | 2021
Accent-robust automatic speech recognition using supervised and unsupervised wav2vec embeddings
J Li, V Manohar, P Chitkara, A Tjandra, M Picheny, F Zhang, X Zhang, ...
arXiv preprint arXiv:2110.03520, 2021
Cited by 17 | 2021
Transformer in action: a comparative study of transformer-based acoustic models for large scale speech recognition applications
Y Wang, Y Shi, F Zhang, C Wu, J Chan, CF Yeh, A Xiao
ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and …, 2021
Cited by 17 | 2021
Streaming attention-based models with augmented memory for end-to-end speech recognition
CF Yeh, Y Wang, Y Shi, C Wu, F Zhang, J Chan, ML Seltzer
2021 IEEE Spoken Language Technology Workshop (SLT), 8-14, 2021
Cited by 11 | 2021
On lattice-free boosted MMI training of HMM and CTC-based full-context ASR models
X Zhang, V Manohar, D Zhang, F Zhang, Y Shi, N Singhal, J Chan, ...
2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU …, 2021
Cited by 10 | 2021
Training ASR models by generation of contextual information
K Singh, D Okhonko, J Liu, Y Wang, F Zhang, R Girshick, S Edunov, ...
ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and …, 2020
Cited by 6 | 2020
Deja-vu: Double feature presentation in deep transformer networks
A Tjandra, C Liu, F Zhang, X Zhang, Y Wang, G Synnaeve, S Nakamura, ...
Submitted to ICASSP, 2020
Cited by 3 | 2020
Multilingual ASR with massive data augmentation
C Liu, Q Zhang, X Zhang, K Singh, Y Saraf, G Zweig
arXiv preprint arXiv:1909.06522, 2019
Cited by 3 | 2019
Efficient memory transformer based acoustic model for low latency streaming speech recognition
Yangyang Shi, Yongqiang Wang, Chunyang Wu, Ching-Feng Yeh, Julian Yui ...
US Patent 11,646,017, 2023
— | 2023