Jing Liu
PhD Candidate, Monash University
Verified email at monash.edu - Homepage
Title · Cited by · Year
Discrimination-aware channel pruning for deep neural networks
Z Zhuang, M Tan, B Zhuang, J Liu, Y Guo, Q Wu, J Huang, J Zhu
Advances in Neural Information Processing Systems 31, 2018
Cited by 696 · 2018
Scalable vision transformers with hierarchical pooling
Z Pan, B Zhuang, J Liu, H He, J Cai
Proceedings of the IEEE/CVF International Conference on Computer Vision, 377-386, 2021
Cited by 135 · 2021
Generative low-bitwidth data free quantization
S Xu, H Li, B Zhuang, J Liu, J Cao, C Liang, M Tan
Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23 …, 2020
Cited by 109 · 2020
Discrimination-aware Network Pruning for Deep Model Compression
J Liu, B Zhuang, Z Zhuang, Y Guo, J Huang, J Zhu, M Tan
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) 1, 1-15, 2021
Cited by 106 · 2021
Less is more: Pay less attention in vision transformers
Z Pan, B Zhuang, H He, J Liu, J Cai
Proceedings of the AAAI Conference on Artificial Intelligence 36 (2), 2035-2043, 2022
Cited by 61 · 2022
Effective training of convolutional neural networks with low-bitwidth weights and activations
B Zhuang, M Tan, J Liu, L Liu, I Reid, C Shen
IEEE Transactions on Pattern Analysis and Machine Intelligence 44 (10), 6140 …, 2021
Cited by 37 · 2021
Pruning self-attentions into convolutional layers in single path
H He, J Cai, J Liu, Z Pan, J Zhang, D Tao, B Zhuang
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024
Cited by 29 · 2024
Aqd: Towards accurate quantized object detection
P Chen, J Liu, B Zhuang, M Tan, C Shen
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2021
Cited by 28 · 2021
A survey on efficient training of transformers
B Zhuang, J Liu, Z Pan, H He, Y Weng, C Shen
arXiv preprint arXiv:2302.01107, 2023
Cited by 24 · 2023
Ecoformer: Energy-saving attention with linear complexity
J Liu, Z Pan, H He, J Cai, B Zhuang
NeurIPS Spotlight, 2022
Cited by 18 · 2022
Deep Transferring Quantization
Z Xie, Z Wen, J Liu, Z Liu, X Wu, M Tan
European Conference on Computer Vision (ECCV) 2020, 2020
Cited by 16* · 2020
Conditional automated channel pruning for deep neural networks
Y Liu, Y Guo, J Guo, L Jiang, J Chen
IEEE Signal Processing Letters 28, 1275-1279, 2021
Cited by 15 · 2021
Ptqd: Accurate post-training quantization for diffusion models
Y He, L Liu, J Liu, W Wu, H Zhou, B Zhuang
Advances in Neural Information Processing Systems 36, 2024
Cited by 14 · 2024
Mesa: A memory-saving training framework for transformers
Z Pan, P Chen, H He, J Liu, J Cai, B Zhuang
arXiv preprint arXiv:2111.11124, 2021
Cited by 12 · 2021
Sharpness-aware quantization for deep neural networks
J Liu, J Cai, B Zhuang
arXiv preprint arXiv:2111.12273, 2021
Cited by 11 · 2021
Qllm: Accurate and efficient low-bitwidth quantization for large language models
J Liu, R Gong, X Wei, Z Dong, J Cai, B Zhuang
arXiv preprint arXiv:2310.08041, 2023
Cited by 9 · 2023
Dynamic Focus-aware Positional Queries for Semantic Segmentation
H He, J Cai, Z Pan, J Liu, J Zhang, D Tao, B Zhuang
CVPR 2023, 2022
Cited by 8 · 2022
Single-path bit sharing for automatic loss-aware model compression
J Liu, B Zhuang, P Chen, C Shen, J Cai, M Tan
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2023
Cited by 6* · 2023
Efficientdm: Efficient quantization-aware fine-tuning of low-bit diffusion models
Y He, J Liu, W Wu, H Zhou, B Zhuang
arXiv preprint arXiv:2310.03270, 2023
Cited by 4 · 2023
Focusformer: Focusing on what we need via architecture sampler
J Liu, J Cai, B Zhuang
arXiv preprint arXiv:2208.10861, 2022
Cited by 4 · 2022