Sung-En Chang
Northeastern
Verified email at northeastern.edu
Title · Cited by · Year
Mix and match: A novel FPGA-centric deep neural network quantization framework
SE Chang, Y Li, M Sun, R Shi, HKH So, X Qian, Y Wang, X Lin
2021 IEEE International Symposium on High-Performance Computer Architecture …, 2021
Cited by 85 · 2021
Language model compression with weighted low-rank factorization
YC Hsu, T Hua, S Chang, Q Lou, Y Shen, H Jin
arXiv preprint arXiv:2207.00112, 2022
Cited by 43 · 2022
FILM-QNN: Efficient FPGA acceleration of deep neural networks with intra-layer, mixed-precision quantization
M Sun, Z Li, A Lu, Y Li, SE Chang, X Ma, X Lin, Z Fang
Proceedings of the 2022 ACM/SIGDA International Symposium on Field …, 2022
Cited by 43 · 2022
Sparse progressive distillation: Resolving overfitting under pretrain-and-finetune paradigm
S Huang, D Xu, IEH Yen, Y Wang, SE Chang, B Li, S Chen, M Xie, ...
arXiv preprint arXiv:2110.08190, 2021
Cited by 27 · 2021
RMSMP: A novel deep neural network quantization framework with row-wise mixed schemes and multiple precisions
SE Chang, Y Li, M Sun, W Jiang, S Liu, Y Wang, X Lin
Proceedings of the IEEE/CVF international conference on computer vision …, 2021
Cited by 10 · 2021
MSP: An FPGA-specific mixed-scheme, multi-precision deep neural network quantization framework
SE Chang, Y Li, M Sun, W Jiang, R Shi, X Lin, Y Wang
arXiv preprint arXiv:2009.07460, 2020
Cited by 10 · 2020
Latent feature lasso
IEH Yen, WC Lee, SE Chang, AS Suggala, SD Lin, P Ravikumar
International Conference on Machine Learning, 3949-3957, 2017
Cited by 10 · 2017
MixLasso: Generalized mixed regression via convex atomic-norm regularization
IEH Yen, WC Lee, K Zhong, SE Chang, PK Ravikumar, SD Lin
Advances in Neural Information Processing Systems 31, 2018
Cited by 3 · 2018
ESRU: Extremely Low-Bit and Hardware-Efficient Stochastic Rounding Unit Design for Low-Bit DNN Training
SE Chang, G Yuan, A Lu, M Sun, Y Li, X Ma, Z Li, Y Xie, M Qin, X Lin, ...
2023 Design, Automation & Test in Europe Conference & Exhibition (DATE), 1-6, 2023
Cited by 2 · 2023
You Already Have It: A Generator-Free Low-Precision DNN Training Framework Using Stochastic Rounding
G Yuan, SE Chang, Q Jin, A Lu, Y Li, Y Wu, Z Kong, Y Xie, P Dong, M Qin, ...
European Conference on Computer Vision, 34-51, 2022
Cited by 2 · 2022
Learning tensor latent features
SE Chang, X Zheng, IE Yen, P Ravikumar, R Yu
arXiv preprint arXiv:1810.04754, 2018
Cited by 1 · 2018
Hardware-efficient stochastic rounding unit design for DNN training: late breaking results
SE Chang, G Yuan, A Lu, M Sun, Y Li, X Ma, Z Li, Y Xie, M Qin, X Lin, ...
Proceedings of the 59th ACM/IEEE Design Automation Conference, 1396-1397, 2022
2022
ILMPQ: An Intra-Layer Multi-Precision Deep Neural Network Quantization framework for FPGA
SE Chang, Y Li, M Sun, Y Wang, X Lin
arXiv preprint arXiv:2111.00155, 2021
2021
Efficient Tensor Decomposition with Boolean Factors
SE Chang, X Zheng, IEH Yen, P Ravikumar, R Yu
arXiv preprint arXiv:1810.04754, 2018
2018