Mark Kurtz
Neural Magic
Verified email at neuralmagic.com
Title · Cited by · Year
Inducing and exploiting activation sparsity for fast inference on deep neural networks
M Kurtz, J Kopinsky, R Gelashvili, A Matveev, J Carr, M Goin, W Leiserson, ...
International Conference on Machine Learning, 5533-5543, 2020
Cited by 130 · 2020
The optimal BERT surgeon: Scalable and accurate second-order pruning for large language models
E Kurtic, D Campos, T Nguyen, E Frantar, M Kurtz, B Fineran, M Goin, ...
arXiv preprint arXiv:2203.07259, 2022
Cited by 65 · 2022
How well do sparse ImageNet models transfer?
E Iofinova, A Peste, M Kurtz, D Alistarh
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
Cited by 30 · 2022
Sparse*BERT: Sparse models are robust
D Campos, A Marques, T Nguyen, M Kurtz, C Zhai
arXiv preprint arXiv:2205.12452, 2022
Cited by 4 · 2022
System and method of accelerating execution of a neural network
A Matveev, D Alistarh, J Kopinsky, R Gelashvili, M Kurtz, N Shavit
US Patent 11,195,095, 2021
Cited by 4 · 2021
oBERTa: Improving Sparse Transfer Learning via improved initialization, distillation, and pruning regimes
D Campos, A Marques, M Kurtz, CX Zhai
arXiv preprint arXiv:2303.17612, 2023
Cited by 1 · 2023
System and method of training a neural network
M Kurtz, D Alistarh
US Patent App. 17/149,043, 2021
Cited by 1 · 2021
System and method of accelerating execution of a neural network
A Matveev, D Alistarh, J Kopinsky, R Gelashvili, M Kurtz, N Shavit
US Patent 11,797,855, 2023
2023
Sparse*BERT: Sparse models generalize to new tasks and domains
D Campos, A Marques, T Nguyen, M Kurtz, CX Zhai
arXiv preprint arXiv:2205.12452, 2022
2022