Statistical guarantees for regularized neural networks. M. Taheri, F. Xie, J. Lederer. Neural Networks 142, 148-161, 2021. Cited by 34.

A self-adaptive local metric learning method for classification. M. Taheri, Z. Moslehi, A. Mirzaei, M. Safayani. Pattern Recognition 96, 106994, 2019. Cited by 16.

Layer sparsity in neural networks. M. Hebiri, J. Lederer, M. Taheri. Journal of Statistical Planning and Inference 234, 106195, 2025. Cited by 13.

Balancing statistical and computational precision and applications to penalized linear regression with group sparsity. M. Taheri, N. Lim, J. Lederer. Dept. Comput. Sci. Dept. Biostatistics Med. Inf, 233-240, 2016. Cited by 13.

Discriminative fuzzy c-means as a large margin unsupervised metric learning algorithm. Z. Moslehi, M. Taheri, A. Mirzaei, M. Safayani. IEEE Transactions on Fuzzy Systems 26 (6), 3534-3544, 2018. Cited by 10.

Efficient feature selection with large and high-dimensional data. M. Taheri, N. Lim, J. Lederer. arXiv preprint arXiv:1609.07195, 2020. Cited by 7.

Balancing Statistical and Computational Precision: A General Theory and Applications to Sparse Regression. M. Taheri, N. Lim, J. Lederer. IEEE Transactions on Information Theory 69 (1), 316-333, 2022. Cited by 1.

Statistical Guarantees for Approximate Stationary Points of Simple Neural Networks. M. Taheri, F. Xie, J. Lederer. arXiv preprint arXiv:2205.04491, 2022. Cited by 1.

How many samples are needed to train a deep neural network? P. Golestaneh, M. Taheri, J. Lederer. arXiv preprint arXiv:2405.16696, 2024.

Statistical guarantees for regularized (approximate) estimators in machine learning. M. Taheri. 2023.

How many samples are needed to train a deep-ReLU neural network? P. Golestaneh, M. Taheri, J. Lederer.