Yifei Huang
The University of Tokyo
Verified email at ut-vision.org
Title
Cited by
Year
Ego4D: Around the world in 3,000 hours of egocentric video
K Grauman, A Westbury, E Byrne, Z Chavis, A Furnari, R Girdhar, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
Cited by 465, 2022
Predicting Gaze in Egocentric Video by Learning Task-dependent Attention Transition
Y Huang, M Cai, Z Li, Y Sato
Oral presentation, European Conference on Computer Vision (ECCV), 789-804, 2018
Cited by 120, 2018
FACIAL: Synthesizing dynamic talking face with implicit attribute learning
C Zhang, Y Zhao, Y Huang, M Zeng, S Ni, M Budagavi, X Guo
Proceedings of the IEEE/CVF international conference on computer vision …, 2021
Cited by 111, 2021
Improving action segmentation via graph-based temporal reasoning
Y Huang, Y Sugano, Y Sato
Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2020
Cited by 107, 2020
CLRNet: Cross layer refinement network for lane detection
T Zheng, Y Huang, Y Liu, W Tang, Z Yang, D Cai, X He
Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2022
Cited by 97, 2022
Goal-oriented gaze estimation for zero-shot learning
Y Liu, L Zhou, X Bai, Y Huang, L Gu, J Zhou, T Harada
Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2021
Cited by 96, 2021
Semantic aware attention based deep object co-segmentation
H Chen, Y Huang, H Nakayama
Asian Conference on Computer Vision, 435-450, 2018
Cited by 74, 2018
Mutual context network for jointly estimating egocentric gaze and action
Y Huang, M Cai, Z Li, F Lu, Y Sato
IEEE Transactions on Image Processing 29, 7795-7806, 2020
Cited by 60, 2020
Manipulation-skill assessment from videos with spatial attention network
Z Li, Y Huang, M Cai, Y Sato
Proceedings of the IEEE International Conference on Computer Vision Workshops, 2019
Cited by 59, 2019
Commonsense knowledge aware concept selection for diverse and informative visual storytelling
H Chen, Y Huang, H Takamura, H Nakayama
Proceedings of the AAAI Conference on Artificial Intelligence 35 (2), 999-1008, 2021
Cited by 34, 2021
Towards visually explaining video understanding networks with perturbation
Z Li, W Wang, Z Li, Y Huang, Y Sato
Proceedings of the IEEE/CVF Winter Conference on Applications of Computer …, 2021
Cited by 29, 2021
VideoLLM: Modeling Video Sequence with Large Language Models
G Chen, YD Zheng, J Wang, J Xu, Y Huang, J Pan, Y Wang, Y Wang, ...
arXiv preprint arXiv:2305.13292, 2023
Cited by 27, 2023
InternVideo-Ego4D: A pack of champion solutions to Ego4D challenges
G Chen, S Xing, Z Chen, Y Wang, K Li, Y Li, Y Liu, J Wang, YD Zheng, ...
arXiv preprint arXiv:2211.09529, 2022
Cited by 23, 2022
Precise multi-modal in-hand pose estimation using low-precision sensors for robotic assembly
F von Drigalski, K Hayashi, Y Huang, R Yonetani, M Hamaya, K Tanaka, ...
2021 IEEE International Conference on Robotics and Automation (ICRA), 968-974, 2021
Cited by 23, 2021
Interact before align: Leveraging cross-modal knowledge for domain adaptive action recognition
L Yang, Y Huang, Y Sugano, Y Sato
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
Cited by 22, 2022
Compound Prototype Matching for Few-Shot Action Recognition
Y Huang, L Yang, Y Sato
European Conference on Computer Vision, 351-368, 2022
Cited by 19, 2022
An ego-vision system for discovering human joint attention
Y Huang, M Cai, Y Sato
IEEE Transactions on Human-Machine Systems 50 (4), 306-316, 2020
Cited by 14, 2020
Temporal localization and spatial segmentation of joint attention in multiple first-person videos
Y Huang, M Cai, H Kera, R Yonetani, K Higuchi, Y Sato
Proceedings of the IEEE International Conference on Computer Vision, 2313-2321, 2017
Cited by 13, 2017
Learn to recover visible color for video surveillance in a day
G Wu, Y Zheng, Z Guo, Z Cai, X Shi, X Ding, Y Huang, Y Guo, R Shibasaki
Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23 …, 2020
Cited by 12, 2020
EPIC-KITCHENS-100 unsupervised domain adaptation challenge for action recognition 2021: Team M3EM technical report
L Yang, Y Huang, Y Sugano, Y Sato
arXiv preprint arXiv:2106.10026, 2021
Cited by 9, 2021