Ego4d: Around the world in 3,000 hours of egocentric video. K Grauman, A Westbury, E Byrne, Z Chavis, A Furnari, R Girdhar, et al. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022. Cited by 291.
Future Person Localization in First-Person Videos. T Yagi, K Mangalam, R Yonetani, Y Sato. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018. Cited by 182.
GO-finder: a registration-free wearable system for assisting users in finding lost objects via hand-held object discovery. T Yagi, T Nishiyasu, K Kawasaki, M Matsuki, Y Sato. 26th International Conference on Intelligent User Interfaces, 139-149, 2021. Cited by 8.
Foreground-aware stylization and consensus pseudo-labeling for domain adaptation of first-person hand segmentation. T Ohkawa, T Yagi, A Hashimoto, Y Ushiku, Y Sato. IEEE Access 9, 94644-94655, 2021. Cited by 7.
Hand-Object Contact Prediction via Motion-Based Pseudo-Labeling and Guided Progressive Label Correction. T Yagi, MT Hasan, Y Sato. 32nd British Machine Vision Conference (BMVC), 2021. Cited by 5.
Fine-grained Affordance Annotation for Egocentric Hand-Object Interaction Videos. Z Yu, Y Huang, R Furuta, T Yagi, Y Goutsu, Y Sato. Proceedings of the IEEE/CVF Winter Conference on Applications of Computer …, 2023. Cited by 3.
Style Adapted DataBase: Generalizing Hand Segmentation via Semantics-aware Stylization. T Ohkawa, T Yagi, Y Sato. IEICE Technical Report 120 (187), 26-31, 2020. Cited by 1.
GO-Finder: A Registration-free Wearable System for Assisting Users in Finding Lost Hand-held Objects. T Yagi, T Nishiyasu, K Kawasaki, M Matsuki, Y Sato. ACM Transactions on Interactive Intelligent Systems 12 (4), 1-29, 2022.
Precise Affordance Annotation for Egocentric Action Video Datasets. Z Yu, Y Huang, R Furuta, T Yagi, Y Goutsu, Y Sato. arXiv preprint arXiv:2206.05424, 2022.
Object Instance Identification in Dynamic Environments. T Yagi, MT Hasan, Y Sato. arXiv preprint arXiv:2206.05319, 2022.
Egocentric pedestrian motion prediction by separately modeling body pose and position. D Wu, T Yagi, Y Matsui, Y Sato. IEICE Technical Report 119 (481), 39-44, 2020.
Human-Computer Interaction: an User Evaluation Perspective. T Yagi, S Shinagawa, K Akiyama, K Hirotaka, R Shimamura, T Matayoshi. IEICE Technical Report 118 (260), 1-4, 2018.
Egocentric Pedestrian Motion Forecasting for Separately Modelling Pose and Location. D Wu, T Yagi, Y Matsui, Y Sato.
Future Person Localization in First-Person Videos: Supplementary Material. T Yagi, K Mangalam, R Yonetani, Y Sato.