Jesse Thomason
Title · Cited by · Year
ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks
M Shridhar, J Thomason, D Gordon, Y Bisk, W Han, R Mottaghi, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2020
Cited by 780 · 2020
ProgPrompt: Generating situated robot task plans using large language models
I Singh, V Blukis, A Mousavian, A Goyal, D Xu, J Tremblay, D Fox, ...
2023 IEEE International Conference on Robotics and Automation (ICRA), 11523 …, 2023
Cited by 654 · 2023
Experience Grounds Language
Y Bisk, A Holtzman, J Thomason, J Andreas, Y Bengio, J Chai, M Lapata, ...
arXiv preprint arXiv:2004.10151, 2020
Cited by 408 · 2020
Vision-and-Dialog Navigation
J Thomason, M Murray, M Cakmak, L Zettlemoyer
Conference on Robot Learning (CoRL), 2019
Cited by 344 · 2019
Integrating Language and Vision to Generate Natural Language Descriptions of Videos in the Wild
J Thomason, S Venugopalan, S Guadarrama, K Saenko, R Mooney
Proceedings of the Twenty Fifth International Conference on Computational …, 2014
Cited by 272 · 2014
Learning to Interpret Natural Language Commands through Human-Robot Dialog
J Thomason, S Zhang, R Mooney, P Stone
Proceedings of the 24th International Joint Conference on Artificial …, 2015
Cited by 227 · 2015
TEACh: Task-driven Embodied Agents that Chat
A Padmakumar, J Thomason, A Shrivastava, P Lange, A Narayan-Chen, ...
arXiv preprint arXiv:2110.00534, 2021
Cited by 166 · 2021
BWIBots: A platform for bridging the gap between AI and human–robot interaction research
P Khandelwal, S Zhang, J Sinapov, M Leonetti, J Thomason, F Yang, ...
The International Journal of Robotics Research, 2017
Cited by 145 · 2017
Vision-and-Language Navigation: A Survey of Tasks, Methods, and Future Directions
J Gu, E Stefani, Q Wu, J Thomason, XE Wang
arXiv preprint arXiv:2203.12667, 2022
Cited by 122 · 2022
Learning Multi-Modal Grounded Linguistic Semantics by Playing "I Spy"
J Thomason, J Sinapov, M Svetlik, P Stone, RJ Mooney
Proceedings of the Twenty-Fifth International Joint Conference on Artificial …, 2016
Cited by 121 · 2016
Shifting the Baseline: Single Modality Performance on Visual Navigation & QA
J Thomason, D Gordon, Y Bisk
Conference of the North American Chapter of the Association for …, 2019
Cited by 88 · 2019
Improving Grounded Natural Language Understanding through Human-Robot Dialog
J Thomason, A Padmakumar, J Sinapov, N Walker, Y Jiang, H Yedidsion, ...
International Conference on Robotics and Automation (ICRA), 2019
Cited by 79 · 2019
Embodied BERT: A Transformer Model for Embodied, Language-guided Visual Task Completion
A Suglia, Q Gao, J Thomason, G Thattai, G Sukhatme
arXiv preprint arXiv:2108.04927, 2021
Cited by 78 · 2021
Jointly improving parsing and perception for natural language commands through human-robot dialog
J Thomason, A Padmakumar, J Sinapov, N Walker, Y Jiang, H Yedidsion, ...
Journal of Artificial Intelligence Research 67, 325-372, 2020
Cited by 66 · 2020
Opportunistic active learning for grounding natural language descriptions
J Thomason, A Padmakumar, J Sinapov, J Hart, P Stone, RJ Mooney
Conference on Robot Learning, 67-76, 2017
Cited by 66 · 2017
Prosodic entrainment and tutoring dialogue success
J Thomason, HV Nguyen, D Litman
International conference on artificial intelligence in education, 750-753, 2013
Cited by 65 · 2013
CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks
T Srinivasan, TY Chang, L Pinto Alva, G Chochlakis, M Rostami, ...
Advances in Neural Information Processing Systems 35, 29440-29453, 2022
Cited by 62 · 2022
Language Grounding with 3D Objects
J Thomason, M Shridhar, Y Bisk, C Paxton, L Zettlemoyer
Conference on Robot Learning, 1691-1701, 2022
Cited by 60 · 2022
Interpreting Black Box Models via Hypothesis Testing
C Burns, J Thomason, W Tansey
Foundations of Data Science (FODS), 2020
Cited by 59* · 2020
Retrospectives on the Embodied AI Workshop
M Deitke, D Batra, Y Bisk, T Campari, AX Chang, DS Chaplot, C Chen, ...
arXiv preprint arXiv:2210.06849, 2022
Cited by 54 · 2022
Articles 1–20