Our Mission
Our research lab is broadly interested in multimodal machine learning and learning with limited supervision. In particular, we build intelligent task assistants that fuse linguistic, visual, and other modalities to perceive and interact with the world. Current language-and-vision projects include multimodal representation learning, contrastive self-supervision, embodied AI, video localization, and multi-agent communication. Applications include healthcare, medical imaging, manufacturing, and misinformation detection.

News
- November 2022: Congratulations to Makanjuola for receiving a 2022 Cadence Diversity in Technology Scholarship!
- November 2022: Congratulations to Muntasir for receiving a VT Pratt Fellowship for the Spring 2023 semester!
- October 2022: Congratulations to Amarachi and Ying for being nominated by the department for the TwoSigma Ph.D. Fellowship!
- October 2022: Congratulations to Ying and Amarachi for being nominated by the department for the Google Ph.D. Fellowship!
- October 2022: Congratulations to Muntasir and Avi on their paper acceptance at the 2022 IEEE BigData conference!
- October 2022: Congratulations to Hesam Soleimani on his journal paper accepted in Mechanical Systems and Signal Processing (MSSP)!
- October 2022: Our lab has received an Amazon-VT faculty research award to work on Embodied AI!
- September 2022: Congratulations to Gaurang and Amarachi on their MICCAI 2022 paper acceptance!
- August 2022: Our lab has received DARPA KMASS funding!
- July 2022: Our lab has received Commonwealth Cyber Initiative (CCI) funding for ML-based code repair!
- May 2022: Congratulations to Amarachi for being nominated by the department for the Microsoft Ph.D. Fellowship!
- February 2022: Our lab has received NSF EAGER CMMI-AM funding!

Funding
Our work is made possible by funding from several organizations, including DARPA, NSF, Amazon, and the Commonwealth Cyber Initiative (CCI).

Affiliations
Our lab is affiliated with several institutes.