PLAN-Lab

Our Mission

Our research lab focuses broadly on multimodal machine learning and learning with limited supervision. We are particularly interested in building intelligent task assistants that fuse linguistic, visual, and other modalities to perceive and interact with the world. Current language-and-vision projects include multimodal representation learning, contrastive self-supervision, embodied AI, video localization, and multi-agent communication. Applications include healthcare, medical imaging, manufacturing, and misinformation detection.

Funding

Our work is made possible by funding from several organizations.

Affiliations

Our lab is affiliated with several institutes.