Abstract: Zero-shot image captioning can harness the knowledge of pre-trained visual language models (VLMs) and language models (LMs) to generate captions for target domain images without paired ...
Abstract: Visual grounding tasks aim to localize image regions based on natural language references. In this work, we explore whether generative VLMs predominantly trained on image-text data could be ...
Neuroscientists have been trying to understand how the brain processes visual information for over a century. The development of computational models inspired by the brain's layered organization, also ...