Mind the Modality Gap: Towards a Remote Sensing Vision-Language Model via Cross-modal Alignment

Pretrained models

The weights of both aligned and patched models can be accessed using the following links:

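As a rough illustration, and assuming the released checkpoints are standard PyTorch state dicts for a CLIP-style backbone, they could be loaded along the following lines. The architecture name ("ViT-B-32") and the checkpoint filename ("aligned_model.pt") are placeholders, not the repository's actual API:

```python
import torch
import open_clip

# Instantiate a CLIP backbone; "ViT-B-32" is a placeholder architecture,
# not necessarily the variant used in the paper.
model, _, preprocess = open_clip.create_model_and_transforms("ViT-B-32")

# "aligned_model.pt" is a hypothetical filename standing in for one of
# the released aligned/patched checkpoints.
state = torch.load("aligned_model.pt", map_location="cpu")

# Some checkpoints nest the weights under a "state_dict" key; fall back
# to the object itself if not.
model.load_state_dict(state.get("state_dict", state))
model.eval()
```
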
Citation

If you use this work, please cite:

@misc{zavras2024mindmodalitygapremote,
      title={Mind the Modality Gap: Towards a Remote Sensing Vision-Language Model via Cross-modal Alignment}, 
      author={Angelos Zavras and Dimitrios Michail and Begüm Demir and Ioannis Papoutsis},
      year={2024},
      eprint={2402.09816},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2402.09816}, 
}
