This repo contains the InternVideo series and related works on video foundation models.
- InternVideo: general video foundation models via generative and discriminative learning
- InternVideo2: scaling video foundation models for multimodal video understanding
- InternVid: a large-scale video-text dataset for multimodal understanding and generation
- 2024.08.12: We provide smaller models, InternVideo2-S/B/L, distilled from InternVideo2-1B, and smaller VideoCLIP models built with MobileCLIP.
- 2024.08: InternVideo2-Stage3-8B and InternVideo2-Stage3-8B-HD are released. "8B" denotes the combination of InternVideo2-1B and a 7B LLM.
- 2024.07: The video annotation for InternVid2 (HuggingFace) is released.
- 2024.06: The full version of the video annotation (230M video-text pairs) for InternVid (OpenDataLab | HuggingFace) is released.
- 2024.04: The checkpoints and scripts for InternVideo2 are released.
- 2024.03: The technical report of InternVideo2 is released.
- 2024.01: InternVid (a video-text dataset for video understanding and generation) has been accepted as a spotlight presentation at ICLR 2024.
- 2023.07: The video-text dataset InternVid is released here to facilitate multimodal understanding and generation.
- 2023.05: Video instruction data are released here for tuning end-to-end video-centric multimodal dialogue systems like VideoChat.
- 2023.01: The code & models of InternVideo are released.
- 2022.12: The technical report of InternVideo is released.
- 2022.09: Press releases of InternVideo (official | 163 news | qq news).
- If you have any questions about trials, running, or deployment, or any ideas and suggestions for the project, feel free to join our WeChat group discussion!
- We are hiring researchers, engineers, and interns for the General Vision Group, Shanghai AI Lab. If you are interested in working with us on video foundation models and related topics, please contact Yi Wang ([email protected]).