
VideoLLaMA 2 Released: A Set of Video Large Language Models Designed to Advance Multimodal Research in Video-Language Modeling


Recent advances in AI have had a significant impact across fields such as image recognition and photorealistic image generation, with notable applications in medical imaging and autonomous driving. However, video understanding and generation, and video LLMs in particular, remain comparatively underdeveloped. These models struggle to process temporal dynamics and to integrate audio with visual data, which limits their effectiveness at predicting future events and performing comprehensive multimodal analysis. Addressing these challenges is critical to improving video LLM performance.

Researchers from DAMO Academy, Alibaba Group, have introduced VideoLLaMA 2, a series of advanced video LLMs designed to improve spatiotemporal modeling and audio understanding in video-related tasks. Building on its predecessor, VideoLLaMA 2 features a custom spatial-temporal convolution (STC) connector to better handle video dynamics and an integrated audio branch for richer multimodal understanding. Evaluations show that VideoLLaMA 2 excels in tasks such as video question answering and video captioning, outperforming many open-source models and closely competing with some proprietary models. These advancements position VideoLLaMA 2 as a new standard in intelligent video analysis.

Current video LLMs typically combine a pre-trained visual encoder, a vision-language adapter, and an instruction-tuned language decoder to process video content. However, existing models often overlook temporal dynamics in the adapter and instead rely on the language decoder to model them, which is less efficient. To address this, VideoLLaMA 2 introduces an STC connector that better captures spatiotemporal features while keeping the number of visual tokens manageable. In parallel, recent work has focused on integrating audio streams into video LLMs, with models such as PandaGPT, XBLIP, and CREMA improving multimodal understanding and enabling more comprehensive video scene analysis.
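To make the role of an STC-style connector concrete, the sketch below shows one way such a module can compress per-frame vision-encoder features with a 3D convolution before projecting them into the language model's embedding space. This is an illustrative PyTorch sketch, not the released implementation: the class name `STCConnectorSketch`, the plain `Conv3d` downsampler, and all dimensions are assumptions.

```python
# Hedged sketch of a spatial-temporal convolution (STC) style connector.
# NOT the official VideoLLaMA 2 code; module choices and sizes are assumptions.
import torch
import torch.nn as nn


class STCConnectorSketch(nn.Module):
    """Compress per-frame vision-encoder tokens with a 3D convolution, then
    project them into the language model's embedding space."""

    def __init__(self, vision_dim=1024, llm_dim=4096, downsample=(2, 2, 2)):
        super().__init__()
        # The 3D conv jointly downsamples time (T), height (H), and width (W),
        # keeping local spatiotemporal structure while cutting the token count.
        self.conv = nn.Conv3d(
            vision_dim, vision_dim,
            kernel_size=downsample, stride=downsample,
        )
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, x):
        # x: (B, T, H, W, C) grid of patch features from an image-level encoder
        b, t, h, w, c = x.shape
        x = x.permute(0, 4, 1, 2, 3)      # (B, C, T, H, W)
        x = self.conv(x)                  # downsampled spatiotemporal grid
        x = x.flatten(2).transpose(1, 2)  # (B, N_tokens, C)
        return self.proj(x)               # (B, N_tokens, llm_dim)


# Example: 8 frames of 24x24 patch features -> far fewer visual tokens.
feats = torch.randn(1, 8, 24, 24, 1024)
tokens = STCConnectorSketch()(feats)
print(tokens.shape)  # torch.Size([1, 576, 4096])
```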

VideoLLaMA 2 maintains the dual-branch architecture of its predecessor, with separate Vision-Language and Audio-Language branches that connect pre-trained visual and audio encoders to a large language model. The Vision-Language branch uses an image-level encoder (CLIP) and introduces the STC connector for improved spatiotemporal representation. The Audio-Language branch converts audio into spectrograms and uses the BEATs audio encoder to capture temporal audio dynamics. This modular design ensures effective integration of visual and auditory data, enhances the multimodal capabilities of VideoLLaMA 2, and allows easy customization for future extensions.
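The sketch below illustrates how such a dual-branch design can be wired at a high level: each branch projects its modality into the language model's embedding space, and the resulting token sequences are concatenated with the text prompt before decoding. It is a minimal, assumed-for-illustration PyTorch sketch; the class name `DualBranchSketch`, the linear projectors standing in for the STC connector and audio adapter, and all dimensions are hypothetical rather than the released configuration.

```python
# Hedged sketch of the dual-branch wiring described above. Encoder choices
# (CLIP-like vision features, spectrogram/BEATs-like audio features) and all
# dimensions are illustrative assumptions, not the official model code.
import torch
import torch.nn as nn


class DualBranchSketch(nn.Module):
    def __init__(self, vision_dim=1024, audio_dim=768, llm_dim=4096):
        super().__init__()
        # Vision-Language branch: in VideoLLaMA 2 this is the STC connector;
        # a single linear layer stands in for it here.
        self.vision_proj = nn.Linear(vision_dim, llm_dim)
        # Audio-Language branch: projects BEATs-style audio features.
        self.audio_proj = nn.Linear(audio_dim, llm_dim)

    def forward(self, vision_feats, audio_feats, text_embeds):
        # vision_feats: (B, Nv, vision_dim), audio_feats: (B, Na, audio_dim),
        # text_embeds: (B, Nt, llm_dim) from the LLM's own embedding table.
        v = self.vision_proj(vision_feats)
        a = self.audio_proj(audio_feats)
        # The fused token sequence is what the language model decodes over.
        return torch.cat([v, a, text_embeds], dim=1)


fused = DualBranchSketch()(
    torch.randn(1, 576, 1024),  # compressed video tokens
    torch.randn(1, 64, 768),    # audio tokens from a spectrogram encoder
    torch.randn(1, 32, 4096),   # embedded text prompt
)
print(fused.shape)  # torch.Size([1, 672, 4096])
```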

VideoLLaMA 2 excels on video and audio understanding tasks, consistently outperforming open-source models and closely competing with leading proprietary systems. It shows strong performance on video question answering, video captioning, and audio-based tasks, and particularly excels at multiple-choice video question answering (MC-VQA) and open-ended audio-video question answering (OE-AVQA). Its ability to integrate complex multimodal data such as video and audio marks significant progress over comparable models. Overall, VideoLLaMA 2 stands out as a leading video and audio understanding model, achieving robust and competitive results across benchmarks.

The VideoLLaMA 2 series introduces enhanced video LLMs to improve multimodal understanding in video and audio tasks. By integrating an STC connector and a jointly trained audio branch, the model captures spatiotemporal dynamics and integrates audio cues. VideoLLaMA 2 consistently outperforms similar open-source models and closely competes with proprietary models in several benchmarks. Its strong performance in video question answering, video captioning, and audio-based tasks underscores its potential for tackling complex challenges in video analysis and multimodal research. The models are publicly available for further development.


Check out the Paper, Model Card on HF, and GitHub. All credit for this research goes to the researchers of this project. Also, don't forget to follow us on Twitter and join our Telegram Channel and LinkedIn Group. If you like our work, you will love our newsletter.

Don’t forget to join our 48k+ ML SubReddit

Find upcoming AI webinars here



Sana Hassan, a consulting intern at Marktechpost and a double major student at IIT Madras, is passionate about using technology and AI to solve real-world challenges. With his keen interest in solving practical problems, he brings a fresh perspective to the intersection of AI and real-world solutions.