arxiv:2507.11336

UGC-VideoCaptioner: An Omni UGC Video Detail Caption Model and New Benchmarks

Published on Jul 15
· Submitted by peiranW on Jul 16

Abstract

UGC-VideoCap introduces a new benchmark and model for detailed omnimodal captioning of user-generated videos, emphasizing audio-visual integration and using a novel training strategy.

AI-generated summary

Real-world user-generated videos, especially on platforms like TikTok, often feature rich and intertwined audio-visual content. However, existing video captioning benchmarks and models remain predominantly visual-centric, overlooking the crucial role of audio in conveying scene dynamics, speaker intent, and narrative context. This lack of omnimodal datasets and of lightweight, capable models hampers progress in fine-grained, multimodal video understanding. To address these challenges, we introduce UGC-VideoCap, a new benchmark and model framework specifically designed for detailed omnimodal captioning of short-form user-generated videos. Unlike prior datasets, UGC-VideoCap emphasizes balanced integration of audio and visual modalities, featuring 1000 TikTok videos annotated through a structured three-stage human-in-the-loop pipeline covering audio-only, visual-only, and joint audio-visual semantics. The benchmark also includes 4000 carefully crafted QA pairs probing both unimodal and cross-modal understanding. Alongside the dataset, we propose UGC-VideoCaptioner (3B), a 3B-parameter captioning model distilled from Gemini 2.5 Flash. Using a novel two-stage training strategy (supervised fine-tuning followed by Group Relative Policy Optimization, GRPO), our approach enables efficient adaptation from limited data while maintaining competitive performance. Together, our benchmark and model offer a high-quality foundation and a data-efficient solution for advancing omnimodal video captioning in unconstrained, real-world UGC settings.
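
For readers unfamiliar with the second training stage, the sketch below illustrates the core idea of GRPO: instead of a learned value baseline, each sampled caption's advantage is its reward standardized against the other captions sampled for the same prompt. This is a generic illustration, not the authors' implementation; the tensor shapes and reward values are placeholders.

```python
# Minimal sketch of the group-relative advantage computation at the heart of GRPO.
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """rewards: (num_prompts, group_size) scalar rewards for each sampled caption.

    Each caption's advantage is its reward standardized within the group of
    captions sampled for the same prompt, replacing a learned value baseline.
    """
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Example: 2 prompts, 4 sampled captions each; rewards come from any caption-quality scorer.
rewards = torch.tensor([[0.2, 0.5, 0.9, 0.4],
                        [0.1, 0.1, 0.3, 0.2]])
print(group_relative_advantages(rewards))  # above-group-average captions get positive advantages
```

These advantages then weight a clipped policy-gradient update on the captioning model, which is what allows adaptation from relatively little data after the supervised fine-tuning stage.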

Community

Paper author · Paper submitter

UGC-VideoCaptioner Technical Report (in progress)
The first detailed video captioning benchmark and model for UGC
Paper: https://arxiv.org/abs/2507.11336 (omnimodal detailed video captioning)
Website: https://memories.ai/
Code: https://github.com/WPR001/UGC_VideoCaptioner
Benchmark & Model: https://huggingface.co/collections/openinterx/ugc-videocap-6845e290580112a1834737c4

Models citing this paper 1

Datasets citing this paper 1

Spaces citing this paper 0

Collections including this paper 3