Abstract
CauSight, a novel vision-language model, performs visual causal discovery by inferring cause-and-effect relationships among entities in images, outperforming GPT-4.1 by a 21% absolute gain.
Causal thinking enables humans to understand not just what is seen, but why it happens. To replicate this capability in modern AI systems, we introduce the task of visual causal discovery: it requires models to infer cause-and-effect relations among visual entities across diverse scenarios rather than merely perceiving their presence. To this end, we first construct the Visual Causal Graph dataset (VCG-32K), a large-scale collection of over 32,000 images annotated with entity-level causal graphs, and further develop CauSight, a novel vision-language model that performs visual causal discovery through causally aware reasoning. Our training recipe integrates three components: (1) training data curation from VCG-32K, (2) Tree-of-Causal-Thought (ToCT) for synthesizing reasoning trajectories, and (3) reinforcement learning with a designed causal reward to refine the reasoning policy. Experiments show that CauSight outperforms GPT-4.1 on visual causal discovery, achieving over a threefold performance boost (21% absolute gain). Our code, model, and dataset are fully open-sourced at the project page: https://github.com/OpenCausaLab/CauSight.
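The abstract does not spell out how the causal reward in step (3) is computed, but a natural way to score a prediction is to compare the model's predicted entity-level causal graph against the ground-truth annotation from VCG-32K. The minimal sketch below illustrates one such option, edge-level F1 over directed (cause, effect) pairs; the function name, edge format, and example entities are hypothetical and are not taken from the paper.

```python
# Minimal sketch (assumption, not the paper's implementation): score a predicted
# entity-level causal graph against a ground-truth graph with edge-level F1,
# which could serve as a causal reward signal during reinforcement learning.

def causal_edge_reward(predicted_edges, gold_edges):
    """Edge-level F1 between predicted and gold directed (cause, effect) pairs."""
    pred, gold = set(predicted_edges), set(gold_edges)
    if not pred or not gold:
        return 0.0
    true_pos = len(pred & gold)          # correctly recovered causal edges
    precision = true_pos / len(pred)
    recall = true_pos / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: entities and directed causal edges for one image.
gold = [("rain", "wet_road"), ("wet_road", "car_skid")]
pred = [("rain", "wet_road"), ("rain", "car_skid")]
print(causal_edge_reward(pred, gold))  # 0.5
```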
Community
We are excited to share our recent work on VLM reasoning titled "CauSight: Learning to Supersense for Visual Causal Discovery".
Paper: https://arxiv.org/abs/2512.01827
Github: https://github.com/OpenCausaLab/CauSight
Model: https://huggingface.co/OpenCausaLab/CauSight
Dataset: https://huggingface.co/datasets/OpenCausaLab/VCG-32K
This is an automated message from the Librarian Bot. The following similar papers were recommended via the Semantic Scholar API:
- CounterVQA: Evaluating and Improving Counterfactual Reasoning in Vision-Language Models for Video Understanding (2025)
- Look as You Think: Unifying Reasoning and Visual Evidence Attribution for Verifiable Document RAG via Reinforcement Learning (2025)
- Activating Visual Context and Commonsense Reasoning through Masked Prediction in VLMs (2025)
- From Illusion to Intention: Visual Rationale Learning for Vision-Language Reasoning (2025)
- Guiding the Inner Eye: A Framework for Hierarchical and Flexible Visual Grounded Reasoning (2025)
- MASS: Motion-Aware Spatial-Temporal Grounding for Physics Reasoning and Comprehension in Vision-Language Models (2025)
- Think Twice to See More: Iterative Visual Reasoning in Medical VLMs (2025)