From Black Boxes to Transparent Minds: Evaluating and Enhancing the Theory of Mind in Multimodal Large Language Models
University of Science and Technology Beijing
*Indicates Equal Contribution, †Indicates Corresponding Author
🏆 Overview
This repository provides the code for the ICML 2025 paper "From Black Boxes to Transparent Minds: Evaluating and Enhancing the Theory of Mind in Multimodal Large Language Models".
⚙️ Installation
conda create -n gridtom python=3.12
conda activate gridtom
# Please install PyTorch according to your CUDA version.
conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
pip install -r requirements.txt
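Before running the GPU-heavy scripts below, you may want to confirm that the environment was set up correctly. The short sketch below (assuming the gridtom conda environment above is active) only checks that PyTorch can see a CUDA device; it is a convenience check, not part of the repo's pipeline.

import torch

# Print the installed PyTorch version, whether a CUDA device is visible,
# and the CUDA version PyTorch was built against.
print(torch.__version__, torch.cuda.is_available(), torch.version.cuda)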
⚡️ Quick Start
# Make the shell scripts executable.
chmod 700 *.sh

# Step 1: set up the initial-belief task for each model.
./init_belief.sh LLaVA-NeXT-Video-7B-hf
./init_belief.sh Qwen2-VL-7B-Instruct

# Step 2: run the Theory-of-Mind evaluation.
./evaluate.sh LLaVA-NeXT-Video-7B-hf
./evaluate.sh Qwen2-VL-7B-Instruct

# Step 3: save the models' internal states for later use.
./save_states.sh LLaVA-NeXT-Video-7B-hf
./save_states.sh Qwen2-VL-7B-Instruct

# Step 4: re-run the evaluation with interventions applied.
./interv_evaluate.sh LLaVA-NeXT-Video-7B-hf
./interv_evaluate.sh Qwen2-VL-7B-Instruct
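The scripts above drive the full pipeline. As a rough illustration of what reading out internal states involves at the library level, the sketch below loads one of the evaluated models with Hugging Face transformers and extracts its per-layer hidden states for a text-only prompt. The model ID, prompt, and settings are illustrative assumptions, not taken from the repo's scripts, which handle the actual video inputs and bookkeeping.

import torch
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model_id = "Qwen/Qwen2-VL-7B-Instruct"  # illustrative choice; the scripts select the model for you
model = Qwen2VLForConditionalGeneration.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

# Text-only prompt for brevity; the benchmark itself uses video inputs.
messages = [{"role": "user", "content": [{"type": "text", "text": "Where does the agent believe the object is?"}]}]
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[prompt], return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# One tensor per layer (plus the embedding layer), each of shape [batch, seq_len, hidden_dim].
print(len(outputs.hidden_states), outputs.hidden_states[-1].shape)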
🔎 Citation
If you find this work interesting or useful, please cite the paper and star this repo. Thanks!
@article{li2025black,
title={From Black Boxes to Transparent Minds: Evaluating and Enhancing the Theory of Mind in Multimodal Large Language Models},
author={Li, Xinyang and Liu, Siqi and Zou, Bochao and Chen, Jiansheng and Ma, Huimin},
journal={arXiv preprint arXiv:2506.14224},
year={2025}
}