MTurn-Seg: A Large-Scale Bilingual Medical Benchmark for Multi-Turn Reasoning Segmentation (BIBM 2025)
This dataset is the official release of the benchmark introduced in the paper
“MTurn-Seg: A Large-Scale Bilingual Medical Benchmark for Multi-Turn Reasoning Segmentation.”
The dataset is released in phases; each subset is manually reviewed before being opened to the public.
More details can be found at: https://cowboyh.github.io/MTurn-Seg/.
Authors
Haitao Nie*, Yimeng Zheng*, Ying Ye*, Bin Xie†
Artificial Intelligence and Robotics Laboratory (AIRLab),
Central South University, Changsha, China
* Equal contribution.
† Corresponding author.
Currently released subset
CHAOS (CN)
Abstract
Multi-turn reasoning segmentation is essential for mimicking real-world clinical workflows, where anatomical structures are identified through step-by-step dialogue based on spatial, functional, or pathological descriptions. However, the lack of a dedicated benchmark in this area has limited progress. To address this gap, we introduce the first bilingual benchmark for multi-turn medical image segmentation, supporting both Chinese and English dialogues. The benchmark consists of 28,904 images, 113,963 segmentation masks, and 232,188 question–answer pairs, covering major organs and anatomical systems across CT and MRI modalities. Each dialogue requires the model to infer the segmentation target based on prior conversational turns and previously segmented regions. We evaluate several state-of-the-art models, including MedCLIP-SAM, LISA, and LISA++, and report three key findings: (1) existing models perform poorly on our benchmark, far below clinical usability standards; (2) performance degrades as dialogue turns increase, reflecting limited multi-turn reasoning capabilities; and (3) general-purpose models such as LISA can outperform medical-specific models, suggesting that further integration of domain knowledge is needed for specialized medical applications.
Highlights
New Task — Multi-Turn Reasoning Segmentation (MTRS): At each turn, the model consumes the current instruction + interaction history (prior prompts and masks) to produce the next segmentation.
Three Reasoning Facets: (i) Clinical/Anatomical (e.g., “segment the solid organ in the right upper abdomen involved in glucose metabolism”), (ii) Spatial (e.g., “segment the elliptical structure adjacent to the right side of the abdominal aorta”), (iii) History-based References (e.g., “segment the necrotic region surrounding the previously segmented tumor”).
Bilingual Benchmark (ZH/EN): First dataset supporting multi-turn medical dialogues in Chinese and English.
Scale & Coverage: 28,904 images, 113,963 masks, 232,188 QA pairs across CT & MRI; covers major organs and anatomical systems.
What It Measures: Cross-turn memory, history-conditioned mask refinement, and language-to-image alignment over multiple rounds.
SOTA Evaluation: Benchmarked MedCLIP-SAM, LISA, and LISA++ under multi-turn settings.
Key Findings:
- Current models are well below clinical usability on this benchmark.
- Performance degrades as dialogue turns increase.
- General-purpose models outperform medical-specific models, indicating a need to infuse stronger domain knowledge.
Intended Impact: Establishes the first large-scale yardstick for MTRS, enabling fair, reproducible comparison and catalyzing progress on multi-turn reasoning in medical imaging.
📁 Dataset Directory Structure
```
CHAOS/
│
├── image/                 # Original medical images (.png)
│   ├── T2_13_10.png
│   ├── T2_13_11.png
│   └── ...
│
├── label/                 # Organ segmentation masks (.npz, multi-channel)
│   ├── x___T2_13_10.(4,320,320,1).npz
│   ├── x___T2_13_11.(4,320,320,1).npz
│   └── ...
│
├── label_visualize/       # Visualization of the masks (for viewing segmentation results)
│   ├── T2_13_10_mask.png
│   ├── T2_13_11_mask.png
│   └── ...
│
└── MultiZH_CHAOS.json     # Core file: multi-turn reasoning QA annotations
```
📄 JSON Structure (MultiZH_CHAOS.json)
The JSON file contains the following top-level fields:
- name: Dataset name
- dimension: Image dimensionality (e.g., "2D")
- modality: Imaging modality (e.g., "mr_t2w")
- labels: Mapping from class ID to organ name
- dataset: List of all samples
🧩 Sample Structure Example (inside dataset)
```json
{
  "image": "image/xxx.png",
  "label": "label/xxx.npz",
  "class_ids": [2, 3],
  "questions": [ ... ]
}
```
Field Description
| Field | Description |
|---|---|
| image | Path to the original medical image (PNG) |
| label | Path to the segmentation mask (NPZ) |
| class_ids | Organ classes included in this image |
| questions | Multi-turn reasoning QA list |
💬 Multi-Turn QA Format
Each turn of QA contains the following fields (example):
```json
{
  "question": "...",
  "answer": "...",
  "id": "CHAOS_00001",
  "label": "kidney_right",
  "referring label": "kidney_left",
  "area": 1234,
  "area center": [97.3, 181.2]
}
```
| Field | Meaning |
|---|---|
| question | Natural-language segmentation instruction for the current turn |
| answer | Expected textual response describing the target for this turn |
| id | Unique QA identifier |
| label | Segmentation target of the current turn |
| referring label | The segmentation target referenced from a previous turn ("none" on the first turn) |
| area | Pixel area of the segmentation target |
| area center | Centroid coordinates of the segmentation target |
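As a sanity check, `area` and `area center` can be recomputed from a binary mask channel with NumPy. This sketch assumes the centroid is stored in (row, column) pixel order; verify the convention against the released annotations:

```python
import numpy as np

def area_and_center(mask):
    """Pixel area and centroid of a binary mask.

    Returns (area, [row_mean, col_mean]); the (row, col)
    coordinate order is an assumption to verify against
    the dataset's "area center" field.
    """
    ys, xs = np.nonzero(mask)
    area = int(ys.size)
    center = [float(ys.mean()), float(xs.mean())]
    return area, center
```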
License
MTurn-Seg Dataset License: CC BY-NC-SA 4.0
You are free to share and adapt the dataset under the following terms:
- Attribution (BY)
- NonCommercial (NC)
- ShareAlike (SA)
Full text: https://creativecommons.org/licenses/by-nc-sa/4.0/
Citation
If you find this work useful or use this dataset in your research, please cite our paper.
Note: The paper has been accepted at BIBM 2025 and is to appear; final publication details (e.g., pages/DOI) will be updated upon release.
```bibtex
@InProceedings{MTurnBIBM,
  author    = {Haitao Nie and Yimeng Zheng and Ying Ye and Bin Xie},
  title     = {MTurn-Seg: A Large-Scale Bilingual Medical Benchmark for Multi-Turn Reasoning Segmentation},
  booktitle = {Proceedings of the IEEE International Conference on Bioinformatics and Biomedicine (BIBM)},
  year      = {2025},
  note      = {Accepted, to appear},
}
```