---
license: cc-by-4.0
task_categories:
  - text-to-image
  - question-answering
  - zero-shot-classification
  - image-to-text
language:
  - en
tags:
  - text-to-image
  - multimodal
  - indoor-scenes
  - prompt-engineering
  - stable-diffusion
  - scene-understanding
  - image-generation
  - image-retrieval
  - image-captioning
  - zero-shot-learning
  - contrastive-learning
  - semantic-alignment
  - benchmarking
  - evaluation
  - data-generation
  - synthetic-data
  - structured-prompts
  - vision-language
  - indoor-environments
  - object-grounding
  - caption-alignment
  - prompt-dataset
size_categories:
  - 10K<n<100K
---

# Dataset Card for Prompt2SceneBench

## Dataset Details

### Dataset Description

Prompt2SceneBench is a structured prompt dataset with 12,606 text descriptions designed for evaluating text-to-image models in realistic indoor environments. Each prompt describes the spatial arrangement of 1–4 common household objects on compatible surfaces and in contextually appropriate scenes, sampled using strict object–surface–scene compatibility mappings.

A use case of Prompt2SceneBench is showcased in the Prompt2SceneGallery image dataset (https://huggingface.co/datasets/bodhisattamaiti/Prompt2SceneGallery), which was generated with SDXL from the prompts in this dataset.
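
The snippet below is a minimal sketch of that workflow using the `diffusers` SDXL pipeline; the checkpoint and sampling settings are illustrative assumptions, not the exact configuration used to produce Prompt2SceneGallery.

```python
# Sketch: render one benchmark prompt with SDXL via the diffusers library.
# The checkpoint and step count below are assumptions for illustration.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a coffee mug beside a notebook on a wooden table in a home office."
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("home_office_mug.png")
```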

- **Curated by:** Bodhisatta Maiti
- **Funded by:** N/A
- **Shared by:** Bodhisatta Maiti
- **Language(s):** English
- **License:** CC BY 4.0

## Uses

### Direct Use

Prompt2SceneBench can be directly used for:

1. Prompt-to-image generation, using models such as Stable Diffusion XL to benchmark compositional accuracy in indoor scenes.
2. Prompt–image alignment scoring, evaluating how well generated images match the structured prompts (see the sketch after this list).
3. Compositional generalization benchmarking, testing models on spatial arrangements of 1–4 objects with increasing difficulty.
4. Zero-shot captioning evaluation, using prompts as pseudo-references to measure how captioning models describe generated images.
5. Scene layout reasoning tasks, e.g., predicting spatial configurations or generating scene graphs from textual prompts.
6. Style transfer or image editing tasks, where the structured prompt can guide object placement or scene modification in indoor contexts.
7. Multimodal fine-tuning or distillation, where paired structured prompts and generated images can improve alignment in vision-language models (VLMs), especially for grounding objects, spatial relationships, and indoor scene context.
8. Controllable generation studies, analyzing how prompt structure affects generated outputs across different text-to-image models.
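
As an illustration of item 2, the following is a minimal sketch of prompt–image alignment scoring using CLIP cosine similarity. CLIP is one possible scorer rather than a prescribed part of the benchmark, and the image file name is a placeholder.

```python
# Sketch: score prompt-image alignment as CLIP cosine similarity.
# "home_office_mug.png" is a placeholder for any generated image.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompt = "a coffee mug beside a notebook on a wooden table in a home office."
image = Image.open("home_office_mug.png")

inputs = processor(text=[prompt], images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**inputs)

# Normalize the embeddings and take their dot product (cosine similarity).
text_emb = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
image_emb = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
print(f"alignment score: {(text_emb * image_emb).sum().item():.4f}")
```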

### Out-of-Scope Use

- Outdoor scenes and surreal or abstract visual compositions.
- Benchmarks involving human-centric understanding or motion.
- Direct use in safety-critical or clinical systems.

## Dataset Structure

### CSV Format (`prompt2scene_prompts_final.csv`)

Size: 12,606 prompts

Each row in the CSV corresponds to a single prompt instance and includes the following fields:

- `type`: Prompt category, one of A, B, C, or D, based on the number of objects and complexity.
- `object1`, `object2`, `object3`, `object4`: Objects involved in the scene (some may be None/NaN/null depending on the type).
- `surface`: The surface on which the objects are placed (e.g., desk surface, bench).
- `scene`: The indoor environment (e.g., living room, study room).
- `prompt`: The final structured natural-language prompt.

Note:

- Type A prompts contain 1 object (`object2`, `object3`, and `object4` are None/NaN/null).
- Type B prompts contain 2 objects (`object3` and `object4` are None/NaN/null).
- Type C prompts contain 3 objects (`object4` is None/NaN/null).
- Type D prompts contain 4 objects (all object fields are populated).

Sample Examples:

- Type A: a football located on a bench in a basement. (object1: football, surface: bench, scene: basement)
- Type B: a coffee mug beside a notebook on a wooden table in a home office. (object1: coffee mug, object2: notebook, surface: wooden table, scene: home office)
- Type C: a jar, a coffee mug, and a bowl placed on a kitchen island in a kitchen. (object1: jar, object2: coffee mug, object3: bowl, surface: kitchen island, scene: kitchen)
- Type D: An arrangement of an air purifier, a pair of slippers, a guitar, and a pair of shoes on a floor in a bedroom. (object1: air purifier, object2: pair of slippers, object3: guitar, object4: pair of shoes, surface: floor, scene: bedroom)
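
The sketch below loads the CSV and splits it by prompt type; the repository id passed to `hf_hub_download` is an assumption inferred from the gallery link above.

```python
# Sketch: load the prompt CSV and inspect it by prompt type.
# The repo_id is an assumption; the filename comes from the section heading.
import pandas as pd
from huggingface_hub import hf_hub_download

csv_path = hf_hub_download(
    repo_id="bodhisattamaiti/Prompt2SceneBench",
    filename="prompt2scene_prompts_final.csv",
    repo_type="dataset",
)
df = pd.read_csv(csv_path)

print(df["type"].value_counts())        # prompts per type A-D
type_a = df[df["type"] == "A"]          # single-object prompts
print(type_a[["object1", "surface", "scene", "prompt"]].head())
```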

### JSON Format (`prompt2scene_metadata.json`)

The JSON contains the following keys:

- `objects`: List of all 50 objects used in prompt generation.
- `scenes`: List of 15 indoor scenes.
- `surfaces`: List of 20 compatible surfaces.
- `object_to_scenes`: Dictionary mapping each object to plausible indoor scenes.
- `object_to_surfaces`: Dictionary mapping each object to compatible surface(s).
- `surface_to_scenes`: Dictionary mapping each surface to the scene(s) where it naturally occurs.
- `prompt_templates`: Templates used to generate the prompts for each prompt type (A, B, C, and D); each type has 3 variants.

This JSON file supports reproducibility and reuse by providing all internal mappings used during structured prompt generation. The community can extend or modify these lists and mappings, and supply their own prompt templates, to fit their use case.
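
As a sketch of such reuse, the snippet below samples one compatible object–surface–scene triple and fills a Type A template. The shape of `prompt_templates` (a list of template strings per type, with `{object1}`, `{surface}`, and `{scene}` placeholders) is an assumption about the schema; adjust it to the actual file.

```python
# Sketch: sample a compatible object-surface-scene triple and fill a template.
# Assumed schema: prompt_templates maps each type ("A"-"D") to a list of
# template strings with {object1}/{surface}/{scene} placeholders.
import json
import random

with open("prompt2scene_metadata.json") as f:
    meta = json.load(f)

obj = random.choice(meta["objects"])
surface = random.choice(meta["object_to_surfaces"][obj])

# A scene must suit both the object and the surface
# (assumed non-empty here; a real script would resample otherwise).
scenes = set(meta["object_to_scenes"][obj]) & set(meta["surface_to_scenes"][surface])
scene = random.choice(sorted(scenes))

template = random.choice(meta["prompt_templates"]["A"])
print(template.format(object1=obj, surface=surface, scene=scene))
```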

## Dataset Creation

### Curation Rationale

The dataset was created to provide a controlled and structured benchmark for evaluating spatial and compositional understanding in generative AI systems, particularly in indoor environments.

### Source Data

#### Data Collection and Processing

All data were programmatically generated by a controlled sampling routine over curated lists of 50 indoor objects, 20 surfaces, and 15 scenes; only valid object–surface–scene combinations were retained, using rule-based compatibility mappings.

#### Who are the source data producers?

The dataset is fully synthetic and was created by Bodhisatta Maiti through controlled generation logic.

### Annotations

No human annotations are involved beyond the original curation and sampling logic.

### Personal and Sensitive Information

No personal or sensitive information is present. The dataset consists of entirely synthetic prompts.

## Bias, Risks, and Limitations

This dataset focuses only on physically and contextually plausible indoor scenes; it intentionally excludes unusual, humorous, or surrealistic scenarios. It may not cover the full range of compositional variation needed for creative applications.

### Recommendations

Use with generative models that understand object placement and spatial grounding. Avoid using it to benchmark models trained for outdoor or abstract scenes.

## Citation

APA:

Maiti, B. (2025). Prompt2SceneBench: Structured Prompts for Text-to-Image Generation in Indoor Environments [Data set]. Zenodo. https://doi.org/10.5281/zenodo.15876129

## Glossary

- Type (prompt category): The number of objects (1 to 4) described in the scene varies with the prompt type (A, B, C, or D).
- Surface: The physical platform or area on which objects rest.
- Scene: The room or environment in which the surface is situated.

## Dataset Card Authors

- Bodhisatta Maiti

## Dataset Card Contact