
Subject Motion Dataset

A dataset for personalized text-to-video generation, supporting subject customization, motion customization, and subject-motion combination customization.

Dataset Description

The Subject Motion Dataset is an image-and-video dataset designed for personalized text-to-video generation. It consists of two main components (a minimal loading example follows the list):

  • Subject: 16 different subjects, each containing 4-6 high-quality images
  • Motion: 10 different motion videos covering various dynamic behaviors
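Because the dataset is hosted on the Hugging Face Hub (see the repository URL in the citation below), the subject images can be loaded with the `datasets` library. This is a minimal sketch; the `image` and `label` column names reflect the Hub's auto-generated view and may differ from the raw directory layout.

```python
from datasets import load_dataset

# Load the subject images from the Hub; the image/label columns come from the
# auto-generated image-classification view of the repository.
ds = load_dataset("Minusone/subject_motion", split="train")

sample = ds[0]
print(sample["label"], sample["image"].size)  # class index and (width, height)
```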

Dataset Structure

subject_motion/
├── subject/
│   ├── Terracotta_Warriors/
│   ├── red_cartoon/
│   ├── cat3D/
│   ├── wolf_plushie/
│   ├── grey_sloth_plushie/
│   ├── cat2/
│   ├── stitch/
│   ├── dog2/
│   ├── porcupine/
│   ├── monster_toy/
│   ├── dog/
│   ├── robot_toy/
│   ├── pig/
│   ├── bear_plushie/
│   ├── dog6/
│   └── cat/
└── motion/
    ├── Cycling/
    ├── diving/
    ├── ski/
    ├── dog_skateboard/
    ├── surf/
    ├── man_skateboard/
    ├── ride/
    ├── rotating/
    ├── play_guitar/
    └── horse_running/
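
When working from a local checkout, the tree above can be enumerated directly. The sketch below assumes the layout shown here; the file extensions are assumptions and should be adjusted to the actual files.

```python
from pathlib import Path

root = Path("subject_motion")  # local checkout of the dataset

# Count reference images per subject (file extensions are an assumption).
for subject_dir in sorted((root / "subject").iterdir()):
    if subject_dir.is_dir():
        images = [p for p in subject_dir.iterdir()
                  if p.suffix.lower() in {".jpg", ".jpeg", ".png"}]
        print(f"{subject_dir.name}: {len(images)} images")

# Count reference videos per motion.
for motion_dir in sorted((root / "motion").iterdir()):
    if motion_dir.is_dir():
        videos = [p for p in motion_dir.iterdir()
                  if p.suffix.lower() in {".mp4", ".avi", ".gif"}]
        print(f"{motion_dir.name}: {len(videos)} videos")
```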

Data Sources

Subject Data

Subject images are sourced from three channels:

Motion Data

All motion videos are collected from the web and carefully curated to ensure quality and diversity.

Applications

This dataset is primarily used for three types of customization generation:

  1. Subject Customization: Using specific subject images for personalized subject generation
  2. Motion Customization: Learning motion styles based on specific motion videos
  3. Subject-Motion Combination Customization: Combining specific subjects with specific motions to generate personalized subject-motion combinations (see the pairing sketch below)
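
For the third use case, a training pipeline typically pairs one subject's reference images with one motion's reference video and a text prompt. The sketch below illustrates such a pairing under the directory layout shown above; the `make_pair` helper, the prompt template, and the returned dictionary are illustrative assumptions, not part of the dataset.

```python
from pathlib import Path

def make_pair(root: str, subject: str, motion: str,
              prompt_template: str = "a {subject} {motion}"):
    """Collect reference images for a subject and reference videos for a motion.

    The prompt template and return format are illustrative; adapt them to the
    customization method being trained (e.g. a DreamBooth-style fine-tune).
    """
    base = Path(root)
    subject_images = sorted((base / "subject" / subject).glob("*"))
    motion_videos = sorted((base / "motion" / motion).glob("*"))
    return {
        "subject_images": subject_images,
        "motion_videos": motion_videos,
        "prompt": prompt_template.format(subject=subject, motion=motion),
    }

pair = make_pair("subject_motion", subject="cat", motion="surf")
print(pair["prompt"], len(pair["subject_images"]), len(pair["motion_videos"]))
```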

Technical Features

  • High Quality: All images and videos are quality-filtered
  • Diversity: Covers various subject types and motion types
  • Standardization: Unified data format and naming conventions
  • Extensibility: Supports adding new subjects and motions

Citation

If you use this dataset in your research, please cite this dataset and the related papers:

@misc{sun2025,
  author = {Chenhao Sun},
  title = {Subject Motion Dataset},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/datasets/Minusone/subject_motion}},
  note = {Accessed: 2025-07-20}
}

@inproceedings{ruiz2023dreambooth,
  title={Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation},
  author={Ruiz, Nataniel and Li, Yuanzhen and Jampani, Varun and Pritch, Yael and Rubinstein, Michael and Aberman, Kfir},
  booktitle={Proceedings of the IEEE/CVF conference on computer vision and pattern recognition},
  pages={22500--22510},
  year={2023}
}

@article{avrahami2023chosenone,
  title={The Chosen One: Consistent Characters in Text-to-Image Diffusion Models},
  author={Avrahami, Omri and Hertz, Amir and Vinker, Yael and Arar, Moab and Fruchter, Shlomi and Fried, Ohad and Cohen-Or, Daniel and Lischinski, Dani},
  year={2023}
}

License

This dataset is licensed under the MIT License.

Contributing

We welcome issues and pull requests to improve this dataset.

Contact

For questions or suggestions, please contact us through GitHub Issues.
