Dataset viewer preview — 2.05k rows; columns: `image` (width 687 px) and `label` (class label, 17 classes):

label | class
---|---
0 | LinaBell
1 | Terracotta_Warriors
2 | bear_plushie
3 | cat
4 | cat2
5 | cat3D
6 | dog
7 | dog2
8 | dog6
9 | grey_sloth_plushie
10 | monster_toy
11 | pig
12 | porcupine
13 | red_cartoon
14 | robot_toy
15 | stitch
16 | wolf_plushie
Subject Motion Dataset
A dataset for personalized text-to-video generation, supporting subject customization, motion customization, and subject-motion combination customization.
Dataset Description
Subject Motion Dataset is an image and video dataset designed for personalized text-to-video generation tasks. The dataset consists of two main components:
- Subject: 16 different subjects, each with 4-6 high-quality images
- Motion: 10 different motion videos covering various dynamic behaviors
Dataset Structure
subject_motion/
├── subject/
│   ├── Terracotta_Warriors/
│   ├── red_cartoon/
│   ├── cat3D/
│   ├── wolf_plushie/
│   ├── grey_sloth_plushie/
│   ├── cat2/
│   ├── stitch/
│   ├── dog2/
│   ├── porcupine/
│   ├── monster_toy/
│   ├── dog/
│   ├── robot_toy/
│   ├── pig/
│   ├── bear_plushie/
│   ├── dog6/
│   └── cat/
└── motion/
    ├── Cycling/
    ├── diving/
    ├── ski/
    ├── dog_skateboard/
    ├── surf/
    ├── man_skateboard/
    ├── ride/
    ├── rotating/
    ├── play_guitar/
    └── horse_running/
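Given this layout, the subject and motion names can be discovered by listing the first-level directories under `subject/` and `motion/`. The sketch below is a minimal example of doing so with the standard library; the function names `list_assets` and `all_pairs` are illustrative, not part of the dataset.

```python
from pathlib import Path

def list_assets(root):
    """Return sorted subject and motion folder names under a subject_motion/ root.

    Assumes the layout shown above: root/subject/<name>/ holds images
    and root/motion/<name>/ holds video clips.
    """
    root = Path(root)
    subjects = sorted(p.name for p in (root / "subject").iterdir() if p.is_dir())
    motions = sorted(p.name for p in (root / "motion").iterdir() if p.is_dir())
    return subjects, motions

def all_pairs(root):
    """Enumerate every (subject, motion) combination for
    subject-motion combination customization."""
    subjects, motions = list_assets(root)
    return [(s, m) for s in subjects for m in motions]
```

With the 16 subjects and 10 motions above, `all_pairs` yields 160 combinations.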
Data Sources
Subject Data
Subject images are sourced from three channels:
- DreamBooth: Based on the paper DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation
- The Chosen One: Based on the paper The Chosen One: Consistent Characters in Text-to-Image Diffusion Models
- Web Collection: High-quality subject images collected from the web
Motion Data
All motion videos are collected from the web, carefully curated to ensure quality and diversity.
Applications
This dataset is primarily used for three types of customization generation:
- Subject Customization: Using specific subject images for personalized subject generation
- Motion Customization: Learning motion styles based on specific motion videos
- Subject-Motion Combination Customization: Combining specific subjects with specific motions to generate personalized subject-motion combinations
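For the combination setting, one common step is turning a subject folder name and a motion folder name into a text prompt. The template below is a hypothetical example of such a helper, not a prompt format prescribed by this dataset:

```python
def combination_prompt(subject, motion, template="a {subject} {motion}"):
    """Build a text prompt from a subject folder name and a motion folder name.

    The template string is a hypothetical example; underscores in folder
    names (e.g. "bear_plushie") are replaced with spaces before substitution.
    """
    return template.format(
        subject=subject.replace("_", " "),
        motion=motion.replace("_", " "),
    )

# combination_prompt("bear_plushie", "horse_running")
# -> "a bear plushie horse running"
```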
Technical Features
- High Quality: All images and videos are quality-filtered
- Diversity: Covers various subject types and motion types
- Standardization: Unified data format and naming conventions
- Extensibility: Supports adding new subjects and motions
Citation
If you use this dataset in your research, please cite this dataset and the related papers:
@misc{sun2025,
author = {Chenhao Sun},
title = {Subject Motion Dataset},
year = {2025},
publisher = {Hugging Face},
howpublished = {\url{https://huggingface.co/datasets/Minusone/subject_motion}},
note = {Accessed: 2025-07-20}
}
@inproceedings{ruiz2023dreambooth,
title={Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation},
author={Ruiz, Nataniel and Li, Yuanzhen and Jampani, Varun and Pritch, Yael and Rubinstein, Michael and Aberman, Kfir},
booktitle={Proceedings of the IEEE/CVF conference on computer vision and pattern recognition},
pages={22500--22510},
year={2023}
}
@inproceedings{avrahami2024chosenone,
title={The Chosen One: Consistent Characters in Text-to-Image Diffusion Models},
author={Avrahami, Omri and Hertz, Amir and Vinker, Yael and Arar, Moab and Fruchter, Shlomi and Fried, Ohad and Cohen-Or, Daniel and Lischinski, Dani},
booktitle={ACM SIGGRAPH 2024 Conference Papers},
year={2024}
}
License
This dataset is licensed under the MIT License.
Contributing
We welcome issues and pull requests to improve this dataset.
Contact
For questions or suggestions, please contact us through GitHub Issues.