---
task_categories:
- video-classification
tags:
- micro-action
size_categories:
- 1K<n<10K
extra_gated_heading: "You need to agree to share your contact information to access this dataset"
extra_gated_description: The Micro-Action-52 (MA-52) dataset is to be used only for **non-commercial scientific purposes**. You may request access to the dataset by completing the provided [Google Form](https://forms.gle/avQQiRWvbxa1nDFQ6) and the [LA files](https://drive.google.com/file/d/1vAussMwE9GrL5Vt1MpSQeSmVbUMsgPhw/view?usp=sharing). We will respond promptly upon receipt of your application. If you have difficulty filling out the form, you may also submit your application by [email](mailto:[email protected]).
gated: true
gated_fields:
  - name: "Full Name"
    description: "Your full name"
  - name: "Affiliation"
    description: "Your institution, organization, or university"
  - name: "Email"
    description: "A valid academic or professional email address"
  - name: "Intended Use"
    description: "Briefly describe how you intend to use this dataset"
---
|
|
|
|
|
|
|
|
## Introduction

The Micro-Action-52 (MA-52) dataset is designed specifically for Micro-Action Recognition research. For more information about Micro-Action analysis, please refer to https://github.com/VUT-HFUT/Micro-Action.

The MA-52 dataset is to be used only for **non-commercial scientific purposes**. You may request access to the dataset by completing the provided [Google Form](https://forms.gle/avQQiRWvbxa1nDFQ6) and the [LA files](https://drive.google.com/file/d/1vAussMwE9GrL5Vt1MpSQeSmVbUMsgPhw/view?usp=sharing). We will respond promptly upon receipt of your application. If you have difficulty filling out the form, you may also submit your application by [email](mailto:[email protected]).

Please note that the test set is withheld for competition purposes. You can evaluate your results by following the provided [instructions](https://github.com/VUT-HFUT/Micro-Action/tree/main/mar_scripts#codabench-submission-test-set).
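Once access has been granted, the dataset files can be fetched programmatically with the `huggingface_hub` library. The snippet below is a minimal sketch, not an official loading script; the `repo_id` shown is a placeholder and should be replaced with this dataset's actual repository id.

```python
# Minimal download sketch, assuming access to this gated dataset has been granted.
# The repo_id below is a placeholder; replace it with this dataset's actual id.
from huggingface_hub import login, snapshot_download

login()  # prompts for a Hugging Face access token with read permission

local_dir = snapshot_download(
    repo_id="<namespace>/MA-52",  # placeholder repository id
    repo_type="dataset",
)
print(f"Dataset files downloaded to: {local_dir}")
```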
|
|
|
|
|
## Citation

Please consider citing the related papers in your publications if this dataset helps your research.

```
@article{guo2024benchmarking,
  title={Benchmarking Micro-action Recognition: Dataset, Methods, and Applications},
  author={Guo, Dan and Li, Kun and Hu, Bin and Zhang, Yan and Wang, Meng},
  journal={IEEE Transactions on Circuits and Systems for Video Technology},
  year={2024},
  volume={34},
  number={7},
  pages={6238-6252},
  publisher={IEEE},
  doi={10.1109/TCSVT.2024.3358415}
}

@article{li2024mmad,
  title={MMAD: Multi-label Micro-Action Detection in Videos},
  author={Li, Kun and Liu, Pengyu and Guo, Dan and Wang, Fei and Wu, Zhiliang and Fan, Hehe and Wang, Meng},
  journal={arXiv preprint arXiv:2407.05311},
  year={2024}
}
```