
BJJ Positions & Submissions Dataset

Dataset Description

This dataset contains pose keypoint annotations and compressed video clips for Brazilian Jiu-Jitsu (BJJ) combat positions and submissions. It includes 2D keypoint coordinates for up to 2 athletes per image, labeled with specific BJJ positions and submission attempts, as well as short video segments for each position/submission. The videos are optimized for use in video transformer models such as ViViT.

Dataset Summary

  • Total samples: 1
  • Position classes: 1 unique BJJ position
  • Keypoint format: MS-COCO (17 keypoints per person)
  • Video format: MP4, H.264, 360p/480p, 15 FPS, compressed for ML
  • Data format: [x, y, confidence] for each keypoint, plus associated video
  • Last updated: 2025-07-21
  • Version: 0.0.1

Supported Tasks

  • BJJ position classification
  • Submission detection
  • Multi-person pose estimation
  • Combat sports analysis
  • Video action recognition in grappling (ViViT and other video transformers)

Recent Updates

Version 1.2.0 (2025-07-21)

  • Added 1 new sample
  • Improved data structure for better compatibility
  • Enhanced position annotations

Position Distribution

  • closed_guard1: 1 sample

Dataset Structure

Data Fields

  • id: Unique sample identifier
  • image_name: Name of the source image
  • position: BJJ position/submission label
  • frame_number: Frame number from source video
  • pose1_keypoints: 17 keypoints for athlete 1 [[x, y, confidence], ...]
  • pose1_num_keypoints: Number of visible keypoints for athlete 1
  • pose2_keypoints: 17 keypoints for athlete 2 [[x, y, confidence], ...]
  • pose2_num_keypoints: Number of visible keypoints for athlete 2
  • num_people: Number of people detected (1 or 2)
  • total_keypoints: Total visible keypoints across both athletes
  • date_added: Date when sample was added to dataset
  • video_path: Relative path to the associated compressed video clip (MP4, suitable for ViViT and other video models)

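To make the field layout above concrete, here is a synthetic record sketch. All values (including the file names and counts) are made up for illustration; real coordinates and paths come from the dataset itself.

```python
# Synthetic sample record mirroring the documented field layout.
# Values are placeholders, not real dataset contents.
sample = {
    "id": 0,
    "image_name": "closed_guard1_frame_0001.jpg",   # hypothetical name
    "position": "closed_guard1",
    "frame_number": 1,
    "pose1_keypoints": [[0.0, 0.0, 0.0]] * 17,      # 17 x [x, y, confidence]
    "pose1_num_keypoints": 0,
    "pose2_keypoints": [[0.0, 0.0, 0.0]] * 17,
    "pose2_num_keypoints": 0,
    "num_people": 2,
    "total_keypoints": 0,
    "date_added": "2025-07-21",
    "video_path": "videos/closed_guard1.mp4",       # hypothetical path
}

# total_keypoints is the sum of the per-athlete visible counts:
assert sample["total_keypoints"] == (
    sample["pose1_num_keypoints"] + sample["pose2_num_keypoints"]
)
```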
Keypoint Format

Uses the MS-COCO 17-keypoint format:

  0. nose, 1. left_eye, 2. right_eye, 3. left_ear, 4. right_ear,
  5. left_shoulder, 6. right_shoulder, 7. left_elbow, 8. right_elbow,
  9. left_wrist, 10. right_wrist, 11. left_hip, 12. right_hip,
  13. left_knee, 14. right_knee, 15. left_ankle, 16. right_ankle

Each keypoint is stored as [x, y, confidence], where confidence ranges from 0.0 to 1.0.
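A minimal sketch of working with this format: the keypoint name list and a helper that counts visible keypoints. The 0.5 confidence threshold is an arbitrary choice for illustration, not a rule of this dataset.

```python
# MS-COCO 17-keypoint names, in index order.
COCO_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

def count_visible(keypoints, threshold=0.5):
    """Count keypoints whose confidence meets the threshold."""
    return sum(1 for x, y, c in keypoints if c >= threshold)

# Toy example: three keypoints, two of which pass the 0.5 threshold.
kps = [[120.0, 80.0, 0.9], [130.0, 75.0, 0.2], [110.0, 78.0, 0.85]]
print(count_visible(kps))  # 2
print(COCO_KEYPOINTS[9])   # left_wrist
```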

Video Format

  • Format: MP4 (H.264), 360p or 480p, 15 FPS, compressed for efficient ML training
  • Usage: Each sample links to a short video clip showing the position/submission, suitable for direct use in video transformer models (e.g., ViViT)

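Video transformers such as ViViT expect a fixed number of frames per clip, so variable-length clips are typically subsampled first. A minimal sketch of uniform frame sampling, assuming a 32-frame target (a common ViViT choice, not a requirement of this dataset):

```python
import numpy as np

def sample_frames(frames, num_frames=32):
    """Uniformly subsample a clip to a fixed frame count."""
    if len(frames) <= num_frames:
        return frames
    idx = np.linspace(0, len(frames) - 1, num_frames).round().astype(int)
    return [frames[i] for i in idx]

# Toy clip: 45 blank 360p frames (about 3 seconds at 15 FPS).
clip = [np.zeros((360, 640, 3), dtype=np.uint8) for _ in range(45)]
print(len(sample_frames(clip)))  # 32
```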
Usage

from datasets import load_dataset

# Load the dataset
dataset = load_dataset("carlosj934/BJJ_Positions_Submissions")

# Access samples
sample = dataset['train'][0]
print(f"Position: {sample['position']}")
print(f"Number of people: {sample['num_people']}")
print(f"Athlete 1 keypoints: {len(sample['pose1_keypoints'])}")
print(f"Video path: {sample['video_path']}")

# Example: load video frames for ViViT preprocessing
# Note: video_path is relative to the dataset root, so prepend your local
# dataset directory if the clip is not found.
import cv2
cap = cv2.VideoCapture(sample['video_path'])
frames = []
while True:
    ret, frame = cap.read()
    if not ret:
        break
    frames.append(frame)
cap.release()
print(f"Loaded {len(frames)} frames for ViViT input.")

# Filter by specific positions
guard_samples = dataset['train'].filter(lambda x: 'guard' in x['position'])
print(f"Guard positions: {len(guard_samples)} samples")

Data Collection Progress

The dataset is continuously updated with new BJJ position and submission samples, including both pose annotations and video clips. Each position is being captured from multiple angles and with different athletes to improve model generalization and support robust video-based learning.

Collection Goals

  • Target: 50+ samples per position (900+ total)
  • Current: 1 sample
  • Coverage: 1/18+ positions represented
  • Focus: High-quality pose annotations and video clips for training robust BJJ classifiers and video models (ViViT, etc.)

Applications

This dataset can be used for:

  • Position Classification: Automatically identify BJJ positions in videos
  • Technique Analysis: Analyze athlete positioning and technique execution
  • Training Feedback: Provide real-time feedback on position quality
  • Competition Analysis: Automatically score and analyze BJJ matches
  • Educational Tools: Interactive learning applications for BJJ students
  • Video Action Recognition: Train ViViT and other video transformer models for grappling action recognition

Citation

If you use this dataset in your research, please cite:

@dataset{bjj_positions_submissions_2025,
  title={BJJ Positions and Submissions Dataset},
  author={Carlos J},
  year={2025},
  version={0.0.1},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/carlosj934/BJJ_Positions_Submissions}
}

License

MIT License - See LICENSE file for details.

Contact

For questions or contributions, please reach out through the Hugging Face dataset page.
