path: data/test-*
---

# Bird3M Dataset

## Dataset Description

**Bird3M** is the first synchronized, multi-modal, multi-individual dataset designed for comprehensive behavioral analysis of freely interacting birds, specifically zebra finches, in naturalistic settings. It addresses the critical need for benchmark datasets that integrate precisely synchronized multi-modal recordings to support tasks such as 3D pose estimation, multi-animal tracking, sound source localization, and vocalization attribution. The dataset facilitates research in machine learning, neuroscience, and ethology by enabling the development of robust, unified models for long-term tracking and interpretation of complex social behaviors.

### Key Features

- **Duration**: 22.2 hours of synchronized multi-modal recordings, including a fully annotated subset with 4,420 video frames and 2.5 hours of contextual audio and sensor data.
- **Modalities**:
  - **Multi-view video**: Three orthogonal color cameras (top, side, back) at 47 fps, supplemented by a monochrome nest camera.
  - **Multi-channel audio**: Wall-mounted microphones (16 kHz) and body-mounted accelerometers (24,414 Hz, down-sampled to 16 kHz).
  - **Radio signals**: FM radio phases and magnitudes from four orthogonal antennas.
- **Annotations**:
  - **Visual**: 57,396 3D keypoints (5 per bird: beak tip, head center, backpack center, tail base, tail end) across 4,420 frames, with 2D keypoints, visibility labels, and bounding boxes.
  - **Audio**: 4,902 vocalization segments with onset/offset times and vocalizer identities, linked across microphone and accelerometer channels.
- **Experimental Setup**: Data from 15 experiments (2–8 birds each) conducted in the **Birdpark** system, with sessions lasting 4–120 days.
- **Applications**: Supports 3D localization, pose estimation, multi-animal tracking, sound source localization/separation, and cross-modal behavioral analyses (e.g., vocalization directness).

### Purpose

Bird3M bridges the gap in publicly available datasets for multi-modal animal behavior analysis by providing:

1. A benchmark for unified machine learning models tackling multiple behavioral tasks.
2. A platform for exploring efficient multi-modal information fusion.
3. A resource for ethological studies linking movement, vocalization, and social context to uncover neural and evolutionary mechanisms.

## Dataset Structure

The dataset is organized into three splits: `train`, `val`, and `test`, each as a Hugging Face `Dataset` object. Each row corresponds to a single bird instance in a video frame, with associated multi-modal data.

### Accessing Splits
```python
from datasets import load_dataset

dataset = load_dataset("anonymous-submission000/bird3m")

train_dataset = dataset["train"]
val_dataset = dataset["val"]
test_dataset = dataset["test"]
```
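
Each split behaves like a regular `datasets.Dataset`, so standard operations such as indexing and filtering by metadata work directly. Below is a minimal sketch; it assumes that the `experiment_id` value shown (the illustrative ID from the Dataset Fields section) actually occurs in the train split.

```python
from datasets import load_dataset

dataset = load_dataset("anonymous-submission000/bird3m")
train_dataset = dataset["train"]

# Keep only bird instances from a single experiment.
# "CopExpBP03" is the illustrative ID from the Dataset Fields section below.
one_experiment = train_dataset.filter(lambda ex: ex["experiment_id"] == "CopExpBP03")
print(f"Instances from CopExpBP03: {len(one_experiment)}")

if len(one_experiment) > 0:
    first = one_experiment[0]
    print(first["bird_name"], first["video_name"], first["frame_name"])
```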

## Dataset Fields

Each example includes the following fields:

- **`bird_id`** (`string`): Unique identifier for the bird instance (e.g., "bird_1").
- **`back_bbox_2d`** (`Sequence[float64]`): 2D bounding box for the back view, format `[x_min, y_min, x_max, y_max]`.
- **`back_keypoints_2d`** (`Sequence[float64]`): 2D keypoints for the back view, format `[x1, y1, v1, x2, y2, v2, ...]`, where `v` is visibility (0: not labeled, 1: labeled but invisible, 2: visible).
- **`back_view_boundary`** (`Sequence[int64]`): Back view boundary, format `[x, y, width, height]`.
- **`bird_name`** (`string`): Biological identifier (e.g., "b13k20_f").
- **`video_name`** (`string`): Video file identifier (e.g., "BP_2020-10-13_19-44-38_564726_0240000").
- **`frame_name`** (`string`): Frame filename (e.g., "img00961.png").
- **`frame_path`** (`Image`): Path to the frame image (`.png`), loaded as a PIL Image.
- **`keypoints_3d`** (`Sequence[Sequence[float64]]`): 3D keypoints, format `[[x1, y1, z1], [x2, y2, z2], ...]`.
- **`radio_path`** (`binary`): Path to radio data (`.npz`), stored as binary.
- **`reprojection_error`** (`Sequence[float64]`): Reprojection errors for 3D keypoints.
- **`side_bbox_2d`** (`Sequence[float64]`): 2D bounding box for the side view.
- **`side_keypoints_2d`** (`Sequence[float64]`): 2D keypoints for the side view.
- **`side_view_boundary`** (`Sequence[int64]`): Side view boundary.
- **`backpack_color`** (`string`): Backpack tag color (e.g., "purple").
- **`experiment_id`** (`string`): Experiment identifier (e.g., "CopExpBP03").
- **`split`** (`string`): Dataset split ("train", "val", "test").
- **`top_bbox_2d`** (`Sequence[float64]`): 2D bounding box for the top view.
- **`top_keypoints_2d`** (`Sequence[float64]`): 2D keypoints for the top view.
- **`top_view_boundary`** (`Sequence[int64]`): Top view boundary.
- **`video_path`** (`Video`): Path to the video clip (`.mp4`), loaded as a Video object.
- **`acc_ch_map`** (`struct`): Maps accelerometer channels to bird identifiers.
- **`acc_sr`** (`float64`): Accelerometer sampling rate (Hz).
- **`has_overlap`** (`bool`): Indicates if accelerometer events overlap with vocalizations.
- **`mic_ch_map`** (`struct`): Maps microphone channels to descriptions.
- **`mic_sr`** (`float64`): Microphone sampling rate (Hz).
- **`acc_path`** (`Audio`): Path to accelerometer audio (`.wav`), loaded as an Audio signal.
- **`mic_path`** (`Audio`): Path to microphone audio (`.wav`), loaded as an Audio signal.
- **`vocalization`** (`list[struct]`): Vocalization events, each with:
  - `overlap_type` (`string`): Overlap/attribution confidence.
  - `has_bird` (`bool`): Indicates if attributed to a bird.
  - `2ddistance` (`bool`): Indicates if 2D keypoint distance is <20 px.
  - `small_2ddistance` (`float64`): Minimum 2D keypoint distance (px).
  - `voc_metadata` (`Sequence[float64]`): Onset/offset times `[onset_sec, offset_sec]`.
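
The keypoint fields above are stored flat, so they usually need reshaping before use. Here is a minimal sketch of how one might unpack them; the joint ordering is taken from the Key Features section and should be treated as an assumption to verify against the annotations.

```python
import numpy as np
from datasets import load_dataset

train_dataset = load_dataset("anonymous-submission000/bird3m")["train"]
example = train_dataset[0]

# Assumed joint ordering (from the Key Features section); verify against the annotations.
KEYPOINT_NAMES = ["beak_tip", "head_center", "backpack_center", "tail_base", "tail_end"]

# 2D keypoints are stored flat as [x1, y1, v1, x2, y2, v2, ...]; reshape to (num_joints, 3).
top_kp = np.asarray(example["top_keypoints_2d"], dtype=np.float64).reshape(-1, 3)
for name, (x, y, v) in zip(KEYPOINT_NAMES, top_kp):
    state = {0: "not labeled", 1: "labeled, invisible", 2: "visible"}.get(int(v), "?")
    print(f"{name:16s} x={x:8.2f} y={y:8.2f} ({state})")

# 3D keypoints are stored as one [x, y, z] triple per joint.
kp3d = np.asarray(example["keypoints_3d"], dtype=np.float64)
print("3D keypoints shape:", kp3d.shape)  # expected (5, 3)
```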

## How to Use

### Loading and Accessing Data
```python
from datasets import load_dataset
import numpy as np

# Load dataset
dataset = load_dataset("anonymous-submission000/bird3m")
train_data = dataset["train"]

# Access an example
example = train_data[0]

bird_id = example["bird_id"]
keypoints_3d = example["keypoints_3d"]
top_bbox = example["top_bbox_2d"]
vocalizations = example["vocalization"]

# Load multimedia (lazy-loaded on access)
image = example["frame_path"]    # PIL Image
video = example["video_path"]    # Video object
mic_audio = example["mic_path"]  # Audio signal (dict with 'array' and 'sampling_rate')
acc_audio = example["acc_path"]  # Audio signal

# Access audio arrays
mic_array = mic_audio["array"]
mic_sr = mic_audio["sampling_rate"]
acc_array = acc_audio["array"]
acc_sr = acc_audio["sampling_rate"]

# Load radio data
radio_bytes = example["radio_path"]
try:
    from io import BytesIO
    radio_data = np.load(BytesIO(radio_bytes))
    print("Radio data keys:", list(radio_data.keys()))
except Exception as e:
    print(f"Could not load radio data: {e}")

# Print example info
print(f"Bird ID: {bird_id}")
print(f"Number of 3D keypoints: {len(keypoints_3d)}")
print(f"Top Bounding Box: {top_bbox}")
print(f"Number of vocalization events: {len(vocalizations)}")

if vocalizations:
    first_vocal = vocalizations[0]
    print(f"First vocal event metadata: {first_vocal['voc_metadata']}")
    print(f"First vocal event overlap type: {first_vocal['overlap_type']}")
```
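
Because `frame_path` decodes to a PIL image and the bounding boxes are `[x_min, y_min, x_max, y_max]`, cropping a single bird out of a view takes only a few lines. The sketch below continues from the variables above and assumes the box is given in pixel coordinates of the full frame.

```python
# Crop the top-view bounding box out of the frame (continues from the example above).
# Assumes `top_bbox_2d` is expressed in pixel coordinates of the full frame image.
x_min, y_min, x_max, y_max = top_bbox
width, height = image.size

# Clamp to the image bounds and convert to integers before cropping.
left = max(0, int(x_min))
upper = max(0, int(y_min))
right = min(width, int(x_max))
lower = min(height, int(y_max))

bird_crop = image.crop((left, upper, right, lower))
print(f"Cropped top-view patch size: {bird_crop.size}")
# bird_crop.save("bird_top_view.png")  # optionally save the patch
```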

### Example: Extracting Vocalization Audio Clip
```python
if vocalizations and mic_sr:
    onset, offset = vocalizations[0]["voc_metadata"]
    onset_sample = int(onset * mic_sr)
    offset_sample = int(offset * mic_sr)
    vocal_audio_clip = mic_array[onset_sample:offset_sample]
    print(f"Duration of first vocal clip: {offset - onset:.3f} seconds")
    print(f"Shape of first vocal audio clip: {vocal_audio_clip.shape}")
```
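
To sanity-check an extracted clip, one option is to plot its spectrogram. This is a minimal sketch, assuming SciPy and Matplotlib are installed and that the decoded microphone signal is a one-dimensional mono array (the same assumption the slicing above makes).

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import spectrogram

# Spectrogram of the clip extracted above (`vocal_audio_clip`, `mic_sr`).
freqs, times, Sxx = spectrogram(np.ravel(vocal_audio_clip), fs=int(mic_sr), nperseg=256)
plt.pcolormesh(times, freqs, 10 * np.log10(Sxx + 1e-12), shading="auto")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("First vocalization event (microphone channel)")
plt.tight_layout()
plt.savefig("vocalization_spectrogram.png")
```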

**Code Availability**: Baseline code is available at [https://github.com/anonymoussubmission0000/bird3m](https://github.com/anonymoussubmission0000/bird3m).

## Citation
```bibtex
@article{2025bird3m,
  title={Bird3M: A Multi-Modal Dataset for Social Behavior Analysis Tool Building},
  author={tbd},
  journal={arXiv preprint arXiv:XXXX.XXXXX},
  year={2025}
}
```