anonymous-submission000 committed · Commit 48d91cc · verified · Parent: 67fe087

Update README.md
  path: data/test-*
---

# Bird3M Dataset

## Dataset Description

**Bird3M** is the first synchronized, multi-modal, multi-individual dataset designed for comprehensive behavioral analysis of freely interacting birds, specifically zebra finches, in naturalistic settings. It addresses the critical need for benchmark datasets that integrate precisely synchronized multi-modal recordings to support tasks such as 3D pose estimation, multi-animal tracking, sound source localization, and vocalization attribution. The dataset facilitates research in machine learning, neuroscience, and ethology by enabling the development of robust, unified models for long-term tracking and interpretation of complex social behaviors.

### Key Features

- **Duration**: 22.2 hours of synchronized multi-modal recordings, including a fully annotated subset with 4,420 video frames and 2.5 hours of contextual audio and sensor data.
- **Modalities**:
  - **Multi-view video**: Three orthogonal color cameras (top, side, back) at 47 fps, supplemented by a monochrome nest camera.
  - **Multi-channel audio**: Wall-mounted microphones (16 kHz) and body-mounted accelerometers (24,414 Hz, down-sampled to 16 kHz).
  - **Radio signals**: FM radio phases and magnitudes from four orthogonal antennas.
- **Annotations**:
  - **Visual**: 57,396 3D keypoints (5 per bird: beak tip, head center, backpack center, tail base, tail end) across 4,420 frames, with 2D keypoints, visibility labels, and bounding boxes.
  - **Audio**: 4,902 vocalization segments with onset/offset times and vocalizer identities, linked across microphone and accelerometer channels.
- **Experimental Setup**: Data from 15 experiments (2–8 birds each) conducted in the **Birdpark** system, with sessions lasting 4–120 days.
- **Applications**: Supports 3D localization, pose estimation, multi-animal tracking, sound source localization/separation, and cross-modal behavioral analyses (e.g., vocalization directness).

### Purpose

Bird3M bridges the gap in publicly available datasets for multi-modal animal behavior analysis by providing:

1. A benchmark for unified machine learning models tackling multiple behavioral tasks.
2. A platform for exploring efficient multi-modal information fusion.
3. A resource for ethological studies linking movement, vocalization, and social context to uncover neural and evolutionary mechanisms.

## Dataset Structure

The dataset is organized into three splits: `train`, `val`, and `test`, each exposed as a Hugging Face `Dataset` object. Each row corresponds to a single bird instance in a single video frame, together with its associated multi-modal data.

### Accessing Splits

```python
from datasets import load_dataset

dataset = load_dataset("anonymous-submission000/bird3m")

train_dataset = dataset["train"]
val_dataset = dataset["val"]
test_dataset = dataset["test"]
```
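
Since each row is a single bird instance, it is often convenient to narrow a split before iterating. Below is a minimal sketch using `Dataset.filter`; it assumes the snippet above has been run, and the experiment id is an illustrative value of the `experiment_id` field described below.

```python
# Hypothetical: restrict the train split to a single experiment.
# "CopExpBP03" is an illustrative id; inspect the values actually present first.
print(set(train_dataset["experiment_id"]))
one_experiment = train_dataset.filter(lambda ex: ex["experiment_id"] == "CopExpBP03")
print(f"{len(one_experiment)} bird instances in this experiment")
```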

## Dataset Fields

Each example includes the following fields:

- **`bird_id`** (`string`): Unique identifier for the bird instance within its frame (e.g., "bird_1").
- **`back_bbox_2d`** (`Sequence[float64]`): 2D bounding box for the back view, format `[x_min, y_min, x_max, y_max]`.
- **`back_keypoints_2d`** (`Sequence[float64]`): 2D keypoints for the back view, format `[x1, y1, v1, x2, y2, v2, ...]`, where `v` is visibility (0: not labeled, 1: labeled but not visible, 2: visible). See the parsing sketch after this list.
- **`back_view_boundary`** (`Sequence[int64]`): Back view boundary, format `[x, y, width, height]`.
- **`bird_name`** (`string`): Biological identifier (e.g., "b13k20_f").
- **`video_name`** (`string`): Video file identifier (e.g., "BP_2020-10-13_19-44-38_564726_0240000").
- **`frame_name`** (`string`): Frame filename (e.g., "img00961.png").
- **`frame_path`** (`Image`): Path to the frame image (`.png`), loaded automatically as a PIL Image.
- **`keypoints_3d`** (`Sequence[Sequence[float64]]`): 3D keypoints, format `[[x1, y1, z1], [x2, y2, z2], ...]`.
- **`radio_path`** (`binary`): Associated radio data (`.npz`), stored as raw bytes.
- **`reprojection_error`** (`Sequence[float64]`): Reprojection errors, one per 3D keypoint.
- **`side_bbox_2d`** (`Sequence[float64]`): 2D bounding box for the side view.
- **`side_keypoints_2d`** (`Sequence[float64]`): 2D keypoints for the side view.
- **`side_view_boundary`** (`Sequence[int64]`): Side view boundary.
- **`backpack_color`** (`string`): Backpack tag color (e.g., "purple").
- **`experiment_id`** (`string`): Experiment identifier (e.g., "CopExpBP03").
- **`split`** (`string`): Dataset split ("train", "val", or "test").
- **`top_bbox_2d`** (`Sequence[float64]`): 2D bounding box for the top view.
- **`top_keypoints_2d`** (`Sequence[float64]`): 2D keypoints for the top view.
- **`top_view_boundary`** (`Sequence[int64]`): Top view boundary.
- **`video_path`** (`Video`): Path to the video clip (`.mp4`), loaded automatically as a Video object.
- **`acc_ch_map`** (`struct`): Maps accelerometer channel indices (as strings) to bird identifiers.
- **`acc_sr`** (`float64`): Accelerometer sampling rate (Hz).
- **`has_overlap`** (`bool`): Whether the accelerometer event overlaps with the vocalization event.
- **`mic_ch_map`** (`struct`): Maps microphone channel indices (as strings) to microphone names.
- **`mic_sr`** (`float64`): Microphone sampling rate (Hz).
- **`acc_path`** (`Audio`): Processed accelerometer audio (`.wav`), loaded automatically as an Audio signal.
- **`mic_path`** (`Audio`): Processed microphone audio (`.wav`), loaded automatically as an Audio signal.
- **`vocalization`** (`list[struct]`): Vocalization events attributed to this bird in this frame, each with:
  - `overlap_type` (`string`): Overlap/attribution confidence.
  - `has_bird` (`bool`): Whether the event was attributed to a bird.
  - `2ddistance` (`bool`): Whether the 2D keypoint match distance is below 20 px.
  - `small_2ddistance` (`float64`): Minimum 2D keypoint match distance (px).
  - `voc_metadata` (`Sequence[float64]`): Onset/offset times within the audio clip, `[onset_sec, offset_sec]`.
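
To make the flat annotation layouts concrete, here is a minimal parsing sketch. It relies only on the formats listed above (flat `[x, y, v]` triplets, one reprojection error per 3D keypoint, `[x_min, y_min, x_max, y_max]` boxes); the helper name and the error threshold are illustrative, not part of the dataset.

```python
import numpy as np

def parse_keypoints_2d(flat_kps):
    """Reshape a flat [x1, y1, v1, x2, y2, v2, ...] list into a (K, 3) array."""
    return np.asarray(flat_kps, dtype=np.float64).reshape(-1, 3)

# Illustrative usage on one example (see "How to Use" below):
# kps = parse_keypoints_2d(example["top_keypoints_2d"])
# visible_xy = kps[kps[:, 2] == 2, :2]            # keypoints flagged visible (v == 2)
# errors = np.asarray(example["reprojection_error"])
# reliable_3d = np.asarray(example["keypoints_3d"])[errors < 5.0]  # threshold is illustrative
# x0, y0, x1, y1 = example["top_bbox_2d"]
# crop = example["frame_path"].crop((x0, y0, x1, y1))  # PIL crop of the bird instance
```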
 
## How to Use

### Loading and Accessing Data

```python
from datasets import load_dataset
import numpy as np

# Load the dataset from the Hub
dataset = load_dataset("anonymous-submission000/bird3m")

train_data = dataset["train"]

# Access an example
example = train_data[0]

bird_id = example["bird_id"]
keypoints_3d = example["keypoints_3d"]
top_bbox = example["top_bbox_2d"]
vocalizations = example["vocalization"]  # list of vocalization event dicts

# Multimedia fields are lazy-loaded on access
image = example["frame_path"]    # PIL Image
video = example["video_path"]    # Video object
mic_audio = example["mic_path"]  # Audio signal (dict with 'array' and 'sampling_rate')
acc_audio = example["acc_path"]  # Audio signal

# Access audio arrays and sampling rates
mic_array = mic_audio["array"]
mic_sr = mic_audio["sampling_rate"]
acc_array = acc_audio["array"]
acc_sr = acc_audio["sampling_rate"]

# Load the binary radio data (a numpy .npz archive)
radio_bytes = example["radio_path"]
try:
    from io import BytesIO
    radio_data = np.load(BytesIO(radio_bytes))
    print("Radio data keys:", list(radio_data.keys()))
except Exception as e:
    print(f"Could not load radio data: {e}")

# Print example info
print(f"Bird ID: {bird_id}")
print(f"Number of 3D keypoints: {len(keypoints_3d)}")
print(f"Top Bounding Box: {top_bbox}")
print(f"Number of vocalization events: {len(vocalizations)}")

if vocalizations:
    first_vocal = vocalizations[0]
    print(f"First vocal event metadata: {first_vocal['voc_metadata']}")
    print(f"First vocal event overlap type: {first_vocal['overlap_type']}")
```
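
The channel maps can be used to relate audio channels to individual birds. The lookup below is a hedged sketch: it assumes the values of `acc_ch_map` match `bird_name`, which is worth verifying on your copy of the data.

```python
# Hypothetical lookup: which accelerometer channel belongs to this bird?
# Assumes acc_ch_map values correspond to bird_name; verify on real examples.
acc_channel = next(
    (ch for ch, name in example["acc_ch_map"].items() if name == example["bird_name"]),
    None,
)
print(f"Accelerometer channel for {example['bird_name']}: {acc_channel}")
```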

### Example: Extracting a Vocalization Audio Clip

```python
if vocalizations and mic_sr:
    onset, offset = vocalizations[0]["voc_metadata"]
    onset_sample = int(onset * mic_sr)
    offset_sample = int(offset * mic_sr)
    vocal_audio_clip = mic_array[onset_sample:offset_sample]
    print(f"Duration of first vocal clip: {offset - onset:.3f} seconds")
    print(f"Shape of first vocal audio clip: {vocal_audio_clip.shape}")
```
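
To listen to an extracted clip, one option is to write it to disk. Below is a minimal sketch using `scipy.io.wavfile` (an extra dependency, not required by the dataset); it assumes the snippet above has been run, and the output filename is arbitrary.

```python
import numpy as np
from scipy.io import wavfile

if vocalizations and mic_sr:
    # Save the extracted clip as a 32-bit float WAV (filename is arbitrary).
    wavfile.write("first_vocalization.wav", int(mic_sr), vocal_audio_clip.astype(np.float32))
```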

**Code Availability**: Baseline code is available at [https://github.com/anonymoussubmission0000/bird3m](https://github.com/anonymoussubmission0000/bird3m).

## Citation

```bibtex
@article{2025bird3m,
  title={Bird3M: A Multi-Modal Dataset for Social Behavior Analysis Tool Building},
  author={tbd},
  journal={arXiv preprint arXiv:XXXX.XXXXX},
  year={2025}
}
```