Gated access: this repository is publicly accessible, but you must accept its conditions (including sharing your contact information) to access the files and content. In particular, you agree not to use the dataset to conduct experiments that cause harm to human subjects.
Dataset Description
Motivation
- Enable both categorical emotion recognition and dimensional affect regression on speech clips with multiple crowd-sourced annotations per instance.
- Study annotator variability, label aggregation, and uncertainty.
Composition
- Audio: WAV files in wavs/
- Annotations:
  - Raw: WHiSER/Labels/labels.txt (per-file summary + multiple worker lines)
  - Aggregated: WHiSER/Labels/labels_consensus.{csv,json}
  - Per-annotator: WHiSER/Labels/labels_detailed.{csv,json}
Supported Tasks
- Categorical speech emotion recognition (primary label)
- Dimensional affect regression (arousal/valence/dominance; 1–7 SAM scale)
- Research on annotation aggregation and inter-annotator disagreement
Dataset Structure
Files
- wavs/: raw audio clips (e.g., 006-040.1-2_13.wav)
- WHiSER/Labels/labels.txt:
  - Header (one per file): <filename>; <agg_code>; A:<float>; V:<float>; D:<float>;
  - Worker lines: WORKER<ID>; <primary>; <secondary_csv>; A:<float>; V:<float>; D:<float>;
- WHiSER/Labels/labels_consensus.{csv,json}: aggregated primary emotion (and secondary/attributes when applicable), gender, speaker, split (if defined)
- WHiSER/Labels/labels_detailed.{csv,json}: individual annotations per worker with primary, secondary (multi-label), and V-A-D
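A minimal parsing sketch for labels.txt, assuming the separators shown in the line formats above (the exact whitespace around ';' is an assumption; verify against the actual file):
import re

def parse_labels(path="WHiSER/Labels/labels.txt"):
    # Sketch: parses the header/worker line formats described above.
    vad_re = re.compile(r"A:\s*([\d.]+);\s*V:\s*([\d.]+);\s*D:\s*([\d.]+)")
    entries, current = [], None
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line:
                continue
            fields = [p.strip() for p in line.split(";")]
            vad = [float(x) for x in vad_re.search(line).groups()]
            if line.startswith("WORKER"):
                # Worker line: WORKER<ID>; <primary>; <secondary_csv>; A:...; V:...; D:...;
                current["annotations"].append({
                    "worker_id": fields[0],
                    "primary": fields[1],
                    "secondary": [s.strip() for s in fields[2].split(",")],
                    "vad": vad,
                })
            else:
                # Header line: <filename>; <agg_code>; A:...; V:...; D:...;
                current = {"file": fields[0], "agg_label_code": fields[1],
                           "vad_mean": vad, "annotations": []}
                entries.append(current)
    return entries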
Recommended Features
- id: string (e.g., 006-040.1-2_13)
- file: string (e.g., wavs/006-040.1-2_13.wav)
- audio: Audio feature
- agg_label_code: string in {A, S, H, U, F, D, C, N, O, X}
- agg_primary: normalized primary label (e.g., Happy)
- vad_mean: { arousal: float32, valence: float32, dominance: float32 } (1–7)
- secondary: sequence of strings (from the secondary label set)
- annotations (from detailed): sequence of { worker_id: string, primary: string, secondary: sequence, vad: { arousal: float32, valence: float32, dominance: float32 } }
- Optional metadata: split, speaker, gender
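If you build the dataset with the Hugging Face datasets library, these recommended fields map directly onto a Features schema. A sketch (field names follow the list above; the optional metadata fields are included as plain strings):
from datasets import Audio, Features, Sequence, Value

features = Features({
    "id": Value("string"),
    "file": Value("string"),
    "audio": Audio(),
    "agg_label_code": Value("string"),
    "agg_primary": Value("string"),
    "vad_mean": {"arousal": Value("float32"), "valence": Value("float32"),
                 "dominance": Value("float32")},
    "secondary": Sequence(Value("string")),
    "annotations": Sequence({
        "worker_id": Value("string"),
        "primary": Value("string"),
        "secondary": Sequence(Value("string")),
        "vad": {"arousal": Value("float32"), "valence": Value("float32"),
                "dominance": Value("float32")},
    }),
    # Optional metadata
    "split": Value("string"),
    "speaker": Value("string"),
    "gender": Value("string"),
})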
Label Schema
Primary consensus codes
- A: Angry
- S: Sad
- H: Happy
- U: Surprise
- F: Fear
- D: Disgust
- C: Contempt
- N: Neutral
- O: Other
- X: No agreement (no plurality winner)
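In code, the consensus codes are convenient to resolve with a plain mapping (names taken verbatim from the list above):
# Consensus code -> label name, as listed above.
AGG_CODE_TO_LABEL = {
    "A": "Angry", "S": "Sad", "H": "Happy", "U": "Surprise", "F": "Fear",
    "D": "Disgust", "C": "Contempt", "N": "Neutral", "O": "Other",
    "X": "No agreement",
}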
Secondary labels (multi-select)
- Angry, Sad, Happy, Amused, Neutral, Frustrated, Depressed, Surprise, Concerned, Disgust, Disappointed, Excited, Confused, Annoyed, Fear, Contempt, Other
Dimensional affect
- Rated with the Self-Assessment Manikin (SAM) on a 1–7 scale
- Valence (1 very negative; 7 very positive)
- Arousal (1 very calm; 7 very active)
- Dominance (1 very weak; 7 very strong)
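Some models expect affect targets in [-1, 1]; one common preprocessing choice (a convention assumed here, not prescribed by the corpus) is to center the 1–7 scale at its neutral midpoint:
def normalize_sam(x: float) -> float:
    # Map a 1-7 SAM rating to [-1, 1], centered at the neutral midpoint 4.
    return (x - 4.0) / 3.0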
Example Instance
- id: 006-040.1-2_13
- file: wavs/006-040.1-2_13.wav
- agg_label_code: H
- vad_mean: { arousal: 4.2, valence: 4.8, dominance: 5.0 }
- annotations (subset):
  - { worker_id: WORKER00014325, primary: Sad, secondary: [Sad, Concerned], vad: {A:2, V:4, D:4} }
  - { worker_id: WORKER00014332, primary: Happy, secondary: [Happy, Concerned], vad: {A:4, V:6, D:6} }
Usage
Loading from Parquet
If you've converted the dataset to nested Parquet format using the provided script, you can load it with the Hugging Face datasets library:
from datasets import Dataset
import pyarrow.parquet as pq
# Load the Parquet file
table = pq.read_table("parquet/whiser_nested.parquet")
ds = Dataset(table)
print(f"Dataset size: {len(ds)}")
print(f"Features: {ds.features}")
# Access a single example
example = ds[0]
print(f"ID: {example['id']}")
print(f"Consensus label: {example['agg_primary']}")
print(f"VAD mean: A={example['vad_mean']['arousal']:.2f}, V={example['vad_mean']['valence']:.2f}, D={example['vad_mean']['dominance']:.2f}")
print(f"Secondary labels: {example['secondary']}")
print(f"Number of annotators: {len(example['annotations'])}")
# Access audio bytes (embedded in Parquet)
audio_bytes = example['audio_bytes']
sample_rate = example['sample_rate']
print(f"Audio: {len(audio_bytes)} bytes, {sample_rate} Hz")
# Iterate through per-annotator annotations
for ann in example['annotations']:
    worker = ann['worker_id']
    primary = ann['primary']
    vad = ann['vad']
    print(f"  Worker {worker}: primary={primary}, A={vad['arousal']}, V={vad['valence']}, D={vad['dominance']}")
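The nested annotations also make quick disagreement statistics easy. A sketch that attaches a plurality-agreement score per clip (plurality_agreement is an illustrative helper, not part of the dataset tooling):
from collections import Counter

def plurality_agreement(example):
    # Fraction of annotators who chose the most common primary label.
    votes = Counter(ann["primary"] for ann in example["annotations"])
    label, count = votes.most_common(1)[0]
    return {"plurality_label": label, "agreement": count / sum(votes.values())}

ds_agree = ds.map(plurality_agreement)
print(ds_agree[0]["plurality_label"], ds_agree[0]["agreement"])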
Loading Audio
To decode audio from the embedded bytes:
import io
import soundfile as sf
example = ds[0]
audio_bytes = example['audio_bytes']
# Decode audio
audio_data, sr = sf.read(io.BytesIO(audio_bytes))
print(f"Audio shape: {audio_data.shape}, sample rate: {sr}")
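The same decoding works inside datasets.map, for example to attach clip durations (add_duration is an illustrative helper):
def add_duration(example):
    # Decode the embedded bytes and record clip length in seconds.
    data, sr = sf.read(io.BytesIO(example["audio_bytes"]))
    return {"duration_s": len(data) / sr}

ds_dur = ds.map(add_duration)
print(f"Mean duration: {sum(ds_dur['duration_s']) / len(ds_dur):.2f} s")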
Reference
@inproceedings{Naini_2024_2,
  author={A. {Reddy Naini} and L. Goncalves and M.A. Kohler and D. Robinson and E. Richerson and C. Busso},
  title={{WHiSER}: {White House Tapes} Speech Emotion Recognition Corpus},
  booktitle={Interspeech 2024},
  year={2024},
  month={September},
  address={Kos Island, Greece},
  pages={1595--1599},
  doi={10.21437/Interspeech.2024-1227},
}