Model Card for WavJEPA-Nat
WavJEPA-Nat is a waveform-based version of the Joint-Embedding Predictive Architecture (JEPA). It leverages high-level semantic representation learning to tackle the shortcomings of representation learning at the speech-unit or token level. This approach substantially outperforms state-of-the-art time-domain audio foundation models across a wide variety of downstream benchmark tasks, while requiring considerably fewer computational resources. Additionally, WavJEPA-Nat overcomes the performance drop that time-domain models typically exhibit in noisy and reverberant real-world acoustic environments.
Model Details
The WavJEPA-Nat framework comprises a waveform encoder, a context encoder, a target encoder, and a predictor. WavJEPA-Nat's objective is to predict the latent representations of several target blocks from a single context block extracted from the same sound wave. As the waveform encoder, we use the feature encoder of Wav2Vec 2.0, which is composed of stacked temporal convolution layers (Baevski et al., 2020). Similar to the original I-JEPA architecture (Assran et al., 2023), a Vision Transformer (ViT) (Dosovitskiy et al., 2021) is used for the target encoder, context encoder, and predictor.
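For illustration, the prediction objective can be sketched as follows. This is a minimal, hypothetical sketch with stand-in modules (the real model uses the Wav2Vec 2.0 feature encoder and ViT blocks, and the predictor is conditioned on the target positions); it is not the released implementation.

```python
# Minimal, hypothetical sketch of the JEPA-style objective (not the released code).
import torch
import torch.nn as nn
import torch.nn.functional as F

dim = 768  # embedding dimension, as in the model card

# Stand-in modules; the real model uses a Wav2Vec 2.0 feature encoder and ViT blocks.
waveform_encoder = nn.Sequential(nn.Conv1d(1, dim, kernel_size=400, stride=160), nn.GELU())
context_encoder = nn.TransformerEncoder(nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True), num_layers=2)
target_encoder = nn.TransformerEncoder(nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True), num_layers=2)
predictor = nn.TransformerEncoder(nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True), num_layers=1)

wave = torch.randn(4, 1, 32000)                  # batch of 2 s clips at 16 kHz
frames = waveform_encoder(wave).transpose(1, 2)  # (batch, time, dim) frame embeddings

ctx_idx = torch.arange(0, 120)                   # context block (simplified masking)
tgt_idx = torch.arange(140, 180)                 # one target block

with torch.no_grad():                            # targets come from the target encoder
    targets = target_encoder(frames)[:, tgt_idx]

ctx = context_encoder(frames[:, ctx_idx])
# The real predictor is conditioned on the target positions via mask tokens;
# here we simply regress one latent per target frame from the context output.
pred = predictor(ctx)[:, : tgt_idx.numel()]
loss = F.smooth_l1_loss(pred, targets)           # regression in latent space
loss.backward()
```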
Model Description
WavJEPA-Nat builds on WavJEPA, the first framework to apply semantic learning to general-purpose audio representations in the time domain. WavJEPA surpasses state-of-the-art time-domain approaches on the HEAR benchmark suite (Turian et al., 2022) while requiring only a fraction of the computational resources, leveraging high-level semantic representation learning to tackle the shortcomings of representation learning at the speech-unit or token level. WavJEPA-Nat is a multi-channel extension of the WavJEPA framework trained on simulated real-world sound scenes, addressing the degraded performance of time-domain models in such scenes. Evaluation on Nat-HEAR (Yuksel et al., 2025), a naturalistic version of the HEAR benchmark suite, demonstrates that WavJEPA-Nat is more robust to noise and reverberation than other time-domain foundation models.
- Developed by: Goksenin Yuksel, [email protected]
- Model type: Transformers, Audio Foundation Models, Raw Waveform Models
- Language(s) (NLP): WavJEPA and WavJEPA-Nat are not restricted to a particular language; the training data is predominantly English.
- License: MIT
Model Sources
- Repository: https://github.com/labhamlet/wavjepa
- Paper: https://arxiv.org/abs/2509.23238
Uses
WavJEPA-Nat can be used as a powerful feature extractor for downstream tasks such as environmental sound classification, speech recognition, and speaker counting under adverse acoustic conditions. Training a linear head on top of the extracted features yields a fine-tuned audio scene analysis model; see the linear-probe sketch after the getting-started example below.
How to Get Started with the Model
```python
import torch
from transformers import AutoModel, AutoFeatureExtractor

model = AutoModel.from_pretrained("labhamlet/wavjepa-nat-base", trust_remote_code=True)
extractor = AutoFeatureExtractor.from_pretrained("labhamlet/wavjepa-nat-base", trust_remote_code=True)

# Dummy two-channel input: batch of 1, 2 channels, 10 s at 16 kHz.
audio = torch.zeros([1, 2, 160000])
extracted = extractor(audio, return_tensors="pt")
audio_feature = extracted["input_values"]
print(model(audio_feature).shape)
```
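As a usage sketch building on the snippet above (illustrative only; the output of `model(...)` is assumed to be a `(batch, time, dim)` tensor, as suggested by the printed shape), the extracted features can be mean-pooled over time and fed to a linear probe trained on placeholder labels:

```python
# Hypothetical linear-probe sketch on top of frozen WavJEPA-Nat features (placeholder data).
import torch
import torch.nn as nn

num_classes = 50                                    # e.g. an ESC-50-style classification task
with torch.no_grad():
    feats = model(audio_feature)                    # assumed shape: (batch, time, dim)
    clip_emb = feats.mean(dim=1)                    # mean-pool over time -> (batch, dim)

probe = nn.Linear(clip_emb.shape[-1], num_classes)  # linear head on frozen features
optimizer = torch.optim.AdamW(probe.parameters(), lr=1e-3)
labels = torch.randint(0, num_classes, (clip_emb.shape[0],))  # placeholder labels

for _ in range(10):                                 # a few probe-training steps
    logits = probe(clip_emb)
    loss = nn.functional.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```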
Training Details
Training Data
We train WavJEPA-Nat on the unbalanced training set of AudioSet, which consists of 1.74 million 10-second sound clips scraped from YouTube (Gemmeke et al., 2017), and on 70,000 naturalistic scenes (corresponding to 70 Matterport3D houses). These 70,000 naturalistic scenes were used to generate naturalistic versions of all audio clips in the unbalanced AudioSet training set. Specifically, during training we randomly paired each AudioSet clip with a noise clip from the WHAMR! background noise database (Maciejewski et al., 2020). WHAMR! noise clips longer than 10 s were trimmed to 10 s, and a linear fade-in/fade-out of 200 ms was applied to every noise clip prior to mixing the sound scene. To create a naturalistic sound scene, we then convolved the AudioSet clip with the corresponding room impulse response RIR(s, r, θ).
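As an illustration of the scene-generation recipe described above (a simplified sketch with placeholder signals, not the actual data pipeline), a clip can be convolved with a room impulse response and mixed with a faded noise clip roughly as follows:

```python
# Simplified sketch of naturalistic scene generation (placeholder signals, not the real pipeline).
import numpy as np
from scipy.signal import fftconvolve

sr = 16000
clip = np.random.randn(10 * sr)                                # placeholder AudioSet clip (10 s)
rir = np.random.randn(sr) * np.exp(-np.linspace(0, 8, sr))     # placeholder RIR(s, r, theta)
noise = np.random.randn(12 * sr)                               # placeholder WHAMR! noise clip

# Trim noise to 10 s and apply a 200 ms linear fade-in/fade-out before mixing.
noise = noise[: 10 * sr]
fade = int(0.2 * sr)
ramp = np.linspace(0.0, 1.0, fade)
noise[:fade] *= ramp
noise[-fade:] *= ramp[::-1]

# Convolve the clip with the RIR to place it in the simulated room, then add the noise.
reverberant = fftconvolve(clip, rir)[: 10 * sr]
scene = reverberant + noise
```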
Training Procedure
Each sound clip was resampled to 16 kHz and mean-centered to enforce equal loudness across sound clips. We then randomly sampled 8 sections of 2 s from each sound clip, effectively increasing the batch size by a factor of 8 in a computationally efficient manner. Finally, each instance was instance-normalized (Ulyanov et al., 2017). The waveform encoder converts each 2 s instance into an embedding w ∈ ℝ^(200×768), effectively resampling the audio to 100 Hz with a stride of 10 ms and a receptive field of 12.5 ms.
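A rough sketch of this in-batch sampling and instance normalization (hypothetical code, assuming a `(batch, samples)` waveform tensor):

```python
# Hypothetical sketch: crop 8 random 2 s sections per clip and instance-normalize each.
import torch

sr = 16000
clips = torch.randn(2, 10 * sr)                      # placeholder batch of 10 s clips
sections = []
for _ in range(8):                                   # 8 sections per clip -> 8x effective batch
    start = torch.randint(0, clips.shape[-1] - 2 * sr + 1, (1,)).item()
    sections.append(clips[:, start : start + 2 * sr])
x = torch.cat(sections, dim=0)                       # (2 * 8, 32000)

# Instance normalization: zero mean and unit variance per 2 s instance.
x = (x - x.mean(dim=-1, keepdim=True)) / (x.std(dim=-1, keepdim=True) + 1e-6)
```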
We sampled starting indices for the context block with p = 0.065 and for target blocks with p = 0.025, and set M to 10 for both the context block and the target blocks. To update the target encoder parameters, we linearly increased the exponential moving average momentum τ from τ₀ = 0.999 to τₑ = 0.99999 over the first 100,000 steps, after which τ was kept constant. We used K = 8 for the top-K averaging. We trained WavJEPA-Nat for 375,000 steps with a batch size of 16 on two NVIDIA H100 94 GB GPUs; given the in-batch sampling factor of 8, the effective batch size was 256. We used the AdamW optimizer (Loshchilov & Hutter, 2019) with a weight decay coefficient λ_w = 0.04. The learning rate schedule follows a cosine decay with a linear warm-up over 100,000 steps, reaching a peak learning rate of 2 × 10⁻⁴ before decaying to zero.
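The momentum and learning-rate schedules described above can be written as in the sketch below (illustrative only; the exact formulas may differ from the released code):

```python
# Illustrative schedules for the EMA momentum tau and the learning rate (not the released code).
import math

def ema_momentum(step, tau0=0.999, tau_e=0.99999, ramp_steps=100_000):
    """Linearly increase tau from tau0 to tau_e, then keep it constant."""
    if step >= ramp_steps:
        return tau_e
    return tau0 + (tau_e - tau0) * step / ramp_steps

def learning_rate(step, peak_lr=2e-4, warmup_steps=100_000, total_steps=375_000):
    """Linear warm-up to the peak learning rate, then cosine decay to zero."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return 0.5 * peak_lr * (1.0 + math.cos(math.pi * progress))

# EMA update of the target encoder from the context encoder parameters:
# theta_target <- tau * theta_target + (1 - tau) * theta_context
```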
Preprocessing
RMS normalization was applied to all audio clips to bring them to the same loudness level, followed by instance normalization.
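For illustration, the RMS step can be sketched as follows (the target RMS level is an assumption, not taken from the paper):

```python
# Sketch of RMS normalization to a common loudness level (target RMS is an assumption).
import torch

def rms_normalize(wave: torch.Tensor, target_rms: float = 0.05) -> torch.Tensor:
    """Scale a waveform so its root-mean-square amplitude equals target_rms."""
    rms = wave.pow(2).mean(dim=-1, keepdim=True).sqrt()
    return wave * (target_rms / (rms + 1e-8))
```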
Training Hyperparameters
- Training regime: WavJEPA-Nat was trained with mixed precision, torch.compile, and flash attention; see the configuration sketch below.
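A sketch of this training configuration in PyTorch (illustrative; not the exact training script):

```python
# Illustrative sketch: torch.compile, bfloat16 mixed precision, and scaled
# dot-product attention, which dispatches to a flash-attention kernel on
# supported GPUs. This is not the exact training script.
import torch
import torch.nn.functional as F

net = torch.compile(torch.nn.Linear(768, 768).cuda())     # placeholder module
x = torch.randn(16, 200, 768, device="cuda")

with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    h = net(x).view(16, 200, 12, 64).transpose(1, 2)      # (batch, heads, seq, head_dim)
    out = F.scaled_dot_product_attention(h, h, h)         # flash attention when available
    loss = out.float().pow(2).mean()
loss.backward()
```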
Evaluation
We evaluate WavJEPA-Nat and other state-of-the-art models on the HEAR task suite, which presents a wide range of tasks to evaluate the downstream performance of audio representation models (Turian et al., 2022).
Testing Data, Factors & Metrics
Testing Data
HEAR: The aim of the HEAR benchmark is to develop a general-purpose audio representation that provides a strong basis for learning in a wide variety of tasks and scenarios. HEAR evaluates audio representations using a benchmark suite across a variety of domains, including speech, environmental sound, and music. HEAR was launched as a NeurIPS 2021 shared challenge. It still remains an open question whether one single general-purpose audio representation can perform as holistically as the human ear.
Results
HEAR
| Model | Size | DCASE | FSD50K | LC | ESC-50 | CD | VL | SC-5 | NS | BO | Mri-S | Mri-T | s(m) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Baseline ||||||||||||||
| HEAR-Naive | N/A | 7.6 | 12.5 | 40.3 ± 1.2 | 27.4 ± 3.3 | 36.7 ± 2.5 | 16.0 ± 3.4 | 13.3 | 89.2 | 97.1 ± 3.2 | 94.2 ± 1.1 | 93.7 ± 0.3 | 0.0 |
| Speech pre-training ||||||||||||||
| Wav2Vec2.0 | B | 23.5 | 29.4 | 69.9 ± 2.1 | 46.4 ± 1.8 | 57.3 ± 1.1 | 34.9 ± 2.4 | 85.3 | 17.4 | 81.4 ± 4.8 | 90.7 ± 0.8 | 77.0 ± 0.9 | 30.9 |
| HuBERT | B | 78.0 | 32.8 | 63.3 ± 1.2 | 58.6 ± 2.8 | 71.2 ± 1.2 | 65.2 ± 2.9 | 94.0 | 19.8 | 93.2 ± 5.9 | 94.6 ± 0.4 | 85.0 ± 2.5 | 47.3 |
| WavLM | B | 27.0 | 25.7 | 61.3 ± 2.3 | 49.5 ± 3.8 | 64.3 ± 1.3 | 60.1 ± 3.2 | 93.6 | 16.0 | 84.3 ± 6.3 | 88.8 ± 1.0 | 76.8 ± 0.5 | 35.1 |
| Data2Vec | B | 46.5 | 15.2 | 47.9 ± 1.2 | 28.0 ± 2.8 | 55.7 ± 1.0 | 44.9 ± 3.1 | 88.5 | 14.0 | 78.4 ± 4.1 | 85.1 ± 0.7 | 70.5 ± 3.3 | 23.6 |
| Wav2Vec2.0 | L | 66.0 | 34.8 | 64.6 ± 1.9 | 59.8 ± 1.5 | 65.7 ± 0.8 | 53.3 ± 6.3 | 75.8 | 40.6 | 93.6 ± 2.6 | 94.8 ± 0.5 | 82.4 ± 3.0 | 42.5 |
| HuBERT | L | 34.8 | 31.4 | 63.8 ± 1.3 | 60.4 ± 3.0 | 71.0 ± 1.2 | 69.0 ± 2.8 | 84.8 | 20.4 | 93.6 ± 3.0 | 95.3 ± 0.8 | 82.5 ± 2.0 | 44.3 |
| WavLM | L | 77.4 | 40.1 | 69.4 ± 2.1 | 66.6 ± 2.5 | 76.3 ± 2.2 | 79.2 ± 3.9 | 93.8 | 18.2 | 93.6 ± 5.4 | 95.8 ± 0.8 | 90.1 ± 1.0 | 58.1 |
| Data2Vec | L | 40.8 | 18.7 | 50.9 ± 1.7 | 34.4 ± 2.5 | 62.8 ± 1.6 | 60.0 ± 4.9 | 86.1 | 14.4 | 80.1 ± 8.5 | 84.7 ± 2.6 | 65.6 ± 3.1 | 29.0 |
| AudioSet pre-training ||||||||||||||
| Wav2Vec2.0 | B | 52.0 | 34.7 | 60.4 ± 1.7 | 58.9 ± 1.9 | 56.3 ± 1.3 | 27.9 ± 4.6 | 72.1 | 42.0 | 86.0 ± 9.6 | 92.9 ± 1.4 | 77.3 ± 0.5 | 31.9 |
| HuBERT | B | 86.2 | 41.1 | 63.5 ± 3.4 | 69.1 ± 1.6 | 69.5 ± 1.2 | 53.3 ± 3.1 | 83.5 | 38.8 | 91.5 ± 8.8 | 95.6 ± 0.5 | 90.4 ± 0.8 | 51.1 |
| Wav2Vec2.0 | L | 82.6 | 47.8 | 73.6 ± 1.2 | 72.6 ± 2.1 | 68.2 ± 1.7 | 42.2 ± 6.0 | 83.9 | 30.8 | 91.5 ± 5.0 | 96.5 ± 0.3 | 88.7 ± 2.5 | 55.9 |
| HuBERT | L | 86.2 | 45.4 | 75.2 ± 1.4 | 66.3 ± 4.6 | 70.1 ± 0.8 | 39.6 ± 3.6 | 85.7 | 38.6 | 91.6 ± 9.6 | 97.3 ± 0.5 | 89.6 ± 2.3 | 57.7 |
| WavJEPA-Nat | B | 91.6 | 48.7 | 72.4 ± 1.8 | 80.2 ± 1.7 | 65.9 ± 0.7 | 39.7 ± 2.4 | 87.4 | 33.4 | 96.2 ± 5.3 | 97.4 ± 0.5 | 90.4 ± 0.8 | 60.0 |
Summary
We presented WavJEPA-Nat, a state-of-the-art audio foundation model that leverages self-supervised semantic learning to obtain robust general-purpose audio representations from raw waveforms. WavJEPA-Nat's results highlight the superior performance of semantic audio representation learning compared with representation learning at the speech-unit or token level, as is common in existing time-domain speech representation learning approaches.
Model Card Contact
Goksenin Yuksel; [email protected]