---
license: apache-2.0
language:
- es
tags:
- TTS
- PL-BERT
- barcelona-supercomputing-center
---

# PL-BERT-wp-es

## Overview

<details>
<summary>Click to expand</summary>

- [Model Description](#model-description)
- [Intended Uses and Limitations](#intended-uses-and-limitations)
- [How to Get Started with the Model](#how-to-get-started-with-the-model)
- [Training Details](#training-details)
- [Citation](#citation)
- [Additional Information](#additional-information)

</details>

---

## Model Description

**PL-BERT-wp-es** is a phoneme-level masked language model trained on Spanish text with diverse regional accents. It is based on the [PL-BERT architecture](https://github.com/yl4579/PL-BERT), which learns phoneme representations via a BERT-style masked language modeling objective.

This model is designed to support **phoneme-based text-to-speech (TTS) systems**, including but not limited to [StyleTTS2](https://github.com/yl4579/StyleTTS2). Thanks to its Spanish-specific phoneme vocabulary and contextual embedding capabilities, it can serve as a phoneme encoder for any TTS architecture that requires phoneme-level features.

Features of our PL-BERT:

- It is trained **exclusively on Spanish** phonemized text.
- It uses a reduced **phoneme vocabulary of 178 tokens**.
- It uses a WordPiece tokenizer.
- It includes a custom `token_maps.pkl` and an adapted `util.py` (a loading sketch follows this list).
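
For orientation, here is a minimal sketch of inspecting the shipped phoneme-to-ID mapping. The structure of `token_maps.pkl` follows the upstream PL-BERT format, so treat the printed entries, not this sketch, as the source of truth.

```python
import pickle

# Load the phoneme-to-ID mapping shipped with this model; assumes the file
# sits in the current directory and unpickles to a dict-like object.
with open("token_maps.pkl", "rb") as f:
    token_maps = pickle.load(f)

print(len(token_maps))               # size of the mapping
print(list(token_maps.items())[:3])  # a few sample entries
```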

---

## Intended Uses and Limitations

### Intended uses

- Integration into phoneme-based TTS pipelines such as StyleTTS2, Matxa-TTS, or custom diffusion-based synthesizers.
- Accent-aware synthesis and phoneme embedding extraction for Spanish.

### Limitations

- Not designed for general NLP tasks like classification or sentiment analysis.
- Only supports Spanish phoneme tokens.
- Some accents may be underrepresented in the training data.

---

## How to Get Started with the Model

Here is an example of how to use this model within the StyleTTS2 framework:

1. Clone the StyleTTS2 repository: https://github.com/yl4579/StyleTTS2
2. Inside the `Utils` directory, create a new folder, for example: `PLBERT_es`.
3. Copy the following files into that folder:
   - `config.yml` (training configuration)
   - `step_1000000.t7` (trained checkpoint)
   - `token_maps.pkl` (phoneme-to-ID mapping)
   - `util.py` (modified to fix position ID loading)
4. In your StyleTTS2 configuration file, update the `PLBERT_dir` entry to:

   `PLBERT_dir: Utils/PLBERT_es`

5. Update the import statement in your code to:

   `from Utils.PLBERT_es.util import load_plbert`

6. Use `espeak-ng` with the language code `es-419` to phonemize your Spanish text files for training and validation.

Note: Although this example uses StyleTTS2, the model is compatible with other TTS architectures that operate on phoneme sequences. You can use the contextualized phoneme embeddings from PL-BERT in any compatible synthesis system. A minimal loading and phonemization sketch follows.
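
The sketch below covers steps 5 and 6, assuming the files were copied into `Utils/PLBERT_es` and that the `phonemizer` package (a Python wrapper around espeak-ng) is installed; `load_plbert` comes from the `util.py` shipped with this model.

```python
# Minimal sketch: load the Spanish PL-BERT and phonemize text with espeak-ng.
# Run from the StyleTTS2 repository root so the Utils package is importable.
from phonemizer import phonemize

from Utils.PLBERT_es.util import load_plbert

# load_plbert reads config.yml and the checkpoint from the given directory.
plbert = load_plbert("Utils/PLBERT_es")

# Phonemize Spanish text with the es-419 voice, matching the training setup.
ipa = phonemize(
    "Hola, ¿cómo estás?",
    language="es-419",
    backend="espeak",
    preserve_punctuation=True,
    with_stress=True,
)
print(ipa)
```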

---

## Training Details

### Training data

The model was trained on a Spanish corpus phonemized using `espeak-ng`. It uses a consistent phoneme token set with boundary markers and masking tokens.

- Tokenizer: custom (split on whitespace)
- Phoneme masking strategy: word-level and phoneme-level masking and replacement (illustrated below)
- Training steps: 1,000,000
- Precision: mixed (fp16)
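
To make the masking strategy concrete, here is a toy illustration of two-level masking using the probabilities listed under "Other parameters" below. It is a simplified reading of the upstream PL-BERT scheme, not the actual training code, and the phoneme subset is hypothetical.

```python
import random

WORD_MASK_PROB = 0.15     # probability of masking a whole word
PHONEME_MASK_PROB = 0.1   # probability of masking a phoneme in an unmasked word
REPLACE_PROB = 0.2        # probability of a random replacement instead of "M"
MASK_TOKEN = "M"
VOCAB = list("abdefgiklmnopstu")  # hypothetical subset of the phoneme vocabulary

def corrupt(phoneme):
    # A masked position becomes a random phoneme with REPLACE_PROB,
    # otherwise the mask token.
    return random.choice(VOCAB) if random.random() < REPLACE_PROB else MASK_TOKEN

def mask_words(words):
    out = []
    for word in words:  # each word is a list of phoneme tokens
        if random.random() < WORD_MASK_PROB:
            out.append([corrupt(p) for p in word])  # word-level masking
        else:
            out.append([corrupt(p) if random.random() < PHONEME_MASK_PROB else p
                        for p in word])             # phoneme-level masking
    return out

print(mask_words([list("ola"), list("mundo")]))
```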

### Training configuration

Model parameters (see the config sketch after this list):

- Vocabulary size: 178
- Hidden size: 768
- Attention heads: 12
- Intermediate size: 2048
- Number of layers: 12
- Max position embeddings: 512
- Dropout: 0.1
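
For reference, here is a sketch of how these values could map onto a Hugging Face `transformers` config. Upstream PL-BERT builds on ALBERT, so this assumes the checkpoint follows that convention; the shipped `config.yml` is the authoritative source.

```python
from transformers import AlbertConfig

# Hypothetical mapping of the listed hyperparameters onto an ALBERT config;
# "Dropout: 0.1" is assumed to cover both hidden and attention dropout.
config = AlbertConfig(
    vocab_size=178,
    hidden_size=768,
    num_attention_heads=12,
    intermediate_size=2048,
    num_hidden_layers=12,
    max_position_embeddings=512,
    hidden_dropout_prob=0.1,
    attention_probs_dropout_prob=0.1,
)
```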

Other parameters:

- Batch size: 8
- Max mel length: 512
- Word mask probability: 0.15
- Phoneme mask probability: 0.1
- Replacement probability: 0.2
- Token separator: space
- Token mask: `M`
- Word separator ID: 102

### Evaluation

The model has not been benchmarked via perplexity or extrinsic evaluation, but it has been successfully integrated into TTS pipelines such as StyleTTS2, where it enables Spanish speech synthesis.

---

## Citation

If this code contributes to your research, please cite the work:

```
@misc{zevallosplbertwpes,
  title={PL-BERT-wp-es},
  author={Rodolfo Zevallos and Jose Giraldo and Carme Armentano-Oller},
  organization={Barcelona Supercomputing Center},
  url={https://huggingface.co/langtech-veu/PL-BERT-wp_es},
  year={2025}
}
```

## Additional Information

### Author

The [Language Technologies Laboratory](https://huggingface.co/BSC-LT) of the [Barcelona Supercomputing Center](https://www.bsc.es/), by [Rodolfo Zevallos](https://huggingface.co/rjzevallos).

### Contact

For further information, please send an email to <[email protected]>.

### Copyright

Copyright (c) 2025 by the Language Technologies Laboratory, Barcelona Supercomputing Center.

### License

[Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0)

### Funding

This work is funded by the Ministerio para la Transformación Digital y de la Función Pública and by the EU through NextGenerationEU, within the framework of the project Desarrollo de Modelos ALIA.
|