SPIRAL: Self-supervised Perturbation-Invariant Representation Learning for Speech Pre-Training

This is the pretrained SPIRAL Base model with multi-condition training (MCT). It was pre-trained on the 960-hour LibriSpeech corpus, using the noise dataset from the ICASSP 2021 Deep Noise Suppression (DNS) Challenge for noise robustness.
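
In multi-condition training, the clean speech input is perturbed with noise clips during pre-training. As a rough sketch of that idea (the SNR range, mixing policy, and helper name below are illustrative assumptions, not the exact recipe used for this checkpoint), noise can be mixed into a clean waveform at a randomly drawn signal-to-noise ratio:

```python
# Illustrative MCT-style perturbation: mix a noise clip into clean speech
# at a random SNR. The SNR range and mixing policy are assumptions for
# this sketch, not the settings used to train the released model.
import torch


def mix_at_random_snr(clean: torch.Tensor, noise: torch.Tensor,
                      snr_db_range=(0.0, 20.0)) -> torch.Tensor:
    """Add a noise waveform to a clean 1-D waveform at a random SNR (dB)."""
    # Tile or trim the noise so it covers the whole utterance.
    if noise.numel() < clean.numel():
        noise = noise.repeat(clean.numel() // noise.numel() + 1)
    noise = noise[: clean.numel()]

    # Draw a target SNR uniformly from the given range.
    snr_db = torch.empty(1).uniform_(*snr_db_range)

    # Scale the noise so that 10 * log10(P_clean / P_noise) equals snr_db.
    clean_power = clean.pow(2).mean()
    noise_power = noise.pow(2).mean().clamp_min(1e-10)
    scale = torch.sqrt(clean_power / (noise_power * 10.0 ** (snr_db / 10.0)))
    return clean + scale * noise


# Toy usage with random tensors standing in for real 16 kHz waveforms.
clean = torch.randn(16000)
noise = torch.randn(8000)
noisy = mix_at_random_snr(clean, noise)
```

Drawing a different SNR per utterance exposes the model to a range of noise conditions, which is what the multi-condition setup targets.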

Citation

If you find SPIRAL useful in your research, please cite the following paper:

@inproceedings{huang2022spiral,
  title={{SPIRAL}: Self-supervised Perturbation-Invariant Representation Learning for Speech Pre-Training},
  author={Wenyong Huang and Zhenhe Zhang and Yu Ting Yeung and Xin Jiang and Qun Liu},
  booktitle={International Conference on Learning Representations},
  year={2022},
  url={https://openreview.net/forum?id=TBpg4PnXhYH}
}