---
language: lo
tags:
- audio
- automatic-speech-recognition
- wav2vec2
- lao
license: apache-2.0
model-index:
- name: Wav2Vec2 Lao Fine-tuned
  results: []
---

# Wav2Vec2 Lao Fine-tuned

This model is a fine-tuned version of [`facebook/wav2vec2-xls-r-300m`](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for automatic speech recognition (ASR) in Lao. It was trained on the [SiangLao](https://huggingface.co/datasets/sianglao) dataset or comparable Lao speech corpora.

## Intended Use

- Transcribing Lao speech (automatic speech recognition)
- Research on speech recognition for low-resource languages

## Training Details

- Base model: [`facebook/wav2vec2-xls-r-300m`](https://huggingface.co/facebook/wav2vec2-xls-r-300m)
- Framework: Hugging Face Transformers
- Fine-tuned on: Lao speech data
- Tokenizer and processor: [`wav2vec2-lao-processor`](https://huggingface.co/YourUsername/wav2vec2-lao-processor)
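
The full training script is not included with this card. As a non-authoritative sketch, fine-tuning XLS-R for CTC-based ASR with Transformers is typically set up along the following lines (the processor repo name is the placeholder used above; hyperparameters, data loading, and the data collator are omitted):

```python
# Hypothetical sketch of the fine-tuning setup; the actual script is not published.
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("YourUsername/wav2vec2-lao-processor")

# Load the pretrained XLS-R encoder with a fresh CTC head sized to the Lao vocabulary.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-xls-r-300m",
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)

# Freezing the convolutional feature extractor is standard practice when
# fine-tuning XLS-R on small datasets.
model.freeze_feature_encoder()

# From here, training typically proceeds with transformers.Trainer and a
# padding data collator over (input_values, labels) pairs.
```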

## How to Use

```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
import torch
import torchaudio

processor = Wav2Vec2Processor.from_pretrained("YourUsername/wav2vec2-lao-processor")
model = Wav2Vec2ForCTC.from_pretrained("YourUsername/wav2vec2-lao-finetuned")

# Load audio
waveform, sample_rate = torchaudio.load("your_audio.wav")

# Wav2Vec2 expects mono 16 kHz audio; downmix and resample if needed
if waveform.shape[0] > 1:
    waveform = waveform.mean(dim=0, keepdim=True)
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)
    sample_rate = 16_000

# Preprocess
inputs = processor(waveform.squeeze(), sampling_rate=sample_rate, return_tensors="pt", padding=True)

# Inference (no gradients needed)
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: pick the most likely token per frame, then collapse repeats and blanks
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
print(transcription)
```
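
The snippet above decodes greedily (argmax over CTC frames). If a Lao language model is available, beam-search decoding with `Wav2Vec2ProcessorWithLM` from Transformers may improve transcription quality, though this card does not ship such a decoder.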