mazesmazes committed
Commit fa7aa5d · verified · 1 Parent(s): 47b11fd

Update custom model files, README, and requirements

Files changed (4)
  1. README.md +35 -56
  2. asr_config.py +2 -2
  3. handler.py +132 -0
  4. requirements.txt +6 -0
README.md CHANGED
@@ -1,75 +1,54 @@
  ---
- library_name: transformers
- tags:
- - generated_from_trainer
  datasets:
- - generator
- model-index:
- - name: tiny-audio-swiglu
-   results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # tiny-audio-swiglu

- This model is a fine-tuned version of [](https://huggingface.co/) on the generator dataset.
- It achieves the following results on the evaluation set:
- - Loss: 1.8530

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure

- ### Training hyperparameters

- The following hyperparameters were used during training:
- - learning_rate: 0.0002
- - train_batch_size: 8
- - eval_batch_size: 24
- - seed: 43
- - gradient_accumulation_steps: 3
- - total_train_batch_size: 24
- - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_steps: 1000
- - training_steps: 15000

- ### Training results

- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:------:|:-----:|:---------------:|
- | 6.1089 | 0.0667 | 1000 | 2.0578 |
- | 5.7958 | 0.1333 | 2000 | 1.9403 |
- | 5.7099 | 0.2 | 3000 | 1.9094 |
- | 5.6074 | 0.2667 | 4000 | 1.8903 |
- | 5.5041 | 0.3333 | 5000 | 1.8801 |
- | 5.7084 | 0.4 | 6000 | 1.8671 |
- | 5.6099 | 0.4667 | 7000 | 1.8626 |
- | 5.5155 | 0.5333 | 8000 | 1.8574 |
- | 5.551 | 0.6 | 9000 | 1.8548 |
- | 5.4795 | 0.6667 | 10000 | 1.8514 |
- | 5.5731 | 0.7333 | 11000 | 1.8507 |
- | 5.5508 | 0.8 | 12000 | 1.8504 |
- | 5.4282 | 0.8667 | 13000 | 1.8501 |
- | 5.5229 | 0.0667 | 14000 | 1.8529 |
- | 5.3341 | 0.1333 | 15000 | 1.8530 |

- ### Framework versions

- - Transformers 4.57.1
- - Pytorch 2.8.0+cu128
- - Datasets 4.4.1
- - Tokenizers 0.22.1
 
  ---
+ license: mit
+ language:
+ - en
  datasets:
+ - speechbrain/LoquaciousSet
+ base_model:
+ - facebook/hubert-xlarge-ls960-ft
+ - HuggingFaceTB/SmolLM3-3B
+ pipeline_tag: automatic-speech-recognition
+ tags:
+ - asr
+ - speech-recognition
+ - audio
+ - smollm
+ - hubert
  ---

+ # Tiny Audio Model Card

+ This model was born from a simple idea: what if anyone could train a powerful, modern speech recognition model for the price of a few coffees? It is the result of the [Tiny Audio course](https://github.com/alexkroman/tiny-audio/blob/main/docs/course/0-course-overview.md), a free, hands-on guide to building your own ASR system from scratch.

+ ## The Story of this Model

+ This model isn't the product of a big research lab with an unlimited budget. It's the result of a 24-hour training run on a single GPU, made possible by an efficient projector-only training approach. By combining a massive pretrained audio encoder (`facebook/hubert-xlarge-ls960-ft`) with a powerful language model (`HuggingFaceTB/SmolLM3-3B`), and training only a small projector between them, we can create a high-quality ASR model with minimal resources.
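
To make the projector-only idea concrete, here is a minimal sketch of the kind of module that might sit between the frozen encoder and the frozen language model. The layer shapes and names are illustrative assumptions, not values read from the checkpoints; the repository's actual implementation lives in `asr_modeling.py`.

```python
import torch
import torch.nn as nn


class AudioProjector(nn.Module):
    """Illustrative projector mapping encoder frames into the LM embedding space.

    encoder_dim and lm_dim are assumed sizes for this sketch, not numbers
    taken from the hubert-xlarge or SmolLM3-3B configs.
    """

    def __init__(self, encoder_dim: int = 1280, lm_dim: int = 2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(encoder_dim, lm_dim),
            nn.GELU(),
            nn.Linear(lm_dim, lm_dim),
        )

    def forward(self, audio_states: torch.Tensor) -> torch.Tensor:
        # (batch, frames, encoder_dim) -> (batch, frames, lm_dim)
        return self.net(audio_states)


# Only the projector receives gradients; the encoder and the LM stay frozen,
# which is what makes a 24-hour single-GPU run feasible.
projector = AudioProjector()
print(f"trainable params: {sum(p.numel() for p in projector.parameters()):,}")
```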

+ This model is a testament to the power of open source and to the incredible tools and models that are now available to everyone.

+ ## Intended Use

+ This model is for you. It's for the curious, the builders, the learners. It's for anyone who wants to understand how modern AI works by getting their hands dirty. Use it to transcribe your podcasts, your meetings, your voice memos. But more importantly, use it as a starting point. Fork it, fine-tune it, break it, and make it your own.

+ ## Performance

+ This model achieves a Word Error Rate (WER) of **12.14%** on the LoquaciousSet test set. It's not perfect, but it's a solid baseline that you can build on. See how it compares to other models on the [community leaderboard](https://github.com/alexkroman/tiny-audio#leaderboard).
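
For reference, WER counts word-level substitutions, insertions, and deletions against a reference transcript. A quick sanity check using the third-party `jiwer` package (an assumption here; it is not in this repo's requirements):

```python
# pip install jiwer  (assumed helper package, not part of this repo)
from jiwer import wer

reference = "the quick brown fox jumps over the lazy dog"
hypothesis = "the quick brown fox jumped over a lazy dog"

# WER = (substitutions + insertions + deletions) / words in reference
print(f"WER: {wer(reference, hypothesis):.2%}")  # 2 errors / 9 words ≈ 22.22%
```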

+ ## How to Use

+ ```python
+ from transformers import pipeline

+ pipe = pipeline("automatic-speech-recognition", model="mazesmazes/tiny-audio", trust_remote_code=True)

+ result = pipe("path/to/audio.wav")
+ print(result["text"])
+ ```
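
Since the `handler.py` added in this commit passes generation parameters straight through to the custom pipeline, decoding can likely be tuned per call as well. The kwargs below are an assumption mirrored from `handler.py`, not a documented API:

```python
# Assumed to work because the custom ASRPipeline accepts generation kwargs
# directly (see handler.py below); values match the new config defaults.
result = pipe(
    "path/to/audio.wav",
    max_new_tokens=96,
    repetition_penalty=1.05,
)
print(result["text"])
```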

+ ## How to Get Involved

+ This project is more than just a model; it's a community. Here's how you can get involved:

+ - **Take the course**: The best way to start is to go through the [free 6-hour course](https://github.com/alexkroman/tiny-audio/blob/main/docs/course/0-course-overview.md) and train your own model.
+ - **Share your results**: Add your model to the [leaderboard](https://github.com/alexkroman/tiny-audio#leaderboard) and share what you've learned.
+ - **Join the conversation**: Ask questions, share your ideas, and connect with other builders in the [GitHub Discussions](https://github.com/alexkroman/tiny-audio/discussions).
asr_config.py CHANGED
@@ -45,10 +45,10 @@ class ASRConfig(transformers.PretrainedConfig):
         # Set default generation parameters
         generation_defaults = {
             "num_beams": 1,
-            "max_new_tokens": 128,
+            "max_new_tokens": 96,
             "min_new_tokens": 1,
             "do_sample": False,
-            "repetition_penalty": 1.0,
+            "repetition_penalty": 1.05,
             "length_penalty": 1.0,
             "no_repeat_ngram_size": 0,
             "use_cache": True,
handler.py ADDED
@@ -0,0 +1,132 @@
+ """Custom inference handler for HuggingFace Inference Endpoints."""
+
+ from typing import Any, Dict, List, Union
+
+ import torch
+
+ try:
+     # For remote execution, imports are relative
+     from .asr_modeling import ASRModel
+     from .asr_pipeline import ASRPipeline
+ except ImportError:
+     # For local execution, imports are not relative
+     from asr_modeling import ASRModel  # type: ignore[no-redef]
+     from asr_pipeline import ASRPipeline  # type: ignore[no-redef]
+
+
+ class EndpointHandler:
+     def __init__(self, path: str = ""):
+         import os
+
+         import nltk
+
+         nltk.download("punkt_tab", quiet=True)
+
+         os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "expandable_segments:True")
+
+         # Enable TF32 for faster matmul on Ampere+ GPUs (A100, etc.).
+         # TF32 requires Ampere or newer, so these flags are a no-op on T4 (Turing).
+         torch.backends.cuda.matmul.allow_tf32 = True
+         torch.backends.cudnn.allow_tf32 = True
+
+         # Set device and dtype
+         self.device = "cuda" if torch.cuda.is_available() else "cpu"
+
+         # Use float16 for better T4 compatibility (bfloat16 not well supported on T4);
+         # T4 has excellent float16 performance with tensor cores
+         self.dtype = torch.float16 if self.device == "cuda" else torch.float32
+
+         # Enable CUDA optimizations
+         if torch.cuda.is_available():
+             torch.backends.cudnn.benchmark = True
+
+         # Prepare model kwargs for pipeline
+         model_kwargs = {
+             "dtype": self.dtype,
+             "low_cpu_mem_usage": True,
+         }
+         if torch.cuda.is_available():
+             model_kwargs["attn_implementation"] = (
+                 "flash_attention_2" if self._is_flash_attn_available() else "sdpa"
+             )
+
+         # Load model (this loads the model, tokenizer, and feature extractor)
+         self.model = ASRModel.from_pretrained(path, **model_kwargs)
+
+         # Instantiate custom pipeline - it will get feature_extractor and tokenizer from model
+         self.pipe = ASRPipeline(
+             model=self.model,
+             feature_extractor=self.model.feature_extractor,
+             tokenizer=self.model.tokenizer,
+             device=self.device,
+         )
+
+         # Apply torch.compile if enabled (after model is loaded by pipeline).
+         # Use "default" mode for T4 - better compatibility than "reduce-overhead";
+         # "reduce-overhead" is better for A100+ but can be slower on older GPUs.
+         if torch.cuda.is_available() and os.getenv("ENABLE_TORCH_COMPILE", "1") == "1":
+             compile_mode = os.getenv("TORCH_COMPILE_MODE", "default")
+             self.model = torch.compile(self.model, mode=compile_mode)
+             self.pipe.model = self.model
+
+         # Warmup the model to trigger compilation and optimize kernels
+         if torch.cuda.is_available():
+             self._warmup()
+
+     def _is_flash_attn_available(self):
+         """Check if flash attention is available."""
+         import importlib.util
+
+         return importlib.util.find_spec("flash_attn") is not None
+
+     def _warmup(self):
+         """Warmup to trigger model compilation and allocate GPU memory."""
+         try:
+             # Create dummy audio (1 second at config sample rate)
+             sample_rate = self.pipe.model.config.audio_sample_rate
+             dummy_audio = torch.randn(sample_rate, dtype=torch.float32)
+
+             # Run inference to trigger torch.compile and kernel optimization
+             with torch.inference_mode():
+                 warmup_tokens = self.pipe.model.config.inference_warmup_tokens
+                 _ = self.pipe(
+                     {"raw": dummy_audio, "sampling_rate": sample_rate},
+                     max_new_tokens=warmup_tokens,
+                 )
+
+             # Force CUDA synchronization to ensure kernels are compiled
+             if torch.cuda.is_available():
+                 torch.cuda.synchronize()
+                 # Clear cache after warmup to free memory
+                 torch.cuda.empty_cache()
+
+         except Exception as e:
+             print(f"Warmup skipped due to: {e}")
+
+     def __call__(self, data: Dict[str, Any]) -> Union[Dict[str, Any], List[Dict[str, Any]]]:
+         inputs = data.get("inputs")
+         if inputs is None:
+             raise ValueError("Missing 'inputs' in request data")
+
+         params = data.get("parameters", {})
+         max_new_tokens = params.get("max_new_tokens", 128)
+         num_beams = params.get("num_beams", 1)
+         do_sample = params.get("do_sample", False)
+         length_penalty = params.get("length_penalty", 1.0)
+         repetition_penalty = params.get("repetition_penalty", 1.05)
+         no_repeat_ngram_size = params.get("no_repeat_ngram_size", 0)
+         early_stopping = params.get("early_stopping", True)
+         default_diversity = self.pipe.model.config.inference_diversity_penalty
+         diversity_penalty = params.get("diversity_penalty", default_diversity)
+
+         return self.pipe(
+             inputs,
+             max_new_tokens=max_new_tokens,
+             num_beams=num_beams,
+             do_sample=do_sample,
+             length_penalty=length_penalty,
+             repetition_penalty=repetition_penalty,
+             no_repeat_ngram_size=no_repeat_ngram_size,
+             early_stopping=early_stopping,
+             diversity_penalty=diversity_penalty,
+         )
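
A quick way to exercise the handler above outside of an Inference Endpoint is to instantiate it directly. This is a hedged sketch: the payload shape mirrors what `EndpointHandler.__call__` expects, while the local path and audio file are placeholders.

```python
# Hypothetical local smoke test; run from a checkout of the model repo so
# handler.py, asr_modeling.py, and asr_pipeline.py are importable.
from handler import EndpointHandler

handler = EndpointHandler(path=".")  # "." = the model repository directory
response = handler(
    {
        "inputs": "path/to/audio.wav",         # placeholder audio file
        "parameters": {"max_new_tokens": 96},  # optional generation overrides
    }
)
print(response)
```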
requirements.txt ADDED
@@ -0,0 +1,6 @@
+ # Core dependencies for tiny-audio model inference
+ # This file is pushed to the HuggingFace model repository
+
+ # Transformers - main library for model loading and inference
+ transformers>=4.57.0
+ truecase