TildeSIA committed · Commit 77f83c5 · verified · 1 Parent(s): 1dda6bb

Update README.md

Files changed (1): README.md (+44 −0)

README.md CHANGED
@@ -103,3 +103,47 @@ outputs = model.generate(
  do_sample=False,
  )
  ```
+ # Evaluation
+ ## Per-Character Perplexity
+ **What is perplexity?** Perplexity measures how well a language model predicts text. A model with low perplexity makes accurate predictions consistently, while a model with high perplexity is frequently "surprised" by unexpected words or patterns. Lower perplexity therefore indicates that the model has learned the language's patterns more effectively.
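+ Formally (this is the standard definition, given here for reference), for a sequence of $N$ units $x_1, \dots, x_N$, perplexity is the exponentiated average negative log-likelihood:
+ $$\mathrm{PPL} = \exp\left(-\frac{1}{N}\sum_{i=1}^{N} \log p\left(x_i \mid x_{<i}\right)\right)$$
+ The "unit" can be a token or, after renormalisation, a character.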
+ **Why character-level?** Different language models use different internal vocabularies: some break text into whole words, others into word fragments, and some into individual characters. This makes their token-level perplexities impossible to compare directly.
+ Character-level perplexity creates a standardised comparison by calculating how well each model would perform if its predictions were scored character by character. We are not changing how the models work; instead, we mathematically renormalise each model's predictions to approximate its character-level performance.
+
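+ For a given model, the total log-loss of a text is fixed, so converting from a per-token to a per-character average only rescales by the respective counts: $\mathrm{PPL}_{\text{char}} = \mathrm{PPL}_{\text{token}}^{N_{\text{tokens}} / N_{\text{chars}}}$. Below is a minimal sketch of this computation with `transformers`; the model id and dtype are illustrative assumptions, not the exact evaluation code behind the table:
+
+ ```python
+ import math
+
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ MODEL_ID = "TildeAI/TildeOpen-30b"  # assumed id; substitute any causal LM
+
+ tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
+ model = AutoModelForCausalLM.from_pretrained(
+     MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
+ )
+ model.eval()
+
+ def char_perplexity(text: str) -> float:
+     """Per-character perplexity of `text` under a causal LM."""
+     enc = tokenizer(text, return_tensors="pt").to(model.device)
+     with torch.no_grad():
+         out = model(**enc, labels=enc["input_ids"])
+     # `out.loss` is the mean NLL over the n-1 predicted token positions.
+     n_predicted = enc["input_ids"].shape[1] - 1
+     total_nll = out.loss.item() * n_predicted
+     # Renormalise by character count instead of token count.
+     return math.exp(total_nll / len(text))
+ ```
+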
+ Perplexity fairly evaluates how well each model handles:
+ - Spelling accuracy across a diverse vocabulary
+ - Grammar rules that span multiple words
+ - Sentence structure and flow
+ - Language-specific patterns (such as how different languages form plurals or compound words)
+ **Why does this matter?** Models with lower perplexity generally perform better on real-world tasks such as text generation, translation, and understanding context, making perplexity a reliable indicator of overall language competency across applications.
+ **What data did we use?**
+ We use WMT24++ because it is a multilingual, language-parallel evaluation set that none of the models saw during training. WMT24++ combines texts from news, literature, speech, and social media, which makes it well suited to benchmarking foundational models.
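+ Corpus-level numbers like those below should be computed by summing log-losses and character counts over all segments and exponentiating once, not by averaging per-segment perplexities. Continuing the sketch above, and assuming WMT24++ is published on the Hugging Face Hub as `google/wmt24pp` with a `target` text field (both the dataset id and the field name are assumptions to verify):
+
+ ```python
+ from datasets import load_dataset
+
+ # Assumed id/config naming; check the WMT24++ release for the exact values.
+ ds = load_dataset("google/wmt24pp", "en-de_DE", split="train")
+
+ total_nll, total_chars = 0.0, 0
+ for seg in ds["target"]:
+     enc = tokenizer(seg, return_tensors="pt").to(model.device)
+     with torch.no_grad():
+         out = model(**enc, labels=enc["input_ids"])
+     total_nll += out.loss.item() * (enc["input_ids"].shape[1] - 1)
+     total_chars += len(seg)
+
+ print("per-character perplexity:", math.exp(total_nll / total_chars))
+ ```
+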
+ Per-character perplexity on WMT24++ (lower is better; the best score per language is in bold):
+
+ | Language | TildeOpen-30B | Gemma-2-27B | EuroLLM-9B | ALIA-40B |
+ |------------|---------------|-------------|------------|----------|
+ | Bulgarian | **2.1716** | 2.3541 | 2.3502 | 2.2411 |
+ | Croatian | **2.2259** | 2.6809 | 2.6780 | 2.3456 |
+ | Czech | **2.2682** | 2.4873 | 2.4808 | 2.3639 |
+ | Danish | **2.0968** | 2.2608 | 2.2586 | 2.1543 |
+ | Dutch | **2.0136** | 2.1249 | 2.1185 | 2.0629 |
+ | English | 2.1497 | **2.0342** | 2.1897 | 2.1027 |
+ | Estonian | **2.2825** | 2.7163 | 2.5652 | 2.4232 |
+ | Finnish | **2.1687** | 2.4069 | 2.3844 | 2.2774 |
+ | French | 1.9779 | 2.0195 | 2.0479 | **1.9750** |
+ | German | **1.9664** | 2.0214 | 2.0499 | 1.9725 |
+ | Hungarian | **2.1481** | 2.3308 | 2.3705 | 2.2493 |
+ | Icelandic | **2.2011** | 3.1917 | 5.3162 | 4.0978 |
+ | Italian | **2.0431** | 2.1065 | 2.1213 | 2.0604 |
+ | Latvian | **2.2477** | 2.6701 | 2.4896 | 2.4352 |
+ | Lithuanian | **2.2301** | 2.5495 | 2.4754 | 2.4109 |
+ | Norwegian | **2.2445** | 2.4173 | 2.5121 | 2.3152 |
+ | Polish | **2.1214** | 2.2294 | 2.2264 | 2.1847 |
+ | Portuguese | **2.0810** | 2.1554 | 2.1561 | 2.0884 |
+ | Romanian | **2.1266** | 2.2724 | 2.2821 | 2.1974 |
+ | Russian | **2.1502** | 2.2091 | 2.2813 | 2.1889 |
+ | Serbian | **2.3708** | 2.8053 | 4.7160 | 2.5119 |
+ | Slovak | **2.2281** | 2.4674 | 2.4588 | 2.3505 |
+ | Slovenian | **2.2662** | 2.5798 | 2.5087 | 2.3611 |
+ | Spanish | 2.0400 | 2.0665 | 2.1186 | **2.0055** |
+ | Swedish | **2.1471** | 2.2971 | 2.2856 | 2.2039 |
+ | Turkish | **2.2108** | 2.3665 | 2.3508 | 3.0611 |
+ | Ukrainian | **2.2470** | 2.4000 | 2.4251 | 2.3168 |
+