Datasets · Tasks: Text Classification · Modalities: Text · Formats: parquet · Sub-tasks: semantic-similarity-classification · Size: 10K–100K
Suggested readme changes #3
by KennethEnevoldsen · opened

README.md → readme.md (RENAMED, +53 −7)
````diff
@@ -265,18 +265,40 @@ configs:
   - split: test
     path: turkish/test-*
 ---
+
+
 <!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
-
+
+
+<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
+<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">XNLIV2</h1>
+<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
+<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
+</div>
 
 This is a subset of 'XNLI 2.0: Improving XNLI dataset and performance on Cross Lingual Understanding' with languages that were not part of the original XNLI, plus three (verified) languages that are not strongly covered in MTEB.
 
-> This dataset is included as a task in [`mteb`](https://github.com/embeddings-benchmark/mteb).
 
-
-
+<table align="center" style="border-collapse: collapse; margin: 20px auto;">
+<tr>
+<td style="padding: 8px; border: 1px solid #ddd; font-weight: bold; background-color: #f5f5f5;">Task category</td>
+<td style="padding: 8px; border: 1px solid #ddd;">t2t: text-to-text</td>
+</tr>
+<tr>
+<td style="padding: 8px; border: 1px solid #ddd; font-weight: bold; background-color: #f5f5f5;">Domains</td>
+<td style="padding: 8px; border: 1px solid #ddd;">Non-fiction, Fiction, Government, Written</td>
+</tr>
+<tr>
+<td style="padding: 8px; border: 1px solid #ddd; font-weight: bold; background-color: #f5f5f5;">Reference</td>
+<td style="padding: 8px; border: 1px solid #ddd;"><a href="https://arxiv.org/pdf/2301.06527">https://arxiv.org/pdf/2301.06527</a></td>
+</tr>
+</table>
+
 
 ## How to evaluate on this task
 
+You can evaluate an embedding model on this dataset using the following code:
+
 ```python
 import mteb
 
@@ -287,8 +309,7 @@ model = mteb.get_model(YOUR_MODEL)
 evaluator.run(model)
 ```
 
-
-Reference: https://arxiv.org/pdf/2301.06527
+To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
 
 ## Citation
 
@@ -327,6 +348,23 @@ If you use this dataset, please cite the dataset as well as [mteb](https://githu
 ```
 
 # Dataset Statistics
+
+
+<details>
+<summary>Dataset Statistics</summary>
+<pre style="background-color: #f5f5f5; padding: 15px; border-radius: 5px; overflow-x: auto; margin: 10px 0;">
+
+The following JSON contains the descriptive statistics from the task. These can also be obtained using:
+
+```py
+import mteb
+
+task = mteb.get_task("XNLIV2")
+
+desc_stats = task.metadata.descriptive_stats
+```
+
+
 ```json
 {
 "test": {
@@ -640,4 +678,12 @@ If you use this dataset, please cite the dataset as well as [mteb](https://githu
 }
 }
 }
-```
+```
+
+</pre>
+</details>
+
+
+---
+
+*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*
````
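As context for the card's sub-task label (semantic-similarity-classification): pair-classification tasks of this kind embed the two sides of each pair, score them by similarity, and report accuracy at the best similarity threshold. A minimal sketch of that thresholding step on toy data (`best_accuracy_threshold` is a hypothetical illustrative helper, not part of the `mteb` API):

```python
import numpy as np

def best_accuracy_threshold(sims, labels):
    """Scan each observed similarity as a candidate threshold and
    return the (threshold, accuracy) pair with the highest accuracy."""
    best_t, best_acc = 0.0, 0.0
    for t in np.unique(sims):
        # Predict "positive pair" when similarity >= threshold
        acc = float(np.mean((sims >= t) == labels))
        if acc > best_acc:
            best_t, best_acc = float(t), acc
    return best_t, best_acc

# Toy cosine similarities and gold labels (1 = matching pair)
sims = np.array([0.9, 0.8, 0.3, 0.2])
labels = np.array([1, 1, 0, 0])
t, acc = best_accuracy_threshold(sims, labels)
# The threshold 0.8 separates the two classes perfectly here (accuracy 1.0)
```

In a real evaluation the similarities would come from an embedding model's encodings of the premise/hypothesis pairs rather than hand-picked numbers.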