---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: label
    dtype: string
  splits:
  - name: train
    num_bytes: 187690
    num_examples: 1024
  - name: val
    num_bytes: 51154
    num_examples: 256
  - name: test
    num_bytes: 381964
    num_examples: 2048
  download_size: 385193
  dataset_size: 620808
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: val
    path: data/val-*
  - split: test
    path: data/test-*
license: openrail
task_categories:
- text-classification
language:
- fi
size_categories:
- 1K<n<10K
---

This version of the ScandiSent dataset was created with a [dataset creation script from EuroEval](https://github.com/EuroEval/EuroEval/blob/main/src/scripts/create_scandisent_fi.py) and archived here for use with the FIN-bench-v2 benchmark suite.
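
## Usage

A minimal sketch of how this dataset can be loaded with the Hugging Face `datasets` library. The repository id below is a placeholder; substitute the actual path of this dataset on the Hub.

```python
from datasets import load_dataset

# Placeholder repository id; replace with the actual Hub path of this dataset.
dataset = load_dataset("your-org/scandisent-fi")

# The default config exposes the splits declared in the card metadata above.
train = dataset["train"]  # 1024 examples
val = dataset["val"]      # 256 examples
test = dataset["test"]    # 2048 examples

# Each example pairs a Finnish text with its label (both stored as strings).
example = train[0]
print(example["text"], example["label"])
```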

## Citation Information

If you find these benchmarks useful in your research, please consider citing the sources below:

### EuroEval
```
@article{nielsen2024encoder,
  title={Encoder vs Decoder: Comparative Analysis of Encoder and Decoder Language Models on Multilingual NLU Tasks},
  author={Nielsen, Dan Saattrup and Enevoldsen, Kenneth and Schneider-Kamp, Peter},
  journal={arXiv preprint arXiv:2406.13469},
  year={2024}
}
@inproceedings{nielsen2023scandeval,
  author = {Nielsen, Dan Saattrup},
  booktitle = {Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)},
  month = may,
  pages = {185--201},
  title = {{ScandEval: A Benchmark for Scandinavian Natural Language Processing}},
  year = {2023}
}
```

### ScandiSent
```
@inproceedings{isbister-etal-2021-stop,
  title = "Should we Stop Training More Monolingual Models, and Simply Use Machine Translation Instead?",
  author = "Isbister, Tim and
    Carlsson, Fredrik and
    Sahlgren, Magnus",
  editor = "Dobnik, Simon and
    {\O}vrelid, Lilja",
  booktitle = "Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)",
  month = may # " 31--2 " # jun,
  year = "2021",
  address = "Reykjavik, Iceland (Online)",
  publisher = {Link{\"o}ping University Electronic Press, Sweden},
  url = "https://aclanthology.org/2021.nodalida-main.42/",
  pages = "385--390",
  abstract = "Most work in NLP makes the assumption that it is desirable to develop solutions in the native language in question. There is consequently a strong trend towards building native language models even for low-resource languages. This paper questions this development, and explores the idea of simply translating the data into English, thereby enabling the use of pretrained, and large-scale, English language models. We demonstrate empirically that a large English language model coupled with modern machine translation outperforms native language models in most Scandinavian languages. The exception to this is Finnish, which we assume is due to inferior translation quality. Our results suggest that machine translation is a mature technology, which raises a serious counter-argument for training native language models for low-resource languages. This paper therefore strives to make a provocative but important point. As English language models are improving at an unprecedented pace, which in turn improves machine translation, it is from an empirical and environmental stand-point more effective to translate data from low-resource languages into English, than to build language models for such languages."
}
```