pretty_name: 'BAREC 2025: Readability Assessment Shared Task'
---

# BAREC Shared Task 2025

## Dataset Summary

**BAREC** (the Balanced Arabic Readability Evaluation Corpus) is a large-scale dataset developed for the **BAREC Shared Task 2025**, focused on **fine-grained Arabic readability assessment**. The dataset includes over **1M words**, annotated across **19 readability levels**, with additional mappings to coarser 7-, 5-, and 3-level schemes.

The dataset is **annotated at the sentence level**. Document-level readability scores are derived by assigning each document the readability level of its **most difficult sentence**, based on the 19-level scheme. This provides both **sentence-level** and **document-level** readability information.
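As a minimal sketch (assuming sentence labels are available as integers in the 19-level scheme), the document-level rule above reduces to taking a maximum:

```python
def document_level(sentence_levels):
    """Document-level readability (19-level scheme): the level of the
    document's most difficult sentence."""
    if not sentence_levels:
        raise ValueError("document has no sentences")
    return max(sentence_levels)

# A document whose sentences are rated 3, 7, and 5 gets level 7.
print(document_level([3, 7, 5]))  # → 7
```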

---

## Supported Tasks and Leaderboards

The dataset supports **multi-class readability classification** in the following formats:

- **19 levels** (default)
- **7 levels**
- **5 levels**
- **3 levels**

For details on the shared task, evaluation setup, and leaderboards, visit the [Shared Task Website](https://barec.camel-lab.com/sharedtask2025).
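A small helper can make the choice of granularity explicit; the column names below are the label fields listed under Data Fields later in this card:

```python
# Target label column for each supported classification granularity
# (column names as listed in the Data Fields sections of this card).
SCHEME_TO_COLUMN = {
    19: "Readability_Level_19",
    7: "Readability_Level_7",
    5: "Readability_Level_5",
    3: "Readability_Level_3",
}

def label_column(n_levels):
    """Return the label column to train on for an n-level task."""
    return SCHEME_TO_COLUMN[n_levels]

print(label_column(7))  # → Readability_Level_7
```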

---

## Languages

- **Arabic** (Modern Standard Arabic)

---

## How to Use

You can load the dataset with the Hugging Face `datasets` library by specifying the appropriate `data_files`.

### Sentence-level dataset

```python
from datasets import load_dataset

# `token` is your Hugging Face access token (needed if the dataset is gated)
data_files = {"train": "Sent_Train.csv", "dev": "Sent_Dev.csv", "test": "Sent_Test.csv"}
barec = load_dataset("CAMeL-Lab/BAREC-Shared-Task-2025", data_files=data_files, token=token)
barec_train = barec["train"]
barec_dev = barec["dev"]
barec_test = barec["test"]
```

### Document-level dataset

```python
from datasets import load_dataset

# `token` is your Hugging Face access token (needed if the dataset is gated)
data_files = {"train": "Doc_Train.csv", "dev": "Doc_Dev.csv", "test": "Doc_Test.csv"}
barec = load_dataset("CAMeL-Lab/BAREC-Shared-Task-2025", data_files=data_files, token=token)
barec_train = barec["train"]
barec_dev = barec["dev"]
barec_test = barec["test"]
```

---

## Dataset Structure (Sentence-level)

### Data Fields

- **ID**: Unique sentence identifier.
- **Sentence**: The sentence text.
- **Word_Count**: Number of words in the sentence.
- **Readability_Level**: The readability level in the `19-levels` scheme, ranging from `1-alif` to `19-qaf`.
- **Readability_Level_19**: The readability level in the `19-levels` scheme, ranging from `1` to `19`.
- **Readability_Level_7**: The readability level in the `7-levels` scheme, ranging from `1` to `7`.
- **Readability_Level_5**: The readability level in the `5-levels` scheme, ranging from `1` to `5`.
- **Readability_Level_3**: The readability level in the `3-levels` scheme, ranging from `1` to `3`.
- **Annotator**: The annotator ID (`A1`-`A5`, or `IAA` for Inter-Annotator Agreement).
- **Document**: Source document file name.
- **Source**: Document source.
- **Book**: Book name.
- **Author**: Author name.
- **Domain**: Domain (`Arts & Humanities`, `STEM`, or `Social Sciences`).
- **Text_Class**: Readership group (`Foundational`, `Advanced`, or `Specialized`).
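For illustration, the numeric level can be recovered from the string-valued `Readability_Level` field (values like `1-alif` through `19-qaf`) by splitting off the prefix:

```python
def level_to_int(readability_level):
    """Parse the numeric level out of a `Readability_Level` value
    such as "1-alif" or "19-qaf"."""
    return int(readability_level.split("-", 1)[0])

print(level_to_int("1-alif"), level_to_int("19-qaf"))  # → 1 19
```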

---

## Dataset Structure (Document-level)

### Data Fields

- **ID**: Unique document identifier.
- **Document**: Document file name.
- **Sentences**: Full text of the document.
- **Sentence_Count**: Number of sentences in the document.
- **Word_Count**: Total word count.
- **Readability_Level**: The readability level in the `19-levels` scheme, ranging from `1-alif` to `19-qaf`.
- **Readability_Level_19**: The readability level in the `19-levels` scheme, ranging from `1` to `19`.
- **Readability_Level_7**: The readability level in the `7-levels` scheme, ranging from `1` to `7`.
- **Readability_Level_5**: The readability level in the `5-levels` scheme, ranging from `1` to `5`.
- **Readability_Level_3**: The readability level in the `3-levels` scheme, ranging from `1` to `3`.
- **Source**: Document source.
- **Book**: Book name.
- **Author**: Author name.
- **Domain**: Domain (`Arts & Humanities`, `STEM`, or `Social Sciences`).
- **Text_Class**: Readership group (`Foundational`, `Advanced`, or `Specialized`).

---

## Data Splits

- The BAREC dataset has three splits: *Train* (80%), *Dev* (10%), and *Test* (10%).
- The splits are made at the document level.
- The splits are balanced across *Readability Levels*, *Domains*, and *Text Classes*.

---

## Evaluation

We define the readability assessment task as an ordinal classification task. The following metrics are used for evaluation:

- **Accuracy (Acc<sup>19</sup>):** The percentage of cases where the reference and predicted classes match in the 19-level scheme.
- **Accuracy (Acc<sup>7</sup>, Acc<sup>5</sup>, Acc<sup>3</sup>):** The percentage of cases where the reference and predicted classes match after collapsing the 19 levels into 7, 5, or 3 levels, respectively.
- **Adjacent Accuracy (±1 Acc<sup>19</sup>):** Also known as off-by-1 accuracy: the proportion of predictions that are exactly correct or off by at most one level in the 19-level scheme.
- **Average Distance (Dist):** Also known as Mean Absolute Error (MAE): the average absolute difference between the predicted and true labels.
- **Quadratic Weighted Kappa (QWK):** An extension of Cohen's Kappa that measures agreement between predicted and true labels, applying a quadratic penalty to larger misclassifications, so predictions farther from the true label are penalized more heavily.

We provide evaluation scripts [here](https://github.com/CAMeL-Lab/barec-shared-task-2025).
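These metrics can be sketched with NumPy as below (an illustrative re-implementation, not the official evaluation script; labels are assumed to be integers in `1..n_classes`):

```python
import numpy as np

def evaluate(y_true, y_pred, n_classes=19):
    """Compute Acc, adjacent (+/-1) Acc, Dist (MAE), and QWK
    for integer labels in 1..n_classes."""
    t = np.asarray(y_true)
    p = np.asarray(y_pred)
    acc = float(np.mean(t == p))
    adj_acc = float(np.mean(np.abs(t - p) <= 1))   # off-by-1 accuracy
    dist = float(np.mean(np.abs(t - p)))           # mean absolute error

    # Quadratic Weighted Kappa: 1 - sum(W*O) / sum(W*E), where O is the
    # observed confusion matrix, E the expected matrix under independence,
    # and W the quadratic penalty matrix.
    O = np.zeros((n_classes, n_classes))
    for i, j in zip(t - 1, p - 1):
        O[i, j] += 1
    idx = np.arange(n_classes)
    W = np.subtract.outer(idx, idx) ** 2 / (n_classes - 1) ** 2
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    qwk = float(1.0 - (W * O).sum() / (W * E).sum())
    return {"acc": acc, "adj_acc": adj_acc, "dist": dist, "qwk": qwk}

# Perfect predictions score 1.0 on Acc and QWK, and 0.0 on Dist.
print(evaluate([1, 2, 3, 4, 5], [1, 2, 3, 4, 5]))
```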

---

## Citation

If you use BAREC in your work, please cite the following papers:

```
@inproceedings{elmadani-etal-2025-readability,
  title = "A Large and Balanced Corpus for Fine-grained Arabic Readability Assessment",
  address = "Vienna, Austria",
  publisher = "Association for Computational Linguistics"
}

@inproceedings{habash-etal-2025-guidelines,
  title = "Guidelines for Fine-grained Sentence-level Arabic Readability Annotation",
  author = "Habash, Nizar and
    Taha-Thomure, Hanada and
    Elmadani, Khalid N. and
    Zeino, Zeina and
    Abushmaes, Abdallah",
  booktitle = "Proceedings of the 19th Linguistic Annotation Workshop (LAW-XIX)",
  year = "2025",
  address = "Vienna, Austria",
  publisher = "Association for Computational Linguistics"
}
```