apmoore1 committed on
Commit fab722b · verified · 1 Parent(s): 602f385

How to load the dataset

Files changed (2):
  1. README.md +3 -1
  2. example_loading_script.py +13 -0
README.md CHANGED
@@ -12,11 +12,13 @@ size_categories:
 viewer: false
 configs:
 - config_name: default
-  data_files: "data/wikipedia_shard_0.jsonl.gz"
+  data_files: "data/*.jsonl.gz"
   chunksize: 100
 ---
 # English USAS Mosaico Dataset
 
+*Note*: this dataset cannot be loaded with the HuggingFace `datasets` library because the `pos` column contains lists with two data types. However, you can download the dataset and load it in Python using the core `json` and `gzip` modules; an example loading script is provided at [./example_loading_script.py](./example_loading_script.py).
+
 This dataset contains the processed English Wikipedia pages of the [Mosaico](https://github.com/SapienzaNLP/mosaico/tree/main) dataset. It is a subset of the original English Wikipedia dataset, containing only the pages tagged `good` and `featured`.
 
 Each entry in the dataset is a processed Wikipedia page containing the following annotations/tags:
example_loading_script.py ADDED
@@ -0,0 +1,13 @@
+from pathlib import Path
+import gzip
+import json
+
+# Loop over each JSONL shard file
+for data_file in Path("data").glob("*.jsonl.gz"):
+    print(data_file)
+    # Open the compressed file in text mode
+    with gzip.open(str(data_file), "rt", encoding="utf-8") as f:
+        # Read the first line of the file, print it, then move on
+        for line in f:
+            print(json.loads(line))
+            break
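The added script only prints the first record of each shard. A fuller loader might yield every record across all `data/*.jsonl.gz` shards so they can be consumed lazily; the sketch below is an assumption building on the commit's `gzip`/`json` approach (the `iter_records` helper and the toy record are hypothetical, not part of the dataset), demonstrated on a throwaway shard written to a temporary directory:

```python
from pathlib import Path
import gzip
import json
import tempfile

def iter_records(data_dir):
    """Yield each JSON record from every *.jsonl.gz shard in data_dir."""
    for shard in sorted(Path(data_dir).glob("*.jsonl.gz")):
        with gzip.open(shard, "rt", encoding="utf-8") as f:
            for line in f:
                yield json.loads(line)

# Demonstrate the loader on a throwaway shard holding one toy record
# (the real shards follow the same one-JSON-object-per-line layout).
with tempfile.TemporaryDirectory() as tmp:
    record = {"id": 1, "text": "hello"}
    shard_path = Path(tmp) / "wikipedia_shard_0.jsonl.gz"
    with gzip.open(shard_path, "wt", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    records = list(iter_records(tmp))
    print(records)  # → [{'id': 1, 'text': 'hello'}]
```

Because `iter_records` is a generator, pages are parsed one at a time rather than held in memory all at once, which matters for multi-shard Wikipedia-scale data.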