Indonesian Wikipedia Data Repository
License: CC BY-SA 3.0
Welcome to the Indonesian Wikipedia Data Repository. The datasets are extracted from Wikipedia HF and processed using the scripts available in this repository for reproducibility purposes.
FAQs
What languages are available in this dataset?
Please check the following table.
| Lang Code | Language | Wiki Info | Total Articles | Total Size (bytes) | 
|---|---|---|---|---|
| ace | Acehnese | Wiki Link | 12904 | 4867838 | 
| ban | Balinese | Wiki Link | 19837 | 17366080 | 
| bjn | Banjarese | Wiki Link | 10437 | 6655378 | 
| bug | Buginese | Wiki Link | 9793 | 2072609 | 
| gor | Gorontalo | Wiki Link | 14514 | 5989252 | 
| id | Indonesian | Wiki Link | 654287 | 1100932403 | 
| jv | Javanese | Wiki Link | 72667 | 69774853 | 
| map_bms | Banyumasan (dialect of Javanese) | Wiki Link | 11832 | 5060989 | 
| min | Minangkabau | Wiki Link | 225858 | 116376870 | 
| ms | Malay | Wiki Link | 346186 | 410443550 | 
| nia | Nias | Wiki Link | 1650 | 1938121 | 
| su | Sundanese | Wiki Link | 61494 | 47410439 | 
| tet | Tetum | Wiki Link | 1465 | 1452716 | 
How do I extract a new Wikipedia dataset for Indonesian languages?
You may check the script extract_raw_wiki_data.py to understand its implementation, or adjust the bash script provided in extract_raw_wiki_data_indo.sh to extract it on your own. Please note that this pipeline is extensible to any language of your choice.
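For illustration only, here is a minimal sketch of what pulling one language's raw dump via the Hugging Face `wikipedia` loading script can look like; the language code, date, output path, and use of the Beam runner are assumptions and may differ from what extract_raw_wiki_data.py actually does.

```python
from datasets import load_dataset

# Minimal sketch (not the repository's actual script): pull the raw dump of a
# single language via the Hugging Face "wikipedia" loading script.
# "ace" and "20230901" are placeholder choices; adjust them as needed.
raw_wiki = load_dataset(
    "wikipedia",
    language="ace",
    date="20230901",
    beam_runner="DirectRunner",  # the legacy loader parses the dump with Apache Beam
)

# Persist the extracted articles so they can be cleaned/deduplicated later.
raw_wiki["train"].to_csv("wiki_ace_20230901_raw_dataset.csv")
```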
Where can I check which dump dates and languages are available?
You may visit the Wikipedia Dump Index to check the latest available data, and the Wikipedia Language Coverage page to map the language codes you want to extract.
How is the data preprocessed? What makes it different from loading it directly from Wikipedia HF?
The data available here is processed with the following flow:
- Raw data is deduplicated on `title` and `text` (the text content of a given article) to remove articles containing boilerplate text (template text typically used when no information is available or to ask for content contributions), which is usually considered noisy for NLP data.
- Furthermore, the `title` and `text` fields are checked for string-matching duplication after light pre-processing (symbols removed, HTML tags stripped, ASCII characters validated). You may check the `cleanse_wiki_data.py` script to understand its implementation; a hedged sketch of the idea follows below.
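As a rough illustration of that flow (not the actual `cleanse_wiki_data.py`; the normalization rules shown here are assumptions):

```python
import re
import pandas as pd

def normalize(text: str) -> str:
    """Rough normalization used only for duplicate matching (assumed rules):
    strip HTML tags, drop non-ASCII characters and symbols, collapse whitespace."""
    text = re.sub(r"<[^>]+>", " ", text)            # strip HTML tags
    text = text.encode("ascii", "ignore").decode()  # keep ASCII characters only
    text = re.sub(r"[^\w\s]", " ", text)            # remove symbols
    return re.sub(r"\s+", " ", text).strip().lower()

def dedup_articles(df: pd.DataFrame) -> pd.DataFrame:
    """Drop exact duplicates on (title, text), then drop rows whose normalized
    title/text still collide (string-matching duplicates)."""
    df = df.drop_duplicates(subset=["title", "text"])
    df = df.assign(
        _norm_title=df["title"].map(normalize),
        _norm_text=df["text"].map(normalize),
    )
    df = df.drop_duplicates(subset=["_norm_title", "_norm_text"])
    return df.drop(columns=["_norm_title", "_norm_text"])
```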
Getting Started
To read the datasets directly
Use one of the following code snippets to load it from the Hugging Face Hub.
You can pass the desired config name as the second positional argument:
```python
from datasets import load_dataset

dataset = load_dataset(
    "sabilmakbar/indo_wiki",
    "indo_wiki_dedup_data"  # config name; can be "indo_wiki_raw_data" or "indowiki_dedup_id_only", defaults to "indo_wiki_dedup_data"
)
```
Or you can provide both lang and date_stamp (providing only one will throw an error):
```python
from datasets import load_dataset

dataset = load_dataset(
    "sabilmakbar/indo_wiki",
    lang="id",             # see the splits for complete lang choices
    date_stamp="20230901"
)
```
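Either way, once the dataset is loaded you can inspect which splits are available; the split names below are assumed to follow the language codes in the table above:

```python
# List the available splits (assumed to be one per language code)
print(dataset)

# Peek at the first article of the Indonesian split, assuming "id" is a valid split name
print(dataset["id"][0])
```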
To replicate the whole dataset generation process
- Set up a new Python/Conda environment (recommended Python version: 3.9.6 to 3.9.18 or 3.10.0 to 3.10.13) and install the requirements for this codebase via `pip install -r requirements.txt`.
- Activate the Python/Conda environment in which the requirements were installed.
- Run the extraction script against the Wikimedia dump: `sh extract_raw_wiki_data_indo.sh`.
- Run the deduplication script: `sh dedup_raw_wiki_data_indo.sh`.
Citation Info:
```
@ONLINE{wikidump,
    author = "Wikimedia Foundation",
    title  = "Wikimedia Downloads",
    url    = "https://dumps.wikimedia.org"}

@ONLINE{wikipedia-hf,
    title  = "Huggingface Wikipedia Dataset",
    url    = "https://huggingface.co/datasets/wikipedia"}
```