---
dataset_info:
  - config_name: journalistic
    features:
      - name: text
        dtype: string
      - name: label
        dtype: int64
    splits:
      - name: valid
        num_bytes: 571631
        num_examples: 1000
      - name: train
        num_bytes: 1165703801.742406
        num_examples: 1776290
      - name: test
        num_bytes: 27508.055555555555
        num_examples: 35
    download_size: 7801787253
    dataset_size: 1166302940.7979615
  - config_name: legal
    features:
      - name: text
        dtype: string
      - name: label
        dtype: int64
    splits:
      - name: test
        num_bytes: 10385
        num_examples: 37
      - name: valid
        num_bytes: 282724
        num_examples: 1000
      - name: train
        num_bytes: 859653342.1992471
        num_examples: 2961596
    download_size: 3051546595
    dataset_size: 859946451.1992471
  - config_name: literature
    features:
      - name: text
        dtype: string
      - name: label
        dtype: int64
    splits:
      - name: test
        num_bytes: 12767
        num_examples: 36
      - name: valid
        num_bytes: 373696.9233584274
        num_examples: 1000
      - name: train
        num_bytes: 28191249
        num_examples: 75512
    download_size: 174029597
    dataset_size: 28577712.92335843
  - config_name: politics
    features:
      - name: text
        dtype: string
      - name: label
        dtype: int64
    splits:
      - name: test
        num_bytes: 64499
        num_examples: 48
      - name: valid
        num_bytes: 1469255.532624226
        num_examples: 1000
      - name: train
        num_bytes: 44787070
        num_examples: 30495
    download_size: 154407264
    dataset_size: 46320824.53262423
  - config_name: social_media
    features:
      - name: text
        dtype: string
      - name: label
        dtype: int64
    splits:
      - name: test
        num_bytes: 6146
        num_examples: 28
      - name: valid
        num_bytes: 110535.71291367509
        num_examples: 1000
      - name: train
        num_bytes: 261685711.908761
        num_examples: 2367418
    download_size: 1689212607
    dataset_size: 261802393.62167466
  - config_name: web
    features:
      - name: text
        dtype: string
      - name: label
        dtype: int64
    splits:
      - name: test
        num_bytes: 64024
        num_examples: 34
      - name: valid
        num_bytes: 2216516.5075847963
        num_examples: 1000
      - name: train
        num_bytes: 256894009
        num_examples: 86909
    download_size: 1377291469
    dataset_size: 259174549.5075848
configs:
  - config_name: journalistic
    data_files:
      - split: train
        path: journalistic/train-*
      - split: valid
        path: journalistic/valid-*
      - split: test
        path: journalistic/test-*
  - config_name: legal
    data_files:
      - split: train
        path: legal/train-*
      - split: valid
        path: legal/valid-*
      - split: test
        path: legal/test-*
  - config_name: literature
    data_files:
      - split: train
        path: literature/train-*
      - split: valid
        path: literature/valid-*
      - split: test
        path: literature/test-*
  - config_name: politics
    data_files:
      - split: train
        path: politics/train-*
      - split: valid
        path: politics/valid-*
      - split: test
        path: politics/test-*
  - config_name: social_media
    data_files:
      - split: train
        path: social_media/train-*
      - split: valid
        path: social_media/valid-*
      - split: test
        path: social_media/test-*
  - config_name: web
    data_files:
      - split: train
        path: web/train-*
      - split: valid
        path: web/valid-*
      - split: test
        path: web/test-*
---

# PtBrVId

PtBrVId is a Portuguese Variety Identification corpus, built by combining pre-existing datasets originally created for different NLP tasks and released under permissive licenses.

Our goal is to provide a large, diverse, and multi-domain resource for studying and improving automatic identification of European Portuguese (PT-PT) and Brazilian Portuguese (PT-BR).
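
Assuming the corpus is hosted on the Hugging Face Hub (the repository id below is a placeholder), each domain can be loaded as its own configuration, with `train`, `valid`, and `test` splits:

```python
from datasets import load_dataset

# Placeholder repository id -- substitute the actual Hub path of this dataset.
REPO_ID = "<namespace>/PtBrVId"

# Available configurations: journalistic, legal, literature, politics, social_media, web
ds = load_dataset(REPO_ID, name="journalistic")

print(ds)              # DatasetDict with 'train', 'valid', and 'test' splits
print(ds["valid"][0])  # {'text': ..., 'label': ...}
```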


## 📚 Data Sources

The corpus combines datasets from several domains, each selected to contain (as far as possible) mono-variety content. The current release is silver-labeled and entirely unsupervised: no document was manually verified, so we cannot fully guarantee that every document is strictly mono-variety. A future version will include a refined annotation schema with both automatic and manual verification.

| Domain | Variety | Dataset | Original Task | # Docs | License | Silver Labeled |
|---|---|---|---|---|---|---|
| Literature | PT-PT | Arquivo Pessoa | - | ~4k | CC | |
| Literature | PT-PT | Gutenberg Project | - | 6 | CC | |
| Literature | PT-PT | LT-Corpus | - | 56 | ELRA END USER | |
| Literature | PT-BR | Brazilian Literature | Author Identification | 81 | CC | |
| Literature | PT-BR | LT-Corpus | - | 8 | ELRA END USER | |
| Politics | PT-PT | Koehn (2005) Europarl | Machine Translation | ~10k | CC | |
| Politics | PT-BR | Brazilian Senate Speeches¹ | - | ~5k | CC | |
| Journalistic | PT-PT | CETEM Público | - | 1M | CC | |
| Journalistic | PT-BR | CETEM Folha | - | 272k | CC | |
| Social Media | PT-PT | Ramalho (2021) | Fake News Detection | 2M | MIT | |
| Social Media | PT-BR | Vargas (2022) | Hate Speech Detection | 5k | CC-BY-NC-4.0 | |
| Social Media | PT-BR | Cunha (2021) | Fake News Detection | 2k | GPL-3.0 | |
| Web | BOTH | Ortiz-Suarez (2020) | - | 10k | CC | |

Table 1: PtBrVId data sources and metadata.

¹ The Brazilian Senate Speeches dataset was created by the authors through web crawling of the Brazilian Senate website and is available on Hugging Face.

A raw version of the dataset is available here.


## 🛠 Annotation & Preprocessing

### Annotation

We selected data sources known to contain primarily mono-variety Portuguese texts. While this approach helps ensure quality, this first release is entirely unsupervised. A planned v2 will introduce a hybrid annotation strategy combining automated labeling and manual review.

### Preprocessing Pipeline

To standardize and clean the data, we applied the following steps (a minimal code sketch follows the list):

  1. Remove NaN values.
  2. Remove empty documents.
  3. Remove duplicate documents.
  4. Apply the clean-text library to strip content that is not relevant for variety identification.
  5. Remove outlier documents with lengths below Q1 - 1.5 × IQR or above Q3 + 1.5 × IQR, where Q1 and Q3 are the first and third quartiles, and IQR is the interquartile range.
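
The sketch below illustrates these steps in Python, assuming the documents live in a pandas `DataFrame` with a `text` column; the specific `clean-text` options are illustrative assumptions, not necessarily the exact flags used to build the corpus.

```python
import pandas as pd
from cleantext import clean  # pip install clean-text pandas

def preprocess(df: pd.DataFrame, text_col: str = "text") -> pd.DataFrame:
    # 1-2. Remove NaN values and empty documents.
    df = df.dropna(subset=[text_col])
    df = df[df[text_col].str.strip().str.len() > 0]

    # 3. Remove duplicate documents.
    df = df.drop_duplicates(subset=[text_col]).copy()

    # 4. Clean each document. Flag choices are assumptions: keep casing and
    #    diacritics (both carry variety signal); drop URLs, e-mails, phone numbers.
    df[text_col] = df[text_col].map(
        lambda t: clean(t, lower=False, to_ascii=False,
                        no_urls=True, no_emails=True, no_phone_numbers=True)
    )

    # 5. Drop length outliers outside [Q1 - 1.5 * IQR, Q3 + 1.5 * IQR].
    lengths = df[text_col].str.len()
    q1, q3 = lengths.quantile(0.25), lengths.quantile(0.75)
    iqr = q3 - q1
    keep = lengths.between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)
    return df[keep]
```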

## 📖 Citation

If you use this corpus, please cite:

```bibtex
@article{Sousa_Almeida_Silvano_Cantante_Campos_Jorge_2025,
  title={Enhancing Portuguese Variety Identification with Cross-Domain Approaches},
  journal={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={39},
  number={24},
  pages={25192--25200},
  year={2025},
  doi={10.1609/aaai.v39i24.34705},
  author={Sousa, Hugo and Almeida, Rúben and Silvano, Purificação and Cantante, Inês and Campos, Ricardo and Jorge, Alípio}
}
```