---
dataset_info:
  features:
    - name: index
      dtype: int64
    - name: news_id
      dtype: int64
    - name: sentence
      dtype: string
    - name: domain
      dtype: string
    - name: label_0
      dtype: int64
    - name: label_1
      dtype: int64
    - name: label_2
      dtype: int64
    - name: label
      dtype: int64
  splits:
    - name: train
      num_bytes: 1301927
      num_examples: 9800
    - name: validation
      num_bytes: 264729
      num_examples: 2000
    - name: test
      num_bytes: 279521
      num_examples: 2073
  download_size: 739103
  dataset_size: 1846177
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
license: cc-by-nc-sa-4.0
task_categories:
  - text-classification
language:
  - ro
tags:
  - satire
  - sentence-level
  - news
  - nlp
pretty_name: SeLeRoSa processed
size_categories:
  - 10K<n<100K
---

# SeLeRoSa - Sentence-Level Romanian Satire Detection Dataset

DOI: [10.5281/zenodo.15689794](https://doi.org/10.5281/zenodo.15689794)

## Abstract

Satire, irony, and sarcasm are techniques that can disseminate untruthful yet plausible information in the news and on social media, akin to fake news. These techniques can be applied at a more granular level, allowing satirical information to be incorporated into news articles. In this paper, we introduce the first sentence-level dataset for Romanian satire detection in news articles, called SeLeRoSa. The dataset comprises 13,873 manually annotated sentences spanning various domains, including social issues, IT, science, and movies. With the rise and recent progress of large language models (LLMs) in the natural language processing literature, LLMs have demonstrated enhanced capabilities to tackle various tasks in zero-shot settings. We evaluate multiple baseline models based on LLMs in both zero-shot and fine-tuning settings, as well as baseline transformer-based models, such as Romanian BERT. Our findings reveal the current limitations of these models in the sentence-level satire detection task, paving the way for new research directions.

## Description

This is the anonymized and processed version of the SeLeRoSa dataset.

The anonymization stage used spaCy to replace named entities with placeholder tokens:

- Persons → `<PERSON>`
- Nationalities, religious, and political groups → `<NORP>`
- Geopolitical entities → `<GPE>`
- Organizations → `<ORG>`
- Locations → `<LOC>`
- Facilities → `<FAC>`

In addition, URLs were replaced with the `@URL` tag, and frequently occurring entities that NER missed were filtered out manually.
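For illustration, the snippet below is a minimal sketch of this kind of NER-based replacement with spaCy. It is not the authors' exact pipeline: the `ro_core_news_sm` model name, the entity label set (label names differ between spaCy pipelines), the URL regex, and the example sentence are all assumptions made for the example.

```python
import re
import spacy

# Assumption: a Romanian spaCy pipeline with an NER component.
nlp = spacy.load("ro_core_news_sm")

# Entity types replaced with placeholder tokens (names may differ per pipeline).
ENTITY_TAGS = {"PERSON", "NORP", "GPE", "ORG", "LOC", "FAC"}
URL_RE = re.compile(r"https?://\S+")  # simple URL pattern, for illustration only

def anonymize(text: str) -> str:
    """Replace selected named entities with <LABEL> tokens and URLs with @URL."""
    doc = nlp(text)
    pieces, last = [], 0
    for ent in doc.ents:
        if ent.label_ in ENTITY_TAGS:
            pieces.append(text[last:ent.start_char])  # keep text before the entity
            pieces.append(f"<{ent.label_}>")          # swap the entity for its tag
            last = ent.end_char
    pieces.append(text[last:])
    return URL_RE.sub("@URL", "".join(pieces))

print(anonymize("Ion Popescu lucrează la Google în București. Vezi https://example.com"))
```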

The processing step involved the following (see the sketch after this list):

- Convert the text to lowercase
- Fix diacritics according to the Romanian standard: replace the cedilla forms "ţ" and "ş" with the comma-below forms "ț" and "ș"
- Tokenize the text using spaCy's tokenizer
- Remove stop words
- Remove punctuation marks
- Remove short tokens (fewer than 3 characters)
- Lemmatize the text using spaCy's Romanian lemmatizer
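A minimal sketch of these steps, assuming spaCy's `ro_core_news_sm` model (the exact model used by the authors is not stated here) and an illustrative input sentence:

```python
import spacy

# Assumption: the small Romanian spaCy pipeline.
nlp = spacy.load("ro_core_news_sm")

# Map cedilla-based diacritics to the standard comma-below forms.
DIACRITICS = str.maketrans({"ţ": "ț", "ş": "ș", "Ţ": "Ț", "Ş": "Ș"})

def preprocess(text: str) -> str:
    text = text.lower().translate(DIACRITICS)  # lowercase + fix diacritics
    doc = nlp(text)                            # tokenize with spaCy
    lemmas = [
        tok.lemma_                             # lemmatize the remaining tokens
        for tok in doc
        if not tok.is_stop                     # remove stop words
        and not tok.is_punct                   # remove punctuation
        and len(tok.text) >= 3                 # remove tokens shorter than 3 characters
    ]
    return " ".join(lemmas)

print(preprocess("Aceşti politicieni au vizitat oraşul în 2020."))
```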

For the unprocessed dataset (anonymized only), see https://huggingface.co/datasets/unstpb-nlp/SeLeRoSa.

The code used to process this dataset, as well as the experiments, can be found on GitHub.

## Usage

First, install the following dependencies:

```bash
pip install datasets torch
```

Example of loading the train split in a PyTorch `DataLoader`:

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

dataset = load_dataset("unstpb-nlp/SeLeRoSa-proc", split="train")

dataloader = DataLoader(dataset)
for sample in dataloader:
    print(sample)
```
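For batched iteration, a `batch_size` can be passed to the `DataLoader`. The value below is only illustrative; with PyTorch's default collation, string columns come back as lists and integer columns as tensors:

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

dataset = load_dataset("unstpb-nlp/SeLeRoSa-proc", split="train")

# Illustrative batch size.
dataloader = DataLoader(dataset, batch_size=32)
for batch in dataloader:
    print(batch["sentence"][:2])  # list of processed sentences
    print(batch["label"][:2])     # tensor of 0/1 labels
    break
```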

## Dataset structure

The following columns are available for every sample:

| Field | Data Type | Description |
| --- | --- | --- |
| `index` | int | A unique identifier for every sentence |
| `news_id` | int | A unique identifier for the source news article of the current sentence |
| `sentence` | string | The processed and anonymized sentence |
| `domain` | string | The domain of the sentence; one of `life-death`, `it-stiinta`, `cronica-de-film` |
| `label_0` | int | The label given by the first annotator: 0 - regular, 1 - satirical |
| `label_1` | int | The label given by the second annotator: 0 - regular, 1 - satirical |
| `label_2` | int | The label given by the third annotator: 0 - regular, 1 - satirical |
| `label` | int | The label aggregated by majority voting; use this for training and evaluation: 0 - regular, 1 - satirical |
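Since `label` is the majority vote over `label_0`, `label_1`, and `label_2`, it can be recomputed from the per-annotator columns, for example to inspect annotator agreement. A small sketch: only the column names come from the table above; the split choice and the derived column names are illustrative.

```python
from datasets import load_dataset

ds = load_dataset("unstpb-nlp/SeLeRoSa-proc", split="validation")

def vote_stats(example):
    votes = example["label_0"] + example["label_1"] + example["label_2"]
    return {
        "majority": int(votes >= 2),        # majority vote of the three annotators
        "unanimous": votes in (0, 3),       # all three annotators agree
    }

ds = ds.map(vote_stats)

# The recomputed majority vote should match the released `label` column.
match = sum(ex["majority"] == ex["label"] for ex in ds) / len(ds)
print(f"majority == label for {match:.1%} of sentences")
print(f"unanimous annotations: {sum(ex['unanimous'] for ex in ds)}")
```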

## Citation

If you use this dataset in your research, please cite as follows:

```bibtex
@software{smadu_2025_15689794,
  author       = {Smădu, Răzvan-Alexandru and
                  Iuga, Andreea and
                  Cercel, Dumitru-Clementin and
                  Pop, Florin},
  title        = {SeLeRoSa - Sentence-Level Romanian Satire Detection Dataset},
  month        = jun,
  year         = 2025,
  publisher    = {Zenodo},
  version      = {v1.0.0},
  doi          = {10.5281/zenodo.15689794},
  url          = {https://doi.org/10.5281/zenodo.15689794},
  swhid        = {swh:1:dir:5c0e7a00a415d4346ee7a11f18c814ef3c3f5d88;origin=https://doi.org/10.5281/zenodo.15689793;visit=swh:1:snp:e8bb60f04bd1b01d5e3ac68d7db37a3d28ab7a22;anchor=swh:1:rel:ff1b46be53b410c9696b39aa7f24a3bd387be547;path=razvanalex-phd-SeLeRoSa-83693df},
}
```