
Dataset Card for PLLuMIC

PLLuMIC - Polish Large Language Model (PLLuM) Instruction Corpus

Dataset Details

Dataset Description

We release the first representative subset of the PLLuM Instruction Corpus (PLLuMIC), which we believe will be useful in guiding and planning the development of similar LLM datasets. PLLuMIC is a hand-crafted set of Polish-language instructions for LLM fine-tuning, developed in line with the annotation guidelines and covering a functional typology. Each instruction is designed to be unique in some way: no two samples, especially within a given subtype, are the same. Every row offers new insight into the dataset, making the corpus well suited to extensive analysis. The methodology is described in more detail in the paper titled The PLLuM Instruction Corpus (linked below). We plan regular updates and significant extensions of the corpus.

  • Curated by: PELCRA (Polish and English Language Corpora for Research and Applications) Team
  • Language(s) (NLP): Polish
  • License: CC-BY-SA-4.0

Dataset Sources

Uses

Direct Use

We believe the dataset will be useful in guiding and planning the development of similar, larger LLM datasets. This first sample is intended as representative guidance on how to structure and build your own dataset.

It is also a solid foundation for synthetic extensions that combine high quality, diversity, and scale. We are currently working on such an extension ourselves and plan to make it available alongside this organic component.

Out-of-Scope Use

The current scale of the dataset is not sufficient for full LLM fine-tuning on its own. However, with as few as 10k synthetic samples built around the corpus, one can already expect very interesting results. We will provide more details (and data) on this topic in future updates.

Dataset Structure

Statistics

Total instructions: 1,278

Both single-turn and multi-turn instructions are included.

Type & Thematic distributions

| Type | Number of samples |
| --- | --- |
| Generation | 392 |
| Adversarial | 125 |
| Dialogue | 124 |
| NLP | 102 |
| Data manipulation | 88 |
| Formatting | 87 |
| Knowledge (QA) | 80 |
| Extraction | 71 |
| Identity | 68 |
| Translation | 61 |
| CoT | 50 |
| Programming | 30 |
| Topic | Number of samples |
| --- | --- |
| Languages | 185 |
| Society | 169 |
| Computer science | 163 |
| Technology | 87 |
| Entertainment | 85 |
| Biology | 78 |
| Other | 73 |
| Home | 60 |
| Geography | 59 |
| Culture | 55 |
| Culinary | 52 |
| Literature | 50 |
| History | 48 |
| Politics | 42 |
| Medicine | 36 |
| Law and administration | 31 |
| Sports | 26 |
| Travel | 25 |
| Industry | 20 |
| Economy | 19 |
| Psychology | 19 |
| Mathematics | 15 |
| Art | 14 |
| Physics | 8 |
| Chemistry | 7 |
| Religion | 7 |
| Automotive | 6 |
| Philosophy | 5 |
| Astronomy | 5 |
| Ecology | 4 |
| Hobby | 4 |

Data format explanation

The PLLuMIC dataset is distributed as a JSON file containing conversations between a user and an AI assistant. Each conversation is a JSON object described by the following fields:

Top-Level Fields

  • dataset_name: Name of the dataset (PLLuMIC).
  • dataset_source: Source organization (CLARIN-BIZ-bis).
  • conv_id: Unique identifier for the conversation (e.g., 3242183cbce2).
  • messages: Array of dialogue messages (user/assistant/system exchanges).

Message Object Fields

Each entry in messages contains:

  • instruction_id: Unique ID for the instruction/task (e.g., 2a07c2eca0cb).
  • seq: Sequence number (-1 for system; 0, 1, 2, … for user/assistant turns).
  • role: Speaker role (system, user, or assistant).
  • content: Text of the message (empty for some system prompts).
  • type: Interaction type (e.g., Dialog, Generation).
  • subtype: List of task subtypes (e.g., [System prompt, Text simplification]).
  • topic: List of relevant topics (e.g., [Geography]).
  • language: Language code (e.g., pol for Polish).
  • source: References (e.g., Wikipedia URLs).
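To make the schema concrete, here is a minimal, hypothetical conversation row assembled in Python. The IDs reuse the example values quoted in the field descriptions above; the message contents and the source URL are illustrative placeholders, not real corpus entries:

```python
import json

# Hypothetical conversation row following the PLLuMIC schema described above.
# IDs, message texts, and the source URL are illustrative placeholders.
row = {
    "dataset_name": "PLLuMIC",
    "dataset_source": "CLARIN-BIZ-bis",
    "conv_id": "3242183cbce2",
    "messages": [
        {
            "instruction_id": "2a07c2eca0cb",
            "seq": -1,  # system prompt precedes the numbered turns
            "role": "system",
            "content": "",
            "type": "Dialog",
            "subtype": ["System prompt"],
            "topic": ["Geography"],
            "language": "pol",
            "source": [],
        },
        {
            "instruction_id": "2a07c2eca0cb",
            "seq": 0,  # first user turn
            "role": "user",
            "content": "Wymień trzy najdłuższe rzeki w Polsce.",
            "type": "Dialog",
            "subtype": [],
            "topic": ["Geography"],
            "language": "pol",
            "source": [],
        },
        {
            "instruction_id": "2a07c2eca0cb",
            "seq": 1,  # assistant reply
            "role": "assistant",
            "content": "Trzy najdłuższe rzeki w Polsce to Wisła, Odra i Warta.",
            "type": "Dialog",
            "subtype": [],
            "topic": ["Geography"],
            "language": "pol",
            "source": ["https://pl.wikipedia.org/wiki/Wis%C5%82a"],
        },
    ],
}

# Round-trip through JSON and sanity-check the structure.
parsed = json.loads(json.dumps(row, ensure_ascii=False))
roles = [m["role"] for m in parsed["messages"]]
seqs = [m["seq"] for m in parsed["messages"]]
assert roles == ["system", "user", "assistant"]
assert seqs == [-1, 0, 1]
```

A real row from the distributed file has exactly this shape, so the same checks can be reused when iterating over the full corpus.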

Dataset Creation

Curation Rationale

Most instruction-tuning datasets for LLMs are either private or poorly documented, making it hard to understand how models are trained or to build comparable resources. Even when public, such datasets often mix data from many sources without clear structure or balance.

There’s also little research on how different instruction types shape model behavior, and while distilling data from strong LLMs is common, it doesn’t always transfer well across languages and cultures.

That’s why we created this dataset: to offer a transparent, well-documented, and balanced resource for instruction tuning, designed with linguistic and cultural diversity in mind. The results and findings are described in detail in the paper linked above.

Annotation

Annotation process

All instructions were annotated by professional annotators. Each sample was developed in accordance with comprehensive annotation guidelines and subsequently reviewed by a senior annotator to ensure full compliance with quality standards. The annotation process followed a functional typology designed to encompass key areas of model competence.

Who are the annotators?

All annotators (over 50 in total) were university graduates with at least a bachelor’s or master’s degree in linguistics or other humanities, with the exception of the annotators of technical instructions, who held a university degree in computer science. All super-annotators held a PhD.

Citation

@misc{pęzik2025plluminstructioncorpus,
      title={The PLLuM Instruction Corpus}, 
      author={Piotr Pęzik and Filip Żarnecki and Konrad Kaczyński and Anna Cichosz and Zuzanna Deckert and Monika Garnys and Izabela Grabarczyk and Wojciech Janowski and Sylwia Karasińska and Aleksandra Kujawiak and Piotr Misztela and Maria Szymańska and Karolina Walkusz and Igor Siek and Maciej Chrabąszcz and Anna Kołos and Agnieszka Karlińska and Karolina Seweryn and Aleksandra Krasnodębska and Paula Betscher and Zofia Cieślińska and Katarzyna Kowol and Artur Wilczek and Maciej Trzciński and Katarzyna Dziewulska and Roman Roszko and Tomasz Bernaś and Jurgita Vaičenonienė and Danuta Roszko and Paweł Levchuk and Paweł Kowalski and Irena Prawdzic-Jankowska and Marek Kozłowski and Sławomir Dadas and Rafał Poświata and Alina Wróblewska and Katarzyna Krasnowska-Kieraś and Maciej Ogrodniczuk and Michał Rudolf and Piotr Rybak and Karolina Saputa and Joanna Wołoszyn and Marcin Oleksy and Bartłomiej Koptyra and Teddy Ferdinan and Stanisław Woźniak and Maciej Piasecki and Paweł Walkowiak and Konrad Wojtasik and Arkadiusz Janz and Przemysław Kazienko and Julia Moska and Jan Kocoń},
      year={2025},
      eprint={2511.17161},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2511.17161}, 
}