---
annotations_creators:
  - Duygu Altinok
language:
  - tr
license:
  - cc-by-sa-4.0
multilinguality:
  - monolingual
size_categories:
  - 100K<n<1M
source_datasets:
  - original
pretty_name: AkademikDerlem
config_names:
  - makaleler
  - akademik-ozetler
  - medikal-makaleler
  - medikal-ozetler
  - bilkent-writings
dataset_info:
  - config_name: makaleler
    features:
      - name: dergi_ismi
        dtype: string
      - name: title
        dtype: string
      - name: url
        dtype: string
      - name: pdf_url
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 2817356
        num_examples: 128339
  - config_name: akademik-ozetler
    features:
      - name: dergi_ismi
        dtype: string
      - name: title
        dtype: string
      - name: url
        dtype: string
      - name: pdf_url
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 900364
        num_examples: 497261
  - config_name: medikal-makaleler
    features:
      - name: dergi_ismi
        dtype: string
      - name: title
        dtype: string
      - name: url
        dtype: string
      - name: pdf_url
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 116892
        num_examples: 14993
  - config_name: medikal-ozetler
    features:
      - name: dergi_ismi
        dtype: string
      - name: title
        dtype: string
      - name: url
        dtype: string
      - name: pdf_url
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 34944
        num_examples: 21065
  - config_name: bilkent-writings
    features:
      - name: title
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 29712
        num_examples: 6451
configs:
  - config_name: makaleler
    data_files:
      - split: train
        path: makaleler/*jsonl
  - config_name: akademik-ozetler
    data_files:
      - split: train
        path: akademik-ozetler/*jsonl
  - config_name: medikal-makaleler
    data_files:
      - split: train
        path: medikal-makaleler/*jsonl
  - config_name: medikal-ozetler
    data_files:
      - split: train
        path: medikal-ozetler/*jsonl
  - config_name: bilkent-writings
    data_files:
      - split: train
        path: bilkent/*jsonl
task_categories:
  - fill-mask
  - text-generation
---

# Dataset Card for AkademikDerlem

AkademikDerlem is a scientific text corpus for Turkish, gathered from various academic publication websites.

This corpus is part of the large-scale Turkish corpus Bella Turca. For more details about Bella Turca, please refer to the publication.

This collection is made up of five datasets: Articles, Academic-Abstracts, Medical-Articles, Medical-Abstracts, and Bilkent-Writings. The Bilkent-Writings dataset comes from creative writings produced in the Turkish 101 and Turkish 102 courses at Bilkent University between 2014 and 2018.

The other four datasets were collected from various sources. The Academic-Abstracts dataset, for example, was compiled from two main resources: YÖK Açık Erişim and Dergipark. Both YÖK and TÜBİTAK-Dergipark are government-supported organizations that provide access to high-quality research papers and journals on their platforms. Size information per subcorpus is as follows:

| Dataset | Num. of instances | Size | Num. of words |
|---|---|---|---|
| Akademik-Ozetler | 497,261 | 880M | 86.97M |
| Makaleler | 128,339 | 2.7G | 322.8M |
| Medikal-Makaleler | 14,993 | 115M | 13.35M |
| Medikal-Ozetler | 21,065 | 35M | 3.31M |
| Bilkent-Writings | 6,451 | 30M | 3.67M |
| **Total** | 668,109 | 3.8G | 430.1M |

The AkademikDerlem collection includes academic texts covering a wide range of topics, from scientific fields to sociological subjects. This variety results in a rich and diverse vocabulary throughout the dataset. Additionally, since these texts are reviewed by journals, peers, and thesis advisors, they maintain a high standard of quality and credibility.
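
Each subcorpus is exposed as its own configuration, so it can be loaded independently with the `datasets` library. The sketch below is a minimal example, assuming the dataset is hosted under the repository id `BayanDuygu/AkademikDerlem`; adjust the id to the actual Hub location if it differs.

```python
from datasets import load_dataset

# Assumed repository id; change it if the dataset lives elsewhere on the Hub.
REPO_ID = "BayanDuygu/AkademikDerlem"

# Load each configuration and report its size and columns.
for config in ["makaleler", "akademik-ozetler", "medikal-makaleler",
               "medikal-ozetler", "bilkent-writings"]:
    ds = load_dataset(REPO_ID, config, split="train")
    print(f"{config}: {ds.num_rows} examples, columns: {ds.column_names}")
```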

## Instances

A typical instance from the dataset looks like:

```json
{
  "dergi_ismi": "Akademik Araştırma Tıp Dergisi",
  "title": "Tiroid Piramidal Lob İnsidansı ve Tiroid Fonksiyonları İle İlişkisi",
  "url": "https://dergipark.org.tr/tr/pub/aatd/issue/48731/541233",
  "pdf_url": "https://dergipark.org.tr/tr/download/article-file/808186",
  "text": "Çalışmamızda ultrasonografi ile piramidal lob sıklığını ve piramidal lob boyutları ile tiroid fonksiyon testleri arasında bir ilişki olup olmadığını tespit etmeyi amaçladık. Gereç ve Yöntem: Ekim 2015 ile ekim 2016 tarihleri arasında tiroid ultrasonografi için başvurmuş, erişkin yaş grubunda toplam 644 olgu çalışmamıza dahil edildi. Bulgular: Olgularımızın %15.2sinde (n=98) piramidal lob mevcuttu. Piramidal lob uzun boyutu ortalama 14.97±5.9 mm, kısa boyutu ortalama 3.99±5.1 mm idi. Piramidal lobu olan hastalar cinsiyete göre değerlendirildiğinde, kadın ve erkek cinsiyet arasında yaş, piramidal lob boyutları ve tiroid fonksiyonları açısından fark yoktu (p>0.05). Piramidal lob boyutları ile tiroid fonksiyon testleri arasında anlamlı bir ilişki yoktu. Tartışma: Piramidal lob sıklığı %15.2 olarak tespit edildi ve her iki cinsiyette benzer oranda görüldü. Piramidal lob boyutları ile tiroid fonksiyonları arasında ilişki saptanmadı."
}
```
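
To inspect a few such instances without downloading an entire subset first, streaming mode can be used. This is a hedged sketch under the same assumption about the repository id (`BayanDuygu/AkademikDerlem`):

```python
from datasets import load_dataset

# Stream the medical-articles subset and print the first few records.
# The repository id is an assumption and may differ on the Hub.
stream = load_dataset("BayanDuygu/AkademikDerlem", "medikal-makaleler",
                      split="train", streaming=True)

for i, example in enumerate(stream):
    print(example["dergi_ismi"], "-", example["title"])
    print(example["text"][:200], "...")
    if i == 2:
        break
```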

## Citation

```bibtex
@InProceedings{10.1007/978-3-031-70563-2_16,
  author="Altinok, Duygu",
  editor="N{\"o}th, Elmar
    and Hor{\'a}k, Ale{\v{s}}
    and Sojka, Petr",
  title="Bella Turca: A Large-Scale Dataset of Diverse Text Sources for Turkish Language Modeling",
  booktitle="Text, Speech, and Dialogue",
  year="2024",
  publisher="Springer Nature Switzerland",
  address="Cham",
  pages="196--213",
  abstract="In recent studies, it has been demonstrated that incorporating diverse training datasets enhances the overall knowledge and generalization capabilities of large-scale language models, especially in cross-domain scenarios. In line with this, we introduce Bella Turca: a comprehensive Turkish text corpus, totaling 265GB, specifically curated for training language models. Bella Turca encompasses 25 distinct subsets of 4 genre, carefully chosen to ensure diversity and high quality. While Turkish is spoken widely across three continents, it suffers from a dearth of robust data resources for language modelling. Existing transformers and language models have primarily relied on repetitive corpora such as OSCAR and/or Wiki, which lack the desired diversity. Our work aims to break free from this monotony by introducing a fresh perspective to Turkish corpora resources. To the best of our knowledge, this release marks the first instance of such a vast and diverse dataset tailored for the Turkish language. Additionally, we contribute to the community by providing the code used in the dataset's construction and cleaning, fostering collaboration and knowledge sharing.",
  isbn="978-3-031-70563-2"
}
```

## Acknowledgments

This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).