---
datasets:
- boun-tabi/squad_tr
language:
- tr
metrics:
- exact_match
- f1
library_name: transformers
base_model:
- dbmdz/distilbert-base-turkish-cased
pipeline_tag: question-answering
tags:
- Turkish Question-Answering
---
# 🇹🇷 DistilBERTurkQA for Turkish Question-Answering
This model is a fine-tuned version of DistilBERTurk Base on SQuAD-TR, a machine-translated Turkish version of the original SQuAD 2.0 dataset. For details about the dataset, methodology, and experiments, refer to the corresponding research paper.
## Citation
If you use this model in your research or application, please cite the following paper:
```bibtex
@article{incidelen8performance,
  title={Performance Evaluation of Transformer-Based Pre-Trained Language Models for Turkish Question-Answering},
  author={{\.I}ncidelen, Mert and Aydo{\u{g}}an, Murat},
  journal={Black Sea Journal of Engineering and Science},
  volume={8},
  number={2},
  pages={15--16},
  publisher={U{\u{g}}ur {\c{S}}EN}
}
```
## How to Use
You can use the model directly with 🤗 Transformers:
```python
from transformers import pipeline

# Load the fine-tuned model into a question-answering pipeline
qa = pipeline(
    "question-answering",
    model="incidelen/distilbert-base-turkish-cased-qa"
)

result = qa(
    question="...",
    context="..."
)
print(result)
```
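Under the hood, an extractive QA model scores every token position as a possible answer start and end, and the pipeline returns the highest-scoring valid span from the context. The sketch below illustrates that span-selection step on hand-written dummy logits (the token list and scores are illustrative assumptions, not actual model output):

```python
def best_span(start_logits, end_logits, max_answer_len=15):
    """Pick the (start, end) pair maximizing start_logits[s] + end_logits[e],
    subject to s <= e and a maximum answer length."""
    best = (0, 0)
    best_score = float("-inf")
    for s, s_score in enumerate(start_logits):
        for e in range(s, min(s + max_answer_len, len(end_logits))):
            score = s_score + end_logits[e]
            if score > best_score:
                best_score = score
                best = (s, e)
    return best

# Dummy example: tokens of a Turkish context with made-up logits
tokens = ["Mustafa", "Kemal", "Atatürk", "1881", "yılında", "doğdu"]
start_logits = [0.1, 0.2, 0.1, 3.0, 0.5, 0.1]
end_logits = [0.1, 0.1, 0.2, 2.5, 0.3, 0.1]

s, e = best_span(start_logits, end_logits)
print(" ".join(tokens[s:e + 1]))  # the highest-scoring span: "1881"
```

The real pipeline additionally maps token indices back to character offsets in the original context and filters out invalid spans (e.g. those overlapping the question), but the core argmax-over-span-scores logic is the same.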
## Evaluation Results
| Exact Match (%) | F1 Score (%) |
|---|---|
| 41.26 | 55.75 |
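Exact Match checks whether the normalized prediction equals the normalized gold answer, while F1 measures token overlap between the two. A minimal sketch of SQuAD-style scoring (lowercasing, punctuation stripping, and whitespace collapsing as the normalization; this is a simplified re-implementation, not the official evaluation script):

```python
import string
from collections import Counter

def normalize(text):
    # SQuAD-style normalization: lowercase, drop punctuation, collapse whitespace
    text = "".join(ch for ch in text.lower() if ch not in string.punctuation)
    return " ".join(text.split())

def exact_match(pred, gold):
    # 1.0 if the normalized strings are identical, else 0.0
    return float(normalize(pred) == normalize(gold))

def f1_score(pred, gold):
    # Token-level F1 between prediction and gold answer
    pred_toks = normalize(pred).split()
    gold_toks = normalize(gold).split()
    if not pred_toks or not gold_toks:
        return float(pred_toks == gold_toks)
    common = Counter(pred_toks) & Counter(gold_toks)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_toks)
    recall = num_same / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Ankara", "ankara."))       # 1.0 after normalization
print(f1_score("Ankara Türkiye", "Ankara"))   # partial credit for token overlap
```

Scores are averaged over the dataset (and, in multi-reference settings, taken as the maximum over gold answers per question) to produce the percentages reported above.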
## Acknowledgments
Special thanks to maydogan for their contributions and support.