---
license: apache-2.0
datasets:
- Pravesh390/country-capital-mixed
language:
- en
library_name: transformers
pipeline_tag: text2text-generation
tags:
- qlora
- flan-t5
- prompt-tuning
- question-answering
- hallucination
- robust-qa
- country-capital
model-index:
- name: flan-t5-qlora-countryqa-v1
  results:
  - task:
      type: text2text-generation
      name: Text2Text Generation
    dataset:
      type: Pravesh390/country-capital-mixed
      name: Country-Capital Mixed QA
    metrics:
    - type: bleu
      value: 92.5
    - type: rouge
      value: 87.3
---
# 🧠 FLAN-T5 QLoRA (Prompt Tuned) - Country Capital QA
This model is a fine-tuned version of `google/flan-t5-base`, adapted with **QLoRA** and **Prompt Tuning** on a hybrid country-capital QA dataset that mixes correct and deliberately incorrect (hallucinated) answers.
## 📌 Highlights
- 🔍 Correct & incorrect (hallucinated) QA pairs
- ⚙️ Trained using 4-bit QLoRA with PEFT
- 🔧 Prompt tuning enables parameter-efficient adaptation
## πŸ—οΈ Training
- Base Model: `google/flan-t5-base`
- Method: **QLoRA** + **Prompt Tuning** with PEFT
- Quantization: 4-bit NF4
- Frameworks: πŸ€— Transformers, PEFT, Accelerate
- Evaluation: BLEU = 92.5, ROUGE = 87.3
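The original training script is not included in this card; the following is a minimal, illustrative sketch of how a 4-bit NF4 QLoRA setup on `google/flan-t5-base` can be configured with PEFT. The LoRA hyperparameters (`r`, `lora_alpha`, `lora_dropout`, `target_modules`) are assumptions, not the values used for this checkpoint, and PEFT's `PromptTuningConfig` would play the analogous role for the prompt-tuning component.

```python
# Illustrative sketch only, not the original training script.
# LoRA hyperparameters below are assumptions.
import torch
from transformers import AutoModelForSeq2SeqLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization of the frozen base model (the "Q" in QLoRA);
# requires bitsandbytes and a CUDA GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForSeq2SeqLM.from_pretrained(
    "google/flan-t5-base", quantization_config=bnb_config
)
model = prepare_model_for_kbit_training(model)

# Small trainable LoRA adapters on the T5 attention projections.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q", "v"],  # query/value projections in T5 blocks
    task_type="SEQ_2_SEQ_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights train
```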
## 📚 Dataset
A mixture of 20 correct and 3 deliberately incorrect (hallucinated) QA samples from `Pravesh390/country-capital-mixed`.
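A quick way to inspect the data is shown below. This is a minimal sketch; the split and column names are assumptions until you check the dataset repository itself.

```python
# Minimal inspection sketch; split/column names may differ from the assumptions here.
from datasets import load_dataset

ds = load_dataset("Pravesh390/country-capital-mixed")
print(ds)                              # show available splits and columns
first_split = next(iter(ds.values()))  # take whichever split is listed first
print(first_split[0])                  # look at a single QA example
```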
## 📦 Usage
```python
from transformers import pipeline

pipe = pipeline("text2text-generation", model="Pravesh390/flan-t5-qlora-countryqa-v1")
result = pipe("What is the capital of Brazil?")
print(result[0]["generated_text"])  # expected answer: Brasília
```
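If the repository hosts only the PEFT adapter weights rather than a merged checkpoint, the pipeline call above may not apply directly; in that case the adapter can be attached to the base model explicitly. The sketch below assumes an adapter-only repository on top of `google/flan-t5-base`.

```python
# Sketch assuming the repo contains a PEFT adapter for google/flan-t5-base.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = PeftModel.from_pretrained(base, "Pravesh390/flan-t5-qlora-countryqa-v1")

inputs = tokenizer("What is the capital of Brazil?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```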
## 📈 Intended Use
- Evaluate hallucinations in QA systems (see the sketch below)
- Robust model development for real-world QA
- Academic research or education
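One simple way to probe for hallucinations is to compare model answers against known-correct references. The sketch below uses a deliberately naive exact-match criterion, and the `gold_pairs` entries are illustrative examples, not items taken from the training set.

```python
# Minimal hallucination check: exact match against known-correct answers.
# gold_pairs is illustrative data, not drawn from the training set.
from transformers import pipeline

pipe = pipeline("text2text-generation", model="Pravesh390/flan-t5-qlora-countryqa-v1")

gold_pairs = [
    ("What is the capital of Brazil?", "Brasília"),
    ("What is the capital of Japan?", "Tokyo"),
]

for question, gold in gold_pairs:
    answer = pipe(question)[0]["generated_text"].strip()
    status = "ok" if answer.lower() == gold.lower() else "possible hallucination"
    print(f"{question} -> {answer} [{status}]")
```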
## 🏷️ License
Apache 2.0. Free for research and commercial use.