Quantization made by Richard Erkhov.

# gpt2-large-babi - bnb 4bits

- Model creator: https://huggingface.co/p208p2002/
- Original model: https://huggingface.co/p208p2002/gpt2-large-babi/

## Original model description
datasets:
- facebook/babi_qa
Fine-tuned and evaluated a transformer model on Facebook's bAbI tasks.

Paper: Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks

Training code: p208p2002/bAbi-tasks-with-transformer-model
| task_no | task_name | score (%) |
|---|---|---|
| qa1 | single-supporting-fact | 100 |
| qa2 | two-supporting-facts | 99.4 |
| qa3 | three-supporting-facts | 62.0 |
| qa4 | two-arg-relations | 100 |
| qa5 | three-arg-relations | 96.5 |
| qa6 | yes-no-questions | 100 |
| qa7 | counting | 100 |
| qa8 | lists-sets | 99.8 |
| qa9 | simple-negation | 100 |
| qa10 | indefinite-knowledge | 100 |
| qa11 | basic-coreference | 100 |
| qa12 | conjunction | 100 |
| qa13 | compound-coreference | 100 |
| qa14 | time-reasoning | 100 |
| qa15 | basic-deduction | 100 |
| qa16 | basic-induction | 100 |
| qa17 | positional-reasoning | 100 |
| qa18 | size-reasoning | 100 |
| qa19 | path-finding | 100 |
| qa20 | agents-motivations | 100 |
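For a single summary number, a macro-average of the per-task scores in the table above can be computed directly (the list below simply copies the table's values; the average is not reported in the original card):

```python
# Per-task scores copied from the table above, in order qa1 .. qa20.
scores = [100, 99.4, 62.0, 100, 96.5, 100, 100, 99.8, 100, 100,
          100, 100, 100, 100, 100, 100, 100, 100, 100, 100]

# Macro-average: unweighted mean over the 20 bAbI tasks.
macro_avg = sum(scores) / len(scores)
print(f"macro-average score: {macro_avg:.1f}")
```

This works out to roughly 97.9, dragged down mainly by qa3 (three-supporting-facts).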
# Please use the following template

```python
INPUT_TEMPLATE = """
Context:
{context}
Question:
{question}
Answer:
{answer}
"""

input_text = INPUT_TEMPLATE.format_map({
    "context": context,
    "question": question,
    "answer": answer
}).strip()
```
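As a concrete illustration of the template above (the story and question here are made-up bAbI-style values, not taken from the dataset), the answer slot is left empty at inference time so the model completes it:

```python
INPUT_TEMPLATE = """
Context:
{context}
Question:
{question}
Answer:
{answer}
"""

# Illustrative bAbI-style story and question (hypothetical values).
context = "Mary moved to the bathroom. John went to the hallway."
question = "Where is Mary?"

# Leave the answer empty when prompting; the model generates the continuation.
input_text = INPUT_TEMPLATE.format_map({
    "context": context,
    "question": question,
    "answer": "",
}).strip()

print(input_text)
```

The resulting prompt ends with the bare `Answer:` line, which is the cue the fine-tuned model was trained to continue.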