# Fine-tuned Question Answering Model

This model is a fine-tuned version of `deepset/roberta-base-squad2` for question answering tasks.
## Model Description

This model has been fine-tuned on a custom QA dataset to improve its performance on specific question-answering tasks.
## How to use

You can use this model directly with the Hugging Face pipeline API:
```python
from transformers import pipeline

# Load the model
qa_pipeline = pipeline(
    "question-answering",
    model="takumi123xxx/qa-model-hello-world"
)

# Use the model
context = "Tokyo is the capital city of Japan. Tokyo was originally a fishing village named Edo."
question = "What was Tokyo originally called?"
result = qa_pipeline(question=question, context=context)

print(f"Answer: {result['answer']}")
print(f"Score: {result['score']}")
```
## Training Data

The model was fine-tuned on a custom dataset containing various question-answer pairs about general knowledge topics.
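The exact dataset is not published with this card. As a rough sketch, a record in the SQuAD-style format that `roberta-base-squad2` expects (a hypothetical example, not taken from the actual training data) might look like:

```python
# Hypothetical SQuAD-style training record; the real custom dataset used
# for this fine-tune is not published with the model card.
context = (
    "Tokyo is the capital city of Japan. "
    "Tokyo was originally a fishing village named Edo."
)

record = {
    "id": "qa-0001",
    "context": context,
    "question": "What was Tokyo originally called?",
    "answers": {
        "text": ["Edo"],
        # character offset of the answer span within the context
        "answer_start": [context.index("Edo")],
    },
}

# Sanity check: the offset really points at the answer text
start = record["answers"]["answer_start"][0]
assert context[start:start + len("Edo")] == "Edo"
```

The `answer_start` character offset is what extractive QA training uses to map the gold answer back to token positions in the context.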
## Inference API

This model supports the Hugging Face Inference API. You can use it directly through the API widget on this page or programmatically:
```python
from huggingface_hub import InferenceClient

client = InferenceClient()
result = client.question_answering(
    question="What was Tokyo originally called?",
    context="Tokyo is the capital city of Japan. Tokyo was originally a fishing village named Edo.",
    model="takumi123xxx/qa-model-hello-world"
)

# The client returns an object with answer, score, start, and end fields
print(f"Answer: {result.answer}")
print(f"Score: {result.score}")
```
## Performance

The model achieves high confidence scores (>0.99) on the training examples.
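A score like this can be checked with a small helper over pipeline outputs. The sketch below uses hardcoded stand-ins for `qa_pipeline(...)` results (the answer/score values are illustrative, not measured); with the model loaded, each dict would come from a real pipeline call:

```python
# Sketch: aggregate exact-match rate and mean confidence over pipeline-style
# outputs. The `results` dicts mimic the {"answer", "score"} keys returned by
# a question-answering pipeline; values here are illustrative placeholders.
def summarize(results, gold_answers):
    """Return (exact-match rate, mean confidence score)."""
    matches = [r["answer"].strip() == g for r, g in zip(results, gold_answers)]
    mean_score = sum(r["score"] for r in results) / len(results)
    return sum(matches) / len(matches), mean_score

results = [
    {"answer": "Edo", "score": 0.997},
    {"answer": "Japan", "score": 0.991},
]
em, mean_score = summarize(results, ["Edo", "Japan"])
print(f"Exact match: {em:.2f}, mean score: {mean_score:.3f}")
```

Note that scores reported on training examples will generally be higher than on unseen data, so a held-out set is the better place to run this check.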