---
license: apache-2.0
datasets:
- custom-dataset
language:
- en
base_model:
- facebook/blenderbot-400M-distill
pipeline_tag: text-generation
library_name: transformers
tags:
- BlenderBot
- Conversational
- Fine-tuned
- Text Generation
metrics:
- name: BLEU
  value: 0.1687
  type: BLEU
- name: ROUGE
  value:
    rouge1: 0.4078
    rouge2: 0.1912
    rougeL: 0.3418
    rougeLsum: 0.3401
  type: ROUGE
model-index:
- name: TalkGPT
  results:
  - task:
      type: text-generation
    dataset:
      name: custom-dataset
      type: text
    metrics:
    - name: BLEU
      value: 0.1687
      type: BLEU
    - name: ROUGE
      value:
        rouge1: 0.4078
        rouge2: 0.1912
        rougeL: 0.3418
        rougeLsum: 0.3401
      type: ROUGE
    source:
      name: Self-evaluated
      url: https://huggingface.co/12sciencejnv/TalkGPT
---
# TalkGPT
TalkGPT is a fine-tuned version of facebook/blenderbot-400M-distill, trained on a custom conversational dataset. It is designed to generate conversational responses in English.
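Below is a minimal usage sketch with `transformers`. It assumes the fine-tuned checkpoint keeps the BlenderBot encoder-decoder architecture of the base model and is published under the repo id from the evaluation source URL (`12sciencejnv/TalkGPT`); the prompt and generation settings are illustrative only.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Repo id taken from the evaluation source URL above; adjust if the checkpoint
# is hosted elsewhere.
model_id = "12sciencejnv/TalkGPT"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

prompt = "Hello, how has your day been so far?"
inputs = tokenizer(prompt, return_tensors="pt")

# Generation settings are illustrative, not taken from the card.
reply_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(reply_ids[0], skip_special_tokens=True))
```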
## License
Apache 2.0
## Datasets
The model is fine-tuned on a custom dataset consisting of conversational dialogues.
## Language
English
## Metrics
- BLEU: 0.1687 (calculated on the validation set)
- ROUGE-1: 0.4078
- ROUGE-2: 0.1912
- ROUGE-L: 0.3418
- ROUGE-Lsum: 0.3401
- Training Loss: 0.2460 (final training loss after fine-tuning)
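The exact evaluation script is not documented in this card. The sketch below shows how BLEU and ROUGE scores in this format can be computed with the Hugging Face `evaluate` library; the prediction/reference strings are placeholders, not samples from the actual validation set.

```python
import evaluate

# Placeholder prediction/reference pairs; the real validation set is not public.
predictions = ["I am doing well, thanks for asking."]
references = ["I'm doing great, thank you!"]

bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")

print(bleu.compute(predictions=predictions, references=references))   # -> {"bleu": ...}
print(rouge.compute(predictions=predictions, references=references))  # -> rouge1 / rouge2 / rougeL / rougeLsum
```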
## Base Model
The model is based on the BlenderBot-400M-distill architecture by Facebook AI.
## Pipeline Tag
`text-generation`
## Library Name
`transformers`
## Tags
BlenderBot, Conversational, Fine-tuned, Text Generation
## Eval Results
The model achieved the following results on the validation set:
- BLEU: 0.1687
- ROUGE-1: 0.4078
- ROUGE-2: 0.1912
- ROUGE-L: 0.3418
- ROUGE-Lsum: 0.3401
- Training Loss: 0.2460 after 3 epochs of fine-tuning.
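The training procedure itself is not described in this card. The following is a minimal fine-tuning sketch under assumed settings: the dialogue pairs and the `prompt`/`response` column names are placeholders for the unreleased custom dataset, and the batch size and learning rate are illustrative. Only the base model and the 3-epoch count come from the card.

```python
from datasets import Dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

base_id = "facebook/blenderbot-400M-distill"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForSeq2SeqLM.from_pretrained(base_id)

# Placeholder dialogue pairs standing in for the (unreleased) custom dataset;
# the "prompt"/"response" column names are assumptions.
train_ds = Dataset.from_dict({
    "prompt": ["Hello, how are you?", "What do you like to do on weekends?"],
    "response": ["I'm doing well, thank you for asking!", "I enjoy reading and going for walks."],
})

def preprocess(batch):
    # Tokenize the user turn as the encoder input and the reply as the target.
    model_inputs = tokenizer(batch["prompt"], truncation=True, max_length=128)
    labels = tokenizer(text_target=batch["response"], truncation=True, max_length=128)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized_train = train_ds.map(preprocess, batched=True, remove_columns=train_ds.column_names)

args = Seq2SeqTrainingArguments(
    output_dir="talkgpt-finetune",
    num_train_epochs=3,               # the 3 epochs reported above
    per_device_train_batch_size=8,    # illustrative; not stated in the card
    learning_rate=5e-5,               # illustrative; not stated in the card
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized_train,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```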