✨DarkNeuron-AI/darkneuron-chat-v1.1

DarkNeuron-Chat v1.1 is a chatbot designed for basic, friendly conversations. It provides clear and concise responses and is suitable for general use.


👍Model Overview

  • Model type: GPT-based causal language model (see the quick check below)
  • Model size: ~81.9M parameters (F32, Safetensors)
  • Purpose: Basic conversational chatbot
  • Training data: Fine-tuned on the Persona-Chat and Bot-Dialog datasets
  • Intended audience: General users, students, hobbyists, and researchers interested in chatbot interactions
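If you want to verify the architecture and parameter count locally, here is a minimal sketch, assuming the checkpoint loads through the standard AutoConfig/AutoModelForCausalLM interfaces:

from transformers import AutoConfig, AutoModelForCausalLM

model_name = "DarkNeuron-AI/darkneuron-chat-v1.1"

# Fetch only the config first to inspect the underlying architecture
config = AutoConfig.from_pretrained(model_name)
print("Architecture:", config.model_type)

# Load the weights and count parameters (~82M for this checkpoint)
model = AutoModelForCausalLM.from_pretrained(model_name)
print("Parameters:", sum(p.numel() for p in model.parameters()))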

🌟Installation

Install or upgrade Transformers and PyTorch:

pip install --upgrade transformers torch
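An optional sanity check after installing; this just prints the installed versions and whether a GPU is visible:

import torch
import transformers

# Confirm the installed versions and whether CUDA is available
print("transformers:", transformers.__version__)
print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())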

👽Example Usage

from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch, gc

# Load tokenizer and model
model_name = "DarkNeuron-AI/darkneuron-chat-v1.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Use GPU if available
device = 0 if torch.cuda.is_available() else -1

# Create chatbot pipeline
chatbot = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    device=device,
    return_full_text=False
)

# Optional: free cached GPU memory before starting the chat
gc.collect()
if torch.cuda.is_available():
    torch.cuda.empty_cache()

# Interactive chat loop
print("Chatbot ready! Type 'exit' or 'quit' to stop.\n")

while True:
    user_input = input("User: ")
    if user_input.lower() in ["exit", "quit"]:
        print("Chat ended.")
        break

    prompt = f"User: {user_input}\nBot:"
    # Generate up to 100 new tokens beyond the prompt
    response = chatbot(
        prompt,
        max_new_tokens=100,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
        num_return_sequences=1
    )
    print("Bot:", response[0]["generated_text"].strip())
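The loop above treats every turn independently. Below is a minimal sketch of a multi-turn variant that reuses the chatbot pipeline from above and keeps a rolling history in the prompt; the exact dialogue template the model was fine-tuned on is an assumption here, so adjust it if responses degrade:

history = []

while True:
    user_input = input("User: ")
    if user_input.lower() in ["exit", "quit"]:
        print("Chat ended.")
        break

    history.append(f"User: {user_input}")
    # Keep only the last few turns so the prompt stays within the context window
    prompt = "\n".join(history[-6:]) + "\nBot:"

    response = chatbot(
        prompt,
        max_new_tokens=100,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
        num_return_sequences=1
    )
    reply = response[0]["generated_text"].strip()
    # Cut the reply at the first "User:" marker in case the model keeps generating turns
    reply = reply.split("User:")[0].strip()
    print("Bot:", reply)
    history.append(f"Bot: {reply}")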

Developed with ❤️ by DarkNeuronAI
