# llama-3.1-8b-alpaca-custom

This model is a fine-tuned version of `unsloth/Meta-Llama-3.1-8B-bnb-4bit` on the cleaned Alpaca dataset (`yahma/alpaca-cleaned`).

## Training Details

- **Base Model:** Meta-Llama-3.1-8B (4-bit quantized)
- **Dataset:** yahma/alpaca-cleaned (51,760 examples; see the loading sketch below)
- **Training Steps:** 60
- **Framework:** Unsloth (roughly 2x faster training)
- **Hardware:** NVIDIA RTX 5090
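
For reference, the dataset can be pulled straight from the Hub with the `datasets` library. This is a hedged sketch, not part of the original training script; the column names (`instruction`, `input`, `output`) are the standard Alpaca fields and worth double-checking against the actual split.

```python
from datasets import load_dataset

# yahma/alpaca-cleaned ships a single train split of ~51,760 examples
dataset = load_dataset("yahma/alpaca-cleaned", split="train")
print(len(dataset))                 # expected: 51760
print(dataset[0]["instruction"])    # task description
print(dataset[0]["output"])         # reference answer
```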

## Prompt Format

```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```
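
Because generations are most reliable when prompts match the training template exactly, it can help to build them with a small helper. The function below is a minimal sketch (`build_prompt` is an illustrative name, not something shipped with this repo):

```python
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    # Fill the template exactly as shown in the Prompt Format section
    return ALPACA_TEMPLATE.format(instruction=instruction)

print(build_prompt("Explain what machine learning is in one sentence."))
```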

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model in 4-bit precision to keep memory usage low
model = AutoModelForCausalLM.from_pretrained(
    "Inkersion/llama-3.1-8b-alpaca-custom",
    device_map="auto",
    load_in_4bit=True,
)
tokenizer = AutoTokenizer.from_pretrained("Inkersion/llama-3.1-8b-alpaca-custom")

prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
Explain what machine learning is in one sentence.

### Response:
"""

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
# max_new_tokens limits only the generated continuation, not the prompt length
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
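
Note that newer transformers releases deprecate passing `load_in_4bit=True` directly to `from_pretrained` in favor of an explicit quantization config. A hedged equivalent (assuming `bitsandbytes` is installed; the NF4/bfloat16 settings are common defaults, not something this card specifies) looks like:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Explicit 4-bit config; NF4 quantization with bfloat16 compute is a common choice
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "Inkersion/llama-3.1-8b-alpaca-custom",
    device_map="auto",
    quantization_config=bnb_config,
)
```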

## Training Configuration

- **Learning Rate:** 2e-4
- **Batch Size:** 2 (per device)
- **Gradient Accumulation:** 4 steps
- **Optimizer:** AdamW 8-bit
- **Max Sequence Length:** 2048

Trained using Unsloth for optimized fine-tuning.
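
For anyone trying to reproduce the run, the hyperparameters above map roughly onto the standard Unsloth + TRL `SFTTrainer` recipe. The sketch below is an approximation reconstructed from the bullet points, not the actual training script; in particular the LoRA adapter settings, the EOS handling, and `bf16=True` are assumptions, and the exact `SFTTrainer` keyword arguments vary between trl versions.

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the 4-bit base model and tokenizer through Unsloth
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# LoRA adapter settings are assumed; the card does not list them
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n{output}"
)

def to_text(example):
    # Render each row into the prompt format and append EOS so generation stops
    text = ALPACA_TEMPLATE.format(
        instruction=example["instruction"], output=example["output"]
    )
    return {"text": text + tokenizer.eos_token}

dataset = load_dataset("yahma/alpaca-cleaned", split="train").map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,   # batch size 2 per device
        gradient_accumulation_steps=4,   # effective batch size of 8
        max_steps=60,
        learning_rate=2e-4,
        optim="adamw_8bit",
        bf16=True,                       # assumed; matches the bf16 weights published here
        logging_steps=10,
        output_dir="outputs",
    ),
)
trainer.train()
```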
