Uploaded model

  • Developed by: Agnuxo
  • License: apache-2.0
  • Finetuned from model: Qwen/Qwen2-7B-Instruct

This model was fine-tuned using Unsloth and Hugging Face's TRL library.
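The training script itself is not included in this card; the snippet below is only a minimal sketch of what an Unsloth + TRL (SFTTrainer) fine-tune of Qwen/Qwen2-7B-Instruct can look like. The toy dataset and all hyperparameters are illustrative assumptions, and argument names may differ slightly between Unsloth/TRL versions.

```python
# Minimal sketch (not the author's actual training script) of fine-tuning
# Qwen/Qwen2-7B-Instruct with Unsloth and TRL's SFTTrainer.
# The toy dataset and hyperparameters are placeholders; argument names may
# vary between Unsloth/TRL versions.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import Dataset

# Load the base model (4-bit here to keep memory modest during training).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen2-7B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Tiny in-memory dataset, purely for illustration.
train_dataset = Dataset.from_dict({
    "text": [
        "### Instruction:\nSay hello.\n\n### Response:\nHello!",
        "### Instruction:\nName a color.\n\n### Response:\nBlue.",
    ]
})

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=30,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```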

Benchmark Results

This model has been fine-tuned for various tasks and evaluated on the following benchmark:

GLUE_MRPC

Accuracy: 0.6863
F1: 0.8134

[GLUE_MRPC metrics chart]
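These scores can be reproduced with the standard GLUE metric from the evaluate library. The sketch below assumes a predict_label helper, which is a hypothetical stand-in since the card does not describe how the model's answers are mapped to paraphrase labels:

```python
# Sketch: scoring GLUE MRPC (accuracy + F1) with the `evaluate` library.
# `predict_label` is a hypothetical stand-in; replace it with the logic that
# queries this model and maps its answer to a 0/1 paraphrase label.
from datasets import load_dataset
import evaluate

mrpc = load_dataset("glue", "mrpc", split="validation")
metric = evaluate.load("glue", "mrpc")  # reports accuracy and f1 for MRPC

def predict_label(sentence1: str, sentence2: str) -> int:
    # Placeholder prediction; swap in the actual model call.
    return 1

predictions = [predict_label(ex["sentence1"], ex["sentence2"]) for ex in mrpc]
results = metric.compute(predictions=predictions, references=mrpc["label"])
print(results)  # {'accuracy': ..., 'f1': ...}
```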

Model Size: 7,070,626,304 parameters
Required Memory:
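As a rough, assumption-based estimate (weights only, ignoring activations and KV cache), the required memory can be derived from the parameter count and the bytes stored per parameter:

```python
# Back-of-the-envelope weight-memory estimate from the parameter count above.
# Weights only; activations, KV cache, and framework overhead are extra.
params = 7_070_626_304

for precision, bytes_per_param in [("fp16/bf16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    gib = params * bytes_per_param / 1024**3
    print(f"{precision:>9}: ~{gib:.1f} GiB")

# fp16/bf16: ~13.2 GiB, 8-bit: ~6.6 GiB, 4-bit: ~3.3 GiB
```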

For more details, visit my GitHub.

Thanks for your interest in this model!

GGUF

Model size: 7.62B params
Architecture: qwen2
Quantization: 6-bit
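A minimal sketch for running the 6-bit GGUF build locally with llama-cpp-python follows; the file name is a hypothetical placeholder, so point it at the GGUF file actually shipped in this repository.

```python
# Sketch: chatting with the 6-bit GGUF build via llama-cpp-python.
# The model_path is a hypothetical file name; use the GGUF file from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen2-7b-instruct-finetune.Q6_K.gguf",  # hypothetical name
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{
        "role": "user",
        "content": "Are these paraphrases? 'A cat sat.' / 'A cat was sitting.'",
    }],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```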
