---
language:
- en
- es
license: apache-2.0
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: Agente-Director-Qwen2-7B-Instruct_CODE_Python-GGUF_Spanish_English_8bit
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
metrics:
- name: Accuracy
type: accuracy
value: 0.6078
- name: F1
type: f1
value: 0.6981
---
# Agente-Director-Qwen2-7B-Instruct_CODE_Python-GGUF_Spanish_English_8bit
[<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" width="100"/><img src="https://github.githubassets.com/assets/GitHub-Logo-ee398b662d42.png" width="100"/>](https://github.com/Agnuxo1)
- **Developed by:** [Agnuxo](https://github.com/Agnuxo1)
- **License:** apache-2.0
- **Finetuned from model:** Qwen/Qwen2-7B-Instruct
This model was fine-tuned using [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's [TRL](https://github.com/huggingface/trl) library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
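The Unsloth + TRL pipeline mentioned above can be sketched roughly as follows. This is a hedged illustration, not the actual training script: the LoRA hyperparameters, sequence length, batch size, and output path are all assumptions, and the dataset is expected to already carry a `"text"` column.

```python
# Rough sketch of an Unsloth + TRL supervised fine-tuning run.
# Hyperparameters, paths, and dataset layout are assumptions, not the
# actual configuration used for this model.

def to_sft_text(instruction: str, response: str) -> str:
    """Format one training example in the ChatML layout used by Qwen2."""
    return (
        f"<|im_start|>user\n{instruction}<|im_end|>\n"
        f"<|im_start|>assistant\n{response}<|im_end|>\n"
    )

def finetune(train_dataset):
    """Define-only sketch: requires unsloth, trl, transformers, and a CUDA GPU."""
    from unsloth import FastLanguageModel
    from trl import SFTTrainer
    from transformers import TrainingArguments

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="Qwen/Qwen2-7B-Instruct",
        max_seq_length=2048,          # assumed context length for training
        load_in_4bit=True,            # memory-efficient training; export later
    )
    model = FastLanguageModel.get_peft_model(model, r=16, lora_alpha=16)

    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=train_dataset,  # expects a "text" column
        dataset_text_field="text",
        max_seq_length=2048,
        args=TrainingArguments(
            per_device_train_batch_size=2,
            num_train_epochs=1,
            output_dir="outputs",     # assumed path
        ),
    )
    trainer.train()
    return model, tokenizer
```

After training, Unsloth provides export helpers for quantized formats such as GGUF.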
## Model Details
- **Model Parameters:** 7.07 billion (7,070.63 million)
- **Model Size:** 13.61 GB
- **Quantization:** 8-bit quantized
- **Estimated GPU Memory Required:** ~13 GB
Note: The actual memory usage may vary depending on the specific hardware and runtime environment.
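The details above can be turned into a minimal loading sketch. The Hub repo id below is an assumption derived from the card title (substitute the real id), and a GPU with roughly 13 GB of free memory is assumed:

```python
# Hedged loading sketch with transformers + bitsandbytes. The repo id is
# assumed from the card title and may need adjusting.
MODEL_ID = "Agnuxo/Agente-Director-Qwen2-7B-Instruct_CODE_Python-GGUF_Spanish_English_8bit"

def build_chatml_prompt(messages):
    """Format chat messages in Qwen2's ChatML layout, leaving an open
    assistant turn for the model to complete (the manual equivalent of
    tokenizer.apply_chat_template(..., add_generation_prompt=True))."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

def generate(user_message: str, max_new_tokens: int = 256) -> str:
    """Define-only sketch: needs transformers, accelerate, bitsandbytes,
    and a GPU with ~13 GB of free memory."""
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        device_map="auto",
        quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    )
    prompt = build_chatml_prompt([{"role": "user", "content": user_message}])
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )
```

For example, `generate("Escribe una función de Python que invierta una cadena.")` would request a code completion in Spanish. If the repository ships GGUF files only, loading through llama.cpp or llama-cpp-python would be needed instead of transformers.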
## Benchmark Results
This model was fine-tuned and evaluated on the GLUE MRPC (Microsoft Research Paraphrase Corpus) task:
- **Accuracy:** 0.6078
- **F1 Score:** 0.6981
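Accuracy and F1 are the two standard MRPC metrics. For reference, a tiny self-contained sketch of how they are computed (toy labels below, not the actual evaluation data):

```python
# Toy illustration of the two reported metrics (not the real MRPC run).
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the gold labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_score(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall on the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
print(accuracy(y_true, y_pred))  # 4/6 ≈ 0.667
print(f1_score(y_true, y_pred))  # 0.75
```

In practice these come from `evaluate.load("glue", "mrpc")` or scikit-learn's `accuracy_score` / `f1_score`; the hand-rolled versions above just make the definitions explicit.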
![GLUE MRPC Metrics](./glue_mrpc_metrics.png)
For more details, visit my [GitHub](https://github.com/Agnuxo1).
Thanks for your interest in this model!