# SOLAR-10.7B-Instruct-v1.0-uncensored
SOLAR-10.7B-Instruct-v1.0 fine-tuned to be less censored. Refer to [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) for model info and usage instructions.
## Training details
This model was trained with LoRA adapters using TRL's `DPOTrainer` on the unalignment/toxic-dpo-v0.1 preference dataset.
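For intuition about what `DPOTrainer` optimizes, here is a minimal, dependency-free sketch of the DPO loss for a single preference pair. This is an illustration of the objective, not the actual training script; the function name and example log-probabilities are made up for demonstration.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    Each argument is the summed log-probability that the policy or
    frozen reference model assigns to the chosen/rejected response.
    beta controls how far the policy may drift from the reference.
    """
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_ratio - rejected_ratio)
    # -log(sigmoid(logits)), computed stably via log1p
    return math.log1p(math.exp(-logits)) if logits > -30 else -logits

# With no drift from the reference, the loss sits at the
# no-preference baseline log(2) ~= 0.693; once the policy favors the
# chosen response more than the reference does, the loss drops.
neutral = dpo_loss(-10.0, -12.0, -10.0, -12.0)
improved = dpo_loss(-8.0, -14.0, -10.0, -12.0)
```

Minimizing this loss pushes the policy to assign relatively more probability to the chosen (here: "uncensored") responses than to the rejected ones, while the `beta`-scaled log-ratios keep it anchored to the reference model.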
## How to Cite
```bibtex
@misc{solarUncensoredDPO,
  title={solar-10.7b-instruct-V1.0-uncensored},
  url={https://huggingface.co/w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored},
  author={Stepan Zuev},
  year={2023},
  month={Dec}
}
```
## Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 20.56 |
| IFEval (0-Shot) | 38.84 |
| BBH (3-Shot) | 33.86 |
| MATH Lvl 5 (4-Shot) | 0.23 |
| GPQA (0-shot) | 5.93 |
| MuSR (0-shot) | 18.49 |
| MMLU-PRO (5-shot) | 26.04 |