---
license: apache-2.0
model-index:
- name: SOLAR-10.7B-Instruct-v1.0-uncensored
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 38.84
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 33.86
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 0.23
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 5.93
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 18.49
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 26.04
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored
      name: Open LLM Leaderboard
---

# SOLAR-10.7B-Instruct-v1.0-uncensored
SOLAR-10.7B-Instruct-v1.0, fine-tuned to be less censored. Refer to [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0) for model info and usage instructions.
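
Usage follows the upstream SOLAR instruct model. The snippet below is a minimal sketch using the standard `transformers` chat-template API; the prompt, dtype, and generation settings are illustrative, not prescribed by this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # illustrative; pick a dtype that fits your hardware
    device_map="auto",
)

# Build a single-turn prompt with the model's chat template
conversation = [{"role": "user", "content": "Explain what a LoRA adapter is."}]
prompt = tokenizer.apply_chat_template(
    conversation, tokenize=False, add_generation_prompt=True
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, use_cache=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```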
					
						
## Training details
This model was trained with LoRA and `DPOTrainer` on [unalignment/toxic-dpo-v0.1](https://huggingface.co/datasets/unalignment/toxic-dpo-v0.1).
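
The rough shape of that setup is sketched below with `trl` and `peft`. The LoRA settings and hyperparameters are illustrative assumptions, not the values used to train this model, and argument names vary a little between `trl` releases.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "upstage/SOLAR-10.7B-Instruct-v1.0"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# toxic-dpo-v0.1 provides prompt/chosen/rejected columns, the format DPOTrainer expects
train_dataset = load_dataset("unalignment/toxic-dpo-v0.1", split="train")

peft_config = LoraConfig(  # illustrative LoRA settings
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

args = DPOConfig(  # illustrative hyperparameters
    output_dir="solar-10.7b-uncensored-dpo",
    beta=0.1,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    num_train_epochs=1,
    bf16=True,
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # `tokenizer=` on older trl releases
    peft_config=peft_config,     # with a PEFT config, no separate reference model is needed
)
trainer.train()
```

Check the `trl` version you have installed before copying this verbatim, since the `DPOTrainer` signature has changed across releases.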
					
						
## How to Cite
```
@misc{solarUncensoredDPO,
  title={solar-10.7b-instruct-V1.0-uncensored},
  url={https://huggingface.co/w4r10ck/SOLAR-10.7B-Instruct-v1.0-uncensored},
  author={Stepan Zuev},
  year={2023},
  month={Dec}
}
```
					
						
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_w4r10ck__SOLAR-10.7B-Instruct-v1.0-uncensored).
					
						
|      Metric       |Value|
|-------------------|----:|
|Avg.               |20.56|
|IFEval (0-Shot)    |38.84|
|BBH (3-Shot)       |33.86|
|MATH Lvl 5 (4-Shot)| 0.23|
|GPQA (0-shot)      | 5.93|
|MuSR (0-shot)      |18.49|
|MMLU-PRO (5-shot)  |26.04|