2024-05-28 update (README.md)
_"The only difference between Science and screwing around is writing it down."_
# The LLM Creativity benchmark

_Last benchmark update: 28 May 2024_

The goal of this benchmark is to evaluate the ability of Large Language Models to be used
as an **uncensored creative writing assistant**. Human evaluation of the results is done manually, […]
- **Second best _large_ model**: [CohereForAI/c4ai-command-r-plus](https://huggingface.co/CohereForAI/c4ai-command-r-plus). Very close to the above choice, but 4 times slower! On my M2 Max with 38 GPU cores, I get an inference speed of **3.88 tok/s** with q5_km (see the measurement sketch after this list). However, it gives different results from WizardLM, and it can definitely be worth using.
- **Best _medium_ model**: [sophosympatheia/Midnight-Miqu-70B-v1.5](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.5)
- **Best _small_ model**: [CohereForAI/c4ai-command-r-v01](https://huggingface.co/CohereForAI/c4ai-command-r-v01)
- **Best _tiny_ model**: [daybreak-kunoichi-2dpo-7b](https://huggingface.co/crestf411/daybreak-kunoichi-2dpo-7b) and [froggeric/WestLake-10.7b-v2](https://huggingface.co/froggeric/WestLake-10.7B-v2-GGUF)
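For reference, here is a minimal sketch of how a tok/s figure like the ones quoted above can be measured with the `llama-cpp-python` bindings. The model path, prompt, and settings are placeholder illustrations, not the exact setup behind these numbers.

```python
import time
from llama_cpp import Llama  # pip install llama-cpp-python

# Placeholder path: point this at whichever GGUF quant you are testing.
llm = Llama(
    model_path="models/c4ai-command-r-plus.q5_k_m.gguf",
    n_ctx=8192,       # context window
    n_gpu_layers=-1,  # offload all layers to Metal/CUDA when available
)

prompt = "Write the opening paragraph of a short story set in a lighthouse."
start = time.perf_counter()
out = llm(prompt, max_tokens=256)
elapsed = time.perf_counter() - start

# completion_tokens counts only generated tokens, so this approximates
# generation throughput rather than prompt-processing speed.
n_tokens = out["usage"]["completion_tokens"]
print(f"{n_tokens} tokens in {elapsed:.1f}s -> {n_tokens / elapsed:.2f} tok/s")
```

Timing only the generation call keeps the figure comparable across quants, since model loading time varies a lot with file size.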
# Results

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65a681d3da9f6df1410562e9/nk3EzfVlR4mYc2DFyDzc7.png)
# Remarks about some of the models

[WizardLM-2-8x22B](https://huggingface.co/alpindale/WizardLM-2-8x22B)\
Even though its score is close to that of the iq4_xs version, **the _q4_km_ quant definitely feels smarter and writes better text than the
_iq4_xs_ quant**. Unfortunately, with my 96GB of RAM, it fails once I go over 8k context size. What works best for me is to use q4_km up to 8k,
and then switch to the iq4_xs version, which can accommodate a much larger context size (a sketch of this switching follows below).
I used the imatrix quantisation from [mradermacher](https://huggingface.co/mradermacher/WizardLM-2-8x22B-i1-GGUF)\
Fast inference! Great quality writing that feels a lot different from most other models.
Unrushed, with fewer repetitions. Good at following instructions.
Non-creative writing tasks also come out better, with more details and useful additional information.
This is a huge improvement over the original **Mixtral-8x22B**.
My new favourite model.\
Inference speed: **11.22 tok/s** (q4_km on M2 Max with 38 GPU cores)\
Inference speed: **11.81 tok/s** (iq4_xs on M2 Max with 38 GPU cores)
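Here is a minimal sketch of that q4_km / iq4_xs switching strategy, assuming the `llama-cpp-python` bindings; the file names are hypothetical placeholders for the two quants discussed above.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Hypothetical local paths for the two WizardLM-2-8x22B quants.
Q4_KM_PATH = "models/WizardLM-2-8x22B.q4_k_m.gguf"   # smarter output, ~8k ctx max on 96GB RAM
IQ4_XS_PATH = "models/WizardLM-2-8x22B.iq4_xs.gguf"  # smaller file, fits much larger contexts

def load_wizardlm(n_ctx: int) -> Llama:
    """Pick the quant by requested context size: q4_km up to 8k, iq4_xs beyond."""
    path = Q4_KM_PATH if n_ctx <= 8192 else IQ4_XS_PATH
    return Llama(model_path=path, n_ctx=n_ctx, n_gpu_layers=-1)

llm = load_wizardlm(n_ctx=4096)    # <= 8k: uses the q4_km quant
# llm = load_wizardlm(n_ctx=16384) # >  8k: would fall back to iq4_xs
```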
[daybreak-kunoichi-2dpo-7b](https://huggingface.co/crestf411/daybreak-kunoichi-2dpo-7b)\
Absolutely no guard rails! No refusals, no censorship. Good writing, but very hardcore.

[jukofyork/Dark-Miqu-70B](https://huggingface.co/jukofyork/Dark-Miqu-70B)\
Can write long and detailed narratives, but often continues writing slightly beyond the requested stop point.
It has some slight difficulty following instructions. The biggest problem by far, though, is that it is marred by
too many spelling and grammar mistakes.

[dreamgen/opus-v1-34b](https://huggingface.co/dreamgen/opus-v1-34b)\
Writes complete nonsense: no logic, absurd plots. Poor writing style. Lots of canned expressions used again and again.
**Previously:**

[llmixer/BigWeave-v16-103b](https://huggingface.co/llmixer/BigWeave-v16-103b)\
A miqu self-merge, which is the winner of the BigWeave experiments. I was hoping for an improvement over the
existing _traditional_ 103B and 120B self-merges, but although it comes close, it is still not as good.

[…]
Experiments in trying to get a better self-merge of miqu-1, by using @jukofyork's _attenuation_ method.
More info about the _attenuation_ is available in this [discussion](https://huggingface.co/wolfram/miqu-1-120b/discussions/4).
So far no better results.
[CohereForAI/c4ai-command-r-plus](https://huggingface.co/CohereForAI/c4ai-command-r-plus)\
A big step up for open LLM models. It tends to work best when given the beginning of an answer
to complete. To get the best out of it, I recommend getting familiar with the […]
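For illustration, here is a minimal sketch of that answer-seeding technique, assuming the `llama-cpp-python` bindings and the turn tokens from the Command R chat template; the model path, prompt, and seed line are placeholders.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="models/c4ai-command-r-plus.q5_k_m.gguf",  # placeholder path
    n_ctx=8192,
    n_gpu_layers=-1,
)

# Everything after <|CHATBOT_TOKEN|> is the seeded "beginning of an answer"
# that the model is asked to continue in the same voice. Assumes a
# llama-cpp-python build that parses special tokens in raw prompts.
prompt = (
    "<|START_OF_TURN_TOKEN|><|USER_TOKEN|>Write a ghost story set on a container ship."
    "<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>"
    "The first thing the new cook noticed was the silence below deck. "
)
out = llm(prompt, max_tokens=512, stop=["<|END_OF_TURN_TOKEN|>"])
print(out["choices"][0]["text"])
```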