silly-v0.2-exl2
Original model: silly-v0.2 by wave-on-discord
Based on: Mistral-Nemo-Base-2407 by Mistral AI
Quants
4bpw h6 (main)
4.5bpw h6
5bpw h6
6bpw h6
8bpw h8
Quantization notes
Made with ExLlamaV2 0.3.1 using its default calibration dataset.
The model can be used with Nvidia RTX GPUs on Windows or Linux, or with AMD GPUs via ROCm on Linux, through TabbyAPI or Text-Generation-WebUI.
The 6bpw quant should fit 16k context on a 12GB card such as an RTX 3060, or 32k context on a 16GB card such as an RTX 4060 Ti, both with Q8 cache.
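If you want to load the quant directly instead of going through a frontend, here is a minimal sketch using the ExLlamaV2 Python API with a Q8 KV cache; the model path is a placeholder and the context length matches the 12GB example above:

```python
# Minimal ExLlamaV2 loading sketch (assumed API as of exllamav2 0.3.x);
# the model path is hypothetical.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache_Q8, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

config = ExLlamaV2Config("/models/silly-v0.2-exl2-6bpw")  # hypothetical local path
model = ExLlamaV2(config)

# Q8 quantized KV cache, 16k context, lazy allocation for autosplit loading
cache = ExLlamaV2Cache_Q8(model, max_seq_len=16384, lazy=True)
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)

print(generator.generate(prompt="Once upon a time", max_new_tokens=64, add_bos=True))
```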
In my brief testing the model had an interesting writing style, but it's very fragile and easily starts looping or repeating.
It should probably be used with the DRY sampler to avoid repetition and loops; both TabbyAPI and TGW support it.
I don't recommend the repetition_penalty or frequency_penalty samplers for this model, as they are far more destructive to output quality than DRY.
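As an illustration, here is a hedged sketch of a chat completion request to TabbyAPI's OpenAI-compatible endpoint with DRY enabled; the dry_* parameter names follow TabbyAPI's sampler options but may vary between versions, and the values are common starting points rather than tuned recommendations:

```python
# Sketch of a TabbyAPI request with DRY sampling (assumed default port 5000;
# the API key and sampler values are placeholders, not tuned settings).
import requests

resp = requests.post(
    "http://127.0.0.1:5000/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "messages": [{"role": "user", "content": "Hello!"}],
        "max_tokens": 256,
        "temperature": 0.8,
        "dry_multiplier": 0.8,   # > 0 enables DRY
        "dry_base": 1.75,
        "dry_allowed_length": 2,
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```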
Original model card
silly-v0.2
Finetune of Mistral-Nemo-Base-2407 designed to emulate the writing style of character.ai models.
- 2 epochs of SFT on RP data, then about an hour of PPO on 8xH100 with POLAR-7B RFT
- Kind of wonky; if you're dealing with longer messages you may need to decrease your temperature
- ChatML chat format (see the example after the reviews below)
- Reviews:
it's typically good at writing, very good for a 12B, coherent in RP, follows context and starts conversations well
I do legit like it, it feels good to use. When it gives me stable output, the output is high quality and on task. It's got small-model stupidity where basic logic holds but it invents things or forgets them (feels like a small effective context window, maybe?), which, to be clear, is perfectly fine. Very good at synthesizing and inferring information provided in context on a higher level
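For reference, a prompt in the ChatML format the finetune expects looks like the string built below; the helper function and system text are purely illustrative:

```python
# Illustrative ChatML prompt builder; the function name and system text are
# placeholders, but the <|im_start|>/<|im_end|> structure is standard ChatML.
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(chatml_prompt("You are {{char}}, roleplaying with {{user}}.", "Hi there!"))
```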
This is mostly a proof-of-concept, showcasing that POLAR reward models can be very useful for "out of distribution" tasks like roleplaying. If you're working on your own roleplay finetunes, please consider using POLAR!