Dark Sapling V2 7B - 32k Context - Ultra Quality - 32bit upscale.

A complete remerge and remaster of the incredible Dark Sapling V2 7B - 32k Context, rebuilt from the source files.
Registering an impressive drop of 320 points in perplexity (lower is better) at Q4KM.
This puts "Q4KM" operating at "Q6" levels, and elevates Q6 and Q8 further still.
Likewise, even Q2K (the smallest quant) will operate at a much higher level than its original source counterpart.
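For readers who want to reproduce this kind of quant-to-quant comparison themselves, below is a minimal sketch that drives llama.cpp's llama-perplexity tool from Python. The GGUF file names and the test corpus are assumptions for illustration, not files from this repo; lower perplexity means the quant tracks the full-precision model more closely.

```python
# Hypothetical harness: score several quants of the same model with
# llama.cpp's llama-perplexity tool and compare the results.
import subprocess

quants = ["Q2_K", "Q4_K_M", "Q6_K", "Q8_0"]  # smallest to largest

for q in quants:
    model = f"DarkSapling-V2-7B.{q}.gguf"    # placeholder file name
    proc = subprocess.run(
        ["llama-perplexity", "-m", model, "-f", "wiki.test.raw", "-c", "512"],
        capture_output=True, text=True,
    )
    # The final PPL figure appears near the end of the tool's output.
    tail = (proc.stdout + proc.stderr).strip().splitlines()[-1:]
    print(q, *tail)
```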
RESULTS:
The result is superior performance in instruction following, reasoning, depth, nuance and emotion.
Prompts can be shorter, as the model understands nuance better; as a side effect, the smaller prompts leave more of the context window available for output.
Note that there will be an outsized difference between quants, especially for creative and/or "no right answer" use cases.
Because of this, it is suggested you download the highest quant you can run, along with its closest neighbours, so to speak.
E.g.: Q4KS, Q4KM, Q5KS. (A download sketch follows below.)
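As an illustration of the "neighbouring quants" advice, here is a minimal sketch using the huggingface_hub library. The repo id and GGUF file names below are placeholders, so check the repo's file list for the real names before running.

```python
# Minimal sketch: fetch a quant plus its nearest neighbours.
from huggingface_hub import hf_hub_download

repo_id = "your-org/DarkSapling-V2-7B-GGUF"       # placeholder repo id
for quant in ("Q4_K_S", "Q4_K_M", "Q5_K_S"):      # a quant and its neighbours
    filename = f"DarkSapling-V2-7B.{quant}.gguf"  # placeholder file name
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    print(quant, "->", local_path)
```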
Imatrix Plus versions will be uploaded to a separate repo shortly.
Special thanks to "TEEZEE" the original model creator:
[ https://huggingface.co/TeeZee/DarkSapling-7B-v2.0 ]
NOTE: Version 1 and Version 1.1 are also remastered.
Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers
This is a "Class 1" model:
For all settings used for this model (including specifics for its "class"), example generation(s), and the advanced settings guide (which often addresses model issues and covers methods to improve performance for all use cases, including chat and roleplay), please see:
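Since no settings table is reproduced on this page, the sketch below shows where the usual knobs live when running a GGUF build through the llama-cpp-python bindings. The specific values are illustrative assumptions only; the class-specific guide referenced above remains the authority for this model.

```python
# Illustrative sampler settings for a creative 7B model via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="DarkSapling-V2-7B.Q4_K_M.gguf",  # placeholder path
            n_ctx=32768)                                 # full 32k context

out = llm(
    "Write the opening paragraph of a gothic short story.",
    max_tokens=400,
    temperature=0.8,      # higher = more creative variance
    top_p=0.95,
    repeat_penalty=1.1,   # mild anti-repetition
)
print(out["choices"][0]["text"])
```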
Special Thanks:
Special thanks to all the following, and many more...
All the model makers, fine tuners, mergers, and tweakers:
- They provide the raw "DNA" for almost all my models.
- Sources of model(s) can be found on the repo pages, especially the "source" repos with link(s) to the model creator(s).
Huggingface [ https://huggingface.co ] :
- The place to store, merge, and tune models endlessly.
- THE reason we have an open source community.
LlamaCPP [ https://github.com/ggml-org/llama.cpp ] :
- The ability to compress and run models on GPU(s), CPU(s) and almost all devices.
- Imatrix, Quantization, and other tools to tune the quants and the models.
- Llama-Server: a CLI-launched local server, with an OpenAI-compatible API, for running GGUF models (see the sketch after this list).
- The only tool I use to quant models.
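As a small illustration of the Llama-Server entry above, this sketch queries a locally running llama-server instance through its OpenAI-compatible HTTP endpoint (for example, one started with `llama-server -m model.gguf -c 32768`). The port and prompt are assumptions.

```python
# Query a local llama-server over its OpenAI-compatible chat endpoint.
import json, urllib.request

req = urllib.request.Request(
    "http://127.0.0.1:8080/v1/chat/completions",  # default llama-server port
    data=json.dumps({
        "messages": [{"role": "user",
                      "content": "Summarize the plot of Dracula in two sentences."}],
        "temperature": 0.8,
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)
print(reply["choices"][0]["message"]["content"])
```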
Quant-Masters: Team Mradermacher, Bartowski, and many others:
- Quant models day and night for us all to use.
- They are the lifeblood of open source access.
MergeKit [ https://github.com/arcee-ai/mergekit ] :
- The universal online/offline tool to merge models together and forge something new.
- Over 20 methods to almost instantly merge models, pull them apart, and put them together again (a hedged example follows after this list).
- The tool I have used to create over 1500 models.
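To make the MergeKit entry concrete, here is a hedged sketch of a simple two-model linear merge driven from Python. The model names, weights, and dtype are placeholders, not the actual recipe behind this remaster.

```python
# Hypothetical two-model linear merge via MergeKit's mergekit-yaml CLI.
import pathlib, subprocess

config = """\
models:
  - model: TeeZee/DarkSapling-7B-v2.0   # placeholder input models
    parameters:
      weight: 0.5
  - model: mistralai/Mistral-7B-v0.1
    parameters:
      weight: 0.5
merge_method: linear
dtype: float32    # full 32-bit precision, in the spirit of the "32bit upscale"
"""
pathlib.Path("merge.yml").write_text(config)

# mergekit-yaml <config> <output-dir> writes the merged model to ./merged-model
subprocess.run(["mergekit-yaml", "merge.yml", "./merged-model"], check=True)
```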
Lmstudio [ https://lmstudio.ai/ ] :
- The go-to tool to test and run models in GGUF format.
- The tool I use to test, refine, and evaluate new models.
- The LM Studio Discord forum; endless info and community for open source.
Text Generation Webui // KoboldCPP // SillyTavern:
- Excellent tools for running GGUF models: [ https://github.com/oobabooga/text-generation-webui ] and [ https://github.com/LostRuins/koboldcpp ] .
- SillyTavern [ https://github.com/SillyTavern/SillyTavern ] can be used with LMStudio [ https://lmstudio.ai/ ] , TextGen [ https://github.com/oobabooga/text-generation-webui ], KoboldCPP [ https://github.com/LostRuins/koboldcpp ], or Llama-Server [part of LlamaCPP] as an off-the-scale front-end control system and interface for working with models.
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit.