Qwen2.5-Godzilla-Coder-51B-gguf

"It will pound your programming problems into the pavement... perfectly."
Tipping the scales at 101 layers and 1215 tensors... the monster lives.
Two monsters in fact.
Each model generates stronger, more compact code with an enhanced understanding of your instructions, and follows what you tell it to the letter.
And then some.
These overpowered CODING ENGINES are based on two of the best coder AIs:
"Qwen2.5-Coder-32B-Instruct"
[ https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct ]
and
"OlympicCoder-32B"
[ https://huggingface.co/open-r1/OlympicCoder-32B ]
These two models are stuffed into one MASSIVE 51B merge that is stronger in performance and understanding than either donor model.
Quants Q2_K and Q4_K_S - one of each per version - are available at the moment.
These are unaltered quants for primary testing.
CONFIGS:
- #1 -> Qwen2.5-Coder-32B-Instruct primary/start, with OlympicCoder-32B as "finalizer".
- #2 -> OlympicCoder-32B as primary/start, with Qwen2.5-Coder-32B-Instruct as "finalizer".
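For a sense of how two 64-layer, 32B models can become a 101-layer, 51B model: a mergekit "passthrough" merge stacks layer ranges from each donor, and overlapping the ranges yields 64 + 37 = 101 layers. The layout below is a hypothetical sketch of config #1 only - the actual recipe has not been published (source repos to follow), so the layer ranges, merge method, and dtype are all assumptions.

```yaml
# Hypothetical mergekit passthrough layout for config #1 (illustrative only).
# Qwen2.5-32B-class models have 64 layers; 64 + 37 overlapped = 101 layers.
slices:
  - sources:
      - model: Qwen/Qwen2.5-Coder-32B-Instruct   # primary/start
        layer_range: [0, 64]
  - sources:
      - model: open-r1/OlympicCoder-32B          # "finalizer"
        layer_range: [27, 64]                    # assumed overlap
merge_method: passthrough
dtype: bfloat16
```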
NOTES:
- The two configs/versions behave very differently from each other.
- Tool calling is supported in both versions (see the sketch just after these notes).
- Source(s) / full quanting / full repos to follow.
- The model is fully operational at Q2_K - both versions - and stronger than the base donor models in terms of raw performance.
- Final model size (including layers/tensors) / config subject to change.
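As a hedged illustration of tool calling: both donors use ChatML-style templates, and llama.cpp's llama-server exposes an OpenAI-compatible endpoint that accepts a "tools" list when launched with --jinja. The port, placeholder filename, and the run_tests tool below are assumptions for illustration, not a published recipe.

```python
# Minimal tool-calling sketch against a local llama-server endpoint,
# assuming the server was started with the embedded Jinja template, e.g.:
#   llama-server -m <this-model>.gguf --jinja -c 32768
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

tools = [{
    "type": "function",
    "function": {
        "name": "run_tests",  # hypothetical tool, for illustration only
        "description": "Run the project's unit test suite.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]

resp = client.chat.completions.create(
    model="local",  # llama-server accepts any model name
    messages=[{"role": "user", "content": "Run the tests in ./src."}],
    tools=tools,
    temperature=0.6,
)
print(resp.choices[0].message.tool_calls)
```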
Config / Settings
The model is set at 32k/32768 context for these GGUFs; full quants/full repos will be 128k/131072.
Requirements [Qwen 2.5 32B Coder default settings]:
- Temp 0.5 to 0.7 (or lower)
- top_k: 20, top_p: 0.8, min_p: 0.05
- rep pen: 1.1 (can be lower)
- Jinja Template (embedded) or CHATML template.
- A system prompt is not required (tests were run with a blank system prompt).
Refer to either "Qwen2.5-Coder-32B-Instruct" and/or "OlympicCoder-32B" repos (above) for additional settings, benchmarks and usage.
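A minimal sketch of these settings applied through llama-cpp-python, one common way to run GGUFs locally; the quant filename is an assumption - substitute whichever quant/version you downloaded.

```python
# Hedged sketch: the recommended Qwen 2.5 Coder defaults via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen2.5-Godzilla-Coder-51B-Q4_K_S.gguf",  # assumed filename
    n_ctx=32768,      # these GGUFs are set at 32k context
    n_gpu_layers=-1,  # offload all layers if VRAM allows
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a binary search in Python."}],
    temperature=0.6,     # 0.5 to 0.7 (or lower)
    top_k=20,
    top_p=0.8,
    min_p=0.05,
    repeat_penalty=1.1,  # can be lower
)
print(out["choices"][0]["message"]["content"])
```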
Help, Adjustments, Samplers, Parameters and More
CHANGE THE NUMBER OF ACTIVE EXPERTS:
See this document:
https://huggingface.co/DavidAU/How-To-Set-and-Manage-MOE-Mix-of-Experts-Model-Activation-of-Experts
Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:
In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern" ;
Set the "Smoothing_factor" to 1.5
: in KoboldCpp -> Settings->Samplers->Advanced-> "Smooth_F"
: in text-generation-webui -> parameters -> lower right.
: In Silly Tavern this is called: "Smoothing"
NOTE: For "text-generation-webui"
-> If using GGUFs, you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model).
Source versions (and config files) of my models are here:
OTHER OPTIONS:
Increase rep pen from 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor").
If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing") just make the adjustment as noted.
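If you script against KoboldCpp rather than using its GUI, the same setting can be passed per request. The sketch below reflects KoboldCpp's /api/v1/generate endpoint and default port 5001 as I understand them - verify the field names against your KoboldCpp version's API docs before relying on it.

```python
# Hedged sketch: setting smoothing_factor through KoboldCpp's HTTP API.
import requests

payload = {
    "prompt": "Write a quicksort in C.",
    "max_length": 512,
    "temperature": 0.6,
    "top_k": 20,
    "top_p": 0.8,
    "min_p": 0.05,
    "rep_pen": 1.05,          # can stay lower when smoothing is on
    "smoothing_factor": 1.5,  # the quadratic-sampling setting described above
}

r = requests.post("http://localhost:5001/api/v1/generate", json=payload)
print(r.json()["results"][0]["text"])
```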
Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers
This a "Class 1" model:
For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) please see:
You can see all parameters used for generation, in addition to advanced parameters and samplers to get the most out of this model here:
Special Thanks:
Special thanks to all the following, and many more...
All the model makers, fine tuners, mergers, and tweakers:
- Provides the raw "DNA" for almost all my models.
- Sources of model(s) can be found on the repo pages, especially the "source" repos with link(s) to the model creator(s).
Huggingface [ https://huggingface.co ] :
- The place to store, merge, and tune models endlessly.
- THE reason we have an open source community.
LlamaCPP [ https://github.com/ggml-org/llama.cpp ] :
- The ability to compress and run models on GPU(s), CPU(s) and almost all devices.
- Imatrix, Quantization, and other tools to tune the quants and the models.
- Llama-Server: a CLI-based direct interface to run GGUF models.
- The only tool I use to quant models.
Quant-Masters: Team Mradermacher, Bartowski, and many others:
- Quant models day and night for us all to use.
- They are the lifeblood of open source access.
MergeKit [ https://github.com/arcee-ai/mergekit ] :
- The universal online/offline tool to merge models together and forge something new.
- Over 20 methods to almost instantly merge models, pull them apart, and put them together again.
- The tool I have used to create over 1500 models.
LMStudio [ https://lmstudio.ai/ ] :
- The go-to tool to test and run models in GGUF format.
- The tool I use to test/refine and evaluate new models.
- LMStudio forum on Discord: endless info and community for open source.
Text Generation Webui // KoboldCpp // SillyTavern:
- Excellent tools to run GGUF models with - [ https://github.com/oobabooga/text-generation-webui ] [ https://github.com/LostRuins/koboldcpp ] .
- SillyTavern [ https://github.com/SillyTavern/SillyTavern ] can be used with LMStudio [ https://lmstudio.ai/ ] , TextGen [ https://github.com/oobabooga/text-generation-webui ] , KoboldCpp [ https://github.com/LostRuins/koboldcpp ] , and Llama-Server [part of LlamaCPP] as an off-the-scale front-end control system and interface for working with models.
Base model: DavidAU/Qwen2.5-Godzilla-Coder-51B