---
base_model:
- Phr00t/Phr00tyMix-v2-32B
library_name: transformers
base_model_relation: quantized
tags:
- mergekit
- merge
- creative writing
- qwq
- deepseek
- r1
- qwen
- roleplay
- qwen2
- RP
---

This model has been replaced by [Phr00tyMix v3](https://huggingface.co/Phr00t/Phr00tyMix-v3-32B-GGUF)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/631be8402ea8535ea48abbc6/zih8hNmLcp-uOrzb7JKBs.png)

# Phr00tyMix-v2-32B

The goal: smart, obedient, uncensored, coherent roleplay and creative story writing.

I think this is a significant improvement over Phr00tyMix-v1. This model is more uncensored and pays much better attention to details.

I picked these models mostly for creative purposes that do not force thinking into responses:

* [ArliAI/QwQ-32B-ArliAI-RpR-v4](https://huggingface.co/ArliAI/QwQ-32B-ArliAI-RpR-v4) (for smart creativity and longer context)
* [allura-org/Qwen2.5-32b-RP-Ink](https://huggingface.co/allura-org/Qwen2.5-32b-RP-Ink) ("cursed" roleplay support)
* [Delta-Vector/Hamanasu-Magnum-QwQ-32B](https://huggingface.co/Delta-Vector/Hamanasu-Magnum-QwQ-32B) (solid instruct creative finetune)
* [Sao10K/32B-Qwen2.5-Kunou-v1](https://huggingface.co/Sao10K/32B-Qwen2.5-Kunou-v1) (solid Qwen roleplay finetune)
* [nbeerbower/EVA-Gutenberg3-Qwen2.5-32B](https://huggingface.co/nbeerbower/EVA-Gutenberg3-Qwen2.5-32B) (mix of many solid writing finetunes)

The base model is [huihui-ai/DeepSeek-R1-Distill-Qwen-32B-abliterated](https://huggingface.co/huihui-ai/DeepSeek-R1-Distill-Qwen-32B-abliterated), chosen for an uncensored and very smart foundation.

I dropped "LongWriter Zero" because it didn't seem to write very well when tested directly. I also dropped ROMBOS, as the DeepSeek-R1 distill appears to provide enough brains as a foundation.

I've been very impressed with my (limited) testing so far (formatted script writing, uncensored prompts, reasoning, etc.).

These are the GGUFs for [the original model](https://huggingface.co/Phr00t/Phr00tyMix-v2-32B/); a minimal loading sketch follows the merge configuration below.

## Merge Details

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: model_stock
base_model: huihui-ai/DeepSeek-R1-Distill-Qwen-32B-abliterated
dtype: bfloat16
models:
  - model: nbeerbower/EVA-Gutenberg3-Qwen2.5-32B
  - model: Delta-Vector/Hamanasu-Magnum-QwQ-32B
  - model: ArliAI/QwQ-32B-ArliAI-RpR-v4
  - model: Sao10K/32B-Qwen2.5-Kunou-v1
  - model: allura-org/Qwen2.5-32b-RP-Ink
```
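
These GGUFs should load in any llama.cpp-based runtime (llama.cpp, KoboldCpp, LM Studio, etc.). Below is a minimal sketch using the `llama-cpp-python` bindings; the filename, context size, and sampling values are placeholders, not values shipped with this repo, so check the repo's file list for the actual quant names.

```python
# Minimal sketch: load one of these GGUFs with llama-cpp-python.
# The filename below is a placeholder -- use the actual quant name from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="Phr00tyMix-v2-32B-Q4_K_M.gguf",  # hypothetical filename
    n_ctx=8192,        # context window; raise if your hardware allows
    n_gpu_layers=-1,   # offload all layers to GPU if VRAM permits
)

messages = [
    {"role": "system", "content": "You are a creative roleplay partner."},
    {"role": "user", "content": "Set the opening scene of a noir mystery."},
]

out = llm.create_chat_completion(messages=messages, max_tokens=512, temperature=0.8)
print(out["choices"][0]["message"]["content"])
```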