# M3.2-24B-Animus-V7.0

Send me your support to help me feed the data beast! Also taking commissions for universe-specific models.
Support on Ko-fi

## GGUF Quantized Models
The GGUF quantized model files are available for download from the repository's file list.
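If you would rather script the download than browse the file list, a minimal sketch with huggingface_hub is shown below; note that the exact repo id and quantization filename are placeholder assumptions here, as the real names are listed in the repository.

```python
from huggingface_hub import hf_hub_download

# Fetch one quantized build. Both values below are placeholders:
# check the repository's file list for the real repo id and filenames.
path = hf_hub_download(
    repo_id="Darkhn/M3.2-24B-Animus-V7.0-GGUF",   # assumed repo id
    filename="M3.2-24B-Animus-V7.0-Q4_K_M.gguf",  # placeholder quant name
)
print(f"Saved to {path}")
```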
## Model Description
This is Version 7.0 of the fine-tuned mistralai/Mistral-Small-3.2-24B-Instruct-2506, specialized for roleplaying within the Wings of Fire universe. V7.0 is an iterative improvement over V6, with focused adjustments to enhance the model's ability to act as a Dungeon Master. This version also fixes an issue with consecutive AI turns in the training data, leading to a more stable and healthier learning process.
The goal of this model is to provide the most lore-accurate and immersive conversational experience to date. It can adopt canon character personas with high fidelity, explore alternate timelines from the books, and guide the narrative with new interactive elements.
While trained for a specific universe, previous versions have shown surprising capability in general, non-WOF roleplay. This versatility is expected to continue in V7.0, making it a flexible creative partner for various scenarios.
## Training Details

### Training Hardware
This model was trained for 1 epoch on a single rented NVIDIA H100 GPU.
### Training Procedure
A QLoRA (Quantized Low-Rank Adaptation) approach was used for efficient fine-tuning, with the training run configured in Axolotl.
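Axolotl drives the actual run from a YAML config, but for readers who want to see the moving parts of QLoRA in code, here is a minimal, self-contained sketch using transformers, bitsandbytes, and peft; the rank, alpha, dropout, and target modules are illustrative assumptions, not the recipe used for this model.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 4-bit NF4 -- the "Q" in QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-Small-3.2-24B-Instruct-2506",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach low-rank adapters; only these small matrices are trained.
lora_config = LoraConfig(
    r=32,              # assumed rank, not the actual recipe
    lora_alpha=64,     # assumed scaling factor
    lora_dropout=0.05, # assumed dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # tiny fraction of the 24B weights
```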
### Training Data
V7.0 was fine-tuned on a refined version of the V6 dataset, which consists of 3,200 high-quality examples. This version includes targeted improvements to the data:
- Canon-Centric Scenarios: All roleplay scenarios are based on pivotal events from the Wings of Fire book series and explore "what-if" outcomes (e.g., what if Darkstalker hadn't killed Arctic at that moment?). This ensures deep and lore-consistent interactions.
- Canon-Only Characters: The model was trained exclusively on canon characters from the books. AI-generated characters have been removed from the training data (except for the user's persona), leading to more authentic character portrayals.
- Enhanced Dungeon Master (DM) Role: Training data was adjusted to better capture the Dungeon Master role, encouraging the model to prompt the user with multiple-choice actions to drive the story forward. For example:
  > You arrive in front of Queen Scarlet. What do you do? A)... B)... C)...
- Fixed Turn-Taking: The dataset was corrected to eliminate instances of consecutive AI turns, leading to a healthier learning curve and better conversational behavior (a minimal cleanup sketch appears after this list).
- Improved Data Cleaning: The rigorous cleaning process from V6 was maintained, removing formatting artifacts like **scene transitions** for cleaner, more natural prose.
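To make the turn-taking and artifact fixes above concrete, here is a minimal sketch of how such a cleanup pass could look; the helper name, the blanket bold-stripping regex, and the chat-dict format are illustrative assumptions, not the actual pipeline used for this dataset.

```python
import re

def clean_turns(turns):
    """Merge consecutive same-role turns and strip **...** artifacts."""
    cleaned = []
    for turn in turns:
        # Remove bold-wrapped artifacts such as **scene transitions**.
        text = re.sub(r"\*\*[^*]+\*\*", "", turn["content"]).strip()
        if cleaned and cleaned[-1]["role"] == turn["role"]:
            # Two AI (or user) turns in a row: fold into one message.
            cleaned[-1]["content"] += "\n\n" + text
        else:
            cleaned.append({"role": turn["role"], "content": text})
    return cleaned

sample = [
    {"role": "user", "content": "I step into the throne room."},
    {"role": "assistant", "content": "**Scene transition** Queen Scarlet looks up."},
    {"role": "assistant", "content": "What do you do? A)... B)... C)..."},
]
print(clean_turns(sample))  # the two assistant turns become one message
```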
## Intended Use & Limitations
- Intended Use: The primary purpose of this model is creative writing and roleplaying within the Wings of Fire universe. However, user feedback indicates it is also highly effective for general-purpose roleplaying.
- Limitations & Quirks:
  - Performance on tasks outside its training domain (general knowledge, coding, etc.) is not guaranteed and will likely be poor.
  - Versatility: Although it is tuned specifically for Wings of Fire, users report that it remains very capable of ordinary roleplay with other settings and characters.
  - The model may "hallucinate" or generate plausible but non-canonical information, especially when pushed outside the established "what-if" scenarios.
- Content: The training data includes mature and darker themes from the Wings of Fire series, such as conflict, character death, and moral ambiguity. The model is capable of generating content reflecting these themes. As always, it is up to the user what they do with it.
- Formatting: Training data was cleaned to remove narrative artifacts like **scene transitions**. The model should now produce cleaner prose.
- Safety: This model has not undergone additional safety alignment beyond what was included in its base Mistral-Small-3.2 model. Standard responsible AI practices should be followed.
## Recommended Sampler Settings
For optimal performance that balances creativity and coherence, the following default sampler settings are recommended.
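As a sketch of where these settings plug in at inference time, here is a llama-cpp-python example; the temperature, top-p, min-p, and repetition-penalty values are common roleplay defaults standing in as assumptions, not the card's official numbers, and the GGUF filename is a placeholder.

```python
from llama_cpp import Llama

# Placeholder path: point this at whichever quantization you downloaded.
llm = Llama(model_path="M3.2-24B-Animus-V7.0-Q4_K_M.gguf", n_ctx=8192)

out = llm.create_chat_completion(
    messages=[
        {"role": "system",
         "content": "You are the Dungeon Master in the Wings of Fire universe."},
        {"role": "user", "content": "I land in the SkyWing arena."},
    ],
    temperature=0.8,      # assumed value -- tune to taste
    top_p=0.95,           # assumed value
    min_p=0.05,           # assumed value
    repeat_penalty=1.05,  # assumed value
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```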
## Acknowledgements
- Credit to Mistral AI for the powerful Mistral-Small-3.2 architecture.
- Credit to Google for the Gemini Pro model, used in dataset generation.
- Credit to Evan Armstrong for Augmentoolkit, an invaluable tool for dataset creation.
Model tree for Darkhn/M3.2-24B-Animus-V7.0:
- Base model: mistralai/Mistral-Small-3.1-24B-Base-2503