Austral 32B GLM4 Winton

Model banner
Trained by Delta-Vector

Overview

Austral 32B - Winton

Codex finetune of GLM-4-Tulu: a KTO-enhanced Adventure/Roleplay generalist model at the 32B size.

More than 1.5 metres tall, about six metres long, and weighing up to 1000 kilograms, Australovenator wintonensis was a fast and agile hunter, and the largest known Australian theropod.

This is a finetune of Delta-Vector/GLM-4-32B-Tulu-Instruct to be a generalist Roleplay/Adventure model. I've removed some of the "slop" I noticed in an otherwise great model, as well as improving its general writing. This was a multi-stage finetune, and all previous checkpoints are released as well. In testing it has proven to be a great model for Adventure cards & Roleplay, often pushing the plot forward better than other models while avoiding some of the slop you'd find in models from Drummer and co.

Support my finetunes / me on Ko-fi: https://Ko-fi.com/deltavector | Thank you to Auri for helping/testing ♥

Quants

Quant formats

  • GGUF: for use with llama.cpp & forks
  • EXL3: for use with TabbyAPI
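
A minimal sketch of loading a GGUF quant with llama-cpp-python; the filename and quant level below are placeholders, not actual release file names:

from llama_cpp import Llama

# Point model_path at whichever GGUF quant you downloaded (filename here is hypothetical).
llm = Llama(
    model_path="Austral-32B-GLM4-Winton-Q4_K_M.gguf",
    n_ctx=8192,
    chat_format="chatml",  # the model uses ChatML (see Chat Format below)
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hi there!"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])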

Chat Format

This model utilizes ChatML.

<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
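
If you're prompting through Transformers rather than writing ChatML by hand, a small sketch like the following builds the same prompt, assuming the repo's tokenizer ships a ChatML chat template:

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Delta-Vector/Austral-32B-GLM4-Winton")

messages = [
    {"role": "user", "content": "Hi there!"},
    {"role": "assistant", "content": "Nice to meet you!"},
    {"role": "user", "content": "Can I ask a question?"},
]

# add_generation_prompt appends the trailing <|im_start|>assistant turn shown above.
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)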

Training

As is known by now, I trained off my own instruct tune from base, `Delta-Vector/GLM-4-32B-Tulu-Instruct`. That model was then trained for 4 epochs on a datamix of light novels, natural human writing datasets, etc. The resulting model was somewhat incoherent, so I had to KTO it to improve coherency and cohesiveness, but that left it less "creative" than hoped. Usually I'd SFT with PocketDoc's rep-remover data; this time, however, I converted that dataset into KTO format, which resulted in a better model. Thankies to Pocket for that dataset.
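
To make that conversion concrete, here is one way a pair-to-KTO conversion can look in Python, assuming TRL-style KTO fields ("prompt", "completion", boolean "label"); the sample pair and variable names are illustrative, not the actual rep-remover data:

# Hypothetical preference pairs; the real data is PocketDoc's rep-remover set.
pairs = [
    {"prompt": "...", "chosen": "...", "rejected": "..."},
]

# Each pair becomes two KTO rows: the chosen reply labelled desirable,
# the rejected reply labelled undesirable.
def to_kto_rows(pair):
    return [
        {"prompt": pair["prompt"], "completion": pair["chosen"], "label": True},
        {"prompt": pair["prompt"], "completion": pair["rejected"], "label": False},
    ]

kto_rows = [row for pair in pairs for row in to_kto_rows(pair)]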

Config (Post-KTO V2):
https://wandb.ai/new-eden/Austral-32B/runs/zlhv6tfw?nw=nwuserdeltavector

The base SFT was run for 4 epochs on 8 x A100s (ty to my work, Quixi.AI). I then ran KTO for 1 epoch to clean up some coherency issues, and finally trained for another epoch on Rep_Remover to delete slop. The whole run took roughly 80 hours.

Credits

TYSM to my friends: Auri, Lucy, Trappu, Alicat, Kubernetes Bad, Intervitens, NyxKrage & Kalomaze
