---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
datasets:
- Delta-Vector/Hydrus-Preview-Tulu-3-SFT-Mix
base_model:
- arcee-ai/GLM-4-32B-Base-32K
library_name: transformers
tags:
- instruct
- code
- chemistry
- GLM
---
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/66c26b6fb01b19d8c3c2467b/j9LpDp1Wup-m-IJ_XMh23.png" alt="Image" width="400" />
<br>
<small><em>Promise I will never go blonde like Kanye</em></small>
</div>
---
# Overview
Didn't really have any cool README ideas for this, so we're just going with whatever song I'm listening to right now, and it happened to be `Baby I'm Bleeding`.

Nevertheless, this is a finetune of the 32K context-extended (or fixed?) Arcee GLM-4 base, trained shrimply on the Tulu-3 SFT Mixture, *but* with the safety-alignment examples removed. It came out pretty well. It uses ChatML, since the GLM-4 format gave me a headache. It's a decently competent assistant, although I haven't tested how well the model performs at longer contexts, nor have I done any RL afterwards to smooth out its rough edges.

I think it should be a decent base for any future finetunes; I felt GLM-4 really wasn't given the proper time of day, and it's a way better base than any Qwen3 model.
# Quants
- GGUF: https://huggingface.co/mradermacher/GLM-Tulu-ChatML-GGUF
- Imatrix GGUF: https://huggingface.co/mradermacher/GLM-Tulu-ChatML-i1-GGUF
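
If you want to try a quant locally, here's a minimal sketch using llama-cpp-python. The filename glob and the Q4_K_M quant level are assumptions on my part, so check the quant repo's file listing for what's actually there:

```python
# A minimal sketch of running one of the GGUF quants with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/GLM-Tulu-ChatML-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant level; pick any file the repo actually ships
    n_ctx=8192,               # bump toward 32K if you have the memory for it
)

# create_chat_completion applies the chat template embedded in the GGUF
# (ChatML for this model) to the message list before generating.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "system prompt"},
        {"role": "user", "content": "Hi there!"},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```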
# Prompting
The model was trained with ChatML formatting:
```
<|im_start|>system
system prompt<|im_end|>
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
```
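
With transformers you can let `apply_chat_template` render this format for you. A minimal sketch, assuming the repo's bundled chat template is the ChatML one above; the model id below is a placeholder, so swap in the actual repo id:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/GLM-Tulu-ChatML"  # hypothetical placeholder; substitute the real repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [
    {"role": "system", "content": "system prompt"},
    {"role": "user", "content": "Hi there!"},
]

# Renders the messages with the ChatML template and appends the opening
# <|im_start|>assistant tag so the model continues as the assistant.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```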
# Configs
WandB: https://wandb.ai/new-eden/Training-A100/runs/05kktve8?nw=nwuserdeltavector

The training run took 15 hours on 8xB200s provided by Deepinfra and Cognitive Computations; the config is linked in the WandB run.
# Credits
Thank you to Lucy, Auri, NyxKrage, the creators of the Tulu-SFT-Mix, and everyone at Anthracite & Allura.