Update README.md (#1)
- Update README.md (08bcd3e8abc58ee6637f18ff95b0a328da6d103c)
Co-authored-by: FBL <[email protected]>
README.md CHANGED
@@ -107,6 +107,19 @@ model-index:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=fblgit/cybertron-v4-qw7B-UNAMGS
      name: Open LLM Leaderboard
---
+# cybertron-v4-qw7B-UNAMGS
+
+**UNA IS BACK** Cybertron v4 UNA-MGS, based on the amazing Qwen2.5 7B.
+
+**SCORING #1 7-8B LLM WITH NO CONTAMINATION 21.11.2024 with avg. 31.82**
+
+
+
+This special edition went through UNA at the MLP layers, just like [miniclaus-1.5B](https://huggingface.co/fblgit/miniclaus-qw1.5B-UNAMGS).
+
+Here we use our novel approach called `MGS`. It's up to you to figure out what it means. On top of that we used `UNA: Uniform Neural Alignment`.
+
+Cybertron V4 went through SFT with `MGS & UNA` over the `Magpie-Align/Magpie-Qwen2.5-Pro-1M-v0.1` dataset.

## Llamacpp imatrix Quantizations of cybertron-v4-qw7B-UNAMGS

@@ -267,3 +280,4 @@ Thank you kalomaze and Dampf for assistance in creating the imatrix calibration
Thank you ZeroWw for the inspiration to experiment with embed/output.

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
+
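For anyone landing on this commit who wants to try the model described in the new card section, here is a minimal sketch, assuming the original (non-GGUF) checkpoint is published as `fblgit/cybertron-v4-qw7B-UNAMGS` (the id used in the leaderboard link above) and that standard `transformers` chat usage applies. The dtype and generation settings are illustrative, not the author's recommendations.

```python
# Minimal sketch (not part of the card): load the original checkpoint with
# Hugging Face transformers and run a single chat turn.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fblgit/cybertron-v4-qw7B-UNAMGS"  # id taken from the leaderboard link above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumes a GPU with bf16 support
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize what Qwen2.5 7B is in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```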
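The "Llamacpp imatrix Quantizations" heading above refers to GGUF files. Below is a minimal sketch of running one of them through the `llama-cpp-python` bindings; the quant repo id and filename are assumptions for illustration (check the repo's file listing for the actual names), and the bindings are just one way to drive llama.cpp.

```python
# Minimal sketch (not part of the card): fetch one GGUF quant and run it locally.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="bartowski/cybertron-v4-qw7B-UNAMGS-GGUF",  # assumed quant repo id
    filename="cybertron-v4-qw7B-UNAMGS-Q4_K_M.gguf",    # assumed quant filename
)

# n_gpu_layers=-1 offloads all layers if a GPU build of llama.cpp is available.
llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=-1)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give a one-line summary of Qwen2.5."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```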