---
library_name: transformers
license: apache-2.0
language:
  - en
tags:
  - qwen3
  - moe
  - merge
  - fp32
  - linear-merge
  - basedbase
base_model:
  - BasedBase/Qwen3-30B-A3B-Thinking-2507-Deepseek-v3.1-Distill-FP32
  - BasedBase/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2-Fp32
---

**EXPERIMENTAL** — model merged and quantized by an AI agent!

# Qwen3-30B A3B — Think+Code (Linear FP32, 60/40)

A CPU-merged FP32 model that linearly blends:

- **Thinking (60%):** BasedBase/Qwen3-30B-A3B-Thinking-2507-Deepseek-v3.1-Distill-FP32
- **Coder-Instruct (40%):** BasedBase/Qwen3-Coder-30B-A3B-Instruct-480B-Distill-V2-Fp32

Saved as ~4 GB safetensors shards with an index file (`model.safetensors.index.json`). Tokenizer and config files are sourced from the Thinking model and backfilled from the Coder model where missing.
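The 60/40 linear merge amounts to a per-tensor weighted average of the two checkpoints' state dicts. A minimal sketch of the idea (the function name and toy state dicts below are illustrative assumptions, not the actual merge script used):

```python
import torch

def linear_merge(sd_a: dict, sd_b: dict, w_a: float = 0.6, w_b: float = 0.4) -> dict:
    """Blend two state dicts tensor-by-tensor: w_a * A + w_b * B, in FP32."""
    assert sd_a.keys() == sd_b.keys(), "state dicts must share the same keys"
    return {k: w_a * sd_a[k].float() + w_b * sd_b[k].float() for k in sd_a}

# Toy stand-ins for the Thinking and Coder checkpoints.
thinking = {"w": torch.full((2, 2), 1.0)}
coder = {"w": torch.full((2, 2), 0.5)}
merged = linear_merge(thinking, coder)  # each element: 0.6*1.0 + 0.4*0.5 = 0.8
```

In practice this is done shard by shard to keep CPU memory bounded, then re-saved as new shards plus a fresh index.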

## Load (Transformers)

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "BennyDaBall/Qwen3-30B-A3B-ThinkCode-Linear-FP32"
tok = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    trust_remote_code=True,
    torch_dtype="float32",
)
```
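Keep in mind that FP32 weights cost 4 bytes per parameter, so loading a ~30B-parameter model in full FP32 needs on the order of 120 GB for the weights alone (a back-of-envelope estimate; activations and KV cache add more):

```python
# Rough FP32 weight footprint, assuming ~30e9 total parameters (from the model name).
params = 30e9
bytes_per_param = 4  # float32
weight_gb = params * bytes_per_param / 1e9
print(f"~{weight_gb:.0f} GB of weights")  # ~120 GB
```

If that exceeds available RAM/VRAM, consider `torch_dtype="bfloat16"` (halving the footprint) or a quantized variant.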