Model Details

This model is a mixed-precision INT4 model (group_size 64, symmetric quantization) of Qwen/Qwen3-235B-A22B-Instruct-2507, generated by intel/auto-round via RTN (round-to-nearest, no algorithm tuning).
Non-expert layers fall back to 8 bits with group_size 128, and the mlp.gate layers fall back to 16 bits to ensure the model runs successfully on vLLM.
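The per-layer fallbacks described above are recorded in the saved quantization_config. Below is a minimal sketch that inspects them without downloading the full checkpoint; it assumes the huggingface_hub package is installed, and the field names follow the export script shown later in this card.

import json
from huggingface_hub import hf_hub_download

# Fetch only config.json, not the model weights
config_path = hf_hub_download(
    "Intel/Qwen3-235B-A22B-Instruct-2507-int4-mixed-ar", "config.json"
)
with open(config_path) as f:
    config = json.load(f)

qcfg = config["quantization_config"]
print("default:", qcfg["bits"], "bits, group_size", qcfg["group_size"])
# Per-layer overrides (8-bit non-expert layers, 16-bit mlp.gate) live in extra_config
for name, override in list(qcfg["extra_config"].items())[:5]:
    print(name, override)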

Please follow the license of the original model.

How To Use

vLLM usage

vllm serve Intel/Qwen3-235B-A22B-Instruct-2507-int4-mixed-ar --tensor-parallel-size 4 --max-model-len 262144
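The command above assumes 4 GPUs for tensor parallelism. Once the server is up, it exposes an OpenAI-compatible API (on port 8000 by default). A minimal client sketch, assuming the openai Python package is installed:

from openai import OpenAI

# Point the client at the local vLLM server; no real API key is needed
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Intel/Qwen3-235B-A22B-Instruct-2507-int4-mixed-ar",
    messages=[{"role": "user", "content": "Give me a short introduction to large language model."}],
    max_tokens=512,
)
print(response.choices[0].message.content)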

INT4 Inference on CPU/Intel GPU/CUDA

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Intel/Qwen3-235B-A22B-Instruct-2507-int4-mixed-ar"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=16384
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

content = tokenizer.decode(output_ids, skip_special_tokens=True)

print("content:", content)

Generate the model

Here is a sample script to reproduce the model:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "Qwen/Qwen3-235B-A22B-Instruct-2507"

model = AutoModelForCausalLM.from_pretrained(model_name,
                                             device_map="cpu", torch_dtype="auto")

tokenizer = AutoTokenizer.from_pretrained(model_name)

layer_config = {}
for n, m in model.named_modules():
    if "mlp.gate" in n:  # vLLM only supports 16-bit for this layer
        layer_config[n] = {"bits": 16}
    elif isinstance(m, torch.nn.Linear) and ("expert" not in n or "shared_experts" in n) and n != "lm_head":
        # Non-expert linear layers (and shared experts) fall back to 8 bits
        layer_config[n] = {"bits": 8, "group_size": 128}

# iters=0 selects plain RTN quantization with no algorithm tuning
autoround = AutoRound(model, tokenizer, iters=0, group_size=64, layer_config=layer_config)
output_dir = "/dataset/Qwen3-235B-A22B-Instruct-2507-int4-mixed"
autoround.quantize_and_save(output_dir)

# Workaround for the vLLM QKV-fusion issue; this will be fixed in vLLM later.
# vLLM fuses q_proj/k_proj/v_proj into a single qkv_proj module, so the fused
# name needs an explicit entry in extra_config.
import os
import json

config_path = os.path.join(output_dir, "config.json")

with open(config_path, "r") as file:
    config = json.load(file)
extra_config = config["quantization_config"]["extra_config"]
num_hidden_layers = config["num_hidden_layers"]
for i in range(num_hidden_layers):
    qkv_name = f"model.layers.{i}.self_attn.qkv_proj"
    extra_config[qkv_name] = {"bits": 8, "group_size": 128}
with open(config_path, "w") as file:
    json.dump(config, file, indent=2)
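As a quick sanity check (illustrative only, not part of auto-round), you can verify that the patched config now carries a fused-QKV override for every layer:

with open(config_path, "r") as file:
    patched = json.load(file)
overrides = patched["quantization_config"]["extra_config"]
# Collect any layer indices whose fused QKV name is still missing
missing = [
    i for i in range(patched["num_hidden_layers"])
    if f"model.layers.{i}.self_attn.qkv_proj" not in overrides
]
print("missing fused-QKV overrides:", missing or "none")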

Ethical Considerations and Limitations

The model can produce factually incorrect output and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the fine-tuning datasets, it is possible that this model could generate lewd, biased, or otherwise offensive outputs.

Therefore, before deploying any applications of the model, developers should perform safety testing.

Caveats and Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

Here is a useful link to learn more about Intel's AI software:

  • Intel Neural Compressor: https://github.com/intel/neural-compressor

Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

Cite

@article{cheng2023optimize,
  title={Optimize weight rounding via signed gradient descent for the quantization of llms},
  author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi},
  journal={arXiv preprint arXiv:2309.05516},
  year={2023}
}

arXiv: https://arxiv.org/abs/2309.05516 · GitHub: https://github.com/intel/auto-round
