Instructions for using amd/Qwen3.5-397B-A17B-MXFP4 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use amd/Qwen3.5-397B-A17B-MXFP4 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="amd/Qwen3.5-397B-A17B-MXFP4")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("amd/Qwen3.5-397B-A17B-MXFP4")
model = AutoModelForImageTextToText.from_pretrained("amd/Qwen3.5-397B-A17B-MXFP4")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
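Note that a 397B-parameter checkpoint will not fit on a single GPU. A minimal loading sketch, assuming accelerate is installed so that `device_map="auto"` can shard the weights across all visible devices (adjust memory settings for your hardware):

```python
# Sketch: sharded multi-GPU loading (assumes `pip install accelerate`).
# device_map="auto" lets Transformers place layers across available GPUs.
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "amd/Qwen3.5-397B-A17B-MXFP4"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    device_map="auto",  # shard across all visible GPUs
)
```

- Notebooks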
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use amd/Qwen3.5-397B-A17B-MXFP4 with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "amd/Qwen3.5-397B-A17B-MXFP4"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "amd/Qwen3.5-397B-A17B-MXFP4",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

Use Docker
```bash
docker model run hf.co/amd/Qwen3.5-397B-A17B-MXFP4
```
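With the server from `vllm serve` above running (default port 8000), you can also call it from Python using the openai client package; a minimal sketch, assuming `pip install openai` and that the server has no authentication configured:

```python
# Minimal OpenAI-compatible client for the local vLLM server (assumption:
# it is listening on localhost:8000 with no API key required).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="amd/Qwen3.5-397B-A17B-MXFP4",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```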
- SGLang
How to use amd/Qwen3.5-397B-A17B-MXFP4 with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "amd/Qwen3.5-397B-A17B-MXFP4" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "amd/Qwen3.5-397B-A17B-MXFP4",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "amd/Qwen3.5-397B-A17B-MXFP4" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "amd/Qwen3.5-397B-A17B-MXFP4",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

- Docker Model Runner
How to use amd/Qwen3.5-397B-A17B-MXFP4 with Docker Model Runner:
```bash
docker model run hf.co/amd/Qwen3.5-397B-A17B-MXFP4
```
## Model Overview
- Model Architecture: Qwen3_5MoeForConditionalGeneration
- Input: Text and images
- Output: Text
- Supported Hardware Microarchitecture: AMD Instinct MI300, MI350/MI355
- ROCm: 7.0.0
- PyTorch: 2.9.1
- Transformers: 5.3.0
- Operating System(s): Linux
- Inference Engine: SGLang/vLLM
- Model Optimizer: AMD-Quark (v0.11.1)
- Quantized layers: Experts in language model only
- Weight quantization: OCP MXFP4, Static
- Activation quantization: OCP MXFP4, Dynamic
## Model Quantization
The model was quantized from Qwen/Qwen3.5-397B-A17B-FP8 using AMD-Quark. Both the weights and activations of the expert layers are quantized to OCP MXFP4 (weights statically, activations dynamically).
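For intuition about the format: OCP MXFP4 stores values in blocks of 32, each block sharing one power-of-two (E8M0) scale, with each element held as a 4-bit E2M1 float. The NumPy sketch below fake-quantizes one block to show the effect; it is illustrative only and is not the AMD-Quark implementation.

```python
# Illustrative MXFP4 fake-quantization of one 32-value block (NOT AMD-Quark's kernel).
import numpy as np

E2M1_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])  # FP4 magnitudes

def fake_quantize_mxfp4(x: np.ndarray) -> np.ndarray:
    """Quantize-dequantize a block of 32 floats with a shared power-of-two scale."""
    assert x.size == 32
    amax = np.abs(x).max()
    if amax == 0:
        return np.zeros_like(x)
    # Shared E8M0 scale: one common choice aligns the block maximum with the
    # largest E2M1 magnitude (6.0) by taking floor(log2(amax)) - 2.
    scale = 2.0 ** (np.floor(np.log2(amax)) - 2)
    scaled = x / scale
    # Snap each element to the nearest representable FP4 magnitude (clips at 6.0).
    nearest = E2M1_GRID[np.abs(np.abs(scaled)[:, None] - E2M1_GRID[None, :]).argmin(axis=1)]
    return np.sign(scaled) * nearest * scale

block = np.random.randn(32).astype(np.float32)
print("max abs error:", np.abs(block - fake_quantize_mxfp4(block)).max())
```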
Quantization script:
```python
import os

from quark.torch import LLMTemplate, ModelQuantizer
from quark.common.profiler import GlobalProfiler

# Register the qwen3_5_moe template
qwen3_5_moe_template = LLMTemplate(
    model_type="qwen3_5_moe",
    kv_layers_name=["*k_proj", "*v_proj"],
    q_layer_name="*q_proj",
)
LLMTemplate.register_template(qwen3_5_moe_template)

# Configuration
ckpt_path = "Qwen/Qwen3.5-397B-A17B-FP8"
output_dir = "amd/Qwen3.5-397B-A17B-MXFP4"
quant_scheme = "mxfp4"
exclude_layers = [
    "lm_head",
    "model.visual.*",
    "mtp.*",
    "*mlp.gate",
    "*shared_expert_gate*",
    "*.linear_attn.*",
    "*.self_attn.*",
    "*.shared_expert.*",
]

# Get the quantization config from the template
template = LLMTemplate.get("qwen3_5_moe")
quant_config = template.get_config(scheme=quant_scheme, exclude_layers=exclude_layers)

# Quantize in file-to-file mode
profiler = GlobalProfiler(output_path=os.path.join(output_dir, "quark_profile.yaml"))
quantizer = ModelQuantizer(quant_config)
quantizer.direct_quantize_checkpoint(
    pretrained_model_path=ckpt_path,
    save_path=output_dir,
)
```
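After quantization, a quick smoke test is worthwhile before running full evaluations; a minimal sketch, assuming the checkpoint saved to `output_dir` loads through the same Transformers API shown at the top of this card:

```python
# Sanity-check sketch: load the freshly quantized checkpoint and generate a few
# tokens. Assumes `output_dir` from the script above, accelerate installed for
# device_map="auto", and enough GPU memory for the sharded model.
from transformers import AutoProcessor, AutoModelForImageTextToText

output_dir = "amd/Qwen3.5-397B-A17B-MXFP4"  # path used as save_path above
processor = AutoProcessor.from_pretrained(output_dir)
model = AutoModelForImageTextToText.from_pretrained(output_dir, device_map="auto")

messages = [{"role": "user", "content": [{"type": "text", "text": "Say hello."}]}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)
print(processor.decode(model.generate(**inputs, max_new_tokens=10)[0]))
```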
For further details or issues, please refer to the AMD-Quark documentation or contact the respective developers.
## Evaluation
The model was evaluated on the GSM8K benchmark using the vLLM framework.
### Accuracy
| Benchmark | Qwen/Qwen3.5-397B-A17B-FP8 | amd/Qwen3.5-397B-A17B-MXFP4 (this model) | Recovery |
|---|---|---|---|
| gsm8k (flexible-extract) | 95.38 | 93.48 | 98.01% |

Recovery is the quantized model's score as a percentage of the baseline score.
### Reproduction
The GSM8K results were obtained with the vLLM framework using the Docker image rocm/vllm-dev:nightly_main_20260211, which comes with vLLM installed inside the container.
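A typical way to launch that container on a ROCm machine is sketched below; the device and group flags are the standard ROCm Docker passthrough options, and lm-evaluation-harness (`pip install lm-eval`) may still need to be installed inside the container. Adjust paths and resources for your system.

```bash
# Sketch: start the evaluation container (standard ROCm passthrough flags;
# adjust the cache mount, shared memory size, and image tag as needed).
docker run -it --rm \
  --network=host \
  --device=/dev/kfd \
  --device=/dev/dri \
  --group-add video \
  --ipc=host \
  --shm-size 32g \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  rocm/vllm-dev:nightly_main_20260211 \
  bash

# Inside the container, if lm-evaluation-harness is not already present:
pip install lm-eval
```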
Evaluate the model in a new terminal:
```bash
lm_eval \
  --model vllm \
  --model_args pretrained=amd/Qwen3.5-397B-A17B-MXFP4,tensor_parallel_size=4,max_model_len=262144,gpu_memory_utilization=0.90,max_gen_toks=2048,trust_remote_code=True,reasoning_parser=qwen3 \
  --tasks gsm8k --num_fewshot 5 \
  --batch_size auto
```
## License
Modifications Copyright (c) 2026 Advanced Micro Devices, Inc. All rights reserved.