Upload folder using huggingface_hub
- .ipynb_checkpoints/README-checkpoint.md +307 -0
- README.md +307 -0
- config.json +46 -0
- configuration.json +1 -0
- generation_config.json +13 -0
- merges.txt +0 -0
- model.safetensors.index.json +0 -0
- tokenizer.json +0 -0
- tokenizer_config.json +239 -0
- vocab.json +0 -0
.ipynb_checkpoints/README-checkpoint.md
ADDED
(Identical to README.md below.)
README.md
ADDED
@@ -0,0 +1,307 @@
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- Qwen3
- AWQ
- 量化修复
- vLLM
base_model:
- Qwen/Qwen3-235B-A22B-Thinking-2507
base_model_relation: quantized
---
# Qwen3-235B-A22B-Thinking-2507-AWQ
Base model: [Qwen/Qwen3-235B-A22B-Thinking-2507](https://www.modelscope.cn/models/Qwen/Qwen3-235B-A22B-Thinking-2507)


### vLLM launch command (single node, 8 GPUs)
<i>Note: an 8-GPU launch must include `--enable-expert-parallel`, otherwise the model's expert tensors cannot be divided evenly across the tensor-parallel ranks; a 4-GPU launch does not need it.</i>
```shell
CONTEXT_LENGTH=32768  # up to 262144

vllm serve \
    tclf90/Qwen3-235B-A22B-Thinking-2507-AWQ \
    --served-model-name Qwen3-235B-A22B-Thinking-2507-AWQ \
    --enable-expert-parallel \
    --swap-space 16 \
    --max-num-seqs 512 \
    --max-model-len $CONTEXT_LENGTH \
    --max-seq-len-to-capture $CONTEXT_LENGTH \
    --gpu-memory-utilization 0.9 \
    --tensor-parallel-size 8 \
    --trust-remote-code \
    --disable-log-requests \
    --host 0.0.0.0 \
    --port 8000
```
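
Once the server is up, you can sanity-check it through its OpenAI-compatible API. A minimal sketch, assuming the launch command above (endpoint at `http://localhost:8000/v1`) and the `openai` Python package (>=1.0):

```python
# Minimal sketch: query the vLLM server started above through its
# OpenAI-compatible endpoint. The api_key is unused by default ("EMPTY").
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen3-235B-A22B-Thinking-2507-AWQ",  # must match --served-model-name
    messages=[{"role": "user", "content": "Briefly explain AWQ quantization."}],
    temperature=0.6,
    top_p=0.95,
    max_tokens=4096,
)
print(response.choices[0].message.content)
```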

### Dependencies

```
vllm>=0.9.2
```

### Model update history
```
2025-07-26
1. Initial commit
```

### Model list

| File size | Last updated |
|-----------|--------------|
| `116GB` | `2025-07-26` |



### Model download

```python
from modelscope import snapshot_download
snapshot_download('tclf90/Qwen3-235B-A22B-Thinking-2507-AWQ', cache_dir="local_path")
```
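
The same snapshot can also be fetched from the Hugging Face Hub; a minimal sketch, assuming the `huggingface_hub` package is installed and the repo id matches this model card:

```python
# Minimal sketch: download the quantized weights from Hugging Face
# instead of ModelScope. "local_path" is a placeholder directory.
from huggingface_hub import snapshot_download

snapshot_download(
    "tclf90/Qwen3-235B-A22B-Thinking-2507-AWQ",
    local_dir="local_path",
)
```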


### Introduction

# Qwen3-235B-A22B-Thinking-2507
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
    <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>

## Highlights

Over the past three months, we have continued to scale the **thinking capability** of Qwen3-235B-A22B, improving both the **quality and depth** of reasoning. We are pleased to introduce **Qwen3-235B-A22B-Thinking-2507**, featuring the following key enhancements:
- **Significantly improved performance** on reasoning tasks, including logical reasoning, mathematics, science, coding, and academic benchmarks that typically require human expertise — achieving **state-of-the-art results among open-source thinking models**.
- **Markedly better general capabilities**, such as instruction following, tool usage, text generation, and alignment with human preferences.
- **Enhanced 256K long-context understanding** capabilities.

**NOTE**: This version has an increased thinking length. We strongly recommend its use in highly complex reasoning tasks.

![image/jpeg](https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-2507/Qwen3-235B-A22B-Thinking-2507.001.jpeg)

## Model Overview

**Qwen3-235B-A22B-Thinking-2507** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 235B in total and 22B activated
- Number of Parameters (Non-Embedding): 234B
- Number of Layers: 94
- Number of Attention Heads (GQA): 64 for Q and 4 for KV
- Number of Experts: 128
- Number of Activated Experts: 8
- Context Length: **262,144 natively**.

**NOTE: This model supports only thinking mode.**

Additionally, to enforce model thinking, the default chat template automatically includes `<think>`. Therefore, it is normal for the model's output to contain only `</think>` without an explicit opening `<think>` tag.

For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).

## Performance

|  | Deepseek-R1-0528 | OpenAI O4-mini | OpenAI O3 | Gemini-2.5 Pro | Claude4 Opus Thinking | Qwen3-235B-A22B Thinking | Qwen3-235B-A22B-Thinking-2507 |
|--- | --- | --- | --- | --- | --- | --- | --- |
| **Knowledge** | | | | | | | |
| MMLU-Pro | 85.0 | 81.9 | **85.9** | 85.6 | - | 82.8 | 84.4 |
| MMLU-Redux | 93.4 | 92.8 | **94.9** | 94.4 | 94.6 | 92.7 | 93.8 |
| GPQA | 81.0 | 81.4* | 83.3* | **86.4** | 79.6 | 71.1 | 81.1 |
| SuperGPQA | 61.7 | 56.4 | - | 62.3 | - | 60.7 | **64.9** |
| **Reasoning** | | | | | | | |
| AIME25 | 87.5 | **92.7*** | 88.9* | 88.0 | 75.5 | 81.5 | 92.3 |
| HMMT25 | 79.4 | 66.7 | 77.5 | 82.5 | 58.3 | 62.5 | **83.9** |
| LiveBench 20241125 | 74.7 | 75.8 | 78.3 | **82.4** | 78.2 | 77.1 | 78.4 |
| HLE | 17.7# | 18.1* | 20.3 | **21.6** | 10.7 | 11.8# | 18.2# |
| **Coding** | | | | | | | |
| LiveCodeBench v6 (25.02-25.05) | 68.7 | 71.8 | 58.6 | 72.5 | 48.9 | 55.7 | **74.1** |
| CFEval | 2099 | 1929 | 2043 | 2001 | - | 2056 | **2134** |
| OJBench | 33.6 | 33.3 | 25.4 | **38.9** | - | 25.6 | 32.5 |
| **Alignment** | | | | | | | |
| IFEval | 79.1 | **92.4** | 92.1 | 90.8 | 89.7 | 83.4 | 87.8 |
| Arena-Hard v2$ | 72.2 | 59.3 | **80.8** | 72.5 | 59.1 | 61.5 | 79.7 |
| Creative Writing v3 | 86.3 | 78.8 | **87.7** | 85.9 | 83.8 | 84.6 | 86.1 |
| WritingBench | 83.2 | 78.4 | 85.3 | 83.1 | 79.1 | 80.3 | **88.3** |
| **Agent** | | | | | | | |
| BFCL-v3 | 63.8 | 67.2 | **72.4** | 67.2 | 61.8 | 70.8 | 71.9 |
| TAU2-Retail | 64.9 | 71.0 | **76.3** | 71.3 | - | 40.4 | 71.9 |
| TAU2-Airline | 60.0 | 59.0 | **70.0** | 60.0 | - | 30.0 | 58.0 |
| TAU2-Telecom | 33.3 | 42.0 | **60.5** | 37.4 | - | 21.9 | 45.6 |
| **Multilingualism** | | | | | | | |
| MultiIF | 63.5 | 78.0 | 80.3 | 77.8 | - | 71.9 | **80.6** |
| MMLU-ProX | 80.6 | 79.0 | 83.3 | **84.7** | - | 80.0 | 81.0 |
| INCLUDE | 79.4 | 80.8 | **86.6** | 85.1 | - | 78.7 | 81.0 |
| PolyMATH | 46.9 | 48.7 | 49.7 | 52.2 | - | 54.7 | **60.1** |

\* For OpenAI O4-mini and O3, we use a medium reasoning effort, except for scores marked with *, which are generated using high reasoning effort.

\# According to the official evaluation criteria of HLE, scores marked with \# refer to models that are not multi-modal and were evaluated only on the text-only subset.

\$ For reproducibility, we report the win rates evaluated by GPT-4.1.

\& For highly challenging tasks (including PolyMATH and all reasoning and coding tasks), we use an output length of 81,920 tokens. For all other tasks, we set the output length to 32,768.


## Quickstart

The code for Qwen3-MoE has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.

With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3_moe'
```
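
If you are unsure which version is installed, a quick check (a minimal sketch; `packaging` ships as a `transformers` dependency):

```python
# Verify the installed transformers is new enough for the qwen3_moe
# architecture (requires >= 4.51.0).
import transformers
from packaging import version

if version.parse(transformers.__version__) < version.parse("4.51.0"):
    raise RuntimeError(
        f"transformers {transformers.__version__} is too old for qwen3_moe; "
        "upgrade with: pip install -U transformers"
    )
```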

The following code snippet illustrates how to use the model to generate content from given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-235B-A22B-Thinking-2507"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# parse the thinking content
try:
    # rindex of token 151668 (</think>)
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("thinking content:", thinking_content)  # no opening <think> tag
print("content:", content)
```

For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-235B-A22B-Thinking-2507 --tp 8 --context-length 262144 --reasoning-parser qwen3
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-235B-A22B-Thinking-2507 --tensor-parallel-size 8 --max-model-len 262144 --enable-reasoning --reasoning-parser deepseek_r1
```

**Note: If you encounter out-of-memory (OOM) issues, consider reducing the context length. However, since the model may require longer token sequences for reasoning, we strongly recommend using a context length greater than 131,072 when possible.**
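
With a reasoning parser enabled as above, the server separates the thinking from the final answer. A minimal sketch for reading both fields, assuming an endpoint at `http://localhost:8000/v1`; note that `reasoning_content` is a vLLM/SGLang extension, not part of the standard OpenAI schema:

```python
# Minimal sketch: read the parsed reasoning and the final answer from an
# OpenAI-compatible endpoint launched with a reasoning parser.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="Qwen/Qwen3-235B-A22B-Thinking-2507",
    messages=[{"role": "user", "content": "What is 17 * 24?"}],
)
message = resp.choices[0].message
print("reasoning:", getattr(message, "reasoning_content", None))  # extension field
print("answer:", message.content)
```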

For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.

## Agentic Use

Qwen3 excels in tool-calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.

To define the available tools, you can use an MCP configuration file, use the integrated tools of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant

# Define LLM
# Using Alibaba Cloud Model Studio
llm_cfg = {
    'model': 'qwen3-235b-a22b-thinking-2507',
    'model_type': 'qwen_dashscope',
}

# Using an OpenAI-compatible API endpoint. It is recommended to disable the reasoning and the tool call parsing
# functionality of the deployment frameworks and let Qwen-Agent automate the related operations. For example,
# `VLLM_USE_MODELSCOPE=true vllm serve Qwen/Qwen3-235B-A22B-Thinking-2507 --served-model-name Qwen3-235B-A22B-Thinking-2507 --tensor-parallel-size 8 --max-model-len 262144`.
#
# llm_cfg = {
#     'model': 'Qwen3-235B-A22B-Thinking-2507',
#
#     # Use a custom endpoint compatible with OpenAI API:
#     'model_server': 'http://localhost:8000/v1',  # api_base without reasoning and tool call parsing
#     'api_key': 'EMPTY',
#     'generate_cfg': {
#         'thought_in_content': True,
#     },
# }

# Define Tools
tools = [
    {'mcpServers': {  # You can specify the MCP configuration file
            'time': {
                'command': 'uvx',
                'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
            },
            "fetch": {
                "command": "uvx",
                "args": ["mcp-server-fetch"]
            }
        }
    },
    'code_interpreter',  # Built-in tools
]

# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)

# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
    pass
print(responses)
```

## Best Practices

To achieve optimal performance, we recommend the following settings:

1. **Sampling Parameters**:
   - We suggest using `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (see the sketch after this list).
   - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.

2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 81,920 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.

3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
   - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
   - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."

4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should include only the final output part, not the thinking content. This is implemented in the provided Jinja2 chat template. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that this best practice is followed (see the sketch after this list).
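
As referenced in items 1 and 4 above, here is a minimal multi-turn sketch against an OpenAI-compatible vLLM endpoint (assumed at `http://localhost:8000/v1`) that applies the recommended sampling parameters and keeps thinking content out of the history; `top_k`/`min_p` go through vLLM's `extra_body` extension rather than the standard OpenAI fields:

```python
# Minimal sketch (not official Qwen code): multi-turn chat that
# (1) uses the recommended sampling parameters and
# (2) stores only final answers, never thinking content, in the history.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
history = []

def chat(user_msg: str) -> str:
    history.append({"role": "user", "content": user_msg})
    resp = client.chat.completions.create(
        model="Qwen3-235B-A22B-Thinking-2507-AWQ",
        messages=history,
        temperature=0.6,
        top_p=0.95,
        max_tokens=32768,
        extra_body={"top_k": 20, "min_p": 0},  # vLLM-specific sampling fields
    )
    answer = resp.choices[0].message.content
    # If the raw text still contains a </think> block, strip it so the
    # thinking content never enters the history.
    if answer and "</think>" in answer:
        answer = answer.split("</think>")[-1].lstrip("\n")
    history.append({"role": "assistant", "content": answer})
    return answer

print(chat("How many prime numbers are there below 50?"))
print(chat("And below 100?"))
```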

### Citation

If you find our work helpful, feel free to cite us.

```
@misc{qwen3technicalreport,
      title={Qwen3 Technical Report},
      author={Qwen Team},
      year={2025},
      eprint={2505.09388},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.09388},
}
```
config.json
ADDED
@@ -0,0 +1,46 @@
{
    "name_or_path": "tclf90/Qwen3-235B-A22B-Thinking-2507-AWQ",
    "architectures": [
        "Qwen3MoeForCausalLM"
    ],
    "attention_bias": false,
    "attention_dropout": 0.0,
    "bos_token_id": 151643,
    "decoder_sparse_step": 1,
    "eos_token_id": 151645,
    "head_dim": 128,
    "hidden_act": "silu",
    "hidden_size": 4096,
    "initializer_range": 0.02,
    "intermediate_size": 12288,
    "max_position_embeddings": 262144,
    "max_window_layers": 94,
    "mlp_only_layers": [],
    "model_type": "qwen3_moe",
    "moe_intermediate_size": 1536,
    "norm_topk_prob": true,
    "num_attention_heads": 64,
    "num_experts": 128,
    "num_experts_per_tok": 8,
    "num_hidden_layers": 94,
    "num_key_value_heads": 4,
    "output_router_logits": false,
    "rms_norm_eps": 1e-06,
    "rope_scaling": null,
    "rope_theta": 5000000,
    "router_aux_loss_coef": 0.001,
    "sliding_window": null,
    "tie_word_embeddings": false,
    "torch_dtype": "float16",
    "transformers_version": "4.51.0",
    "use_cache": true,
    "use_sliding_window": false,
    "vocab_size": 151936,
    "quantization_config": {
        "quant_method": "awq",
        "bits": 4,
        "group_size": 128,
        "version": "gemm",
        "zero_point": true
    }
}
configuration.json
ADDED
@@ -0,0 +1 @@
{"framework":"Pytorch","task":"text-generation"}
generation_config.json
ADDED
@@ -0,0 +1,13 @@
{
    "bos_token_id": 151643,
    "do_sample": true,
    "eos_token_id": [
        151645,
        151643
    ],
    "pad_token_id": 151643,
    "temperature": 0.6,
    "top_k": 20,
    "top_p": 0.95,
    "transformers_version": "4.51.0"
}
merges.txt
ADDED
The diff for this file is too large to render.
See raw diff
model.safetensors.index.json
ADDED
The diff for this file is too large to render.
See raw diff
tokenizer.json
ADDED
The diff for this file is too large to render.
See raw diff
tokenizer_config.json
ADDED
@@ -0,0 +1,239 @@
{
    "add_prefix_space": false,
    "added_tokens_decoder": {
        "151643": {
            "content": "<|endoftext|>",
            "lstrip": false,
            "normalized": false,
            "rstrip": false,
            "single_word": false,
            "special": true
        },
        "151644": {
            "content": "<|im_start|>",
            "lstrip": false,
            "normalized": false,
            "rstrip": false,
            "single_word": false,
            "special": true
        },
        "151645": {
            "content": "<|im_end|>",
            "lstrip": false,
            "normalized": false,
            "rstrip": false,
            "single_word": false,
            "special": true
        },
        "151646": {
            "content": "<|object_ref_start|>",
            "lstrip": false,
            "normalized": false,
            "rstrip": false,
            "single_word": false,
            "special": true
        },
        "151647": {
            "content": "<|object_ref_end|>",
            "lstrip": false,
            "normalized": false,
            "rstrip": false,
            "single_word": false,
            "special": true
        },
        "151648": {
            "content": "<|box_start|>",
            "lstrip": false,
            "normalized": false,
            "rstrip": false,
            "single_word": false,
            "special": true
        },
        "151649": {
            "content": "<|box_end|>",
            "lstrip": false,
            "normalized": false,
            "rstrip": false,
            "single_word": false,
            "special": true
        },
        "151650": {
            "content": "<|quad_start|>",
            "lstrip": false,
            "normalized": false,
            "rstrip": false,
            "single_word": false,
            "special": true
        },
        "151651": {
            "content": "<|quad_end|>",
            "lstrip": false,
            "normalized": false,
            "rstrip": false,
            "single_word": false,
            "special": true
        },
        "151652": {
            "content": "<|vision_start|>",
            "lstrip": false,
            "normalized": false,
            "rstrip": false,
            "single_word": false,
            "special": true
        },
        "151653": {
            "content": "<|vision_end|>",
            "lstrip": false,
            "normalized": false,
            "rstrip": false,
            "single_word": false,
            "special": true
        },
        "151654": {
            "content": "<|vision_pad|>",
            "lstrip": false,
            "normalized": false,
            "rstrip": false,
            "single_word": false,
            "special": true
        },
        "151655": {
            "content": "<|image_pad|>",
            "lstrip": false,
            "normalized": false,
            "rstrip": false,
            "single_word": false,
            "special": true
        },
        "151656": {
            "content": "<|video_pad|>",
            "lstrip": false,
            "normalized": false,
            "rstrip": false,
            "single_word": false,
            "special": true
        },
        "151657": {
            "content": "<tool_call>",
            "lstrip": false,
            "normalized": false,
            "rstrip": false,
            "single_word": false,
            "special": false
        },
        "151658": {
            "content": "</tool_call>",
            "lstrip": false,
            "normalized": false,
            "rstrip": false,
            "single_word": false,
            "special": false
        },
        "151659": {
            "content": "<|fim_prefix|>",
            "lstrip": false,
            "normalized": false,
            "rstrip": false,
            "single_word": false,
            "special": false
        },
        "151660": {
            "content": "<|fim_middle|>",
            "lstrip": false,
            "normalized": false,
            "rstrip": false,
            "single_word": false,
            "special": false
        },
        "151661": {
            "content": "<|fim_suffix|>",
            "lstrip": false,
            "normalized": false,
            "rstrip": false,
            "single_word": false,
            "special": false
        },
        "151662": {
            "content": "<|fim_pad|>",
            "lstrip": false,
            "normalized": false,
            "rstrip": false,
            "single_word": false,
            "special": false
        },
        "151663": {
            "content": "<|repo_name|>",
            "lstrip": false,
            "normalized": false,
            "rstrip": false,
            "single_word": false,
            "special": false
        },
        "151664": {
            "content": "<|file_sep|>",
            "lstrip": false,
            "normalized": false,
            "rstrip": false,
            "single_word": false,
            "special": false
        },
        "151665": {
            "content": "<tool_response>",
            "lstrip": false,
            "normalized": false,
            "rstrip": false,
            "single_word": false,
            "special": false
        },
        "151666": {
            "content": "</tool_response>",
            "lstrip": false,
            "normalized": false,
            "rstrip": false,
            "single_word": false,
            "special": false
        },
        "151667": {
            "content": "<think>",
            "lstrip": false,
            "normalized": false,
            "rstrip": false,
            "single_word": false,
            "special": false
        },
        "151668": {
            "content": "</think>",
            "lstrip": false,
            "normalized": false,
            "rstrip": false,
            "single_word": false,
            "special": false
        }
    },
    "additional_special_tokens": [
        "<|im_start|>",
        "<|im_end|>",
        "<|object_ref_start|>",
        "<|object_ref_end|>",
        "<|box_start|>",
        "<|box_end|>",
        "<|quad_start|>",
        "<|quad_end|>",
        "<|vision_start|>",
        "<|vision_end|>",
        "<|vision_pad|>",
        "<|image_pad|>",
        "<|video_pad|>"
    ],
    "bos_token": null,
    "chat_template": "{%- if tools %}\n {{- '<|im_start|>system\\n' }}\n {%- if messages[0].role == 'system' %}\n {{- messages[0].content + '\\n\\n' }}\n {%- endif %}\n {{- \"# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n {%- for tool in tools %}\n {{- \"\\n\" }}\n {{- tool | tojson }}\n {%- endfor %}\n {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n {%- if messages[0].role == 'system' %}\n {{- '<|im_start|>system\\n' + messages[0].content + '<|im_end|>\\n' }}\n {%- endif %}\n{%- endif %}\n{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}\n{%- for message in messages[::-1] %}\n {%- set index = (messages|length - 1) - loop.index0 %}\n {%- if ns.multi_step_tool and message.role == \"user\" and message.content is string and not(message.content.startswith('<tool_response>') and message.content.endswith('</tool_response>')) %}\n {%- set ns.multi_step_tool = false %}\n {%- set ns.last_query_index = index %}\n {%- endif %}\n{%- endfor %}\n{%- for message in messages %}\n {%- if message.content is string %}\n {%- set content = message.content %}\n {%- else %}\n {%- set content = '' %}\n {%- endif %}\n {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) %}\n {{- '<|im_start|>' + message.role + '\\n' + content + '<|im_end|>' + '\\n' }}\n {%- elif message.role == \"assistant\" %}\n {%- set reasoning_content = '' %}\n {%- if message.reasoning_content is string %}\n {%- set reasoning_content = message.reasoning_content %}\n {%- else %}\n {%- if '</think>' in content %}\n {%- set reasoning_content = content.split('</think>')[0].rstrip('\\n').split('<think>')[-1].lstrip('\\n') %}\n {%- set content = content.split('</think>')[-1].lstrip('\\n') %}\n {%- endif %}\n {%- endif %}\n {%- if loop.index0 > ns.last_query_index %}\n {%- if loop.last or (not loop.last and reasoning_content) %}\n {{- '<|im_start|>' + message.role + '\\n<think>\\n' + reasoning_content.strip('\\n') + '\\n</think>\\n\\n' + content.lstrip('\\n') }}\n {%- else %}\n {{- '<|im_start|>' + message.role + '\\n' + content }}\n {%- endif %}\n {%- else %}\n {{- '<|im_start|>' + message.role + '\\n' + content }}\n {%- endif %}\n {%- if message.tool_calls %}\n {%- for tool_call in message.tool_calls %}\n {%- if (loop.first and content) or (not loop.first) %}\n {{- '\\n' }}\n {%- endif %}\n {%- if tool_call.function %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {{- '<tool_call>\\n{\"name\": \"' }}\n {{- tool_call.name }}\n {{- '\", \"arguments\": ' }}\n {%- if tool_call.arguments is string %}\n {{- tool_call.arguments }}\n {%- else %}\n {{- tool_call.arguments | tojson }}\n {%- endif %}\n {{- '}\\n</tool_call>' }}\n {%- endfor %}\n {%- endif %}\n {{- '<|im_end|>\\n' }}\n {%- elif message.role == \"tool\" %}\n {%- if loop.first or (messages[loop.index0 - 1].role != \"tool\") %}\n {{- '<|im_start|>user' }}\n {%- endif %}\n {{- '\\n<tool_response>\\n' }}\n {{- content }}\n {{- '\\n</tool_response>' }}\n {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n {{- '<|im_end|>\\n' }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n{%- endif %}",
    "clean_up_tokenization_spaces": false,
    "eos_token": "<|im_end|>",
    "errors": "replace",
    "model_max_length": 262144,
    "pad_token": "<|endoftext|>",
    "split_special_tokens": false,
    "tokenizer_class": "Qwen2Tokenizer",
    "unk_token": null,
    "add_bos_token": false
}
vocab.json
ADDED
The diff for this file is too large to render.
See raw diff