Commit 12c7e28 (verified)
Author: danielhanchen
Parent(s): 260455a

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,276 @@
- ---
- license: apache-2.0
- ---

---
tags:
- unsloth
base_model:
- Qwen/Qwen3-235B-A22B-Thinking-2507-FP8
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507-FP8/blob/main/LICENSE
pipeline_tag: text-generation
---
> [!NOTE]
> Includes Unsloth **chat template fixes**! <br> For `llama.cpp`, use `--jinja`
>

<div>
<p style="margin-top: 0;margin-bottom: 0;">
    <em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em>
</p>
<div style="display: flex; gap: 5px; align-items: center; ">
<a href="https://github.com/unslothai/unsloth/">
    <img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
</a>
<a href="https://discord.gg/unsloth">
    <img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
</a>
<a href="https://docs.unsloth.ai/">
    <img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
</a>
</div>
</div>


# Qwen3-235B-A22B-Thinking-2507-FP8
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
    <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>

## Highlights

Over the past three months, we have continued to scale the **thinking capability** of Qwen3-235B-A22B, improving both the **quality and depth** of reasoning. We are pleased to introduce **Qwen3-235B-A22B-Thinking-2507-FP8**, featuring the following key enhancements:
- **Significantly improved performance** on reasoning tasks, including logical reasoning, mathematics, science, coding, and academic benchmarks that typically require human expertise, achieving **state-of-the-art results among open-source thinking models**.
- **Markedly better general capabilities**, such as instruction following, tool usage, text generation, and alignment with human preferences.
- **Enhanced 256K long-context understanding** capabilities.

**NOTE**: This version has an increased thinking length. We strongly recommend its use for highly complex reasoning tasks.

![image/jpeg](https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3-2507/Qwen3-235B-A22B-Thinking-2507.jpeg)

## Model Overview

This repo contains the FP8 version of **Qwen3-235B-A22B-Thinking-2507**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 235B in total and 22B activated
- Number of Parameters (Non-Embedding): 234B
- Number of Layers: 94
- Number of Attention Heads (GQA): 64 for Q and 4 for KV
- Number of Experts: 128
- Number of Activated Experts: 8
- Context Length: **262,144 natively**.

**NOTE: This model supports only thinking mode. Meanwhile, specifying `enable_thinking=True` is no longer required.**

Additionally, to enforce model thinking, the default chat template automatically includes `<think>`. Therefore, it is normal for the model's output to contain only `</think>` without an explicit opening `<think>` tag.
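
You can see this directly by rendering the chat template. A minimal sketch (standard `transformers` tokenizer APIs; nothing here is specific to this repo beyond the model name):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-235B-A22B-Thinking-2507-FP8")

# Render the default chat template for a single user turn.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello"}],
    tokenize=False,
    add_generation_prompt=True,
)

# The generation prompt already ends with the assistant header and an opening
# <think> tag, which is why model outputs contain only the closing </think>.
print(repr(prompt[-40:]))  # ... '<|im_start|>assistant\n<think>\n'
```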

For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).

## Performance

| | Deepseek-R1-0528 | OpenAI O4-mini | OpenAI O3 | Gemini-2.5 Pro | Claude4 Opus Thinking | Qwen3-235B-A22B Thinking | Qwen3-235B-A22B-Thinking-2507 |
|--- | --- | --- | --- | --- | --- | --- | --- |
| **Knowledge** | | | | | | | |
| MMLU-Pro | 85.0 | 81.9 | **85.9** | 85.6 | - | 82.8 | 84.4 |
| MMLU-Redux | 93.4 | 92.8 | **94.9** | 94.4 | 94.6 | 92.7 | 93.8 |
| GPQA | 81.0 | 81.4* | 83.3* | **86.4** | 79.6 | 71.1 | 81.1 |
| SuperGPQA | 61.7 | 56.4 | - | 62.3 | - | 60.7 | **64.9** |
| **Reasoning** | | | | | | | |
| AIME25 | 87.5 | **92.7*** | 88.9* | 88.0 | 75.5 | 81.5 | 92.3 |
| HMMT25 | 79.4 | 66.7 | 77.5 | 82.5 | 58.3 | 62.5 | **83.9** |
| LiveBench 20241125 | 74.7 | 75.8 | 78.3 | **82.4** | 78.2 | 77.1 | 78.4 |
| HLE | 17.7# | 18.1* | 20.3 | **21.6** | 10.7 | 11.8# | 18.2# |
| **Coding** | | | | | | | |
| LiveCodeBench v6 (25.02-25.05) | 68.7 | 71.8 | 58.6 | 72.5 | 48.9 | 55.7 | **74.1** |
| CFEval | 2099 | 1929 | 2043 | 2001 | - | 2056 | **2134** |
| OJBench | 33.6 | 33.3 | 25.4 | **38.9** | - | 25.6 | 32.5 |
| **Alignment** | | | | | | | |
| IFEval | 79.1 | **92.4** | 92.1 | 90.8 | 89.7 | 83.4 | 87.8 |
| Arena-Hard v2$ | 72.2 | 59.3 | **80.8** | 72.5 | 59.1 | 61.5 | 79.7 |
| Creative Writing v3 | 86.3 | 78.8 | **87.7** | 85.9 | 83.8 | 84.6 | 86.1 |
| WritingBench | 83.2 | 78.4 | 85.3 | 83.1 | 79.1 | 80.3 | **88.3** |
| **Agent** | | | | | | | |
| BFCL-v3 | 63.8 | 67.2 | **72.4** | 67.2 | 61.8 | 70.8 | 71.9 |
| TAU2-Retail | 64.9 | 71.0 | **76.3** | 71.3 | - | 40.4 | 71.9 |
| TAU2-Airline | 60.0 | 59.0 | **70.0** | 60.0 | - | 30.0 | 58.0 |
| TAU2-Telecom | 33.3 | 42.0 | **60.5** | 37.4 | - | 21.9 | 45.6 |
| **Multilingualism** | | | | | | | |
| MultiIF | 63.5 | 78.0 | 80.3 | 77.8 | - | 71.9 | **80.6** |
| MMLU-ProX | 80.6 | 79.0 | 83.3 | **84.7** | - | 80.0 | 81.0 |
| INCLUDE | 79.4 | 80.8 | **86.6** | 85.1 | - | 78.7 | 81.0 |
| PolyMATH | 46.9 | 48.7 | 49.7 | 52.2 | - | 54.7 | **60.1** |

\* For OpenAI O4-mini and O3, we use medium reasoning effort, except for the scores marked with *, which are generated using high reasoning effort.

\# According to the official evaluation criteria of HLE, scores marked with \# refer to models that are not multi-modal and were evaluated only on the text-only subset.

$ For reproducibility, we report the win rates evaluated by GPT-4.1.

\& For highly challenging tasks (including PolyMATH and all reasoning and coding tasks), we use an output length of 81,920 tokens. For all other tasks, we set the output length to 32,768.


## Quickstart

The code for Qwen3-MoE is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.

With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3_moe'
```
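
If you hit this error, upgrading `transformers` resolves it, for example:
```shell
pip install "transformers>=4.51.0"
```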

The following code snippet illustrates how to use the model to generate content from a given input.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-235B-A22B-Thinking-2507-FP8"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# parse thinking content
try:
    # rindex finding 151668 (</think>)
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("thinking content:", thinking_content)  # no opening <think> tag
print("content:", content)
```

For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-235B-A22B-Thinking-2507-FP8 --tp 4 --context-length 262144 --reasoning-parser qwen3
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-235B-A22B-Thinking-2507-FP8 --tensor-parallel-size 4 --max-model-len 262144 --enable-reasoning --reasoning-parser deepseek_r1
```

**Note: If you encounter out-of-memory (OOM) issues, you may consider reducing the context length to a smaller value. However, since the model may require longer token sequences for reasoning, we strongly recommend using a context length greater than 131,072 when possible.**
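
Once one of the servers above is running, any OpenAI-compatible client can talk to it. A minimal sketch (assuming the vLLM command above is serving locally on port 8000; the sampling values follow the Best Practices section below):

```python
from openai import OpenAI

# The local server does not check credentials, so any placeholder API key works.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-235B-A22B-Thinking-2507-FP8",
    messages=[{"role": "user", "content": "Prove that the square root of 2 is irrational."}],
    temperature=0.6,
    top_p=0.95,
    max_tokens=32768,
    extra_body={"top_k": 20, "min_p": 0},  # vLLM-specific sampling extras
)

message = response.choices[0].message
# With a reasoning parser enabled, the thinking is exposed separately from the final answer.
print(getattr(message, "reasoning_content", None))
print(message.content)
```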

For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.

## Note on FP8

For convenience and performance, we provide an `fp8`-quantized model checkpoint for Qwen3, whose name ends with `-FP8`. The quantization method is fine-grained `fp8` quantization with a block size of 128. You can find more details in the `quantization_config` field in `config.json`.

You can use the Qwen3-235B-A22B-Thinking-2507-FP8 model with several inference frameworks, including `transformers`, `sglang`, and `vllm`, just as you would the original bfloat16 model.
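
To check how the checkpoint is quantized without pulling the weights, you can read the `quantization_config` straight out of `config.json`. A minimal sketch (the keys shown match the `config.json` shipped in this repo):

```python
import json
from huggingface_hub import hf_hub_download

# Download only config.json, not the model weights.
path = hf_hub_download("Qwen/Qwen3-235B-A22B-Thinking-2507-FP8", filename="config.json")
with open(path) as f:
    qcfg = json.load(f)["quantization_config"]

print(qcfg["quant_method"])       # "fp8"
print(qcfg["fmt"])                # "e4m3"
print(qcfg["weight_block_size"])  # [128, 128] -> fine-grained block quantization
print(len(qcfg["modules_to_not_convert"]))  # layernorms, MoE gates and lm_head stay unquantized
```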

## Agentic Use

Qwen3 excels in tool-calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.

To define the available tools, you can use an MCP configuration file, use the integrated tools of Qwen-Agent, or integrate other tools yourself.
```python
from qwen_agent.agents import Assistant

# Define LLM
# Using Alibaba Cloud Model Studio
llm_cfg = {
    'model': 'qwen3-235b-a22b-thinking-2507',
    'model_type': 'qwen_dashscope',
}

# Using an OpenAI-compatible API endpoint. It is recommended to disable the reasoning and tool-call parsing
# functionality of the deployment framework and let Qwen-Agent automate the related operations. For example,
# `VLLM_USE_MODELSCOPE=true vllm serve Qwen/Qwen3-235B-A22B-Thinking-2507-FP8 --served-model-name Qwen3-235B-A22B-Thinking-2507 --tensor-parallel-size 4 --max-model-len 262144`.
#
# llm_cfg = {
#     'model': 'Qwen3-235B-A22B-Thinking-2507',
#
#     # Use a custom endpoint compatible with OpenAI API:
#     'model_server': 'http://localhost:8000/v1',  # api_base without reasoning and tool call parsing
#     'api_key': 'EMPTY',
#     'generate_cfg': {
#         'thought_in_content': True,
#     },
# }

# Define Tools
tools = [
    {'mcpServers': {  # You can specify the MCP configuration file
            'time': {
                'command': 'uvx',
                'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
            },
            "fetch": {
                "command": "uvx",
                "args": ["mcp-server-fetch"]
            }
        }
    },
    'code_interpreter',  # Built-in tools
]

# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)

# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
    pass
print(responses)
```

## Best Practices

To achieve optimal performance, we recommend the following settings:

1. **Sampling Parameters**:
   - We suggest using `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`.
   - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, a higher value may occasionally result in language mixing and a slight decrease in model performance.

2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 81,920 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.

3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
   - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
   - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."

4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should include only the final output part, not the thinking content. This is implemented in the provided Jinja2 chat template. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that this practice is followed (see the sketch after this list).
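
A minimal sketch of such a multi-turn loop against an OpenAI-compatible endpoint (for example, the vLLM server above), applying the recommended sampling parameters and keeping only the final answers in the history:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
history = []

def strip_thinking(text: str) -> str:
    # If the server already separates reasoning via a reasoning parser, this is a no-op;
    # otherwise drop everything up to and including the closing </think> tag.
    return text.split("</think>")[-1].lstrip("\n") if "</think>" in text else text

for user_turn in ["What is 17 * 23?", "Now divide that result by 7."]:
    history.append({"role": "user", "content": user_turn})
    response = client.chat.completions.create(
        model="Qwen/Qwen3-235B-A22B-Thinking-2507-FP8",
        messages=history,
        temperature=0.6,
        top_p=0.95,
        presence_penalty=1.0,  # optional, see the note on repetition above
        max_tokens=32768,
    )
    answer = strip_thinking(response.choices[0].message.content or "")
    # Only the final answer goes back into the conversation history.
    history.append({"role": "assistant", "content": answer})
    print(answer)
```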


### Citation

If you find our work helpful, feel free to cite us.

```
@misc{qwen3technicalreport,
      title={Qwen3 Technical Report},
      author={Qwen Team},
      year={2025},
      eprint={2505.09388},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.09388},
}
```
added_tokens.json ADDED
@@ -0,0 +1,28 @@
1
+ {
2
+ "</think>": 151668,
3
+ "</tool_call>": 151658,
4
+ "</tool_response>": 151666,
5
+ "<think>": 151667,
6
+ "<tool_call>": 151657,
7
+ "<tool_response>": 151665,
8
+ "<|box_end|>": 151649,
9
+ "<|box_start|>": 151648,
10
+ "<|endoftext|>": 151643,
11
+ "<|file_sep|>": 151664,
12
+ "<|fim_middle|>": 151660,
13
+ "<|fim_pad|>": 151662,
14
+ "<|fim_prefix|>": 151659,
15
+ "<|fim_suffix|>": 151661,
16
+ "<|im_end|>": 151645,
17
+ "<|im_start|>": 151644,
18
+ "<|image_pad|>": 151655,
19
+ "<|object_ref_end|>": 151647,
20
+ "<|object_ref_start|>": 151646,
21
+ "<|quad_end|>": 151651,
22
+ "<|quad_start|>": 151650,
23
+ "<|repo_name|>": 151663,
24
+ "<|video_pad|>": 151656,
25
+ "<|vision_end|>": 151653,
26
+ "<|vision_pad|>": 151654,
27
+ "<|vision_start|>": 151652
28
+ }
chat_template.jinja ADDED
@@ -0,0 +1,86 @@
1
+ {%- if tools %}
2
+ {{- '<|im_start|>system\n' }}
3
+ {%- if messages[0].role == 'system' %}
4
+ {{- messages[0].content + '\n\n' }}
5
+ {%- endif %}
6
+ {{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
7
+ {%- for tool in tools %}
8
+ {{- "\n" }}
9
+ {{- tool | tojson }}
10
+ {%- endfor %}
11
+ {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
12
+ {%- else %}
13
+ {%- if messages[0].role == 'system' %}
14
+ {{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }}
15
+ {%- endif %}
16
+ {%- endif %}
17
+ {%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
18
+ {%- for message in messages[::-1] %}
19
+ {%- set index = (messages|length - 1) - loop.index0 %}
20
+ {%- if ns.multi_step_tool and message.role == "user" and message.content is string and not(message.content.startswith('<tool_response>') and message.content.endswith('</tool_response>')) %}
21
+ {%- set ns.multi_step_tool = false %}
22
+ {%- set ns.last_query_index = index %}
23
+ {%- endif %}
24
+ {%- endfor %}
25
+ {%- for message in messages %}
26
+ {%- if message.content is string %}
27
+ {%- set content = message.content %}
28
+ {%- else %}
29
+ {%- set content = '' %}
30
+ {%- endif %}
31
+ {%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
32
+ {{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
33
+ {%- elif message.role == "assistant" %}
34
+ {%- set reasoning_content = '' %}
35
+ {%- if message.reasoning_content is string %}
36
+ {%- set reasoning_content = message.reasoning_content %}
37
+ {%- else %}
38
+ {%- if '</think>' in content %}
39
+ {%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
40
+ {%- set content = content.split('</think>')[-1].lstrip('\n') %}
41
+ {%- endif %}
42
+ {%- endif %}
43
+ {%- if loop.index0 > ns.last_query_index %}
44
+ {%- if loop.last or (not loop.last and reasoning_content) %}
45
+ {{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }}
46
+ {%- else %}
47
+ {{- '<|im_start|>' + message.role + '\n' + content }}
48
+ {%- endif %}
49
+ {%- else %}
50
+ {{- '<|im_start|>' + message.role + '\n' + content }}
51
+ {%- endif %}
52
+ {%- if message.tool_calls %}
53
+ {%- for tool_call in message.tool_calls %}
54
+ {%- if (loop.first and content) or (not loop.first) %}
55
+ {{- '\n' }}
56
+ {%- endif %}
57
+ {%- if tool_call.function %}
58
+ {%- set tool_call = tool_call.function %}
59
+ {%- endif %}
60
+ {{- '<tool_call>\n{"name": "' }}
61
+ {{- tool_call.name }}
62
+ {{- '", "arguments": ' }}
63
+ {%- if tool_call.arguments is string %}
64
+ {{- tool_call.arguments }}
65
+ {%- else %}
66
+ {{- tool_call.arguments | tojson }}
67
+ {%- endif %}
68
+ {{- '}\n</tool_call>' }}
69
+ {%- endfor %}
70
+ {%- endif %}
71
+ {{- '<|im_end|>\n' }}
72
+ {%- elif message.role == "tool" %}
73
+ {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
74
+ {{- '<|im_start|>user' }}
75
+ {%- endif %}
76
+ {{- '\n<tool_response>\n' }}
77
+ {{- content }}
78
+ {{- '\n</tool_response>' }}
79
+ {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
80
+ {{- '<|im_end|>\n' }}
81
+ {%- endif %}
82
+ {%- endif %}
83
+ {%- endfor %}
84
+ {%- if add_generation_prompt %}
85
+ {{- '<|im_start|>assistant\n<think>\n' }}
86
+ {%- endif %}
config.json ADDED
@@ -0,0 +1,333 @@
1
+ {
2
+ "architectures": [
3
+ "Qwen3MoeForCausalLM"
4
+ ],
5
+ "attention_bias": false,
6
+ "attention_dropout": 0.0,
7
+ "decoder_sparse_step": 1,
8
+ "eos_token_id": 151645,
9
+ "head_dim": 128,
10
+ "hidden_act": "silu",
11
+ "hidden_size": 4096,
12
+ "initializer_range": 0.02,
13
+ "intermediate_size": 12288,
14
+ "max_position_embeddings": 262144,
15
+ "max_window_layers": 94,
16
+ "mlp_only_layers": [],
17
+ "model_type": "qwen3_moe",
18
+ "moe_intermediate_size": 1536,
19
+ "norm_topk_prob": true,
20
+ "num_attention_heads": 64,
21
+ "num_experts": 128,
22
+ "num_experts_per_tok": 8,
23
+ "num_hidden_layers": 94,
24
+ "num_key_value_heads": 4,
25
+ "output_router_logits": false,
26
+ "pad_token_id": 151654,
27
+ "quantization_config": {
28
+ "activation_scheme": "dynamic",
29
+ "fmt": "e4m3",
30
+ "modules_to_not_convert": [
31
+ "lm_head",
32
+ "model.layers.0.input_layernorm",
33
+ "model.layers.0.mlp.gate",
34
+ "model.layers.0.post_attention_layernorm",
35
+ "model.layers.1.input_layernorm",
36
+ "model.layers.1.mlp.gate",
37
+ "model.layers.1.post_attention_layernorm",
38
+ "model.layers.2.input_layernorm",
39
+ "model.layers.2.mlp.gate",
40
+ "model.layers.2.post_attention_layernorm",
41
+ "model.layers.3.input_layernorm",
42
+ "model.layers.3.mlp.gate",
43
+ "model.layers.3.post_attention_layernorm",
44
+ "model.layers.4.input_layernorm",
45
+ "model.layers.4.mlp.gate",
46
+ "model.layers.4.post_attention_layernorm",
47
+ "model.layers.5.input_layernorm",
48
+ "model.layers.5.mlp.gate",
49
+ "model.layers.5.post_attention_layernorm",
50
+ "model.layers.6.input_layernorm",
51
+ "model.layers.6.mlp.gate",
52
+ "model.layers.6.post_attention_layernorm",
53
+ "model.layers.7.input_layernorm",
54
+ "model.layers.7.mlp.gate",
55
+ "model.layers.7.post_attention_layernorm",
56
+ "model.layers.8.input_layernorm",
57
+ "model.layers.8.mlp.gate",
58
+ "model.layers.8.post_attention_layernorm",
59
+ "model.layers.9.input_layernorm",
60
+ "model.layers.9.mlp.gate",
61
+ "model.layers.9.post_attention_layernorm",
62
+ "model.layers.10.input_layernorm",
63
+ "model.layers.10.mlp.gate",
64
+ "model.layers.10.post_attention_layernorm",
65
+ "model.layers.11.input_layernorm",
66
+ "model.layers.11.mlp.gate",
67
+ "model.layers.11.post_attention_layernorm",
68
+ "model.layers.12.input_layernorm",
69
+ "model.layers.12.mlp.gate",
70
+ "model.layers.12.post_attention_layernorm",
71
+ "model.layers.13.input_layernorm",
72
+ "model.layers.13.mlp.gate",
73
+ "model.layers.13.post_attention_layernorm",
74
+ "model.layers.14.input_layernorm",
75
+ "model.layers.14.mlp.gate",
76
+ "model.layers.14.post_attention_layernorm",
77
+ "model.layers.15.input_layernorm",
78
+ "model.layers.15.mlp.gate",
79
+ "model.layers.15.post_attention_layernorm",
80
+ "model.layers.16.input_layernorm",
81
+ "model.layers.16.mlp.gate",
82
+ "model.layers.16.post_attention_layernorm",
83
+ "model.layers.17.input_layernorm",
84
+ "model.layers.17.mlp.gate",
85
+ "model.layers.17.post_attention_layernorm",
86
+ "model.layers.18.input_layernorm",
87
+ "model.layers.18.mlp.gate",
88
+ "model.layers.18.post_attention_layernorm",
89
+ "model.layers.19.input_layernorm",
90
+ "model.layers.19.mlp.gate",
91
+ "model.layers.19.post_attention_layernorm",
92
+ "model.layers.20.input_layernorm",
93
+ "model.layers.20.mlp.gate",
94
+ "model.layers.20.post_attention_layernorm",
95
+ "model.layers.21.input_layernorm",
96
+ "model.layers.21.mlp.gate",
97
+ "model.layers.21.post_attention_layernorm",
98
+ "model.layers.22.input_layernorm",
99
+ "model.layers.22.mlp.gate",
100
+ "model.layers.22.post_attention_layernorm",
101
+ "model.layers.23.input_layernorm",
102
+ "model.layers.23.mlp.gate",
103
+ "model.layers.23.post_attention_layernorm",
104
+ "model.layers.24.input_layernorm",
105
+ "model.layers.24.mlp.gate",
106
+ "model.layers.24.post_attention_layernorm",
107
+ "model.layers.25.input_layernorm",
108
+ "model.layers.25.mlp.gate",
109
+ "model.layers.25.post_attention_layernorm",
110
+ "model.layers.26.input_layernorm",
111
+ "model.layers.26.mlp.gate",
112
+ "model.layers.26.post_attention_layernorm",
113
+ "model.layers.27.input_layernorm",
114
+ "model.layers.27.mlp.gate",
115
+ "model.layers.27.post_attention_layernorm",
116
+ "model.layers.28.input_layernorm",
117
+ "model.layers.28.mlp.gate",
118
+ "model.layers.28.post_attention_layernorm",
119
+ "model.layers.29.input_layernorm",
120
+ "model.layers.29.mlp.gate",
121
+ "model.layers.29.post_attention_layernorm",
122
+ "model.layers.30.input_layernorm",
123
+ "model.layers.30.mlp.gate",
124
+ "model.layers.30.post_attention_layernorm",
125
+ "model.layers.31.input_layernorm",
126
+ "model.layers.31.mlp.gate",
127
+ "model.layers.31.post_attention_layernorm",
128
+ "model.layers.32.input_layernorm",
129
+ "model.layers.32.mlp.gate",
130
+ "model.layers.32.post_attention_layernorm",
131
+ "model.layers.33.input_layernorm",
132
+ "model.layers.33.mlp.gate",
133
+ "model.layers.33.post_attention_layernorm",
134
+ "model.layers.34.input_layernorm",
135
+ "model.layers.34.mlp.gate",
136
+ "model.layers.34.post_attention_layernorm",
137
+ "model.layers.35.input_layernorm",
138
+ "model.layers.35.mlp.gate",
139
+ "model.layers.35.post_attention_layernorm",
140
+ "model.layers.36.input_layernorm",
141
+ "model.layers.36.mlp.gate",
142
+ "model.layers.36.post_attention_layernorm",
143
+ "model.layers.37.input_layernorm",
144
+ "model.layers.37.mlp.gate",
145
+ "model.layers.37.post_attention_layernorm",
146
+ "model.layers.38.input_layernorm",
147
+ "model.layers.38.mlp.gate",
148
+ "model.layers.38.post_attention_layernorm",
149
+ "model.layers.39.input_layernorm",
150
+ "model.layers.39.mlp.gate",
151
+ "model.layers.39.post_attention_layernorm",
152
+ "model.layers.40.input_layernorm",
153
+ "model.layers.40.mlp.gate",
154
+ "model.layers.40.post_attention_layernorm",
155
+ "model.layers.41.input_layernorm",
156
+ "model.layers.41.mlp.gate",
157
+ "model.layers.41.post_attention_layernorm",
158
+ "model.layers.42.input_layernorm",
159
+ "model.layers.42.mlp.gate",
160
+ "model.layers.42.post_attention_layernorm",
161
+ "model.layers.43.input_layernorm",
162
+ "model.layers.43.mlp.gate",
163
+ "model.layers.43.post_attention_layernorm",
164
+ "model.layers.44.input_layernorm",
165
+ "model.layers.44.mlp.gate",
166
+ "model.layers.44.post_attention_layernorm",
167
+ "model.layers.45.input_layernorm",
168
+ "model.layers.45.mlp.gate",
169
+ "model.layers.45.post_attention_layernorm",
170
+ "model.layers.46.input_layernorm",
171
+ "model.layers.46.mlp.gate",
172
+ "model.layers.46.post_attention_layernorm",
173
+ "model.layers.47.input_layernorm",
174
+ "model.layers.47.mlp.gate",
175
+ "model.layers.47.post_attention_layernorm",
176
+ "model.layers.48.input_layernorm",
177
+ "model.layers.48.mlp.gate",
178
+ "model.layers.48.post_attention_layernorm",
179
+ "model.layers.49.input_layernorm",
180
+ "model.layers.49.mlp.gate",
181
+ "model.layers.49.post_attention_layernorm",
182
+ "model.layers.50.input_layernorm",
183
+ "model.layers.50.mlp.gate",
184
+ "model.layers.50.post_attention_layernorm",
185
+ "model.layers.51.input_layernorm",
186
+ "model.layers.51.mlp.gate",
187
+ "model.layers.51.post_attention_layernorm",
188
+ "model.layers.52.input_layernorm",
189
+ "model.layers.52.mlp.gate",
190
+ "model.layers.52.post_attention_layernorm",
191
+ "model.layers.53.input_layernorm",
192
+ "model.layers.53.mlp.gate",
193
+ "model.layers.53.post_attention_layernorm",
194
+ "model.layers.54.input_layernorm",
195
+ "model.layers.54.mlp.gate",
196
+ "model.layers.54.post_attention_layernorm",
197
+ "model.layers.55.input_layernorm",
198
+ "model.layers.55.mlp.gate",
199
+ "model.layers.55.post_attention_layernorm",
200
+ "model.layers.56.input_layernorm",
201
+ "model.layers.56.mlp.gate",
202
+ "model.layers.56.post_attention_layernorm",
203
+ "model.layers.57.input_layernorm",
204
+ "model.layers.57.mlp.gate",
205
+ "model.layers.57.post_attention_layernorm",
206
+ "model.layers.58.input_layernorm",
207
+ "model.layers.58.mlp.gate",
208
+ "model.layers.58.post_attention_layernorm",
209
+ "model.layers.59.input_layernorm",
210
+ "model.layers.59.mlp.gate",
211
+ "model.layers.59.post_attention_layernorm",
212
+ "model.layers.60.input_layernorm",
213
+ "model.layers.60.mlp.gate",
214
+ "model.layers.60.post_attention_layernorm",
215
+ "model.layers.61.input_layernorm",
216
+ "model.layers.61.mlp.gate",
217
+ "model.layers.61.post_attention_layernorm",
218
+ "model.layers.62.input_layernorm",
219
+ "model.layers.62.mlp.gate",
220
+ "model.layers.62.post_attention_layernorm",
221
+ "model.layers.63.input_layernorm",
222
+ "model.layers.63.mlp.gate",
223
+ "model.layers.63.post_attention_layernorm",
224
+ "model.layers.64.input_layernorm",
225
+ "model.layers.64.mlp.gate",
226
+ "model.layers.64.post_attention_layernorm",
227
+ "model.layers.65.input_layernorm",
228
+ "model.layers.65.mlp.gate",
229
+ "model.layers.65.post_attention_layernorm",
230
+ "model.layers.66.input_layernorm",
231
+ "model.layers.66.mlp.gate",
232
+ "model.layers.66.post_attention_layernorm",
233
+ "model.layers.67.input_layernorm",
234
+ "model.layers.67.mlp.gate",
235
+ "model.layers.67.post_attention_layernorm",
236
+ "model.layers.68.input_layernorm",
237
+ "model.layers.68.mlp.gate",
238
+ "model.layers.68.post_attention_layernorm",
239
+ "model.layers.69.input_layernorm",
240
+ "model.layers.69.mlp.gate",
241
+ "model.layers.69.post_attention_layernorm",
242
+ "model.layers.70.input_layernorm",
243
+ "model.layers.70.mlp.gate",
244
+ "model.layers.70.post_attention_layernorm",
245
+ "model.layers.71.input_layernorm",
246
+ "model.layers.71.mlp.gate",
247
+ "model.layers.71.post_attention_layernorm",
248
+ "model.layers.72.input_layernorm",
249
+ "model.layers.72.mlp.gate",
250
+ "model.layers.72.post_attention_layernorm",
251
+ "model.layers.73.input_layernorm",
252
+ "model.layers.73.mlp.gate",
253
+ "model.layers.73.post_attention_layernorm",
254
+ "model.layers.74.input_layernorm",
255
+ "model.layers.74.mlp.gate",
256
+ "model.layers.74.post_attention_layernorm",
257
+ "model.layers.75.input_layernorm",
258
+ "model.layers.75.mlp.gate",
259
+ "model.layers.75.post_attention_layernorm",
260
+ "model.layers.76.input_layernorm",
261
+ "model.layers.76.mlp.gate",
262
+ "model.layers.76.post_attention_layernorm",
263
+ "model.layers.77.input_layernorm",
264
+ "model.layers.77.mlp.gate",
265
+ "model.layers.77.post_attention_layernorm",
266
+ "model.layers.78.input_layernorm",
267
+ "model.layers.78.mlp.gate",
268
+ "model.layers.78.post_attention_layernorm",
269
+ "model.layers.79.input_layernorm",
270
+ "model.layers.79.mlp.gate",
271
+ "model.layers.79.post_attention_layernorm",
272
+ "model.layers.80.input_layernorm",
273
+ "model.layers.80.mlp.gate",
274
+ "model.layers.80.post_attention_layernorm",
275
+ "model.layers.81.input_layernorm",
276
+ "model.layers.81.mlp.gate",
277
+ "model.layers.81.post_attention_layernorm",
278
+ "model.layers.82.input_layernorm",
279
+ "model.layers.82.mlp.gate",
280
+ "model.layers.82.post_attention_layernorm",
281
+ "model.layers.83.input_layernorm",
282
+ "model.layers.83.mlp.gate",
283
+ "model.layers.83.post_attention_layernorm",
284
+ "model.layers.84.input_layernorm",
285
+ "model.layers.84.mlp.gate",
286
+ "model.layers.84.post_attention_layernorm",
287
+ "model.layers.85.input_layernorm",
288
+ "model.layers.85.mlp.gate",
289
+ "model.layers.85.post_attention_layernorm",
290
+ "model.layers.86.input_layernorm",
291
+ "model.layers.86.mlp.gate",
292
+ "model.layers.86.post_attention_layernorm",
293
+ "model.layers.87.input_layernorm",
294
+ "model.layers.87.mlp.gate",
295
+ "model.layers.87.post_attention_layernorm",
296
+ "model.layers.88.input_layernorm",
297
+ "model.layers.88.mlp.gate",
298
+ "model.layers.88.post_attention_layernorm",
299
+ "model.layers.89.input_layernorm",
300
+ "model.layers.89.mlp.gate",
301
+ "model.layers.89.post_attention_layernorm",
302
+ "model.layers.90.input_layernorm",
303
+ "model.layers.90.mlp.gate",
304
+ "model.layers.90.post_attention_layernorm",
305
+ "model.layers.91.input_layernorm",
306
+ "model.layers.91.mlp.gate",
307
+ "model.layers.91.post_attention_layernorm",
308
+ "model.layers.92.input_layernorm",
309
+ "model.layers.92.mlp.gate",
310
+ "model.layers.92.post_attention_layernorm",
311
+ "model.layers.93.input_layernorm",
312
+ "model.layers.93.mlp.gate",
313
+ "model.layers.93.post_attention_layernorm"
314
+ ],
315
+ "quant_method": "fp8",
316
+ "weight_block_size": [
317
+ 128,
318
+ 128
319
+ ]
320
+ },
321
+ "rms_norm_eps": 1e-06,
322
+ "rope_scaling": null,
323
+ "rope_theta": 5000000,
324
+ "router_aux_loss_coef": 0.001,
325
+ "sliding_window": null,
326
+ "tie_word_embeddings": false,
327
+ "torch_dtype": "bfloat16",
328
+ "transformers_version": "4.53.3",
329
+ "unsloth_fixed": true,
330
+ "use_cache": true,
331
+ "use_sliding_window": false,
332
+ "vocab_size": 151936
333
+ }
generation_config.json ADDED
@@ -0,0 +1,13 @@
1
+ {
2
+ "bos_token_id": 151643,
3
+ "do_sample": true,
4
+ "eos_token_id": [
5
+ 151645,
6
+ 151643
7
+ ],
8
+ "pad_token_id": 151643,
9
+ "temperature": 0.6,
10
+ "top_k": 20,
11
+ "top_p": 0.95,
12
+ "transformers_version": "4.51.0"
13
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model-00001-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8d79b874c8901349de90736201951e2d592a3c7a8a8cff9b7bc0e9c9fc9e7079
3
+ size 9997546840
model-00002-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b0facc85512d3659fd85ea9cf6e1ee88d07425e13d03580e1332e1d7c5002bc0
3
+ size 9998803944
model-00003-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a39a8c3e8c86deacec067d02042188e8c880c14721b7397d44f2ac9fd45c1c0a
3
+ size 9998805544
model-00004-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8e6c1eb033d5933413ccd7cc189f3ab8b239b77e1b13bb141a01836623d2e888
3
+ size 9998807112
model-00005-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8dd1dcf09ef5fda5709150a56d4589c3b207dbc2f54cc1348924b15711ec4557
3
+ size 9998807128
model-00006-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4312bff2d62f815d98684d9d94119ed6de148b73a28672d17389c67fc1ba7156
3
+ size 9998807144
model-00007-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:510b7c1b8f73b6406ef13d72df63663640a49efb4860535db9b940e608d1d44f
3
+ size 9998807160
model-00008-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7be82358f50ef6de7350a33d34cecff4e50c8461fa066daf7e7c4e5b886d571f
3
+ size 9998807168
model-00009-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2566b28146fbe8ae4140b2d6f74eb434d4b4e3802173e05e2de1689d55446b5c
3
+ size 9998807184
model-00010-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:50a268e5acf5d75af13d7a13a441f7cf9c74304add57b46f7e615df0a2e876b8
3
+ size 9998807200
model-00011-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:61aee74e3def779fb365996cc58218caaed1acf1e189765f2038cedab82d82d2
3
+ size 9998807208
model-00012-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:088910fe38b5c21645e1f34a884f416e25f25715b047861a067b43aaca78ef55
3
+ size 9998807224
model-00013-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f2bf734d6d7c3e63b01a99c331a5f9e9a3401cb4996a76228c93d707be3a0192
3
+ size 9998807248
model-00014-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:00ea1fe0db3e3b908c3b17b84a36b80d120fbac66965b358576c3df1d5bb9636
3
+ size 9998807272
model-00015-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8408af19f79161c220b2f9e194090a7dc0a44a18f4e1f57f019633a78d1af6c0
3
+ size 9998807304
model-00016-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8d1d8564934a93889d9663095a8be2cc83d3eb812475222ff40b39132827eebc
3
+ size 9998807320
model-00017-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bac734cd44465ffdc80241904ec4a2666fd3dc3c2a01daafea42d7cdf050b6f6
3
+ size 9998807344
model-00018-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:53a1a226ffad5b07fa10de9fb6b7e5073ebe5e068dfdafd51f0e867469cd0b63
3
+ size 9998807360
model-00019-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7e8a7fd88f67bf1eba284ed12f6936d638e757d6f705f7b565db7b634abf4da8
3
+ size 9998807376
model-00020-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:70f72f4dcc613354a3097ac172f1ec9da0a8894ac7643a7940cee65dbfb9fc85
3
+ size 9998807384
model-00021-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b4417dfbf182ce7e7fbcae6cb8bd6f3450cfa167038367c6fafe9d576f788d84
3
+ size 9998807408
model-00022-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c167da1b5078592c4d86eea4c16cf1e3abfe8407e0df50b4840dff7f19aac05d
3
+ size 9998807416
model-00023-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:067fa915bdb9baaf57b2f4685c53e57edfd4fd54fd785220830bc9a924510954
3
+ size 9998807432
model-00024-of-00024.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:efe2926fe6bd64e182aa88762162a53a053bacf77e0fc731768c64964ecb9c75
3
+ size 6454892160
model.safetensors.index.json ADDED
The diff for this file is too large to render. See raw diff
 
special_tokens_map.json ADDED
@@ -0,0 +1,31 @@
1
+ {
2
+ "additional_special_tokens": [
3
+ "<|im_start|>",
4
+ "<|im_end|>",
5
+ "<|object_ref_start|>",
6
+ "<|object_ref_end|>",
7
+ "<|box_start|>",
8
+ "<|box_end|>",
9
+ "<|quad_start|>",
10
+ "<|quad_end|>",
11
+ "<|vision_start|>",
12
+ "<|vision_end|>",
13
+ "<|vision_pad|>",
14
+ "<|image_pad|>",
15
+ "<|video_pad|>"
16
+ ],
17
+ "eos_token": {
18
+ "content": "<|im_end|>",
19
+ "lstrip": false,
20
+ "normalized": false,
21
+ "rstrip": false,
22
+ "single_word": false
23
+ },
24
+ "pad_token": {
25
+ "content": "<|vision_pad|>",
26
+ "lstrip": false,
27
+ "normalized": false,
28
+ "rstrip": false,
29
+ "single_word": false
30
+ }
31
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:aeb13307a71acd8fe81861d94ad54ab689df773318809eed3cbe794b4492dae4
3
+ size 11422654
tokenizer_config.json ADDED
@@ -0,0 +1,241 @@
1
+ {
2
+ "add_bos_token": false,
3
+ "add_prefix_space": false,
4
+ "added_tokens_decoder": {
5
+ "151643": {
6
+ "content": "<|endoftext|>",
7
+ "lstrip": false,
8
+ "normalized": false,
9
+ "rstrip": false,
10
+ "single_word": false,
11
+ "special": true
12
+ },
13
+ "151644": {
14
+ "content": "<|im_start|>",
15
+ "lstrip": false,
16
+ "normalized": false,
17
+ "rstrip": false,
18
+ "single_word": false,
19
+ "special": true
20
+ },
21
+ "151645": {
22
+ "content": "<|im_end|>",
23
+ "lstrip": false,
24
+ "normalized": false,
25
+ "rstrip": false,
26
+ "single_word": false,
27
+ "special": true
28
+ },
29
+ "151646": {
30
+ "content": "<|object_ref_start|>",
31
+ "lstrip": false,
32
+ "normalized": false,
33
+ "rstrip": false,
34
+ "single_word": false,
35
+ "special": true
36
+ },
37
+ "151647": {
38
+ "content": "<|object_ref_end|>",
39
+ "lstrip": false,
40
+ "normalized": false,
41
+ "rstrip": false,
42
+ "single_word": false,
43
+ "special": true
44
+ },
45
+ "151648": {
46
+ "content": "<|box_start|>",
47
+ "lstrip": false,
48
+ "normalized": false,
49
+ "rstrip": false,
50
+ "single_word": false,
51
+ "special": true
52
+ },
53
+ "151649": {
54
+ "content": "<|box_end|>",
55
+ "lstrip": false,
56
+ "normalized": false,
57
+ "rstrip": false,
58
+ "single_word": false,
59
+ "special": true
60
+ },
61
+ "151650": {
62
+ "content": "<|quad_start|>",
63
+ "lstrip": false,
64
+ "normalized": false,
65
+ "rstrip": false,
66
+ "single_word": false,
67
+ "special": true
68
+ },
69
+ "151651": {
70
+ "content": "<|quad_end|>",
71
+ "lstrip": false,
72
+ "normalized": false,
73
+ "rstrip": false,
74
+ "single_word": false,
75
+ "special": true
76
+ },
77
+ "151652": {
78
+ "content": "<|vision_start|>",
79
+ "lstrip": false,
80
+ "normalized": false,
81
+ "rstrip": false,
82
+ "single_word": false,
83
+ "special": true
84
+ },
85
+ "151653": {
86
+ "content": "<|vision_end|>",
87
+ "lstrip": false,
88
+ "normalized": false,
89
+ "rstrip": false,
90
+ "single_word": false,
91
+ "special": true
92
+ },
93
+ "151654": {
94
+ "content": "<|vision_pad|>",
95
+ "lstrip": false,
96
+ "normalized": false,
97
+ "rstrip": false,
98
+ "single_word": false,
99
+ "special": true
100
+ },
101
+ "151655": {
102
+ "content": "<|image_pad|>",
103
+ "lstrip": false,
104
+ "normalized": false,
105
+ "rstrip": false,
106
+ "single_word": false,
107
+ "special": true
108
+ },
109
+ "151656": {
110
+ "content": "<|video_pad|>",
111
+ "lstrip": false,
112
+ "normalized": false,
113
+ "rstrip": false,
114
+ "single_word": false,
115
+ "special": true
116
+ },
117
+ "151657": {
118
+ "content": "<tool_call>",
119
+ "lstrip": false,
120
+ "normalized": false,
121
+ "rstrip": false,
122
+ "single_word": false,
123
+ "special": false
124
+ },
125
+ "151658": {
126
+ "content": "</tool_call>",
127
+ "lstrip": false,
128
+ "normalized": false,
129
+ "rstrip": false,
130
+ "single_word": false,
131
+ "special": false
132
+ },
133
+ "151659": {
134
+ "content": "<|fim_prefix|>",
135
+ "lstrip": false,
136
+ "normalized": false,
137
+ "rstrip": false,
138
+ "single_word": false,
139
+ "special": false
140
+ },
141
+ "151660": {
142
+ "content": "<|fim_middle|>",
143
+ "lstrip": false,
144
+ "normalized": false,
145
+ "rstrip": false,
146
+ "single_word": false,
147
+ "special": false
148
+ },
149
+ "151661": {
150
+ "content": "<|fim_suffix|>",
151
+ "lstrip": false,
152
+ "normalized": false,
153
+ "rstrip": false,
154
+ "single_word": false,
155
+ "special": false
156
+ },
157
+ "151662": {
158
+ "content": "<|fim_pad|>",
159
+ "lstrip": false,
160
+ "normalized": false,
161
+ "rstrip": false,
162
+ "single_word": false,
163
+ "special": false
164
+ },
165
+ "151663": {
166
+ "content": "<|repo_name|>",
167
+ "lstrip": false,
168
+ "normalized": false,
169
+ "rstrip": false,
170
+ "single_word": false,
171
+ "special": false
172
+ },
173
+ "151664": {
174
+ "content": "<|file_sep|>",
175
+ "lstrip": false,
176
+ "normalized": false,
177
+ "rstrip": false,
178
+ "single_word": false,
179
+ "special": false
180
+ },
181
+ "151665": {
182
+ "content": "<tool_response>",
183
+ "lstrip": false,
184
+ "normalized": false,
185
+ "rstrip": false,
186
+ "single_word": false,
187
+ "special": false
188
+ },
189
+ "151666": {
190
+ "content": "</tool_response>",
191
+ "lstrip": false,
192
+ "normalized": false,
193
+ "rstrip": false,
194
+ "single_word": false,
195
+ "special": false
196
+ },
197
+ "151667": {
198
+ "content": "<think>",
199
+ "lstrip": false,
200
+ "normalized": false,
201
+ "rstrip": false,
202
+ "single_word": false,
203
+ "special": false
204
+ },
205
+ "151668": {
206
+ "content": "</think>",
207
+ "lstrip": false,
208
+ "normalized": false,
209
+ "rstrip": false,
210
+ "single_word": false,
211
+ "special": false
212
+ }
213
+ },
214
+ "additional_special_tokens": [
215
+ "<|im_start|>",
216
+ "<|im_end|>",
217
+ "<|object_ref_start|>",
218
+ "<|object_ref_end|>",
219
+ "<|box_start|>",
220
+ "<|box_end|>",
221
+ "<|quad_start|>",
222
+ "<|quad_end|>",
223
+ "<|vision_start|>",
224
+ "<|vision_end|>",
225
+ "<|vision_pad|>",
226
+ "<|image_pad|>",
227
+ "<|video_pad|>"
228
+ ],
229
+ "bos_token": null,
230
+ "clean_up_tokenization_spaces": false,
231
+ "eos_token": "<|im_end|>",
232
+ "errors": "replace",
233
+ "extra_special_tokens": {},
234
+ "model_max_length": 262144,
235
+ "pad_token": "<|vision_pad|>",
236
+ "padding_side": "left",
237
+ "split_special_tokens": false,
238
+ "tokenizer_class": "Qwen2Tokenizer",
239
+ "unk_token": null,
240
+ "chat_template": "{%- if tools %}\n {{- '<|im_start|>system\\n' }}\n {%- if messages[0].role == 'system' %}\n {{- messages[0].content + '\\n\\n' }}\n {%- endif %}\n {{- \"# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n {%- for tool in tools %}\n {{- \"\\n\" }}\n {{- tool | tojson }}\n {%- endfor %}\n {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n {%- if messages[0].role == 'system' %}\n {{- '<|im_start|>system\\n' + messages[0].content + '<|im_end|>\\n' }}\n {%- endif %}\n{%- endif %}\n{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}\n{%- for message in messages[::-1] %}\n {%- set index = (messages|length - 1) - loop.index0 %}\n {%- if ns.multi_step_tool and message.role == \"user\" and message.content is string and not(message.content.startswith('<tool_response>') and message.content.endswith('</tool_response>')) %}\n {%- set ns.multi_step_tool = false %}\n {%- set ns.last_query_index = index %}\n {%- endif %}\n{%- endfor %}\n{%- for message in messages %}\n {%- if message.content is string %}\n {%- set content = message.content %}\n {%- else %}\n {%- set content = '' %}\n {%- endif %}\n {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) %}\n {{- '<|im_start|>' + message.role + '\\n' + content + '<|im_end|>' + '\\n' }}\n {%- elif message.role == \"assistant\" %}\n {%- set reasoning_content = '' %}\n {%- if message.reasoning_content is string %}\n {%- set reasoning_content = message.reasoning_content %}\n {%- else %}\n {%- if '</think>' in content %}\n {%- set reasoning_content = content.split('</think>')[0].rstrip('\\n').split('<think>')[-1].lstrip('\\n') %}\n {%- set content = content.split('</think>')[-1].lstrip('\\n') %}\n {%- endif %}\n {%- endif %}\n {%- if loop.index0 > ns.last_query_index %}\n {%- if loop.last or (not loop.last and reasoning_content) %}\n {{- '<|im_start|>' + message.role + '\\n<think>\\n' + reasoning_content.strip('\\n') + '\\n</think>\\n\\n' + content.lstrip('\\n') }}\n {%- else %}\n {{- '<|im_start|>' + message.role + '\\n' + content }}\n {%- endif %}\n {%- else %}\n {{- '<|im_start|>' + message.role + '\\n' + content }}\n {%- endif %}\n {%- if message.tool_calls %}\n {%- for tool_call in message.tool_calls %}\n {%- if (loop.first and content) or (not loop.first) %}\n {{- '\\n' }}\n {%- endif %}\n {%- if tool_call.function %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {{- '<tool_call>\\n{\"name\": \"' }}\n {{- tool_call.name }}\n {{- '\", \"arguments\": ' }}\n {%- if tool_call.arguments is string %}\n {{- tool_call.arguments }}\n {%- else %}\n {{- tool_call.arguments | tojson }}\n {%- endif %}\n {{- '}\\n</tool_call>' }}\n {%- endfor %}\n {%- endif %}\n {{- '<|im_end|>\\n' }}\n {%- elif message.role == \"tool\" %}\n {%- if loop.first or (messages[loop.index0 - 1].role != \"tool\") %}\n {{- '<|im_start|>user' }}\n {%- endif %}\n {{- '\\n<tool_response>\\n' }}\n {{- content }}\n {{- '\\n</tool_response>' }}\n {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n {{- '<|im_end|>\\n' }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n<think>\\n' 
}}\n{%- endif %}"
241
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff