danielhanchen committed on
Commit 9cacc05 · verified · 1 Parent(s): bc79a21

Add files using upload-large-folder tool

README.md CHANGED
@@ -1,10 +1,18 @@
---
tags:
- unsloth
+ library_name: transformers
+ license: apache-2.0
+ license_link: https://huggingface.co/Qwen/Qwen3-14B-FP8/blob/main/LICENSE
+ pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-14B-FP8
---
+
# Qwen3-14B-FP8
+ <a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
+     <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
+ </a>

## Qwen3 Highlights

@@ -86,53 +94,32 @@ print("thinking content:", thinking_content)
print("content:", content)
```

- For deployment, you can use `vllm>=0.8.5` or `sglang>=0.4.5.post2` to create an OpenAI-compatible API endpoint:
- - vLLM:
+ For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
+ - SGLang:
```shell
- vllm serve Qwen/Qwen3-14B-FP8 --enable-reasoning --reasoning-parser deepseek_r1
+ python -m sglang.launch_server --model-path Qwen/Qwen3-14B-FP8 --reasoning-parser qwen3
```
- - SGLang:
+ - vLLM:
```shell
- python -m sglang.launch_server --model-path Qwen/Qwen3-14B-FP8 --reasoning-parser deepseek-r1
+ vllm serve Qwen/Qwen3-14B-FP8 --enable-reasoning --reasoning-parser deepseek_r1
```

+ For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.
+
## Note on FP8

For convenience and performance, we have provided an `fp8`-quantized model checkpoint for Qwen3, whose name ends with `-FP8`. The quantization method is fine-grained `fp8` quantization with a block size of 128. You can find more details in the `quantization_config` field in `config.json`.

- You can use the Qwen3-14B-FP8 model with several inference frameworks, including `transformers`, `vllm`, and `sglang`, as the original bfloat16 model.
+ You can use the Qwen3-14B-FP8 model with several inference frameworks, including `transformers`, `sglang`, and `vllm`, as the original bfloat16 model.
However, please pay attention to the following known issues:
- `transformers`:
    - there are currently issues with the "fine-grained fp8" method in `transformers` for distributed inference. You may need to set the environment variable `CUDA_LAUNCH_BLOCKING=1` if multiple devices are used in inference.
- - vLLM:
-     - there are currently compatibility issues with `vllm`. For a quick fix, you should make the following changes to `vllm/vllm/model_executor/layers/linear.py`:
-     ```python
-     # these changes are in QKVParallelLinear.weight_loader_v2() of vllm/vllm/model_executor/layers/linear.py
-     ...
-     shard_offset = self._get_shard_offset_mapping(loaded_shard_id)
-     shard_size = self._get_shard_size_mapping(loaded_shard_id)
-
-     # add the following code
-     if isinstance(param, BlockQuantScaleParameter):
-         weight_block_size = self.quant_method.quant_config.weight_block_size
-         block_n, _ = weight_block_size[0], weight_block_size[1]
-         shard_offset = (shard_offset + block_n - 1) // block_n
-         shard_size = (shard_size + block_n - 1) // block_n
-     # end of the modification
-
-     param.load_qkv_weight(loaded_weight=loaded_weight,
-                           num_heads=self.num_kv_head_replicas,
-                           shard_id=loaded_shard_id,
-                           shard_offset=shard_offset,
-                           shard_size=shard_size)
-     ...
-     ```

## Switching Between Thinking and Non-Thinking Mode

> [!TIP]
- > The `enable_thinking` switch is also available in APIs created by vLLM and SGLang.
- > Please refer to [our documentation](https://qwen.readthedocs.io/) for more details.
+ > The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
+ > Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.

### `enable_thinking=True`

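A quick way to verify either endpoint is a minimal OpenAI-client sketch like the one below. It assumes the server listens on `http://localhost:8000/v1` (vLLM's default; adjust the base URL for SGLang or a custom port) and relies on both frameworks' documented support for passing `chat_template_kwargs` through `extra_body`:

```python
# Minimal sketch: query the OpenAI-compatible endpoint started above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
question = {"role": "user", "content": "How many r's are in 'strawberry'?"}

# Thinking mode (default): reasoning appears in a <think>...</think> block or a
# separate reasoning field, depending on the --reasoning-parser in use.
resp = client.chat.completions.create(model="Qwen/Qwen3-14B-FP8", messages=[question])
print(resp.choices[0].message.content)

# Non-thinking mode: toggle the switch per request via chat_template_kwargs.
resp = client.chat.completions.create(
    model="Qwen/Qwen3-14B-FP8",
    messages=[question],
    extra_body={"chat_template_kwargs": {"enable_thinking": False}},
)
print(resp.choices[0].message.content)
```
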
@@ -230,7 +217,7 @@ if __name__ == "__main__":
    print(f"Bot: {response_3}")
```

- > **Note**
+ > [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.

@@ -300,7 +287,7 @@ YaRN is currently supported by several inference frameworks, e.g., `transformers
{
    ...,
    "rope_scaling": {
-       "type": "yarn",
+       "rope_type": "yarn",
        "factor": 4.0,
        "original_max_position_embeddings": 32768
    }
@@ -312,12 +299,12 @@ YaRN is currently supported by several inference frameworks, e.g., `transformers

For `vllm`, you can use
```shell
- vllm serve ... --rope-scaling '{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
+ vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
```

For `sglang`, you can use
```shell
- python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
+ python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
```

For `llama-server` from `llama.cpp`, you can use
chat_template.jinja ADDED
@@ -0,0 +1,98 @@
+ {%- if tools %}
+     {{- '<|im_start|>system\n' }}
+     {%- if messages[0].role == 'system' %}
+         {{- messages[0].content + '\n\n' }}
+     {%- endif %}
+     {{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
+     {%- for tool in tools %}
+         {{- "\n" }}
+         {{- tool | tojson }}
+     {%- endfor %}
+     {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
+ {%- else %}
+     {%- if messages[0].role == 'system' %}
+         {{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }}
+     {%- endif %}
+ {%- endif %}
+ {%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
+ {%- for forward_message in messages %}
+     {%- set index = (messages|length - 1) - loop.index0 %}
+     {%- set message = messages[index] %}
+     {%- set current_content = message.content if message.content is not none else '' %}
+     {%- set tool_start = '<tool_response>' %}
+     {%- set tool_start_length = tool_start|length %}
+     {%- set start_of_message = current_content[:tool_start_length] %}
+     {%- set tool_end = '</tool_response>' %}
+     {%- set tool_end_length = tool_end|length %}
+     {%- set start_pos = (current_content|length) - tool_end_length %}
+     {%- if start_pos < 0 %}
+         {%- set start_pos = 0 %}
+     {%- endif %}
+     {%- set end_of_message = current_content[start_pos:] %}
+     {%- if ns.multi_step_tool and message.role == "user" and not(start_of_message == tool_start and end_of_message == tool_end) %}
+         {%- set ns.multi_step_tool = false %}
+         {%- set ns.last_query_index = index %}
+     {%- endif %}
+ {%- endfor %}
+ {%- for message in messages %}
+     {%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
+         {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
+     {%- elif message.role == "assistant" %}
+         {%- set content = message.content %}
+         {%- set reasoning_content = '' %}
+         {%- if message.reasoning_content is defined and message.reasoning_content is not none %}
+             {%- set reasoning_content = message.reasoning_content %}
+         {%- else %}
+             {%- if '</think>' in message.content %}
+                 {%- set content = (message.content.split('</think>')|last).lstrip('\n') %}
+                 {%- set reasoning_content = (message.content.split('</think>')|first).rstrip('\n') %}
+                 {%- set reasoning_content = (reasoning_content.split('<think>')|last).lstrip('\n') %}
+             {%- endif %}
+         {%- endif %}
+         {%- if loop.index0 > ns.last_query_index %}
+             {%- if loop.last or (not loop.last and reasoning_content) %}
+                 {{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }}
+             {%- else %}
+                 {{- '<|im_start|>' + message.role + '\n' + content }}
+             {%- endif %}
+         {%- else %}
+             {{- '<|im_start|>' + message.role + '\n' + content }}
+         {%- endif %}
+         {%- if message.tool_calls %}
+             {%- for tool_call in message.tool_calls %}
+                 {%- if (loop.first and content) or (not loop.first) %}
+                     {{- '\n' }}
+                 {%- endif %}
+                 {%- if tool_call.function %}
+                     {%- set tool_call = tool_call.function %}
+                 {%- endif %}
+                 {{- '<tool_call>\n{"name": "' }}
+                 {{- tool_call.name }}
+                 {{- '", "arguments": ' }}
+                 {%- if tool_call.arguments is string %}
+                     {{- tool_call.arguments }}
+                 {%- else %}
+                     {{- tool_call.arguments | tojson }}
+                 {%- endif %}
+                 {{- '}\n</tool_call>' }}
+             {%- endfor %}
+         {%- endif %}
+         {{- '<|im_end|>\n' }}
+     {%- elif message.role == "tool" %}
+         {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
+             {{- '<|im_start|>user' }}
+         {%- endif %}
+         {{- '\n<tool_response>\n' }}
+         {{- message.content }}
+         {{- '\n</tool_response>' }}
+         {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
+             {{- '<|im_end|>\n' }}
+         {%- endif %}
+     {%- endif %}
+ {%- endfor %}
+ {%- if add_generation_prompt %}
+     {{- '<|im_start|>assistant\n' }}
+     {%- if enable_thinking is defined and enable_thinking is false %}
+         {{- '<think>\n\n</think>\n\n' }}
+     {%- endif %}
+ {%- endif %}
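To see what this template emits, the minimal sketch below (assuming only that `transformers` is installed and this repo's tokenizer is reachable) renders one user turn with thinking enabled and disabled; the only difference is the empty `<think>` block appended by the template's final branch:

```python
# Minimal sketch: render the chat template above in both modes.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-14B-FP8")
messages = [{"role": "user", "content": "Give me a short introduction to LLMs."}]

# Default (thinking enabled): the prompt ends with '<|im_start|>assistant\n',
# leaving the model free to open its own <think> block.
print(tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))

# enable_thinking=False: the template pre-fills an empty '<think>\n\n</think>',
# which suppresses thinking content in the completion.
print(tok.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=False))
```
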
config.json CHANGED
@@ -4,7 +4,6 @@
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
- "bos_token_id": 151643,
  "eos_token_id": 151645,
  "head_dim": 128,
  "hidden_act": "silu",
@@ -17,23 +16,25 @@
  "num_attention_heads": 40,
  "num_hidden_layers": 40,
  "num_key_value_heads": 8,
+ "pad_token_id": 151654,
+ "quantization_config": {
+   "activation_scheme": "dynamic",
+   "modules_to_not_convert": null,
+   "quant_method": "fp8",
+   "weight_block_size": [
+     128,
+     128
+   ]
+ },
  "rms_norm_eps": 1e-06,
  "rope_scaling": null,
  "rope_theta": 1000000,
  "sliding_window": null,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
- "transformers_version": "4.51.0",
+ "transformers_version": "4.52.0.dev0",
+ "unsloth_fixed": true,
  "use_cache": true,
  "use_sliding_window": false,
- "vocab_size": 151936,
- "quantization_config": {
-   "activation_scheme": "dynamic",
-   "fmt": "e4m3",
-   "quant_method": "fp8",
-   "weight_block_size": [
-     128,
-     128
-   ]
- }
- }
+ "vocab_size": 151936
+ }
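For intuition on `"weight_block_size": [128, 128]`: each 128×128 tile of a weight matrix stores FP8 values plus one higher-precision scale, so an `[N, K]` weight carries a `[⌈N/128⌉, ⌈K/128⌉]` scale tensor. A minimal NumPy sketch of the dequantization math (illustrative only; real inference kernels fuse this into the matmul):

```python
# Illustrative sketch: block-wise dequantization for weight_block_size=[128, 128].
import numpy as np

BLOCK_N = BLOCK_K = 128

def dequantize_blockwise(w_q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """w_q: [N, K] quantized values (float here for simplicity; e4m3 on disk).
    scales: [ceil(N/128), ceil(K/128)] scales, one per 128x128 tile."""
    w = np.empty_like(w_q, dtype=np.float32)
    n, k = w_q.shape
    for i in range(0, n, BLOCK_N):
        for j in range(0, k, BLOCK_K):
            w[i:i+BLOCK_N, j:j+BLOCK_K] = (
                w_q[i:i+BLOCK_N, j:j+BLOCK_K] * scales[i // BLOCK_N, j // BLOCK_K]
            )
    return w

n, k = 5120, 5120  # e.g. one square projection in a 5120-wide model
w_q = np.random.randn(n, k).astype(np.float32)
scales = np.random.rand(-(-n // BLOCK_N), -(-k // BLOCK_K)).astype(np.float32)
print(dequantize_blockwise(w_q, scales).shape)  # (5120, 5120)
```
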
generation_config.json CHANGED
@@ -1,6 +1,14 @@
{
- "_from_model_config": true,
- "eos_token_id": 151645,
- "pad_token_id": 151643,
- "transformers_version": "4.51.3"
+ "bos_token_id": 151643,
+ "do_sample": true,
+ "eos_token_id": [
+   151645,
+   151643
+ ],
+ "max_length": 40960,
+ "pad_token_id": 151654,
+ "temperature": 0.6,
+ "top_k": 20,
+ "top_p": 0.95,
+ "transformers_version": "4.52.0.dev0"
}
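The new defaults encode the recommended thinking-mode sampling settings (sampled decoding with temperature 0.6, top-p 0.95, top-k 20), so a plain `generate()` call picks them up automatically. Spelled out explicitly, generation looks like this minimal sketch (assuming the FP8 checkpoint loads on your hardware):

```python
# Minimal sketch: generation with the defaults from generation_config.json.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-14B-FP8")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-14B-FP8", device_map="auto")

prompt = tok.apply_chat_template(
    [{"role": "user", "content": "Briefly explain FP8 quantization."}],
    tokenize=False, add_generation_prompt=True)
inputs = tok(prompt, return_tensors="pt").to(model.device)

# Equivalent to relying on the config: do_sample=True, T=0.6, top_p=0.95, top_k=20.
out = model.generate(**inputs, do_sample=True, temperature=0.6,
                     top_p=0.95, top_k=20, max_new_tokens=1024)
print(tok.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
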
model.safetensors.index.json CHANGED
@@ -1,6 +1,6 @@
{
  "metadata": {
-   "total_size": 16339276800
+   "total_size": 16326169600
  },
  "weight_map": {
    "lm_head.weight": "model-00004-of-00004.safetensors",
special_tokens_map.json CHANGED
@@ -22,7 +22,7 @@
    "single_word": false
  },
  "pad_token": {
-   "content": "<|endoftext|>",
+   "content": "<|vision_pad|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
tokenizer_config.json CHANGED
@@ -227,13 +227,15 @@
    "<|video_pad|>"
  ],
  "bos_token": null,
- "chat_template": "{%- if tools %}\n {{- '<|im_start|>system\\n' }}\n {%- if messages[0].role == 'system' %}\n {{- messages[0].content + '\\n\\n' }}\n {%- endif %}\n {{- \"# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n {%- for tool in tools %}\n {{- \"\\n\" }}\n {{- tool | tojson }}\n {%- endfor %}\n {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n {%- if messages[0].role == 'system' %}\n {{- '<|im_start|>system\\n' + messages[0].content + '<|im_end|>\\n' }}\n {%- endif %}\n{%- endif %}\n{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}\n{%- for message in messages[::-1] %}\n {%- set index = (messages|length - 1) - loop.index0 %}\n {%- if ns.multi_step_tool and message.role == \"user\" and not(message.content.startswith('<tool_response>') and message.content.endswith('</tool_response>')) %}\n {%- set ns.multi_step_tool = false %}\n {%- set ns.last_query_index = index %}\n {%- endif %}\n{%- endfor %}\n{%- for message in messages %}\n {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) %}\n {{- '<|im_start|>' + message.role + '\\n' + message.content + '<|im_end|>' + '\\n' }}\n {%- elif message.role == \"assistant\" %}\n {%- set content = message.content %}\n {%- set reasoning_content = '' %}\n {%- if message.reasoning_content is defined and message.reasoning_content is not none %}\n {%- set reasoning_content = message.reasoning_content %}\n {%- else %}\n {%- if '</think>' in message.content %}\n {%- set content = message.content.split('</think>')[-1].lstrip('\\n') %}\n {%- set reasoning_content = message.content.split('</think>')[0].rstrip('\\n').split('<think>')[-1].lstrip('\\n') %}\n {%- endif %}\n {%- endif %}\n {%- if loop.index0 > ns.last_query_index %}\n {%- if loop.last or (not loop.last and reasoning_content) %}\n {{- '<|im_start|>' + message.role + '\\n<think>\\n' + reasoning_content.strip('\\n') + '\\n</think>\\n\\n' + content.lstrip('\\n') }}\n {%- else %}\n {{- '<|im_start|>' + message.role + '\\n' + content }}\n {%- endif %}\n {%- else %}\n {{- '<|im_start|>' + message.role + '\\n' + content }}\n {%- endif %}\n {%- if message.tool_calls %}\n {%- for tool_call in message.tool_calls %}\n {%- if (loop.first and content) or (not loop.first) %}\n {{- '\\n' }}\n {%- endif %}\n {%- if tool_call.function %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {{- '<tool_call>\\n{\"name\": \"' }}\n {{- tool_call.name }}\n {{- '\", \"arguments\": ' }}\n {%- if tool_call.arguments is string %}\n {{- tool_call.arguments }}\n {%- else %}\n {{- tool_call.arguments | tojson }}\n {%- endif %}\n {{- '}\\n</tool_call>' }}\n {%- endfor %}\n {%- endif %}\n {{- '<|im_end|>\\n' }}\n {%- elif message.role == \"tool\" %}\n {%- if loop.first or (messages[loop.index0 - 1].role != \"tool\") %}\n {{- '<|im_start|>user' }}\n {%- endif %}\n {{- '\\n<tool_response>\\n' }}\n {{- message.content }}\n {{- '\\n</tool_response>' }}\n {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n {{- '<|im_end|>\\n' }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n {%- if enable_thinking is defined and enable_thinking is false %}\n {{- '<think>\\n\\n</think>\\n\\n' }}\n {%- endif %}\n{%- endif %}",
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|im_end|>",
  "errors": "replace",
- "model_max_length": 131072,
- "pad_token": "<|endoftext|>",
+ "extra_special_tokens": {},
+ "model_max_length": 40960,
+ "pad_token": "<|vision_pad|>",
+ "padding_side": "left",
  "split_special_tokens": false,
  "tokenizer_class": "Qwen2Tokenizer",
- "unk_token": null
+ "unk_token": null,
+ "chat_template": "{%- if tools %}\n {{- '<|im_start|>system\\n' }}\n {%- if messages[0].role == 'system' %}\n {{- messages[0].content + '\\n\\n' }}\n {%- endif %}\n {{- \"# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n {%- for tool in tools %}\n {{- \"\\n\" }}\n {{- tool | tojson }}\n {%- endfor %}\n {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n {%- if messages[0].role == 'system' %}\n {{- '<|im_start|>system\\n' + messages[0].content + '<|im_end|>\\n' }}\n {%- endif %}\n{%- endif %}\n{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}\n{%- for forward_message in messages %}\n {%- set index = (messages|length - 1) - loop.index0 %}\n {%- set message = messages[index] %}\n {%- set current_content = message.content if message.content is not none else '' %}\n {%- set tool_start = '<tool_response>' %}\n {%- set tool_start_length = tool_start|length %}\n {%- set start_of_message = current_content[:tool_start_length] %}\n {%- set tool_end = '</tool_response>' %}\n {%- set tool_end_length = tool_end|length %}\n {%- set start_pos = (current_content|length) - tool_end_length %}\n {%- if start_pos < 0 %}\n {%- set start_pos = 0 %}\n {%- endif %}\n {%- set end_of_message = current_content[start_pos:] %}\n {%- if ns.multi_step_tool and message.role == \"user\" and not(start_of_message == tool_start and end_of_message == tool_end) %}\n {%- set ns.multi_step_tool = false %}\n {%- set ns.last_query_index = index %}\n {%- endif %}\n{%- endfor %}\n{%- for message in messages %}\n {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) %}\n {{- '<|im_start|>' + message.role + '\\n' + message.content + '<|im_end|>' + '\\n' }}\n {%- elif message.role == \"assistant\" %}\n {%- set content = message.content %}\n {%- set reasoning_content = '' %}\n {%- if message.reasoning_content is defined and message.reasoning_content is not none %}\n {%- set reasoning_content = message.reasoning_content %}\n {%- else %}\n {%- if '</think>' in message.content %}\n {%- set content = (message.content.split('</think>')|last).lstrip('\\n') %}\n {%- set reasoning_content = (message.content.split('</think>')|first).rstrip('\\n') %}\n {%- set reasoning_content = (reasoning_content.split('<think>')|last).lstrip('\\n') %}\n {%- endif %}\n {%- endif %}\n {%- if loop.index0 > ns.last_query_index %}\n {%- if loop.last or (not loop.last and reasoning_content) %}\n {{- '<|im_start|>' + message.role + '\\n<think>\\n' + reasoning_content.strip('\\n') + '\\n</think>\\n\\n' + content.lstrip('\\n') }}\n {%- else %}\n {{- '<|im_start|>' + message.role + '\\n' + content }}\n {%- endif %}\n {%- else %}\n {{- '<|im_start|>' + message.role + '\\n' + content }}\n {%- endif %}\n {%- if message.tool_calls %}\n {%- for tool_call in message.tool_calls %}\n {%- if (loop.first and content) or (not loop.first) %}\n {{- '\\n' }}\n {%- endif %}\n {%- if tool_call.function %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {{- '<tool_call>\\n{\"name\": \"' }}\n {{- tool_call.name }}\n {{- '\", \"arguments\": ' }}\n {%- if tool_call.arguments is string %}\n {{- tool_call.arguments }}\n {%- else %}\n {{- tool_call.arguments | tojson }}\n {%- endif %}\n {{- '}\\n</tool_call>' }}\n {%- endfor %}\n {%- endif %}\n {{- '<|im_end|>\\n' }}\n {%- elif message.role == \"tool\" %}\n {%- if loop.first or (messages[loop.index0 - 1].role != \"tool\") %}\n {{- '<|im_start|>user' }}\n {%- endif %}\n {{- '\\n<tool_response>\\n' }}\n {{- message.content }}\n {{- '\\n</tool_response>' }}\n {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n {{- '<|im_end|>\\n' }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n {%- if enable_thinking is defined and enable_thinking is false %}\n {{- '<think>\\n\\n</think>\\n\\n' }}\n {%- endif %}\n{%- endif %}"
}
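The switch to `"padding_side": "left"` (with the dedicated `<|vision_pad|>` pad token) matters for batched generation with a decoder-only model: new tokens are appended on the right, so padding must sit on the left to keep each prompt's last token adjacent to its completion. A minimal sketch:

```python
# Minimal sketch: left padding keeps prompts flush against generated tokens.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-14B-FP8")
batch = tok(["Hi", "A much longer prompt about Qwen3 FP8 inference"],
            return_tensors="pt", padding=True)

# With padding_side="left" (from tokenizer_config.json), pad ids come first,
# so both sequences end at their real final token.
print(tok.pad_token, tok.pad_token_id)  # <|vision_pad|> 151654
print(batch["input_ids"][0][:5])        # leading pad ids for the short prompt
```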