LG-AI-EXAONE committed
Commit 2aa2227 · 1 parent: f689186

Update README.md

Files changed (1): README.md (+5 -5)
README.md CHANGED
@@ -344,7 +344,7 @@ The following tables show the evaluation results of each model, with reasoning a
   <td align="center">64.7</td>
   </tr>
   <tr>
-  <td >Tau-bench (Airline)</td>
+  <td >Tau-Bench (Airline)</td>
   <td align="center">51.5</td>
   <td align="center">N/A</td>
   <td align="center">38.5</td>
@@ -353,7 +353,7 @@ The following tables show the evaluation results of each model, with reasoning a
   <td align="center">53.5</td>
   </tr>
   <tr>
-  <td >Tau-bench (Retail)</td>
+  <td >Tau-Bench (Retail)</td>
   <td align="center">62.8</td>
   <td align="center">N/A</td>
   <td align="center">10.2</td>
@@ -419,7 +419,7 @@ The following tables show the evaluation results of each model, with reasoning a
   <th>EXAONE 4.0 32B </th>
   <th>Phi 4</th>
   <th>Mistral-Small-2506</th>
-  <th>Gemma 3 27B</th>
+  <th>Gemma3 27B</th>
   <th>Qwen3 32B </th>
   <th>Qwen3 235B </th>
   <th>Llama-4-Maverick</th>
@@ -718,7 +718,7 @@ The following tables show the evaluation results of each model, with reasoning a
   <th>EXAONE Deep 2.4B</th>
   <th>Qwen 3 0.6B </th>
   <th>Qwen 3 1.7B </th>
-  <th>SmolLM3 3B </th>
+  <th>SmolLM 3 3B </th>
   </tr>
   <tr>
   <td align="center">Model Size</td>
@@ -898,7 +898,7 @@ The following tables show the evaluation results of each model, with reasoning a
   <th>Qwen 3 0.6B </th>
   <th>Gemma 3 1B</th>
   <th>Qwen 3 1.7B </th>
-  <th>SmolLM3 3B </th>
+  <th>SmolLM 3 3B </th>
   </tr>
   <tr>
   <td align="center">Model Size</td>