---
license: other
license_name: exaone
license_link: LICENSE
language:
- en
- ko
- es
tags:
- lg-ai
- exaone
- exaone-4.0
pipeline_tag: text-generation
library_name: transformers
---
# EXAONE-4.0-32B GGUF Models
## Model Generation Details
This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`bf9087f5`](https://github.com/ggerganov/llama.cpp/commit/bf9087f59aab940cf312b85a67067ce33d9e365a).
---
## Quantization Beyond the IMatrix
I've been experimenting with a new quantization approach that selectively elevates the precision of key layers beyond what the default IMatrix configuration provides.
In my testing, standard IMatrix quantization underperforms at lower bit depths, especially with Mixture of Experts (MoE) models. To address this, I'm using the `--tensor-type` option in `llama.cpp` to manually "bump" important layers to higher precision. You can see the implementation here:
[Layer bumping with llama.cpp](https://github.com/Mungert69/GGUFModelBuilder/blob/main/model-converter/tensor_list_builder.py)
While this does increase model file size, it significantly improves precision for a given quantization level.
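For illustration, here is a minimal sketch of what such an invocation can look like, assuming a recent llama.cpp build whose `llama-quantize` binary accepts repeated `--tensor-type TENSOR=TYPE` overrides; the tensor patterns, quant types, and file names below are placeholders, not my production settings:
```python
# Sketch: bump selected tensors to higher precision during GGUF quantization.
# Assumes llama-quantize supports `--tensor-type` (recent llama.cpp builds).
import subprocess

# Hypothetical overrides: attention V and FFN down projections are often
# among the most quantization-sensitive tensors.
overrides = {"attn_v": "q6_k", "ffn_down": "q5_k"}

cmd = ["./llama-quantize", "--imatrix", "imatrix.dat"]
for tensor, qtype in overrides.items():
    cmd += ["--tensor-type", f"{tensor}={qtype}"]
cmd += ["model-f16.gguf", "model-q4_k_m.gguf", "q4_k_m"]

subprocess.run(cmd, check=True)  # all other tensors stay at the base quant level
```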
### **I'd love your feedback: have you tried this? How does it perform for you?**
---
📢 License Updated! We are pleased to announce our more flexible licensing terms 🤗
# EXAONE-4.0-32B
## Introduction
We introduce **EXAONE 4.0**, which integrates a **Non-reasoning mode** and **Reasoning mode** to achieve both the excellent usability of [EXAONE 3.5](https://github.com/LG-AI-EXAONE/EXAONE-3.5) and the advanced reasoning abilities of [EXAONE Deep](https://github.com/LG-AI-EXAONE/EXAONE-Deep). To pave the way for the agentic AI era, EXAONE 4.0 incorporates essential features such as agentic tool use, and its multilingual capabilities are extended
to support Spanish in addition to English and Korean.
The EXAONE 4.0 model series consists of two sizes: a mid-size **32B** model optimized for high performance, and a small-size **1.2B** model designed for on-device applications.
In the EXAONE 4.0 architecture, we introduce the following changes compared to previous EXAONE models:
1. **Hybrid Attention**: For the 32B model, we adopt a hybrid attention scheme, which combines *Local attention (sliding window attention)* with *Global attention (full attention)* in a 3:1 ratio. We do not use RoPE (Rotary Positional Embedding) for global attention, for better global context understanding.
2. **QK-Reorder-Norm**: We reorder the LayerNorm position from the traditional Pre-LN scheme by applying LayerNorm directly to the attention and MLP outputs, and we add RMS normalization right after the Q and K projections. This yields better performance on downstream tasks despite consuming more computation. A sketch of this reordering follows this list.
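For intuition, here is a minimal PyTorch sketch of the QK-Reorder-Norm idea; dimensions and module names are hypothetical, and it illustrates only the norm placement (RMSNorm after the Q/K projections, LayerNorm on the block output), not the actual EXAONE 4.0 implementation:
```python
# Minimal sketch of QK-Reorder-Norm (illustrative only).
# Requires PyTorch >= 2.4 for nn.RMSNorm.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QKReorderAttention(nn.Module):
    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model, bias=False)
        self.k_proj = nn.Linear(d_model, d_model, bias=False)
        self.v_proj = nn.Linear(d_model, d_model, bias=False)
        self.o_proj = nn.Linear(d_model, d_model, bias=False)
        # RMS normalization right after the Q and K projections
        self.q_norm = nn.RMSNorm(self.d_head)
        self.k_norm = nn.RMSNorm(self.d_head)
        # LayerNorm applied to the attention *output*, not the block input (Pre-LN)
        self.out_norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        q = self.q_proj(x).view(b, t, self.n_heads, self.d_head)
        k = self.k_proj(x).view(b, t, self.n_heads, self.d_head)
        v = self.v_proj(x).view(b, t, self.n_heads, self.d_head)
        q, k = self.q_norm(q), self.k_norm(k)  # QK-norm
        attn = F.scaled_dot_product_attention(
            q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2)
        ).transpose(1, 2).reshape(b, t, d)
        # residual connection around the normalized attention output
        return x + self.out_norm(self.o_proj(attn))

print(QKReorderAttention()(torch.randn(2, 8, 64)).shape)  # torch.Size([2, 8, 64])
```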
For more details, please refer to our [technical report](https://arxiv.org/abs/2507.11407), [HuggingFace paper](https://huggingface.co/papers/2507.11407), [blog](https://www.lgresearch.ai/blog/view?seq=576), and [GitHub](https://github.com/LG-AI-EXAONE/EXAONE-4.0).
### Model Configuration
- Number of Parameters (without embeddings): 30.95B
- Number of Layers: 64
- Number of Attention Heads: GQA with 40-heads and 8-KV heads
- Vocab Size: 102,400
- Context Length: 131,072 tokens
## Quickstart
You need to install a forked version of the `transformers` library, available from our [PR](https://github.com/huggingface/transformers/pull/39129).
Once this PR is merged and released, we will update this section.
You can install the forked version of `transformers` with EXAONE 4.0 support using the following command:
```bash
pip install git+https://github.com/lgai-exaone/transformers@add-exaone4
```
### Non-reasoning mode
For general use, you can use the EXAONE 4.0 models with the following example:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "LGAI-EXAONE/EXAONE-4.0-32B"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="bfloat16",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# choose your prompt
prompt = "Explain how wonderful you are"
prompt = "Explica lo increíble que eres"
prompt = "너가 얼마나 대단한지 설명해 봐"
messages = [
{"role": "user", "content": prompt}
]
input_ids = tokenizer.apply_chat_template(
messages,
tokenize=True,
add_generation_prompt=True,
return_tensors="pt"
)
output = model.generate(
input_ids.to(model.device),
max_new_tokens=128,
do_sample=False,
)
print(tokenizer.decode(output[0]))
```
### Reasoning mode
The EXAONE 4.0 models have reasoning capabilities for handling complex problems. You can activate reasoning mode by passing the `enable_thinking=True` argument to the tokenizer, which opens a reasoning block that starts with a `<think>` tag without closing it.
```python
messages = [
{"role": "user", "content": "Which one is bigger, 3.12 vs 3.9?"}
]
input_ids = tokenizer.apply_chat_template(
messages,
tokenize=True,
add_generation_prompt=True,
return_tensors="pt",
enable_thinking=True,
)
output = model.generate(
input_ids.to(model.device),
max_new_tokens=128,
do_sample=True,
temperature=0.6,
top_p=0.95
)
print(tokenizer.decode(output[0]))
```
> [!IMPORTANT]
> Generation in reasoning mode is sensitive to sampling parameters, so please refer to the [Usage Guideline](#usage-guideline) for better quality.
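Since the chain of thought is enclosed in a `<think>...</think>` block, you may want to separate it from the final answer. A minimal sketch, assuming the model emits a closing `</think>` tag before answering (inspect the raw output if your decoding settings strip these tags as special tokens):
```python
# Split the decoded output into the reasoning block and the final answer.
text = tokenizer.decode(output[0])

reasoning, sep, answer = text.partition("</think>")
if sep:  # a closed reasoning block was found
    print("Reasoning:", reasoning.split("<think>")[-1].strip())
    print("Answer:", answer.strip())
else:  # no closing tag yet (e.g., generation was cut off)
    print("Answer:", text.strip())
```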
### Agentic tool use
The EXAONE 4.0 models can be used as agents with their tool calling capabilities. You can provide tool schemas to the model for effective tool calling.
```python
import random
def roll_dice(max_num: int):
return random.randint(1, max_num)
tools = [
{
"type": "function",
"function": {
"name": "roll_dice",
"description": "Roll a dice with the number 1 to N. User can select the number N.",
"parameters": {
"type": "object",
"required": ["max_num"],
"properties": {
"max_num": {
"type": "int",
"description": "Max number of the dice"
}
}
}
}
}
]
messages = [
{"role": "user", "content": "Roll D6 dice twice!"}
]
input_ids = tokenizer.apply_chat_template(
messages,
tokenize=True,
add_generation_prompt=True,
return_tensors="pt",
tools=tools,
)
output = model.generate(
input_ids.to(model.device),
max_new_tokens=1024,
do_sample=True,
temperature=0.6,
top_p=0.95,
)
print(tokenizer.decode(output[0]))
```
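After the model emits a tool call, the usual loop is: parse the call, execute the function, append the result as a `tool` message, and generate again for the final answer. Here is a minimal sketch of that second turn, assuming you have already parsed `{"max_num": 6}` out of the generated tool-call markup; the exact markup, and the message schema the template expects, are defined by the model's chat template, so verify before relying on this:
```python
import json

# Hypothetical parsed call; in practice, extract this from the model output.
call = {"name": "roll_dice", "arguments": {"max_num": 6}}
result = roll_dice(**call["arguments"])

# Record the call and its result using the common transformers chat conventions.
messages.append({"role": "assistant", "tool_calls": [{
    "type": "function",
    "function": {"name": call["name"], "arguments": json.dumps(call["arguments"])},
}]})
messages.append({"role": "tool", "name": call["name"], "content": str(result)})

input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True,
    return_tensors="pt", tools=tools,
)
output = model.generate(input_ids.to(model.device), max_new_tokens=256)
print(tokenizer.decode(output[0]))
```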
## Deployment
### TensorRT-LLM
TensorRT-LLM officially supports EXAONE 4.0 models in its latest commits. Until this support is included in an official release, you need to clone the TensorRT-LLM repository and build it from source.
```bash
git clone https://github.com/NVIDIA/TensorRT-LLM.git
```
After cloning the repository, you need to build the source for installation. Please refer to [the official documentation](https://nvidia.github.io/TensorRT-LLM/installation/build-from-source-linux.html) for a guide to build the TensorRT-LLM environment.
You can run the TensorRT-LLM server with the following steps:
1. Write an extra configuration YAML file
```yaml
# extra_llm_api_config.yaml
kv_cache_config:
enable_block_reuse: false
```
2. Run the server with the configuration
```bash
trtllm-serve serve [MODEL_PATH] --backend pytorch --extra_llm_api_options extra_llm_api_config.yaml
```
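Once the server is up, you can query its OpenAI-compatible chat completions endpoint. A minimal sketch, assuming the default host and port (`localhost:8000`) and that the served model name matches your `[MODEL_PATH]`:
```python
import requests

# Hypothetical request against trtllm-serve's OpenAI-compatible endpoint;
# adjust host, port, and model name to your deployment.
resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "EXAONE-4.0-32B",  # adjust to your served model name
        "messages": [{"role": "user", "content": "Which one is bigger, 3.12 vs 3.9?"}],
        "temperature": 0.6,
        "top_p": 0.95,
        "max_tokens": 256,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```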
For more details, please refer to the EXAONE [documentation](https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/models/core/exaone) in the TensorRT-LLM repository.
> [!NOTE]
> Other inference engines, including `vllm` and `sglang`, do not officially support EXAONE 4.0 yet. We will update this section as soon as they add support.
## Performance
The following tables show the evaluation results of each model in reasoning and non-reasoning modes. The evaluation details can be found in the [technical report](https://arxiv.org/abs/2507.11407).
- ✅ denotes that the model has hybrid reasoning capability, evaluated in reasoning or non-reasoning mode depending on the purpose.
- To assess Korean **practical** and **professional** knowledge, we adopt both the [KMMLU-Redux](https://huggingface.co/datasets/LGAI-EXAONE/KMMLU-Redux) and [KMMLU-Pro](https://huggingface.co/datasets/LGAI-EXAONE/KMMLU-Pro) benchmarks. Both datasets are publicly released!
### 32B Reasoning Mode
| Category | Benchmark | EXAONE 4.0 32B | Phi 4 reasoning-plus | Magistral Small-2506 | Qwen 3 32B | Qwen 3 235B | DeepSeek R1-0528 |
|---|---|---|---|---|---|---|---|
| | Model Size | 32.0B | 14.7B | 23.6B | 32.8B | 235B | 671B |
| | Hybrid Reasoning | ✅ | | | ✅ | ✅ | |
| World Knowledge | MMLU-Redux | 92.3 | 90.8 | 86.8 | 90.9 | 92.7 | 93.4 |
| | MMLU-Pro | 81.8 | 76.0 | 73.4 | 80.0 | 83.0 | 85.0 |
| | GPQA-Diamond | 75.4 | 68.9 | 68.2 | 68.4 | 71.1 | 81.0 |
| Math/Coding | AIME 2025 | 85.3 | 78.0 | 62.8 | 72.9 | 81.5 | 87.5 |
| | HMMT Feb 2025 | 72.9 | 53.6 | 43.5 | 50.4 | 62.5 | 79.4 |
| | LiveCodeBench v5 | 72.6 | 51.7 | 55.8 | 65.7 | 70.7 | 75.2 |
| | LiveCodeBench v6 | 66.7 | 47.1 | 47.4 | 60.1 | 58.9 | 70.3 |
| Instruction Following | IFEval | 83.7 | 84.9 | 37.9 | 85.0 | 83.4 | 80.8 |
| | Multi-IF (EN) | 73.5 | 56.1 | 27.4 | 73.4 | 73.4 | 72.0 |
| Agentic Tool Use | BFCL-v3 | 63.9 | N/A | 40.4 | 70.3 | 70.8 | 64.7 |
| | Tau-bench (Airline) | 51.5 | N/A | 38.5 | 34.5 | 37.5 | 53.5 |
| | Tau-bench (Retail) | 62.8 | N/A | 10.2 | 55.2 | 58.3 | 63.9 |
| Multilinguality | KMMLU-Pro | 67.7 | 55.8 | 51.5 | 61.4 | 68.1 | 71.7 |
| | KMMLU-Redux | 72.7 | 62.7 | 54.6 | 67.5 | 74.5 | 77.0 |
| | KSM | 87.6 | 79.8 | 71.9 | 82.8 | 86.2 | 86.7 |
| | MMMLU (ES) | 85.6 | 84.3 | 68.9 | 82.8 | 86.7 | 88.2 |
| | MATH500 (ES) | 95.8 | 94.2 | 83.5 | 94.3 | 95.1 | 96.0 |
### 32B Non-Reasoning Mode
| Category | Benchmark | EXAONE 4.0 32B | Phi 4 | Mistral-Small-2506 | Gemma 3 27B | Qwen3 32B | Qwen3 235B | Llama-4-Maverick | DeepSeek V3-0324 |
|---|---|---|---|---|---|---|---|---|---|
| | Model Size | 32.0B | 14.7B | 24.0B | 27.4B | 32.8B | 235B | 402B | 671B |
| | Hybrid Reasoning | ✅ | | | | ✅ | ✅ | | |
| World Knowledge | MMLU-Redux | 89.8 | 88.3 | 85.9 | 85.0 | 85.7 | 89.2 | 92.3 | 92.3 |
| | MMLU-Pro | 77.6 | 70.4 | 69.1 | 67.5 | 74.4 | 77.4 | 80.5 | 81.2 |
| | GPQA-Diamond | 63.7 | 56.1 | 46.1 | 42.4 | 54.6 | 62.9 | 69.8 | 68.4 |
| Math/Coding | AIME 2025 | 35.9 | 17.8 | 30.2 | 23.8 | 20.2 | 24.7 | 18.0 | 50.0 |
| | HMMT Feb 2025 | 21.8 | 4.0 | 16.9 | 10.3 | 9.8 | 11.9 | 7.3 | 29.2 |
| | LiveCodeBench v5 | 43.3 | 24.6 | 25.8 | 27.5 | 31.3 | 35.3 | 43.4 | 46.7 |
| | LiveCodeBench v6 | 43.1 | 27.4 | 26.9 | 29.7 | 28.0 | 31.4 | 32.7 | 44.0 |
| Instruction Following | IFEval | 84.8 | 63.0 | 77.8 | 82.6 | 83.2 | 83.2 | 85.4 | 81.2 |
| | Multi-IF (EN) | 71.6 | 47.7 | 63.2 | 72.1 | 71.9 | 72.5 | 77.9 | 68.3 |
| Long Context | HELMET | 58.3 | N/A | 61.9 | 58.3 | 54.5 | 63.3 | 13.7 | N/A |
| | RULER | 88.2 | N/A | 71.8 | 66.0 | 85.6 | 90.6 | 2.9 | N/A |
| | LongBench v1 | 48.1 | N/A | 51.5 | 51.5 | 44.2 | 45.3 | 34.7 | N/A |
| Agentic Tool Use | BFCL-v3 | 65.2 | N/A | 57.7 | N/A | 63.0 | 68.0 | 52.9 | 63.8 |
| | Tau-Bench (Airline) | 25.5 | N/A | 36.1 | N/A | 16.0 | 27.0 | 38.0 | 40.5 |
| | Tau-Bench (Retail) | 55.9 | N/A | 35.5 | N/A | 47.6 | 56.5 | 6.5 | 68.5 |
| Multilinguality | KMMLU-Pro | 60.0 | 44.8 | 51.0 | 50.7 | 58.3 | 64.4 | 68.8 | 67.3 |
| | KMMLU-Redux | 64.8 | 50.1 | 53.6 | 53.3 | 64.4 | 71.7 | 76.9 | 72.2 |
| | KSM | 59.8 | 29.1 | 35.5 | 36.1 | 41.3 | 46.6 | 40.6 | 63.5 |
| | Ko-LongBench | 76.9 | N/A | 55.4 | 72.0 | 73.9 | 74.6 | 65.6 | N/A |
| | MMMLU (ES) | 80.6 | 81.2 | 78.4 | 78.7 | 82.1 | 83.7 | 86.9 | 86.7 |
| | MATH500 (ES) | 87.3 | 78.2 | 83.4 | 86.8 | 84.7 | 87.2 | 78.7 | 89.2 |
| | WMT24++ (ES) | 90.7 | 89.3 | 92.2 | 93.1 | 91.4 | 92.9 | 92.7 | 94.3 |
### 1.2B Reasoning Mode
| Category | Benchmark | EXAONE 4.0 1.2B | EXAONE Deep 2.4B | Qwen 3 0.6B | Qwen 3 1.7B | SmolLM3 3B |
|---|---|---|---|---|---|---|
| | Model Size | 1.28B | 2.41B | 596M | 1.72B | 3.08B |
| | Hybrid Reasoning | ✅ | | ✅ | ✅ | ✅ |
| World Knowledge | MMLU-Redux | 71.5 | 68.9 | 55.6 | 73.9 | 74.8 |
| | MMLU-Pro | 59.3 | 56.4 | 38.3 | 57.7 | 57.8 |
| | GPQA-Diamond | 52.0 | 54.3 | 27.9 | 40.1 | 41.7 |
| Math/Coding | AIME 2025 | 45.2 | 47.9 | 15.1 | 36.8 | 36.7 |
| | HMMT Feb 2025 | 34.0 | 27.3 | 7.0 | 21.8 | 26.0 |
| | LiveCodeBench v5 | 44.6 | 47.2 | 12.3 | 33.2 | 27.6 |
| | LiveCodeBench v6 | 45.3 | 43.1 | 16.4 | 29.9 | 29.1 |
| Instruction Following | IFEval | 67.8 | 71.0 | 59.2 | 72.5 | 71.2 |
| | Multi-IF (EN) | 53.9 | 54.5 | 37.5 | 53.5 | 47.5 |
| Agentic Tool Use | BFCL-v3 | 52.9 | N/A | 46.4 | 56.6 | 37.1 |
| | Tau-Bench (Airline) | 20.5 | N/A | 22.0 | 31.0 | 37.0 |
| | Tau-Bench (Retail) | 28.1 | N/A | 3.3 | 6.5 | 5.4 |
| Multilinguality | KMMLU-Pro | 42.7 | 24.6 | 21.6 | 38.3 | 30.5 |
| | KMMLU-Redux | 46.9 | 25.0 | 24.5 | 38.0 | 33.7 |
| | KSM | 60.6 | 60.9 | 22.8 | 52.9 | 49.7 |
| | MMMLU (ES) | 62.4 | 51.4 | 48.8 | 64.5 | 64.7 |
| | MATH500 (ES) | 88.8 | 84.5 | 70.6 | 87.9 | 87.5 |
### 1.2B Non-Reasoning Mode
| Category | Benchmark | EXAONE 4.0 1.2B | Qwen 3 0.6B | Gemma 3 1B | Qwen 3 1.7B | SmolLM3 3B |
|---|---|---|---|---|---|---|
| | Model Size | 1.28B | 596M | 1.00B | 1.72B | 3.08B |
| | Hybrid Reasoning | ✅ | ✅ | | ✅ | ✅ |
| World Knowledge | MMLU-Redux | 66.9 | 44.6 | 40.9 | 63.4 | 65.0 |
| | MMLU-Pro | 52.0 | 26.6 | 14.7 | 43.7 | 43.6 |
| | GPQA-Diamond | 40.1 | 22.9 | 19.2 | 28.6 | 35.7 |
| Math/Coding | AIME 2025 | 23.5 | 2.6 | 2.1 | 9.8 | 9.3 |
| | HMMT Feb 2025 | 13.0 | 1.0 | 1.5 | 5.1 | 4.7 |
| | LiveCodeBench v5 | 26.4 | 3.6 | 1.8 | 11.6 | 11.4 |
| | LiveCodeBench v6 | 30.1 | 6.9 | 2.3 | 16.6 | 20.6 |
| Instruction Following | IFEval | 74.7 | 54.5 | 80.2 | 68.2 | 76.7 |
| | Multi-IF (EN) | 62.1 | 37.5 | 32.5 | 51.0 | 51.9 |
| Long Context | HELMET | 41.2 | 21.1 | N/A | 33.8 | 38.6 |
| | RULER | 77.4 | 55.1 | N/A | 65.9 | 66.3 |
| | LongBench v1 | 36.9 | 32.4 | N/A | 41.9 | 39.9 |
| Agentic Tool Use | BFCL-v3 | 55.7 | 44.1 | N/A | 52.2 | 47.3 |
| | Tau-Bench (Airline) | 10.0 | 31.5 | N/A | 13.5 | 38.0 |
| | Tau-Bench (Retail) | 21.7 | 5.7 | N/A | 4.6 | 6.7 |
| Multilinguality | KMMLU-Pro | 37.5 | 24.6 | 9.7 | 29.5 | 27.6 |
| | KMMLU-Redux | 40.4 | 22.8 | 19.4 | 29.8 | 26.4 |
| | KSM | 26.3 | 0.1 | 22.8 | 16.3 | 16.1 |
| | Ko-LongBench | 69.8 | 16.4 | N/A | 57.1 | 15.7 |
| | MMMLU (ES) | 54.6 | 39.5 | 35.9 | 54.3 | 55.1 |
| | MATH500 (ES) | 71.2 | 38.5 | 41.2 | 66.0 | 62.4 |
| | WMT24++ (ES) | 65.9 | 58.2 | 76.9 | 76.7 | 84.0 |
## Usage Guideline
> [!IMPORTANT]
> To achieve the expected performance, we recommend using the following configurations:
>
> - For non-reasoning mode, we recommend using a lower temperature, such as `temperature<0.6`, for better performance.
> - For reasoning mode (using the `<think>` block), we recommend `temperature=0.6` and `top_p=0.95`.
> - If you observe model degeneration, we recommend using `presence_penalty=1.5`.
> - For general Korean conversation with the 1.2B model, we suggest `temperature=0.1` to avoid code-switching.
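For quick reference, here are the recommendations above expressed as sampling-parameter dicts; a sketch using OpenAI-style names (note that `presence_penalty` is a server-side parameter in OpenAI-compatible APIs and is not an argument of transformers' `model.generate`):
```python
# Recommended sampling settings from the guideline above (illustrative).
non_reasoning = {"temperature": 0.5}                 # any temperature below 0.6
reasoning = {"temperature": 0.6, "top_p": 0.95}      # with the <think> block
anti_degeneration = {"presence_penalty": 1.5}        # only if outputs degenerate
korean_chat_1_2b = {"temperature": 0.1}              # 1.2B Korean conversation
```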
## Limitation
The EXAONE language model has certain limitations and may occasionally generate inappropriate responses. The language model generates responses based on output token probabilities, which are learned from training data. While we have made every effort to exclude personal, harmful, and biased information from the training data, some problematic content may still be included, potentially leading to undesirable responses. Please note that text generated by the EXAONE language model does not reflect the views of LG AI Research.
- Inappropriate answers may be generated, which contain personal, harmful or other inappropriate information.
- Biased responses may be generated, which are associated with age, gender, race, and so on.
- The generated responses rely heavily on statistics from the training data, which can result in semantically or syntactically incorrect sentences.
- Since the model does not reflect the latest information, the responses may be false or contradictory.
LG AI Research strives to reduce potential risks that may arise from EXAONE language models. Users are not allowed to engage in any malicious activities (e.g., keying in illegal information) that may induce the creation of inappropriate outputs violating LG AI's ethical principles when using EXAONE language models.
## License
The model is licensed under [EXAONE AI Model License Agreement 1.2 - NC](./LICENSE)
> [!NOTE]
> The main differences from the older version are as follows:
> - We removed **the claim of model output ownership** from the license.
> - We restrict the model use **against the development of models that compete with EXAONE**.
> - We allow the model to be used for **educational purposes**, not just research.
## Citation
```
@article{exaone-4.0,
title={EXAONE 4.0: Unified Large Language Models Integrating Non-reasoning and Reasoning Modes},
author={{LG AI Research}},
journal={arXiv preprint arXiv:2507.11407},
year={2025}
}
```
## Contact
LG AI Research Technical Support: contact_us@lgresearch.ai
---
# If you find these models useful
Help me test my **AI-Powered Quantum Network Monitor Assistant** with **quantum-ready security checks**:
[Quantum Network Monitor](https://readyforquantum.com/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)
The full open-source code for the Quantum Network Monitor service is available in my GitHub repos (repos with NetworkMonitor in the name): [Source Code Quantum Network Monitor](https://github.com/Mungert69). You will also find the code I use to quantize the models, if you want to do it yourself, in [GGUFModelBuilder](https://github.com/Mungert69/GGUFModelBuilder).
💬 **How to test**:
Choose an **AI assistant type**:
- `TurboLLM` (GPT-4.1-mini)
- `HugLLM` (Hugging Face open-source models)
- `TestLLM` (Experimental CPU-only)
### **What I'm Testing**
I'm pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
- Automated **Nmap security scans**
- **Quantum-readiness checks**
- **Network Monitoring tasks**
🟡 **TestLLM** - Current experimental model (llama.cpp on 2 CPU threads on a Hugging Face Docker space):
- ✅ **Zero-configuration setup**
- ⏳ 30s load time (slow inference but **no API costs**). No token limits, as the cost is low.
- 🔧 **Help wanted!** If you're into **edge-device AI**, let's collaborate!
### **Other Assistants**
🟢 **TurboLLM** - Uses **gpt-4.1-mini**:
- It performs very well, but unfortunately OpenAI charges per token, so token usage is limited.
- **Create custom cmd processors to run .net code on Quantum Network Monitor Agents**
- **Real-time network diagnostics and monitoring**
- **Security Audits**
- **Penetration testing** (Nmap/Metasploit)
🔵 **HugLLM** - Latest open-source models:
- 🌐 Runs on the Hugging Face Inference API. Performs pretty well using the latest models hosted on Novita.
### 💡 **Example commands you could test**:
1. `"Give me info on my websites SSL certificate"`
2. `"Check if my server is using quantum safe encyption for communication"`
3. `"Run a comprehensive security audit on my server"`
4. '"Create a cmd processor to .. (what ever you want)" Note you need to install a [Quantum Network Monitor Agent](https://readyforquantum.com/Download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) to run the .net code on. This is a very flexible and powerful feature. Use with caution!
### Final Word
I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI, all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful.
If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone.
I'm also open to job opportunities or sponsorship.
Thank you! 😊