
Luth-1.7B-Instruct

Luth-1.7B-Instruct is a French fine-tuned version of Qwen3-1.7B, trained on the Luth-SFT dataset. It shows substantially improved French capabilities in instruction following, math, and general knowledge, while its English capabilities remain stable and even improve in some areas.

Our evaluation, training, and data scripts are available on GitHub, along with the blog post we wrote.


Model Details

Luth was trained using full fine-tuning on the Luth-SFT dataset with Axolotl. The resulting model was then merged with the base Qwen3-1.7B model. This process successfully retained the model's English capabilities while improving its performance on most selected benchmarks in both French and English.
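The exact merge recipe is not specified here; as a rough illustration only, a simple linear interpolation of the fine-tuned and base weights could look like the sketch below (the checkpoint path, the plain averaging, and the 0.5 ratio are all hypothetical, not the procedure actually used):

import torch
from transformers import AutoModelForCausalLM

# Hypothetical sketch: linearly interpolate SFT and base weights.
# The SFT checkpoint path and the 0.5 ratio are placeholders, not the actual recipe.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-1.7B", torch_dtype=torch.bfloat16)
tuned = AutoModelForCausalLM.from_pretrained("path/to/luth-sft-checkpoint", torch_dtype=torch.bfloat16)

alpha = 0.5  # hypothetical mixing weight for the fine-tuned model
base_sd = base.state_dict()
merged = {name: alpha * p + (1 - alpha) * base_sd[name] for name, p in tuned.state_dict().items()}
tuned.load_state_dict(merged)
tuned.save_pretrained("luth-merged")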

Benchmark Results

We used LightEval for evaluation, with custom tasks for the French benchmarks. All models were evaluated at temperature 0.

French Benchmark Scores

| Model | IFEval French | GPQA-Diamond French | MMLU French | Math500 French | Arc-Challenge French | Hellaswag French |
|---|---|---|---|---|---|---|
| Luth-1.7B-Instruct | 58.53 | 36.55 | 49.75 | 62.60 | 35.16 | 31.88 |
| Qwen3-1.7B | 54.71 | 31.98 | 28.49 | 60.40 | 33.28 | 24.86 |
| SmolLM2-1.7B-Instruct | 30.93 | 20.30 | 33.73 | 10.20 | 28.57 | 49.58 |
| Qwen2.5-1.5B-Instruct | 31.30 | 27.41 | 46.25 | 33.20 | 32.68 | 34.33 |
| LFM2-1.2B | 54.41 | 22.84 | 47.59 | 36.80 | 39.44 | 33.05 |

English Benchmark Scores

| Model | IFEval English | GPQA-Diamond English | MMLU English | Math500 English | Arc-Challenge English | Hellaswag English |
|---|---|---|---|---|---|---|
| Luth-1.7B-Instruct | 65.80 | 29.80 | 60.28 | 70.40 | 42.24 | 58.53 |
| Qwen3-1.7B | 68.88 | 31.82 | 52.82 | 71.20 | 36.18 | 46.98 |
| SmolLM2-1.7B-Instruct | 49.04 | 25.08 | 50.27 | 22.67 | 42.32 | 66.94 |
| Qwen2.5-1.5B-Instruct | 39.99 | 25.76 | 59.81 | 57.20 | 41.04 | 64.48 |
| LFM2-1.2B | 68.52 | 24.24 | 55.22 | 45.80 | 42.58 | 57.61 |

Code Example

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("kurakurai/Luth-1.7B-Instruct")
model = AutoModelForCausalLM.from_pretrained("kurakurai/Luth-1.7B-Instruct")

# Build a chat prompt ("What is the capital of France?")
messages = [
    {"role": "user", "content": "Quelle est la capitale de la France?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

# Generate a response and decode only the newly generated tokens
outputs = model.generate(**inputs, max_new_tokens=100)
print(
    tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[-1] :], skip_special_tokens=True
    )
)
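If a GPU is available, the model can also be loaded directly in bfloat16, the precision the checkpoint is stored in. This is an optional variant of the loading step above, not part of the official example; device_map="auto" additionally requires the accelerate package:

import torch
from transformers import AutoModelForCausalLM

# Optional: load the checkpoint in bfloat16 and place it on available GPUs.
# Requires `accelerate` for device_map="auto".
model = AutoModelForCausalLM.from_pretrained(
    "kurakurai/Luth-1.7B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)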

Citation

@misc{luth2025kurakurai,
  title        = {Luth-1.7B-Instruct},
  author       = {Maxence Lasbordes and Sinoué Gad},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/kurakurai/Luth-1.7B-Instruct}},
}