MOTIF paper
MOTIF-trained model and vanilla GRPO-trained model, compared in the paper.
Paper link: arXiv preprint
GitHub link: Training and evaluation code
Link to the trained models: Hugging Face collection
The INFTYTHINK architecture, shown below, allows multi-round thinking, extending LLM reasoning beyond the model's context size.
In this work, we propose a GRPO-based training method for such a system, in which the accuracy reward is computed by rolling out complete multi-round trajectories and applying the resulting reward to the first-round inference outcomes. This is depicted below:
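As a rough illustration of this reward computation, the sketch below scores each first-round completion by continuing the remaining rounds and checking whether the final boxed answer matches the reference. The helper names here (e.g., `rollout_fn`) are placeholders for illustration, not the exact functions in the training code.

```python
import re

def accuracy_reward(first_round_completions, questions, gold_answers, rollout_fn):
    """Sketch: score each first-round completion by rolling out the remaining
    rounds and checking the final boxed answer (illustrative, not the repo code)."""
    rewards = []
    for completion, question, gold in zip(first_round_completions, questions, gold_answers):
        # rollout_fn (hypothetical) continues rounds 2 and 3 from the first-round
        # progress and returns the model's final-round text.
        final_text = rollout_fn(question, completion)
        match = re.search(r"\\boxed\{(.+?)\}", final_text)
        predicted = match.group(1).strip() if match else None
        rewards.append(1.0 if predicted == str(gold).strip() else 0.0)
    return rewards
```

The resulting per-completion rewards can then serve as the group rewards in the GRPO update for the first-round generations.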
Our results are shown below:
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the 4-bit quantized Qwen2.5-3B-Instruct base model
base_model = AutoModelForCausalLM.from_pretrained("unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit")
# Attach the MOTIF LoRA adapter
model = PeftModel.from_pretrained(base_model, "purbeshmitra/MOTIF")

# System prompt used for the 3-round reasoning format
SYSTEM_PROMPT = """You are a helpful assistant. When the user asks a question, you solve it in 3 rounds. In each round, you first think about the reasoning process of answering and then provide the user with a detailed progress about it. The reasoning process and the progress are enclosed within <reasoning> </reasoning> and <answer> </answer> tags, respectively. Therefore, you follow the strict format:

<reasoning> reasoning process here </reasoning> <answer> detailed progress here </answer>

The User provides this detailed progress as additional context in the next round. You then respond again with further thinking and further progress. When the User says that the current round is the final (third) round, you provide an answer inside the answer tags. You also enclose a final answer in third round in the box: \\boxed{}. Only this boxed final answer is used for evaluation."""
```
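A minimal inference sketch is given below, assuming the tokenizer of the base model and a simple three-round loop in which each round's progress is fed back as context. The example question and the round-transition messages are illustrative assumptions; the evaluation code in the GitHub repository is the reference for the exact protocol.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit")

# Illustrative question; any math word problem can be used the same way
question = "What is the sum of the first 100 positive integers?"
messages = [{"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question}]

reply = ""
for round_idx in range(1, 4):
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output_ids = model.generate(input_ids, max_new_tokens=1024)
    reply = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
    messages.append({"role": "assistant", "content": reply})
    if round_idx < 3:
        # Feed the progress back and announce the final round before the last generation
        note = ("This is the final (third) round." if round_idx == 2
                else "Continue with the next round.")
        messages.append({"role": "user", "content": note})

print(reply)  # the third-round reply should contain the \boxed{} final answer
```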
If you find our work useful, consider citing it as:
@article{mitra2025motif,
title={MOTIF: Modular Thinking via Reinforcement Fine-tuning in LLMs},
author={Mitra, Purbesh and Ulukus, Sennur},
journal={arXiv preprint arXiv:2507.02851},
year={2025}
}