
MOTIF: Modular Thinking via Reinforcement Fine-tuning in LLMs

🔗 Paper link: arXiv preprint

🔗 GitHub link: Training and evaluation code

🔗 Link to the trained models: Hugging Face collection

The INFTYTHINK architecture, shown below, enables multi-round thinking, extending LLM reasoning beyond the model's context size.

[Figure: the INFTYTHINK multi-round inference architecture]

In this work, we propose a GRPO-based training method for such a system that computes the accuracy reward by rolling out full multi-round trajectories and applying the reward to the first-round inference outputs. This is depicted as follows:

[Figure: GRPO training with the accuracy reward from rolled-out trajectories applied at the first round]
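To make the reward assignment concrete, here is an illustrative sketch (not the authors' training code) of GRPO's group-relative advantage computation, where each rollout's reward comes from the final boxed answer of the full multi-round trajectory but the resulting advantage trains the first-round policy. The function name `grpo_advantages` is ours, for illustration only:

```python
from statistics import mean, pstdev

def grpo_advantages(rewards):
    """Normalize a group of rollout rewards to group-relative advantages,
    as in GRPO: subtract the group mean and divide by the group std."""
    mu = mean(rewards)
    sigma = pstdev(rewards) or 1.0  # guard against zero variance
    return [(r - mu) / sigma for r in rewards]

# Each reward is 1.0 if the final boxed answer (after all rounds of the
# rolled-out trajectory) is correct, 0.0 otherwise; the advantages are
# applied to the corresponding first-round responses.
advs = grpo_advantages([1.0, 0.0, 0.0, 1.0])  # → [1.0, -1.0, -1.0, 1.0]
```

Trajectories whose multi-round rollout ends correctly thus get a positive advantage at round one, which is how credit flows back to the first round without backpropagating through later rounds.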

Results

Our results are shown below:

[Figure: evaluation results]

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the 4-bit quantized Qwen2.5-3B-Instruct base model, then apply the MOTIF LoRA adapter
base_model = AutoModelForCausalLM.from_pretrained("unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit")
tokenizer = AutoTokenizer.from_pretrained("unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit")
model = PeftModel.from_pretrained(base_model, "purbeshmitra/MOTIF")

SYSTEM_PROMPT = """You are a helpful assistant. When the user asks a question, you solve it in 3 rounds. In each round, you first think about the reasoning process of answering and then provide the user with a detailed progress about it. The reasoning process and the progress are enclosed within <reasoning> </reasoning> and <answer> </answer> tags, respectively. Therefore, you follow the strict format:
<reasoning> reasoning process here </reasoning> <answer> detailed progress here </answer>

The User provides this detailed progress as additional context in the next round. You then respond again with further thinking and further progress. When the User says that the current round is the final (third) round, you provide an answer inside the answer tags. You also enclose a final answer in third round in the box: \\boxed{}. Only this boxed final answer is used for evaluation."""
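The system prompt above implies a three-round loop at inference time: each round's `<answer>` progress is fed back as context for the next round, and the final round must contain a `\boxed{}` answer. A minimal sketch of that loop is below; `multi_round_answer` and `fake_generate` are our own illustrative names, and `generate` should be replaced with an actual call into the loaded model:

```python
import re

def multi_round_answer(generate, question, rounds=3):
    """Run a MOTIF-style multi-round loop. `generate` is any callable
    mapping a prompt string to a model output string; the extracted
    <answer> progress is appended to the context for the next round."""
    context = question
    for r in range(1, rounds + 1):
        note = " This is the final (third) round." if r == rounds else ""
        output = generate(f"Round {r}:{note}\n{context}")
        progress = re.search(r"<answer>(.*?)</answer>", output, re.S)
        context = question + "\nProgress so far:" + (
            progress.group(1) if progress else output
        )
    # Only the boxed final answer from the last round is used for evaluation
    boxed = re.search(r"\\boxed\{(.*?)\}", output)
    return boxed.group(1) if boxed else None

# A stub generator standing in for the real model, for illustration only:
def fake_generate(prompt):
    if "final" in prompt:
        return "<reasoning> done </reasoning> <answer> So, \\boxed{42} </answer>"
    return "<reasoning> thinking </reasoning> <answer> partial progress </answer>"

print(multi_round_answer(fake_generate, "What is 6*7?"))  # → 42
```

In practice, `generate` would tokenize the prompt with the chat template (using `SYSTEM_PROMPT` as the system message), call `model.generate`, and decode the result; the stub keeps the round-orchestration logic testable on its own.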

Citation

If you find our work useful, consider citing it as:

@article{mitra2025motif,
  title={MOTIF: Modular Thinking via Reinforcement Fine-tuning in LLMs},
  author={Mitra, Purbesh and Ulukus, Sennur},
  journal={arXiv preprint arXiv:2507.02851},
  year={2025}
}