arxiv:2506.08343

Wait, We Don't Need to "Wait"! Removing Thinking Tokens Improves Reasoning Efficiency

Published on Jun 10 · Submitted by shuaishuaicdp on Jun 17

Abstract

NoWait suppresses explicit self-reflection tokens during inference to enhance efficiency in multimodal reasoning without reducing model utility.

AI-generated summary

Recent advances in large reasoning models have enabled complex, step-by-step reasoning but often introduce significant overthinking, resulting in verbose and redundant outputs that hinder efficiency. In this study, we examine whether explicit self-reflection, signaled by tokens such as "Wait" and "Hmm", is necessary for advanced reasoning. We propose NoWait, a simple yet effective approach that disables explicit self-reflection by suppressing these tokens during inference. Extensive experiments on ten benchmarks across textual, visual, and video reasoning tasks show that NoWait reduces chain-of-thought trajectory length by up to 27%-51% in five R1-style model series, without compromising model utility. NoWait thus offers a plug-and-play solution for efficient and utility-preserving multimodal reasoning.
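
The abstract describes the mechanism (suppressing self-reflection tokens at decoding time) but not the code. As a rough illustration, this kind of inference-time suppression can be sketched with a custom logits processor in Hugging Face transformers. This is not the authors' released implementation: the keyword list, helper names, and model choice below are illustrative assumptions.

```python
# A minimal sketch of inference-time token suppression in the spirit of
# NoWait, using Hugging Face transformers. Keywords, helper names, and the
# model choice are illustrative assumptions, not the authors' code.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          LogitsProcessor, LogitsProcessorList)

class SuppressKeywordsProcessor(LogitsProcessor):
    """Sets the logits of banned token ids to -inf at every decoding step,
    so the model can never emit them."""
    def __init__(self, banned_ids):
        self.banned_ids = list(banned_ids)

    def __call__(self, input_ids, scores):
        scores[:, self.banned_ids] = float("-inf")
        return scores

def reflection_token_ids(tokenizer, keywords=("wait", "hmm")):
    """Collect every vocab entry that decodes to a reflection keyword,
    ignoring leading-whitespace markers and capitalization."""
    return [idx for tok, idx in tokenizer.get_vocab().items()
            if tokenizer.convert_tokens_to_string([tok]).strip().lower() in keywords]

# Illustrative model choice; the paper evaluates several R1-style models.
name = "Qwen/QwQ-32B"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype="auto",
                                             device_map="auto")

processors = LogitsProcessorList(
    [SuppressKeywordsProcessor(reflection_token_ids(tokenizer))])

inputs = tokenizer("Prove that the sum of two odd integers is even.",
                   return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=512,
                     logits_processor=processors)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Because the processor only edits logits at each step, it needs no retraining or model changes, which is what makes the approach plug-and-play.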

Community

Paper submitter

🚀 Do we really need to "Wait" in AI reasoning?

NEW RESEARCH: Removing "Wait" and "Hmm" thinking tokens cuts chain-of-thought length by 27%–51%! 🤯

🔥 Key Findings

โŒ "Wait, let me think again..."
โŒ "Hmm, maybe I should..."
โœ… Direct reasoning = 2x efficiency!

⚡ NoWait Method Highlights:

🎯 Training-Free: Plug-and-play solution
📊 Massive Token Reduction: Up to 51% shorter outputs
🎯 Accuracy Preserved: Performance maintained or improved
🌍 Multimodal: Text + Vision + Video reasoning

📈 Extensive Validation:

• 10 benchmarks tested
• 5 R1-style model families
• QwQ-32B, Phi-4, Qwen3, Kimi-VL, QvQ models

💡 Core Insight:

Explicit self-reflection ≠ Better reasoning
Simple keyword suppression → Dramatic efficiency gains
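
Continuing the hypothetical sketch under the abstract above, the same keyword suppression can also be expressed with transformers' built-in bad_words_ids argument. Again an illustrative assumption, not the paper's exact recipe:

```python
# Reuses tokenizer, model, and inputs from the earlier sketch.
# bad_words_ids bans each listed token sequence during generation, so
# multi-token spellings of a keyword are blocked as well.
bad_words = [tokenizer.encode(w, add_special_tokens=False)
             for w in ("Wait", " Wait", "wait", "Hmm", " Hmm", "hmm")]
out = model.generate(**inputs, max_new_tokens=512, bad_words_ids=bad_words)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```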

This could reshape how we think about AI reasoning! 🤖✨

Paper: https://arxiv.org/pdf/2506.08343

#AI #MachineLearning #Reasoning #Efficiency #LLM #Research

