The Sequential Edge: Inverse-Entropy Voting Beats Parallel Self-Consistency at Matched Compute
Abstract
Sequential scaling in language model reasoning outperforms parallel scaling across multiple models and benchmarks, with inverse-entropy weighted voting further enhancing accuracy.
We revisit test-time scaling for language model reasoning and ask a fundamental question: at equal token budget and compute, is it better to run multiple independent chains in parallel, or to run fewer chains that iteratively refine through sequential steps? Through comprehensive evaluation across five state-of-the-art open-source models and three challenging reasoning benchmarks, we find that sequential scaling, where chains explicitly build upon previous attempts, consistently outperforms the dominant parallel self-consistency paradigm in 95.6% of configurations, with accuracy gains of up to 46.7%. Further, we introduce inverse-entropy weighted voting, a novel training-free method that further boosts the accuracy of sequential scaling. By weighting answers in proportion to the inverse entropy of their reasoning chains, we increase our success rate over parallel majority voting and establish it as the optimal test-time scaling strategy. Our findings fundamentally challenge the parallel reasoning orthodoxy that has dominated test-time scaling since self-consistency decoding (Wang et al., 2022), positioning sequential refinement as the robust default for modern LLM reasoning and necessitating a paradigm shift in how we approach inference-time optimization.
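The aggregation step described above can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: it assumes each chain exposes a scalar entropy score (e.g. the mean token entropy of its reasoning trace; the paper's exact entropy definition is not given in the abstract), and the helper name `inverse_entropy_vote` is hypothetical.

```python
from collections import defaultdict

def inverse_entropy_vote(answers, entropies, eps=1e-9):
    """Pick a final answer by weighting each chain's answer with the
    inverse entropy of its reasoning chain (lower entropy -> more weight).

    answers   : list of final answers, one per chain
    entropies : list of scalar entropy scores, one per chain
    eps       : guard against division by zero for near-zero entropy
    """
    weights = defaultdict(float)
    for ans, h in zip(answers, entropies):
        weights[ans] += 1.0 / (h + eps)  # inverse-entropy weight
    # Return the answer with the largest accumulated weight.
    return max(weights, key=weights.get)

# Example: plain majority voting would return "17", but the single
# low-entropy (confident) chain outweighs the two high-entropy ones.
answers = ["42", "17", "17"]
entropies = [0.2, 1.5, 1.4]
print(inverse_entropy_vote(answers, entropies))  # -> "42"
```

When all entropies are equal, the scheme reduces to ordinary majority voting, which is why it can only help (or tie) relative to unweighted voting whenever entropy correlates with correctness.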
Community
Our paper challenges majority voting, the conventional method of increasing accuracy on reasoning benchmarks. We show that sequential voting, where reasoning chains are generated sequentially rather than in parallel, can double performance on reasoning benchmarks such as AIME-2025 and GPQA-Diamond without using additional tokens or chains. Using our method, we achieve up to 80% on GPQA-Diamond.
The following papers were recommended by the Semantic Scholar API:
- Think Just Enough: Sequence-Level Entropy as a Confidence Signal for LLM Reasoning (2025)
- DeepPrune: Parallel Scaling without Inter-trace Redundancy (2025)
- Recursive Self-Aggregation Unlocks Deep Thinking in Large Language Models (2025)
- Slim-SC: Thought Pruning for Efficient Scaling with Self-Consistency (2025)
- Not All Bits Are Equal: Scale-Dependent Memory Optimization Strategies for Reasoning Models (2025)
- ARISE: An Adaptive Resolution-Aware Metric for Test-Time Scaling Evaluation in Large Reasoning Models (2025)
- MoEs Are Stronger than You Think: Hyper-Parallel Inference Scaling with RoE (2025)