Debatable Intelligence: Benchmarking LLM Judges via Debate Speech Evaluation
Abstract
This study evaluates how well large language models assess debate speeches, a task that requires deep understanding of multiple aspects of a speech, including argument strength and coherence, and compares their judgments to those of human judges.
We introduce Debate Speech Evaluation as a novel and challenging benchmark for assessing LLM judges. Evaluating debate speeches requires a deep understanding of the speech at multiple levels, including argument strength and relevance, the coherence and organization of the speech, the appropriateness of its style and tone, and so on. This task involves a unique set of cognitive abilities that have previously received limited attention in systematic LLM benchmarking. To explore such skills, we leverage a dataset of over 600 meticulously annotated debate speeches and present the first in-depth analysis of how state-of-the-art LLMs compare to human judges on this task. Our findings reveal a nuanced picture: while larger models can approximate individual human judgments in some respects, they differ substantially in their overall judgment behavior. We also investigate the ability of frontier LLMs to generate persuasive, opinionated speeches, showing that models may perform at a human level on this task.
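To make the comparison concrete, here is a minimal sketch (not the paper's actual evaluation code) of one way an LLM judge's numeric scores could be compared against human annotations for the same speeches, using rank correlation as an agreement measure. The scores, scale, and variable names are illustrative assumptions.

```python
# Hedged sketch: agreement between an LLM judge and human judges on per-speech scores.
# Assumes each speech receives a numeric quality score (e.g., on a 1-5 scale);
# the values below are made up for illustration.
from scipy.stats import spearmanr

human_scores = [4.2, 3.1, 2.8, 4.7, 3.9, 2.5]  # e.g., mean of human judges per speech
llm_scores   = [4.0, 3.4, 2.2, 4.5, 4.1, 3.0]  # e.g., a single LLM judge per speech

# Spearman's rho measures how well the LLM preserves the human ranking of speeches.
rho, p_value = spearmanr(human_scores, llm_scores)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3g})")
```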
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Evaluation Under Imperfect Benchmarks and Ratings: A Case Study in Text Simplification (2025)
- Flex-Judge: Think Once, Judge Anywhere (2025)
- Towards Multi-dimensional Evaluation of LLM Summarization across Domains and Languages (2025)
- How Reliable is Multilingual LLM-as-a-Judge? (2025)
- Quantitative LLM Judges (2025)
- FinNLI: Novel Dataset for Multi-Genre Financial Natural Language Inference Benchmarking (2025)
- Judging the Judges: Can Large Vision-Language Models Fairly Evaluate Chart Comprehension and Reasoning? (2025)