How Far Are We from Genuinely Useful Deep Research Agents?
Abstract
FINDER is a benchmark for deep research agents with standardized human-curated tasks and DEFT is a failure taxonomy revealing that DRAs struggle with evidence integration, verification, and reasoning-resilient planning.
Deep Research Agents (DRAs) aim to automatically produce analyst-level reports through iterative information retrieval and synthesis. However, most existing DRAs have been validated on question-answering benchmarks, while their ability to generate comprehensive reports remains under-examined. Worse, current benchmarks for report synthesis suffer from limited task complexity and subjective evaluation metrics -- they fail to reflect real user demands and limit the practical utility of generated reports. To address these gaps, we present Fine-grained DEepResearch bench (FINDER), an enhanced benchmark consisting of 100 human-curated research tasks with 419 structured checklist items that standardize report structure, analytical depth, and factual grounding. Based on approximately 1,000 reports produced by mainstream DRAs, we further propose the Deep rEsearch Failure Taxonomy (DEFT), the first failure taxonomy for deep research agents. DEFT contains 14 fine-grained failure modes across reasoning, retrieval, and generation, and is built on grounded theory with human-LLM co-annotation and inter-annotator reliability validation. Our experimental findings reveal that current DRAs struggle not with task comprehension but with evidence integration, verification, and reasoning-resilient planning.
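To illustrate how checklist-based evaluation of this kind can work in principle, here is a minimal sketch. The item texts, data structures, and equal-weight scoring below are hypothetical illustrations, not FINDER's actual protocol: a report's score is simply the fraction of checklist items it satisfies.

```python
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    # Hypothetical item: one requirement a generated report must satisfy,
    # covering structure, analytical depth, or factual grounding.
    description: str
    satisfied: bool

def checklist_score(items: list[ChecklistItem]) -> float:
    """Return the fraction of checklist items satisfied (0.0 to 1.0)."""
    if not items:
        return 0.0
    return sum(item.satisfied for item in items) / len(items)

# Example items (invented for illustration only).
items = [
    ChecklistItem("Report includes a methodology section", True),
    ChecklistItem("Every cited statistic traces to a retrieved source", False),
    ChecklistItem("Conclusion directly addresses the research question", True),
]
print(f"{checklist_score(items):.2f}")
```

A real protocol would likely weight items or use graded (not binary) judgments; the sketch only shows the aggregation shape.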
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Understanding DeepResearch via Reports (2025)
- ResearchRubrics: A Benchmark of Prompts and Rubrics For Evaluating Deep Research Agents (2025)
- FinDeepResearch: Evaluating Deep Research Agents in Rigorous Financial Analysis (2025)
- RhinoInsight: Improving Deep Research through Control Mechanisms for Model Behavior and Context (2025)
- Enterprise Deep Research: Steerable Multi-Agent Deep Research for Enterprise Analytics (2025)
- Deep Research Brings Deeper Harm (2025)
- DeepWideSearch: Benchmarking Depth and Width in Agentic Information Seeking (2025)