---
configs:
- config_name: default
  data_files:
  - split: all
    path: table_data/all_AOE_tables.jsonl
---
# AOE: Arranged and Organized Extraction Benchmark

For full reproducibility, all source code is available in our GitHub repository.

**Challenge**: Can AI models construct structured tables from complex, real-world documents? AOE tests this critical capability across legal, financial, and academic domains.

## What is AOE?
The AOE (Arranged and Organized Extraction) Benchmark addresses a critical gap in existing text-to-table evaluation frameworks. Unlike synthetic benchmarks, AOE challenges modern LLMs with authentic, complex, and practically relevant data extraction tasks.
**Why "AOE"?** Like Area-of-Effect damage in gaming, which hits everything within range, our benchmark reveals that current AI models struggle across all aspects of structured extraction, from basic parsing to complex reasoning. No model escapes unscathed!

## Core Innovation

Beyond Isolated Information: AOE doesn't just test information retrieval; it evaluates models' ability to:

- Understand complex task requirements and construct appropriate schemas
- Locate scattered information across multiple lengthy documents
- Integrate diverse data points into coherent, structured tables
- Perform numerical reasoning and cross-document analysis
## Key Statistics
| Metric | Value |
|---|---|
| Total Tasks | 373 benchmark instances |
| Domains | 3 (Legal, Financial, Academic) |
| Document Sources | 100% real-world, authentic content |
| Total Documents | 1,914 source documents |
| Languages | English & Chinese |
## Detailed Domain Statistics

| Domain | Language | Tables | Documents | Avg Tokens | Docs/Table (avg/max) |
|---|---|---|---|---|---|
| Academic | EN | 74 | 257 | 69k | 3.5/5 |
| Financial | ZH,EN | 224 | 944 | 437k | 4.2/5 |
| Legal | ZH | 75 | 713 | 7k | 9.6/13 |
## Dataset Structure

```python
{
    "record_id": "academic_10_0_en",
    "query": "Identify possible citation relationships among the following articles...",
    "doc_length": {                # Character count per document
        "paper_1.md": 141566,
        "paper_2.md": 885505,
        "paper_3.md": 48869,
        "paper_4.md": 65430,
        "paper_5.md": 53987
    },
    "table_schema": {              # Dynamic schema definition
        "columns": [
            {"name": "Cited paper title", "about": "the name of the paper"},
            {"name": "Referencing paper title", "about": "Referencing paper title"},
            {"name": "Referenced content", "about": "the context of the cited paper"},
            {"name": "Label", "about": "reference type: background/methodology/additional"}
        ]
    },
    "answers": [                   # Ground-truth structured output
        {
            "Cited paper title": "Large Language Model Is Not a Good Few-shot Information Extractor...",
            "Referencing paper title": "What Makes Good In-Context Examples for GPT-3?",
            "Referenced content": "(2) Sentence-embedding (Liu et al., 2022; Su et al., 2022): retrieving...",
            "Label": "background"
        }
    ]
}
```
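Each record's `table_schema` defines the target table dynamically, so the expected CSV header differs per task. A minimal sketch (stdlib only; field names taken from the example record above) of turning a record into its header and total context size:

```python
# A trimmed record shaped like the example above (field names
# "doc_length" and "table_schema" come from the dataset structure).
record = {
    "record_id": "academic_10_0_en",
    "doc_length": {"paper_1.md": 141566, "paper_2.md": 885505},
    "table_schema": {
        "columns": [
            {"name": "Cited paper title", "about": "the name of the paper"},
            {"name": "Label", "about": "reference type"},
        ]
    },
}

def csv_header(record: dict) -> str:
    """Build the expected CSV header row from the dynamic schema."""
    return ",".join(col["name"] for col in record["table_schema"]["columns"])

def total_context_chars(record: dict) -> int:
    """Total input size across all source documents (characters)."""
    return sum(record["doc_length"].values())

print(csv_header(record))           # Cited paper title,Label
print(total_context_chars(record))  # 1027071
```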
## Data Sources & Domains

*Figure: AOE benchmark construction pipeline from raw documents to structured evaluation tasks*

### Academic Domain
- Sources: Semantic Scholar, Papers With Code
- Content: Research papers, citation networks, performance leaderboards
- Tasks: Citation relationship extraction, methodology performance analysis
### Financial Domain
- Source: CNINFO (China's official financial disclosure platform)
- Content: Annual reports (2020-2023) from A-share listed companies
- Tasks: Longitudinal financial analysis, cross-company comparisons
### Legal Domain
- Sources: People's Court Case Library, National Legal Database
- Content: Chinese civil law judgments, official statutes
- Tasks: Legal provision retrieval, defendant verdict extraction
## Benchmark Tasks Overview

### Task Categories

| Domain | Task ID | Description | Challenge Level |
|---|---|---|---|
| Academic | $Aca_0$ | Citation Context Extraction | 🔥🔥🔥 |
| Academic | $Aca_1$ | Methodology Performance Extraction | 🔥🔥 |
| Legal | $Legal_0$ | Legal Provision Retrieval | 🔥🔥🔥🔥 |
| Legal | $Legal_1$ | Defendant Verdict Extraction | 🔥🔥🔥 |
| Financial | $Fin_{0-3}$ | Single Company Longitudinal Analysis | 🔥🔥 |
| Financial | $Fin_{4-6}$ | Multi-Company Comparative Analysis | 🔥🔥🔥 |
### Data Processing Pipeline

- Document Preservation: Advanced parsing with `markitdown`, `Marker`, and OCR
- Human-in-the-Loop: Expert annotation for legal document processing
- Quality Assurance: Multi-stage validation ensuring accuracy and completeness
## Example Tasks

### Legal Analysis Example

Task: Extract structured verdict information from complex trademark infringement cases
Input Query: "作为法律文本分析专家，请按照指定格式从判决信息中准确提取每位被告的最终判决结果" ("As a legal text analysis expert, accurately extract each defendant's final verdict from the judgment information in the specified format.")

Source Documents: complex legal cases (678-2391 tokens each)

```csv
案件名,被告,罪名,刑期,缓刑,处罚金,其他判决
欧某辉、张某妹假冒注册商标案,欧某辉,假冒注册商标罪,有期徒刑二年六个月,,处罚金人民币六十二万元,追缴违法所得100.6583万元
谢某某甲等假冒注册商标案,谢某某甲,无罪,,,,
……
```
Challenge: Models must parse complex legal language from multiple case documents (avg 9.6 docs per table), handle joint defendant cases with up to 16 defendants per case, distinguish between different verdict outcomes (guilty vs. acquitted), and extract structured information from unstructured legal narratives involving trademark infringement worth millions.
### Academic Analysis Example

Task: Extract methodology performance from research papers on the WikiText-103 dataset
Input Query: "List the Test perplexity performance of the proposed methods in the paper on the WikiText-103 dataset."
Source Documents: research papers (36k-96k tokens each)
```csv
paper_name,method,result,models_and_settings
Primal-Attention: Self-attention through Asymmetric Kernel SVD,Primal.+Trans.,31,
Language Modeling with Gated Convolutional Networks,GCNN-8,44.9,
GATELOOP: FULLY DATA-CONTROLLED LINEAR RECURRENCE,GateLoop,13.4,
```
Challenge: Models must parse complex academic papers, identify specific methodologies, locate performance tables, and extract numerical results while handling various formatting styles.
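To make the brittleness concrete, consider a naive baseline (our sketch, not part of the benchmark) that regex-matches two-column markdown result tables; it handles a clean fragment like the hypothetical one below, but fails on the varied layouts real papers use:

```python
import re

# A hypothetical, cleanly formatted results table in markdown form.
fragment = """
| Model        | Test ppl. |
|--------------|-----------|
| GCNN-8       | 44.9      |
| GateLoop     | 13.4      |
"""

def extract_results(markdown: str) -> dict[str, float]:
    """Pull (method, score) pairs from two-column markdown table rows."""
    results = {}
    pattern = r"\|\s*([A-Za-z][\w.+-]*)\s*\|\s*([\d.]+)\s*\|"
    for name, score in re.findall(pattern, markdown):
        results[name] = float(score)
    return results

print(extract_results(fragment))  # {'GCNN-8': 44.9, 'GateLoop': 13.4}
```

Anything beyond this clean case, such as multi-column tables, footnoted numbers, or results stated in prose, defeats the pattern, which is exactly the gap AOE probes.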
### Financial Analysis Example
Task: Extract and compare financial metrics across multiple company annual reports
```csv
Company,Revenue (CNY),Net Profit (CNY),Operating Cash Flow (CNY)
Gree Electric,203979266387,29017387604,56398426354
Midea Group,372037280000,33719935000,57902611000
Haier Smart Home,261427783050,16596615046,25262376228
TCL Technology,174366657015,4781000000,25314756105
GONGNIU GROUP,15694755600,3870135376,4827282090
```
Challenge: Models must locate financial data scattered across lengthy annual reports (avg 437k tokens), handle different formatting conventions, and ensure numerical accuracy across multiple documents.
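To illustrate the numerical-accuracy aspect, a small stdlib-only sketch that parses the first rows of the ground-truth CSV above and derives net margin per company (the derived metric is our choice for illustration, not part of the benchmark):

```python
import csv
import io

# First rows of the ground-truth table above (values in CNY).
table = """Company,Revenue (CNY),Net Profit (CNY),Operating Cash Flow (CNY)
Gree Electric,203979266387,29017387604,56398426354
Midea Group,372037280000,33719935000,57902611000
"""

rows = list(csv.DictReader(io.StringIO(table)))

def net_margin(row: dict) -> float:
    """Net profit as a fraction of revenue."""
    return int(row["Net Profit (CNY)"]) / int(row["Revenue (CNY)"])

for row in rows:
    print(f"{row['Company']}: {net_margin(row):.1%}")
```

A single transcription slip in an 11-digit figure changes such a ratio noticeably, which is why the benchmark stresses exact numerical fidelity.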
## Research Applications

### Ideal for Evaluating
- Multi-document Understanding: Information synthesis across long-form texts
- Schema Construction: Dynamic table structure generation
- Domain Adaptation: Performance across specialized fields
- Numerical Reasoning: Financial calculations and quantitative analysis
- Cross-lingual Capabilities: English and Chinese document processing
### Benchmark Insights
- Even SOTA models struggle: Best performers achieve only ~68% accuracy
- Domain specificity matters: Performance varies significantly across fields
- Length matters: Document complexity correlates with task difficulty
- RAG limitations revealed: Standard retrieval often fails for structured tasks
## Getting Started

### Quick Usage

```python
from datasets import load_dataset

# Load the complete benchmark
dataset = load_dataset("tianyumyum/AOE")

# Access the single "all" split
all_tasks = dataset["all"]

# Inspect an example task
task = all_tasks[0]
print(f"Documents: {len(task['doc_length'])}")
print(f"Expected output: {task['answers']}")
```
## Evaluation Framework

AOE provides a comprehensive three-tier evaluation system:

- CSV Parsability: basic structure compliance (Pass Rate)
- Overall Quality: LLM-assessed holistic evaluation (0-100%)
- Cell-Level Accuracy: granular content precision (F1 score)
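The exact scoring rules live in the GitHub repository; purely as an illustration of the cell-level tier, an exact-match F1 over (column, value) cells might look like:

```python
from collections import Counter

def cell_f1(pred_rows: list[dict], gold_rows: list[dict]) -> float:
    """Exact-match F1 over (column, value) cells, ignoring row order."""
    pred = Counter((k, v) for row in pred_rows for k, v in row.items())
    gold = Counter((k, v) for row in gold_rows for k, v in row.items())
    overlap = sum((pred & gold).values())  # cells present in both tables
    if not overlap:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(gold.values())
    return 2 * precision * recall / (precision + recall)

gold = [{"Label": "background", "Cited paper title": "A"}]
pred = [{"Label": "background", "Cited paper title": "B"}]
print(cell_f1(pred, gold))  # 0.5 (1 of 2 cells matches in each table)
```

The official scorer may normalize cell text before matching; this sketch shows only the shape of the computation.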
## Contributing & Support

- Issues: GitHub Issues
- Discussions: GitHub Discussions

⭐ Star our GitHub repo if you find AOE useful! ⭐

*Pushing the boundaries of structured knowledge extraction*