---
datasets:
  - dataset: tianyumyum/AOE
    data_files:
      - split: all
        path: table_data/all_AOE_tables.jsonl
---

๐Ÿ† AOE: Arranged and Organized Extraction Benchmark

๐Ÿ“š For full reproducibility, all source code is available in our GitHub repository.

๐ŸŽฏ Challenge: Can AI models construct structured tables from complex, real-world documents? AOE tests this critical capability across legal, financial, and academic domains.

## 🚀 What is AOE?

The AOE (Arranged and Organized Extraction) Benchmark addresses a critical gap in existing text-to-table evaluation frameworks. Unlike synthetic benchmarks, AOE challenges modern LLMs with authentic, complex, and practically relevant data extraction tasks.

💥 Why "AOE"? Like Area-of-Effect damage in gaming, which hits everything within range, our benchmark reveals that current AI models struggle across all aspects of structured extraction, from basic parsing to complex reasoning. No model escapes unscathed!

## 🎯 Core Innovation

Beyond Isolated Information: AOE doesn't just test information retrieval; it evaluates models' ability to:

- 🧠 Understand complex task requirements and construct appropriate schemas
- 🔍 Locate scattered information across multiple lengthy documents
- 🏗️ Integrate diverse data points into coherent, structured tables
- 🧮 Perform numerical reasoning and cross-document analysis

## 📊 Key Statistics

| Metric | Value |
| --- | --- |
| Total Tasks | 373 benchmark instances |
| Domains | 3 (Legal, Financial, Academic) |
| Document Sources | 100% real-world, authentic content |
| Total Documents | 1,914 source documents |
| Languages | English & Chinese |

### 📈 Detailed Domain Statistics

| Domain | Language | Tables | Documents | Avg Tokens | Docs/Table (avg/max) |
| --- | --- | --- | --- | --- | --- |
| Academic | EN | 74 | 257 | 69k | 3.5/5 |
| Financial | ZH, EN | 224 | 944 | 437k | 4.2/5 |
| Legal | ZH | 75 | 713 | 7k | 9.6/13 |
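
These figures can be re-derived from the released split. A small sketch, assuming the domain is the prefix of `record_id` (as in `"academic_10_0_en"`) and that `doc_length` holds one entry per source document:

```python
from collections import defaultdict

from datasets import load_dataset

dataset = load_dataset("tianyumyum/AOE", split="all")

tables, docs = defaultdict(int), defaultdict(int)
for record in dataset:
    domain = record["record_id"].split("_")[0]  # "academic" / "financial" / "legal"
    tables[domain] += 1
    docs[domain] += len(record["doc_length"])   # one entry per source document

for domain, n_tables in tables.items():
    print(f"{domain}: {n_tables} tables, {docs[domain]} documents, "
          f"{docs[domain] / n_tables:.1f} docs/table")
```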

๐Ÿ“ Dataset Structure

{
    "record_id": "academic_10_0_en",
    "query": "Identify possible citation relationships among the following articles...",
    "doc_length": {
        "paper_1.md": 141566,         # Character count per document
        "paper_2.md": 885505,
        "paper_3.md": 48869,
        "paper_4.md": 65430,
        "paper_5.md": 53987
    },
    "table_schema": {               # Dynamic schema definition
        "columns": [
            {"name": "Cited paper title", "about": "the name of the paper"},
            {"name": "Referencing paper title", "about": "Referencing paper title"},
            {"name": "Referenced content", "about": "the context of the cited paper"},
            {"name": "Label", "about": "reference type: background/methodology/additional"}
        ]
    },
    "answers": [                    # Ground truth structured output
        {
            "Cited paper title": "Large Language Model Is Not a Good Few-shot Information Extractor...",
            "Referencing paper title": "What Makes Good In-Context Examples for GPT-3?",
            "Referenced content": "(2) Sentence-embedding (Liu et al., 2022; Su et al., 2022): retrieving...",
            "Label": "background"
        }
    ]
}
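
As a quick illustration (not part of the official tooling), the ground-truth rows of a record can be serialized into a CSV table following the schema's column order:

```python
import csv
import io

from datasets import load_dataset

record = load_dataset("tianyumyum/AOE", split="all")[0]

# Column order comes from the record's dynamic schema.
columns = [col["name"] for col in record["table_schema"]["columns"]]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=columns)
writer.writeheader()
for row in record["answers"]:
    writer.writerow({col: row.get(col, "") for col in columns})

print(buffer.getvalue())  # the ground-truth table in CSV form
```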

๐Ÿญ Data Sources & Domains

AOE Benchmark Construction Process

Figure: AOE benchmark construction pipeline from raw documents to structured evaluation tasks

### 📚 Academic Domain

- Sources: Semantic Scholar, Papers With Code
- Content: Research papers, citation networks, performance leaderboards
- Tasks: Citation relationship extraction, methodology performance analysis

### 💰 Financial Domain

- Source: CNINFO (China's official financial disclosure platform)
- Content: Annual reports (2020-2023) from A-share listed companies
- Tasks: Longitudinal financial analysis, cross-company comparisons

### ⚖️ Legal Domain

- Sources: People's Court Case Library, National Legal Database
- Content: Chinese civil law judgments, official statutes
- Tasks: Legal provision retrieval, defendant verdict extraction

## 🎯 Benchmark Tasks Overview

### 📊 Task Categories

| Domain | Task ID | Description | Challenge Level |
| --- | --- | --- | --- |
| Academic | $Aca_0$ | Citation Context Extraction | 🔥🔥🔥 |
| Academic | $Aca_1$ | Methodology Performance Extraction | 🔥🔥 |
| Legal | $Legal_0$ | Legal Provision Retrieval | 🔥🔥🔥🔥 |
| Legal | $Legal_1$ | Defendant Verdict Extraction | 🔥🔥🔥 |
| Financial | $Fin_{0-3}$ | Single-Company Longitudinal Analysis | 🔥🔥 |
| Financial | $Fin_{4-6}$ | Multi-Company Comparative Analysis | 🔥🔥🔥 |

๐Ÿ—๏ธ Data Processing Pipeline

  • ๐Ÿ“„ Document Preservation: Advanced parsing with markitdown, Marker, and OCR
  • ๐Ÿท๏ธ Human-in-the-Loop: Expert annotation for legal document processing
  • โœ… Quality Assurance: Multi-stage validation ensuring accuracy and completeness
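
To illustrate the first step, a minimal markitdown conversion sketch; the input file name is hypothetical, and the exact parsing configuration used to build AOE lives in the GitHub repository:

```python
from markitdown import MarkItDown  # pip install markitdown

md = MarkItDown()
# Convert a source document (e.g. a PDF annual report) to markdown text.
result = md.convert("annual_report_2023.pdf")  # hypothetical input file
print(result.text_content[:500])  # inspect the first 500 characters
```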

## 💡 Example Tasks

### ⚖️ Legal Analysis Example

Task: Extract structured verdict information from complex trademark infringement cases

Input Query (ZH): "作为法律文本分析专家，请按照指定格式从判决信息中准确提取每位被告的最终判决结果" ("As a legal text analysis expert, accurately extract each defendant's final verdict from the judgment information in the specified format.")

Source Documents: complex legal cases (678-2391 tokens each)

📋 Ground truth table (excerpt; columns: case name, defendant, charge, prison term, suspended sentence, fine, other rulings):

```csv
案件名,被告,罪名,刑期,缓刑,处罚金,其他判决
刘某假冒注册商标案,刘某,假冒注册商标罪,有期徒刑四年,,处罚金人民币十五万元,扣押车辆、手机等变价抵作罚金
欧某辉、张某妹假冒注册商标案,欧某辉,假冒注册商标罪,有期徒刑五年六个月,,处罚金人民币六十五万元,追缴违法所得100.6583万元
谢某某甲等假冒注册商标案,谢某某甲,无罪,,,,
马某华等假冒注册商标案,马某华,假冒注册商标罪,有期徒刑六年,,处罚金人民币六百八十万元,
……
```

Challenge: Models must parse complex legal language from multiple case documents (9.6 documents per table on average), handle joint-defendant cases with up to 16 defendants per case, distinguish between verdict outcomes (guilty vs. acquitted), and extract structured information from unstructured legal narratives involving trademark infringement worth millions.

### 📚 Academic Analysis Example

Task: Extract methodology performance from research papers on the WikiText-103 dataset

Input Query: "List the Test perplexity performance of the proposed methods in the paper on the WikiText-103 dataset."

Source Documents: research papers (36k-96k tokens each)

📊 Ground truth table (excerpt):

```csv
paper_name,method,result,models_and_settings
Primal-Attention: Self-attention through Asymmetric Kernel SVD,Primal.+Trans.,31,
Language Modeling with Gated Convolutional Networks,GCNN-8,44.9,
GATELOOP: FULLY DATA-CONTROLLED LINEAR RECURRENCE,GateLoop,13.4,
```

Challenge: Models must parse complex academic papers, identify specific methodologies, locate performance tables, and extract numerical results while handling various formatting styles.

๐Ÿฆ Financial Analysis Example

Task: Extract and compare financial metrics across multiple company annual reports

๐Ÿ“Š View Ground Truth Table
Company,Revenue (CNY),Net Profit (CNY),Operating Cash Flow (CNY)
Gree Electric,203979266387,29017387604,56398426354
Midea Group,372037280000,33719935000,57902611000
Haier Smart Home,261427783050,16596615046,25262376228
TCL Technology,174366657015,4781000000,25314756105
GONGNIU GROUP,15694755600,3870135376,4827282090

Challenge: Models must locate financial data scattered across lengthy annual reports (avg 437k tokens), handle different formatting conventions, and ensure numerical accuracy across multiple documents.
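
Numerical accuracy is easy to lose when reports mix digit grouping with Chinese unit conventions (万 = 10^4, 亿 = 10^8). A hedged sketch of the kind of normalization a post-processor might apply to extracted amounts; the unit table is illustrative, not AOE's official evaluation code:

```python
import re

# Multipliers for common Chinese numeric units (an assumption for this sketch).
UNITS = {"亿": 10**8, "万": 10**4}

def normalize_amount(text: str) -> float:
    """Turn strings like '2,039.79亿元' or '203979266387' into a float in CNY."""
    cleaned = text.replace(",", "").strip()
    match = re.fullmatch(r"(\d+(?:\.\d+)?)(亿|万)?元?", cleaned)
    if match is None:
        raise ValueError(f"unrecognized amount: {text!r}")
    value, unit = match.groups()
    return float(value) * UNITS.get(unit, 1)

print(normalize_amount("2,039.79亿元"))  # ~203979000000.0
print(normalize_amount("29017387604"))   # 29017387604.0
```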

## 🔬 Research Applications

### 🎯 Ideal for Evaluating

- Multi-document Understanding: Information synthesis across long-form texts
- Schema Construction: Dynamic table structure generation
- Domain Adaptation: Performance across specialized fields
- Numerical Reasoning: Financial calculations and quantitative analysis
- Cross-lingual Capabilities: English and Chinese document processing

### 📈 Benchmark Insights

- Even SOTA models struggle: Best performers achieve only ~68% accuracy
- Domain specificity matters: Performance varies significantly across fields
- Length matters: Document complexity correlates with task difficulty
- RAG limitations revealed: Standard retrieval often fails for structured tasks

## 🚀 Getting Started

### Quick Usage

```python
from datasets import load_dataset

# Load the complete benchmark (published as a single "all" split)
dataset = load_dataset("tianyumyum/AOE")
all_tasks = dataset["all"]

# Inspect one example task
task = all_tasks[0]
print(f"Documents: {len(task['doc_length'])}")
print(f"Expected output: {task['answers']}")
```

## 📊 Evaluation Framework

AOE provides a comprehensive three-tier evaluation system (a sketch of tiers 1 and 3 follows the list):

1. 🎯 CSV Parsability: Basic structure compliance (Pass Rate)
2. 🏆 Overall Quality: LLM-assessed holistic evaluation (0-100%)
3. 🔬 Cell-Level Accuracy: Granular content precision (F1 score)
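
A minimal sketch of what tiers 1 and 3 measure, assuming model output and ground truth are CSV strings and that cells match as exact (column, value) pairs after whitespace stripping; the official scorer in the GitHub repository may normalize and align cells differently:

```python
import csv
import io

def parse_csv(text: str) -> list[list[str]]:
    """Tier 1: a table passes if it parses as CSV with a consistent column count."""
    rows = list(csv.reader(io.StringIO(text)))
    if not rows or any(len(row) != len(rows[0]) for row in rows):
        raise ValueError("not a well-formed table")
    return rows

def cell_f1(pred: str, gold: str) -> float:
    """Tier 3: F1 over (column, value) cell pairs (duplicates collapse; a simplification)."""
    def cells(rows):
        header, *body = rows
        return {(header[i].strip(), row[i].strip())
                for row in body for i in range(len(header))}

    pred_cells, gold_cells = cells(parse_csv(pred)), cells(parse_csv(gold))
    overlap = len(pred_cells & gold_cells)
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred_cells), overlap / len(gold_cells)
    return 2 * precision * recall / (precision + recall)
```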

## 🤝 Contributing & Support

⭐ Star our GitHub repo if you find AOE useful! ⭐

*Pushing the boundaries of structured knowledge extraction* 🚀