---
language:
- en
license: mit
tags:
- tables
- benchmark
- qa
- llms
- document-understanding
- multimodal
pretty_name: Human Centric Tables Question Answering (HCTQA)
size_categories:
- 10K<n<100K
---

Human Centric Tables Question Answering (HCTQA) is a benchmark for evaluating the performance of LLMs on question answering over complex real-world and synthetic tables. The dataset contains both real-world and synthetic tables with associated images, CSVs, and structured metadata. Questions span varying levels of complexity, requiring models to handle reasoning over complex structures, numeric aggregation, and context-dependent understanding. The `dataset_type` field indicates whether a sample comes from real-world data sources (`realWorldHCTs`) or was synthetically created (`syntheticHCTs`).

---

# HCT-QA: Human-Centric Tables Question Answering

**HCT-QA** is a benchmark dataset designed to evaluate large language models (LLMs) on question answering over complex, human-centric tables (HCTs). These tables often appear in documents such as research papers, reports, and webpages, and they pose significant challenges for traditional table QA due to their non-standard layouts and compositional structure.

The dataset includes:

- **2,188 real-world tables** with **9,835 human-annotated QA pairs**
- **4,679 synthetic tables** with **67,500 programmatically generated QA pairs**
- Logical and structural metadata for each table and question

> 📄 **Paper**: [Title TBD]
> _The associated paper is currently under review and will be linked here once published._

---

## 📊 Dataset Splits

| Config    | Split | # Examples (Placeholder) |
|-----------|-------|--------------------------|
| RealWorld | Train | 7,500                    |
| RealWorld | Test  | 2,335                    |
| Synthetic | Train | 55,000                   |
| Synthetic | Test  | 12,500                   |

---

## 🏆 Leaderboard

| Model Name | FT (Finetuned) | Recall | Precision |
|------------|----------------|--------|-----------|
| Model-A    | True           | 0.81   | 0.78      |
| Model-B    | False          | 0.64   | 0.61      |
| Model-C    | True           | 0.72   | 0.69      |

> 📌 If you're evaluating on this dataset, open a pull request to update the leaderboard.

---

## Dataset Structure

Each entry in the dataset is a dictionary with the following structure:

### Sample Entry

```json
{
  "table_id": "arxiv--1--1118",
  "dataset_type": "arxiv",
  "table_data": {
    "table_as_csv": ",0,1,2\n0,Domain,Average Text Length,Aspects Identified\n1,Journalism,50,44\n...",
    "table_as_html": "...",
    "table_as_markdown": "| Domain | Average Text Length | Aspects Identified |...",
    "table_image_local_path_within_github_repo": "tables/images/arxiv--1--1118.jpg",
    "table_image_url": "https://hcsdtables.qcri.org/datasets/all_images/arxiv_1_1118.jpg",
    "table_properties_metadata": {
      "Standard Relational Table": true,
      "Row Nesting": false,
      "Column Aggregation": false
    }
  },
  "questions": [
    {
      "question_id": "arxiv--1--1118--M0",
      "question": "Report the Domain and the Average Text Length where the Aspects Identified equals 72",
      "question_template_for_synthetic_only": "Report [column_1] and [column_2] where [column_3] equals [value]",
      "question_properties_metadata": {
        "Row Filter": true,
        "Aggregation": false,
        "Returned Columns": true
      },
      "answer": "{Psychology | 86} || {Linguistics | 90}",
      "prompt": "......",
      "prompt_without_system": "..."
    }
  ]
}
```

### Ground Truth Format

Ground-truth answers are serialized as a single string: each answer row is wrapped in braces `{...}`, values within a row are separated by `|`, and multiple rows are separated by `||` (see the parsing sketch at the end of this card).

Example: `{value1 | value2} || {value3 | value4}`

### Table Properties

For details on table and question properties, please see our [paper](https://openreview.net/pdf?id=eaW6OtD4HR).
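---

## Usage Examples

A minimal loading sketch using the 🤗 `datasets` library. The repository id below is a placeholder (this card does not state the Hub id), and the `RealWorld` config / `test` split names follow the split table above; adjust them to match the published repository.

```python
from datasets import load_dataset

# Placeholder repo id: replace with the actual Hub repository for HCT-QA.
ds = load_dataset("your-org/HCT-QA", "RealWorld", split="test")

sample = ds[0]
print(sample["table_id"], sample["dataset_type"])
print(sample["table_data"]["table_as_csv"][:120])

# Each table carries one or more annotated questions.
for q in sample["questions"]:
    print(q["question"], "->", q["answer"])
```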
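The ground-truth string described above can be split back into rows and values with plain string handling. A minimal sketch, assuming the separators are exactly as shown in the example (`||` between rows, `|` between values within a row); the helper name is ours, not part of the dataset:

```python
def parse_answer(answer: str) -> list[list[str]]:
    """Parse '{v1 | v2} || {v3 | v4}' into [['v1', 'v2'], ['v3', 'v4']]."""
    rows = []
    for chunk in answer.split("||"):
        # Drop surrounding whitespace and the enclosing braces, then split values.
        chunk = chunk.strip().strip("{}")
        rows.append([value.strip() for value in chunk.split("|")])
    return rows


print(parse_answer("{Psychology | 86} || {Linguistics | 90}"))
# [['Psychology', '86'], ['Linguistics', '90']]
```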