Improve dataset card: Add task categories, GitHub links, and detailed overview

#1 opened by nielsr (HF Staff)
Files changed (1)
  1. README.md +47 -10
README.md CHANGED
@@ -1,4 +1,10 @@
---
+ language:
+ - en
+ license: cc-by-4.0
+ task_categories:
+ - text-classification
+ - question-answering
dataset_info:
  features:
  - name: query
@@ -24,15 +30,8 @@ configs:
  data_files:
  - split: test
    path: data/test-*
- task_categories:
- - text-classification
- license: cc-by-4.0
- language:
- - en
papers:
- - title: >-
-     FinAuditing: Taxonomy-Grounded Financial Auditing Benchmark for Evaluating
-     Large Language Models
+ - title: 'FinAuditing: A Financial Taxonomy-Structured Multi-Document Benchmark for Evaluating LLMs'
  authors:
  - Yan Wang
  - Keyi Wang
@@ -61,5 +60,43 @@ tags:
# 🧾 FinAuditing Benchmark

This dataset is introduced in the paper
- **[FinAuditing: Taxonomy-Grounded Financial Auditing Benchmark for Evaluating Large Language Models](https://arxiv.org/abs/2510.08886)**
- by Yan Wang, Keyi Wang, Shanshan Yang, Jaisal Patel, Jeff Zhao, Fengran Mo, Xueqing Peng, Lingfei Qian, Jimin Huang, Guojun Xiong, Xiao-Yang Liu, and Jian-Yun Nie (2025).
+ **[FinAuditing: A Financial Taxonomy-Structured Multi-Document Benchmark for Evaluating LLMs](https://arxiv.org/abs/2510.08886)**
+ by Yan Wang, Keyi Wang, Shanshan Yang, Jaisal Patel, Jeff Zhao, Fengran Mo, Xueqing Peng, Lingfei Qian, Jimin Huang, Guojun Xiong, Xiao-Yang Liu, and Jian-Yun Nie (2025).
+
+ **GitHub Repository**: [https://github.com/The-FinAI/FinAuditing.git](https://github.com/The-FinAI/FinAuditing.git)
+ **Evaluation Framework**: [https://github.com/The-FinAI/FinBen](https://github.com/The-FinAI/FinBen)
+
+ ---
+
+ ## 🌟 Overview
+
+ FinAuditing is the first taxonomy-aligned, structure-aware, multi-document benchmark for evaluating Large Language Models (LLMs) on financial auditing tasks. Built from real US-GAAP-compliant XBRL filings, FinAuditing defines three complementary subtasks, each targeting a distinct aspect of structured auditing reasoning: FinSM for semantic consistency, FinRE for relational consistency, and FinMR for numerical consistency. The paper further proposes a unified evaluation framework integrating retrieval, classification, and reasoning metrics across these subtasks.
+
+ ### 📚 Datasets Released
+
+ | 📂 Dataset | 📝 Description |
+ |------------|----------------|
+ | [**FinSM**](https://huggingface.co/datasets/TheFinAI/FinSM) | Evaluation set for the FinSM subtask of the FinAuditing benchmark. The task follows an information-retrieval paradigm: given a query describing a financial term (either currency or concentration of credit risk), an XBRL filing, and a US-GAAP taxonomy, the expected output is the set of mismatched US-GAAP tags identified after retrieval. |
+ | [**FinRE**](https://huggingface.co/datasets/TheFinAI/FinRE) | Evaluation set for the FinRE subtask of the FinAuditing benchmark. This is a relation extraction task: given two specific elements $e_1$ and $e_2$, an XBRL filing, and a US-GAAP taxonomy, the goal is to classify the relation between the two elements into one of three relation error types. |
+ | [**FinMR**](https://huggingface.co/datasets/TheFinAI/FinMR) | Evaluation set for the FinMR subtask of the FinAuditing benchmark. This is a mathematical reasoning task: given two questions $q_1$ and $q_2$ (where $q_1$ asks for the extraction of a reported value and $q_2$ for the calculation of the corresponding real value), an XBRL filing, and a US-GAAP taxonomy, the model must extract the reported value for a given instance in the filing and compute the corresponding numeric value, which is then used to verify whether the reported value is correct. |
+ | [**FinSM_Sub**](https://huggingface.co/datasets/TheFinAI/FinSM_Sub) | FinSM subset for ICAIF 2025. |
+ | [**FinRE_Sub**](https://huggingface.co/datasets/TheFinAI/FinRE_Sub) | FinRE subset for ICAIF 2025. |
+ | [**FinMR_Sub**](https://huggingface.co/datasets/TheFinAI/FinMR_Sub) | FinMR subset for ICAIF 2025. |
+
+ ---
+
+ ## Citation
+
+ If you find our benchmark useful, please cite:
+
+ ```bibtex
+ @misc{wang2025finauditingfinancialtaxonomystructuredmultidocument,
+   title={FinAuditing: A Financial Taxonomy-Structured Multi-Document Benchmark for Evaluating LLMs},
+   author={Yan Wang and Keyi Wang and Shanshan Yang and Jaisal Patel and Jeff Zhao and Fengran Mo and Xueqing Peng and Lingfei Qian and Jimin Huang and Guojun Xiong and Xiao-Yang Liu and Jian-Yun Nie},
+   year={2025},
+   eprint={2510.08886},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL},
+   url={https://arxiv.org/abs/2510.08886},
+ }
+ ```
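The evaluation sets listed in the card's table live on the Hugging Face Hub, so they can be pulled with the `datasets` library. The snippet below is a minimal loading sketch, assuming each subtask repository exposes the `test` split declared in this card's YAML; inspect the printed schema before relying on any particular field.

```python
# Minimal sketch: load the FinAuditing evaluation sets from the Hugging Face Hub.
# Assumption: each repository exposes a `test` split as declared in the card's YAML.
from datasets import load_dataset

subtasks = ["TheFinAI/FinSM", "TheFinAI/FinRE", "TheFinAI/FinMR"]

for repo_id in subtasks:
    ds = load_dataset(repo_id, split="test")
    # Report size and available columns for each subtask.
    print(repo_id, ds.num_rows, ds.column_names)

# Peek at one example to see the actual schema of a subtask.
finsm = load_dataset("TheFinAI/FinSM", split="test")
print(finsm[0])
```

For running the full benchmark rather than ad-hoc inspection, the card points to the FinBen evaluation framework linked above.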