Column schema for the annotation records that follow:

| Column | Type |
|---|---|
| bibkey | string, length 18–52 |
| title | string, length 31–151, nullable |
| inclusion | string, 1 distinct value |
| exclusion_criteria | string, 1 distinct value |
| exclusion_criteria_detail | string, 2 distinct values |
| short_summary | string, length 48–766, nullable |
| contribution | string, 88 distinct values |
| phenomenon_short | string, 6 distinct values |
| target_phenomenon | string, length 3–360, nullable |
| phenomenon_defined | string, 2 distinct values |
| phenomenon_definition | string, length 10–964, nullable |
| definition_scope | string, 2 distinct values |
| purpose_extra | string, 81 distinct values |
| task_definition | string, length 14–1.39k, nullable |
| task_item_definition | string, length 7–3.27k, nullable |
| task_definition_detail | string, length 1–1.19k, nullable |
| task_source | string, length 14–460, nullable |
| task_dataset_size | string, length 2–309, nullable |
| task_dataset_metadata | string, 2 distinct values |
| dataset_metadata_detail | string, length 1–570, nullable |
| dataset_sampling_method | string, 18 distinct values |
| response_format | string, 52 distinct values |
| metric_definition | string, length 3–419 |
| metric_definition_detail | string, length 21–1.18k, nullable |
| task_source_detail | string, length 6–829, nullable |
| authorship | string, 7 distinct values |
| benchmark_availability | string, 18 distinct values |
| procedural_extra | string, 45 distinct values |
| notes_extra | string, 40 distinct values |
| task_train_val | string, 6 distinct values |
| task_dataset_size_extra | string, length 2–549, nullable |
| response_format_detail | string, 88 distinct values |
| metric_aggregation | string, 26 distinct values |
| metric_subscores | string, 2 distinct values |
| metric_subscores_detail | string, length 6–1.07k, nullable |
| metric_metascoring | string, 17 distinct values |
| benchmark_location | string, length 6–117, nullable |
| benchmark | string, length 3–146, nullable |
| phenomenon_contested | string, 3 distinct values |
| task_face_validity | string, 21 distinct values |
| metric_face_validity | string, 18 distinct values |
| result_interpretation | string, 2 distinct values |
| results_comparison | string, 2 distinct values |
| results_comparison_explanation | string, 3 distinct values |
| results_realism | string, 7 distinct values |
| results_human_baseline | string, 2 distinct values |
| results_author_validity | string, 15 distinct values |
| results_author_validity_detail | string, length 17–1.19k, nullable |
| metric_statistics | string, length 4–405, nullable |
| metric_access | string, 2 distinct values |
| task_ecology | string, 17 distinct values |
| task_ecology_detail | string, length 5–580, nullable |
| definition_integrity | string, 3 distinct values |
| definition_integrity_detail | string, 3 distinct values |
| task_dataset_size_detail | string, 64 distinct values |
| metric_fewshot | string, 2 distinct values |
| phenomenon_taxonomy_root | string, 30 distinct values |
| phenomenon_taxonomy_leaf | string, 32 distinct values |
| phenomenon_taxonomy_alternate | string, 8 distinct values |
| task_source_clean | string, length 11–119 |
| dataset_sampling_method_clean | string, 18 distinct values |
| response_format_clean | string, 29 distinct values |
| metric_definition_clean | string, 77 distinct values |
| phenomenon_contested_clean | string, 3 distinct values |
| task_face_validity_clean | string, 5 distinct values |
| metric_face_validity_clean | string, 4 distinct values |
| results_realism_clean | string, 5 distinct values |
| results_author_validity_clean | string, 4 distinct values |
| task_ecology_clean | string, 14 distinct values |
| metric_statistics_clean | string, 10 distinct values |
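The schema above also serves as a guide for programmatic use. Below is a minimal sketch of how the annotation table could be loaded and summarized with pandas; the file name `benchmark_annotations.csv` is a hypothetical placeholder for an export of the records below, and the snippet assumes the `*_clean` columns store Python-style list strings (e.g. `"['Free response']"`), as they appear in the rows.

```python
# Minimal sketch (not an official loader): summarize the annotation records.
# "benchmark_annotations.csv" is a hypothetical export path for the table below.
import ast

import pandas as pd

df = pd.read_csv("benchmark_annotations.csv")

# Keep only records marked for inclusion in the survey.
included = df[df["inclusion"] == "Include"]

# The *_clean columns hold list-valued strings such as "['Free response']";
# parse them, flatten, and count label frequencies.
response_formats = (
    included["response_format_clean"]
    .dropna()
    .map(ast.literal_eval)
    .explode()
    .value_counts()
)
print(response_formats)

# Distribution of top-level phenomenon categories (e.g. NLP, Reasoning).
print(included["phenomenon_taxonomy_root"].value_counts(dropna=False))
```

The same pattern applies to any of the other list-valued `*_clean` columns, such as `task_source_clean` or `metric_definition_clean`.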
asthanaEvaluatingLLMsTargeted2024
|
Evaluating LLMs for Targeted Concept Simplification for Domain-Specific Texts
|
Include
| null | null |
NLP models are useful for aiding comprehension of complex texts from unfamiliar domains, but simplifying entire texts can remove important details. Targeted concept simplification helps readers understand difficult concepts within context, enhancing vocabulary and knowledge. The new WIKIDOMAINS dataset and preliminary benchmarks show human judges prefer explanations over simplifications for difficult concepts, with no single model excelling across all quality dimensions, highlighting the need for personalized reading comprehension support.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
LLMs' ability to support people in reading complex text from unfamiliar domains
|
Yes
|
targeted concept simplification as a task for supporting readers
|
Comprehensive
| null |
The task of targeted concept simplification is to rewrite an input definition containing a concept to make it understandable to someone unfamiliar with the concept.
|
Text, domain, concept, human difficulty rating, evaluation
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Crowd-sourced task examples (e.g. Prolific-created tasks)
|
22561
|
Yes
|
human difficulty
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Free response (e.g. summary paragraph)
|
Human ratings (text quality, preference, NOT manual scoring of other metrics), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train, Validation
|
3384 val, 15873 train
| null |
Simple Mean
|
Yes
|
simplify, explain, human eval, automatic eval
| null |
GitHub
|
WIKIDOMAINS
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
No
| null |
simple mean/sum, t-tests
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
NLP
|
Summarization
| null |
['Author-crafted', 'Crowd-sourced']
|
['Targeted']
|
['Free response']
|
['Human ratings', 'LLM-as-a-Judge']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Constructed']
|
['Mean', 'Tests']
|
karpinskaOneThousandOne2024
|
One Thousand and One Pairs: A “novel” challenge for long-context language models
|
Include
| null | null |
While synthetic long-context LLM benchmarks typically test surface-level retrieval, the NOCHA dataset assesses models' abilities to retrieve, synthesize, and reason over book-length texts. The dataset consists of true and false claim pairs about 67 recently published English fictional books, requiring global reasoning for verification. Experiments show that human readers excel at this task, but long-context LLMs struggle significantly, with the highest accuracy from GPT-4o at 55.8%, indicating a need for improved models and methodologies for handling extensive world-building and complex narratives.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Claim verification in long-context tasks
|
Yes
|
true/false narrative minimal pairs that isolate a single narrative phenomenon present in their novels. Each false claim differs from the true claim in its pair only by the inclusion of false information regarding the same event or entity
|
Comprehensive
| null |
Discern true and false claims about books
|
Book, claim, answer, score
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Crowd-sourced task examples (e.g. Prolific-created tasks)
|
1001
|
Yes
|
global, passage, sentence
|
Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
global, passage, sentence
| null |
https://github.com/marzenakrp/nocha
|
NOCHA
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
The benchmark is itself realistic
|
No
|
Yes
|
realistic dataset
|
simple mean/sum, GLMs
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
NLP
|
Long Context
| null |
['Author-crafted', 'Crowd-sourced']
|
['Criterion']
|
['Multiple choice', 'Short free response']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Representative']
|
['Mean', 'Tests']
|
zhaoQTSummQueryfocusedSummarization2023
|
QTSumm: Query-Focused Summarization over Tabular Data
|
Include
| null | null |
Propose a dataset. Show that the task is challenging. Propose a method to improve model performance.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
table summarization
|
Yes
|
"the model is required to generate a user-customized summary given the table and user query"-p1158
|
Subset
| null |
"the model is required to generate a user-customized summary given the table and user query"-p1158
|
given a query and a table, the model has to generate a user-customized summary
| null |
Crowd-sourced task examples (e.g. Prolific-created tasks), Modified from another benchmark (e.g. translation into another language)
|
440 tables, 1078 summaries
|
Yes
|
topic area, length, and annotation details
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Free response (e.g. summary paragraph)
|
n-gram (BLEU, ROUGE, chrF), Human ratings (text quality, preference, NOT manual scoring of other metrics), LLM post-processing (extracting answers, reformatting for automated scoring)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train, Validation
|
train: 2055 tables, 4981 summaries; dev: 439 tables, 1052 summaries
| null | null |
No
| null | null |
https://github.com/yale-nlp/QTsumm
|
QTSUMM
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
Yes
|
they conduct extensive human experiments
| null |
Model access required (e.g. logits)
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Composite phenomenon
|
Yes
| null | null |
Code Generation
| null | null |
['Crowd-sourced', 'Another benchmark']
|
['Targeted']
|
['Free response']
|
['Soft match', 'Human ratings', 'LLM post-processing']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Partial']
| null |
suTextttConflictBankBenchmarkEvaluating2024
|
CONFLICTBANK: A Benchmark for Evaluating Knowledge Conflicts in Large Language Models
|
Include
| null | null |
Large language models (LLMs) have made significant progress, but the issue of knowledge conflicts, which can lead to hallucinations, remains underexplored. To address this, CONFLICTBANK, a large benchmark of claim-evidence and QA pairs, is introduced to study conflicts arising from misinformation, temporal discrepancies, and semantic divergences. Through comprehensive experiments on various LLMs, the study provides insights into conflicts in retrieved and encoded knowledge, highlighting the importance of resolving these conflicts for developing trustworthy AI.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Knowledge conflicts in LLMs
|
Yes
|
Retrieved conflicts arise during the inference stage when newly retrieved information contradicts the model’s parametric memory, while embedded conflicts occur during the training stage due to discrepancies within the training text itself
|
Comprehensive
| null |
Answer question-answer pairs
|
QA pair, conflict type, answer, memorization ratio
| null |
Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
553,117
|
Yes
|
conflict type
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice, Short free response (e.g. single word or number)
|
Memorization score
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
conflict types
| null |
https://github.com/zhaochen0110/conflictbank
|
CONFLICTBANK
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
No
| null |
simple mean/sum
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Factuality
| null | null |
['Procedurally-generated', 'LLM-generated']
|
['Targeted']
|
['Multiple choice', 'Short free response']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Constructed']
|
['Mean']
|
yangCRAGComprehensiveRAG2024
|
CRAG - Comprehensive RAG Benchmark
|
Include
| null | null |
Retrieval-Augmented Generation (RAG) aims to improve Large Language Models (LLMs) by supplementing them with external knowledge, but existing datasets fail to capture the diverse and dynamic nature of real-world Question Answering (QA) tasks. The Comprehensive RAG Benchmark (CRAG) is introduced to address this gap, featuring 4,409 question-answer pairs and mock APIs for web and Knowledge Graph search, covering a wide range of domains and question types. Evaluation on CRAG shows that adding RAG improves LLM accuracy but still falls short of trustworthy QA, particularly with questions involving high dynamism, low popularity, or complexity, indicating key areas for future research.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
RAG performance in real-world QA settings
|
Yes
|
RAG in the wild is more difficult than in contrived settings and models often fail
|
Comprehensive
| null |
Answer questions from an array of questions across domains and question categories
|
Question, retrieval contents, domain, category, answer, score
| null |
Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template)
|
4409
|
No
| null |
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Short free response (e.g. single word or number), Free response (e.g. summary paragraph)
|
Human ratings (text quality, preference, NOT manual scoring of other metrics), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train, Validation
|
train: 1760, val: 1320
| null |
Simple Mean
|
Yes
| null | null |
https://github.com/facebookresearch/CRAG/
|
CRAG
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
The benchmark is itself realistic
|
No
|
No
| null |
simple mean/sum
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Retrieval
| null | null |
['Another benchmark', 'Procedurally-generated']
|
['Targeted']
|
['Short free response', 'Free response']
|
['Human ratings', 'LLM-as-a-Judge']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Representative']
|
['Mean']
|
zhangSelenePioneeringAutomated2024
|
Selene: Pioneering Automated Proof in Software Verification
|
Include
| null | null |
A benchmark for automated proof generation in software verification.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Automated proof generation in software verification.
|
Yes
|
Write proofs that project-level software has the required properties. These can then be formally checked by an independent prover.
|
Subset
| null |
The subject LLM’s goal is to write proofs for the given specifications from seL4 (a real-world industrial-level operating system microkernel) and pass the verification.
|
Generation of a target lemma, which is then merged with the other lemmas to verify the proof. You are given the specifications to prove. Write the proofs in the Isabelle language. Prompted with "several" demonstrations of perfect proofs in-context.
|
This benchmark is in a complex domain.
|
Real task examples (e.g. GitHub issues)
|
360
|
Yes
|
Difficulty level of the lemmas.
|
Random sample (creators defined a task space and sampled from it), Convenience sample (creators found a set of tasks that was readily accessible)
|
Structured response (e.g. valid JSON, API call alone)
|
Generated proof verified by an independent prover system.
| null |
All tasks are taken from a single piece of software (seL4) as this has ground truth proofs.
|
Mix (multiple authors from industry and academia)
|
No, no link is provided
| null |
This was quite far outside my skillset.
|
Test
| null | null |
Simple Mean
|
Yes
|
Difficulty of the target lemma.
|
pass@k (any correct answer in k trials)
| null |
Selene
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
Yes
|
Identify that software verification requires two stages: the prerequisite specification stage and the proof stage. Their benchmark only considers the second stage.
|
Mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
No
| null | null |
Code Generation
| null | null |
['Real task']
|
['Random', 'Convenience']
|
['Structured']
|
['Reward']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Partial']
|
['Mean']
|
gharaeeBIOSCAN5MMultimodalDataset2024
|
BIOSCAN-5M: A Multimodal Dataset for Insect Biodiversity
|
Include
| null | null |
BIOSCAN-5M is a multimodal benchmark for insect classification and contains images, taxonomic labels, raw nucleotide barcode sequences, barcode index numbers, geographic location, and size metadata. The dataset is publicly available and includes data from novel species. The benchmark supports classification, zero-shot transfer learning, and retrieval learning.
|
BIOSCAN-5M is an expansion of BIOSCAN-1M. It is unique in its inclusion of 4 million additional images as well as location, taxonomic rank, and size metadata. Additionally, BIOSCAN-5M was cleaned to resolve inconsistencies and provide more reliable labels.
|
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
insect classification
|
Yes
|
Insect classification is the automatic classification of insect specimens by AI tools.
|
Comprehensive
| null |
The benchmark outlines three possible tasks. The first task is insect classification, which can be performed as DNA-based and/or image-based taxonomic classification. In a closed-world setting, the task is to accurately identify species from a predefined set of existing labels. In the open-world setting the task is to group together samples of novel species. The benchmark also supports zero-shot transfer-learning, which measures how unseen datasets can be clustered using embeddings from pre-trained feature extractors, and multimodal retrieval learning by aligning image, DNA, and taxonomic label embeddings using CLIBD.
|
A single item in the dataset contains the biological taxonomy (phylum, class, order, family, subfamily, genus, species), the genetic information (DNA barcode sequence, barcode index number), a cropped and an original RGB image of the insect, size information (measured value, scale factor, area fraction), and geographical information (coordinates, country, province/state).
| null |
Real task examples (e.g. GitHub issues), Modified from another benchmark (e.g. translation into another language)
|
47,260
|
Yes
|
phylum, class, order, family, subfamily, genus, species, DNA barcode sequence, barcode index number, original RGB image, cropped RGB image, measured value, scale factor, area fraction, country, province/state, latitude, longitude
|
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall), Distribution (perplexity, calibration, correlation)
| null |
Dataset is an superset of previous datasets with additional metadata.
|
Academia
|
Yes
| null | null |
Test, Train, Validation
|
There are four types of species labels, each with its own splits. Unknown samples have no species label. Seen samples have an established scientific name for their species. Unseen samples have an established scientific name for the genus and a uniquely identifying placeholder name for the species. Heldout samples are labelled with a placeholder genus and species name. Unknown: pretrain 4,677,756. Seen: train/validation/test 289,203/14,757/38,373. Unseen: retrieval keys/validation/test 36,465/8,819/7,887. Heldout: 76,590.
| null | null |
No
| null | null |
https://github.com/bioscan-ml/BIOSCAN-5M
|
BIOSCAN-5M
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
The benchmark is itself realistic
|
No
|
Yes
|
The authors highlight that a strong dataset for insect classification requires detailed metadata, and geographic and specimen diversity. BIOSCAN-5M, compared to other benchmarks, covers 98% of discovered insects with 1.2 million labeled to the species rank, and contains geographical information, size, and DNA barcodes. The authors claim that multimodal datasets are critical for robust species classification.
|
Accuracy is reported for classification in both open and closed-world settings. Fine-tuned accuracy and linear probing accuracy are reported in a closed-world setting, while 1NN-genus probing accuracy is reported in an open-world setting. AMI is reported for zero-shot transfer learning, and in multimodal retrieval learning, micro and macro top-1 accuracy is reported.
|
Outputs alone
|
Complete real task (e.g. providing medical advice to real people interactively)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Biology
| null | null |
['Real task', 'Another benchmark']
|
['Convenience', 'Targeted']
|
['Short free response']
|
['Exact match', 'Distribution']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Complete']
|
['Mean', 'Other']
|
coda-fornoCogBenchLargeLanguage2024
|
CogBench: a large language model walks into a psychology lab
|
Include
| null | null |
CogBench is a benchmark that uses seven cognitive psychology experiments to evaluate LLMs by assessing their behavioral characteristics. CogBench provides ten behavioral metrics to phenotype LLM behavior e.g. model-based reasoning, exploration strategies, metacognition, and risk-taking tendencies. The benchmark is applied to 40 different LLMs.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Cognitive behavioral phenotyping
|
Yes
|
It's not very clearly defined. There is a footnote: "A computational phenotype is a collection of mathematically derived parameters that precisely describe individuals across different domains."
|
Subset
| null |
The task consists of seven cognitive psychology experiments in which LLMs must respond to textual prompts simulating classic experimental paradigms such as the two-armed bandit problem.
|
textual prompt
| null |
Real task examples (e.g. GitHub issues), Procedurally-generated task examples (e.g. Creating instances from a template)
| null |
Yes
|
Performance metrics and behavioural metrics
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Multiple choice, Free response (e.g. summary paragraph)
|
Exact Match (accuracy, F1, precision, recall), Distribution (perplexity, calibration, correlation)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean, The metrics are normalized against human performance
|
Yes
|
They are split into performance metrics (e.g. probabilistic reasoning) and behavioural metrics (e.g. meta-cognition)
| null |
https://github.com/juliancodaforno/CogBench
|
CogBench
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
Yes
|
They explicitly note that their benchmark is based on "well-established experimental paradigms from the cognitive psychology literature, providing a unique set of advantages over traditional LLM benchmarks" because these measures "have been extensively validated over many years and shown to capture general cognitive constructs."
|
The metrics are averaged and normalized against human performance
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
|
The benchmark uses well-established cognitive psychology experiments (for humans).
|
Composite phenomenon
|
Yes
| null | null |
Psychology
| null | null |
['Real task', 'Procedurally-generated']
|
['Convenience']
|
['Multiple choice', 'Free response']
|
['Exact match', 'Distribution']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Partial']
|
['Mean']
|
liTEGDBComprehensiveDataset2024
|
TEG-DB: A Comprehensive Dataset and Benchmark of Textual-Edge Graphs
|
Include
| null | null |
Present a large-scale dataset. Develop a pipeline for relevant research. Benchmark existing models on the dataset.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
textual-edge graph processing
|
Yes
|
"A Textual-Edge Graph (TEG) is a graph-structured data format in which both nodes and edges have free-form text descriptions."-p4
|
Comprehensive
| null |
Given a textual-edge graph, a model has to process it and answer relevant questions
|
A graph and a question
| null |
Modified from another benchmark (e.g. translation into another language)
|
Total Nodes: 2,164,239; Total Edges: 10,579,752; Total Node Classes: 1,053
|
Yes
|
topic area
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Free response (e.g. summary paragraph)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null | null |
Yes
|
subsets of topic areas
| null |
https://github.com/Zhuofeng-Li/TEG-Benchmark
|
TEG-DB
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null | null |
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null | null |
NLP
|
Extraction
| null |
['Another benchmark']
|
['Targeted']
|
['Free response']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
| null |
mitaStrikingGoldAdvertising2024
|
Striking Gold in Advertising: Standardization and Exploration of Ad Text Generation
|
Include
| null | null |
CAMERA is a multimodal benchmark for automatic ad text generation (ATG) in Japanese. The paper presents the first standardization and formalization of the ATG task, and the first ATG benchmark. The dataset was manually annotated, and the benchmark contains automatic and human evaluations.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
automatic ad text generation
|
Yes
|
We standardize the ATG (automatic ad text generation) task as follows: Let x be a source document that describes advertised products or services, a a user signal reflecting the user’s latent needs or interests, and y an ad text. ATG aims to model p(y|a, x). The specific data to be selected for each x, a, and y is left to future dataset designers and providers.
|
Subset
|
The paper describes speed, trend, and user-friendliness, faithfulness, fluency, and attractiveness as aspects of a good ad text. Faithfulness, fluency, and attractiveness are used in human evaluation, and those sub-elements are reported.
|
Models are optionally pre-trained on the train split of CAMERA, and then generate an ad text given a landing page OCR text, the landing page layout information, and the landing page bbox image features. The ad text is manually and automatically evaluated.
|
A single item would have the landing page (LP) description, the user query, the landing page layout information, the landing page bbox image features, and entity type (time expression, named entity, terms, etc).
| null |
Real task examples (e.g. GitHub issues)
|
872
|
Yes
|
Landing page description, user query, landing page layout information, landing page bbox image features, entity type, industry type.
|
Random sample (creators defined a task space and sampled from it), Convenience sample (creators found a set of tasks that was readily accessible)
|
Free response (e.g. summary paragraph)
|
n-gram (BLEU, ROUGE, chrF), Human ratings (text quality, preference, NOT manual scoring of other metrics), LLM post-processing (extracting answers, reformatting for automated scoring), Distribution (perplexity, calibration, correlation)
| null |
Task dataset in Japanese
|
Industry
|
Yes
| null | null |
Test, Train, Validation
|
Train/Dev/Test 12395/3098/872
| null | null |
Yes
|
Faithfulness, fluency, and attractiveness have sub-scores in human evaluation
| null |
https://huggingface.co/datasets/cyberagent/camera
|
CAMERA (CyberAgent Multimodal Evaluation for Ad Text GeneRAtion)
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No
|
The benchmark is itself realistic
|
Yes
|
Yes
|
The authors define two requirements for ad text: (1) the information provided by the ad text is consistent with the content of the source document; and (2) the information is carefully curated and filtered based on the users’ potential needs. Thus, for a benchmark for ATG, the authors outline two design policies: the benchmark should (1) utilize multimodal information and (2) evaluate by industry domain. The authors tailor CAMERA to fit both design policies and measure both ad text requirements.
|
BLEU-4, Rouge-1, BERTScore, Keyword Insertion Rates (KWD), Sentence Length Regulation Compliance Rates (REG), Pearson and Spearman Correlation for Human Evaluation
|
Outputs alone
|
Complete real task (e.g. providing medical advice to real people interactively)
| null |
Composite phenomenon
|
Yes
| null | null |
Business
| null | null |
['Real task']
|
['Random', 'Convenience']
|
['Free response']
|
['Soft match', 'Human ratings', 'LLM post-processing', 'Distribution']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Complete']
|
['Other']
|
jacoviChainofthoughtStrongIts2024
|
A Chain-of-Thought Is as Strong as Its Weakest Link: A Benchmark for Verifiers of Reasoning Chains
|
Include
| null | null |
This paper introduces REVEAL, a benchmark dataset created to evaluate automatic methods for verifying step-by-step reasoning chains, specifically Chain-of-Thought (CoT) answers from language models in open-domain QA. REVEAL provides fine-grained annotations for each reasoning step, assessing its relevance, type (attribution or logic), factual correctness against evidence (attribution), and logical consistency with previous steps. The benchmark aims to support research in improving the reliability and correctness of LLM reasoning.
|
The main contribution is the REVEAL dataset, the first benchmark for detailed, step-level evaluation of CoT reasoning verifiers. It includes a comprehensive annotation schema covering relevance, step type, attribution, and logic, applied to CoT answers from multiple LLMs across diverse QA datasets. The dataset also features annotator justifications for each label. The paper provides baseline results for several verifiers, highlighting current challenges, especially in verifying logical correctness.
|
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Correctness verification of reasoning chains (Chain-of-Thought); Assessing attribution and logical validity of reasoning steps.
|
Yes
|
The correctness of a reasoning step within a Chain-of-Thought answer is defined along multiple dimensions: relevance to the question, the step's type (introducing external facts vs. logical inference vs. both), its attribution status relative to provided evidence (fully supported, partially supported, contradicted, unsupported), and its logical consistency with preceding steps (correct vs. incorrect).
|
Comprehensive
|
The work is motivated by the need for reliable methods to evaluate and improve the correctness of LLM-generated reasoning chains, as incorrect reasoning can undermine the utility of CoT prompting despite potentially correct final answers. The dataset isolates verification from evidence retrieval.
|
Step-level Reasoning Verification: Given a question, a CoT answer, a specific step within that CoT, preceding steps, and potentially external evidence passages, classify the step based on its relevance, type (attribution/logic/both), attribution correctness relative to evidence (if applicable), and logical correctness relative to preceding steps (if applicable).
|
An instance comprises a question (from StrategyQA, MuSiQue, Sports Understanding, or Fermi), a CoT answer generated by an LLM (Flan-PaLM, GPT-3, or Flan-UL2), and step-level annotations. For each step, these annotations include relevance, step type, logical correctness label, and (for attribution steps) attribution labels relative to up to three retrieved Wikipedia evidence paragraphs. Each label comes with free-text justifications from 5 annotators.
|
Verification is performed at the step level. Attribution uses Wikipedia as the knowledge source. A distinction is made between steps requiring factual attribution, logical inference, or both. Ambiguous/low-agreement cases are separated into REVEAL-Open.
|
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Crowd-sourced task examples (e.g. Prolific-created tasks), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
REVEAL-Eval (main evaluation set): 1002 CoT answers / 3360 steps.
|
Yes
|
Source QA Dataset, CoT Generating LLM, Question, Full CoT Answer, Step Index, Step Text, Relevance Label, Step Type Label, Logical Correctness Label, Evidence Passages (for attribution steps), Attribution Label (per step-evidence pair), Annotator Justifications (free text).
|
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
|
Macro F1 score is used for evaluating the step-level classification tasks (attribution 2-class, attribution 3-class, logic, type) and the CoT-level correctness task. Per-class F1 scores are also provided.
|
Questions are from StrategyQA, MuSiQue, Sports Understanding, Fermi. CoT answers generated by Flan-PaLM-540B, GPT-3, Flan-UL2-20B. Evidence retrieved from a 2021 Wikipedia snapshot using GTR/BM25 after decontextualization. Step-level verification annotations collected from 13 human annotators (5 per item) using a custom two-task protocol.
|
Mix (multiple authors from industry and academia)
|
Yes
|
Source questions from StrategyQA, MuSiQue, Sports Understanding, Fermi. CoT answers generated by Flan-PaLM-540B, GPT-3 (text-davinci-003), Flan-UL2-20B. Evidence from 2021 Wikipedia via GTR/BM25. Decontextualization applied before retrieval. 13 annotators involved. Data contamination mitigation practices used. Detailed prompts provided in appendix.
|
A key distinction is the fine-grained, step-level verification, compared to evaluating only the final answer or full chain correctness. The dataset highlights that verifiers find logical correctness harder to assess than attribution, while CoT generators often struggle more with attribution. The inclusion of free-text justifications is a valuable resource for future work.
|
Test
|
REVEAL-Open (low agreement set): 224 CoT answers / 847 steps. No training/validation splits defined.
|
Automatic verifiers output class labels for relevance, step type, attribution correctness, and logical correctness for each step.
|
Simple Mean
|
Yes
|
Performance is reported per task (Attribution 2/3-class, Logic, Type, CoT-level). Full CoT correctness breakdown by source dataset and generating model is shown. Analysis of unsupported steps and disagreement categories in REVEAL-Open is provided. Per-class F1 scores are available.
| null |
reveal-dataset.github.io and huggingface.co/datasets/google/reveal
|
REVEAL (Reasoning Verification Evaluation)
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
Yes
|
A two-task annotation protocol was designed to reduce cognitive load. 5 annotators provided labels and justifications for each item. 3 pilot rounds refined the process. Inter-annotator agreement measured (Krippendorff's alpha 0.49 for attribution, 0.46 for logic). Low-agreement (ambiguous/difficult) cases were identified and separated into REVEAL-Open, with analysis of disagreement reasons. Reasons for unsupported attribution labels were also analyzed.
|
Macro F1 score, per-class F1 score
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
|
Assesses fundamental aspects (factual grounding, logical flow) required for reliable reasoning, applicable to various domains where complex problem-solving is needed.
|
Composite phenomenon
|
Yes
| null | null |
Reasoning
|
Logical
| null |
['Real task', 'Author-crafted', 'Crowd-sourced', 'LLM-generated']
|
['Convenience', 'Targeted']
|
['Multiple choice']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative']
|
['Mean']
|
tanzerBenchmarkLearningTranslate2024
|
A Benchmark for Learning to Translate a New Language from One Grammar Book
|
Include
| null | null |
This paper introduces MTOB, a benchmark for learning to translate between English and Kalamang, a language with fewer than 200 speakers and therefore virtually no presence on the web, using several hundred pages of grammatical reference materials. This task framing is novel in that it asks a model to learn a language from a single human-readable book of grammar explanations, rather than a large mined corpus of in-domain data. While LLM baselines do not yet match human performance, their experiments show a clear trend that increasing LLM quality and context window size improves translation quality.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
translation between English and Kalamang
|
No
|
The ability to translate between English and Kalamang
|
Comprehensive
| null |
LLMs are asked to translate a sentence from/to English to/from Kalamang. The experiments are done in zero-shot and few-shot settings.
|
A pair of sentences
| null |
The examples are created by a linguist
|
test set: 100
|
No
| null |
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Free response (e.g. summary paragraph)
|
n-gram (BLEU, ROUGE, chrF)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train
|
train set: 400
| null |
Unknown
|
No
| null | null |
https://github.com/lukemelas/mtob
|
MTOB
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
Yes
|
No
| null |
Unknown
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Language Modelling
|
In-context Learning
| null |
['Expert-crafted']
|
['Targeted']
|
['Free response']
|
['Soft match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Constructed']
|
['Unknown']
|
ribeiroSTREETMULTITASKSTRUCTURED2023
|
STREET: A MULTI-TASK STRUCTURED REASONING AND EXPLANATION BENCHMARK
|
Include
| null | null |
This paper introduces STREET, a unified multi-task benchmark designed to evaluate natural language reasoning and explanation capabilities. Unlike typical QA datasets, STREET requires models not only to answer questions but also to generate structured, step-by-step explanations (reasoning graphs) detailing the derivation process. Evaluations using T5 and GPT-3 indicate that current models struggle to produce accurate reasoning graphs, lagging behind human performance.
|
The paper proposes the STREET benchmark, unifying diverse reasoning tasks (math, logic, science QA) under a common framework. It introduces "reasoning graphs" as a novel format for structured explanations. The benchmark provides reasoning graph annotations (human-annotated or programmatically generated) for over 35k questions from existing datasets. It evaluates T5 and GPT-3 on generating these structured explanations, revealing limitations of current models. The dataset and code are publicly released.
|
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Multi-step structured reasoning and explanation generation in natural language.
|
Yes
|
The ability to perform multi-step reasoning to answer a question and concurrently generate a structured explanation ("reasoning graph"). This graph explicitly links premises (represented as Textual Logical Units or TLUs) to intermediate conclusions and the final answer, showing the derivation path.
|
Comprehensive
|
To provide a benchmark for evaluating the generation of structured explanations, going beyond free-form rationales, and focusing on reasoning primarily from the provided input context rather than external knowledge retrieval.
|
Given a question (potentially with context and answer options), generate the correct answer and a corresponding reasoning graph. The reasoning graph is represented as a sequence of textual reasoning steps, where each step explicitly references the premise Textual Logical Units (TLUs).
|
A QA instance (from ARC, SCONE, GSM8K, AQUA-RAT, or AR-LSAT) comprising context, question, answer options (if any), and the gold answer. This is augmented with segmented Textual Logical Units (TLUs) for all components and a reasoning graph represented by links between premise TLUs and conclusion/reasoning step TLUs.
|
Explanations are structured as Directed Acyclic Graphs (DAGs). The benchmark focuses on reasoning where premises are mostly contained within the input text. It adapts multiple existing datasets into this unified structured explanation format.
|
Human exam questions (e.g. GRE questions), Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Crowd-sourced task examples (e.g. Prolific-created tasks), Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template)
|
Total questions included: 19,096. Test set sizes vary per task (not explicitly summed). Total reasoning steps: 151,093.
|
Yes
|
Source Task/Domain, Answer Type, Textual Logical Units (TLUs) with IDs for all components (context, question, options, answer, rationale steps), Reasoning Graph Edges (dependency links between TLU IDs).
|
Random sample (creators defined a task space and sampled from it), Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Free response (e.g. summary paragraph)
|
Exact Match (accuracy, F1, precision, recall), Reasoning Graph Accuracy and Reasoning Graph Similarity based on graph edit distance and textual similarity e.g. BLEURT.
|
1. Answer Accuracy (Exact Match for numerical/MCQ, custom state match for SCONE). 2. Reasoning Graph Accuracy (strict structural and textual match). 3. Reasoning Graph Similarity (normalized graph edit distance using task-specific node text similarity - exact, numeric, or BLEURT)
|
Reasoning graphs were added to existing datasets (ARC, SCONE, GSM8K, AQUA-RAT, AR-LSAT). This involved programmatic generation (SCONE), expert annotation based on existing rationales (GSM8K, AQUA-RAT), expert annotation from scratch (AR-LSAT), or adapting existing structured explanations (ARC from ENTAILMENTBANK). Annotators were experts with relevant educational backgrounds.
|
Mix (multiple authors from industry and academia)
|
Yes
|
Details the reasoning graph linearization format. Specifies models used (T5-large, GPT-3 text-davinci-002) and training/prompting setup.
Defines custom evaluation metrics (Graph Accuracy, Graph Similarity) and node similarity functions.
An annotation tool is described, and a screenshot is provided.
|
The core novelty lies in the structured "reasoning graph" representation for explanations, contrasting with free-form rationales. The benchmark explicitly tests the generation of these structures, revealing it's harder for models than just getting the final answer right.
|
Test, Train, Validation
|
Train and Dev splits exist, derived from source datasets.
|
The model generates a single text sequence containing the reasoning steps and their dependencies encoded using the specific syntax (e.g., premise IDs -> conclusion ID: conclusion text;) followed by the final answer.
|
Simple Mean
|
Yes
|
Results are reported separately for each of the 5 source tasks (ARC, SCONE, GSM8K, AQUA-RAT, AR-LSAT).
| null |
https://github.com/amazon-science/street-reasoning
|
STREET (Structured REasoning and Explanation Multi-Task benchmark)
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
The benchmark is itself realistic
|
Yes
|
Yes
|
Built on reputable QA datasets. Used expert annotators (undergrad/grad level) with guidelines and multiple passes for quality control. Achieved substantial inter-annotator agreement (Fleiss Kappa κ=0.79) for graph structure annotation. Dataset analysis shows complex reasoning structures (avg 7.8 steps, multi-premise steps)
|
Answer Accuracy (Exact Match %), Reasoning Graph Accuracy (%), Reasoning Graph Similarity (%).
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
|
The benchmark uses established QA problems and adds a requirement for structured explanations, aligning with the need for explainable AI in complex reasoning scenarios.
|
Composite phenomenon
|
Yes
| null | null |
Reasoning
| null | null |
['Human exams', 'Real task', 'Author-crafted', 'Crowd-sourced', 'Another benchmark', 'Procedurally-generated']
|
['Random', 'Convenience', 'Targeted']
|
['Free response']
|
['Exact match', 'Soft match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Representative']
|
['Mean']
|
yangDataTalesBenchmarkRealworld2024
|
DataTales: A Benchmark for Real-World Intelligent Data Narration
|
Include
| null | null |
DataTales is a novel benchmark designed to assess data narration of market movement data. It contains a human baseline and is publicly available. Specifically, DataTales assesses the proficiency of LLMs at performing lookups, comparisons, subtraction, rate of change, causal analysis, trend analysis, and predictive analysis to craft a financial report based upon market data.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Data narration
|
Yes
|
Data narration is the process of transforming intricate data into compelling narratives.
|
Subset
|
DataTales focuses on data narration with financial data, e.g. narrating financial market reports. Data Narration is separated into seven analytical operations across three domains: simple lookup, basic quantitative operations, and advanced analytical operations. Basic quantitative operations include comparison, subtraction, and rate of change, while advanced analytical operations include causal analysis, trend analysis, and predictive analysis.
|
We define the task of financial data narration as follows: given market movement data {T_{i,j} | i ≤ E_T, j ≤ D_T} with E_T financial entities and D_T days, where T_{i,j} is the row of entity i on date j, a data narration model M generates a report y narrating the market data, y = M(T_{i,j} | i ≤ E_T, j ≤ D_T). Narrations are generated and evaluated with both same-day data and historical data spanning one week. Both zero-shot and fine-tuned scenarios are analyzed.
|
A single item in the dataset would have market movement data (open, high, low, close, volume), the date, the entity, the market report, and the market type.
| null |
Real task examples (e.g. GitHub issues)
|
4900
|
No
| null |
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Free response (e.g. summary paragraph)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF), Human ratings (text quality, preference, NOT manual scoring of other metrics)
|
Accuracy is calculated with a specific MCQA-inspired methodology that utilizes Named Entity Recognition to assess whether LLMs predict numerical values accurately.
| null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train, Validation
|
Train/Validation and Testing 80/20, split by time
| null |
Simple Mean, None
|
Yes
|
Factuality, Style, and Insightfulness
| null |
https://github.com/yajingyang/DataTales/
|
DataTales
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
Yes
|
Yes
|
The authors highlight that data narration requires deeper analysis to craft narratives around key insights, and goes beyond the scope of existing datasets that focus on data-to-text tasks like basic information transformation. Thus, this justification is used to define a benchmark exclusively tailored for data narration.
|
Factuality is calculated with Named Entity Recognition (NER)-empowered accuracy, described in the paper. Style is measured with BLEU. Insightfulness is measured by human assessments of impact (breadth of claim) and significance (magnitude of changes) on a 5-point Likert scale, and the average of the human ratings is reported.
|
Outputs alone
|
Complete real task (e.g. providing medical advice to real people interactively)
| null |
Composite phenomenon
|
Yes
| null | null |
Data Analysis
| null | null |
['Real task']
|
['Targeted', 'Criterion']
|
['Free response']
|
['Exact match', 'Soft match', 'Human ratings']
|
['Contested']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Complete']
|
['Mean']
|
guptaTempTabQATemporalQuestion2023
|
TempTabQA: Temporal Question Answering for Semi-Structured Tables
|
Include
| null | null |
This paper introduces TempTabQA, a new dataset designed for evaluating temporal question answering capabilities on semi-structured Wikipedia Infobox tables. The dataset includes over 11k QA pairs covering more than 90 domains. Experiments demonstrate that state-of-the-art models, including large language models, significantly underperform compared to humans, indicating the benchmark's difficulty and its potential to drive improvements in temporal reasoning.
|
The main contributions are: (1) Defining the novel task of temporal QA over semi-structured tables. (2) Creating and releasing TempTabQA, a large, human-verified dataset specifically for this task, featuring diverse domains and complex temporal reasoning requirements. (3) Providing detailed analysis of the dataset's temporal reasoning challenges. (4) Benchmarking SOTA models (fine-tuned and LLMs via zero/few-shot prompting) and highlighting their limitations on this task.
|
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Temporal reasoning; Question answering over semi-structured tables with temporal constraints.
|
Yes
|
The ability to answer natural language questions that require understanding and reasoning about temporal aspects (like dates, durations, ordering, implicit time references) based on information contained within semi-structured tables, such as Wikipedia Infoboxes.
|
Subset
|
The benchmark was created to address the lack of focus on complex temporal reasoning in existing table QA datasets and to provide a challenging testbed for improving models' temporal understanding capabilities.
|
Given a semi-structured table (e.g., Wikipedia Infobox) containing temporal information and a natural language question requiring temporal reasoning over the table's content, the task is to generate the correct answer.
|
An instance consists of a semi-structured Wikipedia Infobox table, a temporal question related to the table, and the ground-truth answer (usually a short phrase or number).
|
Questions frequently involve mathematical operations on temporal concepts (e.g., calculating durations, counting events in a period) and require understanding implicit time references. The dataset spans over 90 distinct domains.
|
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Crowd-sourced task examples (e.g. Prolific-created tasks)
|
Total Test set size = 2889 QA pairs (Head Test: 1851, Tail Test: 1038)
|
Yes
|
Table Domain/Category, Data Split (Train/Dev/Head Test/Tail Test).
|
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF), Meteor
|
F1 score, Exact Match (EM), ROUGE-1 (R1), ROUGE-2 (R2), and Meteor (MET)
|
Tables were selected from Wikipedia Infoboxes across >90 categories. MTurk workers drafted initial QA pairs following guidelines to ensure temporal complexity and diversity. Data was subsequently filtered and validated by expert NLP annotators.
|
Mix (multiple authors from industry and academia)
|
Yes
|
Details provided on crowdsourcing via MTurk (batches, payment, qualification, bonuses, quality control).
The validation process using expert annotators is detailed.
The data filtering steps are described.
The table linearization method for models is explained.
The fine-tuning hyperparameters are listed.
|
This work specifically targets the underexplored area of temporal reasoning over semi-structured tables. It demonstrates that even advanced LLMs struggle significantly with the complex temporal and numerical reasoning required, especially compared to human performance. The Head/Tail split provides insights into generalization capabilities.
|
Test, Train, Validation
|
Train set: 7680 QA pairs. Dev set: 885 QA pairs.
|
Answers are brief, often numerical or temporal values, either extracted or calculated from the table data.
|
Simple Mean
|
Yes
|
Results are reported separately for the Head Test and Tail Test sets. Performance is also broken down by question type (Wh-word), reasoning operation, implicit/explicit nature, and answer entity type. Category-specific analysis is also performed.
| null |
Data: https://zenodo.org/records/10022927, Code/Analysis: https://temptabqa.github.io
|
TempTabQA
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
The benchmark is itself realistic
|
Yes
|
Yes
|
Focused on tables with temporal data.
Used MTurk with specific guidelines for complex temporal questions and linguistic diversity, including bias mitigation steps.
Validation by 3 expert annotators per item in dev/test sets achieved high majority agreement (91-93%) and estimated human accuracy (~86%).
Detailed statistical analysis of question complexity, temporal intervals, required operations, and answer types provided.
Non-temporal/trivial questions were filtered.
|
F1, EM, R1, R2, MET
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
|
The task models users querying structured summaries (Infoboxes) for specific information, often requiring temporal understanding.
|
Composite phenomenon
|
Yes
| null | null |
Language Modelling
|
Updating
| null |
['Real task', 'Author-crafted', 'Crowd-sourced']
|
['Convenience', 'Targeted', 'Criterion']
|
['Short free response']
|
['Exact match', 'Soft match', 'Soft match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Representative']
|
['Mean']
|
liangSceMQAScientificCollege2024
|
SceMQA: A Scientific College Entrance Level Multimodal Question Answering Benchmark
|
Include
| null | null |
This paper introduces SceMQA, a multimodal question answering benchmark focused on science subjects (Math, Physics, Chemistry, Biology) at the college entrance level. It aims to fill the difficulty gap between primary/middle school and college-level benchmarks. SceMQA includes multiple-choice and free-response questions, detailed solution explanations, and knowledge point labels. Evaluation of current MLLMs shows performance around 50-60% accuracy, indicating the benchmark's challenge.
|
Key contributions include: (1) Creating the SceMQA benchmark targeting the underrepresented college entrance difficulty level for multimodal science QA. (2) Providing high-quality annotations, including detailed solution explanations and specific knowledge points for most problems. (3) Incorporating problems with varied questions for the same context to robustly assess reasoning. (4) Benchmarking several SOTA MLLMs and analyzing their performance and error types.
|
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Scientific reasoning; Multimodal question answering capability at the college entrance level.
|
Yes
|
The ability to perform scientific reasoning across core subjects (Math, Physics, Chemistry, Biology) at a level typical of college entrance examinations, requiring the integration and comprehension of both textual descriptions and visual information (images, diagrams, graphs) to answer questions.
|
Subset
|
The benchmark specifically aims to address the gap in difficulty level between existing primary/middle school and college-level multimodal science datasets. It provides detailed explanations and knowledge points to facilitate finer-grained analysis of model capabilities.
|
Given a scientific problem presented multimodally (text and potentially an image), answer a related question corresponding to college entrance-level difficulty in Math, Physics, Chemistry, or Biology. The answer format is either multiple-choice or free-response.
|
A problem instance consists of a textual description/question, often accompanied by an essential image (diagram, graph, etc.), potential multiple-choice options, the correct answer, a detailed solution explanation (for >90% of items), and one or more specific knowledge point labels.
|
Covers 4 science subjects at the college entrance level. Uses a mix of multiple-choice (4-5 options) and free-response (numerical, yes/no, fill-in-the-blank) formats.
Some instances feature the same context with different questions.
|
Human exam questions (e.g. GRE questions), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions)
|
1,045 problems total. Test set size is not applicable as no standard split is defined; evaluation appears to use the entire set.
|
Yes
|
Subject (Mathematics, Physics, Chemistry, Biology), Problem Format (Multiple Choice, Free Response), Knowledge Point(s), Solution Explanation.
|
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice, Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall)
|
Accuracy based on exact match. Rule-based normalisation or GPT-4 evaluation was used for free-response answers in the paper's experiments.
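A minimal sketch of how rule-based normalisation plus exact match might be computed; the specific normalisation rules and function names below are assumptions, since the paper only states that rule-based normalisation (or GPT-4 judging) was applied to free-response answers.

```python
import re

def normalize(ans: str) -> str:
    """Lowercase, collapse whitespace, and strip punctuation other than '.', '-', '/'.
    These rules are assumed for illustration, not taken from the paper."""
    ans = ans.strip().lower()
    ans = re.sub(r"[^\w\.\-/ ]", "", ans)
    ans = re.sub(r"\s+", " ", ans)
    return ans

def exact_match_accuracy(predictions, references):
    """Fraction of items whose normalised prediction equals the normalised reference."""
    correct = sum(normalize(p) == normalize(r) for p, r in zip(predictions, references))
    return correct / len(references)

# Example: two of three free-response answers match after normalisation.
print(exact_match_accuracy(["9.8 m/s^2", " B ", "4"], ["9.8 m/s^2", "b", "5"]))  # ~0.667
```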
|
Problems were collected by annotators from public online materials intended for college entrance exams, respecting licenses. Domain experts reviewed problems for difficulty level and verified annotations (explanations, knowledge points).
|
Academia
|
No, no link is provided
|
Data sourced from public online college entrance materials. Adherence to licenses checked. Mathematical expressions converted to LaTeX. Exact-match accuracy used, with GPT-4 as evaluator for free-response questions in experiments. Error analysis involved 2 experts and Kappa score. Detailed prompts planned for release.
|
The benchmark's specific focus on the college entrance difficulty level and its high annotation quality (explanations, knowledge points) are key distinguishing features. The finding that few-shot prompting did not improve performance, and even slightly hurt it relative to zero-shot for GPT-4V/Gemini Pro, is unusual and suggests potential negative interference from the text-only examples used in the prompts.
|
Test
| null |
Answers are either a single letter choice (MCQ) or a short, specific free text answer (number, word, yes/no).
|
Simple Mean
|
Yes
|
Performance reported per subject (Math, Physics, Chemistry, Biology). Results separated for Multiple Choice vs. Free Response formats. Accuracy distribution across specific knowledge points is analyzed (in appendix). Performance compared across zero-shot, few-shot, and text-only settings.
| null | null |
SceMQA (Scientific College Entrance Level Multimodal Question Answering)
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
The benchmark is itself realistic
|
No
|
Yes
|
Problems sourced from college entrance materials. Difficulty level aligned with high school/pre-college, filling a gap. Annotations (explanations, knowledge points) verified by domain experts. Problem selection required image essentiality. Difficulty confirmed by comparing GPT-4 performance to its performance on primary-level (ScienceQA) and college-level (MMMU) benchmarks. Error analysis conducted by human experts.
|
Accuracy (%). Kappa score used for error analysis inter-rater reliability.
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
|
The benchmark uses questions formatted like standardized test items to assess scientific reasoning expected for college admission.
|
Composite phenomenon
|
Yes
| null | null |
General Science
| null | null |
['Human exams', 'Author-crafted']
|
['Convenience', 'Targeted']
|
['Multiple choice', 'Short free response']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Representative']
|
['Mean', 'Other']
|
zhangHumorAIMassive2024
|
Humor in AI: Massive Scale Crowd-Sourced Preferences and Benchmarks for Cartoon Captioning
|
Include
| null | null |
The paper presents the New Yorker Caption Ranking Dataset, a novel multimodal human preference dataset for generating humorous cartoon captions. It also introduces novel evaluation methods for group comparisons between AI- and human-generated cartoon captions, leveraging data from The New Yorker Caption Contest. The benchmark can be used to assess model-generated captions and support preference-based fine-tuning algorithms.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
humor, humorous captions, funny captions
|
No
|
Generating a humorous caption is the task of writing a funny caption for a given piece of material, primarily cartoons.
|
Comprehensive
| null |
The cartoon captioning task is defined as a model generating a funny caption given information about the cartoon. Both multimodal and language-only models are evaluated, where language-only models receive descriptions and object entities of the cartoons. The paper also compares zero-shot models against SFT, RLHF, and DPO fine-tuned models on certain contests within the dataset.
|
A single item consists of the cartoon, its language description (provided by GPT4o-vision), its caption, and its label (funny, somewhat funny, unfunny).
| null |
Real task examples (e.g. GitHub issues), Crowd-sourced task examples (e.g. Prolific-created tasks)
|
284,183,913
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Free response (e.g. summary paragraph)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF), Human ratings (text quality, preference, NOT manual scoring of other metrics), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics)
|
The paper presents a novel evaluation method for group comparison techniques, denoted by Group Overall and Group Best Pick. Human or LLM raters evaluate groups of 10 captions from different sources, and compare them against four groups of past human submissions in the buckets of ranks 1-10, 200-209, 1000-1009, and median. The evaluators then compare the overall funniness of the group against the contest-submitted captions, and pick the funniest caption overall between the funniest captions of the evaluation group and the contest group. GPT4-Turbo-vision, GPT4o-vision, GPT4-Turbo, and GPT4o were used as LLM evaluators. The ranking accuracy and caption win rates of the cartoons are then calculated from the evaluations.
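A minimal sketch of how the group-level comparisons described above could be aggregated into win rates; the record structure and field names are hypothetical, and the judge verdicts are assumed to be precomputed (by human raters or GPT-4-class judges) rather than collected inside this snippet.

```python
from statistics import mean

# Each record is one judged comparison: a group of 10 model captions vs. one bucket
# of past human submissions (ranks 1-10, 200-209, 1000-1009, or the median bucket).
# "overall" and "best_pick" hold the judge's verdict ("model" or "human").
judgements = [
    {"bucket": "1-10",      "overall": "human", "best_pick": "human"},
    {"bucket": "median",    "overall": "model", "best_pick": "model"},
    {"bucket": "1000-1009", "overall": "model", "best_pick": "human"},
]

def win_rate(records, field):
    """Fraction of comparisons in which the judge preferred the model group."""
    return mean(r[field] == "model" for r in records)

print("Group Overall win rate:  ", win_rate(judgements, "overall"))
print("Group Best Pick win rate:", win_rate(judgements, "best_pick"))
```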
|
The dataset is crowdsourced from The New Yorker cartoon caption contest.
|
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train, Validation
|
The fine-tuning experiments use contests 530-890. The test set contains 47 contests, the validation set contains 44 contests, and the train set contains the remaining contests.
| null |
Simple Mean
|
No
| null | null |
https://huggingface.co/datasets/yguooo/newyorker_caption_ranking
|
New Yorker Caption Ranking
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
Yes
|
Yes
|
Authors highlight that writing funny captions requires an understanding of how to appeal to a broad range of humor preferences and the variability of human judgements. Thus, a benchmark for funny caption writing requires comparison to human performance, because the task is a domain where expert humans consistently outperform current AI systems, motivating the creation of the introduced dataset.
|
Simple mean and variance on accuracy are used to assess the overall and best pick comparisons for cartoons, and expectation adjusted distinct N-grams (EAD) and Sentence-BERT embedding cosine similarity (SBERT) are used to assess caption diversity.
|
Outputs alone
|
Complete real task (e.g. providing medical advice to real people interactively)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
NLP
|
Understanding
| null |
['Real task', 'Crowd-sourced']
|
['Convenience', 'Targeted']
|
['Free response']
|
['Exact match', 'Soft match', 'Human ratings', 'LLM-as-a-Judge']
|
['Contested']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Complete']
|
['Mean', 'Std', 'Other']
|
liMEQABenchmarkMultihop2024
|
MEQA: A Benchmark for Multi-hop Event-centric Question Answering with Explanations
|
Include
| null | null |
This paper introduces MEQA, the first benchmark for multi-hop event-centric question answering, designed to evaluate reasoning over both events and entities. Using a novel semi-automatic strategy based on composing event structures from information extraction datasets, it created 2,243 challenging questions. Each question is paired with a multi-step QA-format explanation. Experiments show that MEQA is challenging for state-of-the-art models, including LLMs, which struggle with both answer accuracy and generating faithful explanations.
|
Key contributions include: (1) Creating MEQA, the first benchmark targeting multi-hop event-centric QA. (2) Proposing a novel semi-automatic question/explanation generation method leveraging existing IE datasets. (3) Providing explanations in a QA-pair format for each question. (4) Introducing explanation evaluation metrics: completeness and logical consistency. (5) Benchmarking SOTA models (including LLMs and fine-tuned models) and analysing their performance, revealing significant challenges.
|
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Multi-hop reasoning; Event-centric question answering; Explanation generation; Faithfulness of explanations.
|
Yes
|
The ability to perform multi-step reasoning by integrating information about both entities and events (including their relationships) from a given context to answer a complex question, and the ability to generate a faithful step-by-step explanation (reasoning chain) of this process.
|
Subset
|
The benchmark aims to fill the gap of event-centric reasoning in multi-hop QA datasets, providing a more challenging evaluation scenario than entity-focused benchmarks. It also introduces metrics specifically for evaluating the generated reasoning explanations.
|
Given a document and a multi-hop event-centric question, generate the correct answer and a step-by-step explanation in QA-pair format that outlines the reasoning process.
|
An instance contains a source document (from WikiEvents), a multi-hop question focusing on events, the gold answer, and a gold explanation structured as a sequence of single-hop question-answer pairs representing the reasoning chain.
|
Questions involve diverse reasoning patterns like event relations, entity bridging, listing/counting, and comparison, often requiring 2-4 hops. Explanations are structured as QA chains. The dataset creation process includes steps to mitigate reasoning shortcuts.
|
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Crowd-sourced task examples (e.g. Prolific-created tasks), Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template)
|
Test set: 287 questions.
|
Yes
|
Source Document ID, Question Strategy Type, Explanation (Sequence of QA pairs), Event Structure information (from WikiEvents: triggers, arguments, roles, event types).
|
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number), Free response (e.g. summary paragraph)
|
Exact Match (accuracy, F1, precision, recall), Explanation Completeness P/R/F1, Explanation Logical Consistency %
|
Answer: Precision, Recall, F1 (using HotpotQA script). Explanation: Completeness (P/R/F1 vs gold steps using semantic matching) and Logical Consistency (% of steps deemed consistent by an LLM verifier).
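A minimal sketch of the explanation metrics, assuming a boolean step matcher; token-overlap Jaccard similarity is used here as a stand-in for the paper's semantic matching, and the LLM verifier verdicts for logical consistency are taken as given rather than produced by this snippet.

```python
def steps_match(pred_step: str, gold_step: str, threshold: float = 0.5) -> bool:
    """Stand-in matcher: token-overlap Jaccard similarity above a threshold.
    The paper uses semantic matching; this is only an assumed placeholder."""
    p, g = set(pred_step.lower().split()), set(gold_step.lower().split())
    return len(p & g) / len(p | g) >= threshold if p | g else False

def explanation_completeness(pred_steps, gold_steps):
    """Precision = matched predicted steps / all predicted; recall = matched gold / all gold."""
    matched_pred = sum(any(steps_match(p, g) for g in gold_steps) for p in pred_steps)
    matched_gold = sum(any(steps_match(p, g) for p in pred_steps) for g in gold_steps)
    precision = matched_pred / len(pred_steps) if pred_steps else 0.0
    recall = matched_gold / len(gold_steps) if gold_steps else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def logical_consistency(verdicts):
    """Share (%) of explanation steps an LLM verifier marked as logically consistent."""
    return 100.0 * sum(verdicts) / len(verdicts) if verdicts else 0.0
```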
|
Started with event structures from WikiEvents. Composed event reasoning chains based on shared entities or relations. Filtered chains to avoid shortcuts. Generated synthetic QA pairs/explanations from chains using templates and schema info. Human annotators then curated (rephrased, corrected) these synthetic outputs and verified/completed answers.
|
Academia
|
Yes
|
Uses WikiEvents dataset as source. Primarily uses ChatGPT (GPT-3.5-turbo-1106) for experiments. Details the explanation evaluation metrics and their calculation (including prompts). Details the crowd-sourcing setup (student workers, qualification test, payment). Discusses potential data leakage. Provides annotation interface examples.
|
This benchmark uniquely tackles event-centric multi-hop reasoning and explicitly evaluates the generated explanations using novel metrics (completeness, logical consistency). The semi-automatic generation process leveraging IE datasets is a notable methodological contribution.
|
Test, Train, Validation
|
Train set: 1,674 questions. Dev set: 282 questions.
|
Models need to output the final short answer. They are also evaluated on generating an explanation, which can be a sequence of QA pairs (CoT-QA) or freeform text (CoT-Freeform).
|
Simple Mean
|
Yes
|
Performance broken down by question strategy type. Comparison of models with/without additional structured information.
| null |
https://github.com/du-nlp-lab/MEQA
|
MEQA (Multi-hop Event-centric Question Answering)
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
Semi-automatic generation leverages existing event annotations. The reasoning shortcut problem is specifically addressed during chain filtering. Human annotators curate outputs. Explanation metrics (Completeness and logical Consistency) were validated with a human correlation study (0.693 and 0.601, respectively).
|
Precision, Recall, F1 score, Completeness (P/R/F1), Logical Consistency (%).
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
|
The task simulates deriving answers to complex event-related questions by chaining simpler inferences based on document content.
|
Composite phenomenon
|
Yes
| null | null |
Reasoning
| null | null |
['Real task', 'Author-crafted', 'Crowd-sourced', 'Another benchmark', 'Procedurally-generated']
|
['Convenience', 'Targeted', 'Criterion']
|
['Short free response', 'Free response']
|
['Exact match', 'Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Comparison made']
|
['Yes']
|
['Representative']
|
['Mean']
|
hoWikiWhyAnsweringExplaining2023
|
WikiWhy: Answering and Explaining Cause-and-Effect Questions
|
Include
| null | null |
This paper introduces WikiWhy, a question-answering dataset focused on evaluating LLM reasoning by requiring models to answer "why" questions and provide explicit natural language rationales explaining cause-and-effect relationships. Grounded in Wikipedia facts across 11 diverse topics, the dataset contains over 9,000 question-answer-rationale triples. Experiments with GPT-3 baselines show low correctness (38.7% human eval) for end-to-end answer and explanation generation, indicating significant room for improvement.
|
Proposes the task of explaining cause-effect relations via natural language rationales as a benchmark for LLM reasoning. Creates WikiWhy, a large dataset (>9k examples) for this task, grounded in Wikipedia and spanning 11 topics. Establishes baseline results using GPT-2 and GPT-3, highlighting the task's difficulty. Introduces and validates automatic evaluation metrics for generated explanations using human correlation studies.
|
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Reasoning capability; Explanation generation; Cause-and-effect understanding; Commonsense reasoning.
|
Yes
|
The ability to bridge the gap between a stated cause and its effect by generating a coherent natural language rationale (a set or sequence of supporting statements) that demonstrates an understanding of the underlying mechanism, often relying on implicit commonsense knowledge.
|
Subset
|
Aims to evaluate implicit commonsense knowledge within LLMs, which is often needed to explain why a cause leads to an effect, moving beyond factoid retrieval. Employs a generative task format to test recall rather than recognition. Covers a broad range of topics (11) for generality.
|
Given either a cause-effect pair (EO task) or a "why" question about an effect (A&E task), generate a natural language rationale (set or sequence of sentences) explaining how the cause leads to the effect. For the A&E task, also generate the answer (which is the cause).
|
Each entry includes: Cause text, Effect text, "Why" Question text, Answer text (same as Cause), Rationale (one or more sentences), Source Wikipedia passage, Source Article URL, Topic Category.
|
Rationales average 1.5 steps/sentences, but can be longer (36% have 2+ steps). Data derived from Wikipedia "Good Articles". Questions/cause/effect intended to be understandable without the original passage. Two explanation structures noted: sequential chain and rationale set.
|
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Crowd-sourced task examples (e.g. Prolific-created tasks)
|
Total: 9,406 examples. Test set: 1,005 examples.
|
Yes
|
Cause text, Effect text, Source Passage, Source Article URL, Topic Category (11 types).
|
Random sample (creators defined a task space and sampled from it), Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Short free response (e.g. single word or number), Free response (e.g. summary paragraph)
|
n-gram (BLEU, ROUGE, chrF), Human ratings (text quality, preference, NOT manual scoring of other metrics), Unordered/Ordered BERT-F1 using DeBERTa-based BERTScore
|
Automatic: Unordered/Ordered BERT-F1 (using DeBERTa-xlarge-mnli with threshold 0.64), ROUGE-L F1. Human Evaluation: Binary ratings for Correctness, Concision, Fluency, Validity; Win/Tie/Lose comparison vs gold rationale.
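A rough sketch of the unordered BERT-F1 computation using the bert-score package with the DeBERTa model and 0.64 threshold named above; the sentence pairing and aggregation details are assumptions rather than the authors' exact script, and the model download is large.

```python
from itertools import product
from bert_score import score  # pip install bert-score

def unordered_bert_f1(pred_sents, gold_sents, threshold=0.64,
                      model_type="microsoft/deberta-xlarge-mnli"):
    """Assumed reading of WikiWhy's unordered matching: a predicted sentence counts as
    correct if its BERTScore-F1 against some gold sentence clears the threshold."""
    if not pred_sents or not gold_sents:
        return 0.0
    pairs = list(product(pred_sents, gold_sents))
    cands = [p for p, _ in pairs]
    refs = [g for _, g in pairs]
    _, _, f1 = score(cands, refs, model_type=model_type, verbose=False)
    hits = {pair for pair, s in zip(pairs, f1.tolist()) if s >= threshold}
    precision = sum(any((p, g) in hits for g in gold_sents) for p in pred_sents) / len(pred_sents)
    recall = sum(any((p, g) in hits for p in pred_sents) for g in gold_sents) / len(gold_sents)
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```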
|
Data originated from Wikipedia "Good Articles". Passages are filtered using causal keywords. MTurk workers performed cause-effect extraction and QA synthesis in Stage 1, and rationale generation in Stage 2. Multi-stage validation and quality control were applied.
|
Academia
|
Yes
|
Details crowdsourcing setup (MTurk, worker quals, pay rate, interfaces). Fine-tuning details for GPT-2 provided. GPT-3 experiments use DaVinci-002 via API. Details evaluation metrics including BERTScore setup. Human evaluation criteria detailed.
|
Unique focus on generating natural language explanations for "why" questions about cause-effect pairs derived from text, aiming to probe commonsense reasoning. Fully generative task formulation chosen deliberately. The correlation between human ratings of similarity and correctness suggests reference-based metrics are meaningful proxies for explanation quality.
|
Test, Train, Validation
|
Train set: 7,397 examples. Dev set: 1,004 examples. Total rationale elements: 14,238.
|
Models generate a natural language explanation (rationale) consisting of one or more sentences. In the A&E task, they also output the short answer (the cause).
|
Simple Mean
|
Yes
|
Results were analysed separately for Task 2 (EO) and Task 3 (A&E, with single-model vs. pipeline variants). Comparison across models (GPT-2 vs GPT-3) and decoding temperatures. Performance reported per topic category.
| null |
https://github.com/matt-seb-ho/WikiWhy
|
WikiWhy
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
Yes
|
Used Wikipedia "Good Articles". Multi-stage MTurk process with worker qualification, validation, and manual review by panellists. Analysed rationale length and structure. Validated automatic metrics via correlation with human judgments (r=0.82 between human similarity/correctness; r=0.35 between ordered F1/human similarity).
|
BERT-F1, ROUGE-L F1, Human Judgement Proportions (%), Pearson Correlation (r) for metric validation.
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
|
The task requires generating human-like explanations for causal links found in encyclopedia text, testing a fundamental reasoning skill.
|
Composite phenomenon
|
Yes
| null | null |
Reasoning
|
Commonsense
| null |
['Real task', 'Author-crafted', 'Crowd-sourced']
|
['Random', 'Convenience', 'Targeted']
|
['Short free response', 'Free response']
|
['Soft match', 'Human ratings', 'LLM-as-a-Judge']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative']
|
['Mean']
|
sunRevealingPersonalityTraits2024
|
Revealing Personality Traits: A New Benchmark Dataset for Explainable Personality Recognition on Dialogues
|
Include
| null | null |
This paper introduces Explainable Personality Recognition, a novel task requiring models to identify Big-Five personality traits from dialogues and provide supporting evidence. It proposes the Chain-of-Personality-Evidence (COPE) framework, reasoning from dialogue context to short-term states to long-term traits. Based on COPE, the PersonalityEvd dataset is constructed from dialogues, featuring annotated state/trait labels and detailed reasoning evidence. Experiments with LLMs show the task is challenging.
|
Key contributions include: (1) Proposing the novel task of Explainable Personality Recognition. (2) Developing the COPE framework grounded in personality theory for structured explanation. (3) Creating the PersonalityEvd dataset with dialogue-level state and speaker-level trait annotations, including utterance/dialogue IDs and natural language reasoning evidence. (4) Defining two sub-tasks (EPR-S and EPR-T) and providing LLM baselines. (5) Demonstrating the task's difficulty and offering insights for future work.
|
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Personality recognition (Big-Five model); Explainable AI; Reasoning about personality states and traits from dialogue evidence.
|
Yes
|
Recognising Big-Five personality traits (Openness, Conscientiousness, Extraversion, Agreeableness, Neuroticism) of a speaker based on aggregating evidence of short-term personality states (patterns of thoughts, feelings, behaviours) observed across multiple dialogues. The task also requires generating an explanation tracing evidence from specific utterances (for states) and dialogues (for traits).
|
Subset
|
Addresses the lack of interpretability in existing automatic personality recognition systems by requiring evidence-based explanations grounded in psychological theory (state-trait distinction).
|
Explainable Personality Recognition, comprising two sub-tasks: (1) EPR-S: Given a dialogue, target speaker, and Big-Five dimension, predict the speaker's personality state (high/low/uncertain) and provide evidence (relevant utterance IDs and natural language reasoning). (2) EPR-T: Given multiple dialogues for a speaker and a Big-Five dimension, predict the speaker's personality trait (high/low/uncertain) and provide evidence (relevant dialogue IDs and faceted natural language reasoning).
|
An EPR-S instance includes a dialogue, target speaker, and target dimension, mapped to a state label and state evidence (utterance IDs + reasoning text). An EPR-T instance includes multiple dialogues for a speaker and a target dimension, mapped to a trait label and trait evidence (dialogue IDs + faceted reasoning text).
|
Uses the Big-Five model structured according to the BFI-2 scale (3 facets per dimension). Labels are High/Low/Uncertain. Evidence structure is specific (utterance/dialogue IDs + template-based natural language reasoning).
|
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Crowd-sourced task examples (e.g. Prolific-created tasks), Modified from another benchmark (e.g. translation into another language), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
Total: 72 speakers, ~1924 dialogues. Test set size (state-level): ~14 speakers / ~370 dialogues. Test set size (trait-level): 24 speakers per fold.
|
Yes
|
Speaker ID, Dialogue ID, Utterance IDs, Big-Five Dimension Name, Facet Name, State Label, Trait Label, Evidence Utterance IDs, State Reasoning Text, Evidence Dialogue IDs, Trait Reasoning Text.
|
Random sample (creators defined a task space and sampled from it), Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number), Free response (e.g. summary paragraph), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF), Human ratings (text quality, preference, NOT manual scoring of other metrics), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics), Binary F1 for Evidence IDs
|
Label Accuracy (average over 5 dimensions). Evidence ID F1 score. Reasoning Text Quality: BERTScore F1, Claude-3 score (avg 1-5), GPT-4 score (avg 1-5). Human Evaluation: Fluency, Coherence, Plausibility (avg 1-5).
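A minimal sketch of the evidence-ID binary F1 and the label accuracy averaged over Big-Five dimensions; the data layout is assumed for illustration only.

```python
def evidence_id_f1(pred_ids, gold_ids):
    """Binary F1 over predicted vs. gold evidence utterance/dialogue IDs."""
    pred, gold = set(pred_ids), set(gold_ids)
    if not pred or not gold:
        return 0.0
    precision = len(pred & gold) / len(pred)
    recall = len(pred & gold) / len(gold)
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def label_accuracy(per_dimension_results):
    """Accuracy averaged over the Big-Five dimensions present in the results.
    `per_dimension_results` maps a dimension name to a list of (pred, gold) labels."""
    per_dim = [
        sum(p == g for p, g in pairs) / len(pairs)
        for pairs in per_dimension_results.values()
    ]
    return sum(per_dim) / len(per_dim)

example = {
    "Openness": [("high", "high"), ("low", "uncertain")],
    "Extraversion": [("low", "low")],
}
print(label_accuracy(example))               # (0.5 + 1.0) / 2 = 0.75
print(evidence_id_f1([1, 3, 5], [1, 2, 3]))  # P = R = 2/3, F1 ~ 0.667
```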
|
Dialogues from CPED corpus (Chinese TV series), translated to English. State labels/evidence pre-annotated by GPT-4, then manually corrected and validated by psychology students/experts. Trait labels/evidence annotated by psychology students via consensus, checked by authors.
|
Academia and Independent Researcher
|
Yes
|
Dataset sourced from CPED (Chinese TV dialogues), translated to English. Uses Big Five Inventory-2 (BFI-2) scale. Details GPT-4 pre-annotation prompt. Details human annotation process (training, guidelines, quality checks). Details LoRA fine-tuning parameters. Provides LLM details used for evaluation (Claude, GPT-4). Mentions fair pay for annotators.
|
The paper's strength lies in its psychologically grounded framework (COPE) and the two-level annotation (state and trait) with explicit evidence linking. The results highlight the significant challenge LLMs face in not just predicting personality but justifying it with evidence from dialogue history.
|
Test, Train, Validation
|
State-level: Train ~50 speakers / ~1347 dialogues; Valid ~8 speakers / ~215 dialogues. Trait-level: 48 speakers for train/val per fold.
|
Output requires the personality label (High/Low/Uncertain) and the structured evidence (specific utterance/dialogue IDs plus natural language reasoning text).
|
Simple Mean
|
Yes
|
Results reported for state (EPR-S) vs. trait (EPR-T) tasks. Accuracy reported per Big-Five dimension. Ablation studies analyze impact of evidence and state analysis on trait prediction.
| null |
https://github.com/Lei-Sun-RUC/PersonalityEvd
|
PersonalityEvd
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
Yes
|
COPE framework based on personality theories. Annotators were psychology students/graduates. Multi-stage annotation included GPT-4 pre-annotation, human correction, expert inspection (states), and 3 annotators + consensus (traits). Human evaluation confirmed high quality of ground truth explanations (avg scores >4.3/5).
|
Accuracy, F1 score, BERTScore F1, Average score (1-5 scale).
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
|
Models how personality traits might be inferred and explained based on observed conversational behaviours over time.
|
Composite phenomenon
|
Yes
| null | null |
Psychology
| null | null |
['Real task', 'Author-crafted', 'Crowd-sourced', 'Another benchmark', 'LLM-generated']
|
['Random', 'Convenience', 'Targeted', 'Criterion']
|
['Short free response', 'Free response', 'Structured']
|
['Exact match', 'Soft match', 'Human ratings', 'LLM-as-a-Judge', 'Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative']
|
['Mean']
|
zhangMultimodalSelfinstructSynthetic2024
|
Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Model
|
Include
| null | null |
The authors devise a synthetic data generation pipeline to generate a visual QA dataset on abstract images, like charts, dashboards, and 2D layouts. They find that LMMs struggle on basic QA tasks, like reading analog clocks. However, finetuning on their synthetic dataset yields minor improvements, including some transferred improvements to related benchmarks like ChartQA and MathVista.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Reasoning with abstract images
|
Yes
|
"these capabilities, i.e., perceiving abstract images and reasoning about visual elements, are essential for LMMs if we deploy an LMM- driven agent in our daily lives" (19229)
|
Subset
| null |
The model is given an abstract image, like a chart, and a question about it. Answering the question may require more than just reporting visual elements, e.g. route-planning, comparing features, or adding and subtracting tabular figures.
|
An image, a question about the image, an answer (or answers), and a rationale for the answer. All of the text is generated by GPT-4, and the image is produced with Python visualisation code generated by GPT-4.
|
There are eight tasks, assessing very different capabilities (not just image processing but planning, mathematical reasoning, abstract pattern matching like ARC-AGI, ...)
|
LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
11,193
|
Yes
|
task, answer rationale
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice, Short free response (e.g. single word or number), Free response (e.g. summary paragraph)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF), Landmark Coverage Rate (LCR(%)) for route-planning
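The Landmark Coverage Rate is only named here; the snippet below encodes one plausible reading (the percentage of gold route landmarks mentioned in the model's answer), which is an assumption rather than the authors' definition.

```python
def landmark_coverage_rate(response: str, gold_landmarks) -> float:
    """Assumed reading of LCR(%): percentage of gold landmarks mentioned in the response."""
    text = response.lower()
    covered = sum(landmark.lower() in text for landmark in gold_landmarks)
    return 100.0 * covered / len(gold_landmarks) if gold_landmarks else 0.0

print(landmark_coverage_rate(
    "Go past the library, turn left at the fountain, then stop at the cafe.",
    ["library", "fountain", "bridge", "cafe"],
))  # 75.0
```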
| null | null |
Academia
|
Yes
| null | null |
Test, Train
|
62,476
|
Route-planning is evaluated as a structured response, but the model is not instructed to adhere to any format. The model answers in free response, and these answers are post-processed (presumably by an LLM, though this is unclear).
|
Averaged, but unclear if weighted
|
Yes
|
Subtask (chart, table, map, etc.) plus results after finetuning on various subtasks
| null |
https://huggingface.co/datasets/zwq2018/Multi-modal-Self-instruct
|
Multi-modal Self-Instruct
|
Contested
|
Yes
|
Yes
|
No
|
Yes
|
Yes
|
No
|
Yes
|
No
| null |
simple mean/sum, percentage point improvements
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null | null |
Reasoning
|
Compositional
| null |
['LLM-generated']
|
['Targeted']
|
['Multiple choice', 'Short free response', 'Free response']
|
['Exact match', 'Soft match', 'Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
|
['Mean']
|
maExaminationCompositionalityLarge2024
|
An Examination of the Compositionality of Large Generative Vision-Language Models
|
Include
| null | null |
The authors explore the different failure modes across evaluation methods for image composition understanding in multimodal models. They find via ablation that a popular metric, VisualGPTScore, is biased towards syntactical correctness in the caption over image contents. They compose a new benchmark, SADE, by combining debiased subsets of existing composition understanding benchmarks.
|
Exemplary paper for construct validity—the contribution consists in debiasing the evaluation of a popular task to better match the desired phenomenon.
|
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
understanding multimodal compositionality
|
No
| null |
Comprehensive
| null |
Given an image and 2-3 reference sentences, rank the appropriate sentence as the most likely image caption.
|
An image, multiple candidate captions for the image in English, and an index for which caption is correct.
| null |
Modified from another benchmark (e.g. translation into another language)
| null |
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Free response (e.g. summary paragraph), Log-likelihood of a given free response
|
Distribution (perplexity, calibration, correlation), recall@1
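A minimal sketch of recall@1 when candidate captions are ranked by the model's log-likelihood (which is why model access is required); the input layout is assumed for illustration.

```python
def recall_at_1(loglik_per_item, gold_indices):
    """Each item is a list of per-caption log-likelihoods from the GVLM; the prediction
    is the highest-scoring caption. Returns the fraction of items ranked correctly."""
    hits = sum(
        max(range(len(scores)), key=scores.__getitem__) == gold
        for scores, gold in zip(loglik_per_item, gold_indices)
    )
    return hits / len(gold_indices)

# Two items with 2 and 3 candidate captions; gold captions are index 0 and index 2.
print(recall_at_1([[-5.1, -7.3], [-9.0, -4.2, -4.0]], [0, 2]))  # 1.0
```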
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Conceptual task subsets: "Comprehensive, Relation, Attribute, Atomic, Negate, Content." Partially corresponding to the existing benchmarks that the authors sample (e.g. "Comprehensive" is just Winoground's group score, VL-CheckList and ARO both have "Relation" and "Attribute" subsets, etc.).
| null |
https://github.com/TeleeMa/SADE
|
SyntActically DE-biased benchmark (SADE)
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
Yes
|
"we identify the syntactical bias that exists in current datasets for GVLMs, and define the bias with SyntaxBias Score quantitatively. We then pro- pose a SADE benchmark that mitigates the syntacti- cal bias and provides a better content understanding evaluation for GVLMs" (700)
|
simple mean/sum
|
Model access required (e.g. logits)
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null | null |
Reasoning
|
Compositional
| null |
['Another benchmark']
|
['Convenience', 'Targeted']
|
['Free response', 'Logits']
|
['Distribution', 'Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Mean']
|
huangMetaLogicLogicalReasoning2022
|
MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure
|
Include
| null | null |
This paper proposes MetaLogic, a benchmark designed to evaluate models' logical reasoning by generating detailed explanations called "logic metagraphs". These metagraphs extend typical reasoning chains by including rebuttal conditions, internal logical formulae based on modal logic, and degrees of certainty for each statement. Based on 1,000 logical passages from the ReClor dataset, MetaLogic challenges models to produce these fine-grained structures. Experimental results show current models struggle significantly with this task.
|
Key contributions include: (1) Proposing the "logic metagraph", a novel, fine-grained explanation structure for logical reasoning, incorporating rebuttal, internal formulae, and certainty, inspired by cognitive science and logic theories. (2) Creating the MetaLogic dataset annotated with these structures using passages from ReClor. (3) Defining the task of generating logic metagraphs. (4) Benchmarking sequence-to-sequence models and demonstrating the significant challenge posed by the task.
|
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Logical reasoning; Generation of fine-grained logical explanations.
|
Yes
|
The ability to parse a natural language passage containing a logical argument and represent its underlying structure as a "logic metagraph". This involves identifying statements (nodes), their inferential relationships (support/rebut edges), the internal logical composition of each statement (formulae using variables and modal/logical operators), and the certainty level associated with each statement.
|
Comprehensive
|
To create a benchmark that captures finer details of logical reasoning present in real-world arguments (like rebuttal and certainty) compared to previous simpler chain-of-reasoning datasets, leveraging established logical and argumentation theories.
|
Logic Metagraph Generation: Given a logical passage (pre-segmented into statements and atomic sentences), generate the full logic metagraph including the support/rebut relationships between statements, the internal logical formula (using modal logic) for each statement, and the degree of certainty for each statement.
|
An instance consists of a logical passage from the ReClor dataset (which includes context, question, and options, though the task focuses on the passage logic) with pre-identified statements (nodes) and atomic clauses (variables). The target output is the fully annotated logic metagraph for the passage detailing node relationships (support/rebut), internal node formulae, and node certainty levels.
|
The task involves generating a complex structured output. Internal node formulae use propositional logic extended with modal operators (necessity □, possibility ◇) based on the S5 system. Certainty is mapped to 5 discrete levels derived from modal logic.
|
Human exam questions (e.g. GRE questions), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Crowd-sourced task examples (e.g. Prolific-created tasks), Modified from another benchmark (e.g. translation into another language)
|
Total: 1,000 passages/metagraphs. Test set: 200 passages/metagraphs.
|
Yes
|
Source Passage Text, Segmented Statements (Nodes), Segmented Atomic Sentences (Variables), Meta Edges (Support/Rebut links), Node Formulae (Modal logic representation), Node Certainty Degree (5 levels).
|
Random sample (creators defined a task space and sampled from it), Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Free response (e.g. summary paragraph), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall), Macro-F1 for the multi-class certainty prediction
|
Component-wise F1 and AllCorrect scores for Meta Structure (Nodes, Steps), Formulae. Accuracy, AllCorrect, and Macro-F1 for Certainty. Overall AllCorrect score for the entire metagraph.
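A minimal sketch of the component-wise F1 and AllCorrect scores, assuming each metagraph component (nodes, steps, formulae) can be decomposed into a set of comparable tuples; the decomposition shown is illustrative, not the authors' exact scoring code.

```python
def set_f1(pred, gold):
    """F1 between predicted and gold sets of graph components (nodes, steps, formulae)."""
    pred, gold = set(pred), set(gold)
    if not pred or not gold:
        return 0.0
    p = len(pred & gold) / len(pred)
    r = len(pred & gold) / len(gold)
    return 2 * p * r / (p + r) if p + r else 0.0

def all_correct(pred, gold):
    """1 if the predicted component set matches the gold set exactly, else 0."""
    return int(set(pred) == set(gold))

# Hypothetical meta-structure edges: (source node, relation, target node).
gold_steps = {("n1", "support", "n3"), ("n2", "rebut", "n3")}
pred_steps = {("n1", "support", "n3"), ("n2", "support", "n3")}
print(set_f1(pred_steps, gold_steps))       # 0.5
print(all_correct(pred_steps, gold_steps))  # 0
```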
|
Passages from the ReClor dataset (GMAT/LSAT problems). Initial segmentation automatic. Crowdworkers annotated the meta-graph structure (support/rebut links) and internal binary logical relations. Unary operators and certainty labels were derived semi-automatically using keyword heuristics and dependency parsing, followed by checks.
|
Mix (multiple authors from industry and academia)
|
Yes
|
Source data from ReClor dataset. Details the semi-automatic annotation of unary operators using dependency parsing and keyword indicators. Provides details on model implementations (T5, MetGen), training setup (batch size, epochs, optimizer, GPUs), and evaluation metrics. Error analysis categories are defined. Annotation interface shown.
|
The standout feature is the highly detailed "logic metagraph" structure, aiming to capture logical nuances like rebuttal and certainty often missed in simpler explanation formats. The results strongly indicate that generating such complex structured logical representations from text remains a major hurdle for current generative models.
|
Test, Train, Validation
|
Train: 600 passages. Dev: 200 passages. Total meta nodes: 3,609. Total formulae: 1,500.
|
Models output a linearised text sequence encoding the metagraph's components: meta-structure edges, node formulae (using specific tokens for operators), and node certainties, separated by delimiters.
|
Simple Mean
|
Yes
|
Performance reported per component (Meta Structure, Formula, Certainty). Analysis by inference type (Support vs Rebut) and per logical operator. Analysis across varying training data sizes. Error analysis categorizes mistakes within each component.
| null |
https://github.com/tencent-ailab/MetaLogic
|
MetaLogic
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
Yes
|
Logic metagraph format based on Toulmin model and S5 modal logic. Annotations performed by trained crowdworkers with quality checks (IAA reported as high/very high). Detailed statistics on graph complexity, formulae, and certainty distribution provided. Error analysis identified specific challenges for models.
|
F1 score, AllCorrect (Exact Match), Accuracy, Macro F1
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
|
Focuses on extracting and formalizing the logical structure inherent in complex argumentative texts, a core component of analytical reasoning.
|
Composite phenomenon
|
Yes
| null | null |
Reasoning
|
Logical
| null |
['Human exams', 'Author-crafted', 'Crowd-sourced', 'Another benchmark']
|
['Random', 'Convenience', 'Targeted', 'Criterion']
|
['Free response', 'Structured']
|
['Exact match', 'Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative']
|
['Mean']
|
leeQASAAdvancedQuestion2023
|
QASA: Advanced Question Answering on Scientific Articles
|
Include
| null | null |
This paper introduces QASA, a benchmark for advanced question answering in scientific articles, motivated by the dual process theory of human reasoning. It proposes a three-stage approach (associative selection, evidential rationale-generation, systematic composition) to tackle "full-stack reasoning". The dataset contains 1798 QA pairs on AI/ML papers, featuring diverse question types (surface, testing, deep) and requiring answers composed from multiple evidential rationales. Experiments show the proposed approach outperforms InstructGPT, emphasising the importance of the rationale generation step.
|
Introduces the QASA benchmark for full-stack reasoning QA on scientific articles. Develops a question schema based on cognitive reasoning levels (surface, testing, deep) via a think-aloud study. Creates a dataset requiring composition of long-form answers from multiple evidential rationales. Proposes and evaluates a three-stage computational approach (associative selection, rationale-generation, composition) mimicking dual process theory. Demonstrates the effectiveness of this approach and the importance of explicit rationale generation.
|
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Full-stack reasoning (associative thinking + logical reasoning); Advanced question answering; Rationale generation; Answer composition from multiple evidences.
|
Yes
|
The ability to answer complex questions about scientific articles by first selecting relevant paragraphs (associative selection), then extracting or generating key rationale points from each selected paragraph (evidential rationale-generation), and finally synthesizing these potentially disparate rationales into a single, comprehensive, non-redundant answer (systematic composition).
|
Subset
|
Inspired by cognitive science (dual process theory) to create a QA task that better reflects complex human reasoning compared to factoid or simple multi-hop QA. Aims to evaluate the ability to synthesise answers from multiple pieces of evidence spread across a document.
|
Given a question about a scientific paper and the full paper text (as paragraphs), perform full-stack QA: select evidence paragraphs, generate an evidential rationale for each, and compose these into a final answer. The primary evaluation focuses on the quality of the final composed answer.
|
An instance includes a question (with type label: surface/testing/deep), the source scientific paper, a set of gold evidence paragraph identifiers, a set of gold evidential rationales (text snippets corresponding to each evidence paragraph), and a final composed gold answer (long-form text).
|
Questions are diverse (surface, testing, deep) and collected from both readers and authors of AI/ML papers. Answers are often long-form and require composing information from an average of 1.67 (max 9) rationales.
|
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Crowd-sourced task examples (e.g. Prolific-created tasks)
|
1,798 QA pairs total. Evaluation appears to be performed on the full dataset.
|
Yes
|
Question Type (Surface/Testing/Deep + sub-types), Source Paper ID, Evidence Paragraph IDs, Evidential Rationale Text, Composition Required (True/False).
|
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Free response (e.g. summary paragraph)
|
n-gram (BLEU, ROUGE, chrF), Human ratings (text quality, preference, NOT manual scoring of other metrics)
|
Associative Selection: Precision, Recall, F1. Rationale Generation & Answer Composition: ROUGE-1, ROUGE-2, ROUGE-L (F1 scores). Human Evaluation: Pairwise win/tie/lose rates based on Groundedness, Completeness, Specificity, and Fluency.
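A minimal sketch of the ROUGE-based scoring for rationale generation and answer composition, using Google's rouge-score package; the paper does not state its exact ROUGE tooling, so the choice of package and stemming setting is an assumption.

```python
from rouge_score import rouge_scorer  # pip install rouge-score

def rouge_f1(prediction: str, reference: str):
    """ROUGE-1/2/L F1 between a composed answer and the gold answer."""
    scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
    scores = scorer.score(reference, prediction)  # signature is score(target, prediction)
    return {name: s.fmeasure for name, s in scores.items()}

print(rouge_f1(
    "The method improves recall by retrieving multiple evidence paragraphs.",
    "Recall improves because multiple evidence paragraphs are retrieved and composed.",
))
```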
|
Papers sourced from S2ORC and arXiv (CS.AI domain). Questions collected from AI/ML graduate students, freelancers, and paper authors following specific guidelines and question schema. Answers, including evidence selection, rationale writing, and final composition, were annotated by qualified experts.
|
Mix (multiple authors from industry and academia)
|
Yes
|
Details annotator recruitment (Upwork, professional networks), qualification, and background. Uses OpenAI embeddings for retrieval. Details models used for experiments. Describes construction of training data using public datasets and distillation from InstructGPT. Details human evaluation procedure and criteria.
|
The explicit modelling of the three reasoning stages (selection, rationale generation, composition) and the empirical validation of the importance of the rationale generation step are key takeaways. The dataset's focus on scientific articles from AI/ML makes it specialised but highly relevant for evaluating models within this domain.
|
Test
| null |
The key output is the final composed answer, which is a long-form text passage synthesized from intermediate rationale texts.
|
Simple Mean
|
Yes
|
Performance reported for each subtask (selection, rationale gen, composition) and the full-stack QA. Ablation study on training data sources. Analysis based on question types and compositionality requirements.
| null |
https://github.com/lgresearch/QASA
|
QASA
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
The benchmark is itself realistic
|
No
|
Yes
|
Think-aloud study informed question taxonomy. Annotators were domain experts. Separate reader/author sessions enhanced question diversity. Detailed annotation process involved evidence selection, rationale generation, and composition steps. Manual checks confirmed high answer correctness (90%) and groundedness (87%) on a sample.
|
Precision, Recall, F1, ROUGE-1, ROUGE-2, ROUGE-L, Human evaluation win/tie/lose rates (%).
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
|
Simulates the process of deeply understanding a scientific paper to answer complex questions that go beyond simple fact retrieval.
|
Composite phenomenon
|
Yes
| null | null |
Reasoning
|
Logical
| null |
['Real task', 'Author-crafted', 'Crowd-sourced']
|
['Convenience', 'Targeted', 'Criterion']
|
['Free response']
|
['Soft match', 'Human ratings']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Representative']
|
['Mean']
|
mirzaeeSPARTQATextualQuestion2021
|
SPARTQA: A Textual Question Answering Benchmark for Spatial Reasoning
|
Include
| null | null |
This paper introduces SPARTQA, a textual QA benchmark designed to evaluate spatial reasoning in language models, addressing limitations of prior datasets like bAbI Task 17. It includes SPARTQA-HUMAN, a set annotated by humans with more natural language and complex scenes (based on NLVR images), and SPARTQA-AUTO, a larger, automatically generated dataset using novel context-free grammar and spatial reasoning rules. Experiments show LMs perform poorly on SPARTQA-HUMAN but improve significantly after further pretraining on SPARTQA-AUTO.
|
Key contributions include: (1) SPARTQA-HUMAN, a human-annotated benchmark for textual spatial reasoning exceeding the complexity of bAbI. (2) A novel automatic data generation method combining CFGs and spatial logic rules to create the large SPARTQA-AUTO dataset for distant supervision. (3) Demonstrating that pretraining on SPARTQA-AUTO significantly boosts LM performance on SPARTQA-HUMAN and generalizes to improve performance on external datasets (bAbI, boolQ). (4) Providing diverse question types (FR, FB, CO, YN) for detailed spatial reasoning analysis.
|
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Spatial reasoning on natural language text.
|
Yes
|
The ability to construct mental representations of spatial scenes based on natural language descriptions (stories) and use these representations along with spatial logic rules (e.g., transitivity, symmetry, inclusion/exclusion) to infer relationships and answer questions about object locations and configurations.
|
Subset
|
To create a more challenging and realistic textual benchmark for spatial reasoning than bAbI Task 17. To explore the use of automatically generated, visually grounded data for improving LM spatial reasoning via distant supervision.
|
Textual Spatial Reasoning QA: Given a story describing objects in blocks and their spatial relationships, answer a question probing this spatial configuration. Questions fall into four types: Find Relation (FR), Find Blocks (FB), Choose Object (CO), or Yes/No (YN).
|
An instance includes a textual story (describing a scene based on an NLVR image), a question (one of four types: FR, FB, CO, YN), and the corresponding correct answer (selected from candidates or Yes/No/DK).
|
Scenes have objects with attributes in blocks. Stories provide partial descriptions. Reasoning requires applying spatial rules (transitivity, symmetry etc.). Includes "Don't Know" (DK) answers for YN questions under open-world assumption.
|
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Crowd-sourced task examples (e.g. Prolific-created tasks), Procedurally-generated task examples (e.g. Creating instances from a template)
|
Test sets: SPARTQA-HUMAN = 510 QA pairs; SPARTQA-AUTO Seen Test = 15,074 QA pairs; SPARTQA-AUTO Unseen Test = 15,087 QA pairs.
|
Yes
|
Question Type (FR, FB, CO, YN), Underlying Scene Graph (for SPARTQA-AUTO), SpRL annotations (Trajector, Landmark, Spatial Indicator).
|
Random sample (creators defined a task space and sampled from it), Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
|
Accuracy (percentage of correctly answered questions). F1 score reported for YN type analysis due to potential class imbalance.
|
SPARTQA-HUMAN stories and QA pairs written by two student volunteers based on visually grounded scenes (from NLVR images, potentially rearranged). SPARTQA-AUTO stories generated using Context-Free Grammars (CFGs); QA pairs generated programmatically using spatial logic rules applied to the underlying scene graph derived from NLVR images.
|
Mix (multiple authors from industry and academia)
|
Yes
|
Provides details on CFG design and question generation modules/rules. Describes model architectures and training parameters. Includes SpRL annotations as supplementary data.
|
The paper demonstrates a successful approach to creating large-scale distant supervision data (SPARTQA-AUTO) for a complex reasoning task (spatial reasoning) by leveraging grounded visual information and symbolic rules. This strategy effectively improves language model performance on human-created spatial reasoning tests (SPARTQA-HUMAN).
|
Test, Train, Validation
|
SPARTQA-HUMAN Train = 616 QA pairs. SPARTQA-AUTO Train = 93,673 QA pairs; Dev = 15,023 QA pairs.
|
Classification task. Models select the correct answer from a list of candidates (for FR, FB, CO) or predict one of three labels (Yes/No/DK for YN).
|
Simple Mean
|
Yes
|
Performance reported per question type (FB, FR, CO, YN). Comparisons between Seen vs Unseen test sets (AUTO) and HUMAN vs AUTO datasets. Consistency and contrast set evaluations.
| null |
Generation code: https://github.com/HLR/SpartQA_generation. Baselines code: https://github.com/HLR/SpartQA-baselines.
|
SPARTQA (includes SPARTQA-HUMAN and SPARTQA-AUTO)
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
The benchmark is itself realistic
|
Yes
|
Yes
|
Built on complex NLVR scenes. SPARTQA-HUMAN created by annotators focusing on natural language and reasoning. SPARTQA-AUTO generated programmatically ensuring spatial consistency via grounding and rules. Human performance benchmarks provided. Consistency/contrast sets test robustness. Extrinsic evaluation on bAbI/boolQ shows positive transfer.
|
Accuracy (%), F1 Score (%)
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
|
Models understanding potentially incomplete textual descriptions of spatial layouts to answer inferential questions, relevant to real-world language understanding.
|
Composite phenomenon
|
Yes
| null | null |
Reasoning
|
Spatial
| null |
['Real task', 'Author-crafted', 'Crowd-sourced', 'Procedurally-generated']
|
['Random', 'Convenience', 'Targeted', 'Criterion']
|
['Multiple choice']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Representative']
|
['Mean']
|
bhargavaDiscoSenseCommonsenseReasoning2022
|
DiscoSense: Commonsense Reasoning with Discourse Connectives
|
Include
| null | null |
This paper introduces DISCOSENSE, a benchmark for commonsense reasoning that focuses on understanding various discourse connectives. The task requires selecting the most plausible sentence ending given a preceding context sentence and a specific discourse connective. The benchmark uses Conditional Adversarial Filtering, an extension of Adversarial Filtering, to generate difficult distractor options. Evaluations demonstrate that state-of-the-art language models find DISCOSENSE challenging, suggesting it's a valuable tool for assessing commonsense reasoning.
|
Key contributions include: (1) Creating the DISCOSENSE benchmark targeting commonsense reasoning specifically through the understanding of 37 discourse connectives. (2) Proposing Conditional Adversarial Filtering (CAF) to generate compelling, hard-to-distinguish negative options. (3) Benchmarking numerous state-of-the-art language models, highlighting the difficulty of the task (significant gap to human performance). (4) Demonstrating the utility of DISCOSENSE for transfer learning by improving performance on the HELLASWAG dataset.
|
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Commonsense reasoning; Understanding discourse connectives and relations.
|
Yes
|
The capability to perform commonsense inference to determine the most plausible sentence ending given a context sentence and a discourse connective. This requires understanding the specific semantic relationship (e.g., causality, contrast, exemplification) implied by the connective and applying world knowledge to select the most coherent and logical continuation.
|
Subset
|
To create a more challenging commonsense benchmark less prone to superficial cues or artifacts, by specifically focusing on the reasoning required by discourse connectives and using adversarial methods (CAF) to generate strong distractors.
|
Given a context sentence and a discourse connective, select the most plausible ending sentence from four options, requiring commonsense reasoning based on the connective's meaning.
|
An instance consists of a context sentence, one of 37 discourse connectives, and four potential ending sentences. One ending is the ground truth (human-written or verified), and the other three are distractors generated via Conditional Adversarial Filtering.
|
Contexts are derived from DISCOVERY and DISCOFUSE datasets. Adversarial distractors are generated using a fine-tuned CTRL model. Human verification filters the final set of examples.
|
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Crowd-sourced task examples (e.g. Prolific-created tasks), Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
Total: 13,056 examples. Test set: 3,757 examples.
|
Yes
|
Discourse Connective (one of 37 types).
|
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
|
Accuracy (percentage of correctly chosen plausible endings).
|
Contexts and original endings sourced from DISCOVERY/DISCOFUSE. Three distractor endings generated for each example using Conditional Adversarial Filtering (CAF) involving a fine-tuned CTRL generator and a RoBERTa discriminator. The resulting examples were filtered through a two-step human verification process.
|
Academia
|
Yes
|
Detailed explanation of Conditional Adversarial Filtering (CAF) process. Details on human verifier recruitment, training, and compensation. List of 37 included discourse connectives provided. Training hyperparameters specified. Ethical considerations discussed.
|
The use of discourse connectives as the focal point for a commonsense reasoning benchmark is novel. Conditional Adversarial Filtering is a key methodological contribution for creating challenging distractors. The dataset proves effective both as a challenging benchmark and as a resource for transfer learning to related tasks like HELLASWAG.
|
Test, Train
|
Train set: 9,299 examples. No Dev split mentioned.
|
The model must choose the index corresponding to the most plausible ending sentence out of the four provided options.
|
Simple Mean
|
Yes
|
Error rate analysed per discourse connective. Ablation study analysing the impact of removing context and/or the connective.
| null |
https://github.com/prajjwall/discosense/
|
DISCOSENSE
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
Yes
|
Yes
|
Used CAF to generate hard distractors. Employed a 2-step human verification process for filtering. Final dataset demonstrates a significant human-model performance gap (~30 points). Ablation confirms models utilize both context and connectives.
|
Accuracy (%), Standard Deviation, Error Rate (%)
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
|
The task isolates reasoning about discourse connectives in a challenging multiple-choice format created via adversarial generation.
|
Single cohesive phenomenon
|
No
| null | null |
Reasoning
|
Commonsense
| null |
['Author-crafted', 'Crowd-sourced', 'Another benchmark', 'Procedurally-generated', 'LLM-generated']
|
['Convenience', 'Targeted', 'Criterion']
|
['Multiple choice']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Mean', 'Std']
|
hsiehSugarCrepeFixingHackable2023
|
SugarCrepe: Fixing Hackable Benchmarks for Vision-Language Compositionality
|
Include
| null | null |
SugarCrepe is a benchmark for multimodal compositional understanding. The benchmark specifically ensures that all hard negative (incorrect) descriptions in multiple-choice image-to-text retrieval tasks are fluent and plausible. The benchmark is publicly available, and the data was manually reviewed for quality control.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
compositionality, compositional understanding, compositional reasoning
|
Yes
|
Compositionality is the fundamental presupposition characterizing human perception and linguistic processing that enables humans to comprehend new scenes and describe those scenes by composing known atoms.
|
Subset
|
SugarCrepe isolates three domains of hard negative types: Replace, Swap, and Add. Each domain has sub-types. Replace hard negatives are Replace-Obj, Replace-Att, and Replace-Rel. Swap hard negatives are Swap-Obj and Swap-Att. Add hard negatives are Add-Obj and Add-Att.
|
Models are given an image and two descriptions, and must choose the correct description while avoiding the incorrect one, termed a "hard negative."
|
A single item contains the image, the correct description, and the hard negative.
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
7512
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible)
|
Multiple choice
|
LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics)
|
The paper defines two custom metrics based on two blind models, Vera and Grammar. The Vera score gap is the score difference between the positive and hard-negative texts: Vera(T^p) - Vera(T^n). The Grammar score gap is defined analogously as Grammar(T^p) - Grammar(T^n). The paper also refers to the Vera score gap as the commonsense score (both gaps are restated after this entry).
|
SugarCrepe uses image description pairs from COCO. It then generates sensical and fluent hard negatives using an LLM (ChatGPT), filters incorrect hard negatives with human validation, and then de-biases the dataset with adversarial refinement.
|
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Replace, Swap, Add
| null |
https://github.com/RAIVNLab/sugar-crepe
|
SugarCrepe (Synthetic yet Unbiased Generation with Adversarially Refined Compositional REPresentation Evaluation)
|
Widely-agreed
|
Yes
|
The Vera and Grammar models may be well-established and commonly used in compositionality or linguistic tasks, but it is not apparent in the paper. No justification is provided for the use of Vera and Grammar.
|
Yes
|
Yes
|
Yes
|
No
|
Yes
|
The paper justifies the improvement in task design displayed in the benchmark, but not the choice of the task itself.
|
The authors highlight that current image-to-text compositionality benchmarks are biased in using implausible or non-fluent hard negatives, allowing blind/language-only models to pass the multimodal task. However, the authors do not justify the choice of image-to-text retrieval task formulation, besides its use in current compositionality benchmarks.
|
Reports average scores for commonsense Vera score gap and Grammar score gap. The paper also reports the pairwise better ratio between SugarCrepe and ARO+CREPE.
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
|
The task may be a representative or real task, but the paper does not present any literature grounding the validity of the task beyond its use in current compositionality benchmarks.
|
Composite phenomenon
|
Yes
| null | null |
Reasoning
|
Compositional
| null |
['Author-crafted', 'Another benchmark', 'Procedurally-generated', 'LLM-generated']
|
['Convenience']
|
['Multiple choice']
|
['LLM-as-a-Judge']
|
['Widely-agreed']
|
['Yes']
|
['Partially']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Mean']
|
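For clarity, a minimal LaTeX restatement of the SugarCrepe score-gap metrics noted above; T^p and T^n denote the positive and hard-negative captions as in that entry, and the gap names are shorthand rather than the paper's own notation:

```latex
\mathrm{VeraGap} = \mathrm{Vera}(T^{p}) - \mathrm{Vera}(T^{n}), \qquad
\mathrm{GrammarGap} = \mathrm{Grammar}(T^{p}) - \mathrm{Grammar}(T^{n})
```

A positive gap indicates that the blind scorer assigns a higher score to the positive caption than to the hard negative.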
renBEACONBenchmarkComprehensive2024
|
BEACON: Benchmark for Comprehensive RNA Tasks and Language Models
|
Include
|
Topic Exclusion (Is the paper about measuring the capabilities of LLMs?)
| null |
This paper presents BEACON, the first comprehensive benchmark for evaluating RNA language models across 13 tasks related to RNA structure, function, and engineering. It analyzes various models and components, highlighting the benefits of single nucleotide tokenization and ALiBi positional encoding, and introduces BEACON-B, a strong, resource-efficient baseline model.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
RNA understanding — the ability of models to perform comprehensive RNA-related tasks
|
Yes
|
The ability of models to perform comprehensive RNA-related tasks, such as: understanding RNA structure (e.g., secondary structure, contact map, distance map); predicting RNA function (e.g., splice sites, isoform usage, non-coding RNA function, modifications); and supporting RNA engineering (e.g., predicting vaccine degradation, programmable RNA switches, CRISPR targeting).
|
Subset
| null |
Models are evaluated on 13 RNA-related tasks that span structural analysis (e.g., secondary structure, contact maps), functional prediction (e.g., splice sites, RNA modifications), and engineering applications (e.g., CRISPR targeting, vaccine stability). Each task involves either classification or regression at the nucleotide or sequence level, with specific evaluation metrics for the biological context.
|
one RNA sequence, typically composed of nucleotide characters (e.g., A, U, C, G), along with a corresponding label or set of labels depending on the task
| null |
Real task examples (e.g. GitHub issues), Modified from another benchmark (e.g. translation into another language)
|
total: 96,283
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall), Top-L precision, Top-k accuracy, R^2, AUC, MCRMSE, Spearman correlation
| null | null |
Academia
|
Yes
| null | null |
Test, Train, Validation
|
total: 793,047 and 77,836
| null |
Simple Mean
|
No
| null | null |
https://github.com/terry-r123/RNABenchmark
|
BEACON
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
Mean and std
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people), Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null | null |
Biology
| null | null |
['Real task', 'Another benchmark']
|
['Convenience', 'Targeted', 'Criterion']
|
['Short free response', 'Structured']
|
['Exact match', 'Correlation']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial', 'Constructed']
|
['Mean', 'Std']
|
maSpreadsheetBenchChallengingReal2024
|
SPREADSHEETBENCH: Towards Challenging Real World Spreadsheet Manipulation
|
Include
| null | null |
Agentic benchmark measuring whether LLMs can do real-world spreadsheet manipulation.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Spreadsheet manipulation.
|
No
| null |
Comprehensive
| null |
Generate a code-based solution (Python or another language) that carries out the given spreadsheet instructions; the code is then executed to manipulate the spreadsheet.
|
A set of instructions (taken from real-world spreadsheet questions) and an initial spreadsheet.
| null |
Real task examples (e.g. GitHub issues), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
912
|
Yes
|
Cell-level vs sheet-level questions.
|
Convenience sample (creators found a set of tasks that was readily accessible), Specific criteria (items were taken from a larger set based on specified rules)
|
Structured response (e.g. valid JSON, API call alone)
|
Execute the code and check exact match between the resulting table and the ground-truth table.
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Cell-level vs sheet-level manipulation
| null |
https://spreadsheetbench.github.io
|
SPREADSHEETBENCH
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
Yes
|
No
| null |
Mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Code Generation
| null | null |
['Real task', 'LLM-generated']
|
['Convenience', 'Criterion']
|
['Structured']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Partial']
|
['Mean']
|
guptaBiphoneModelingInter2023
|
Bi-Phone: Modeling Inter Language Phonetic Influences in Text
|
Include
| null | null |
Many users are forced to use the web in a language they are not fluent in (their second language, L2), often resulting in text errors influenced by their native language (L1). This work introduces Bi-Phone, a model that uses phoneme confusions between L1 and L2 to generate realistic corrupted text, evaluates its impact on language models with the new FunGLUE benchmark, and proposes a phoneme prediction task to improve model robustness.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Authors measure robustness to L1-L2 phonetic interference in natural language understanding (NLU).
|
Yes
|
It is defined as the influence of a speaker's native language (L1) on their written use of a second language (L2), particularly phoneme-shift-based misspellings that arise due to difficulty distinguishing or producing certain L2 sounds that do not exist or differ in L1.
|
Subset
| null |
The goal is to evaluate the robustness of NLU models to phonetic misspellings caused by L1-L2 (native-second language) interference.
|
A single item in the FunGLUE task dataset is a modified version of a SuperGLUE example, where one or more words in a key field (e.g., question, hypothesis, premise) have been replaced with phonetically plausible misspellings generated by the Bi-Phone model to simulate L1-L2 interference. Each item retains the original structure of the SuperGLUE task (e.g., a question and answer pair, or a premise and hypothesis) along with the original label.
| null |
Modified from another benchmark (e.g. translation into another language)
|
N/A (should be the same as SuperGLUE)
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Short free response (e.g. single word or number), Free response (e.g. summary paragarph)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Mix (multiple authors from industry and academia)
|
No, no link is provided
| null | null | null | null | null |
Simple Mean
|
No
| null | null |
https://github.com/google-research-datasets/FunGLUE
|
FunGLUE
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null | null |
NLP
|
Understanding
| null |
['Another benchmark']
|
['Convenience', 'Targeted', 'Criterion']
|
['Multiple choice', 'Short free response', 'Free response']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
|
['Mean']
|
zhuAreLargeLanguage2024
|
Are Large Language Models Good Statisticians?
|
Include
| null | null |
This paper introduces the StatQA benchmark designed to evaluate LLMs’ proficiency in specialized statistical tasks and their applicability assessment capabilities, particularly for hypothesis testing methods.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Statistical Analysis/Literacy
|
Yes
|
In a typical statistical analysis task, given a table D and a statistical question Q, a qualified statistician should be proficient in selecting the relevant columns C, choosing the appropriate statistical methods M, and computing the results based on M using C.
|
Subset
| null |
Given a statistical question and a corresponding table of data (information on columns and data types), identify the relevant columns and the appropriate statistical methods needed to derive the correct answer.
|
A single task item is a statistical question paired with tabular data (including metadata).
| null |
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
11,623
|
Yes
|
Difficulty, Task, Results
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Short free response (e.g. single word or number), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Prompting strategy (e.g. 0-shot, 1-shot), statistical task
|
None
|
https://github.com/HKUSTDial/StatQA/tree/main/StatQA
|
StatQA
|
Widely-agreed
|
No
|
Yes
|
Yes
|
No
|
No comparisons made
|
Yes
|
Yes
|
Yes
|
The authors collect the dataset using postgraduate students in statistics from Kaggle (a real-world platform for data scientists), include expert reviews of the questions, and provide a comparison between human statisticians and LLMs on the task.
| null |
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null | null |
Data Analysis
| null | null |
['Real task', 'Author-crafted', 'Procedurally-generated', 'LLM-generated']
|
['Convenience']
|
['Short free response', 'Structured']
|
['Exact match']
|
['Widely-agreed']
|
['No']
|
['Yes']
|
['Comparison made']
|
['Yes']
|
['Representative']
| null |
houWikiContradictBenchmarkEvaluating2024
|
WikiContradict: A Benchmark for Evaluating LLMs on Real-World Knowledge Conflicts from Wikipedia
|
Include
| null | null |
Retrieval-augmented generation (RAG) helps mitigate limitations in large language models (LLMs), but how LLMs handle knowledge conflicts from equally trustworthy sources remains unclear. The WikiContradict benchmark, consisting of 253 high-quality, human-annotated instances, evaluates LLM responses to contradictory passages from Wikipedia. Evaluations reveal that while LLMs struggle to generate answers reflecting the conflicting nature of contexts, especially with implicit conflicts, an automated model achieves an F-score of 0.8 in estimating LLM performance, highlighting areas for further improvement.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Real-world knowledge conflicts (intra-context)
|
Yes
|
Knowledge inconsistencies that arise from the same or different retrieved passages, which originate from a single trusted source (Wikipedia) and are considered equally credible.
|
Comprehensive
| null |
Answer questions about text passages under 5 different prompt templates
|
Question, context1, context2, answer, contradiction type, reference answer
| null |
Real task examples (e.g. GitHub issues)
|
253
|
Yes
|
question type, contradiction type
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall), Human ratings (text quality, preference, NOT manual scoring of other metrics), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics)
| null |
Wikipedia articles
|
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
prompt template, question type, contradiction type
| null |
https://huggingface.co/datasets/ibm-research/Wikipedia_contradict_benchmark
|
WikiContradict
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
No
| null |
simple mean/sum
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Knowledge
|
Conflicts
| null |
['Real task']
|
['Targeted', 'Criterion']
|
['Multiple choice', 'Short free response']
|
['Exact match', 'Human ratings', 'LLM-as-a-Judge']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Partial']
|
['Mean']
|
konIaCevalCodeGeneration2024
|
IaC-Eval: A Code Generation Benchmark for Cloud Infrastructure-as-Code Programs
|
Include
| null | null |
Evaluating LLMs' ability to generate Infrastructure-as-Code (IaC) code (part of cloud computing).
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
IaC code generation
|
No
| null |
Comprehensive
| null |
The LLM generates an Infrastructure-as-Code (IaC) program based on a set of instructions.
|
The LLM must generate a program given (i) a natural language prompt describing the problem, (ii) user intent specifications written in Rego, and (iii) an example of a correct configuration in Terraform HCL.
|
All examples are based on AWS services (though there are many different AWS services)
|
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions)
|
458
|
Yes
|
Difficulty level
|
Unknown
|
Structured response (e.g. valid JSON, API call alone)
|
Functional correctness checks, evaluated by (1) producing a dependency graph from the code and (2) using an IaC policy engine to check whether the instruction specifications are satisfied in the program.
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Difficulty level
|
pass@k (any correct answer in k trials; a standard estimator is sketched after this entry)
|
https://github.com/autoiac-project/iac-eval
|
IaC-Eval
|
Not defined
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
Yes
|
Discusses the limitation that the benchmark only uses AWS services and ignores e.g. Azure.
|
Mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people), Representative task (e.g. answering medical licensing exam questions)
|
All examples were created for the benchmark rather than being based on real-world problems.
|
Single cohesive phenomenon
|
Not applicable
| null | null |
Code Generation
| null | null |
['Author-crafted']
|
['Unknown']
|
['Structured']
|
['Reward']
|
['No definition']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Partial', 'Representative']
|
['Mean']
|
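Several entries in this table report pass@k (any correct answer in k trials). Where papers follow the standard unbiased estimator of Chen et al. (2021), it can be computed as in the short sketch below; this is a sketch only, individual benchmarks may compute pass@k differently, and the function and variable names are illustrative:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n generations of which c are
    correct, solves the problem."""
    if n - c < k:
        return 1.0  # too few incorrect samples to fill k draws
    return 1.0 - comb(n - c, k) / comb(n, k)

# Illustrative usage: 10 generations per problem, 3 correct, k = 1
print(pass_at_k(10, 3, 1))  # 0.3
```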
waghjaleECCOCanWe2024
|
ECCO: Can We Improve Model-Generated Code Efficiency Without Sacrificing Functional Correctness?
|
Include
| null | null |
Evaluating the efficiency of LLM-generated code.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
LLM-generated code efficiency.
|
No
| null |
Comprehensive
| null |
(i) Code generation from natural language instructions and (ii) editing existing programs.
|
A natural language description or an existing program; the task is either to generate an efficient program from the description or to refactor the existing program to make it more efficient.
| null |
Human exam questions (e.g. GRE questions), Real task examples (e.g. GitHub issues)
|
48
| null | null |
Convenience sample (creators found a set of tasks that was readily accessible), Specific criteria (items were taken from a larger set based on specified rules)
|
Structured response (e.g. valid JSON, API call alone)
|
Code "Speedup" and "Memory Reduction" versus reference solutions.
| null | null |
Academia
|
Yes
| null | null |
Test, Train, Validation
|
1262, 69
| null |
Simple Mean
|
Yes
|
The two tasks in the benchmark
| null |
https://github.com/CodeEff/ECCO
|
ECCO
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
Yes
|
Only uses Python problems and competitive coding exam questions, therefore "results may not be comprehensive enough to reflect the quality of model-generated programs".
|
Mean, variance
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Composite phenomenon
|
Yes
| null | null |
Code Generation
| null | null |
['Human exams', 'Real task']
|
['Convenience', 'Criterion']
|
['Structured']
|
['Reward']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Partial']
|
['Mean', 'Std']
|
wuClashEvalQuantifyingTugofwar2024
|
ClashEval: Quantifying the tug-of-war between an LLM’s internal prior and external evidence
|
Include
| null | null |
Retrieval-augmented generation (RAG) aims to reduce hallucinations and update knowledge in large language models (LLMs). A study with over 1,200 questions across six domains examines how LLMs handle correctly and incorrectly retrieved content. Findings show LLMs often adopt wrong retrieved information, especially if they lack confidence in their initial response, but are less likely to accept highly unrealistic content, presenting a significant challenge and benchmark for improving LLM accuracy when faced with conflicting information.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Conflict of internal knowledge and external evidence
|
Yes
|
Conflict between internal pre-training knowledge and context, and conflict resolution ability of LLMs
|
Comprehensive
| null |
RAG question-answering across 6 different domains
| null |
Question, domain, answer, score
|
Crowd-sourced task examples (e.g. Prolific-created tasks), Modified from another benchmark (e.g. translation into another language)
|
1278
|
Yes
|
domain
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
domains
| null |
https://github.com/kevinwu23/StanfordClashEval
|
ClashEval
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
No
| null |
simple mean/sum
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Single cohesive phenomenon
|
No
|
3 or 10 perturbations per question
|
No
|
Knowledge
|
Conflicts
| null |
['Crowd-sourced', 'Another benchmark']
|
['Targeted']
|
['Short free response']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Representative']
|
['Mean']
|
pressCiteMECanLanguage2024
|
CiteME: Can Language Models Accurately Cite Scientific Claims?
|
Include
| null | null |
With thousands of new scientific papers published monthly, staying updated and accurately attributing claims is challenging. The CiteME benchmark evaluates the ability of large language models (LLMs) to identify cited papers in text excerpts from recent machine learning papers, highlighting a significant gap between human performance (69.7% accuracy) and LLMs (4.2-18.5% accuracy). Introducing CiteAgent, an autonomous system built on GPT-4o that searches and reads papers, bridges this gap by achieving 35.3% accuracy, moving towards better automatic verification of claims made by LMs.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Identify referenced papers in academic articles
|
Yes
|
Abilities of LMs to correctly attribute scientific claims
|
Comprehensive
| null |
Read text excerpts that reference a single other paper and identify the referenced paper.
|
excerpt, target paper title, target paper url, source paper title, source paper url, year, answer
| null |
Real task examples (e.g. GitHub issues)
|
130
|
No
| null |
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number), Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test, Train
| null | null |
Simple Mean
|
Yes
|
no commands, search only, search and read
| null |
https://huggingface.co/datasets/bethgelab/CiteME
|
CiteME
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
The benchmark is itself realistic
|
Yes
|
No
| null |
simple mean/sum
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Not applicable
| null |
No
|
Agents
| null | null |
['Real task']
|
['Targeted', 'Criterion']
|
['Short free response', 'Free response']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Partial']
|
['Mean']
|
jinShoppingMMLUMassive2024
|
Shopping MMLU: A Massive Multi-Task Online Shopping Benchmark for Large Language Models
|
Include
| null | null |
Shopping MMLU is a comprehensive benchmark for evaluating how large language models (LLMs) perform on online shopping tasks. The authors transformed online shopping tasks into a text-to-text format suitable for LLMs, evaluated over 20 different models, and analyzed performance patterns.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
performance on online shopping tasks
| null |
"Online shopping is a complex multi-task, few-shot learning problem with a wide and evolving range of entities, relations, and tasks."
|
Comprehensive
| null |
Models are presented with shopping-related prompts (such as product descriptions, user queries, or reviews) and must generate appropriate responses (like classifications, rankings, entity extraction, or product recommendations) following specific instructions.
|
shopping-related prompts (such as product descriptions, user queries, or reviews)
| null |
Real task examples (e.g. GitHub issues)
|
57 tasks and 20,799 questions
|
No
|
Task type: shopping concept understanding, shopping knowledge reasoning, user behaviour alignment, multi-lingual abilities
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Short free response (e.g. single word or number), Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
For task type
|
pass@k (any correct answer in k trials)
|
https://github.com/KL4805/ShoppingMMLU
|
Shopping MMLU
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
Yes
|
The authors validate their benchmark by comparing LLM performance against task-specific state-of-the-art methods on three representative tasks.
|
simple average
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
|
It includes real-world shopping elements (actual product descriptions, genuine user queries, etc.), but the tasks are presented as isolated questions rather than as part of a complete interactive shopping experience.
|
Composite phenomenon
|
Yes
| null |
No
|
Agents
|
Web
| null |
['Real task']
|
['Targeted', 'Criterion']
|
['Multiple choice', 'Short free response', 'Free response']
|
['Exact match', 'Soft match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Partial']
|
['Mean']
|
zhuangToolQADatasetLLM2023
|
ToolQA: A Dataset for LLM Question Answering with External Tools
|
Include
| null | null |
This paper introduces ToolQA, which is designed to faithfully evaluate LLMs' ability to use external tools for question answering, as opposed to merely retrieving memorized knowledge. ToolQA involves a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Question answering with external tool utilization.
|
Yes
|
Ability to answer a question using external tools to obtain information from a reference corpus without relying on intrinsic parametric knowledge
|
Comprehensive
| null |
Given a question, a reference corpus, and tools, use the tools to retrieve information from the reference corpus that can be helpful in providing an answer.
|
A single task item consists of a question and an answer.
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
1530
|
Yes
|
Difficulty
|
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Free response (e.g. summary paragraph, executable code), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Subtask dataset and difficulty
| null |
https://github.com/night-chen/ToolQA
|
ToolQA
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
No
| null |
Simple mean
|
Model access required (e.g. logits)
|
Representative task (e.g. answering medical licensing exam questions), Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
Not applicable
| null |
No
|
Agents
|
Tool Use
| null |
['Author-crafted', 'Another benchmark', 'Procedurally-generated', 'LLM-generated']
|
['Convenience', 'Targeted', 'Criterion']
|
['Free response', 'Structured']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Representative', 'Constructed']
|
['Mean']
|
chenCopyBenchMeasuringLiteral2024
|
CopyBench: Measuring Literal and Non-Literal Reproduction of Copyright-Protected Text in Language Model Generation
|
Include
| null | null |
COPYBENCH is introduced to evaluate both literal and non-literal reproduction of copyrighted content by language models (LMs), addressing a gap where previous research only considered literal similarities. Using copyrighted fiction books, COPYBENCH assesses literal and non-literal copying, finding that while literal copying is rare, non-literal copying, such as event and character copying, is more prevalent, especially in larger models. The benchmark reveals that training-time alignment can reduce literal copying but may increase non-literal copying, and current inference-time methods are more effective for literal copying than for non-literal copying, highlighting areas for improvement in copyright mitigation strategies.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Literal and non-literal copying in LM generations
|
Yes
|
Literal copying assesses the extent to which a model can reproduce copyright-protected content exactly as it appears in the source material; non-literal copying evaluates whether a model generates outputs that, despite differing in surface form (e.g., through paraphrasing), exhibit a high degree of overlap in content.
|
Comprehensive
| null |
Measure the degree of literal copying, non-literal copying, and fact recall on a list of copyright-protected fiction books.
| null |
Text, copying, utility
|
Text snippets from books
|
4633
|
No
| null | null |
Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), Human ratings (text quality, preference, NOT manual scoring of other metrics), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
copying types, utility
| null |
https://github.com/chentong0/copy-bench
|
CopyBench
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
No
| null |
simple mean/sum
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Single cohesive phenomenon
|
Not applicable
| null |
No
|
Language Modelling
|
Copyright
| null |
['Author-crafted']
|
['Unknown']
|
['Free response']
|
['Exact match', 'Human ratings', 'LLM-as-a-Judge']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Representative']
|
['Mean']
|
ajithLitSearchRetrievalBenchmark2024
|
LitSearch: A Retrieval Benchmark for Scientific Literature Search
|
Include
| null | null |
Literature search questions often require deep understanding and reasoning across research articles, posing challenges for modern search engines. LitSearch, a new benchmark with 597 literature search queries about recent ML and NLP papers, is introduced to address these challenges. The benchmark, constructed from GPT-4-generated and manually written questions, reveals a significant performance gap between traditional retrieval models like BM25 and state-of-the-art dense retrievers, with LLM-based reranking further improving retrieval performance, highlighting the limitations of commercial search engines on these complex queries.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Scientific literature search
|
Yes
|
Finding literature via a specific search query: for example, collecting related work, checking if a method has been proposed before, or recalling a previously seen paper.
|
Comprehensive
| null |
Answer literature search questions related to a large corpus of scientific papers
|
question, answer, score
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions)
|
597
|
Yes
|
in-line citation questions, handwritten questions, broad, specific
|
Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number), Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), Human ratings (text quality, preference, NOT manual scoring of other metrics), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
in-line citation questions, handwritten questions, broad, specific
|
pass@k (any correct answer in k trials)
|
https://github.com/princeton-nlp/LitSearch
|
LitSearch
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
The benchmark is itself realistic
|
No
|
No
| null |
simple mean/sum
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Single cohesive phenomenon
|
Not applicable
| null |
No
|
General Science
| null | null |
['Author-crafted']
|
['Criterion']
|
['Short free response', 'Free response']
|
['Exact match', 'Human ratings', 'LLM-as-a-Judge']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Representative']
|
['Mean']
|
heMedEvalMultilevelMultitask2023
|
MEDEVAL: A Multi-Level, Multi-Task, and Multi-Domain Medical Benchmark for Language Model Evaluation
|
Include
| null | null |
MEDEVAL is a multi-level, multi-task, and multi-domain medical benchmark. The paper collects data from several healthcare systems and annotations from experts. It evaluates generic and domain-specific language models under zero-shot and fine-tuned settings.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
performance on medical examination healthcare tasks
|
No
| null |
Subset
| null |
They identify two types of tasks (NLU and NLG) at two levels (sentence-level and document-level). Sentence-level NLU: identifying diagnostic properties; sentence-level NLG: sentence disambiguation; document-level NLU: categorizing reports into specific diagnostic codes; document-level NLG: medical summarization.
|
I couldn't find the dataset; the link they included is broken. From their description in the appendix, a single item varies based on the task, but primarily consists of one of the four tasks, level name (sentence or report), and task content.
| null | null |
8,801 (I did the math here, they provide the ratios)
|
Yes
|
body parts
|
Random sample (creators defined a task space and sampled from it)
|
Multiple choice, Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF); an ambiguity classifier from previous work is also used.
| null |
They adapt existing medical datasets (that haven't been used as benchmarks) and add new expert annotations
|
Academia
|
No, link is broken
| null | null | null |
Train/validation/test split of 7:2:1
| null |
Simple Mean
|
Yes
|
Body parts (chest, foot, ankle)
| null |
https://github.com/ZexueHe/MedEval
|
MEDEVAL
|
Not defined
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
simple mean/sum
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Composite phenomenon
|
Yes
| null |
Yes
|
Medicine
| null | null |
['Unknown']
|
['Random']
|
['Multiple choice', 'Free response']
|
['Exact match', 'Soft match', 'LLM-as-a-Judge']
|
['No definition']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
|
['Mean']
|
duPAGEDBenchmarkProcedural2024
|
PAGED: A Benchmark for Procedural Graphs Extraction from Documents
|
Include
| null | null |
Proposes a dataset. Finds that baseline models cannot extract optimal procedural graphs well, and that LLMs have advantages in building relevant structures.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
procedural graph extraction
|
Yes
|
"Procedural graphs... intuitively represent the execution of actions for goal achievement", and this paper focuses on the "automatic extraction of procedural graphs from procedural documents"-p10829
|
Comprehensive
| null |
Given a procedural text, the model has to extract the procedural graph from it.
|
a procedural text
| null |
Modified from another benchmark (e.g. translation into another language), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
1131 (total is 3394, train:val:test=3:1:2)
|
Yes
|
actor, action, constraint, gateway, flow
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF)
| null | null |
Academia
|
Yes
| null | null |
Test, Train, Validation
|
train: 1697, val: 566 (total is 3394, train:val:test=3:1:2)
| null | null |
Yes
|
Different constraints, different gateways, different flows
| null |
https://github.com/SCUNLP/PAGED
|
PAGED
|
Widely-agreed
|
Yes
|
No
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
Yes
|
They conduct a human evaluation.
| null | null |
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null | null |
NLP
|
Extraction
| null |
['Another benchmark', 'LLM-generated']
|
['Targeted']
|
['Free response']
|
['Exact match', 'Soft match']
|
['Widely-agreed']
|
['Yes']
|
['No']
|
['No comparison made']
|
['Yes']
|
['Constructed']
| null |
zhangToolBeHonestMultilevelHallucination2024
|
ToolBeHonest: A Multi-level Hallucination Diagnostic Benchmark for Tool-Augmented Large Language Models
|
Include
| null | null |
This paper presents ToolBH, a benchmark designed to diagnose hallucinations in tool-augmented large language models (LLMs). Hallucinations are evaluated from two dimensions: depth, using a multi-level evaluation framework, and breadth, encompassing three distinct scenarios that are likely to induce hallucinations. The authors developed seven tasks and curated 700 evaluation samples through multiple rounds of manual annotation.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Hallucination diagnosis for tool-augmented LLMs.
|
No
|
Hallucination occurs when the output is inconsistent with the input, contradicts established knowledge, or cannot be verified against factual data. In this paper, hallucination for tool-augmented LLMs occurs when LLMs attempt to address tool-use requests that they believe are solvable but are inherently unsolvable. How do tool-using LLMs behave on unsolvable tasks?
|
Subset
| null |
Take a user query and a set of available tools (multi-level diagnosis) and: (1) decide if the query is solvable using only those tools (yes or no); (2) if solvable, provide the steps needed to solve it using the tools (a solution plan); and (3) if any parts are not solvable, describe what kind of tools would be needed (specify missing functionality). At a different level (hallucination-inducing), take a user query and an incomplete or misleading toolset, and check whether the model correctly identifies the tool limitations or hallucinates missing tools.
|
It consists of a user query (task), a list of tools, and subgoals.
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
700
|
No
| null |
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Across each subtask
| null |
https://github.com/ToolBeHonest/ToolBeHonest
|
ToolBH
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No
|
The benchmark is itself realistic
|
No
| null |
The authors argue that the data curation pipeline is designed to reflect real-world tool usage scenarios based on user queries, and this is manually validated to ensure quality. The Level 2 (in-breadth) analysis focuses on hallucination-inducing settings, where the toolset is deliberately altered to introduce incomplete or misleading information, testing whether LLMs recognize these limitations or hallucinate missing tools or capabilities.
|
Simple mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Composite phenomenon
|
Yes
| null |
No
|
Agents
|
Tool Use
| null |
['Author-crafted', 'Procedurally-generated', 'LLM-generated']
|
['Targeted', 'Criterion']
|
['Short free response', 'Structured']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['']
|
['Partial']
|
['Mean']
|
yeBenchmarkingLlmsUncertainty2024
|
Benchmarking LLMs via Uncertainty Quantification
|
Include
| null | null |
This paper introduces a new benchmarking approach for Large Language Models that incorporates uncertainty quantification using conformal prediction across five NLP tasks.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Uncertainty in LLM predictions
|
Yes
|
They formalize this using conformal prediction to produce "a prediction set of possible labels (answers) that encompasses the correct label with a user-specified error rate and expresses uncertainty as the set size. Intuitively, a larger set size indicates higher uncertainty and vice versa."
|
Comprehensive
|
The authors specifically chose conformal prediction because it offers "multiple advantages including ease of implementation, high efficiency, distribution-free and model-agnostic, and a statistically rigorous estimation of uncertainty rather than a heuristic approximation."
|
The task involves converting multiple NLP tasks into multiple-choice questions and measuring both the accuracy of LLMs' predictions and their uncertainty through the size of prediction sets generated via conformal prediction.
|
A single item consists of a multiple-choice question, where the model must predict an answer and prediction set.
| null |
Modified from another benchmark (e.g. translation into another language)
|
50,000
|
Yes
|
task type, prompt strategy, conformal score function
|
Convenience sample (creators found a set of tasks that was readily accessible), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall), Distribution (perplexity, calibration, correlation)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null |
The LLMs are evaluated on their ability to select the correct multiple-choice answer and to generate a prediction set that includes the correct answer (a minimal sketch of set construction follows this entry).
|
Simple Mean
|
Yes
|
Results are given for: individual tasks, different conformal score functions. Also by prompting strategy and model size.
| null |
https://github.com/smartyfh/LLM-Uncertainty-Bench
|
LLM-Uncertainty-Bench
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
Yes
|
The authors validate their benchmark by comparing conformal prediction to other uncertainty quantification methods.
|
mean
|
Model access required (e.g. logits)
|
Representative task (e.g. answering medical licensing exam questions)
|
The task represents typical multiple-choice NLP evaluation scenarios but adds an uncertainty measurement component.
|
Single cohesive phenomenon
| null | null |
Yes
|
Language Modelling
|
Calibration
| null |
['Another benchmark']
|
['Convenience', 'Criterion']
|
['Multiple choice', '']
|
['Exact match', 'Distribution']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative']
|
['Mean']
|
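A minimal sketch of how a split conformal prediction set could be built for the multiple-choice setting described in the entry above, assuming softmax scores over the options are available and using the LAC-style nonconformity score 1 - p(true option), one commonly used score function; the benchmark compares several score functions, so exact details may differ, and the function and variable names here are illustrative:

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction with score s = 1 - p(true option).

    cal_probs:  (n_cal, n_options) softmax scores on a calibration split
    cal_labels: (n_cal,) indices of the correct options
    test_probs: (n_test, n_options) softmax scores on the test split
    alpha:      user-specified error rate; sets cover the true option
                with probability >= 1 - alpha
    Returns a list of prediction sets (arrays of option indices);
    larger sets signal higher uncertainty.
    """
    n = len(cal_labels)
    # Nonconformity score on the calibration split.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected (1 - alpha) quantile of the scores.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    qhat = np.quantile(scores, q_level, method="higher")
    # Keep every option whose score falls at or below the threshold.
    return [np.where(1.0 - p <= qhat)[0] for p in test_probs]
```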
guoWhatCanLarge2023
|
What can Large Language Models do in chemistry? A comprehensive benchmark on eight tasks
|
Include
| null | null |
The paper develops a benchmark to assess the capabilities of five LLMs on chemistry, using eight chemistry tasks requiring understanding, reasoning, and explanation abilities.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
chemistry ability
|
Yes
|
"We identify three key chemistry-related capabilities including understanding, reasoning and explaining to explore in LLMs and establish a benchmark containing eight chemistry tasks."
|
Subset
|
Capabilities: understanding, reasoning, and explaining
Tasks: name prediction, property prediction, yield prediction, reaction prediction, retrosynthesis, text-based molecule design, molecule captioning, and reagents selection
|
8 chemistry tasks: name prediction, property prediction, yield prediction, reaction prediction, retrosynthesis, text-based molecule design, molecule captioning, and reagents selection
|
A task item consists of a chemistry-specific prompt (e.g. reactants) and the expected output (e.g. chemical reaction product).
| null |
Real task examples (e.g. GitHub issues)
|
100
|
Yes
|
Task type: name prediction, property prediction, yield prediction, reaction prediction, retrosynthesis, text-based molecule design, molecule captioning, and reagents selection
|
Random sample (creators defined a task space and sampled from it), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Short free response (e.g. single word or number), Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF)
| null |
chemistry datasets: BBBP, Tox21, PubChem, USPTO, and ChEBI
|
Academia
|
Yes
| null | null |
Test, Validation
| null | null |
Simple Mean
|
Yes
|
Reported separately for each chemistry task.
| null |
https://github.com/ChemFoundationModels/ChemLLMBench
|
ChemLLMBench
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
Yes
|
The authors consider validity by consulting experts, comparing performance with established baselines, and evaluating different experimental settings (e.g. prompt strategy).
|
Mean and standard deviation
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null |
Yes
|
Chemistry
| null | null |
['Real task']
|
['Random', 'Criterion']
|
['Multiple choice', 'Short free response', 'Free response']
|
['Exact match', 'Soft match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative']
|
['Mean', 'Std']
|
wangCanLanguageModels2023
|
Can Language Models Solve Graph Problems in Natural Language?
|
Include
| null | null |
Proposes a dataset. Evaluates LLMs with different prompting approaches. Proposes new approaches to boost LLM performance.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
graph processing
|
Yes
| null |
Comprehensive
| null |
Reasoning with graphs and structures
|
A graph description and a corresponding question.
| null |
Procedurally-generated task examples (e.g. Creating instances from a template)
|
5,902 problems in a standard version and 29,370 problems in an extended version
|
Yes
|
difficulty level, question type
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Short free response (e.g. single word or number), Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), partial credit
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null | null |
Yes
|
question type, difficulty level
| null |
https://github.com/Arthur-Heng/NLGraph
|
NLGraph
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
Yes
|
The dataset is generated by a reliable procedure.
| null |
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null |
No
|
NLP
|
Extraction
| null |
['Procedurally-generated']
|
['Targeted']
|
['Short free response', 'Free response']
|
['Exact match', 'Soft match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
| null |
wuStreamBenchBenchmarkingContinuous2024
|
StreamBench: Towards Benchmarking Continuous Improvement of Language Agents
|
Include
| null | null |
StreamBench is a benchmark designed to evaluate language agents' ability to improve over time through feedback. The authors propose a novel evaluation setting where language models must continuously learn from an input-feedback sequence, with the goal of maximizing accuracy across a range of tasks.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Continuous improvement of language agents from feedback
|
Yes
|
"to evaluate LLM agents' ability to improve themselves over an input-feedback sequence. StreamBench simulates an environment where LLM agents are exposed to a sequence of users' natural language requirements and feedback."
|
Subset
| null |
The task requires language models to continuously learn from an input-feedback sequence, improving their performance over time on various downstream tasks like text-to-SQL, coding, medical diagnosis, and question answering.
|
A single item consists of an input in natural language (e.g., data requirements, symptoms, questions), the agent's predicted output, and binary feedback indicating correctness.
|
The benchmark focuses on sequential improvement rather than static performance.
|
Real task examples (e.g. GitHub issues)
|
9,702
|
Yes
|
task type, input-output format, binary feedback signal
|
Specific criteria (items were taken from a larger set based on specified rules)
|
Free response (e.g. summary paragraph, executable code), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall), LLM post-processing (extracting answers, reformatting for automated scoring)
| null |
The benchmark integrates seven existing datasets (Spider, CoSQL, BIRD, DS-1000, ToolBench, DDXPlus, HotpotQA) but transforms them by assigning time steps to create streaming sequences.
|
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Split by tasks and datasets
| null |
https://github.com/stream-bench/stream-bench
|
stream-bench
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
Yes
|
They test the streaming methods across multiple tasks/datasets, use random seeds to verify robustness to sequence ordering, and run ablation studies to validate key components of the method.
|
mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
|
The tasks come from real-world applications but the feedback mechanism (binary correctness) is simplified.
|
Composite phenomenon
|
Yes
| null | null |
Agents
| null | null |
['Real task']
|
['Criterion']
|
['Free response', 'Structured']
|
['Exact match', 'LLM post-processing']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Partial']
|
['Mean']
|
piUOUOUncontextualizedUncommon2024
|
UOUO: Uncontextualized Uncommon Objects for Measuring Knowledge Horizons of Vision Language Models
|
Include
| null | null |
VLMs' ability to handle rare objects, which fall into the long tail of data distributions, is understudied in the current literature. To rigorously evaluate this aspect, the authors introduce the "Uncontextualized Uncommon Objects" (UOUO) benchmark, which focuses on systematically testing VLMs on rare and specialized objects.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
rare & uncommon visual grounding
|
No
| null |
Subset
| null |
They don't define the tasks. They briefly mention during the experimental setup that they will assess VLMs in object segmentation and object detection.
|
The target object name, the image with 4 objects, the Wikipedia category/domain of the target object
| null |
Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt)
| null |
Yes
|
Domain/Category of the object based on Wikipedia
|
Random sample (creators defined a task space and sampled from it)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall), Mean IoU (Intersection over Union)
| null | null |
Academia
|
No, no link is provided
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
- Random subset: with randomly selected objects on each image
- Hard subset: with similar objects on each image
| null | null |
UOUO
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
No
| null |
No
|
Grounding
| null | null |
['Procedurally-generated', 'LLM-generated']
|
['Random']
|
['Short free response']
|
['Exact match', 'Soft match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
|
['Mean']
|
billahnagoudiJASMINEArabicGPT2023
|
JASMINE: Arabic GPT Models for Few-Shot Learning
|
Include
| null | null |
The paper introduces a suite of Arabic autoregressive Transformer language models ranging in size and pre-trained on a large and diverse dataset. It also introduces a benchmark for automated and human evaluation of Arabic autoregressive models, with coverage of social biases, harms, and toxicity.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
(broad) Arabic language capabilities
|
No
| null |
Comprehensive
| null |
There are multiple tasks: autocompletion, commonsense inference, word manipulation, news story generation, poetry generation, dialectal generation
|
Varies, but mostly prompt + context if needed
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language)
|
For one of the datasets, they say it's an 8:2 train/test split, with test = 1,675
|
Yes
|
Different for different datasets; poetry dataset: topics, speech transcription dataset: country/dialect.
|
Random sample (creators defined a task space and sampled from it), Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice, Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), Human ratings (text quality, preference, NOT manual scoring of other metrics), Distribution (perplexity, calibration, correlation)
| null | null |
Academia
|
Unclear
| null | null | null |
For one of the datasets, they say it's an 8:2 train/test split, with train = 14,288 and test = 1,675
| null |
Simple Mean
|
No
| null | null |
https://huggingface.co/UBC-NLP/Jasmine-350M
| null |
Not defined
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
| null |
They discuss in the limitations section the fact that they could not cover some important Arabic dialects. While they don't explicitly discuss this in the context of construct validity, I think it's an important construct validity question.
|
simple mean/sum
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions), Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null |
Yes
|
NLP
| null |
Multilinguality
|
['Author-crafted', 'Another benchmark']
|
['Random', 'Convenience', 'Targeted']
|
['Multiple choice', 'Free response']
|
['Exact match', 'Human ratings', 'Distribution']
|
['No definition']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['']
|
['Representative', 'Constructed']
|
['Mean']
|
changDrspiderDiagnosticEvaluation2023
|
Dr.Spider: A Diagnostic Evaluation Benchmark towards Text-to-SQL Robustness
|
Include
| null | null |
The paper proposes Dr Spider a text-to-SQL robustness benchmark. The authors adapt the Spider benchmark by introducing various perturbations and measuring drop in model performance.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
text-to-SQL, natural language understanding, code generation
|
No
|
"the robustness of models with perturbations on each component of the text-to-SQL task"
|
Comprehensive
| null |
Given a natural language query and a data base structure, the model should write a correct SQL query to obtain from the database what the NL query requests.
|
Natural language query + database structure + example of correct SQL query + the results of running the example SQL query on the content of the database
|
The base task is described above. The "meta task" is performing it consistently across small perturbations of the problem.
|
Crowd-sourced task examples (e.g. Prolific-created tasks), Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
15000
|
No
| null |
Random sample (creators defined a task space and sampled from it), Convenience sample (creators found a set of tasks that was readily accessible)
|
Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), The code is executed and results are verified against ground truth results
| null | null |
Mix (multiple authors from industry and academia)
|
Unclear
| null | null |
Test
| null |
the model generates SQL, which is then processed for grading
|
Simple Mean
|
Yes
|
Subsets correspond to different perturbation vectors: a) perturbing the query semantically, b) perturbing the query lexically and syntactically while keeping semantics invariant, c) perturbing the database structure. Further subscores are provided within each.
|
Difference between unperturbed and perturbed performance.
|
https://github.com/awslabs/diagnostic-robustness-text-to-sql
|
Dr.Spider
|
Not defined
|
Yes
|
Yes
|
No
|
Yes
|
No
|
The benchmark is itself realistic
|
No
|
No
| null |
No statistical methods used; just simple means and differences in means.
|
Outputs alone
|
Complete real task (e.g. providing medical advice to real people interactively)
| null |
Composite phenomenon
|
Yes
| null |
No
|
Code Generation
|
Natural Language
| null |
['Crowd-sourced', 'Another benchmark', 'Procedurally-generated', 'LLM-generated']
|
['Random', 'Convenience']
|
['Free response']
|
['Exact match', 'Reward']
|
['No definition']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Complete']
|
['Mean']
|
zengEvaluatingLargeLanguage2024
|
Evaluating Large Language Models at Evaluating Instruction Following
|
Include
| null | null |
This paper introduces LLMBAR, a benchmark specifically designed to evaluate how well LLM evaluators can assess instruction following in LLM outputs. This benchmark tries to evaluate whether "LLM evaluators" themselves can reliably judge how well models follow instructions.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
to evaluate how well LLM evaluators can assess instruction following in LLM outputs
|
Yes
|
"We define it [instruction following] as the ability to correctly parse open-ended instructions and adhere to the specified requirements. This criterion relates to other desirable LLM properties, such as helpfulness."
|
Subset
| null |
The task requires the LLM evaluator to pick the one out of two outputs that better follows a given instruction (only one is correct).
|
A single item consists of an instruction, two outputs (one that follows the instruction, one that doesn't) and a label of which is better.
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
419
|
Yes
|
type (natural or adversarial), instruction source, creation method
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
natural/adversarial subsets
| null |
https://github.com/princeton-nlp/LLMBar
|
LLMBar
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
Yes
|
Yes
|
They use ablation studies on the prompting strategies, test robustness with challenging cases, evaluate multiple LLMs, and show that LLMBar has a high human agreement rate compared to other benchmarks.
|
simple mean; for rating-based evaluations they also measure a "hedging rate"
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Single cohesive phenomenon
|
Not applicable
| null |
Yes
|
Instruction Following
| null | null |
['Author-crafted', 'Another benchmark', 'LLM-generated']
|
['Targeted', 'Criterion']
|
['Multiple choice']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative']
|
['Mean']
|
xuMAgICInvestigationLarge2024
|
MAgIC: Investigation of Large Language Model Powered Multi-Agent in Cognition, Adaptability, Rationality and Collaboration
|
Include
| null | null |
The paper presents a benchmark called MAGIC that is designed to evaluate Large Language Models (LLMs) in multi-agent settings. It evaluates LLMs' capabilities in multi-agent environments through competition-based scenarios and defines seven metrics to measure.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Multi-agent capabilities of LLMs
|
Yes
|
"the essential capabilities of LLMs (Wooldridge, 2009; Minsky, 1988)" in multi-agent systems, which they categorize into: "(1) Judgment and reasoning form the core cognition of agents, crucial for accurate information estimation in uncertain scenarios. (2) Self-awareness and deception are key to enhanced adaptability in agents, vital for multi-agent system. (3) Rationality serves as a metric to gauge the efficiency of an agent's behavior. It directs agents toward making decisions with the aim of optimizing their benefits by considering the potential actions of other agents rather than resorting to impulsive or uninformed actions. (4) Cooperation and coordination are two facets of collaboration, essential for effective teamwork in multi-agent systems."
|
Comprehensive
|
Capabilities: judgment, reasoning, deception, self-awareness, cooperation, coordination, and rationality
|
competition-based scenarios requiring the specified capabilities
|
A single item consists of a specific game scenario (e.g., Prisoner's Dilemma) with defined roles, rules, etc where the LLM must interact with other agents to achieve its objectives.
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Procedurally-generated task examples (e.g. Creating instances from a template)
|
103
|
Yes
|
scenario type, roles, topic settings, game rules, win conditions
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Free response (e.g. summary paragraph, executable code), Extended interaction (e.g. conversation, calling an API and processing the response)
|
Exact Match (accuracy, F1, precision, recall), Win rate
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
capability, performance, role
| null |
https://github.com/cathyxl/MAgIC
|
MAgIC
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
Yes
|
They show a correlation between the area of the radar charts and the win rates, indicating that their metrics effectively capture capabilities relevant for success in multi-agent settings.
|
simple mean to aggregate performance over scenarios and roles
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null |
No
|
Agents
| null | null |
['Author-crafted', 'Procedurally-generated']
|
['Targeted', 'Criterion']
|
['Free response', 'Interaction']
|
['Exact match', 'Reward']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative']
|
['Mean']
|
yanComprehensiveStudyTextattributed2023
|
A Comprehensive Study on Text-attributed Graphs: Benchmarking and Rethinking
|
Include
| null | null |
Propose a dataset. Conduct extensive benchmarking experiments on a wide range of models. Propose topological training.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
text-attribute graph processing
|
Yes
|
"In many real-world graphs, nodes are often associated with text attributes, giving rise to the text-attributed graphs (TAGs)"-p1
|
Comprehensive
| null |
understanding graph topology of text-attributed graphs
|
a graph
| null |
Real task examples (e.g. GitHub issues), Modified from another benchmark (e.g. translation into another language)
|
276,661 nodes, 2,877,927 edges
|
Yes
|
topic area
|
Random sample (creators defined a task space and sampled from it), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train, Validation
|
train: 1,993,101 nodes, 16,735,860 edges; val: 120,564 nodes, 1,225,530 edges
| null | null |
Yes
|
topic area
|
Hits@K
|
https://github.com/sktsherlock/TAG-Benchmark
|
CS-TAG
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null | null |
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
Not applicable
|
see details in appendix B.3
| null |
NLP
|
Extraction
| null |
['Real task', 'Another benchmark']
|
['Random', 'Targeted']
|
['Multiple choice']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
| null |
liCanLargeLanguage2024
|
Can Large Language Models Analyze Graphs like Professionals? A Benchmark, Datasets and Models
|
Include
| null | null |
Propose datasets. Show that LLMs cannot process graphs well. Use the dataset to boost LLM performance.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
graph processing
|
Yes
|
"can LLMs analyze graphs like professionals?"-p2
|
Comprehensive
| null |
given a graph question, a model has to use APIs to solve the problem like human experts
|
a natural language question and a graph
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Crowd-sourced task examples (e.g. Prolific-created tasks), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
512
|
Yes
|
task category, answer difficulty, question type
|
Specific criteria (items were taken from a larger set based on specified rules)
|
Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train
|
train: 29,260
| null | null |
Yes
|
task category, answer difficulty, question type
| null |
https://github.com/BUPT-GAMMA/ProGraph
|
ProGraph, LLM4Graph
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
Yes
|
the authors conduct a human experiment
| null |
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
NLP
|
Extraction
| null |
['Author-crafted', 'Crowd-sourced', 'LLM-generated']
|
['Criterion']
|
['Free response']
|
['Exact match', 'LLM-as-a-Judge']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
| null |
zhangDTGBComprehensiveBenchmark2024
|
DTGB: A Comprehensive Benchmark for Dynamic Text-Attributed Graphs
|
Include
| null | null |
Propose a dataset. Benchmark popular algorithms with this dataset and showcase the limitations of current models in handling dynamic text-attributed graphs.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
dynamic text-attributed graph processing
|
Yes
|
"In those dynamic graphs, nodes and edges are typically associated with text attributes, giving rise to dynamic text-attributed graphs (DyTAGs)."-p1
|
Comprehensive
| null |
Given a graph, a model has to answer a relevant question about the graph
|
graph and time stamp
| null |
Real task examples (e.g. GitHub issues), Modified from another benchmark (e.g. translation into another language)
|
edge: 2,637,689, node: 554,432
|
Yes
|
topic area
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice, Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test, Train, Validation
|
train: edge: 12,309,214, node: 2,587,353; val: edge: 2,637,689, node: 554,432
| null | null |
Yes
|
topic area
|
hits@k
|
https://github.com/zjs123/DTGB
|
DTGB
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null | null |
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
Not applicable
|
counts are estimated based on Table 1
|
No
|
NLP
|
Extraction
| null |
['Real task', 'Another benchmark']
|
['Targeted']
|
['Multiple choice', 'Free response']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
| null |
huangEmbraceDivergenceRicher2024
|
Embrace Divergence for Richer Insights: A Multi-document Summarization Benchmark and a Case Study on Summarizing Diverse Information from News Articles
|
Include
| null | null |
Present a text summarization dataset of news articles expressing diverse opinions on the same events, along with a schema to find them. Present LLM-based evaluation methods for this dataset. Show that LLMs can summarize single documents well but fail to do so for multiple documents.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Text summarization
|
Yes
|
"In the realm of news reporting, each event is often chronicled by multiple sources, providing a rich tapestry of perspectives and insights…we propose the Multi-document Diversity Summarization (MDDS) task, aimed at faithfully illuminating the diverse information presented in multiple sources. Following Laban et al. (2022), we formalize di- verse information as questions and answers where numerous sources can answer the same question, and the corresponding answers extracted from dif- ferent news articles exhibit a variety of opinions or perspectives "-p570
|
Subset
| null |
generate a natural-language summary that effectively captures the diverse information presented within clusters of differently-opinionated news articles centered around the same news event
|
A cluster of news articles and a question about them
| null |
Real task examples (e.g. GitHub issues), Procedurally-generated task examples (e.g. Creating instances from a template)
|
235*10 news articles
|
No
| null |
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Free response (e.g. summary paragraph, executable code)
|
Human ratings (text quality, preference, NOT manual scoring of other metrics)
|
Main results in the paper are human evaluation results, but the authors also propose a method to use an LLM as a judge.
| null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null | null |
No
| null | null |
https://github.com/salesforce/DiverseSumm
|
DIVERSESUMM
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No
|
No
|
No
|
Yes
|
the authors conduct a human experiment
| null |
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Not applicable
| null |
No
|
NLP
|
Summarization
| null |
['Real task', 'Procedurally-generated']
|
['Targeted']
|
['Free response']
|
['Human ratings']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Partial']
| null |
amarOpenAspBenchmarkMultidocument2023
|
OpenAsp: A Benchmark for Multi-document Open Aspect-based Summarization
|
Include
| null | null |
Present a dataset for multi-document open aspect-based summarization. Show that the dataset is of high quality and that it presents a challenge to LLMs.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Text summarization
|
Yes
|
"In query-focused summarization (QFS), a query is highly flexible and can target specific information within particular text. In contrast, aspect-based summarization (ABS) datasets traditionally predefined small sets of generic subtopics within a common topi- cal category on which aspect-based summaries are generated. Open-ABS (OABS; Tan et al., 2020), allows aspects to differ for each source text, yet still just as subtopics in the text. "-[first page, no page number in the paper]
|
Subset
| null |
Given a set of documents on the same topic and an aspect, the task is to output a short aspect-based summary.
|
A cluster of documents and an aspect label
| null |
Crowd-sourced task examples (e.g. Prolific-created tasks), Modified from another benchmark (e.g. translation into another language)
|
192 topics, 596 instances, 6,536 docs
|
No
| null |
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Free response (e.g. summary paragraph, executable code)
|
n-gram (BLEU, ROUGE, chrF)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train, Validation
|
Train: 145 topics, 476 instances, 4,878 docs; Valid: 82 topics, 238 instances, 2,168 docs
| null | null |
No
| null | null |
https://github.com/liatschiff/OpenAsp
|
OPENASP
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
Yes
|
the authors conduct a human experiment for evaluation
| null |
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Not applicable
| null |
No
|
NLP
|
Summarization
| null |
['Crowd-sourced', 'Another benchmark']
|
['Targeted']
|
['Free response']
|
['Soft match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Partial']
| null |
luMathVistaEvaluatingMathematical2024
|
MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts
|
Include
| null | null |
The paper introduces MathVista, a benchmark for mathematical reasoning in visual contexts. This includes algebraic/arithmetic/geometric reasoning as well as interpreting functional plots and chart data. MathVista combines math questions from 28 existing multimodal datasets, plus 3 new datasets hand-annotated from internet sources.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Mathematical reasoning in visual contexts
|
Yes
|
"We propose a task taxonomy to guide the development of MathVista: (1) we identify seven mathematical reasoning types: algebraic reasoning, arithmetic reasoning, geometry reasoning, logical reasoning, numeric common sense, scientific reasoning, and statistical reasoning... and (3) we encompass a diverse array of visual contexts, including natural images, geometry diagrams, abstract scenes, synthetic scenes, as well as various figures, charts, and plots" (2)
|
Comprehensive
|
The phenomenon and corresponding tasks are explicitly laid out in the introduction and very well-motivated, including detailed task definitions in the appendix.
|
"we focus on five primary tasks: figure question answering (FQA), geometry problem solving (GPS), math word problem (MWP), textbook question answering (TQA), and visual question answering (VQA)" (2)
|
The core problem is an image (e.g. a functional chart or table), a question in text, optionally some number of multiple choice options, and a solution. Metadata includes the original benchmark category, task, type of visual context, grade level, and math skill tested.
|
Explicit examples including their metadata are given in the main body text, which is very nice.
|
Human exam questions (e.g. GRE questions), Modified from another benchmark (e.g. translation into another language)
|
5,141
|
Yes
|
original benchmark category, task, type of visual context, grade level difficulty, type of mathematical skill
|
Random sample (creators defined a task space and sampled from it), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall), LLM post-processing (extracting answers, reformatting for automated scoring)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Validation
|
1,000
| null |
Simple Mean
|
Yes
|
Task type
| null |
https://mathvista.github.io/
|
MathVista
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
Yes
|
"One limitation is the dataset coverage. While MATHVISTA encompasses a broad spectrum of tasks and visual contexts, there may be gaps in the representation of certain types of mathematical prob- lems and visuals." (21) (presented in the appendix)
|
simple mean/sum
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
|
Validation is called "testmini" and test itself is not released publicly
|
No
|
Reasoning
|
Mathematics
| null |
['Human exams', 'Another benchmark']
|
['Random', 'Criterion']
|
['Multiple choice', 'Short free response']
|
['Exact match', 'LLM post-processing']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative']
|
['Mean']
|
liuRevisitingGoldStandard2023
|
Revisiting the Gold Standard: Grounding Summarization Evaluation with Robust Human Evaluation
|
Include
| null | null |
Propose a modified summarization salience protocol, curate the Robust Summarization Evaluation (RoSE) benchmark, conduct a comparative study of human evaluation protocols. Evaluate 50 automatic metrics and their variants and demonstrate how the benchmark leads to more statistically stable and significant results.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Text summarization evaluation
|
Yes
|
"We focus on a specific summarization meta-evaluation study on summary salience. Salience is a desired summary quality that requires the summary to include all and only important information of the input article. The human evaluation of summary salience can be conducted in either reference-free or reference-based manners…the latter requires the annotators to assess the information overlap between the system output and reference summary, under the assumption that the reference summary is the gold standard of summary salience…we focus on reference-based evaluation for our human judgment dataset collection"-p4142
|
Subset
| null |
"Specifically, the evaluation process is decomposed into two steps: (1) Atomic Content Unit (ACU) Writing – extracting facts from one text sequence, and (2) ACU Matching – checking for the pres- ence of the extracted facts in another sequence. We formulate the ACU protocol as a recall-based pro- tocol, such that the first step only needs to be per- formed once for the reference summary, allowing for reproducibility and reuse of these units when performing matching on new system outputs. "-p4142
|
Given a reference summary, a system summary, and a set of Atomic Content Units (ACUs), annotators have to decide whether each ACU is present in the system summary (see the aggregation sketch below).
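A minimal sketch of how these binary matching decisions could be aggregated into a summary-level score; this is an assumption for illustration (a recall-style fraction of matched ACUs), not necessarily the paper's exact normalization:

```python
# Hypothetical aggregation of ACU-matching decisions for one system summary:
# the score is the fraction of reference ACUs judged present.

from typing import List

def acu_score(matches: List[bool]) -> float:
    """Fraction of reference Atomic Content Units found in the system summary."""
    if not matches:
        return 0.0
    return sum(matches) / len(matches)

# Example: annotators found 3 of the 4 reference ACUs in the system summary.
print(acu_score([True, True, False, True]))  # -> 0.75
```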
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Crowd-sourced task examples (e.g. Prolific-created tasks), Modified from another benchmark (e.g. translation into another language)
|
1.5k docs, 10.2k Atomic Content Unit (ACU)-level annotations, and around 14k summary-level annotations
|
Yes
|
topic area
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train, Validation
|
val: 1k docs, 11.6k Atomic Content Units (ACU), 8k summaries
| null | null |
Yes
|
topic area
| null |
https://github.com/Yale-LILY/ROSE
|
RoSE
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
Yes
|
the authors conduct a human experiment
| null |
Outputs alone
|
Complete real task (e.g. providing medical advice to real people interactively)
| null |
Single cohesive phenomenon
|
Not applicable
| null |
No
|
NLP
|
Summarization
| null |
['Author-crafted', 'Crowd-sourced', 'Another benchmark']
|
['Targeted']
|
['Multiple choice']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Complete']
| null |
cheangCanLMsGeneralize2023
|
Can LMs Generalize to Future Data? An Empirical Analysis on Text Summarization
|
Include
| null | null |
Propose a novel benchmark. Show that parametric knowledge stored in summarization models significantly affects the faithfulness of the generated summaries on future data. Discuss recommendations to the research community.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Text summarization
|
Yes
|
"Abstractive summarization aims to generate a concise summary that contains the critical information of the source text while ensuring the generated text is fluent and faithful. This paper studies how PLMs that excel on summarizing data originating from the same temporal context as the pre-trained corpus generalize their summarization capabilities to OOD future data."-p16205
|
Subset
| null |
Generalize to data from time periods in or out of the distribution of the model's training data
|
The model is given a news article then asked to summarize it.
| null |
Real task examples (e.g. GitHub issues)
|
12734
|
Yes
|
in/out of distribution, source
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Free response (e.g. summary paragraph, executable code)
|
Human ratings (text quality, preference, NOT manual scoring of other metrics), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null | null |
Yes
|
in/out of distribution, source
| null |
https://github.com/NLP2CT/TempoSum
|
TEMPOSUM
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
Yes
|
the authors conduct a human experiment
| null |
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Not applicable
| null |
No
|
NLP
|
Summarization
| null |
['Real task']
|
['Targeted']
|
['Free response']
|
['Human ratings', 'LLM-as-a-Judge']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Partial']
| null |
liNewsBenchSystematicEvaluation2024
|
NewsBench: A Systematic Evaluation Framework for Assessing Editorial Capabilities of Large Language Models in Chinese Journalism
|
Include
| null | null |
The paper introduces a benchmark to evaluate LLM capabilities in Chinese journalism, with a focus on writing proficiency and safety adherence. It also proposes several GPT-4 based automated evaluation protocols and uses the benchmark to evaluate popular LLMs that can handle Chinese.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
journalistic editorial tasks
|
Yes
|
"this paper introduces NewsBench, a systematic evaluation framework which is focused on assessing the editorial capabilities of LLMs for not only journalistic writing proficiency but also safety adherence. For journalistic writing proficiency, we focus on language fluency, logical coherence, style alignment, and instruction fulfilment, while for safety adherence we consider six facets including civil language, bias and discrimination, personal privacy, social harm, journalistic ethics, and illegal activities."
|
Subset
| null |
Headline Generation, Summarization, Continuation of Writing, Expansion of Writing, and Style Refinement
|
For MCQs, prompts consist of instructions, context, and choices. For short answer questions, prompts consist of an instruction and context.
| null |
Expert-crafted task examples (e.g. hand-written examples)
|
1,267
|
Yes
|
human-written answers and explanations, domain
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice, Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
| null |
MCQ vs short answer questions, different facets of safety adherence (e.g., ethics, privacy, bias)
| null |
https://github.com/IAAR-Shanghai/NewsBench
|
NewsBench
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null | null |
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Composite phenomenon
|
Yes
| null |
No
|
NLP
| null | null |
['Expert-crafted']
|
['Targeted']
|
['Multiple choice', 'Short free response']
|
['Exact match', 'LLM-as-a-Judge']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
| null |
dengMobilebenchEvaluationBenchmark2024
|
Mobile-Bench: An Evaluation Benchmark for LLM-based Mobile Agents
|
Include
| null | null |
Mobile-Bench is a novel benchmark for evaluating LLM agents' capabilities in mobile device interactions. It creates a more realistic environment for benchmarking that combines API and UI operations, evaluates multi-app coordination, and introduces more nuanced evaluation metrics.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Mobile phone agent interaction capabilities
|
Yes
|
agent interactions in "a mobile phone environment that includes a platform supporting both API and UI interactions, and a corresponding dataset with multi-APP tasks."
|
Subset
| null |
The task requires an LLM agent to accomplish mobile phone operations based on a prompt instruction; this involves UI elements, API calls, and different apps.
|
a user query (instruction), target application(s), and CheckPoints (expected execution path)
|
Tasks have increasing complexity: SAST (single-app-single-task), SAMT (single-app-multi-task), and MAMT (multi-app-multi-task)
|
Real task examples (e.g. GitHub issues), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
832
|
Yes
|
Task complexity (SAST/SAMT/MAMT), API calls, CheckPoints
|
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Extended interaction (e.g. conversation, calling an API and processing the response), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall), checkpoint coverage
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
by task complexity and checkpoints
| null |
https://github.com/XiaoMi/MobileBench
|
Mobile-Bench
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
Yes
|
Analysis of different application categories to ensure comprehensive coverage, and ablation studies to validate the importance of APIs and planning/thought components in the agent's performance.
|
simple mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people), Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null |
Yes
|
Agents
|
Web
| null |
['Real task', 'LLM-generated']
|
['Convenience', 'Targeted', 'Criterion']
|
['Interaction', 'Structured']
|
['Exact match', 'Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Partial', 'Representative']
|
['Mean']
|
romeroCVQACulturallydiverseMultilingual2024
|
CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark
|
Include
| null | null |
The paper introduces CVQA, a culturally diverse multilingual Visual Question Answering benchmark that includes 10,000 questions across 31 languages and 30 countries, incorporating input from native speakers and cultural experts. The benchmark measures cultural understanding and multilingual visual question answering.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
cultural understanding and multilingual visual question answering
|
Yes
|
The phenomenon is defined as a model’s ability to understand local common-ground knowledge.
"Culture is hard to define, and our CVQA ultimately serves only as a proxy to benchmark the model’s understanding of culture through local common knowledge. ...we follow Adilazuarda et al. by using common-ground knowledge (e.g., information surrounding local dishes, history, places, etc. that is generally shared by the people within the region) as a proxy of culture."
|
Subset
| null |
multiple-choice visual question answering
|
an image, question, answer options, the correct answer
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Expert-crafted task examples (e.g. hand-written examples), Crowd-sourced task examples (e.g. Prolific-created tasks), Modified from another benchmark (e.g. translation into another language)
|
10k
|
Yes
|
Country-Language pair, language and script used, question category (10 types, e.g., Food, Pop Culture, Geography), image source (self-made vs. web-sourced), question type (e.g., “what,” “how,” “where”) via string-matching heuristics
|
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Language: English vs. local language performance
Question categories: e.g., Food, Pop Culture, People & Everyday Life
Image source: self-made vs. web images
Prompt type: location-aware vs. location-agnostic
Input format: multiple-choice vs. open-ended answers
| null |
https://huggingface.co/datasets/afaji/cvqa
|
CVQA
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
No
| null |
simple mean
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions), Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null | null |
Knowledge
|
Cultural
| null |
['Author-crafted', 'Expert-crafted', 'Crowd-sourced', 'Another benchmark']
|
['Convenience', 'Targeted', 'Criterion']
|
['Multiple choice', 'Free response']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Representative', 'Constructed']
|
['Mean']
|
shaoNYUCTFBench2024
|
NYU CTF Bench: A Scalable Open-Source Benchmark Dataset for Evaluating LLMs in Offensive Security
|
Include
| null | null |
The benchmark aims to assess the capability of LLMs in solving CTF challenges autonomously. The NYU CTF Bench includes CTF challenges from NYU’s CSAW cybersecurity events.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
cybersecurity
|
No
| null |
Subset
| null |
CTF challenges
|
A CTF challenge with a challenge description, files, host/port information, validation requirements, etc.
| null |
Human exam questions (e.g. GRE questions), Expert-crafted task examples (e.g. hand-written examples)
|
200
|
Yes
|
challenge name, description, category, difficulty level, host/port info, files needed, and valid flag
|
Specific criteria (items were taken from a larger set based on specified rules)
|
Free response (e.g. summary paragraph, executable code), Extended interaction (e.g. conversation, calling an API and processing the response), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
| null |
tasks are sourced from NYU’s annual CSAW CTF competitions
|
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
challenge type
| null |
https://github.com/NYU-LLM-CTF/nyuctf_agents
|
NYU CTF Bench
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
Yes
|
They discuss current tool support limitations and the need for more diverse sources of CTF challenges
|
Simple mean
|
Outputs alone
|
Complete real task (e.g. providing medical advice to real people interactively), Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null |
No
|
Agents
|
Web
| null |
['Human exams', 'Expert-crafted']
|
['Criterion']
|
['Free response', 'Interaction', 'Structured']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Complete', 'Representative']
|
['Mean']
|
chenAreWeRight2024
|
Are We on the Right Way for Evaluating Large Vision-Language Models?
|
Include
| null | null |
The authors review general multi-modal capability benchmarks and find problems to do with data leakage and questions answerable without any visual input. They automatically and then manually filter instances from these benchmarks, resulting in MMStar, a "vision-indispensible" multi-modal benchmark. Evaluation and ablation studies show that MMStar mitigates leakage better than existing benchmarks.
|
Second instance I've seen of a benchmark that is novel for being a filtration of existing benchmarks (after SADE (maExaminationCompositionalityLarge2024)). Both seem to prioritise construct validity in a helpful way.
|
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
general multi-modal capabilities
|
Yes
|
"The core capabilities consist of two perception-related dimensions, Coarse Perception (CP) and Fine-grained Perception (FP), two reasoning-related dimensions, Instance Reasoning (IR) and Logical Reasoning (LR), and two knowledge-related dimensions, Science & Technology (ST) and Math- ematics (MA)." (15)
|
Comprehensive
| null |
Given an image and a question in text about that image, answer a multiple choice question. There are many possible topics, including mathematics, emotion perception, geography, etc., and questions may require multi-step reasoning or simply visual information retrieval.
|
An image, a multiple-choice question, a correct answer, and the original benchmark from which the instance is sourced.
| null |
Modified from another benchmark (e.g. translation into another language)
|
1,500
|
Yes
|
capability type, capability subtype, benchmark of origin
|
Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall), Custom metrics: multi-modal gain, multi-modal leakage
| null |
Filtered from 6 other benchmarks
|
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Core capability type
| null |
https://mmstar-benchmark.github.io/
|
MMStar
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
Yes
|
"The evaluation samples for constructing the MMStar benchmark should meet three fundamental criteria: 1) Visual dependency. The collected samples can be correctly an- swered only based on understanding the visual content; 2) Minimal data leakage. The collected samples should minimize the risk of unintentional inclusion in LLMs’ training corpus, or be effec- tively transformed from uni-modal to multi-modal formats to prevent LLMs from ”recalling” the correct answers; 3) Requiring advanced multi-modal capabilities for resolution. In addition to ensuring fairness and reliability by adhering to the above criteria, we also aim for samples to cover various difficulty levels. We expect to comprehensively capture LVLMs’ multi-modal capabilities with succinct high-quality samples" (6)
|
simple mean/sum, plus comparisons to scores from the base LLMs comprising the multi-modal models (called "multi-modal gain" and "multi-modal leakage" statistics)
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null |
Yes
|
VQA
| null | null |
['Another benchmark']
|
['Criterion']
|
['Multiple choice']
|
['Exact match', 'Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Mean']
|
dumpalaSUGARCREPEDatasetVisionlanguage2024
|
SUGARCREPE++ Dataset: Vision-Language Model Sensitivity to Semantic and Lexical Alterations
|
Include
| null | null |
SugarCrepe++ is a multimodal benchmark for evaluating semantic and lexical understanding. The benchmark improves upon prior compositional reasoning tasks by having the model choose between two semantically equivalent but lexically dissimilar correct captions, and one lexically similar but semantically dissimilar hard negative caption for an image. The benchmark is publicly available, human-validated, and can be used to evaluate multi-modal and unimodal LLMs.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
semantic understanding, compositional reasoning, compositionality,
|
Yes
|
Semantic equivalence is when two sentences convey the same meaning, and semantic similarity is when two sentences describe the same topic. Lexical refers to words/vocabulary and hence, lexical-similarity compares a pair of sentences at the word-level. In particular, a higher overlap of vocabulary and order of occurrence should lead to higher lexical similarity.
|
Comprehensive
|
The two sub-elements are semantic equivalence detection and lexical sensitivity. The appendix also defines syntactic similarity metrics, but they relate to the dataset's construction instead of the dataset's purpose.
|
Models are given an image, and must choose between three captions: a pair of semantically equivalent but lexically different correct captions, and one hard negative caption. The triplet ensures there are pairs of semantically-equivalent, semantically-opposite, lexically-similar, and lexically-dissimilar sentences. Both multimodal and unimodal language models are evaluated.
|
Each sample in SugarCrepe++ dataset consists of an image and a corresponding triplet of captions: a pair of semantically equivalent but lexically different positive captions and one hard negative caption.
|
The appendix reports several custom metrics to measure the syntactic and lexical similarity (SLS) between two sentences, in addition the VERA and grammar model scores of the benchmark.
|
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
4757
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
|
The paper defines two custom metrics: ITT_{hit} for multi-modal image-to-text (ITT) models, and TOT_{hit} for uni-modal text-only (TOT) models. I is the image, P1 and P2 are the positive captions, and N is the hard negative caption. Both metrics are binary. ITT_{hit} is 1 when (p(P1|I) > p(N|I)) ∧ (p(P2|I) > p(N|I)) for likelihood p. TOT_{hit} is 1 when (p(P1|P2) > p(N|P2)) ∧ (p(P2|P1) > p(N|P1)), also with likelihood p. The log likelihood is proportional to the cosine similarity between the respective embeddings.
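A minimal sketch (not the authors' released code) of how these binary hit metrics could be computed from precomputed likelihood or cosine-similarity scores; all function and variable names here are illustrative assumptions:

```python
# Illustrative sketch of the SugarCrepe++ hit metrics, assuming the scores
# (likelihoods or cosine similarities) have already been computed upstream.

def itt_hit(p1_given_img: float, p2_given_img: float, neg_given_img: float) -> int:
    """ITT_hit = 1 iff the image-to-text model ranks BOTH positive captions
    above the hard negative for the same image."""
    return int(p1_given_img > neg_given_img and p2_given_img > neg_given_img)

def tot_hit(p1_given_p2: float, neg_given_p2: float,
            p2_given_p1: float, neg_given_p1: float) -> int:
    """TOT_hit = 1 iff the text-only model prefers each positive caption over
    the hard negative when conditioned on the other positive caption."""
    return int(p1_given_p2 > neg_given_p2 and p2_given_p1 > neg_given_p1)

# Example usage with hypothetical cosine-similarity scores (the paper notes the
# log likelihood is proportional to the cosine similarity of the embeddings):
print(itt_hit(0.81, 0.78, 0.65))        # -> 1
print(tot_hit(0.90, 0.70, 0.88, 0.72))  # -> 1
```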
|
SugarCrepe++ uses image-caption pairs from MS-COCO and a fine-tuned Mistral 7B to generate a lexically different but semantically equivalent image caption, and then a coherent and fluent hard negative (incorrect) caption; both are automatically and manually validated.
|
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
The type of hard negative: Swap Object, Swap Attribute, Replace Object, Replace Attribute, Replace Relation
| null |
https://github.com/Sri-Harsha/scpp
|
SugarCrepe++
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
Yes
|
The authors highlight that, given the sensitivity of LLMs to prompt formatting and adversarial prompting, lexical structure likely influences semantic understanding. However, most benchmarks evaluate semantic similarity without considering lexical influence, and fail to investigate how models understand semantic equivalence given controlled lexical constraints. The task itself -- choosing the correct caption -- appears to be standard for the compositional reasoning literature, but the paper did not ground the task in existing literature beyond its use in current benchmarks.
|
simple mean
|
Model access required (e.g. logits)
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
|
The paper does not justify the caption multiple-choice task as a complete real task nor ground the evaluation in a real-world scenario, but given its prevalence in the compositional reasoning literature, it could be more representative than presented.
|
Composite phenomenon
|
No
| null |
No
|
Language Modelling
|
Robustness
| null |
['Author-crafted', 'Another benchmark', 'Procedurally-generated', 'LLM-generated']
|
['Convenience']
|
['Multiple choice']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Mean']
|
jiLargeLanguageModels2024
|
Large Language Models as Automated Aligners for benchmarking Vision-Language Models
|
Include
| null | null |
The authors utilise LLMs to produce question-answer-reasoning triplets from COCO images. The result is Auto-Bench, a general multi-modal capability and value alignment benchmark dataset.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
"human capacities and values"
|
Yes
|
"We adopt a capacity-oriented perspective to generate visual questions, covering a broad spectrum of perception, reasoning, planning, and value alignment" (4)
Perception: "Perception-oriented questions evaluate the model’s proficiency in comprehending, interpreting, and engaging with the objects or scenes in an image"
Reasoning: "Visual reasoning involves the ability to provide logical responses based on a holistic understanding of visual information"
Planning: "goal-directed questions that require VLMs to perceive objects in an image, understand the function of each object, and integrate the rich knowledge inherent in LLMs to achieve target goals"
Value Alignment: "aligning model behaviors with human values and preventing unintended harm or deviation from expected outcomes" (5)
|
Comprehensive
| null |
Answer a general question about an image, either multiple choice or free-form. Questions can concern compositional aspects of the image, spatial reasoning and planning, etc., or can be "unethical" requests that should be refused.
|
An image, a question with potential multiple choice options, an answer, and some given rationale for the answer.
| null |
Crowd-sourced task examples (e.g. Prolific-created tasks), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
28.5K
|
Yes
|
capacity, skill, sub-skill, rationale for correct answer
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Multiple choice, Short free response (e.g. single word or number), Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics)
| null | null |
Mix (multiple authors from industry and academia)
|
No, no link is provided
| null | null |
Train, Validation
|
3.504M
| null |
Simple Mean
|
Yes
|
sub-skill (e.g. counting, counterfactual reasoning, physics/biology/chemistry, privacy compliance, ...)
| null | null |
Auto-Bench
|
Contested
|
No
|
No
|
Yes
|
Yes
|
Yes
|
No
|
No
|
Yes
|
"To verify the rationality of our curated data, we adopt human verification for assessment... The results indicate that the data generated by Auto-Bench largely meets human acceptance in terms of both the rationality of alignment across different dimensions" (4)
|
simple mean/sum
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
|
task_dataset_size_extra is the training set, task_dataset_size reports the human annotator-validated validation set
|
No
|
Alignment
|
Alignment
| null |
['Crowd-sourced', 'LLM-generated']
|
['Convenience']
|
['Multiple choice', 'Short free response', 'Free response']
|
['Exact match', 'LLM-as-a-Judge']
|
['Contested']
|
['No']
|
['No']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Mean']
|
caoWorstPromptPerformance2024
|
On the Worst Prompt Performance of Large Language Models
|
Include
| null | null |
This paper introduces RobustAlpacaEval, a benchmark to evaluate the worst-case prompt performance of LLMs across semantically equivalent real-world queries. It shows that ChatGPT and six open-source LLMs from the Llama, Mistral, and Gemma families are highly sensitive to prompt phrasing, that characterizing the worst prompt is difficult, and that common techniques for improving prompt robustness offer limited success.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Prompt robustness
|
Yes
|
"Resilience of LLMs to prompt variations." In particular, "model performance across semantically equivalent and syntactically fluent prompts."
|
Subset
| null |
Follow the instructions presented in 10 semantically-equivalent prompts.
|
The task dataset has 2 columns: "instruction" contains the original instruction, and "paraphrases" contains 10 semantically-equivalent paraphrases of the original instruction.
| null |
Modified from another benchmark (e.g. translation into another language), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
100
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible)
|
Free response (e.g. summary paragraph, executable code)
|
LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics)
| null |
The original instruction is taken from an existing benchmark and the paraphrases are synthetically generated with GPT4 and then manually reviewed.
|
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
No
| null | null |
https://github.com/bwcao/RobustAlpacaEval/blob/main/RobustAlpacaEval.jsonl
|
RobustAlpacaEval
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
No
| null |
Mean, worst and best out of 11
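A minimal sketch of this aggregation, assuming one list of 11 judge scores per instruction (the original prompt plus 10 paraphrases); the function name and data layout are illustrative, not the authors' code:

import statistics

def aggregate_robustness(scores_per_instruction):
    # scores_per_instruction: list of lists, each holding 11 per-prompt scores for one instruction
    means = [statistics.mean(s) for s in scores_per_instruction]
    worsts = [min(s) for s in scores_per_instruction]
    bests = [max(s) for s in scores_per_instruction]
    return {
        "average": statistics.mean(means),
        "worst": statistics.mean(worsts),
        "best": statistics.mean(bests),
    }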
|
Outputs alone
| null |
People would not ask a model the same question 10 times, but they do expect the same answer regardless of the wording.
|
Single cohesive phenomenon
|
Not applicable
| null |
No
|
Language Modelling
|
Robustness
| null |
['Another benchmark', 'LLM-generated']
|
['Convenience']
|
['Free response']
|
['LLM-as-a-Judge']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['']
|
['Mean']
|
liFIREDatasetFeedback2024
|
FIRE: A Dataset for Feedback Integration and Refinement Evaluation of Multimodal Models
|
Include
| null | null |
This paper introduces FIRE, a feedback-refinement dataset, consisting of 1.1M multi-turn conversations that are derived from 27 source datasets, empowering VLMs to spontaneously refine their responses based on user feedback across diverse tasks. The authors also develop the FIRELLaVA model by fine-tuning LLaVA on FIRE-100K and FIRE-1M, and they show remarkable feedback-refining capability, outperforming untrained VLMs by 50%.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Feedback-refining capability of VLMs
|
Yes
|
VLMs may sometimes produce undesirable outputs, possibly due to omitting important details in images or misunderstanding the instructions, which prompts the need for the feedback-refining capability beyond the normal instruction-following ability. This capability enables VLMs to spontaneously refine their responses based on user feedback, enhancing the efficiency and smoothness of interactions between users and visual assistants.
|
Comprehensive
| null |
FIRE dataset and FIRE benchmark consist of various datasets covering tasks including visual question answering, image captioning, complex reasoning, OCR, chart/table/document analysis, math problems, science question answering, etc.
|
Each sample consists of an image, a related question, the ground truth answer, and a multi-turn conversation spanning n turns. This conversation includes an initial response, textual feedback, and a refined answer generated by GPT-4o in response to the feedback.
| null |
Modified from another benchmark (e.g. translation into another language), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
11,000 samples
|
Yes
|
number of turns, responses lengths, score on the feedback
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Multiple choice, Short free response (e.g. single word or number), Free response (e.g. summary paragraph, executable code), Extended interaction (e.g. conversation, calling an API and processing the response)
|
n-gram (BLEU, ROUGE, chrF); for dialogue assessment, they introduce four metrics: average turn (AT), average dialogue refinement (ADR), average turn refinement (ATR), and refinement ratio (RR).
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train
|
Train sets: 1M and 100K samples
| null |
Simple Mean
|
No
| null | null |
https://huggingface.co/datasets/PengxiangLi/FIRE
|
FIRE Bench
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Composite phenomenon
|
Yes
| null |
No
|
Language Modelling
|
Updating
| null |
['Another benchmark', 'LLM-generated']
|
['Convenience']
|
['Multiple choice', 'Short free response', 'Free response', 'Interaction']
|
['Soft match', 'Reward']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
|
['Mean']
|
huangEffiBenchBenchmarkingEfficiency2024
|
EffiBench: Benchmarking the Efficiency of Automatically Generated Code
|
Include
| null | null |
The paper introduces EffiBench, a benchmark of LeetCode problems designed to assess the time and memory efficiency of LLM-written programs. Problems are filtered from HuggingFace to problems corresponding to conventional algorithmic problem types (DFS, binary search, greedy, ...).
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
writing efficient code
|
No
| null |
Comprehensive
| null |
Generate a snippet of python code to solve a LeetCode problem, matching the desired output behaviour. Solutions are checked against hidden unit tests for correctness and later efficiency.
|
A LeetCode problem consists of a prompt, followed by explicit "Input" and "Output" descriptions, and then a correct example. The dataset contains canonical human-written solutions, which are the most upvoted on LeetCode forums.
| null |
Human exam questions (e.g. GRE questions), Real task examples (e.g. GitHub issues)
|
1,000
|
Yes
|
difficulty level, algorithm type
|
Specific criteria (items were taken from a larger set based on specified rules)
|
Free response (e.g. summary paragraph, executable code), Structured response (e.g. valid JSON, API call alone)
|
Execution time and memory usage efficiency; unit test correctness
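A minimal sketch of measuring time and peak-memory cost for one already-correct solution using only the standard library; this is illustrative, not the paper's evaluation harness:

import time
import tracemalloc

def profile_solution(fn, *args):
    # Assumes fn has already passed the hidden unit tests; measures a single execution.
    tracemalloc.start()
    start = time.perf_counter()
    fn(*args)
    elapsed = time.perf_counter() - start
    _, peak_bytes = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return elapsed, peak_bytes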
|
metric_face_validity is "No" because efficiency scores are computed only on the questions an LLM answers correctly, meaning the weakest LLMs can receive the highest efficiency scores: they only solve the simplest questions, which admit less variation in possible solutions.
|
The authors filter by LeetCode interview frequency, meaning task instances are "real examples" by being coding problems encountered in technical interviews.
|
Academia
|
Yes
| null | null |
Test
| null | null |
Weighted Mean
|
Yes
|
Scores by algorithm problem type as well as pass@1 accuracy
|
pass@k (any correct answer in k trials)
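Under this definition, a problem counts as solved if any of k sampled solutions passes the tests; a minimal sketch (names illustrative, not the authors' code):

def pass_at_k(results_per_problem, k):
    # results_per_problem: list of lists of booleans, one list of at least k trial outcomes per problem
    solved = [any(trials[:k]) for trials in results_per_problem]
    return sum(solved) / len(solved)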
|
https://github.com/huangd1999/EffiBench
|
EffiBench
|
Widely-agreed
|
Yes
|
No
|
Yes
|
Yes
|
Yes
|
No
|
Yes
|
No
| null |
simple mean/sum
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people), Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null |
No
|
Code Generation
| null | null |
['Human exams', 'Real task']
|
['Criterion']
|
['Free response', 'Structured']
|
['Reward']
|
['Widely-agreed']
|
['Yes']
|
['No']
|
['No comparison made']
|
['No']
|
['Partial', 'Representative']
|
['Mean']
|
chaoJailbreakBenchOpenRobustness2024
|
JailbreakBench: An Open Robustness Benchmark for Jailbreaking Large Language Models
|
Include
| null | null |
The paper proposes a benchmark for jailbreaking LLMs (i.e. eliciting harmful content through adversarial attacks). They provide a dataset, Python package, and leaderboard. Each score of the benchmark is a combination of Model + Defense + Threat Model.
|
Attacks are adaptive: the strongest attack can be chosen to target the defense directly.
|
General Capability (A broadly useful ability, which could be relevant to multiple applications)
| null |
No
|
Inability to elicit harmful content from LLMs.
|
Subset
| null |
Elicit a harmful response from an LLM
|
Kind of Behavior + Goal (Query) + Target (affirmative response) + Category + Source.
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Expert-crafted task examples (e.g. hand-written examples), Modified from another benchmark (e.g. translation into another language)
|
100 harmful 100 benign
|
Yes
|
Source for each item
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Free response (e.g. summary paragraph, executable code)
|
LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics)
|
Attack Success Rate: What percentage of items have at least one response scored "harmful".
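A minimal sketch of this computation, assuming per-item judge labels are already available; names are illustrative, not the authors' code:

def attack_success_rate(judged):
    # judged: dict mapping item id -> list of booleans, True if a response was judged harmful
    successes = [any(labels) for labels in judged.values()]
    return 100.0 * sum(successes) / len(successes)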
|
Only 55% of the data points are novel; the rest are copied from existing benchmarks.
|
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Validation
|
Judge validation data 300 rows.
| null |
Simple Mean
|
No
| null | null |
https://github.com/JailbreakBench
|
JailbreakBench
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
Yes
|
No
|
No
| null |
Simple mean; no inferential statistics (even though the LLM-as-a-judge has fairly low accuracy).
|
Outputs alone
|
Complete real task (e.g. providing medical advice to real people interactively)
| null |
Single cohesive phenomenon
|
No
| null |
Yes
|
Alignment
|
Alignment
| null |
['Author-crafted', 'Expert-crafted', 'Another benchmark']
|
['Targeted']
|
['Free response']
|
['LLM-as-a-Judge']
|
['Contested']
|
['Yes']
|
['Yes']
|
['Comparison made']
|
['No']
|
['Complete']
|
['Mean']
|
wangPictureWorthThousand2024
|
Is A Picture Worth A Thousand Words? Delving Into Spatial Reasoning for Vision Language Models
|
Include
| null | null |
This paper introduces SpatialEval, a multimodal spatial reasoning dataset with four subtasks. SpatialEval tasks include map reading, maze navigation, locating objects on a grid, and QA from captioned images. Ablation studies show that LVLMs primarily use text over visual cues when provided.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Spatial reasoning
|
Yes
|
"Spatial reasoning, in particular, is fundamental to everyday human activities such as navigating environments, understanding maps, and manipulating objects. It encompasses skills that are crucial for both survival and higher-order cognition, including the ability to navigate through space, recognize patterns, and deduce relationships from spatial configurations" (2)
|
Subset
| null |
A spatial reasoning question (naming objects at coordinates, counting right turns in a maze, ...) is presented as a multiple choice question. The context comes in one of three modalities: image only, caption only, and image with caption.
|
An image, a textual description / caption of the image, a multiple choice question, and a correct answer. Example could be an artificial grid maze and the question "How many right turns are there on the provided path from S to E?"
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Unclear
| null |
No
| null |
Unknown
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Spatial reasoning subtask (spatial map, maze navigation, spatial grid)
| null |
https://github.com/jiayuww/SpatialEval
|
SpatialEval
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
mean with "error bars from 3 runs at temperature 0.2" (unsure if this is a standard error or just the range in scores) (17)
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null |
No
|
Reasoning
|
Spatial
| null |
['Author-crafted', 'Unknown']
|
['Unknown']
|
['Multiple choice']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
|
['Mean', 'Std']
|
kasaiRealTimeQAWhats2023
|
RealTime QA: What's the Answer Right Now?
|
Include
| null | null |
This paper introduces REALTIME QA, a dynamic question answering platform that evaluates systems' ability to answer questions about the current world. New questions requiring up-to-date information are released weekly. The paper presents the platform and evaluates strong baselines built on large language models (like GPT-3 and T5) combined with information retrieval (web search, DPR). Results highlight the importance of timely retrieval but also show models may provide outdated answers when retrieval is insufficient.
|
Key contributions include: (1) Proposing REALTIME QA, a novel dynamic benchmark for evaluating QA systems on their ability to use real-time information. (2) Establishing a regular (weekly) cycle for question release and evaluation. (3) Providing strong baseline results using LLMs augmented with different information retrieval techniques. (4) Analyzing the performance and failure modes of current systems on timely QA. (5) Making the platform and results publicly available.
|
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Real-time question answering; Reasoning with up-to-date information; Temporal awareness in QA systems.
|
Yes
|
The ability of a QA system to correctly answer questions about novel events or rapidly changing information necessitates access to and processing of the most current information available, unlike systems relying solely on static knowledge snapshots.
|
Subset
|
To overcome the limitations of static QA datasets and drive research towards systems capable of handling continuously evolving world knowledge and providing timely answers.
|
Given a natural language question released at a specific time, whose answer depends on the current state of the world, provide the correct, up-to-date answer. This typically requires querying external, real-time information sources.
|
A question released weekly via the REALTIME QA platform, requiring a factual answer reflecting the world state at that time. The platform manages questions and evaluates submitted answers.
|
Questions are manually generated by the benchmark organizers to specifically require timely information. They cover diverse topics and can be short-answer or yes/no. The benchmark is ongoing and dynamic.
|
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions)
|
Dynamic/Ongoing. The dataset grows each week. The paper reports on results gathered over a year.
|
Yes
|
Question Release Timestamp, Question Type (Short-Answer/YesNo), Answer Type (Person, Org, Loc, Date, Num, Other), Required Timeliness category.
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall)
|
Exact Match (EM) and F1 score.
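A minimal sketch of SQuAD-style EM and token-overlap F1 for short answers; answer normalization (articles, punctuation) is simplified relative to the standard script:

from collections import Counter

def exact_match(pred, gold):
    return int(pred.strip().lower() == gold.strip().lower())

def token_f1(pred, gold):
    p, g = pred.lower().split(), gold.lower().split()
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)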
|
The authors manually create new questions each week designed to require knowledge of recent events or information that frequently changes.
|
Mix (multiple authors from industry and academia)
|
Yes
|
Benchmark operates on a weekly cycle. Baselines use GPT-3 (text-davinci-002), T5-11B, DPR, and Google Custom Search API. Details baseline configurations. Evaluation interface shown.
|
REALTIME QA's key innovation is its dynamic evaluation framework, moving beyond static datasets to continuously assess performance on questions requiring current knowledge. It highlights the challenges models face in staying up-to-date and avoiding reliance on potentially outdated parametric memory.
|
Test
| null |
Answers are expected to be concise factual strings or "Yes" / "No".
|
Simple Mean
|
Yes
|
Performance analyzed by question type, answer type, required timeliness category, and baseline system configuration (retrieval method, base model)
| null |
https://realtimeqa.github.io/
|
RealTime QA
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
The benchmark is itself realistic
|
No
|
Yes
|
The benchmark's dynamic, ongoing nature is its core validity claim for measuring real-time QA ability. Questions are manually created to ensure they test timely knowledge. Performance analysis based on timeliness requirements further supports its construct validity.
|
Exact Match (EM), F1 Score (%)
|
Outputs alone
|
Complete real task (e.g. providing medical advice to real people interactively)
|
The dynamic nature and focus on current events make it highly representative of real-time information needs.
|
Single cohesive phenomenon
|
No
| null |
No
|
Language Modelling
|
Updating
| null |
['Real task', 'Author-crafted']
|
['Targeted']
|
['Short free response']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Complete']
|
['Mean']
|
akhbariSETLEXSEMCHALLENGEUsing2024
|
SETLEXSEM CHALLENGE: Using Set Operations to Evaluate the Lexical and Semantic Robustness of Language Models
|
Include
| null | null |
This paper introduces a synthetic benchmark designed to evaluate the robustness of large language models (LLMs) in performing set operations under lexical and semantic variation.
The benchmark systematically alters input features like token type, length, frequency, and semantic similarity to test LLMs' ability to generalize across incidental variations, i.e. their System 2 robustness.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Robustness
|
Yes
|
"Robustness in this context is System 2 robustness and requires that a perfect intelligent system exhibit no variance in task performance as incidental aspects of the input vary."
|
Subset
| null |
The task involves computing set operations (union, intersection, difference, symmetric difference) given two sets, with variations in operand size, token type (numbers or words), token length, token frequency, semantic similarity, prompting method, demonstration phrasing, and number of in-context demonstrations.
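A minimal sketch of computing ground-truth answers for the four operations on a sampled pair of operand sets; the sampling of token types, lengths, and other hyperparameters is omitted:

def ground_truth(a, b):
    # a, b: Python sets of tokens (numbers or words)
    return {
        "union": a | b,
        "intersection": a & b,
        "difference": a - b,
        "symmetric difference": a ^ b,
    }

# Example: ground_truth({"apple", "pear"}, {"pear", "plum"})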
|
A prompt with this template: set construction, task definition, demonstrations, and final instructions.
| null |
Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
Not directly reported, calculated: 12,800 configurations × 50 samples per configuration = 640,000 prompts generated
|
No
| null |
Random sample (creators defined a task space and sampled from it), Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Short free response (e.g. single word or number), Free response (e.g. summary paragraph, executable code), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Industry
|
The code to generate it is available
| null | null |
Test
| null |
The expected response format is one of the hyperparameters of the generated prompts.
|
Simple Mean
|
Yes
|
Scores by prompt generation hyperparameter: set operations, operand sizes, token types, token length, semantic grouping, prompt style, and number of demonstrations.
| null |
https://github.com/amazon-science/SetLexSem-Challenge
|
SetLexSem Challenge
|
Contested
|
Yes
|
Yes
|
Yes
|
No
| null |
No
| null |
Yes
|
The authors explicitly discuss the construct validity of the SetLexSem benchmark, particularly in relation to its goal of measuring System 2 robustness, defined as invariance to incidental task features. They argue that the benchmark validly captures this construct by systematically manipulating those incidental features and observing variance in performance.
|
Mean and standard deviation
|
Outputs alone
|
Low ecology
| null |
Composite phenomenon
|
Yes
| null |
Yes
|
Language Modelling
|
Robustness
| null |
['Procedurally-generated', 'LLM-generated']
|
['Random', 'Convenience', 'Targeted']
|
['Short free response', 'Free response', 'Structured']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Mean']
|
tanBenchmarkingImprovingTemporal2023
|
Towards Benchmarking and Improving the Temporal Reasoning Capability of Large Language Models
|
Include
| null | null |
This paper introduces TEMPREASON, a comprehensive probing dataset designed to evaluate the temporal reasoning capabilities of LLMs across three hierarchical levels (Basic, Advanced, Complex) grounded in Allen's interval algebra. The paper also proposes a novel learning framework involving temporal span extraction and time-sensitive reinforcement learning to enhance LLM temporal reasoning. Experiments show TEMPREASON is challenging for current LLMs, and the proposed framework effectively improves performance.
|
Key contributions include: (1) Creating TEMPREASON, a large-scale probing dataset for temporal reasoning with questions categorized into three difficulty levels based on established temporal logic. (2) Proposing a novel two-stage framework (TempReasoning) specifically designed to improve LLM temporal reasoning. (3) Evaluating several state-of-the-art LLMs on TEMPREASON, identifying their limitations, particularly on more complex reasoning levels. (4) Demonstrating the effectiveness of the proposed TempReasoning framework in enhancing LLM performance on the benchmark.
|
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Temporal reasoning capability of Large Language Models.
|
Yes
|
The ability to understand and infer temporal relationships between events described in text. This includes recognizing the 13 basic temporal relations in Allen's interval algebra, composing these relations (e.g., transitivity), and performing multi-step deductive reasoning based on chains of temporal facts.
|
Comprehensive
|
To provide a systematic and less biased dataset for probing LLM temporal reasoning compared to previous datasets, and to facilitate the development of methods specifically aimed at improving this capability.
|
Temporal Reasoning QA: Given a textual context and a question targeting a specific temporal reasoning skill (Basic, Advanced, or Complex based on Allen's algebra), select the correct answer from multiple choices.
|
An instance includes a context passage, a question testing temporal reasoning, and several multiple-choice options, with one designated as the correct answer.
|
Questions are designed to probe understanding of the 13 basic Allen relations, 22 advanced triple patterns, and complex deductive chains. Contexts are generally short and focused. Multiple-choice format is used.
|
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Crowd-sourced task examples (e.g. Prolific-created tasks), Procedurally-generated task examples (e.g. Creating instances from a template)
|
Total: 49,818 QA pairs. Test set: 9,964 examples.
|
Yes
|
Temporal Reasoning Level (Basic/Advanced/Complex), Temporal Relation/Pattern Type.
|
Random sample (creators defined a task space and sampled from it), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
|
Accuracy (percentage of correctly answered multiple-choice questions).
|
The dataset was created semi-automatically using templates based on the rules and compositions within Allen's interval algebra across three complexity levels. Generated QA pairs were then filtered for quality and naturalness by human crowdworkers on Amazon Mechanical Turk.
|
Mix (multiple authors from industry and academia)
|
Yes
|
Details the 3 reasoning levels based on Allen's algebra. Describes the semi-automatic generation process with templates. Details human filtering on MTurk. Describes the TempReasoning framework (TSE model based on LUKE, TSRL policy model based on Flan-T5-XL, reward function). Lists baseline models and evaluation setups.
|
The benchmark's systematic structure based on Allen's interval algebra allows for fine-grained probing of temporal reasoning skills. The proposed TempReasoning framework demonstrates a targeted approach to improving this specific capability in LLMs.
|
Test, Train, Validation
|
Train: 34,872 examples. Dev: 4,982 examples.
|
The model needs to select the index of the correct answer choice.
|
Simple Mean
|
Yes
|
Performance reported per reasoning level (Basic, Advanced, Complex). Analysis also performed across different QA settings (closed-book, open-book, reasoning QA).
| null |
https://github.com/DAMO-NLP-SG/TempReason
|
TEMPREASON
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
Yes
|
Dataset based on established Allen's interval algebra. Systematically covers Basic, Advanced, and Complex reasoning patterns. Human filtering via MTurk ensures data quality. Empirical results align with difficulty hierarchy (Basic > Advanced > Complex), supporting construct validity.
|
Accuracy (%)
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
|
Uses controlled contexts and templated questions to isolate and test specific temporal reasoning skills derived from Allen's algebra.
|
Composite phenomenon
|
Yes
| null |
No
|
Reasoning
|
Temporal
| null |
['Author-crafted', 'Crowd-sourced', 'Procedurally-generated']
|
['Random', 'Targeted', 'Criterion']
|
['Multiple choice']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Mean']
|
wangJourneyBenchChallengingOnestop2024
|
JourneyBench: A Challenging One-Stop Vision-Language Understanding Benchmark of Generated Images
|
Include
| null | null |
JourneyBench is a diverse vision-language understanding benchmark using AI-generated images from the Midjourney platform. These images are paired with text prompts for QA tasks through various extensive human annotation and human-machine-in-the-loop filtering systems. The resulting benchmark is substantially harder than multimodal benchmarks that use common images from COCO or Flickr.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
"Vision Language Understanding"
|
No
| null |
Comprehensive
| null |
JourneyBench has five tasks: "MCOT (multimodal chain-of-thought), multi-image MCOT (MMCOT), fine-grained cross-modal retrieval (CR), open-ended visual question answering (VQA) with hallucination triggers, and imaginary image captioning" (2).
|
Too many highly bespoke tasks to describe in two sentences. E.g., "Strictly Complementary MCOT" consists of GSM8K questions where quantities are replaced with visual processing subtasks, e.g. "Brianna bakes as many cookies as there are Stormtroopers in this image."
| null |
Human exam questions (e.g. GRE questions), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Expert-crafted task examples (e.g. hand-written examples), Crowd-sourced task examples (e.g. Prolific-created tasks), Modified from another benchmark (e.g. translation into another language), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
~13.5K
|
Yes
|
visual reasoning category, task subcategory
|
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Short free response (e.g. single word or number), Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF), LLM post-processing (extracting answers, reformatting for automated scoring)
| null |
Extreme variability in task format
|
Academia
|
Yes
|
2200 total human annotation hours!
| null |
Test
| null | null |
Simple Mean
|
Yes
|
subtask, including with and without distractors
|
recall@k
|
https://journeybench.github.io/
|
JourneyBench
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
Yes
|
Yes
|
"existing Visual Language Understanding (VLU) benchmarks... tend to emphasize commonly occurring subjects, predicates, and objects, over unusual or abstract scenes. This enables models to excel by leveraging previously acquired common-world knowledge without necessarily understanding the actual content of the images. While this bias might inflate scores on academic benchmarks, it can lead to significant challenges when transitioning to real-world applications" (2)
|
simple mean/sum
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null |
No
|
VQA
| null | null |
['Human exams', 'Author-crafted', 'Expert-crafted', 'Crowd-sourced', 'Another benchmark', 'LLM-generated']
|
['Convenience', 'Targeted', 'Criterion']
|
['Multiple choice', 'Short free response', 'Free response']
|
['Exact match', 'Soft match', 'LLM post-processing']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Mean']
|
luLearnExplainMultimodal2022
|
Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering
|
Include
| null | null |
This paper introduces SCIENCEQA, a large-scale multimodal benchmark featuring ~21k multiple-choice science questions (grades 1-12) across diverse topics. Questions can involve text, images, or both. Uniquely, each question is annotated with a detailed explanation comprising relevant background knowledge (lecture) and step-by-step reasoning (solution). The authors propose using language models to generate these explanations as Chains-of-Thought (CoT) and demonstrate that this process significantly improves answer accuracy.
|
Key contributions include: (1) Creating SCIENCEQA, a large (~21k), diverse (multiple science topics, grades 1-12), multimodal benchmark for science QA. (2) Providing detailed, structured explanations (Lecture + Solution) for each question. (3) Proposing and evaluating the generation of these explanations as Chains-of-Thought (CoT) using language models. (4) Demonstrating that training models to generate CoT explanations boosts their answer accuracy on the QA task.
|
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Multimodal reasoning; Multi-hop reasoning; Explanation generation (Chain-of-Thought); Science question answering.
|
Yes
|
The ability to answer science questions by integrating information from text and/or images, performing multi-step reasoning that may involve recalling background knowledge (lecture component) and deriving the answer through logical steps (solution component), and explicitly articulating this process as a Chain-of-Thought explanation.
|
Comprehensive
|
To provide a rich, large-scale benchmark for evaluating deeper reasoning and interpretability in multimodal science QA, addressing limitations of previous datasets. To investigate Chain-of-Thought generation as a mechanism for improving reasoning.
|
Given a science question with multimodal context (text and/or image), select the correct multiple-choice answer and generate a detailed textual explanation consisting of a relevant lecture (background knowledge) and a step-by-step solution (reasoning process).
|
An instance includes: the question text, context (text and/or image URL), multiple-choice options (A-E), the correct answer index, grade level, topic, skills tested, and the gold explanation text structured as {Lecture, Solution}.
|
Covers grades 1-12. Topics include Natural Science, Social Science, Language Science. Context can be text-only, image-only, or both. Skills cover various scientific practices. Answers are multiple-choice.
|
Human exam questions (e.g. GRE questions), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Crowd-sourced task examples (e.g. Prolific-created tasks)
|
Total: 21,208 examples. Test set: 4,241 examples.
|
Yes
|
Grade, Topic, Skills, Context Type, Question Text, Options, Answer Index, Lecture Text, Solution Text, Image URL (if applicable).
|
Random sample (creators defined a task space and sampled from it), Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice, Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF)
|
Answer Accuracy (%). Explanation Quality: BLEU-4, ROUGE-L
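A minimal sketch of scoring a generated explanation against the gold one; nltk and rouge_score are one common choice and not necessarily the authors' exact setup:

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

def explanation_scores(pred, gold):
    bleu4 = sentence_bleu([gold.split()], pred.split(),
                          weights=(0.25, 0.25, 0.25, 0.25),
                          smoothing_function=SmoothingFunction().method1)
    rouge_l = rouge_scorer.RougeScorer(["rougeL"]).score(gold, pred)["rougeL"].fmeasure
    return bleu4, rouge_l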
|
Questions sourced from open curriculum websites. Explanations (Lecture + Solution) were written by human annotators with STEM backgrounds recruited via Upwork, following guidelines and multiple verification rounds.
|
Mix (multiple authors from industry and academia)
|
Yes
|
Describes the annotation process via Upwork, including annotator qualifications and payment. Details baseline models (UnifiedQA-v2 T5, GPT-3 text-davinci-002) and experimental setups (fine-tuning, few-shot prompting). Describes the Multimodal Chain-of-Thought (MM-CoT) method. Provides dataset statistics and examples.
|
SCIENCEQA's unique contribution is the large-scale combination of multimodality, diverse science topics/grades, and detailed, structured explanations (Lecture + Solution), making it a rich resource for studying complex reasoning and explanation generation. The demonstration of CoT improving performance is a key finding.
|
Test, Train, Validation
|
Train: 12,726 examples. Validation: 4,241 examples. Mini-test/val sets also available.
|
Models primarily need to select the correct multiple-choice answer. They can also be trained/evaluated on generating the free-form explanation text.
|
Simple Mean
|
Yes
|
Performance analyzed by input modality, topic, grade level, and question type. Comparison between models trained with vs. without CoT explanation generation.
| null |
https://scienceqa.github.io
|
SCIENCEQA
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
The benchmark is itself realistic
|
Yes
|
Yes
|
Large-scale (~21k) and diverse (subjects, grades, modalities). Sourced from real science curricula. Annotations performed by qualified annotators (STEM backgrounds) with multi-round verification. Structured explanations (Lecture+Solution) provide richer signal. Empirical results demonstrate CoT explanations improve reasoning performance.
|
Accuracy (%), BLEU-4, ROUGE-L
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
|
Uses curriculum-style science questions to evaluate reasoning and explanation abilities relevant to science education and understanding.
|
Composite phenomenon
|
Yes
| null |
No
|
Reasoning
| null | null |
['Human exams', 'Author-crafted', 'Crowd-sourced']
|
['Random', 'Convenience', 'Targeted']
|
['Multiple choice', 'Free response']
|
['Exact match', 'Soft match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Representative']
|
['Mean']
|
chenDrAcademyBenchmarkEvaluating2024
|
Dr.Academy: A Benchmark for Evaluating Questioning Capability in Education for Large Language Models
|
Include
| null | null |
This paper introduces Dr.Academy, a benchmark for evaluating the question generation capabilities of LLMs in educational contexts. It evaluates questions generated by LLMs across general, monodisciplinary, and interdisciplinary domains using a cognitive framework based on Anderson and Krathwohl’s taxonomy. The quality of LLM outputs is evaluated by automatic metrics that correlate with human scores.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Question generation for education
|
Yes
|
"According to Anderson and Krathwohl’s educational taxonomy, we consider that high-quality questioning in the educational field must meet the following characteristics: i) achieve a higher level across the six domains including memory, understanding, application, analysis, evaluation, and creation; ii) be relevant to the given context; iii) comprehensively cover the content of the context, and iv) also reflect the important knowledge of this context."
|
Comprehensive
| null |
The LLMs are prompted to generate educational questions based on textual contexts, across 3 domains (general, monodisciplinary, and multidisciplinary) and mapped to the 6 levels of Anderson & Krathwohl’s taxonomy.
|
The context the LLM has to use to generate the educational questions.
| null |
Modified from another benchmark (e.g. translation into another language), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
30,000 contexts (10,000 per domain)
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Free response (e.g. summary paragraph, executable code)
|
LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics)
| null |
The contexts are generated based on pre-existing question-answering datasets (general domain: SQuAD, monodisciplinary: MMLU)
|
Academia
|
No, no link is provided
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Yes, subscores are provided by domain (general, mono-humanities, mono-sciences, interdisciplinary) and by evaluation metric (consistency, relevance, coverage, and representativeness).
| null | null |
Dr.Academy
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
| null |
No
|
Yes
|
Yes
|
The authors directly assess the validity of their benchmark through theoretical alignment with Anderson and Krathwohl’s taxonomy, expert evaluation of the evaluation metrics, and empirical correlation with human judgments.
|
Simple mean to aggregate automatic scores, Pearson and Spearman correlation between human and automatic ratings, and Krippendorff’s Alpha inter-rater agreement for human ratings.
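A minimal sketch of the correlation checks between automatic and human ratings using scipy; the Krippendorff's Alpha computation (usually done with a dedicated package) is not shown:

from scipy.stats import pearsonr, spearmanr

def agreement(auto_scores, human_scores):
    # auto_scores, human_scores: equal-length lists of per-item ratings
    pearson_r, _ = pearsonr(auto_scores, human_scores)
    spearman_rho, _ = spearmanr(auto_scores, human_scores)
    return pearson_r, spearman_rho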
|
Outputs alone
|
Complete real task (e.g. providing medical advice to real people interactively)
| null |
Composite phenomenon
|
Yes
| null |
Yes
|
Education
| null | null |
['Another benchmark', 'LLM-generated']
|
['Convenience', 'Targeted', 'Criterion']
|
['Free response']
|
['LLM-as-a-Judge']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Complete']
|
['Mean', 'Other']
|
lyuMMScanMultimodal3D2024
|
MMScan: A Multi-Modal 3D Scene Dataset with Hierarchical Grounded Language Annotations
|
Include
| null | null |
The paper develops MMScan, a benchmark of 3D scenes that tests spatial and attribute understanding via visual grounding and QA tasks. 3D scene data from an existing dataset is annotated with a human-machine-in-the-loop setup, and human annotators create questions from these annotations.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Multi-modal 3D perception
|
No
| null |
Comprehensive
| null |
The model is given a 3D scene and either (i) a question to answer about an object in the scene, or (ii) a textual description of an object it must locate with a bounding box.
|
QA task instances are nontrivial, open-ended questions, like "Where can I get a comfortable seat in this room?" Visual grounding questions ask to identify objects like "the wooden guitar leaning against the white wall."
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Crowd-sourced task examples (e.g. Prolific-created tasks), Modified from another benchmark (e.g. translation into another language)
|
504,790
|
Yes
|
annotation type (object / region, space / attribute)
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF), LLM post-processing (extracting answers, reformatting for automated scoring), 3D IoU-based Average Precision
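A minimal sketch of the IoU core for axis-aligned 3D boxes; the paper's full AP protocol and any handling of oriented boxes are not reproduced here:

def iou_3d(box_a, box_b):
    # Boxes as (xmin, ymin, zmin, xmax, ymax, zmax); axis-aligned for simplicity.
    overlap = 1.0
    for i in range(3):
        lo = max(box_a[i], box_b[i])
        hi = min(box_a[i + 3], box_b[i + 3])
        overlap *= max(0.0, hi - lo)
    def volume(b):
        return (b[3] - b[0]) * (b[4] - b[1]) * (b[5] - b[2])
    union = volume(box_a) + volume(box_b) - overlap
    return overlap / union if union > 0 else 0.0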
|
ChatGPT categorises free responses into "Correct," "Ambiguous," and "Error." It is unclear how this maps to accuracy, but accuracy is reported.
| null |
Academia
|
Yes
| null | null |
Test, Train, Validation
|
2,016,833 train; 514,016 validation
| null |
Simple Mean
|
Yes
|
annotation type (object / relation, space / attribute)
| null |
https://tai-wang.github.io/mmscan/
|
MMScan
|
Contested
|
Yes
|
No
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
"For constructing a multi-modal 3D dataset, we prioritize selecting a foundational 3D scene dataset with extensive, real-scanned sensor data to minimize the sim-to-real gap" (3)
|
simple mean/sum
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
|
Sizes combine the 3D visual grounding benchmark and the 3D QA benchmark, both reported separately in the paper.
|
No
|
Grounding
| null | null |
['Author-crafted', 'Crowd-sourced', 'Another benchmark']
|
['Convenience']
|
['Free response']
|
['Exact match', 'Soft match', 'LLM post-processing', 'Soft match']
|
['Contested']
|
['Yes']
|
['No']
|
['Comparison made']
|
['Yes']
|
['Constructed']
|
['Mean']
|
heExploringCapacityPretrained2023
|
Exploring the Capacity of Pretrained Language Models for Reasoning about Actions and Change
|
Include
| null | null |
The paper introduces four core Reasoning about Actions and Change (RAC) tasks as a unified textual benchmark, carefully designed to minimize confounding linguistic factors (e.g., grounding) and maintain a sharp focus on RAC. The resulting benchmark, TRAC (Textual Reasoning about Actions and Change), includes problems of varying complexity and enables more fine-grained evaluation of language models, with an emphasis on assessing their structural generalization capabilities, which are crucial for effective RAC.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Two fundamental tasks, Projection and Executability, which directly target the essential knowledge of RAC, and two composite tasks, Plan Verification and Goal Recognition, for more comprehensive problem settings.
|
Yes
|
While transformers are able to induce and utilize knowledge from a large number of training examples, it remains a great challenge to efficiently generalize to structurally more complex problems on TRAC.
|
Subset
| null |
There are four tasks introduced in this benchmark; their definitions are listed below.
Projection: This task assesses the outcome of executing actions. Given an initial state s and a sequence of N applicable actions ⃗a, the goal is to determine whether a proposition q would hold after executing ⃗a. The context includes s and ⃗a, and the query is q.
Executability: This task focuses on action preconditions. Given an initial state s and a sequence of N actions ⃗a, the goal is to determine whether ⃗a can be executed sequentially starting from s. Here, the context is s, and the query is ⃗a.
Plan Verification (PV): In planning, the goal is to generate a sequence of actions to achieve a desired outcome. TRAC adopts a verification variant of this task: Given an initial state s, a goal g (expressed as a proposition), and a sequence of N actions ⃗a, the task is to determine whether executing ⃗a from s will successfully achieve g. The context includes s and g, while the query is ⃗a.
Goal Recognition (GR): This task involves identifying the goal based on a partial observation of actions. In the simplified version used in TRAC, given an initial state s, a candidate goal g, and a partial action sequence ⃗a, the system must decide whether g is the true goal—that is, whether ⃗a is a prefix of an optimal plan for achieving g. The context consists of s and ⃗a, and the query is g.
|
Context, Query and Answer.
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Procedurally-generated task examples (e.g. Creating instances from a template)
|
3k
|
Yes
|
length of action sequences.
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test, Train, Validation
|
10k, 2k
| null |
Simple Mean
|
Yes
|
action sequences of length 4 and 5.
| null |
https://github.com/sysulic/trac/tree/main
|
Textual Reasoning about Actions and Change (TRAC)
|
Contested
|
Yes
|
No
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null |
No
|
Reasoning
|
Planning
| null |
['Author-crafted', 'Procedurally-generated']
|
['Targeted']
|
['Short free response']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['No']
|
['No comparison made']
|
['No']
|
['Constructed']
|
['Mean']
|
edmanCUTEMeasuringLLMs2024
|
CUTE: Measuring LLMs’ Understanding of Their Tokens
|
Include
| null | null |
The paper introduces CUTE, a benchmark designed to test the orthographic knowledge of large language models (LLMs), specifically their understanding of the character composition of tokens. It evaluates multiple LLMs on tasks requiring spelling, character-level similarity, and text manipulation.
|
It also includes a Russian version (CUTE-Rus).
|
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Orthographic knowledge
|
No
|
1. Do LLMs know which characters make up their tokens?
2. Do LLMs understand the difference between semantic and orthographic similarity?
3. Can LLMs manipulate text at the character level?
|
Subset
| null |
The task is to evaluate whether LLMs can identify, compare, and manipulate the character-level structure of their tokens through a series of prompts.
|
Prompt and expected answer.
| null |
Modified from another benchmark (e.g. translation into another language), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
14,000 (not specified in paper, found on Hugging Face)
|
No
| null |
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall)
| null |
Synthetic datasets derived from high-frequency English words (sourced from Google Web Trillion Word Corpus) and simple sentences (from TinyStories).
|
Academia
|
Yes
|
Available on Hugging Face: https://huggingface.co/datasets/leukas/cute
| null |
Test
| null | null |
Simple Mean
|
Yes
|
By task category (composition, similarity, manipulation) and granularity (e.g., character-level vs. word-level), and by language (English, Russian).
| null |
https://github.com/Leukas/CUTE
|
CUTE
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
Simple mean
|
Outputs alone
|
Low ecology; humans wouldn’t usually ask LLMs to do these tasks.
| null |
Composite phenomenon
|
Yes
| null |
Yes
|
NLP
| null | null |
['Another benchmark', 'LLM-generated']
|
['Targeted', 'Criterion']
|
['Short free response']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
|
['Mean']
|
tianSciCodeResearchCoding2024
|
SciCode: A Research Coding Benchmark Curated by Scientists
|
Include
| null | null |
SciCode is a benchmark consisting of multi-step scientific code generation problems. Scientists curate code implementations from published research in their field and write test cases for python implementations of these problems. Frontier reasoning models evaluated on SciCode struggle to achieve double-digit accuracy on the most "realistic" evaluation setup.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Coding for scientific research problems
|
Yes
|
"Solving the main problems requires deep scientific background knowledge, strong analytical capabilities to decompose complex problems into simpler ones and correctly solve each, and the ability to integrate partial into complete solutions" (2)
|
Comprehensive
| null |
A problem is presented from a natural scientific field, and the model must compose python functions for each subproblem and integrate their behaviour into the solution of the main problem. Automated test cases run against the proposed answer.
|
Models receive a main problem and each subproblem one at a time, with potential scientific context. A subproblem might be "Write a Haldane model Hamiltonian on a hexagonal lattice," asking for a python function.
|
Different variants of the task subtly change the phenomenon under study (the authors address this). E.g., an evaluation mode that removes scientific background is testing "inherent scientific knowledge," whereas included background shifts focus to "coding and instruction-following capabilities" (6).
|
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Expert-crafted task examples (e.g. hand-written examples)
|
288
|
Yes
|
field, subfield
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Free response (e.g. summary paragraph, executable code), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall), Unit test cases
| null | null |
Academia
|
Yes
|
Most impressive setup for expert annotation I have seen in my batch so far
| null |
Test, Train
|
50
| null |
Subproblem accuracy aggregated by main problem
|
No
| null | null |
https://scicode-bench.github.io/
|
SciCode
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
Yes
|
"Realistic and current problems sourced from scientists’ everyday research tasks or influential papers. This ensures SciCode’s relevance to real-world applications"
"Problems curated to have zero overlap with publicly available datasets to prevent potential data contamination" (2)
|
Mean/sum, where a main problem counts as correct only if all of its subproblems are correct.
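A minimal sketch of this aggregation, assuming per-subproblem pass/fail results grouped by main problem (names illustrative, not the authors' code):

def scicode_scores(results):
    # results: dict mapping main-problem id -> list of booleans, one per subproblem
    sub_flags = [flag for flags in results.values() for flag in flags]
    subproblem_acc = sum(sub_flags) / len(sub_flags)
    main_problem_acc = sum(all(flags) for flags in results.values()) / len(results)
    return subproblem_acc, main_problem_acc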
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Not applicable
|
Subproblems decomposed from 80 main problems (65 test, 15 train)
|
Yes
|
Code Generation
| null | null |
['Author-crafted', 'Expert-crafted']
|
['Targeted']
|
['Free response', 'Structured']
|
['Exact match', 'Reward']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Partial']
|
['Mean']
|
liWhenLlmsMeet2024
|
When LLMs Meet Cunning Texts: A Fallacy Understanding Benchmark for Large Language Models
|
Include
| null | null |
FaLlacy Understanding Benchmark (FLUB) contains multiple choice, classification, and explanation questions about "cunning texts." These are snippets from posts on a Chinese online forum, which human annotators filter and then annotate with multiple choice questions, a "cunning type" classification, and an explanation.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
fallacy understanding
|
Yes
|
"whether LLMs can understand cunning texts that may contain misleading, wrong premise, intentional ambiguity, and so forth" (1)
|
Comprehensive
| null |
The model is given a snippet of "cunning text" from the Chinese online forum Ruozhiba and told to either select an option answering the question or otherwise explain the fallacy at play in the text.
|
A text snippet might be a riddle like "Which one weighs more, a ton of iron or a ton of cotton?", with the correct multiple choice answer being ""A ton of iron" and "a ton of cotton" both weigh one ton and are the same weight."
|
These are not fallacies. They're more like riddles and wordplay.
|
Crowd-sourced task examples (e.g. Prolific-created tasks)
|
834
|
Yes
|
cunning type
|
Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics)
|
Using LLM-as-a-judge to assess LLMs' abilities to comprehend these puzzles does not pass face validity, especially when the judge scores correlate only 0.57 with human judgements.
Also, for LLM-as-a-judge, the reported inter-rater agreement with human evaluators is not valid, as humans rate on a different scale (1-5) than GPT-4 (1-10).
| null |
Academia
|
Yes
| null |
Construct validity is extremely off for this paper, I suspect for language barrier reasons
|
Test
| null | null |
Geometric mean
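A minimal sketch of a geometric mean over the three subtask scores (selection, classification, explanation); the paper's exact weighting is not reproduced here:

import math

def geometric_mean(scores):
    # scores: e.g. [selection_acc, classification_acc, explanation_score], all assumed > 0
    return math.exp(sum(math.log(s) for s in scores) / len(scores))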
|
Yes
|
Selection, classification, and explanation scores
| null |
https://github.com/THUKElab/FLUB
|
FLUB
|
Contested
|
No
|
No
|
No
|
No
|
No comparisons made
|
No
|
Yes
|
Yes
|
"our data come entirely from the real world and are all carefully constructed by netizens" (4)
|
simple mean/sum
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
Not applicable
| null |
Yes
|
Reasoning
|
Logical
| null |
['Crowd-sourced']
|
['Criterion']
|
['Multiple choice', 'Free response']
|
['Exact match', 'LLM-as-a-Judge']
|
['Contested']
|
['No']
|
['No']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Mean']
|
wangNeedleMultimodalHaystack2024
|
Needle In A Multimodal Haystack
|
Include
| null | null |
MM-NIAH is a multimodal (image+text) variant of the conventional "needle in a haystack" task in NLP. Authors concatenate documents from OBELICS to produce long-context documents, embed sentences into text or artifacts into images, and prompt MLLMs to answer questions about these "needles."
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
long-context multimodal understanding
|
No
| null |
Comprehensive
| null |
"For a comprehensive evaluation, we design three types of tasks, including retrieval, counting, and reasoning in our MM-NIAH" (2). Retrieval is standard multimodal IR, counting refers to counting artefacts in the document, and reasoning can be visual-compositional, commonsense, etc.
|
Authors might take an arbitrary long-context multimodal document and introduce textual needles like "The penguin counted 2 bananas," then ask at the end of the context "How many bananas did the penguin count in total?" Similarly, small images are overlaid on images in the document and asked about at the end.
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions)
|
18,000
|
Yes
|
needle placement depth (in % of context), total context length, needle modality (image or text), task type
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Multiple choice, Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall), Soft Accuracy for counting task
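MM-NIAH reports a "soft accuracy" for the counting task. A minimal sketch under the assumption that soft accuracy means the element-wise fraction of correctly counted needles; the paper's exact formula may differ.

```python
def soft_accuracy(pred_counts, gold_counts):
    """Fraction of needle positions where the predicted count matches the gold count.

    Assumes both lists are aligned; unmatched trailing positions count as wrong.
    """
    n = max(len(pred_counts), len(gold_counts))
    if n == 0:
        return 1.0
    correct = sum(1 for p, g in zip(pred_counts, gold_counts) if p == g)
    return correct / n

# Example: three needles to count, two counted correctly.
print(soft_accuracy([2, 4, 1], [2, 4, 3]))  # 0.666...
```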
| null | null |
Academia
|
Yes
| null | null |
Test, Train, Validation
|
520
| null |
Simple Mean
|
Yes
|
heatmap of average accuracy by context length and position of needle in document
| null |
https://mm-niah.github.io/
|
Needle In A Multimodal Haystack (MM-NIAH)
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
No
| null |
simple mean/sum
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
|
task_dataset_size is for train; task_dataset_size_extra is for validation; test split is stated to exist in the paper but there is no record of it or a stated size
|
No
|
VQA
|
Long Context
| null |
['Author-crafted']
|
['Convenience']
|
['Multiple choice', 'Short free response']
|
['Exact match', 'Soft match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
|
['Mean']
|
yuMMvetEvaluatingLarge2024
|
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
|
Include
| null | null |
MM-Vet (MultiModal Veterinarian) is a general benchmark for vision-language capabilities, emphasising the integration of multiple capabilities per problem. Questions are sourced from "various online sources" and the authors hand-annotate most of the answers. As questions are often open-ended and diverse, the benchmark uses LLM scoring with GPT-4 as a judge.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Six core capabilities: "recognition, optical character recognition, knowledge, language generation, spatial awareness, and math" (2) and their interactions for more complex tasks
|
Yes
|
Long bulleted definitions on pp. 3-4, but very broad, like "Recognition refers to the general visual recognition capability, including recognizing scenes, objects, object attributes (color, material, shape, etc), counting, and various other high-level visual recognition tasks"
|
Comprehensive
| null |
16 tasks employing multiple vision-language capabilities, each a free response question. E.g., a question like "How many gallons of supreme gasoline can I get with $50?" with a corresponding image qualifies as a question employing both optical character recognition and math.
|
An image, e.g. of a gas-station price sign, a question like "How many gallons of supreme gasoline can I get with $50?", and possibly multiple accepted answers, like "13.6 OR 13.7."
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Crowd-sourced task examples (e.g. Prolific-created tasks)
|
205
|
Yes
|
capabilities tested
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Short free response (e.g. single word or number), Free response (e.g. summary paragraph, executable code)
|
LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics)
|
The authors use LLM-as-a-judge to evaluate all questions, including those with numerical answers that could be exactly matched, to "[allow] any style of model outputs to be evaluated with a unified consistent metric" (5)
| null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
capabilities tested
| null |
https://github.com/yuweihao/MM-Vet
|
MM-Vet
|
Contested
|
Yes
|
Yes
|
No
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
mean and variance
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null |
Yes
|
VQA
| null | null |
['Author-crafted', 'Crowd-sourced']
|
['Convenience']
|
['Short free response', 'Free response']
|
['LLM-as-a-Judge']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
|
['Mean', 'Std']
|
zhangMIntRec20LargescaleBenchmark2024
|
MIntRec2.0: A Large-scale Benchmark Dataset for Multimodal Intent Recognition and Out-of-scope Detection in Conversations
|
Include
| null | null |
MIntRec2.0 is a multimodal dataset (image, text, and audio) to assess intent recognition. Scenes are sourced from TV shows with corresponding subtitles, and models must match one of 30 defined intent classes.
|
The authors spend the bulk of their attention on benchmarking their own multimodal fusion approach and conduct only a small comparative experiment between ChatGPT and humans, taking ChatGPT's score as representative of LLMs.
|
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
multimodal intent recognition
|
Yes
|
"Understanding human intentions in multimodal scenarios... perceiving user tones, expressions, and body language" (1)
|
Comprehensive
| null |
Models are given a scene from a TV show, partitioned into dialogue steps, and must classify the intent of the utterance into one of 30 intent classes. "Out-of-scope" is an available class for utterances not expressing intent.
|
A sequence of dialogue turns, with corresponding image, text, and audio, including an annotation for who is speaking. The correct intent class might be "Taunt," "Criticize," "Care," ...
| null |
Produced media (TV sitcom scenes)
|
3,230
|
Yes
|
in-scope vs. out-of-scope
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train, Validation
|
9,989 train; 1,821 val
|
Multi-class classification with 30 possibilities is a kind of multiple choice...
|
Simple Mean
|
Yes
|
in-scope vs. out-of-scope accuracy
| null |
https://github.com/thuiar/MIntRec2.0
|
MIntRec2.0
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
Yes
|
Yes
|
"The intent taxonomies are highly applicable across various domains, offering con- siderable promise for real-world applications (Further discussions can be found in Appendix H)" (5)
|
simple mean/sum
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Single cohesive phenomenon
|
Not applicable
|
Dialogue samples from 1,245 dialogues total
|
Yes
|
VQA
|
Understanding
| null |
['Author-crafted']
|
['Convenience']
|
['Multiple choice']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative']
|
['Mean']
|
shenTaskBenchBenchmarkingLarge2024
|
TaskBench: Benchmarking Large Language Models for Task Automation
|
Include
| null | null |
TaskBench is a framework for evaluating how well large language models (LLMs) can automate complex tasks. It addresses three stages of task automation: task decomposition, tool selection, and parameter prediction. It introduces the Tool Graph, a novel representation of tools and their dependencies.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
task automation
|
Yes
|
"task automation, which involves decomposing complex tasks described by user instructions into sub-tasks and invoking external tools to execute them, playing a central role in autonomous agents."
|
Comprehensive
| null |
The task requires models to generate task steps and a tool graph based on the user instruction
|
A single item consists of a user instruction and the output contains detailed task steps and a tool graph with parameters.
| null |
Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
17,331
|
Yes
|
tool graph structure, number of tools, domain, tool names, tool dependencies, parameters required
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Free response (e.g. summary paragraph, executable code), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF), Correlation (Matthew's correlation, Pearson's r)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
domain categories, different graph structure types, and complexity levels
| null |
https://github.com/microsoft/JARVIS
|
TaskBench
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
Yes
|
They include human evaluation of the dataset quality, asking experts to rate samples on naturalness, complexity, and alignment. They compare to existing baselines.
|
simple means to report F1 scores and ROUGE metrics
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people), Representative task (e.g. answering medical licensing exam questions)
|
The task simulates real user instructions that would be given to autonomous agents, but in a controlled environment.
|
Composite phenomenon
|
Yes
| null |
Yes
|
Agents
|
Tool Use
| null |
['Procedurally-generated', 'LLM-generated']
|
['Targeted', 'Criterion']
|
['Free response', 'Structured']
|
['Exact match', 'Soft match', 'Correlation']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Partial', 'Representative']
|
['Mean']
|
huangDAcodeAgentData2024
|
DA-Code: Agent Data Science Code Generation Benchmark for Large Language Models
|
Include
| null | null |
A code generation benchmark specifically for agent-based data science tasks.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Data Science Code Generation
|
No
| null |
Comprehensive
| null |
Execute a set of data science instructions in an agentic setting, e.g. you might need to save the results with a specific filename.
|
A set of requirements to carry out (natural language instructions) and contextual information (files in the environment and constraints on actions).
|
The range of tasks is fairly unclear.
|
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions)
|
500
|
Yes
|
Difficulty level, question topic
|
Convenience sample (creators found a set of tasks that was readily accessible), Unknown
|
Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall), Execution-based evaluation, e.g. run the agent's code and check whether it matches the ground-truth results, plus different rubrics for each task.
|
The evaluation rubrics are fairly comprehensive. A strong point of the paper.
|
Unclear how this is done.
|
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Difficulty level and question topic
| null |
https://github.com/yiyihum/da-code
|
DA-Code
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
No
| null |
Mean
|
Outputs alone
|
Complete real task (e.g. providing medical advice to real people interactively), Partial real task (e.g. answering medical questions collected from real people)
| null |
Composite phenomenon
|
Yes
| null |
No
|
Code Generation
| null |
Data Science
|
['Real task', 'Author-crafted']
|
['Convenience', 'Unknown']
|
['Structured']
|
['Exact match', 'Reward']
|
['Contested']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Complete', 'Partial']
|
['Mean']
|
parcalabescuVALSETaskindependentBenchmark2022
|
VALSE: A Task-Independent Benchmark for Vision and Language Models Centered on Linguistic Phenomena
|
Include
| null | null |
This paper proposes VALSE (Vision And Language Structured Evaluation), a novel benchmark designed for testing general-purpose pretrained vision and language (V&L) models for their visio-linguistic grounding capabilities on specific linguistic phenomena. The authors cover a broad spectrum of basic linguistic phenomena affecting the linguistic and visual modalities. The overall weak performance of these models indicates that there is a need for a reliable foiling dataset targeting the visual grounding capabilities of V&L models through the lens of linguistic constructs.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Visio-linguistic grounding
|
Yes
|
The phenomenon is defined as the ability of VLMs to ground linguistic phenomena, from morphosyntax to semantics, in the visual modality. For example, recent evidence suggests that models are insensitive to linguistic distinctions of verb-argument structure and word order.
|
Subset
|
VALSE is composed of 6 pieces, each corresponding to a specific linguistic phenomenon: existence, plurality, counting, relations, actions and coreference. For all pieces, given a visual input, a model is asked to distinguish real captions from foils, where a foil is constructed from a caption by altering a word or phrase that realizes a specific linguistic phenomenon, e.g., semantic number of nouns, verb argument structure, or coreference.
|
Two tasks:
- Binary classification: predict whether an image-sentence pair is foiled
- Predict image-sentence matching score between the image and the caption vs the image and the foil caption
|
piece category (sub-phenomenon), image, caption, foil
| null |
Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template)
|
5668
|
Yes
|
piece category (sub-phenomenon), difficulty
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean, Weighted Mean
|
Yes
|
difficulty, sub-phenomenon
|
pairwise ranking accuracy
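The metascoring field notes pairwise ranking accuracy. A minimal sketch, assuming the metric awards a point whenever the model's image-sentence alignment score for the true caption exceeds its score for the foil; the score values below are invented for illustration.

```python
def pairwise_ranking_accuracy(triples):
    """triples: iterable of (caption_score, foil_score) pairs, where each score
    is the model's image-sentence alignment score for the same image.

    A triple counts as correct when the true caption outscores the foil.
    """
    triples = list(triples)
    correct = sum(1 for cap, foil in triples if cap > foil)
    return correct / len(triples)

# Illustrative scores (made up): the model prefers the caption in 2 of 3 cases.
print(pairwise_ranking_accuracy([(0.81, 0.40), (0.55, 0.62), (0.73, 0.70)]))
```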
|
https://github.com/Heidelberg-NLP/VALSE/tree/main
|
VALSE
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null |
No
|
VQA
| null | null |
['Another benchmark', 'Procedurally-generated']
|
['Targeted']
|
['Multiple choice']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
|
['Mean']
|
huangConMeRethinkingEvaluation2024
|
ConMe: Rethinking Evaluation of Compositional Reasoning for Modern VLMs
|
Include
| null | null |
We introduce ConMe – a compositional reasoning (CR) benchmark and a novel data generation pipeline leveraging VLMs to produce ‘hard CR Q&A’.
Our pipeline autonomously generates, evaluates, and selects challenging compositional reasoning questions, establishing a robust CR benchmark.
Our benchmark provokes a noteworthy, up to 33%, decrease in CR performance compared to preceding benchmarks, reinstating the CR challenge even for state-of-the-art VLMs.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Compositional reasoning
|
Yes
|
Compositional reasoning (CR) is the ability of the VLM to recognize and attend to the language concepts beyond objects (i.e., nouns), such as attributes, relations, fine-grained object alternatives, and more, in both the image and text of a visual-language pair.
|
Comprehensive
|
Both integrated and sub-elements are measured
|
On the collected text-image pairs, grouped by the presence of certain CR concepts (such as relations, attributes, etc.), the pipeline randomly "flips" the present CR concept in the positive text to form a "negative alternative" text (one with the CR concept wrong). The VLM's preference for the resulting negative is then compared against the true positive source text, thus testing the VLM's ability to entail the correct text from the image. The questions are framed as a binary multiple-choice selection.
|
A text-image pair where the text describes certain CR concepts, such as relations, attributes
| null |
LLM- and VLM- generated task examples
|
24347
|
No
| null |
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Accuracy scores on three data partitions: Replace-Attribute, Replace-Object, Replace-Relation.
| null |
https://huggingface.co/conme/ConMe
|
ConMe
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
No
| null |
Simple mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Yes
| null |
No
|
Reasoning
|
Compositional
| null |
['LLM-generated']
|
['Targeted']
|
['Multiple choice']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
|
['Mean']
|
liEvoCodeBenchEvolvingCode2024
|
EvoCodeBench: An Evolving Code Generation Benchmark with Domain-Specific Evaluations
|
Include
| null | null |
Code generation benchmark with evolving questions (updated every 6 months)
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Code generation
|
No
| null |
Comprehensive
| null |
Given a requirement and a repository, LLMs are tasked to generate the code for the repository.
|
An LLM has to generate a function given requirements and the rest of a repo (e.g. repo-level code generation).
|
Fairly unclear.
|
Real task examples (e.g. GitHub issues)
|
275
|
Yes
|
Domain of codebase
|
Random sample (creators defined a task space and sampled from it), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Structured response (e.g. valid JSON, API call alone)
|
Execution-based / functional correctness. Pass unit tests.
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Domain of the codebase
|
pass@k (any correct answer in k trials)
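pass@k is reported here. The sketch below shows the commonly used unbiased pass@k estimator (n samples drawn per problem, c of them passing); whether EvoCodeBench uses this estimator or the naive "any of k samples passes" definition is not stated in this record.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: n samples drawn, c of them correct.

    Returns the probability that at least one of k samples chosen without
    replacement passes the unit tests.
    """
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 20 samples per problem, 5 pass the tests, estimate pass@10.
print(round(pass_at_k(n=20, c=5, k=10), 3))
```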
|
https://github.com/seketeam/EvoCodeBench
|
EvoCodeBench
|
Contested
|
Yes, but fairly poor task definition.
|
Yes
|
Yes
|
Yes
|
Yes
|
The benchmark is itself realistic
|
No
|
Yes
|
Only covers Python code generation.
|
Mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Not applicable
| null |
No
|
Code Generation
| null | null |
['Real task']
|
['Random', 'Targeted']
|
['Structured']
|
['Reward']
|
['Contested']
|
['Yes', 'Partially']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Partial']
|
['Mean']
|
gongEvaluationLLMsSyntaxaware2024
|
Evaluation of LLMs on Syntax-Aware Code Fill-in-the-Middle Tasks
|
Include
| null | null |
Create a "Fill-in-the-Middle" code benchmark for LLMs and uses it to make claims about effective pretraining in code LLMs.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Fill-in-the-middle coding tasks (specific form of code generation)
|
Yes
|
Generating code to fill in a gap 1-5 tokens long.
|
Comprehensive
| null |
A model is presented with code in which a "code block" is masked. The model then has to generate code to complete the function based on natural language instructions.
|
A natural language description of the function along with an incomplete function (some of it is masked).
| null |
Human exam questions (e.g. GRE questions), Real task examples (e.g. GitHub issues)
|
17,720
|
Yes
|
Code topic
|
Specific criteria (items were taken from a larger set based on specified rules)
|
Structured response (e.g. valid JSON, API call alone)
|
Execution-Based Evaluation (unit tests)
| null |
Codeforces problems (coding exam questions) and GitHub (real-world cases)
|
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Code topic
| null |
https://github.com/gonglinyuan/safim
|
SAFIM
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
No
|
The benchmark has high construct validity because the authors essentially define the phenomenon as the task itself, e.g. they do not call it a "code generation" benchmark as many others do.
|
Mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people), Representative task (e.g. answering medical licensing exam questions)
| null |
Single cohesive phenomenon
|
Not applicable
| null |
Yes
|
Code Generation
| null | null |
['Human exams', 'Real task']
|
['Criterion']
|
['Structured']
|
['Reward']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Partial', 'Representative']
|
['Mean']
|
bhatiaLocalConceptsUniversals2024
|
From Local Concepts to Universals: Evaluating the Multicultural Understanding of Vision-Language Models
|
Include
| null | null |
Current benchmarks often overlook a crucial aspect of cultural diversity: how universal concepts are represented across cultures. GlobalRG addresses this gap with two tasks inspired by popular vision-and-language benchmarks, image-text retrieval and visual grounding. Extensive evaluations reveal notable cross-cultural discrepancies. In particular, they show that even when models retrieve or ground images that appear culturally diverse, those images frequently share underlying Western-centric elements.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
multicultural visual understanding
|
Yes
|
Models' pre-training on large-scale datasets tends to predominantly contain images from Western cultures. The underrepresentation of certain cultures in the data translates into performance disparities across cultures.
|
Subset
| null |
- 1st task: Cultural visual grounding, which evaluates models' ability to ground culture-specific concepts within an image.
- 2nd task: Retrieval across Universals, a novel task aimed at retrieving culturally diverse images for a given universal concept. Formally, let Q be a set of textual queries representing universal concepts, and I the set of images from different cultures. Given a query, the goal is to retrieve a ranked list of images R that maximizes both relevance and cultural diversity.
|
a cultural concept, the image, the region/country
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language)
|
3,000
|
Yes
|
region/country
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall), Diversity@k, which measures the cultural diversity among the retrieved images, helping to identify models' bias towards specific countries or regions.
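GlobalRG's Diversity@k is described only informally here. The sketch below is an illustrative proxy (share of distinct regions among the top-k retrieved images), not the paper's exact formula; the region names are invented examples.

```python
def diversity_at_k(retrieved_regions, k, all_regions):
    """Illustrative proxy for Diversity@k: share of distinct regions covered
    by the top-k retrieved images, relative to the regions available.

    The exact formula used by GlobalRG may differ; this is only a sketch.
    """
    top_k = retrieved_regions[:k]
    covered = len(set(top_k))
    return covered / min(k, len(set(all_regions)))

regions = ["West Africa", "East Asia", "South Asia", "Western Europe"]
retrieved = ["Western Europe", "Western Europe", "East Asia", "Western Europe"]
print(diversity_at_k(retrieved, k=4, all_regions=regions))  # 0.5
```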
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Country, region, culture
| null |
https://huggingface.co/datasets/UBC-VL/GlobalRG-Retrieval
|
GlobalRG
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
The benchmark is itself realistic
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Single cohesive phenomenon
|
No
| null |
No
|
Knowledge
|
Cultural
| null |
['Author-crafted', 'Another benchmark']
|
['Targeted']
|
['Multiple choice']
|
['Exact match', 'Distribution']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Representative']
|
['Mean']
|
kannenAestheticsCulturalCompetence2024
|
Beyond Aesthetics: Cultural Competence in Text-to-Image Models
|
Include
| null | null |
This work introduces CUBE, the first benchmark designed to evaluate the cultural competence of Text-to-Image (T2I) models through the lenses of cultural awareness and cultural diversity. Using structured knowledge bases and large language models, the authors construct a scalable framework and dataset spanning 8 countries and 3 cultural domains (cuisine, landmarks, and art), revealing major cultural representation gaps in current T2I systems.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
cultural competence in text-to-image models.
|
Yes
|
Our benchmark aspires to enable reliable, trustworthy, and tangible measurement of text-to-image generative models for two distinct yet complementary behaviors: cultural awareness (i.e., the model’s ability to reliably and accurately portray objects associated with a particular culture), and cultural diversity (i.e., the model’s ability to suppress oversimplified stereotypical depiction for an underspecified input that references a specific culture).
|
Subset
| null |
The task is defined as evaluating text-to-image models on their cultural competence by measuring two components:
Cultural awareness – the ability to faithfully and realistically generate images of specific cultural artifacts.
Cultural diversity – the ability to produce a varied and representative set of cultural outputs from under-specified prompts, using a quality-weighted diversity metric.
|
A text prompt referring to a specific cultural artifact (e.g., a dish, landmark, or clothing item) associated with a particular country - this is used to evaluate the model's generation.
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Expert-crafted task examples (e.g. hand-written examples), Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
1k
|
Yes
|
Each item includes metadata such as the artifact name, country of origin, and concept category (cuisine, landmark, or art), and is used to assess cultural awareness or diversity.
|
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number)
|
Human ratings (text quality, preference, NOT manual scoring of other metrics), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean, None
|
No
| null | null |
https://github.com/google-research-datasets/cube
|
CUBE (CUltural BEnchmark for Text-to-Image models)
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
No
| null |
simple mean + std
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions), Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null |
No
|
Knowledge
|
Cultural
| null |
['Author-crafted', 'Expert-crafted', 'Procedurally-generated', 'LLM-generated']
|
['Convenience', 'Targeted', 'Criterion']
|
['Short free response']
|
['Human ratings', 'LLM-as-a-Judge']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Representative', 'Constructed']
|
['Mean', 'Std']
|
yangInterCodeStandardizingBenchmarking2023
|
InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
|
Include
| null | null |
A standard coding benchmark with an interactive environment. An early agentic coding benchmark.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Interactive code generation
|
No
|
(Implied definition) - Involves writing code over many steps to achieve some task in a code environment
|
Comprehensive
| null |
Given a coding instruction in natural language, the agent has to issue code over multiple steps to achieve the goal. It can choose to submit its work at any time.
|
A single set of natural language instructions and a docker-based environment. Each question has a gold standard solution.
| null |
Modified from another benchmark (e.g. translation into another language)
|
1351
|
Yes
|
Difficulty level, programming language
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Extended interaction (e.g. conversation, calling an API and processing the response), Structured response (e.g. valid JSON, API call alone)
|
Execution-based evaluation (unit tests)
| null |
Adapt text-to-code datasets (NL2Bash, Spider, MBPP) to an agentic setting.
|
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Problem difficulty, programming language
|
None; a minor evaluation of success after n actions (agentic setting), not among the main metrics.
|
https://intercode-benchmark.github.io
|
InterCode
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
Yes
|
Limited set of programming languages
|
Mean, standard errors
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Not applicable
| null |
Yes
|
Agents
|
Coding
| null |
['Another benchmark']
|
['Convenience']
|
['Interaction', 'Structured']
|
['Reward']
|
['Contested']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Partial']
|
['Mean', 'Std']
|
yanCodeScopeExecutionbasedMultilingual2024
|
CodeScope: An Execution-based Multilingual Multitask Multidimensional Benchmark for Evaluating LLMs on Code Understanding and Generation
|
Include
| null | null |
Evaluation of code understanding and generation capacities. "An execution-based, multilingual, multitask, multidimensional evaluation benchmark"
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
LLM capabilities on coding tasks
|
Yes
|
Defines the high-level phenomenon as consisting of eight different sub-tasks (code summarization, code smell, code review, automated testing, Program synthesis, Code translation, Code repair, Code optimization). Each of these has a clear definition in the paper.
|
Comprehensive
|
They explicitly make big claims about the phenomenon. "We built the first-ever comprehensive benchmark for evaluating LLMs on code understanding and generation tasks"
|
Each of the eight sub-tasks has a clear operationalisation e.g. "The input is a programming scenario described in natural language, including sample inputs and outputs of the problem, while the expected output is code that can solve the corresponding problem"
|
Eight different categories, but all are some coding task with instructions in natural language, e.g. generate a function with specific characteristics.
| null |
Human exam questions (e.g. GRE questions), Real task examples (e.g. GitHub issues), Modified from another benchmark (e.g. translation into another language)
|
13390
|
Yes
|
Task, Type of task (Summarisation, problem solving, efficiency)
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF), Execution-based metrics.
| null |
Many different sources.
|
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Tasks, Task type
|
pass@k (any correct answer in k trials)
|
https://github.com/WeixiangYAN/CodeScope
|
CodeScope
|
Contested
|
Yes
|
Mixed. For the execution-based tasks, yes, but for the code summarisation tasks they use BLEU/CodeBLEU
|
Yes
|
Yes
|
Yes
|
The benchmark is itself realistic
|
No
|
No
| null |
Mean, standard deviation
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people), Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null |
No
|
Code Generation
| null | null |
['Human exams', 'Real task', 'Another benchmark']
|
['Convenience']
|
['Structured']
|
['Exact match', 'Soft match', 'Reward']
|
['Contested']
|
['Yes']
|
['Partially']
|
['Realistic']
|
['No']
|
['Partial', 'Representative']
|
['Mean', 'Std']
|
zhangCarefulExaminationLarge2024
|
A Careful Examination of Large Language Model Performance on Grade School Arithmetic
|
Include
| null | null |
The paper introduces GSM1k, which mirrors GSM8k in style but is guaranteed to be absent from model pre-training, so that it probes genuine reasoning capability rather than memorization. LLMs stumble on it compared with GSM8k, exposing memorization in many, though frontier models still generalize well.
|
This dataset is human-written and rigorously matched to GSM8k, which makes comparisons against the prior gold-standard benchmark, GSM8k, meaningful.
|
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Elementary mathematical reasoning/ grade-school level Math Word-Problem
|
Yes
|
GSM1k consists of 1205 problems requiring only elementary mathematical reasoning to solve. We created GSM1k using human annotators. Annotators were prompted with 3 example GSM8k problems and asked to produce novel problems of a similar difficulty level. The precise instructions and UI given to the annotators is available in Appendix A. All problem annotators were instructed to create problems solvable with only basic arithmetic (addition, subtraction, multiplication, and division) and which did not require any advanced math concepts. As is the case with GSM8k, all problem solutions are positive integers. No language models were used to construct this dataset.
|
Subset
|
The benchmark is primarily designed to detect dataset contamination effects.
|
For each item in GSM1k, the model is given a grade‑school word problem and must compute the solution using only basic arithmetic (addition, subtraction, multiplication, and division). It is expected to output a single positive‑integer answer, which is scored by exact numeric match against the gold key.
| null | null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Expert-crafted task examples (e.g. hand-written examples), Modified from another benchmark (e.g. translation into another language)
|
1205
|
Yes
|
Answer magnitude bucket, estimated resolution steps
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall), Human ratings (text quality, preference, NOT manual scoring of other metrics), LLM post-processing (extracting answers, reformatting for automated scoring)
|
Error analysis with a human ablation to check for answers that are correct but mis-formatted
|
GSM8k is a dataset of 8.5K high-quality, linguistically diverse grade-school math word problems created by human problem writers. Both GSM8k and the new GSM1k benchmark are synthetic, human-written word-problem datasets: they look like questions you might find on a grade-school exam, but they were created from scratch by annotators, not lifted from any real textbook, SAT, GRE, or other examination papers.
Expert-crafted task examples: quality checks were done by selected reviewers ("After initial creation, each task was manually reviewed by a subset of trusted annotators selected for strong past performance.")
|
Industry
|
Partially released; at present, only 50 examples from GSM1k are released to prevent worries around data contamination. (In paper: "We do not intend to release GSM1k publicly at this time to prevent a similar problem of data contamination occurring in the future. However, we plan to run recurring evaluations of all major open- and closed-source releases and to continually update our results. We will also open source our entire evaluation code so that the public version of our results can be reproduced. Additionally, we commit to open sourcing the entire benchmark when either 1) the top open source models score over 95% on GSM1k or 2) June 2025, whichever comes earlier.")
| null | null |
Test
| null |
The last numeric token of the response is extracted and should contain the integer-format answer.
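A minimal sketch of that extraction step, assuming a simple regex over the raw model response; the helper name and exact regex are illustrative, not taken from the paper's released evaluation code.

```python
import re

def extract_final_integer(response: str):
    """Pull the last integer-valued numeric token from a model response.

    Commas inside numbers (e.g. "1,205") are stripped before matching.
    Returns None when no integer is found.
    """
    cleaned = response.replace(",", "")
    matches = re.findall(r"-?\d+", cleaned)
    return int(matches[-1]) if matches else None

print(extract_final_integer("Step 1: 3 * 4 = 12. The answer is 12"))  # 12
```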
|
Simple Mean
|
No
| null | null |
Github
|
GSM1K
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
Yes
|
Yes
|
A comparable difficulty to the original benchmark, Human indistinguishability, Matched human difficulty, Prompt/Answer-format ablations
|
Mean, Spearman/Pearson correlations
(For completeness, they also report the standard Pearson correlation, but note that Pearson is not the ideal metric because the curve of best fit does not appear linear.)
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
|
The questions are modeled on existing GSM8k grade‑school questions.
|
Single cohesive phenomenon
|
Not applicable
| null |
Yes
|
Reasoning
|
Mathematics
| null |
['Author-crafted', 'Expert-crafted', 'Another benchmark']
|
['Targeted', 'Criterion']
|
['Short free response']
|
['Exact match', 'Human ratings', 'LLM post-processing']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative']
|
['Mean', 'Other']
|
wuEvaluatingAnalyzingRelationship2024
|
Evaluating and Analyzing Relationship Hallucinations in Large Vision-Language Models
|
Include
| null | null |
This work introduces R-Bench, a new benchmark designed to evaluate hallucinations in inter-object relationships. It identifies three key sources of hallucination and reveals that LVLMs often ignore visual input, depend too heavily on language priors, and struggle with spatial reasoning due to long-tail distribution biases in training data.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
visual understanding, hallucination
|
Yes
|
Existing LVLMs often tend to generate responses that are inconsistent with the content of the images. This issue is particularly critical for LVLMs, which are expected to accurately comprehend images and produce answers consistent with the content of the visual input. There exists a notable gap in addressing hallucinations related to inter-object relationships.
|
Subset
| null |
Given image-level and instance-level questions, the model provides "yes" or "no" labels.
|
<image> question: Is there a man swinging a bat? answer: Yes
| null |
LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
11651
|
Yes
|
number of objects, number of relationships
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
No, link is broken
| null | null |
Test
| null |
It is a classification task with "yes" and "no" labels.
|
Simple Mean
|
Yes
| null | null |
https://github.com/mrwu-mac/R-Bench
|
R-Bench
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Not applicable
| null |
No
|
VQA
| null | null |
['LLM-generated']
|
['Convenience']
|
['Short free response']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
|
['Mean']
|
liNaturalBenchEvaluatingVisionlanguage2024
|
NaturalBench: Evaluating Vision-Language Models on Natural Adversarial Samples
|
Include
| null | null |
This paper introduces NaturalBench to evaluate vision-language models on their natural adversarial samples, i.e. samples that challenge models significantly more than humans. NaturalBench offers comprehensive skill tags to assess compositional reasoning abilities and highlights model biases in VLMs.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
visual reasoning on adversarial samples
|
Yes
|
For discriminative tasks like visual recognition, adversarial samples are images that models misclassify. For generative VLMs trained on tasks like VQA, we define their adversarial samples as image-question pairs that humans can easily answer but models cannot.
|
Subset
|
NaturalBench requires diverse visio-linguistic skills, such as attribute bindings, spatial/action/part relations, and advanced reasoning, including comparison and logic. They tag each sample with all applicable skills from a defined taxonomy of 27 skills.
|
They incorporate two tasks:
- binary VQA: yes or no
- multiple-choice VQA
|
Each sample includes two images, two questions based on these images, and corresponding gold answers that are intentionally contradictory. It also contains metadata describing the type of skill or reasoning required to answer the questions. It also contains the source dataset from which the images were taken.
| null |
Modified from another benchmark (e.g. translation into another language), LLM-generated task examples (e.g. Filtered from responses to a prompt)
| null |
Yes
|
The samples are tagged based on a skill taxonomy the authors crafted that contains: 8 types of objects, 8 types of attributes, 3 types of relations (with spatial relation further divided into 4 subtypes), and 5 types of reasoning.
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null |
The authors performed a human evaluation/verification of the generated VQA pairs.
|
Academia
|
Yes
| null | null |
Test
|
1,900
| null |
They introduce three aggregated metrics. The "question accuracy" (Q-Acc) metric awards a point only if a model correctly answers a question for both images. The "image accuracy" (I-Acc) metric awards a point when a model correctly answers both questions for an image. The "group accuracy" (G-Acc) metric awards a point when a model correctly answers all four question-image pairs in a group.
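A minimal sketch of these three aggregate metrics, assuming each NaturalBench group contributes a 2x2 grid of per-question, per-image correctness; the example values are invented.

```python
def naturalbench_metrics(groups):
    """groups: list of 2x2 boolean grids; grid[q][i] is True when the model
    answered question q correctly for image i.

    Returns (Q-Acc, I-Acc, G-Acc): Q-Acc needs a question right for both
    images, I-Acc needs both questions right for an image, G-Acc needs all
    four question-image pairs right.
    """
    q_hits = i_hits = g_hits = 0
    for grid in groups:
        q_hits += sum(all(row) for row in grid)                              # per question
        i_hits += sum(all(grid[q][i] for q in range(2)) for i in range(2))   # per image
        g_hits += int(all(all(row) for row in grid))                         # per group
    n = len(groups)
    return q_hits / (2 * n), i_hits / (2 * n), g_hits / n

# One group: question 0 right on both images, question 1 right only on image 0.
print(naturalbench_metrics([[[True, True], [True, False]]]))  # (0.5, 0.5, 0.0)
```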
|
No
| null | null |
https://huggingface.co/datasets/BaiqiL/NaturalBench
|
NaturalBench
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
Yes
|
No
| null |
simple mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Composite phenomenon
|
Yes
| null |
No
|
Language Modelling
|
Robustness
| null |
['Another benchmark', 'LLM-generated']
|
['Convenience']
|
['Multiple choice']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Partial']
|
['Mean']
|
yinSafeWorldGeodiverseSafety2024
|
SafeWorld: Geo-Diverse Safety Alignment
|
Include
| null | null |
The paper introduces a benchmark to evaluate LLMs' ability to generate culturally sensitive and legally compliant responses across diverse global contexts. It also proposes a multi-dimensional automatic safety evaluation framework for assessing the contextual appropriateness, accuracy, and comprehensiveness of responses.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
alignment with geo-diverse safety standards
|
Yes
|
"ability to respond appropriately, precisely, and helpfully to queries involving a culturally or legally sensitive content" (page 4)
with cultural and legal safety defined as:
"Cultural safety defines an environment that is spiritually, socially, emotionally, and physically safe for people [44]. It is about adhering to cultural and social norms, which dictate appropriate scenario within a society." (page 4)
"Legal safety refers to abiding the policies enacted by governments, with each country having its own set of regulations designed to maintain social order and stability. These rules establish standards for acceptable scenario, resolve conflicts, and protect the rights and well-being of individuals and communities." (page 4)
|
Subset
| null |
There are four types of tasks, each consisting of a scenario (illustrating a culturally or legally sensitive or insensitive situation) and a question. Depending on the task, the LLM is expected to answer or decline to answer, identify the insensitive scenario / guideline violation, etc.
|
Query (scenario + question), type of query, ground-truth norm/policy
| null |
LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
2,342
|
Yes
|
query type, ground truth norms/policies, country/region
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Free response (e.g. summary paragraph, executable code)
|
LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics), LLM post-processing (extracting answers, reformatting for automated scoring)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train
|
Train: 45,746
| null |
Simple Mean
|
No
| null | null |
https://github.com/PlusLabNLP/SafeWorld
|
SAFEWORLD
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Composite phenomenon
|
Yes
| null |
No
|
Alignment
|
Safety
| null |
['LLM-generated']
|
['Targeted']
|
['Free response']
|
['LLM-as-a-Judge', 'LLM post-processing']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
|
['Mean']
|