bibkey | title | inclusion | exclusion_criteria | exclusion_criteria_detail | short_summary | contribution | phenomenon_short | target_phenomenon | phenomenon_defined | phenomenon_definition | definition_scope | purpose_extra | task_definition | task_item_definition | task_definition_detail | task_source | task_dataset_size | task_dataset_metadata | dataset_metadata_detail | dataset_sampling_method | response_format | metric_definition | metric_definition_detail | task_source_detail | authorship | benchmark_availability | procedural_extra | notes_extra | task_train_val | task_dataset_size_extra | response_format_detail | metric_aggregation | metric_subscores | metric_subscores_detail | metric_metascoring | benchmark_location | benchmark | phenomenon_contested | task_face_validity | metric_face_validity | result_interpretation | results_comparison | results_comparison_explanation | results_realism | results_human_baseline | results_author_validity | results_author_validity_detail | metric_statistics | metric_access | task_ecology | task_ecology_detail | definition_integrity | definition_integrity_detail | task_dataset_size_detail | metric_fewshot | phenomenon_taxonomy_root | phenomenon_taxonomy_leaf | phenomenon_taxonomy_alternate | task_source_clean | dataset_sampling_method_clean | response_format_clean | metric_definition_clean | phenomenon_contested_clean | task_face_validity_clean | metric_face_validity_clean | results_realism_clean | results_author_validity_clean | task_ecology_clean | metric_statistics_clean |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
liuRepoBenchBenchmarkingRepositorylevel2024
|
RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems
|
Include
| null | null |
Evaluating whether models can do repository-level code generation.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Repository-level coding tasks.
|
No
| null |
Comprehensive
| null |
Three sub-tasks: (1) retrieve the most relevant code snippets from a repository, (2) predict the next line of code, and (3) do both simultaneously.
|
Retrieval: identify the most relevant code snippets to predict the next line given an in-file context. Generation: predict the next line of code based on a given in-file context.
| null |
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions)
|
1669
|
Yes
|
Task, difficulty level, different code masking settings
|
Random sample (creators defined a task space and sampled from it), Specific criteria (items were taken from a larger set based on specified rules)
|
Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF)
|
Not ideal: most programming benchmarks use some kind of execution-based metric. That is not applicable to the retrieval task here, but it would be applicable to the generation task.
| null |
Academia
|
Yes
| null | null |
Test, Train
|
25301
| null |
Simple Mean
|
Yes
|
Task, programming language
|
accuracy@k
|
https://github.com/Leolty/repobench
|
REPOBENCH
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
No
| null |
mean,
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people), Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null |
No
|
Code Generation
| null | null |
['Real task', 'Author-crafted']
|
['Random', 'Criterion']
|
['Structured']
|
['Exact match', 'Soft match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Partial', 'Representative']
|
['Mean']
|
zhangBenchmarkingDataScience2024
|
Benchmarking Data Science Agents
|
Include
| null | null |
This paper introduces DSEval, a novel benchmark for evaluating LLMs as data science agents.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Data science
|
Yes
|
Executing a wide array of data-centric tasks, including manipulation, aggregation, visualization, and analysis, through natural language commands.
|
Comprehensive
| null |
The task involves responding to a natural language data science query using both the query itself and a stateful runtime session, which provides contextual information such as variables, execution history, and files. The agent must generate and return executable code that solves the query.
|
Each problemset is represented as a Python (*.py) file with YAML syntax inside to “configure” the problem, including the query, validator configurations, execution restrictions, and external data required.
| null |
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
825
|
Yes
|
Difficulty
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
|
The “Pass Rate”, which is the number of problems passed divided by all problems in the benchmark, is the default metric used to assess the quality of an agent.
| null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null | null |
Yes
|
By subtask dataset
| null |
https://github.com/MetaCopilot/dseval/tree/master
|
DSEval
|
Widely-agreed
|
Yes
|
No
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
Somewhat
|
While not explicitly addressed, the dataset construction ensures that the task items are sourced from real-world data science tasks -- which adds to its construct validity.
| null |
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Composite phenomenon
|
Yes
| null |
No
|
Code Generation
| null | null |
['Real task', 'Author-crafted', 'LLM-generated']
|
['Targeted', 'Criterion']
|
['Structured']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['No']
|
['Realistic']
|
['Yes']
|
['Partial']
| null |
kimFANToMBenchmarkStresstesting2023
|
FANToM: A Benchmark for Stress-testing Machine Theory of Mind in Interactions
|
Include
| null | null |
FANTOM is a comprehensive benchmark with 10,000 questions designed to evaluate theory-of-mind reasoning capabilities in LLMs by tracking beliefs across multi-party conversations with information asymmetry. The benchmark reveals that even leading LLMs struggle significantly with belief tracking compared to humans, demonstrating only "illusory" theory-of-mind abilities that break down when tested systematically.
|
(1) First conversation-based theory-of-mind benchmark, (2) explicit test for consistency checks across diverse question formats, (3) 256 multiparty conversations around a certain topic
|
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Theory of mind, conversational question-answering
|
Yes
|
The goal of FANTOM is to effectively measure how well models can track the belief of multiple characters in conversations where some information may be inaccessible to some participants. Our aim is to design questions at different levels that evaluate a model’s capability for a coherent understanding of others’ mental states. In doing so, we are particularly interested in identifying instances of illusory ToM, which we define as
situations where a model may answer some questions correctly but fails to answer others that require the same type of ToM reasoning.
|
Subset
| null |
Models read a multiparty conversation (short or full) and answer six types of questions (free‑form, multiple‑choice, list, yes/no) about participants’ beliefs or answerability.
Model responses are expected to be "yes", "knows", "does know", or similar types of answers.
| null | null |
Crowd-sourced task examples (e.g. Prolific-created tasks), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
256 conversations with 1,415 theory-of-mind belief questions, 703 fact-checking questions, and 2,689 answerability questions.
|
No
|
belief questions (theory of mind: what the characters believe about something), fact-checking questions, answerability questions
|
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Short free response (e.g. single word or number), Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics)
|
BELIEFQ[Dist.] judged by sentence‑BERT cosine vs. two references and token‑F1; others via strict match
|
Conversations autogenerated with davinci‑003,
inaccessible info & QA generated with GPT‑4,
all sets manually validated by 32 MTurk workers
|
Academia
|
Yes
|
https://hyunw.kim/fantom/
| null |
Test
| null |
Open text, binary or multiple choice types
|
Simple Mean
|
Yes
|
Sub‑scores are broken out by each question family — BELIEF (choice, distance, token‑F1), ANSWERABILITY (list accuracy, Y/N F1, “All”), plus FACT token‑F1. Additional splits cover first‑ vs. second‑order (cyclic/acyclic) beliefs and short/full conversation contexts.
| null | null |
FANToM
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
Yes
|
Yes
|
conversation grounding (reporting bias), adversarial false answers (wrong options overlap with words in the context to rule out surface matching), multi-format question sets (illusory-success test via consistency checks across different formats), manual MTurk validation (conversation-answer coherence and correctness)
|
Means, comparisons with percentage point gaps.
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
|
The reframed dataset contains questions asking what a person believes about certain information when a character randomly joins or leaves the conversation, creating natural information gaps.
|
Composite phenomenon
|
Yes
| null |
No
|
Theory of Mind
| null | null |
['Crowd-sourced', 'LLM-generated']
|
['Convenience', 'Targeted', 'Criterion']
|
['Multiple choice', 'Short free response', 'Free response']
|
['Exact match', 'LLM-as-a-Judge']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Mean']
|
huInfiAgentDABenchEvaluatingAgents2024
|
InfiAgent-DABench: Evaluating Agents on Data Analysis Tasks
|
Include
| null | null |
This paper introduces InfiAgent-DABench, a benchmark designed to evaluate LLM-based agents on data analysis tasks. These tasks require agents to solve the tasks end-to-end by interacting with an execution environment.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Data Analysis
| null |
Data analysis is a systematic process of examining, cleaning, transforming, and modeling data to discover useful information, inform conclusions, and support decision-making.
|
Subset
| null |
Given a file containing data and a question based on the data, generate executable code to answer the question and provide the final answer.
|
Each item contains a question, the concepts tested, constraints, an answer format, the file name pointing to the data table, and a difficulty level.
| null |
Real task examples (e.g. GitHub issues),
|
257
|
Yes
|
Difficulty Level, Concepts
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null |
Another LLM is used in the pipeline to reformat the answer
|
Simple Mean
|
No
| null | null |
https://github.com/InfiAgent/InfiAgent/tree/main
|
huInfiAgentDABenchEvaluatingAgents2024
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
Yes
|
The authors define a 6-point evaluation covering the suitability, reasonableness, value, restrictiveness, alignment, and correctness of the generated questions and conduct a human evaluation to verify each question. 85% of samples qualify and are kept in the final set, demonstrating the effectiveness of their dataset construction method. They also compare GPT-4-generated questions to human-made ones and show that the questions written by human experts are quite similar to those generated by GPT-4.
| null |
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Not applicable
| null |
No
|
Code Generation
| null | null |
['Real task', 'Unknown']
|
['Convenience']
|
['Short free response']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Partial']
| null |
fanNPHardEvalDynamicBenchmark2024
|
NPHardEval: Dynamic Benchmark on Reasoning Ability of Large Language Models via Complexity Classes
|
Include
| null | null |
NPHardEval is a comprehensive benchmark consisting of 900 algorithmic tasks across the P, NP-Complete, and NP-Hard complexity classes, designed specifically to evaluate large language models' reasoning capabilities. It aims to accurately assess LLMs' algorithmic problem-solving abilities across varying computational complexity levels.
|
(1) 9 distinct tasks × 10 difficulty levels,
(2) automatic generation & verification pipeline,
(3) first benchmark to ground LLM evaluation in computational‑complexity theory,
(4) open‑sourced
|
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
reasoning, complexity
|
Yes
|
Reasoning is operationalised as solving decision problems in the P, NP-Complete, and NP-Hard complexity classes.
|
Subset
| null |
Each task provides a specific problem scenario along with expected output format (such as True/False decisions, path lists, integers, color mappings, or other specified formats).
|
One problem instance + required output (True/False, path list, integer, colour mapping, etc.).
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Procedurally-generated task examples (e.g. Creating instances from a template)
|
900 per monthly release (100 per task)
|
Yes
|
Complexity class, task type, algorithmic difficulty level
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall), LLM post-processing (extracting answers, reformatting for automated scoring)
| null |
Data synthesis for graph/linear tasks, difficulty depending on size/weights
|
Academia
|
Yes
|
MIT licensed; refreshed monthly, with additional data released (partially)
| null |
Test
|
NA
| null |
Weighted Mean
|
Yes
|
Task difficulty, complexity classes, model performances
| null |
https://github.com/casmlab/NPHardEval
|
NPHardEval
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
No
| null |
Wilcoxon tests, variance analysis
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
|
Not intended for direct end‑user interaction but aims for analytic rigor.
|
Composite phenomenon
|
Yes
| null |
Yes
|
Reasoning
| null | null |
['Author-crafted', 'Procedurally-generated']
|
['Targeted', 'Criterion']
|
['Short free response']
|
['Exact match', 'LLM post-processing']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
|
['Other']
|
wangCanLanguageModels2024
|
Can Language Models Serve as Text-Based World Simulators?
|
Include
| null | null |
This paper introduces a new benchmark, containing a dataset of text game state transitions and accompanying game tasks. They use this to directly quantify how well LLMs can serve as text-based world simulators. They test GPT-4 on this dataset and find that, despite its impressive performance, it is still an unreliable world simulator without further innovations.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
abilities of LLMs to directly simulate virtual environments
|
No
|
The phenomenon is not defined in detail; I understood it as the ability to simulate the world.
|
Comprehensive
| null |
In this task, LLMs serve as world simulators in text-based virtual environments, in which an agent receives observations and proposes actions in natural language in order to complete certain objectives.
|
Each task is a human-authored text game that simulates a different scientific or commonsense reasoning concept. Each item is the game state (s_t, r_t, d_t) as well as the intermediate state s^act_(t+1) at each time step t, represented as a JSON object.
| null |
Modified from another benchmark (e.g. translation into another language), The dataset is derived from the open BYTESIZED32 corpus.
|
76,369 virtual text environment state transitions
|
No
| null |
Random sample (creators defined a task space and sampled from it)
|
Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
No
| null | null |
https://github.com/cognitiveailab/GPT-simulator/tree/main
|
BYTESIZED32-State-Prediction
|
Not defined
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
No
| null |
Simple mean
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Authors' description is unclear
|
Not applicable
| null |
No
|
Grounding
| null | null |
['Another benchmark', 'Another benchmark']
|
['Random']
|
['Free response']
|
['Exact match']
|
['No definition']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
|
['Mean']
|
liGSMplusComprehensiveBenchmark2024
|
GSM-Plus: A Comprehensive Benchmark for Evaluating the Robustness of LLMs as Mathematical Problem Solvers
|
Include
| null | null |
The authors create GSM-PLUS, an adversarial extension of GSM8K that perturbs each seed question in eight different ways to test the robustness of LLM mathematical reasoning. They observe large performance drops on critical-thinking and arithmetic variations across four prompting strategies.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
reasoning, math
|
Yes
|
Math abilities (numerical, arithmetic, understanding, distractor handling, and critical thinking) with 8 perturbation types
|
Subset
| null |
Given a word‑problem, the model must output the numerical answer.
|
Question context + Question (+additional constraints with variation on the question)
|
Robustness is measured by the consistency of solving both the seed (original) question and its perturbed variants.
|
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Crowd-sourced task examples (e.g. Prolific-created tasks), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
1319 seed (original) data, 10552 perturbed data
|
Yes
|
Seed, perturbation type, subcategory, gold answer
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall), LLM post-processing (extracting answers, reformatting for automated scoring), Distribution (perplexity, calibration, correlation)
|
PDR (performance drop rate), ASP (accurately-solved pairs)
|
GPT‑4 first rewrites each GSM8K test item, and approximately 20% of variations are manually revised by paid annotators.
|
Academia
|
Yes
|
https://qtli.github.io/GSM-Plus/
| null |
Test
| null | null |
Simple Mean
|
Yes
|
different perturbation level types (Numerical Substitution; Digit Expansion; Integer-decimal-fraction Conversion; Adding Operation; Reversing Operation; Problem Understanding; Distractor Insertion; Critical Thinking.)
| null | null |
GSM-Plus
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
Yes
|
Yes
|
Annotation consistency scores and pass rates to justify the validity.
|
Means and percentage differences
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null |
Yes
|
Reasoning
|
Mathematical
| null |
['Author-crafted', 'Crowd-sourced', 'LLM-generated']
|
['Targeted']
|
['Short free response']
|
['Exact match', 'LLM post-processing', 'Distribution']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Mean']
|
zhangMultiTrustComprehensiveBenchmark2024
|
MULTITRUST: A Comprehensive Benchmark Towards Trustworthy Multimodal Large Language Models
|
Include
| null | null |
The paper introduces a benchmark on the trustworthiness of MLLMs across five primary aspects: truthfulness, safety, robustness, fairness, and privacy. It benchmarks 20+ MLLMs and highlights the complexities introduced by multi-modality.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
trustworthiness
|
Yes
|
"Drawing on extensive studies in trustworthy LLMs and distilling from relevant literature of MLLMs, we pinpoint 5 primary aspects of trustworthiness for evaluating MLLMs, including truthfulness, safety, robustness, fairness, and privacy. In particular, truthfulness, safety, and robustness guarantee the models’ reliability and stability in preventing undesirable outcomes, i.e., errors, harms, and variations under different conditions" (page 3)
|
Subset
| null |
There are 32 tasks that are generative and/or discriminative, and multimodal or text-only, with a wide range from NSFW image description to PII leakage in conversations. They utilize off-the-shelf datasets, augment existing ones, and create their own.
|
Varies greatly from task to task. Some include images and a prompt, others just a prompt.
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language)
|
"more than 15k" (page 103)
|
Yes
|
image information, types of queries
|
Targeted items (creators defined a task space and chose tasks within it strategically), Unknown
|
Multiple choice, Short free response (e.g. single word or number), Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics), Distribution (perplexity, calibration, correlation), Correlation (Matthew's correlation, Pearson's r)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
In the main body, it's sub-aspect (e.g., truthfulness, safety, robustness). In the appendix, there are subsets of many of the tasks/existing benchmarks they run.
| null |
https://github.com/thu-ml/MMTrustEval
|
MULTITRUST
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
simple mean/sum, correlation between overall rankings and general capabilities based on MMBench
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people), Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null |
No
|
Alignment
|
Safety
| null |
['Author-crafted', 'Another benchmark']
|
['Targeted', 'Unknown']
|
['Multiple choice', 'Short free response', 'Free response']
|
['Exact match', 'LLM-as-a-Judge', 'Distribution', 'Correlation']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial', 'Representative']
|
['Mean', 'Other']
|
bittonVisITbenchDynamicBenchmark2023
|
VisIT-Bench: A Dynamic Benchmark for Evaluating Instruction-Following Vision-and-Language Models
|
Include
| null | null |
VisIT-Bench (Visual InsTruction Benchmark) is a benchmark for evaluating instruction-following vision-language models for real-world use. The authors curated 70 "instruction families" that they believe instruction-tuned vision-language models should be able to address, and they conduct a large-scale empirical comparison of multimodal instruction-following models using their benchmark.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Instruction following capabilities
|
No
| null |
Subset
| null |
The format of the questions is either MCQ or binary QA (Yes/No).
They provide a collection of 70 different open-generation tasks, such as reasoning over plots, object recognition, location understanding, etc.
|
Each instance contains an instruction, input image(s), an instruction-conditioned caption (a human-crafted caption for the image(s)/instruction), and a human-verified reference. Instructions are image-contextual imperative requests or questions, e.g., for an image of pancakes, a user asks “How can I cook this in a healthy way?”.
| null |
Crowd-sourced task examples (e.g. Prolific-created tasks), Modified from another benchmark (e.g. translation into another language), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
592
|
Yes
|
instruction type, image source
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice
|
n-gram (BLEU, ROUGE, chrF), Human ratings (text quality, preference, NOT manual scoring of other metrics), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics), Elo ratings, Win rate
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
stratification of results based on the instruction category
| null |
https://huggingface.co/datasets/mlfoundations/VisIT-Bench
|
VisIT-Bench
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Complete real task (e.g. providing medical advice to real people interactively)
| null |
Composite phenomenon
|
Yes
| null |
No
|
Instruction Following
| null | null |
['Crowd-sourced', 'Another benchmark', 'LLM-generated']
|
['Targeted']
|
['Multiple choice']
|
['Soft match', 'Human ratings', 'LLM-as-a-Judge', 'Reward']
|
['Contested']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Complete']
|
['Mean']
|
singhIndicGenBenchMultilingualBenchmark2024
|
IndicGenBench: A Multilingual Benchmark to Evaluate Generation Capabilities of LLMs on Indic Languages
|
Include
| null | null |
This paper introduces INDICGENBENCH, a benchmark for evaluating LLMs on user-facing generation tasks across a diverse set of 29 Indic languages covering 13 scripts and 4 language families. INDICGENBENCH is composed of diverse generation tasks like cross-lingual summarization, machine translation, and cross-lingual question answering. INDICGENBENCH extends existing benchmarks to many Indic languages through human curation, providing multi-way parallel evaluation data for many under-represented Indic languages.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Indic language generation capabilities
|
Yes
|
The phenomenon is defined as the ability to generate text in Indic languages.
|
Subset
| null |
Generation tasks are cross-lingual summarization, machine translation, and cross-lingual question answering.
|
Summarization and translation: A text sentence
Question-answering: A sentence as a question and multiple choices
| null |
Modified from another benchmark (e.g. translation into another language)
|
train:~10k, test:~100k, dev:~60k
| null |
language
|
Random sample (creators defined a task space and sampled from it)
|
Multiple choice, Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF)
| null | null |
Industry
|
Yes
| null | null |
Test, Train
| null | null |
Simple Mean
|
Yes
|
language, and sub-task
| null |
https://github.com/google-research-datasets/indic-gen-bench/
|
IndicGenBench
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
Simple mean
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null |
Yes
|
NLP
| null |
Multilinguality
|
['Another benchmark']
|
['Random']
|
['Multiple choice', 'Free response']
|
['Exact match', 'Soft match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
|
['Mean']
|
castillo-boladoPromptsDynamicConversational2024
|
Beyond Prompts: Dynamic Conversational Benchmarking of Large Language Models
|
Include
| null | null |
This paper introduces the LTM benchmark designed to evaluate the Long-Term Memory (LTM) and Continual Learning (CL) capabilities of conversational agents. The LTM Benchmark engages agents in a single, prolonged conversation, incorporating multiple tasks and distractions to simulate realistic and meaningful interactions.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Long Term Memory
|
Yes
|
The ability to recall and effectively utilize past information. LTM encompasses several skills related to the generation and management of memories, which include but are not limited to recall, information integration, and the handling of conflicting information.
|
Subset
| null |
An LLM engages in a prolonged, dynamic conversation where multiple tasks are interleaved. Within this conversation, specific pieces of information, termed 'needles,' are included amidst unrelated content ('haystack'). These needles are essential for completing subsequent tasks. The LLM is later queried on these tasks, requiring it to retrieve and integrate the relevant needles from the conversation history, thereby assessing its long-term memory and information integration capabilities.
|
A dynamic, multi-turn conversation containing interleaved messages from different tasks, within which specific "needles" (relevant sentences) are included.
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
33
|
No
| null |
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF), LLM post-processing (extracting answers, reformatting for automated scoring)
| null | null |
Industry
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
They include the separate runs for each test scenario subset in the Appendix.
| null |
https://github.com/GoodAI/goodai-ltm-benchmark
|
LTM Benchmark
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
| null | null | null |
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Not applicable
| null |
No
|
NLP
|
Long Context
| null |
['Author-crafted', 'LLM-generated']
|
['Targeted']
|
['Short free response']
|
['Exact match', 'Soft match', 'LLM post-processing']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['']
|
['Partial']
| null |
chuTimeBenchComprehensiveEvaluation2024
|
TimeBench: A Comprehensive Evaluation of Temporal Reasoning Abilities in Large Language Models
|
Include
| null | null |
TIMEBENCH is a comprehensive multi-task benchmark for evaluating large language models' temporal reasoning capabilities across symbolic, commonsense, and event-level reasoning. The benchmark evaluates various LLMs under different prompting conditions, revealing significant performance gaps compared to humans while providing detailed analyses of errors and scaling behaviors.
|
This work unifies previously scattered temporal reasoning datasets under a coherent taxonomic framework with standardized evaluation metrics.
|
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
temporal reasoning
|
Yes
|
TIMEBENCH focuses on a comprehensive evaluation of the temporal reasoning capabilities of large language models in challenging and complex scenarios. To achieve this goal, we summarize the difficulties and challenges faced in temporal reasoning, categorize them into three levels, and integrate diverse task formats to better align with the intricate nature of temporal reasoning (Sec 2.1)
|
Comprehensive
|
symbolic, commonsense, event relations
|
The model is required to use temporal reasoning to assign the correct entailment label to a premise-hypothesis pair, extract the correct short answer span, identify all correct options from a 4-choice multi-select question, or generate sentences with temporal keywords.
|
Context (+ temporal keywords) + question/premise (+ answer options) to predict answers.
| null |
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language)
|
19,000
|
Yes
|
Categories (symbolic/commonsense/event), type(free-form reading comprehension, natural language inference, generation, multi-select questions), human score
|
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice, Short free response (e.g. single word or number), Free response (e.g. summary paragraph, executable code), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF)
| null |
Examples are filtered and resampled, while keeping some filtered synthetic generations.
|
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Subtask, Categories
| null |
https://github.com/TimeBench/TimeBench
|
TIMEBENCH
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
Yes
|
Yes
|
Analysis of shortcomings (implicit reasoning, symbolic arithmetic)
|
Mean, percentage
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
|
3 major categories, 10 tasks, 15 subtasks within the 19,000 instances.
Approximately 11k symbolic, 3.1K commonsense, 4.9k event relevant instances.
|
Yes
|
Reasoning
|
Temporal
| null |
['Real task', 'Author-crafted', 'Another benchmark']
|
['Convenience', 'Targeted']
|
['Multiple choice', 'Short free response', 'Free response', 'Structured']
|
['Exact match', 'Soft match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative']
|
['Mean']
|
srinivasanCLiMBContinualLearning2022
|
CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks
|
Include
| null | null |
The paper introduces CLiMB, a benchmark designed to evaluate continual learning (CL) for multimodal tasks, addressing the challenges of learning both new multimodal and unimodal tasks over time. It shows that common CL methods can reduce forgetting in multimodal learning.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Continual learning, multimodal reasoning, knowledge transfer
|
Yes
|
Upstream Continual Learning of Multimodal Tasks: A candidate model M encounters a sequence of vision-language tasks... We evaluate two primary model properties in the upstream phase: upstream knowledge transfer from past learned tasks to new tasks, and withstanding forgetting of previously-seen tasks.
Downstream Transfer to Low-Shot Tasks: We evaluate the low-shot adaptation ability of the model after learning each upstream vision-language task
|
Subset
|
The paper provides specific operational definitions for components like knowledge transfer, forgetting, and low-shot adaptation.
|
Learning from a sequence of different multimodal (vision-and-language) tasks in a continual learning (CL) setting, and then transferring to low-shot multimodal and unimodal tasks.
|
Each item is framed as a classification problem: an input and a target label, where the input can be vision-only (an image), language-only (a sentence or question), or multimodal (an image paired with a question or caption).
| null |
Real task examples (e.g. GitHub issues), Modified from another benchmark (e.g. translation into another language)
|
N/A
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall), VQAScore
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
No
| null | null |
https://github.com/GLAMOR-USC/CLiMB
|
CLiMB
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
simple mean, std, relative performance changes
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people), Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
|
The benchmark consists of already-existing benchmarks; the paper does not provide any dataset-size numbers.
| null |
Language Modelling
|
Updating
| null |
['Real task', 'Another benchmark']
|
['Convenience', 'Targeted', 'Criterion']
|
['Short free response']
|
['Exact match', 'Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial', 'Constructed']
|
['Mean', 'Std']
|
liVRSBenchVersatileVisionlanguage2024
|
VRSBench: A Versatile Vision-Language Benchmark Dataset for Remote Sensing Image Understanding
|
Include
| null | null |
VRSBench is a versatile vision-language dataset and benchmark for remote sensing image understanding. This comprehensive dataset not only addresses the limitations of previous datasets, which either ignore detailed object information or suffer from quality-control issues, but also enriches the field by providing a diverse range of annotations, including detailed captions, object referring, and visual question answering with rich object information, verified by human annotators.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
remote-sensing image understanding
|
Yes
|
Remote sensing models seek to understand the Earth’s surface using imagery captured from overhead, offering a unique perspective of our physical world. This technique is instrumental in various applications, such as land use mapping, urban planning, precision agriculture, disaster management, etc.
|
Comprehensive
| null |
They construct three distinct tasks for advancing remote sensing image understanding:
• VRSBench-Cap: This challenge requires the prediction of a comprehensive description for a given remote sensing image, encapsulating intricate object details and contextual relevance.
• VRSBench-Ref: The task involves identifying and localizing specific objects from a given remote sensing image based on textual descriptions.
• VRSBench-VQA: This task aims to answer questions related to visual content in a given remote sensing image.
|
Each item in the dataset consists of an image, its caption, and a list of questions and answers based on the image. Each question is accompanied by metadata indicating its type, which specifies the aspect of the image it refers to (e.g., object coordinates, object position, object category, etc.).
| null |
Crowd-sourced task examples (e.g. Prolific-created tasks), Modified from another benchmark (e.g. translation into another language), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
37,408
|
Yes
|
Question type: which specifies the aspect of the image the question refers to (e.g., object coordinates, object position, object category, etc.).
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Short free response (e.g. single word or number), Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics), Intersection over Union (IoU)
| null | null |
Academia
|
Yes
| null | null |
Test, Train
|
Train: 85,813
| null |
Simple Mean
|
No
| null | null |
https://huggingface.co/datasets/xiang709/VRSBench
|
VRSBench
|
Contested
|
Yes
|
Yes
|
No
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Composite phenomenon
|
Yes
| null |
No
|
VQA
| null | null |
['Crowd-sourced', 'Another benchmark', 'LLM-generated']
|
['Convenience']
|
['Short free response', 'Free response']
|
['Exact match', 'Soft match', 'LLM-as-a-Judge', 'Soft match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
|
['Mean']
|
linTruthfulQAMeasuringHow2022
|
TruthfulQA: Measuring How Models Mimic Human Falsehoods
|
Include
| null | null |
Introduces TruthfulQA, a benchmark to evaluate the generation of false statements that imitate common human misconceptions across e.g. health, law, finance, and politics. Suggests that scaling up models alone won't address truthfulness issues, as, in their paper, larger models are often less truthful.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
truthfulness
|
Yes
|
"We use a strict standard, where a claim counts as true if it describes the lit-
eral truth about the real world. Claims that are only true according to a belief system or tradition (e.g. “Libras have friendly personalities”) are counted as false. TruthfulQA mostly concerns factual claims, and true factual claims are usually supported by reliable, publicly available evidence." p.3-4
|
Subset
| null |
TruthfulQA contains questions to identify answers representing common human misconceptions, requiring models to generate truthful responses instead of repeating falsehoods found in their training data.
|
Question along with sets of reference true and false answers, and a source.
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions)
|
817
|
Yes
|
category, filtered/unfiltered
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice, Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall), Human ratings (text quality, preference, NOT manual scoring of other metrics), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
category, filtered/unfiltered
| null |
https://github.com/sylinrl/TruthfulQA
|
TruthfulQA
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
Yes
|
They validate their reference answers through independent validators who disagreed with only 7% of the authors' judgments
|
Simple Means, the percentage of questions answered correctly.
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
|
The questions cover domains like health, law, finance, and politics where truthful AI responses would be crucial in applications.
|
Single cohesive phenomenon
|
Not applicable
|
817 questions: 437 "filtered" questions (adversarially filtered to be difficult for GPT-3-175B) and 380 "unfiltered" questions (expected to be difficult but not tested against the model).
|
No
|
Alignment
|
Alignment
| null |
['Author-crafted']
|
['Targeted']
|
['Multiple choice', 'Short free response']
|
['Exact match', 'Human ratings', 'LLM-as-a-Judge']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative']
|
['Mean']
|
samdarshiConnectingDotsEvaluating2024
|
Connecting the Dots: Evaluating Abstract Reasoning Capabilities of LLMs Using the New York Times Connections Word Game
|
Include
| null | null |
The paper introduces a dataset of New York Times Connections puzzles with custom metrics to evaluate top language models against human players of varying skill levels. Results show that the best models significantly underperform both novice and expert humans. Based on the empirical analysis, the work develops a knowledge taxonomy to analyze model limitations in word categorization tasks.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
(Latent) reasoning, Lexical categorization in LLMs
|
Yes
|
Abstract reasoning represents a person’s ability to solve problems, identify patterns, and work with logical systems. We propose the NYT Connections Game as a test bed for investigating the abstract reasoning capabilities of both humans and large language models (LLMs). (Section 1)
|
Subset
| null |
Each example supplies a list of 16 unordered words taken from a single NYT Connections puzzle. The model is expected to partition them into four disjoint clusters of four words and name the underlying category for each cluster, producing its solution in one shot without feedback or retries.
|
A single Connections puzzle = a list of 16 words + hidden gold groupings + category labels.
| null |
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions)
|
438 puzzles
|
Yes
|
Category difficulty, taxonomy label for each knowledge grouping.
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall), Distribution (perplexity, calibration, correlation)
| null |
An archival site consisting of all possible answer choices and
their corresponding categorizations. As the NYT does not maintain an archive of NYT Connections puzzles, we resorted to an external, third-party site for data collection. Our data spans daily problems from the conception of NYT Connections in June 2023 to August 2024. (Section 3.1.)
|
Academia
|
Yes
|
Human study used volunteer peers (14‑60 yrs).
| null |
Test
| null |
model outputs category name + four‑word list per line
|
Simple Mean
|
Yes
|
Distribution for each metric, success rates per reasoning type
| null |
https://github.com/mustafamariam/LLM-Connections-Solver
|
NYT Connections
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
Success requires broad types of knowledge
|
simple mean, weighted and unweighted clustering scores, frequency counts, Fleiss Kappa
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
|
Models/humans see identical word grids, single‑shot constraint for both.
|
Composite phenomenon
|
Yes
|
438 puzzles = 7,008 word instances; 1,752 category instances
|
Yes
|
Reasoning
| null | null |
['Real task', 'Author-crafted']
|
['Convenience']
|
['Structured']
|
['Exact match', 'Distribution']
|
['Contested']
|
['Yes']
|
['Yes']
|
['Comparison made']
|
['Yes']
|
['Representative']
|
['Mean', 'Other']
|
herediaXNLIeuDatasetCrosslingual2024
|
XNLIeu: a dataset for cross-lingual NLI in Basque
|
Include
| null | null |
XNLIeu is an expanded version of the XNLI benchmark that includes Basque, created by machine-translating and then manually post-editing the original English data to support cross-lingual NLI research in low-resource languages. Experiments with various LLMs show that post-editing significantly improves performance and that the translate-train strategy is most effective, though its advantage lessens when applied to natively created datasets.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Natural language understanding with the focus on cross-lingual natural language inference (NLI).
|
Yes
|
The Natural Language Inference (NLI) task consists in classifying pairs of sentences –a premise and a hypothesis– according to their semantic relation: entailment, when the meaning of the premise entails that of the hypothesis; contradiction, when both sentences have opposing truth conditions and can not co-occur at the same time; and neutral, when both sentences are not semantically related.
|
Subset
| null |
The task is to classify pairs of sentences—a premise and a hypothesis—into one of three categories based on their semantic relationship: entailment, contradiction, or neutral.
|
a premise, a hypothesis, and the related label (entailed, contradicts, or neutral)
| null |
Expert-crafted task examples (e.g. hand-written examples), Crowd-sourced task examples (e.g. Prolific-created tasks), Modified from another benchmark (e.g. translation into another language)
|
392,702
|
No
| null |
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
No
| null | null |
https://github.com/hitz-zentroa/xnli-eu
|
XNLIeu
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
Yes
|
The impact of machine translation vs. professional post-editing.
They justify the creation of a native Basque set to address biases and artefacts common in translation-based datasets.
|
simple mean and std
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
Not applicable
| null |
No
|
NLP
|
Understanding
|
Multilinguality
|
['Expert-crafted', 'Crowd-sourced', 'Another benchmark']
|
['Targeted', 'Criterion']
|
['Short free response']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Mean', 'Std']
|
alamCTIBenchBenchmarkEvaluating2024
|
CTIBench: A Benchmark for Evaluating LLMs in Cyber Threat Intelligence
|
Include
| null | null |
This paper introduces CTIBench, a benchmark designed to assess LLMs' performance in cyber threat intelligence (CTI) applications. CTIBench includes multiple datasets focused on evaluating knowledge acquired by LLMs in the cyber-threat landscape.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Cyber threat intelligence
|
Yes
|
Cyber Threat Intelligence (CTI) is the ability to process and analyze vast amounts of unstructured threat and attack data, allowing security analysts to utilize more intelligence sources. It involves the collection, analysis, and dissemination of information about potential or current threats to an organization's cyber systems, which can provide actionable insights to help organizations defend against these attacks.
|
Subset
| null |
The CTIBench benchmark evaluates LLMs on five cybersecurity tasks: answering multiple-choice CTI questions (CTI-MCQ), mapping vulnerabilities to their root causes (CWEs), predicting severity scores (CVSS), extracting attack techniques (MITRE ATT&CK), and attributing threats to actors or malware. Each task provides a text input (e.g. vulnerability descriptions or threat reports) and expects structured CTI-relevant outputs.
|
1. CTI-MCQ: One row contains a multiple-choice question with a question string, four answer options, and the correct answer label.
2. CTI-RCM: A CVE description (free-text) and the corresponding CWE label representing the root cause.
3. CTI-VSP: One row has a CVE description and the associated CVSS v3.1 vector string with detailed severity metrics.
4. CTI-ATE: One row contains a description of a threat behaviour and a list of MITRE ATT&CK technique IDs mentioned in the report.
5. CTI-TAA: Each row contains a threat report from a reputed vendor mapped to an Advanced Persistent Threat (APT) group.
| null |
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
4947
|
No
| null |
Random sample (creators defined a task space and sampled from it), Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice, Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Scores are presented separately for each task subset
| null |
https://github.com/aiforsec/cti-bench
|
CTIBench
|
Widely-agreed
|
Somewhat. Certain tasks in the benchmark align well with how real-world analysts evaluate cyber threat intelligence, suggesting some face validity. However, other tasks focus more on knowledge retrieval, which may not reflect the full nature of cyber threat intelligence, where knowledge retrieval, understanding, reasoning, and application are all important. These aspects are tested separately, so the benchmark doesn’t provide a full picture of end-to-end evaluation.
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
|
The authors sample task items from real-world cyber threat issues and reformat them for evaluation. While this means the tasks are grounded in real-world problems, it remains unclear whether the evaluation of LLMs aligns with how cyber threat analysts would perform such assessments.
| null |
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people), Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null |
No
|
Alignment
|
Safety
| null |
['Real task', 'Author-crafted', 'LLM-generated']
|
['Random', 'Convenience', 'Targeted']
|
['Multiple choice', 'Short free response']
|
['Exact match']
|
['Widely-agreed']
|
['Partially']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial', 'Representative']
| null |
jinJailbreakingLargeLanguage2024
|
Jailbreaking Large Language Models Against Moderation Guardrails via Cipher Characters
|
Include
| null | null |
The paper proposes JAMBench - a benchmark for LLM jailbreaks against content moderation classifiers used as guardrails. The dataset is specifically designed to not trigger input-level harm classifiers, but trigger output-level harm classifiers to enable the study of evading output-level detection through jailbreaks.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
natural language understanding, jailbreaking, harmful behaviour
|
No
|
Vulnerability of content moderation classifiers to jailbreaks.
|
Subset
| null |
The task is to elicit a harmful response by an LLM while circumventing content moderation.
|
A harmful question
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions)
|
160
|
Yes
|
Category, severity of harm
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall), A non-defined "jailbreak success rate". likely LLM-as-a-Judge but unclear.
| null |
author-crafted from scratch.
|
Academia
|
Yes
| null |
Extremely little information is provided about the process and rationale for creating items. However, it is clear that the dataset is designed to be adversarial to current content moderation systems. This has major implications for construct validity but is not discussed at all.
|
Test
| null |
The base task is eliciting a harmful response from an LLM while evading content moderation (so the LLM itself produces a free response). For the benchmark, however, the pipeline is [Harmful Question + LLM + Content Moderator], which returns a harmfulness score; a successful jailbreak deceives the moderator so that the wrong filter decision is made.
|
Simple Mean
|
Yes
|
harm domain: Hate, Sexual, Violence, Self-Harm
| null |
https://github.com/Allen-piexl/llm_moderation_attack
|
JAMBench
|
Not defined
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
No
| null |
Experiments are repeated 5 times, but the resulting information on uncertainty is not reported.
|
Model access required (e.g. logits)
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Composite phenomenon
|
Yes
| null |
Yes
|
Alignment
|
Safety
| null |
['Author-crafted']
|
['Targeted']
|
['Multiple choice']
|
['Exact match', 'Reward']
|
['No definition']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
|
['Unknown']
|
zhouVLUEMultitaskMultidimension2022
|
VLUE: A Multi-Task Multi-Dimension Benchmark for Evaluating Vision-Language Pre-training
|
Include
| null | null |
VLUE is a vision-language benchmark consisting of 4 representative VL tasks, each equipped with a private test set annotated on images from an in-the-wild distribution. The authors evaluate the efficiency-performance trade-off of representative VLP models and build a Pareto SOTA landscape for current VLP research. Additionally, they provide an extensive analysis of the generalization ability and the efficiency-performance trade-off of representative VLP models.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
- Generalization and transferability of VLP models
- Efficiency-performance trade-off
|
No
|
The authors do not explicitly define the above phenomena. They only mention the following:
- Generalization: VLP models have already seen the images in the downstream datasets and their captions before fine-tuning and evaluating on them, overestimating their transfer and generalization abilities.
- Performance-efficiency: "We refer to the goal of this phenomenon as “Pareto SOTA” following, which means that there is no other model currently better than it on all the dimensions of interest such as performance and efficiency. Therefore, we believe it is necessary to measure and report performance-efficiency trade-off."
|
Comprehensive
| null |
VLUE covers a set of fundamental VL tasks, including image-text retrieval, visual question answering, visual reasoning, and visual grounding.
|
Depending on the task:
- retrieval: query and the relevant image
- reasoning: pair of images and a natural statement, True/False label
- grounding: image, target object, target location
- QA: image, question and answer
- captioning: image and caption
| null |
Crowd-sourced task examples (e.g. Prolific-created tasks), Modified from another benchmark (e.g. translation into another language), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
74,509
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible)
|
Multiple choice, Short free response (e.g. single word or number), Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train
|
more than 2M
| null |
Simple Mean
|
No
| null | null |
https://github.com/MichaelZhouwang/VLUE/tree/main
|
VLUE
|
Not defined
|
No
|
Yes
|
No
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions), Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null |
No
|
Language Modelling
|
Adaptability
| null |
['Crowd-sourced', 'Another benchmark', 'LLM-generated']
|
['Convenience']
|
['Multiple choice', 'Short free response', 'Free response']
|
['Exact match', 'Soft match']
|
['No definition']
|
['No']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Representative', 'Constructed']
|
['Mean']
|
liuExposingAttentionGlitches2023
|
Exposing Attention Glitches with Flip-Flop Language Modeling
|
Include
| null | null |
This paper introduces FFLM, a synthetic benchmark designed to investigate "attention glitches" that cause Transformer-based language models to make sporadic reasoning errors. The benchmark tests models' ability to copy binary symbols across long distances while ignoring intervening tokens. The paper further explores regularisation and architectural tweaks to mitigate these glitches.
|
Provides a large-scale empirical study (10k+ models) and mechanistic analysis.
|
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Reasoning, (Robust long‑range) memory
|
Yes
|
A flip-flop language modeling (FFLM) task is defined on sequences of write, read, and ignore instructions: write sets the memory state to a certain value, which is later retrieved by read, while any contents in between are ignored. The authors are interested in whether language models can learn a flip-flop language from samples, which they define as processing the read operations perfectly, in order to understand model output inaccuracies. (Sections 1 and 3.1)
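A minimal sketch of the task structure and its evaluation rule is given below; the token layout of (instruction, bit) pairs and the default probabilities are illustrative assumptions, not the released dataset's exact vocabulary.

```python
# Minimal sketch of a flip-flop sequence and the read-accuracy rule:
# every read position must reproduce the most recently written bit.
import random

def generate_flipflop(T=16, p_write=0.25, p_read=0.25, seed=0):
    rng = random.Random(seed)
    seq, memory = [], "0"
    for _ in range(T):
        op = rng.choices(["write", "read", "ignore"],
                         weights=[p_write, p_read, 1 - p_write - p_read])[0]
        if op == "write":
            memory = rng.choice("01")
            seq.append(("w", memory))
        elif op == "read":
            seq.append(("r", memory))   # correct continuation = last written bit
        else:
            seq.append(("i", rng.choice("01")))
    return seq

def read_accuracy(seq, predictions):
    """predictions[i] is the model's bit at position i; only read positions are scored."""
    reads = [(i, bit) for i, (op, bit) in enumerate(seq) if op == "r"]
    return sum(predictions[i] == bit for i, bit in reads) / len(reads) if reads else 1.0
```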
|
Subset
|
They treat correct memory retrieval (the measured subset) as the absence of incorrect responses.
|
Predict the next token in sequences built from write/read/ignore instructions plus binary data so that every read must output the latest write bit.
|
A length‑T token sequence, evaluation checks all read positions.
| null |
Procedurally-generated task examples (e.g. Creating instances from a template)
|
16M training sequences
|
Yes
|
difficulty (fixed sequence lengths, write probability, read probability, ignore probability)
|
Random sample (creators defined a task space and sampled from it)
|
Short free response (e.g. single word or number), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train, Validation
| null |
Deterministic read position index, binary digits
|
Simple Mean
|
Yes
|
Three canonical splits: (1) in-distribution, (2) sparse out-of-distribution token sets, (3) dense out-of-distribution sets where tokens frequently appear in read/write instructions
| null |
https://huggingface.co/datasets/synthseq/flipflop
|
FFLM
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
Yes
|
500 seed replicate study
|
Mean error, scatter plots, attention heatmaps
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
|
Synthetic stress‑test (not user interaction scenarios)
|
Single cohesive phenomenon
|
No
|
16M training sequences, 160K sparse o.o.d. sequences , and 4K dense o.o.d. sequences
|
Yes
|
NLP
|
Long Context
| null |
['Procedurally-generated']
|
['Random']
|
['Short free response', 'Structured']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Mean']
|
merdjanovskaNoiseBenchBenchmarkingImpact2024
|
NOISEBENCH: Benchmarking the Impact of Real Label Noise on Named Entity Recognition
|
Include
| null | null |
NOISEBENCH evaluates label noise in Named Entity Recognition (NER) models. It provides multiple variants of the same dataset with different types of real noise (expert errors, crowd-sourcing errors, automatic annotation errors and LLM errors).
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Label Noise on Named Entity Recognition
|
Yes
|
"Benchmark for measuring the impact of label noise in the training data on the prediction quality of trained NER models"
|
Subset
| null |
Named entity recognition (NER), which requires detecting and classifying named entity types in text.
|
Sentence where named entities must be identified and classified into one of four entity types (PER, ORG, LOC, MISC).
| null |
Real task examples (e.g. GitHub issues), Modified from another benchmark (e.g. translation into another language)
|
Test: 3,427 sentences (test set)
|
Yes
|
noise level percentage, error types, entity counts
|
Specific criteria (items were taken from a larger set based on specified rules)
|
Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
| null |
Benchmark is derived from the CoNLL-03 dataset, which consists of real news articles with manually annotated named entities
|
Academia
|
Yes
| null | null |
Test, Train, Validation
|
Train: 5,885 sentences; Validation: Approximately 17%
| null |
Simple Mean
|
Yes
|
Noise types (Expert, Crowd++, Crowd, Distant, Weak, LLM); Per-class metrics (LOC, ORG, PER, MISC); Token-level vs. entity-level F1 scores
| null |
https://github.com/elenamer/NoiseBench
|
NOISEBENCH
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
Yes
|
When comparing real noise to simulated noise, they provide evidence showing how models immediately memorize real noise patterns rather than going through distinct learning phases.
|
Micro-averaged entity-level F1 score reported as means across 3 runs with standard deviations. Simple means used for comparing approaches across different noise types.
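A minimal sketch of the reported aggregation, assuming entities are represented as (sentence_id, span, type) tuples; this representation is an illustrative assumption, not the NoiseBench data format.

```python
# Micro-averaged entity-level F1, then mean +/- std over independent runs.
from statistics import mean, stdev

def entity_micro_f1(pred_entities, gold_entities):
    pred, gold = set(pred_entities), set(gold_entities)
    tp = len(pred & gold)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def aggregate_runs(per_run_f1):
    """e.g. three training runs with different seeds."""
    return mean(per_run_f1), stdev(per_run_f1)

print(aggregate_runs([0.78, 0.80, 0.79]))  # (0.79, 0.01)
```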
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
|
Named entity recognition is a common NLP task with potential real-world applications.
|
Composite phenomenon
|
Yes
|
test set: 3,427 sentences, 5,725 entity mentions; training set: 5,885 sentences, 9,685 entity mentions.
|
No
|
NLP
|
Extraction
| null |
['Real task', 'Another benchmark']
|
['Criterion']
|
['Structured']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Comparison made']
|
['Yes']
|
['Representative']
|
['Mean', 'Std']
|
jinMMToMQAMultimodalTheory2024
|
MMToM-QA: Multimodal Theory of Mind Question Answering
|
Include
| null | null |
The paper introduces MMToM-QA, the first benchmark that evaluates Theory-of-Mind (ToM) reasoning across multimodal inputs (video and text), containing diverse test questions and synthetic training videos. The authors propose BIP-ALM, a novel approach that combines Bayesian inverse planning with language models to extract unified representations from multimodal data, demonstrating that while current large language and multimodal models lack robust ToM capacity, their method narrows the gap to human-level performance.
|
New multimodal Theory-of-Mind benchmark (text + video)
|
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Theory-of-Mind, (social) reasoning, multi-modal
|
Yes
|
While recent ToM benchmarks provide well-designed, cognitively informed tools, they share several notable limitations. One such limitation is the dependence on massive training data, which raises the concern that these models work by finding data patterns in a way that deviates from human-like ToM reasoning. ToM reasoning also goes beyond mere text comprehension or video understanding. Hence the benchmark aims to evaluate whether a model can infer mental states from either words or vision separately, or fuse the separate information to form a single coherent mental scene.
| null | null |
For each item, the model receives a short household video clip and a textual description of the scene and actions, and is then asked to decide which of two candidate mental-state hypotheses is more plausible.
|
One clip (RGB‑D frames) + accompanying textual scene/action description + 1 question with two options.
| null |
Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
1000 videos
|
Yes
|
Question type, timestamps, ground‑truth goals/beliefs, scene graphs, RGB‑D, segmentation, 3‑D poses, camera parameters
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
|
binary options (overall chance = 50%)
|
Dataset items are entirely synthetic: avatar videos rendered in simulator; scene & action narratives drafted via templates and polished by GPT‑4.
|
Academia
|
Yes
| null | null |
Test, Train
| null | null |
Simple Mean
|
Yes
|
True/False belief, short/long-term, Goal inference given true/false/updated beliefs or given future actions
| null |
https://chuanyangjin.com/mmtom-qa
|
MMToM‑QA
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
Yes
|
Qualitative analysis and ablations to show why their method (BIP-ALM) succeeds where LLMs fail.
|
simple mean, statistical tests
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
|
High controllability, annotation (but lacks real‑world sensor noise), human test set for generalisation.
|
Composite phenomenon
|
Yes
|
Test (600 Questions over 134 videos), Validation (not separately mentioned)
|
No
|
Theory of Mind
| null | null |
['Procedurally-generated', 'LLM-generated']
|
['Targeted']
|
['Multiple choice']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Mean', 'Tests']
|
dengCOLDBenchmarkChinese2022
|
COLD: A Benchmark for Chinese Offensive Language Detection
|
Include
| null | null |
COLDATASET is dataset of Chinese sentences with binary offensive/non-offensive labels covering topics of race, gender, and region. COLDETECTOR is a baseline detector trained on this dataset.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Offensive language detection in Chinese
|
Yes
|
"The contents with any form of targeted offense to individuals or groups are consid- ered offensive language. It includes veiled or di- rect offensive content expressing rudeness, disrespect, insults, threats and profanity based on as- pects such as race, religion, sex, or sexual orientation"
|
Subset
| null |
The task involves classifying Chinese text samples as either "offensive" or "non-offensive," where offensive/toxic language, and hate speech are not distinguished.
|
Chinese Social Media text sample with a binary label (offensive/non-offensive), and in the test set, a more fine-grained label classifying it into one of four subcategories.
| null |
Real task examples (e.g. GitHub issues)
|
test: 5,323
|
Yes
|
topic, fine-grained labels, average character length, topic-related keywords presence
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train
|
train/dev: 32,157
| null |
Simple Mean
|
Yes
|
Four subcategories (attacking individuals, attacking groups, anti-bias, other non-offensive) and also by topic categories.
| null |
https://github.com/thu-coai/COLDataset
|
COLD
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
Yes
|
Yes
|
First, they report high inter-annotator agreement. Second, they compare against the translated Jigsaw dataset. Third, they conduct an ablation study.
|
Reporting accuracy, precision, recall, and F1 scores, both as macro averages across all categories and separately for offensive and non-offensive classes.
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
|
The classification of content as offensive/non-offensive directly mirrors a real-world content moderation task performed on social media platforms. However, in production systems, this task would likely include additional context (like user history, reports, engagement metrics).
|
Composite phenomenon
|
Yes
| null |
No
|
Alignment
|
Safety
| null |
['Real task']
|
['Targeted', 'Criterion']
|
['Multiple choice']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Partial']
|
['Mean']
|
krojerAreDiffusionModels2023
|
Are Diffusion Models Vision-And-Language Reasoners?
|
Include
| null | null |
GDBench is a benchmark designed to assess vision-and-language reasoning in diffusion-based models using image-text matching. GDBench aggregates 8 existing datasets/benchmarks to measure text retrieval, image retrieval, and bias towards religious groups, national identity, and sexual orientation. The code and benchmark setup are publicly available.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
compositional reasoning, image-text matching, vision-and-language reasoning
|
No
|
Vision-and-language reasoning is implicitly defined as the "fine-tuned understanding of vision and language," and the ability for models "to understand how various objects and parts compose together" (1).
|
Subset
| null |
"We present our method Diffusion Image-Text Matching (ITM). Our goal is to assign a score to an image(x)-text(w) pair (x, w) which is broadly useful for downstream applications. We provide (x, w) to the diffusion model and task it to “edit” the image according to the text. Our main intuition is if the image is not described by the text, a lot of edits are needed to fit the text, in which case it gets a low score, and vice-versa" (4). The benchmark includes "7 ability-centric" ITM tasks, and one bias task (6).
|
A single item would be an image, and a text caption, which can either be a hard negative or a positive caption.
|
Broadly, the tasks of GDBench are split into image retrieval tasks and text retrieval tasks. The benchmark is designed primarily for diffusion models.
|
Modified from another benchmark (e.g. translation into another language)
| null |
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Specific criteria (items were taken from a larger set based on specified rules)
|
Generated image
|
Distribution (perplexity, calibration, correlation)
|
The paper defines a custom metric for the normalized image retrieval error on page 5. The metric is intended to measure the "relative difference of how much easier or harder it becomes to denoise the image with a given text relative to when no text is given" (5) and is designed for diffusion models. The paper also measures the religious, nationality, and sexual orientation biases in the image outputs using effect size, or the defined "normalized association score" on page 7.
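A minimal sketch of the quoted relative-difference idea, assuming a hypothetical denoise_error(image, text) helper that returns the diffusion model's denoising loss; the paper's actual estimator, noise schedule, and normalization details differ.

```python
# Sketch: score an (image, text) pair by how much the text helps denoising,
# relative to unconditional denoising. denoise_error is a hypothetical helper.
def normalized_retrieval_score(image, text, denoise_error):
    err_with_text = denoise_error(image, text)
    err_unconditional = denoise_error(image, None)
    # positive when the text makes denoising easier, i.e. the text fits the image
    return (err_unconditional - err_with_text) / err_unconditional
```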
|
The 8 distinct tasks are based on 8 existing datasets (Flickr30k, Winoground, ARO, ImageCoDe, SVO, CLEVR, Pets) instead of the creation of an entirely new dataset. GDBench then reports scores on the sub-elements of each subsumed benchmark.
|
Mix (multiple authors from industry and academia)
|
Yes
| null |
Though the benchmark aggregates existing datasets, it is included due to its contribution in creating a more extensive and comprehensive benchmark and its extension of image-text matching tasks to diffusion models.
|
Test
| null |
The model must edit the provided image until it matches the provided positive or negative caption.
| null |
Yes
|
SVO: Verb, Subj, Obj; ImageCoDe: Static, Video; ARO: VG Attr., VG Rel., COCO Ord., Flickr Ord. Bias: Religion, Nationality, Sexuality
| null |
https://github.com/McGill-NLP/diffusion-itm
|
GDBench (Generative-Discriminative Evaluation Benchmark)
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
Yes
|
The authors highlight that image-text-matching (ITM) tasks have "become a standard paradigm for diagnostic vision-and-language datasets" because they enable "interpretable evaluation on many downstream skills" (6). Thus, the authors implicitly position the dataset as a valid construct to measure vision-and-language reasoning.
|
Image retrieval error, effect score bias
|
Model access required (e.g. logits)
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null |
No
|
Grounding
| null | null |
['Another benchmark']
|
['Convenience', 'Criterion']
|
['Free response']
|
['Distribution']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Mean']
|
tsurutaSARSCoV2InteractionDataset2024
|
A SARS-CoV-2 Interaction Dataset and VHH Sequence Corpus for Antibody Language Models
|
Include
| null | null |
AVIDa-SARS-CoV-2 is a dataset of interactions between antigens and the variable domain of the heavy chain of heavy-chain antibodies (VHHs), obtained from two alpacas immunized with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) spike proteins. It includes binary labels indicating the binding or non-binding of diverse VHH sequences to 12 SARS-CoV-2 mutants, such as the Delta and Omicron variants. The authors report benchmark results for predicting SARS-CoV-2-VHH binding using VHHBERT pre-trained on VHHCorpus-2M and existing general protein and antibody-specific pre-trained language models.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
binding prediction, i.e. predicting whether an antibody binds to a given antigen.
|
Yes
|
The antibody discovery task is a binary sequence classification that distinguishes antibodies that bind to SARS-CoV-2.
|
Subset
| null |
Binary classification: whether the antibody binds to a specific antigen at the antibody sequence level.
|
- VHH_sequence: Amino acid sequence of VHH
- Ag_label: Antigen Type
- label: Binary label represented by 1 for the binding pair and 0 for the non-binding pair
- subject_species: Species of the subject from which VHH was collected
- subject_name: Name of the subject from which VHH was collected
- subject_sex: Sex of the subject from which VHH was collected
| null |
Real task examples (e.g. GitHub issues)
|
77,003
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Industry
|
Yes
| null | null |
Test, Train
|
2M
| null |
Simple Mean
|
No
| null | null |
https://huggingface.co/datasets/COGNANO/AVIDa-SARS-CoV-2
|
AVIDa-SARS-CoV-2
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Complete real task (e.g. providing medical advice to real people interactively)
| null |
Single cohesive phenomenon
|
Not applicable
| null |
No
|
Biology
| null | null |
['Real task']
|
['Convenience']
|
['Multiple choice']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Complete']
|
['Mean']
|
maAgentBoardAnalyticalEvaluation2024
|
AgentBoard: An Analytical Evaluation Board of Multi-turn LLM Agents
|
Include
| null | null |
This paper introduces AGENTBOARD, a benchmark designed to evaluate LLM agents capabilities in partially observable environments, multi-round interactions and diverse tasks through a unified evaluation framework and fine-grained metric analysis.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Agent Abilities for Real-World Problem Solving
|
Yes
|
The ability to perform step-by-step planning in diverse tasks and partially observable environments via multi-turn interactions.
|
Comprehensive
| null |
A task in AGENTBOARD presents an agent with a real-world scenario—such as an embodied, game, or tool-use environment—where it must perform actions, receive feedback, and plan over multiple rounds in a partially observable setting.
|
A task item includes an environment definition, a goal to achieve, step-wise observations, and defined action space
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language)
|
1013
|
Yes
|
Difficulty
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Short free response (e.g. single word or number), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Across each subtask dataset, difficulty and subskill
| null |
https://huggingface.co/datasets/hkust-nlp/agentboard/tree/main
|
AGENTBOARD
|
Contested
|
No
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
No
| null |
Simple mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Composite phenomenon
|
Yes
| null |
No
|
Reasoning
|
Planning
| null |
['Author-crafted', 'Another benchmark']
|
['Convenience']
|
['Short free response', 'Structured']
|
['Exact match']
|
['Contested']
|
['No']
|
['Yes']
|
['Realistic']
|
['No']
|
['Partial']
|
['Mean']
|
maruNibblingHardCore2022
|
Nibbling at the Hard Core of Word Sense Disambiguation
|
Include
| null | null |
The authors introduce new challenging test sets for Word Sense Disambiguation evaluation specifically designed to evaluate model resilience on rare word senses and present a more rigorous evaluation framework.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Word Sense Disambiguation (WSD): automatically assigning a correct meaning to an ambiguous word in context
|
Yes
|
"Word Sense Disambiguation (WSD), the task of automatically assigning a meaning to an ambiguous word in context"
|
Subset
| null |
Word Sense Disambiguation (WSD) is the task of automatically assigning the correct meaning to an ambiguous word within a given context, selecting from a predefined sense inventory.
|
A single item consists of a context (sentence or paragraph) containing a target ambiguous word.
| null |
Real task examples (e.g. GitHub issues), Modified from another benchmark (e.g. translation into another language)
|
7,253 instances
|
Yes
|
word sense frequency, domain information, presence in training data
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
micro-averaged F1 and macro-averaged F1 scores
|
Yes
|
Presence in training data (SemCor; non-SemCor instances); Word sense frequency (WordNet first sense vs non-first sense); Dataset (ALL, ALLNEW, S10NEW, 42D, hardEN, softEN)
| null |
https://github.com/SapienzaNLP/wsd-hard-benchmark
| null |
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
Yes
|
Yes
| null |
F1 scores (micro-averaged and macro-averaged) as the primary statistical method.
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
|
WSD is a fundamental NLP capability that would be used as a component within larger systems.
|
Single cohesive phenomenon
|
Not applicable
| null |
No
|
NLP
|
Understanding
| null |
['Real task', 'Another benchmark']
|
['Targeted', 'Criterion']
|
['Multiple choice']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative']
|
['Mean']
|
huangConMeRethinkingEvaluation2024
|
ConMe: Rethinking Evaluation of Compositional Reasoning for Modern VLMs
|
Include
| null | null |
ConMe is a multimodal compositional reasoning benchmark that presents a novel automatic hard-negative generation pipeline using VLMs. It is publicly available, manually verified on a subsample, and presents a complementary automatic analysis and error-categorization pipeline.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
compositional reasoning
|
Yes
|
Compositional reasoning "is the ability of the VLM to recognize and attend to the language concepts beyond objects (i.e., nouns), such as attributes, relations, fine-grained object alternatives, and more, in both the image and text of a VL pair" (1).
|
Subset
| null |
A model is given an image, a correct caption, and a hard negative caption, and must choose the correct caption.
|
A single item contains the image, the generated correct caption, and the generated hard negative caption.
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
24347
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall), Distribution (perplexity, calibration, correlation)
|
The benchmark evaluates sample accuracy and perplexity.
|
ConMe modifies the SugarCrepe examples with their own novel text generation pipeline for VLM-created hard negatives. First, GPT-4V generates "a detailed description of the input image," used as the "ground truth" (4). Then, 4 downstream VLMs (LLaVA 1.6-7b, LLaVA 1.5-7b, InstructBLIP Flan-T5, InstructBLIP Vicuna-7b) generate descriptions of the image. Next, GPT-4V receives the description it produced and the descriptions from the VLMs, and is prompted "to generate multiple challenging compositional reasoning questions based on the generated descriptions" from the VLMs (4). The pipeline then iterates: GPT-4V-generated samples that all VLMs answer correctly are disregarded, and the kept samples are refined to become more challenging. The authors manually verified a random subsample of 1000 samples from ConMe.
|
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
No
| null | null |
https://github.com/jmiemirza/ConMe
|
ConMe (Confuse Me)
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
Yes
|
The authors highlight that image-to-text matching tasks rely on "LLM only negative text generation pipeline(s)" that produces "improbable" or "outlier" captions for the given image (1). As a result, the authors claim that using VLMs in the hard negative generation pipeline is required for a more accurate benchmark of compositional reasoning in VLMS.
|
Simple mean/accuracy
|
Model access required (e.g. logits)
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
No
|
ConMe uses three partitions used in SugarCrepe: replace-att, replace-obj, and replace-rel hard negatives. ConMe uses the same 3846 images as SugarCrepe, but generates more examples per image across the three partitions.
|
No
|
Reasoning
|
Compositional
| null |
['Author-crafted', 'Another benchmark', 'LLM-generated']
|
['Convenience', 'Targeted']
|
['Multiple choice']
|
['Exact match', 'Distribution']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Mean']
|
chenCurriculumBroadcoverageBenchmark2022
|
Curriculum: A Broad-Coverage Benchmark for Linguistic Phenomena in Natural Language Understanding
|
Include
| null | null |
Current benchmarks do not provide insight into how well a language model captures distinct linguistic skills essential for language understanding and reasoning. In this paper, the authors introduce CURRICULUM, a new format of NLI benchmark for the evaluation of broad-coverage linguistic phenomena. CURRICULUM contains a collection of datasets that covers 36 types of major linguistic phenomena and an evaluation procedure for diagnosing how well a language model captures reasoning skills for distinct types of linguistic phenomena.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Linguistic phenomena in NLU
|
Yes
|
For this phenomenon, the authors aim to measure how well a language model captures distinct linguistic skills essential to language understanding and reasoning.
|
Comprehensive
| null |
Natural language inference (NLI). More specifically, the authors provide a group of tasks motivated by three benchmarks: GLUE Diagnostic, Rainbow, and DNC. In addition, they include many more subtasks focusing on complex reasoning types such as deductive logic and analytical thinking.
|
Each single item has a premise, a hypothesis, and a target label.
| null |
Modified from another benchmark (e.g. translation into another language)
|
171,252
|
Yes
|
difficulty level
|
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall), Correlation (Matthew's correlation, Pearson's r)
| null | null |
Academia
|
Yes
| null | null |
Test, Train
|
540,437
| null |
Simple Mean
|
Yes
|
sub-phenomenon, difficulty
| null |
https://github.com/eric11eca/curriculum-ling?tab=readme-ov-file
|
CURRICULUM
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null |
No
|
NLP
|
Understanding
| null |
['Another benchmark']
|
['Convenience', 'Targeted']
|
['Short free response']
|
['Exact match', 'Correlation']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
|
['Mean']
|
xieOSWorldBenchmarkingMultimodal2024
|
OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments
|
Include
| null | null |
OSWorld introduces a scalable, executable computer environment supporting real OSs (Ubuntu, Windows, macOS) to evaluate multimodal agents on 369 open-ended real-world tasks. It includes complex setups, execution-based evaluation, and detailed analysis of LLM/VLM agents' capabilities and deficiencies.
|
They put significant emphasis on being "The first-of-its-kind scalable, real computer environment for multimodal agents, supporting task setup, execution-based evaluation, and interactive learning across operating systems", with particular stress on being the first.
|
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Multimodal tool use, open-ended task execution in real OS environments, reasoning (although the word "reasoning" is never mentioned, the tasks implicitly require it)
|
No
|
The phenomenon is implicitly defined through the benchmark’s design and the types of tasks it includes. Authors describe the benchmark as evaluating agents’ ability to complete open-ended, real-world computer tasks using multimodal perception and actions (such as screenshots, accessibility trees, mouse/keyboard inputs) across various applications and operating systems.
|
Subset
| null |
Open-ended computer activity, which is described in natural language, executed by an agent inside a real operating system environment. Each task includes an initial state setup, a goal instruction, and a custom execution-based evaluation script to determine success.
|
"Each example is carefully annotated with a natural language instruction, a setup configuration with corresponding files and setup actions for initialization of initial states upon our provided VM image, and a manually crafted evaluation script to check if the task is successfully executed." (page 7)
| null |
Human exam questions (e.g. GRE questions), Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Expert-crafted task examples (e.g. hand-written examples), Modified from another benchmark (e.g. translation into another language)
|
412
|
Yes
|
human difficulty, task feasibility, application domain, task type
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Short free response (e.g. single word or number), Free response (e.g. summary paragraph, executable code), Extended interaction (e.g. conversation, calling an API and processing the response)
|
Exact Match (accuracy, F1, precision, recall), Execution-based evaluation scripts
| null |
They sourced tasks from a remarkably large and diverse set of sources; there is a table (Appendix B.3) that takes almost two pages just to list the programs sourced.
|
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
By application domain: OS, Office, Daily, Professional, Workflow
By OS: Ubuntu or Windows
By task difficulty: Easy, Medium, Hard (based on human completion time)
By feasibility: Feasible and Infeasible tasks
By input modality: Screenshot, Accessibility Tree, SoM, etc.
| null |
https://os-world.github.io/
|
OSWorld
|
Not defined
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
Yes
| null |
1. Human Baseline
They conducted human evaluations across all tasks, showing that humans (without prior exposure to the tasks) achieved approximately 72.36% accuracy, while top models performed under 12.24%, showing that the tasks are achievable.
2. Realistic Task Design
The tasks are based on real-world scenarios, sourced from an impressively large number of sources: user forums, tutorials, and everyday computer workflows, among others.
3. Execution-Based Evaluation
They designed 134 custom, deterministic evaluation scripts to assess functional correctness and provide objective, reproducible scoring.
4. Model Performance Analysis
The authors analysed how models fail - such as difficulty with GUI grounding, interaction noise, and poor generalisation across applications - and tried to align the observed performance with the skills they intended to measure through the benchmark.
5. Comparative Difficulty
They compare OSWorld to other benchmarks like WebArena and show that OSWorld tasks take longer for humans to complete and are harder for models to solve, supporting the idea that OSWorld includes more complex tasks closer to real-world demands.
|
Just simple mean, with occasional reporting of variance or distribution plots.
|
Outputs alone
|
Complete real task (e.g. providing medical advice to real people interactively)
|
The benchmark involves full workflows from A to Z, such as software installation, document editing, web navigation, and multi-application coordination. These tasks are executed in real OS environments (Ubuntu and Windows) and use real apps/programs (e.g. Chrome, LibreOffice, VLC), so agents operate as general-purpose assistants.
|
Composite phenomenon
|
Yes
|
"OSWORLD benchmark [...] encompasses 369 real computing tasks defined and executed on Ubuntu. Additionally, we provide a set of 43 tasks for Windows built on the OSWORLD environment." (page 6)
|
Yes
|
Agents
|
Tool Use
| null |
['Human exams', 'Real task', 'Author-crafted', 'Expert-crafted', 'Another benchmark']
|
['Targeted', 'Criterion']
|
['Multiple choice', 'Short free response', 'Free response', 'Interaction']
|
['Exact match', 'Reward']
|
['No definition']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['']
|
['Complete']
|
['Mean', 'Std']
|
dasEXAMSVMultidisciplineMultilingual2024
|
EXAMS-V: A Multi-Discipline Multilingual Multimodal Exam Benchmark for Evaluating Vision Language Models
|
Include
| null | null |
EXAMS-V is a multi-discipline multimodal multilingual exam benchmark for evaluating vision language models. The questions come in 11 languages across 20 school disciplines. The evaluation results demonstrate that this is a challenging dataset, which is difficult even for advanced vision–text models such as GPT-4V and Gemini.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Multilingual and multimodal models' multitask accuracy (knowledge acquired during pretraining) across a diverse set of subjects
|
No
|
For LLM evaluation, standardized testing akin to school examinations has proven to be an effective measure of a model’s capabilities.
|
Comprehensive
| null |
The task is a multiple-choice question answering task.
|
language, subject, grade, question, choices, answers, image, image type
| null |
Human exam questions (e.g. GRE questions), Modified from another benchmark (e.g. translation into another language)
|
4,800
|
Yes
|
language, grade, subject
|
Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test, Train
|
16,500
| null |
Simple Mean
|
Yes
|
subject, language
| null |
https://huggingface.co/datasets/Rocktim/EXAMS-V
|
EXAMS-V
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null |
No
|
VQA
| null | null |
['Human exams', 'Another benchmark']
|
['Criterion']
|
['Multiple choice']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Representative']
|
['Mean']
|
jiangBRAINTEASERLateralThinking2023
|
BRAINTEASER: Lateral Thinking Puzzles for Large Language Models
|
Include
| null | null |
The paper introduces BRAINTEASER, a multiple-choice benchmark that probes large language models' capacity for lateral thinking: creative, non-linear reasoning that overrides default commonsense associations. It describes a three-step construction pipeline (web crawling and filtering of puzzles, semi-automatic distractor generation, and semantic + contextual reconstructions) that yields high-quality items while controlling for memorization.
|
First publicly available benchmark for lateral-thinking evaluation; detailed error analysis regarding memorisation and commonsense traps.
|
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
(Lateral/creative) reasoning
|
Yes
|
The paper targets lateral reasoning, defined as a creative problem-solving approach that differs from vertical thinking (commonsense association and inference). It requires solving puzzles that cannot be resolved through straightforward commonsense associations alone, demanding non-linear reasoning patterns.
|
Subset
|
- Although authors note lateral thinking comprises four skills (awareness, random stimulation, alternatives, alteration), they do not operationalize them separately; sub‑elements are not separately measured.
- Authors explicitly contrast lateral vs vertical thinking and position benchmark as complementary to commonsense QA suites.
|
Multiple-choice QA where the model selects the correct explanation to a brain‑teaser puzzle among 4 options (one may be “None of the above”).
|
One puzzle = question stem + 4 answer choices
|
Sentence/word puzzles -- each puzzle has 2 reconstruction variants (semantic/context) to resist memorization.
|
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
1,119 puzzles
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
|
Group accuracy = 1 only if model answers all three variants correctly.
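A minimal sketch of this group-accuracy rule follows; grouping the three variants by an original-puzzle id is an assumption for illustration.

```python
# Group accuracy: an original puzzle counts only if its original, semantic,
# and context variants are all answered correctly.
from collections import defaultdict

def group_accuracy(results):
    """results: iterable of (original_id, is_correct) over all variants."""
    groups = defaultdict(list)
    for original_id, is_correct in results:
        groups[original_id].append(is_correct)
    return sum(all(v) for v in groups.values()) / len(groups) if groups else 0.0

print(group_accuracy([(1, True), (1, True), (1, False),
                      (2, True), (2, True), (2, True)]))  # 0.5
```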
|
373 core puzzles crawled from riddles.com, rd.com, etc.,
distractors via COMET (commonsense pre-trained language models),
reconstruction prompts via GPT‑4
|
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
|
Core 373 originals expanded 3 times, with reconstructions.
| null |
Simple Mean
|
Yes
|
Separate scores for Sentence/Word and for Original/Semantic/Context splits.
| null |
https://github.com/1171-jpg/BrainTeaser
|
BRAINTEASER
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
Yes
|
Yes
|
- The human annotators rated 99% of the original QA pairs valid, and 97%-100% of the semantic/context reconstructions as consistent with the original QA pairs.
|
Simple proportion, human/model comparisons
|
Outputs alone
|
Complete real task (e.g. providing medical advice to real people interactively), Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
|
The puzzles are artificial and deliberately engineered, so they are unlikely to appear in day‑to‑day user interactions. Rather, they approximate the kind of creative‑reasoning challenges that could arise across diverse downstream tasks. Thus ecological validity seems to be low, serving research evaluation rather than direct operational use.
|
Single cohesive phenomenon
|
Not applicable
|
627 sentence, 492 word; originals + reconstructions
|
Yes
|
Reasoning
| null | null |
['Real task', 'Author-crafted', 'LLM-generated']
|
['Convenience', 'Targeted']
|
['Multiple choice']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Complete', 'Constructed']
|
['Mean']
|
ushioGenerativeLanguageModels2022
|
Generative Language Models for Paragraph-Level Question Generation
|
Include
| null | null |
QG-Bench, a comprehensive benchmark for paragraph-level question generation (QG) that unifies existing question answering datasets into a standard format. The authors fine-tune LMs for the QG task and evaluate them using both automatic metrics and human evaluation.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Question generation
|
Yes
|
"Question generation is the task of generating a question given an in- put context consisting of a document, a paragraph or a sentence, and an answer where the question is anchored"
|
Subset
| null |
Generate a natural language question given an input paragraph and an answer span that appears within that paragraph.
|
Paragraph, a sentence within that paragraph, an answer span, and the target question to be generated.
| null |
Modified from another benchmark (e.g. translation into another language)
|
SQuaD train: 75,722
|
Yes
|
language, domain, average paragraph character length
|
Convenience sample (creators found a set of tasks that was readily accessible), Specific criteria (items were taken from a larger set based on specified rules)
|
Free response (e.g. summary paragraph, executable code)
|
n-gram (BLEU, ROUGE, chrF), Human ratings (text quality, preference, NOT manual scoring of other metrics), Correlation (Matthew's correlation, Pearson's r)
| null | null |
Academia
|
Yes
| null | null |
Test, Train, Validation
|
SQuaD validation: 10,570, SQuaD test: 11,877
| null |
Simple Mean
|
Yes
|
Scores by language, domain, model input type
| null |
https://github.com/asahi417/lm-question-generation
|
QG-Bench
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
Yes
|
Validation of the benchmark through manual evaluation where human annotators rate generated questions across three criteria (grammaticality, understandability, and answerability).
|
Simple mean scores for each metric. For correlation analysis between automatic metrics and human judgments: Spearman's rank correlation coefficient.
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null |
No
|
NLP
|
Extraction
| null |
['Another benchmark']
|
['Convenience', 'Criterion']
|
['Free response']
|
['Soft match', 'Human ratings', 'Correlation']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative']
|
['Mean', 'Other']
|
gingOpenendedVQABenchmarking2024
|
Open-ended VQA benchmarking of Vision-Language models by exploiting Classification datasets and their semantic hierarchy
|
Include
| null | null |
This paper proposes a novel VQA benchmark based on well-known visual classification datasets, which allows a granular evaluation of text-generative vision-language models and their comparison with discriminative vision-language models. To improve the assessment of coarse answers on fine-grained classification tasks, the authors suggest using the semantic hierarchy of the label space to ask automatically generated follow-up questions about the ground-truth category.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Open-ended visual understanding
|
No
| null |
Comprehensive
| null |
Open-ended Visual Question Answering (oVQA), which tests vision-language models (VLMs) on their visual understanding by asking questions via natural language. Unlike multiple choice VQA, where answers can be chosen from a predefined set of options, oVQA requires the model to generate the answer rather than simply choosing the option with the highest score.
|
Image, question and a golden answer
| null |
Modified from another benchmark (e.g. translation into another language)
|
95,864
|
Yes
|
attribute type
|
Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall); Contains: a less restrictive option that considers a response correct if the prediction contains the true class name after preprocessing; ClipMatch: matching the prediction and label using cosine similarity in a vector embedding space
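A minimal sketch of the three matching criteria, assuming pre-computed text embeddings for the ClipMatch-style check; the embedding model and the similarity threshold are illustrative assumptions.

```python
# Three increasingly lenient ways to decide whether a generated answer matches the label.
def exact_match(prediction, label):
    return prediction.strip().lower() == label.strip().lower()

def contains_match(prediction, label):
    return label.strip().lower() in prediction.strip().lower()

def clip_match(pred_embedding, label_embedding, threshold=0.9):
    """Cosine similarity between embeddings of prediction and label."""
    dot = sum(p * l for p, l in zip(pred_embedding, label_embedding))
    norm = (sum(p * p for p in pred_embedding) ** 0.5) * (sum(l * l for l in label_embedding) ** 0.5)
    return (dot / norm if norm else 0.0) >= threshold
```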
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Stratification of results based on the used VQA benchmarks
| null |
https://github.com/lmb-freiburg/ovqa
|
OVQA
|
Not defined
|
There is no specified phenomenon besides the models' ability to answer open-ended questions.
|
Yes
|
No
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
simple mean and variance
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people), Representative task (e.g. answering medical licensing exam questions)
| null |
Authors' description is unclear
|
Not applicable
| null |
No
|
VQA
| null | null |
['Another benchmark']
|
['Criterion']
|
['Short free response']
|
['Exact match', 'Soft match']
|
['No definition']
|
['No']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial', 'Representative']
|
['Mean', 'Std']
|
hwangMultitaskBenchmarkKorean2022
|
A Multi-Task Benchmark for Korean Legal Language Understanding and Judgement Prediction
|
Include
| null | null |
This work introduces LBOX OPEN, the first large-scale benchmark of Korean legal AI datasets, comprising a legal corpus of 147k precedents and multiple tasks including classification, legal judgment prediction, and summarization. It also presents LCUBE, the first Korean legal language model trained on this corpus.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
legal language understanding and legal judgment prediction in the Korean context
|
Yes
|
we release LBOX OPEN, the first large-scale Korean legal AI benchmark that consists of six datasets: (1) a large-scale legal precedent corpus (PRECEDENT CORPUS), (2) two classification tasks (CASE NAME, STATUTE), (3) two legal judgement prediction tasks (LJP-CRIMINAL, LJP-CIVIL), and (4) one summarization task (SUMMARIZATION).
|
Subset
| null |
legal text classification (predicting case names and statutes from factual case descriptions), legal judgment prediction (estimating punishment ranges or claim acceptance levels from case facts), and summarization (generating summaries of legal rulings and reasoning sections).
|
Input text and the corresponding label, or the target output in the case of summarisation.
| null |
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Expert-crafted task examples (e.g. hand-written examples)
|
14.1k
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number), Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train, Validation
|
147k precedents (259M tokens)
| null |
Simple Mean
|
No
| null | null |
https://github.com/lbox-kr/lbox-open
|
LBOX OPEN
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
Yes
|
They discuss how the tasks are grounded in real-world legal processes, and mention that legal judgment prediction tasks remain especially challenging
|
simple mean and std
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people), Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
|
The tasks are based on real documents but are not used in real cases; tasks such as summarisation and classification are constructed.
|
Composite phenomenon
|
Yes
| null |
No
|
Law
| null | null |
['Real task', 'Author-crafted', 'Expert-crafted']
|
['Convenience', 'Targeted', 'Criterion']
|
['Short free response', 'Free response']
|
['Exact match', 'Soft match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Partial', 'Constructed']
|
['Mean', 'Std']
|
shiLargeLanguageModels2023
|
Large Language Models Can Be Easily Distracted by Irrelevant Context
|
Include
| null | null |
The paper introduces GSM-IC, a variant of the GSM8K arithmetic reasoning dataset that includes irrelevant sentences to test large language models' distractibility. The authors evaluate several prompting strategies on LLMs, revealing significant performance drops when irrelevant information is present, and explore mitigation strategies including self-consistency decoding, exemplar design, and explicit instructions that partially restore performance.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
reasoning, distractibility (irrelevant context QA)
|
Yes
|
Filtering out irrelevant information is essential for handling real-world tasks. Our evaluation indicates that despite the strong performance on challenging reasoning problems, state-of-the-art language models still have fundamental weaknesses in context understanding and identifying the relevant information from the input. Our findings suggest that in order to gain a more holistic understanding of the reasoning capability of language models, future work should also consider the model sensitivity to irrelevant context, in addition to solving more challenging problems. (pg 2.)
|
Subset
| null |
Given a maths word problem containing one irrelevant sentence, models are expected to output the numeric answer.
|
Each item = (problem, answer). The problems are derived from GSM8K, with template-generated distractor sentences added.
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template)
|
4000 problems
|
Yes
|
Reasoning steps, distractor categories (topic/ overlap/ number range)
|
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall)
| null |
First, derive from existing benchmark (GSM8K),
Then, create distractors via templates.
Finally, manually verify grammaticality and answer invariance.
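A minimal sketch of what the template-based distractor step could look like, assuming a hypothetical `DISTRACTOR_TEMPLATES` list and simple placeholder substitution (the template strings, helper name, and number ranges are illustrative only, not the authors' actual pipeline):

```python
import random

# Hypothetical templates and placeholder values; illustrative only.
DISTRACTOR_TEMPLATES = [
    "{name}'s neighbor owns {n} stamps",
    "{name} also read {n} pages of a magazine last week",
]

def add_irrelevant_sentence(problem: str, name: str, rng: random.Random) -> str:
    """Insert one template-generated, answer-irrelevant sentence before the final question."""
    distractor = rng.choice(DISTRACTOR_TEMPLATES).format(name=name, n=rng.randint(2, 99))
    sentences = problem.rstrip().split(". ")
    # Place the distractor just before the question sentence; the gold answer stays
    # unchanged because the added entity/number is unrelated (verified manually by the authors).
    sentences.insert(len(sentences) - 1, distractor)
    return ". ".join(sentences)

rng = random.Random(0)
print(add_irrelevant_sentence(
    "Lucy has 3 apples and buys 4 more. How many apples does she have?", "Lucy", rng))
```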
|
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train, Validation
|
100 problems for validation.
|
Answers are expected to be exact integers; authors mark incorrect if formatting differs.
|
Simple Mean
|
Yes
|
Accuracies by reasoning step or by distractor category
| null |
https://github.com/google-research-datasets/GSM-IC
|
GSM‑IC
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
Yes
|
Validity factors (topic, overlap, number range) and the discussion of construct limitations show that the task isolates distractibility (irrelevant context), which the authors identify as a current limitation of LLM reasoning.
|
Percentage, comparison
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
|
Because the distractor sentences come from controlled templates, the benchmark may not be realistic: the superfluous information is built from synthetic perturbations.
|
Single cohesive phenomenon
|
Not applicable
|
58052 problems total but 4000 instances randomly drawn from the full set as an evaluation subset for cheaper computational cost.
| null |
Reasoning
|
Mathematical
| null |
['Author-crafted', 'Another benchmark', 'Procedurally-generated']
|
['Convenience', 'Targeted']
|
['Short free response']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['Comparison made']
|
['Yes']
|
['Constructed']
|
['Mean']
|
leeVHELMHolisticEvaluation2024
|
VHELM: A Holistic Evaluation of Vision Language Models
|
Include
| null | null |
The paper's main contributions are three-fold. First, the authors identify the aspects that are both applicable to VLMs and important to evaluate from either a technological or societal perspective. Second, they assemble 21 existing VLM benchmark datasets, which are sets of image-text prompts and expected outputs, and map them to the aspects to ensure complete coverage. Third, they standardize the evaluation procedures so that apples-to-apples comparisons can be made across models.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
visual understanding
|
Yes
|
This benchmark measures multiple visual phenomena in VLMs and more specifically: bias, fairness, knowledge, multilinguality, reasoning, robustness, toxicity, and visual perception
|
Comprehensive
| null |
A scenario represents a use case for a VLM and is identified by a task (e.g., question answering, code generation, and captioning) and a usage category such as the domain, origin, language, or subject. An example scenario is “visual question answering on medical images” where the task is visual question answering and the usage category is medical images.
|
the aspect of the task, a prompt, an image, a gold response and the corresponding metric reference for this task
| null |
Modified from another benchmark (e.g. translation into another language)
|
9,000
|
Yes
|
aspect of the task and the respective metrics to be used
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Short free response (e.g. single word or number), Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics), LLM post-processing (extracting answers, reformatting for automated scoring), Win rate
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
The results are stratified by the benchmarks used
| null |
https://crfm.stanford.edu/helm/vhelm/v2.0.1/
|
VHELM
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
The benchmark is itself realistic
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null |
No
|
VQA
| null | null |
['Another benchmark']
|
['Convenience']
|
['Short free response', 'Free response']
|
['Exact match', 'Soft match', 'LLM-as-a-Judge', 'LLM post-processing', 'Reward']
|
['Contested']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Representative']
|
['Mean']
|
wangM4GTbenchEvaluationBenchmark2024
|
M4GT-Bench: Evaluation Benchmark for Black-Box Machine-Generated Text Detection
|
Include
| null | null |
M4GT-Bench, a multilingual, multi-domain benchmark for detecting machine-generated text (MGT). The benchmark extends a previous dataset and contains three distinct tasks: binary MGT detection (classifying text as human-written or machine-generated), multi-way generator detection (identifying which LLM generated a text), and change-point detection (locating where text transitions from human-written to machine-generated).
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Machine-generated text (MGT) detection
|
Yes
|
Machine-generated text detection as the task of identifying and differentiating LLM-generated text from genuine human-generated text.
|
Subset
| null |
Benchmark tasks: (1) Binary classification to determine if text is human-written or machine-generated, (2) Multi-way classification to identify which specific LLM generated a text, and (3) Boundary detection to identify where text transitions from human-written to machine-generated content.
|
Text sample, label human-written/machine-generated, model (if applicable)
| null |
Real task examples (e.g. GitHub issues), Modified from another benchmark (e.g. translation into another language), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
73,288 (total machine)
|
Yes
|
Domain; language; generating LM; boundary position
|
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice, Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall), Mean Absolute Error
| null | null |
Academia
|
Yes
| null | null |
Test, Train, Validation
|
Mixed Dataset, PeerRead, e.g. ChatGPT as Generator: 3,649 train; 1,522 test; 505 dev
| null |
Simple Mean
|
Yes
|
LM generator, language, domain
| null |
https://github.com/mbzuai-nlp/M4GT-Bench
|
M4GT-Bench
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No
|
No
|
Yes
|
Yes
|
Human evaluation for multi-way generator detection, where they test how well humans can distinguish between different LLM generators.
|
Simple means for the main metrics
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
|
Represents realistic scenarios where one would want to detect fully machine-generated text, identify which model generated a text, or find the point where human writing ends and machine generation begins.
|
Composite phenomenon
|
Yes
| null |
No
|
NLP
| null | null |
['Real task', 'Another benchmark', 'LLM-generated']
|
['Convenience', 'Targeted']
|
['Multiple choice', 'Short free response']
|
['Exact match', 'Soft match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative']
|
['Mean']
|
comsaBenchmarkReasoningSpatial2023
|
A Benchmark for Reasoning with Spatial Prepositions
|
Include
| null | null |
The paper introduces a new benchmark designed to evaluate the inferential capabilities of statements involving spatial prepositions. Featuring original datasets in both English and Romanian, the benchmark explores the boundaries of large language models’ reasoning about spatial relations.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Spatial reasoning
|
Yes
|
The benchmark proposes the challenge of using spatial prepositions to refer to abstract concepts in addition to physical relationships.
|
Subset
| null |
Take two premises with prepositions and determine if the conclusion holds.
|
Two premises, one conclusion, and whether the conclusion holds.
| null |
Expert-crafted task examples (e.g. hand-written examples)
|
800
|
Yes
|
The preposition used in a premise / conclusion in an example.
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Industry
|
Yes
| null | null |
Test
| null | null | null |
Yes
|
items containing different prepositions
| null |
https://github.com/google-research/language/tree/master/language/spatial_prep
| null |
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
No
| null | null |
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null |
No
|
Reasoning
|
Spatial
| null |
['Expert-crafted']
|
['Targeted']
|
['Short free response']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
| null |
luoCODISBenchmarkingContextdependent2024
|
CODIS: Benchmarking Context-dependent Visual Comprehension for Multimodal Large Language Models
|
Include
| null | null |
This paper introduces the CODIS benchmark for evaluating MLLMs on their ability to incorporate free-form textual context to improve image understanding. The authors show that current MLLMs underperform compared to humans and struggle to effectively extract and utilize contextual information to improve their understanding of images.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
context-dependent visual comprehension
|
Yes
|
understand the extent to which MLLMs can leverage context to enhance their visual comprehension
|
Subset
| null |
Take an image and question, and provide a short-form answer
|
<image>
Question: When was this photo probably taken, the first or second half of the year?
Context: I took this photo when I was in Australia.
Ground-truth Answer: First half
| null |
Expert-crafted task examples (e.g. hand-written examples)
|
706
|
Yes
|
Taxonomy of Context
|
Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall)
|
They report accuracy as the main evaluation metric, assessed by humans for the primary results. They also evaluate LLM-as-a-judge for accuracy measurement in an ablation study.
| null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
| null | null |
https://github.com/THUNLP-MT/CODIS
|
CODIS
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
Yes
|
Five annotators participated in the data collection process. To ensure the quality of our dataset, each submission by an annotator was cross-checked by the other four.
|
simple mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Not applicable
| null |
No
|
VQA
| null | null |
['Expert-crafted']
|
['Criterion']
|
['Short free response']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Partial']
|
['Mean']
|
lalCaTbenchBenchmarkingLanguage2024
|
CaT-Bench: Benchmarking Language Model Understanding of Causal and Temporal Dependencies in Plans
|
Include
| null | null | null | null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
(causal/ temporal) reasoning
|
Yes
|
While LLMs appear to generate good plans, it’s unclear how well they understand important aspects of the steps themselves. We thus use CAT-BENCH to test whether LLMs can identify step dependencies that reflect the causal and temporal structure of the plan. We find that current LLMs struggle to identify step dependencies, often performing close to random chance, raising more questions about their understanding of instructional text. (Section 1)
|
Subset
|
Faithfulness in explanation qualities
|
A step-order prediction task: given a plan text and a question about whether a given step must happen before/after another step, the model is expected to output a yes/no binary answer, optionally with a generated explanation.
|
One binary question about a specific ordered pair of steps, plus the ground-truth label
| null |
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions)
|
4260
|
Yes
|
Recipe, temporal step, causal dependency
|
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice, Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), Human ratings (text quality, preference, NOT manual scoring of other metrics)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Weighted Mean
|
Yes
|
Analysis by class, relation direction (before/after), step distance (close/distant)
| null |
https://github.com/StonyBrookNLP/CaT-Bench
|
CaT-Bench: Benchmarking Language Model Understanding of Causal and Temporal Dependencies in Plans
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
Yes
|
High inter‑annotator agreement and consistency metrics
|
Means, standard deviations, weighted Fleiss‑k
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions), Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
|
End-users would not normally answer yes/no (binary) dependency questions, so it is a constructed task. At the same time, the benchmark captures a key capability (causal/temporal reasoning over plans) that underlies practical applications such as recipe guidance, robotics, and agent planning, so it is representative of real-world needs even though the evaluation setting is synthetic.
|
Composite phenomenon
|
No
|
2840 (balanced subset for evaluation) + 1420 (held‑out non‑dependent questions used for analysis)
|
Yes
|
Reasoning
|
Planning
| null |
['Real task', 'Author-crafted']
|
['Convenience', 'Targeted']
|
['Multiple choice', 'Free response']
|
['Exact match', 'Human ratings']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative', 'Constructed']
|
['Mean', 'Std']
|
suActPlan1KBenchmarkingProcedural2024
|
ActPlan-1K: Benchmarking the Procedural Planning Ability of Visual Language Models in Household Activities
|
Include
| null | null |
This paper introduces ActPlan-1K for evaluating VLMs on procedural and counterfactual reasoning tasks. By combining natural language descriptions with simulated environment images, the benchmark assesses the ability of VLMs to generate coherent action plans.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
multimodal reasoning, procedural planning
|
Yes
|
Whether VLMs can generate plausible action plans for multi-modal embodied AI task
|
Subset
| null |
Given a household environment E with a set of manipulable objects O, for each household activity T, a VLM agent A takes the task description of T and environment images {I1, I2, ...} as input and generates a procedural plan P* that can accomplish the task. The household environments have multiple interior spaces, so multiple images are provided to ensure that the necessary spatial information is available.
|
NL Task description: "The task goal is to assemble gift baskets. .... <description of the environment>"
Gold Plan
| null |
Expert-crafted task examples (e.g. hand-written examples)
|
1000
|
Yes
|
normal or counterfactual
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Structured response (e.g. valid JSON, API call alone)
|
n-gram (BLEU, ROUGE, chrF), Human ratings (text quality, preference, NOT manual scoring of other metrics)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
| null | null |
https://github.com/HKUST-KnowComp/ActPlan-1K
|
ActPlan-1K
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null | null |
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions), Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
Not applicable
| null |
No
|
Reasoning
|
Planning
| null |
['Expert-crafted']
|
['Targeted']
|
['Structured']
|
['Soft match', 'Human ratings']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Representative', 'Constructed']
| null |
yinGeoMLAMAGeodiverseCommonsense2022
|
GeoMLAMA: Geo-Diverse Commonsense Probing on Multilingual Pre-Trained Language Models
|
Include
| null | null |
This paper introduces the Geo-diverse Commonsense Multilingual Language Models Analysis (GEOMLAMA) benchmark for probing the diversity of relational knowledge in multilingual PLMs.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
geo-diverse commonsense probing
|
Yes
|
do PLMs store geo-diverse commonsense knowledge?
|
Subset
| null |
investigating whether PLMs are capable of predicting correct answers among all the possibilities of different countries.
|
A prompt and a gold answer. For each concept, there are multiple masked multilingual prompts with specified country information [X] querying geo-diverse knowledge about the concept.
| null |
Expert-crafted task examples (e.g. hand-written examples)
|
3125
|
Yes
|
Language of the prompt.
|
Random sample (creators defined a task space and sampled from it)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall), Distribution (perplexity, calibration, correlation)
| null | null |
Academia
|
Yes
| null | null | null | null | null | null |
Yes
|
By language.
| null |
https://github.com/WadeYin9712/GeoMLAMA
|
GEOMLAMA
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null | null |
Model access required (e.g. logits)
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null |
No
|
Knowledge
|
Cultural
| null |
['Expert-crafted']
|
['Random']
|
['Short free response']
|
['Exact match', 'Distribution']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
| null |
kesenViLMAZeroshotBenchmark2024
|
ViLMA: A Zero-Shot Benchmark for Linguistic and Temporal Grounding in Video-Language Models
|
Include
| null | null |
This paper introduces VILMA, a zero-shot benchmark for evaluating VidLMs, designed to require strong temporal understanding. The authors adopt the following methodology: (i) they harvest high-quality examples from existing video-language datasets; (ii) they create counterfactual examples or ‘foils’, so that a test requires distinguishing correct from counterfactual video+text pairs; (iii) they create a proficiency test to gauge if a model learns the capabilities they deem necessary to solve the main test; (iv) they apply automatic and manual validation of the examples and their counterfactuals to control for biases and to ensure a high-quality evaluation benchmark.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
temporal grounding in video-language models
|
Yes
|
In principle, VidLMs can visually ground linguistic phenomena which are beyond the reach of image-language models (ILMs), since videos include dynamically evolving phenomena (e.g., events, actions, physical processes). This temporal dimension makes learning more complex.
|
Comprehensive
| null |
- The Action Counting task probes the ability of models to accurately count the occurrences of actions within a given video input stream.
- The Situation Awareness task shows how effectively VidLMs grasp the interaction between visual clues and verbal context by testing whether they recognise actors, actions, and their relationships.
- The Change of State task examines the ability of VidLMs (i) to recognise and distinguish different sub-phases of actions, especially those that induce a change of state (CoS) of objects or entities involved in it; and (ii) to align the beginning and ending phases of these actions across modalities.
- The Rare Actions task probes how well VidLMs identify novel compositions and recognise unusual interactions between human beings and objects.
- The Spatial Relations task focuses on the ability of models to distinguish different spatial and spatio-temporal relations related to the actions carried out in a video (e.g. moving an object ‘over’, or ‘towards’ another object).
|
Video frames, ground truth video caption, foil video caption
| null |
Modified from another benchmark (e.g. translation into another language)
|
5,934
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
results are stratified based on the temporal dimension/temporal task
| null |
https://github.com/ilkerkesen/ViLMA/tree/main
|
ViLMA
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
Yes
|
They manually checked every video-caption-foil sample, retaining only those in which the foil was unambiguously false with respect to the input video. This resulted in the removal of 1278 (15.11%) of samples in the proficiency tests. The main tests were validated independently, in a study conducted on AMTurk.
|
simple mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Composite phenomenon
|
Yes
| null |
No
|
Grounding
| null | null |
['Another benchmark']
|
['Convenience']
|
['Multiple choice']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Partial']
|
['Mean']
|
kurticMathadorLMDynamicBenchmark2024
|
Mathador-LM: A Dynamic Benchmark for Mathematical Reasoning on Large Language Models
|
Include
| null | null |
The paper introduces Mathador-LM, an arithmetic reasoning benchmark where models must reach a target number using five input numbers and basic arithmetic operations. Evaluations across various LLMs show that even top models perform far below the level of 3rd-grade students, revealing significant gaps in mathematical reasoning while avoiding test-data contamination issues.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
reasoning, planning, math
|
Yes
|
LLM performance in interpreting a ruleset and planning a valid sequence of arithmetic operations is critical for reasoning evaluation. We propose an alternative pathway towards reliable examination of LLM performance via dynamic, one-time benchmarks that mitigate contamination by being created on-the-fly, independently for each evaluation run. This approach mitigates issues such as test-set leakage into training data and provides a reliable method to evaluate closed-source models, even in the absence of detailed information about their training data.
|
Subset
| null |
Given 5 base numbers and a target, generate up to 4 arithmetic steps that yield the target while obeying game constraints.
|
Target number, a list of 5 base numbers, a 4‑line expression sequence as a response.
| null |
Procedurally-generated task examples (e.g. Creating instances from a template)
|
1000 items
|
Yes
|
Difficulty (calculated by the average attainable score across all solutions for the target)
|
Random sample (creators defined a task space and sampled from it), Specific criteria (items were taken from a larger set based on specified rules)
|
Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall)
|
Normalised game‑score accuracy (to yield 0-100%)
|
All items come from an algorithmic generator that samples numbers within predefined ranges and verifies solvability. There are no hand‑written examples, no crowd‑sourcing, no exam questions, and no LLM‑generated prompts.
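A minimal sketch of what such a generator-side solvability check could look like, assuming (purely as an illustration of the idea, not the authors' exact game rules) that each base number is used at most once, only the four basic operations are allowed, and intermediate results must be positive integers:

```python
from itertools import combinations

def reachable(numbers: tuple[int, ...], target: int, max_steps: int = 4) -> bool:
    """True if `target` can be produced from `numbers` in 1..max_steps binary
    arithmetic steps (+, -, *, /), each step combining two available values."""
    if max_steps == 0:
        return False
    for i, j in combinations(range(len(numbers)), 2):
        a, b = numbers[i], numbers[j]
        rest = tuple(n for k, n in enumerate(numbers) if k not in (i, j))
        candidates = {a + b, a * b, abs(a - b)}
        if b and a % b == 0:
            candidates.add(a // b)
        if a and b % a == 0:
            candidates.add(b // a)
        for r in candidates:
            if r == target:
                return True
            # keep the intermediate result and recurse with one fewer step
            if r > 0 and reachable(rest + (r,), target, max_steps - 1):
                return True
    return False

# Example: check solvability of a candidate instance before keeping it
# (e.g. 3 * 7 = 21, then 21 + 5 = 26).
print(reachable((3, 5, 7, 11, 2), 26))
```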
|
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Difficulty level, error type
| null |
https://github.com/IST-DASLab/Mathador-LM
|
Mathador-LM
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
Yes
|
Yes
|
stability study across different regenerated datasets showing low variance
|
simple mean, 95% confidence interval, percentage point of performance gains over baselines
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
No
| null |
Yes
|
Reasoning
|
Mathematical
| null |
['Procedurally-generated']
|
['Random', 'Criterion']
|
['Free response']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Mean', 'Tests']
|
zhangCABComprehensiveAttention2023
|
CAB: Comprehensive Attention Benchmarking on Long Sequence Modeling
|
Include
| null | null |
CAB is a multimodal benchmark assessing long-range modeling in transformers across computer vision, natural language processing, speech processing, and time-series forecasting. It is publicly available and composed of 7 tasks, spanning 9 datasets, to measure noncausal self, causal self, noncausal cross, and causal cross attention with a custom metric.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Long-range modeling
|
Yes
|
Long range modeling is implicitly defined as "longer sequence modeling in different domains" (1).
|
Subset
|
The four types of attention to be measured are noncausal self attention, causal self attention, noncausal cross attention, causal cross attention.
|
There are "seven tasks covering four research fields ... computer vision, natural language processing, speech processing, and time series forecasting" (3). The tasks are Text-to-Speech Synthesis (TTS), Summarization (Sum), Long Sequence Time-series Forecasting (LSTF), Point Cloud Completion (PCC), Langauge Modeling (LM), Masked Language Modeling (MLM), Super-Resolution (SR).
|
The benchmark is composed of 9 distinct datasets with different features.
|
Each task has its own dataset and metric.
|
Real task examples (e.g. GitHub issues), Modified from another benchmark (e.g. translation into another language)
| null |
No
| null |
Unknown
|
Multiple choice, Short free response (e.g. single word or number), Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF), LLM post-processing (extracting answers, reformatting for automated scoring), Distribution (perplexity, calibration, correlation), Mel-Cepstral Distortion (a measure of audio quality for TTS). A custom index is defined to balance all the evaluation metrics.
|
TTS uses MCD, MSD. Sum uses ROUGE. LSTF uses MSE, MAE. PCC uses CD and F-Score. LM and MLM use PPL. SR uses PSNR and SSIM. The paper defines a custom metric, compositional index (CI), which is "a normalized score to balance the influence among evaluation metrics, and high CI represents excellence. It is computed as follows: a) we transform all evaluation metrics beforehand, so that a higher score indicates better performance; b) we then normalize each transformed metric with Z-score normalization; c) after normalization, the score of each evaluation metric is averaged within each task, and is further averaged across tasks" (5).
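A minimal sketch of how the described compositional index could be computed, assuming per-model metric scores have already been sign-adjusted so that higher is better (the variable names and example values are illustrative, not taken from the paper):

```python
import statistics

def compositional_index(scores: dict[str, dict[str, list[float]]]) -> dict[int, float]:
    """
    scores[task][metric] = list of (higher-is-better) values, one per evaluated model.
    Returns one CI value per model index, following the three steps quoted above:
    z-normalize each metric, average metrics within each task, then average across tasks.
    """
    n_models = len(next(iter(next(iter(scores.values())).values())))
    per_task = []
    for metrics in scores.values():
        z_per_metric = []
        for values in metrics.values():
            mu, sigma = statistics.mean(values), statistics.pstdev(values) or 1.0
            z_per_metric.append([(v - mu) / sigma for v in values])
        # average the normalized metrics within this task (one column per model)
        per_task.append([statistics.mean(col) for col in zip(*z_per_metric)])
    # average the per-task scores across tasks
    return {m: statistics.mean(task[m] for task in per_task) for m in range(n_models)}

# Example with two tasks and two models (made-up numbers):
ci = compositional_index({
    "Sum": {"ROUGE": [0.40, 0.35]},
    "LM": {"neg_PPL": [-12.0, -15.0]},  # perplexity negated so higher is better
})
print(ci)
```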
| null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Custom compositional index (CI)
|
Yes
|
Scores are reported for each attention type, per task, per sub-metric in each task, and with the total CI.
| null |
https://github.com/Shark-NLP/CAB
|
CAB (Comprehensive Attention Benchmark)
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
Yes
|
The authors highlight that evaluating long-range modeling requires assessing "standard bidirectional (or noncausal) self attention" and "cross attentions and unidirectional (or causal) attentions, which are equally important to downstream applications" (1). CAB is proposed to measure causal and cross self attention, in addition to noncausal self attention.
|
Simple mean/sum, custom normalized aggregate metric
|
Model access required (e.g. logits)
|
Complete real task (e.g. providing medical advice to real people interactively)
| null |
Composite phenomenon
|
Yes
|
Each task has a different target sequence length.
|
No
|
NLP
|
Long Context
| null |
['Real task', 'Another benchmark']
|
['Unknown']
|
['Multiple choice', 'Short free response', 'Free response']
|
['Exact match', 'Soft match', 'LLM post-processing', 'Distribution', '']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Complete']
|
['Mean', 'Other']
|
chenMLLMasajudgeAssessingMultimodal2024
|
MLLM-as-a-Judge: Assessing Multimodal LLM-as-a-Judge with Vision-Language Benchmark
|
Include
| null | null |
This paper introduces the MLLM-as-a-Judge benchmark to assess the ability of MLLMs in assisting judges across diverse modalities, encompassing three distinct tasks: Scoring Evaluation, Pair Comparison, and Batch Ranking.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
MLLM-as-a-judge
| null |
capability of MLLMs in tasks of Scoring Evaluation, Pair Comparison and Batch Ranking.
|
Comprehensive
| null |
Take a single MLLM response, provide the score; Take two MLLM responses, compare which one is better; Take a batch of MLLM responses, provide a ranking
|
<two MLLM responses>, Judgement: B
| null |
Expert-crafted task examples (e.g. hand-written examples)
|
17903
|
Yes
|
input setting
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Short free response (e.g. single word or number)
|
Correlation (Matthew's correlation, Pearson's r)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
| null | null |
https://github.com/Dongping-Chen/MLLM-Judge
|
MLLM-as-a-Judge
|
Widely-agreed
|
Yes
| null |
Yes
|
No
|
No comparisons made
|
No
|
No
|
Yes
|
we implement cross-validation between different annotators and conduct continuous monitoring to ensure they are maintaining objectivity and fairness.
| null |
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people), Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null |
No
|
LLM as a Judge
| null | null |
['Expert-crafted']
|
['Targeted']
|
['Short free response']
|
['Correlation']
|
['Widely-agreed']
|
['Yes']
|
['']
|
['No comparison made']
|
['Yes']
|
['Partial', 'Constructed']
| null |
gandhiUnderstandingSocialReasoning2023
|
Understanding Social Reasoning in Language Models with Language Models
|
Include
| null | null |
The paper introduces BigToM, a comprehensive benchmark containing 5,000 Theory-of-Mind scenarios created through procedural generation using GPT-4-populated causal templates and validated by human raters. This benchmark addresses key limitations in existing ToM evaluations of large language models, specifically the inconsistent results across previous studies and methodological concerns about evaluation validity. This benchmark aims to provide a more rigorous framework for assessing how well AI systems can understand human mental states.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Theory-of-mind, (social) reasoning
|
Yes
|
Theory of Mind (ToM) is defined as the ability to attribute latent mental states (beliefs, desires, knowledge, emotions) to agents and use them to explain or predict behavior. By representing ToM scenarios as causal graphs, we can systematically intervene on variables, generate control conditions, and probe different aspects of an LLM’s ToM capabilities.
|
Subset
| null |
Multiple‑choice comprehension questions where a model is expected to infer an agent’s belief or action given a short story generated from a causal template.
|
Story, Question, 2 answer options, Correct answer, Condition category.
| null |
Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
5000 items
|
Yes
|
Condition category (e.g., True/False Belief in terms of theory-of-mind)
|
Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Condition category
| null |
https://sites.google.com/view/social-reasoning-lms
|
BigToM
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
Yes
|
Yes
|
expert review + participant rating studies, coherence test
|
Mean with 95% Confidence Interval
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
|
Synthetic narratives rather than real‑world usage, designed for controlled probing.
|
Composite phenomenon
|
Yes
| null |
Yes
|
Theory of Mind
| null | null |
['Procedurally-generated', 'LLM-generated']
|
['Criterion']
|
['Multiple choice']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Mean', 'Tests']
|
bandarkarBelebeleBenchmarkParallel2024
|
The BELEBELE Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants
|
Include
| null | null |
BELEBELE is a multiple-choice machine reading comprehension benchmark designed to evaluate language models' multilingual capabilities. It covers diverse languages and scripts, ranging from high- and medium- to low-resource languages.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
reading comprehension, multilingual
|
Yes
|
Multilingual reading comprehension as the ability to understand text passages and correctly answer multiple-choice questions about those passages across different languages.
|
Subset
| null |
Multiple-choice reading comprehension: LMs read a passage and answer a question about it by selecting the correct option from four possible answers.
|
passage from FLORES-200, question about the passage, four multiple-choice answers
| null |
Crowd-sourced task examples (e.g. Prolific-created tasks), Modified from another benchmark (e.g. translation into another language)
|
67.5k training samples, 3.7k development samples
|
Yes
|
language, script, language family
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null |
Adapted FLORES-200 machine translation dataset. Questions and answers were created in English by professional annotators.
|
Academia
|
Yes
| null | null |
Test, Train
| null | null |
Simple Mean
|
Yes
|
Per-language scores,
Scores by language family,
Scores by script type
| null |
https://github.com/facebookresearch/belebele
|
BELEBELE
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
Yes
|
Yes
|
Statistical t-tests to ensure the distribution of features; Training a logistic regression;
Comparison with human performance; Cross-correlation with established benchmarks
|
Simple mean accuracy
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Single cohesive phenomenon
|
Not applicable
| null |
Yes
|
NLP
|
Understanding
|
Multilinguality
|
['Crowd-sourced', 'Another benchmark']
|
['Targeted', 'Criterion']
|
['Multiple choice']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative']
|
['Mean']
|
panchalWhatSayWhen2024
|
What to Say and When to Say it: Live Fitness Coaching as a Testbed for Situated Interaction
|
Include
| null | null |
Open-ended, asynchronous interactions, where an AI model may proactively deliver timely responses or feedback based on the unfolding situation in real-time, are an open challenge. This work presents the QEVD-FIT-COACH benchmark and dataset, which explores human-AI interaction in the challenging, yet controlled, real-world domain of fitness coaching – a task which intrinsically requires monitoring live user activity and providing immediate feedback.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Situated interaction
|
Yes
|
A notable type of situated interaction is the instructional or coaching scenario, where an instructor guides a user through a complex activity, such as live fitness coaching.
|
Subset
| null |
Feedback Structure: The feedback in the QEVD-FIT-COACH benchmark has the following structure: At the start of each exercise, acknowledging feedback is given once the user has started; otherwise, a reminder to do so is provided. Corrective feedback is provided as soon as a mistake is clearly visible. Similarly, when the user begins to correct their mistake, feedback is provided to acknowledge and guide the user to successfully correct the error. If the user is performing the exercise correctly, feedback focuses on repetition counting. Finally, at the end of each exercise, feedback focused on the overall performance during that exercise is provided.
|
Video frames, a list of feedback statements that correspond to a specific timestep
| null |
Real task examples (e.g. GitHub issues), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
28,326
|
No
| null |
Random sample (creators defined a task space and sampled from it)
|
Free response (e.g. summary paragraph, executable code), Extended interaction (e.g. conversation, calling an API and processing the response)
|
n-gram (BLEU, ROUGE, chrF), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train
|
377,678
| null |
Simple Mean
|
No
| null | null |
https://github.com/Qualcomm-AI-research/FitCoach
|
QEVD-FIT-COACH
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
Yes
|
No
| null |
simple mean
|
Outputs alone
|
Complete real task (e.g. providing medical advice to real people interactively)
| null |
Single cohesive phenomenon
|
No
| null |
No
|
User Interaction
| null | null |
['Real task', 'LLM-generated']
|
['Random']
|
['Free response', 'Interaction']
|
['Soft match', 'LLM-as-a-Judge']
|
['Contested']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Complete']
|
['Mean']
|
zhangCABComprehensiveAttention2023
|
CAB: Comprehensive Attention Benchmarking on Long Sequence Modeling
|
Include
| null | null |
Current benchmarks testing different attention architectures for long-term modelling focus only on the standard bidirectional (or noncausal) self attention, and completely ignore cross attention and unidirectional (or causal) attention.
In this paper, we propose the Comprehensive Attention Benchmark (CAB) with four distinguishable attention patterns, namely noncausal self, causal self, noncausal cross, and causal cross attention. Across seven tasks, CAB validates efficient attentions in eight backbone networks to show their generalization across neural architectures.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Long sequence modelling capability of transformers with different attention mechanisms
|
No
|
It is defined as the performance of transformer models in handling long-sequence tasks. There are seven tasks, such as text-to-speech synthesis, summarization, and long sequence time-series forecasting. The sequence length considered 'long' differs for each task.
|
Comprehensive
| null |
There are seven long-sequence tasks handled by transformers, spanning computer vision, NLP, and time-series forecasting. Examples include Super-Resolution, Masked Language Modelling, and Long Sequence Time-series Forecasting.
|
A long-sequence task, the dataset for the task, sequence length, evaluation metric for performance, transformer model type, attention mechanism
| null |
Real task examples (e.g. GitHub issues)
| null |
Yes
|
The backbone transformer architecture and attention architecture
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF), Distribution (perplexity, calibration, correlation), MCD, MSD, PSNR, SSIM
|
Different metrics used to evaluate each task.
| null |
Academia
|
Yes
| null | null |
Test
| null |
Different outputs for each task. For example, for summarisation, the model produces a summary paragraph; for Super-Resolution, it converts low-resolution (16 × 16) face images into high-resolution (128 × 128) images.
| null |
Yes
|
Results are provided separately for each task with different metrics.
| null |
https://github.com/Shark-NLP/CAB
|
Comprehensive Attention Benchmark (CAB)
|
Not defined
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
No
| null | null |
Outputs alone
|
Complete real task (e.g. providing medical advice to real people interactively)
| null |
Composite phenomenon
|
Yes
|
One dataset for each task, seven tasks in total
|
No
|
NLP
|
Long Context
| null |
['Real task']
|
['Convenience']
|
['Free response']
|
['Exact match', 'Soft match', 'Distribution', 'Soft match']
|
['No definition']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Complete']
| null |
tianDiagnosingFirstorderLogical2021
|
Diagnosing the First‑Order Logical Reasoning Ability Through LogicNLI
|
Include
| null | null |
The paper introduces LogicNLI, an NLI-style benchmark crafted to diagnose large language models' first-order logic (FOL) reasoning abilities. It disentangles logic from commonsense by procedurally generating facts, rules and statements covering seven FOL operators, and evaluates models along four axes: accuracy, robustness, generalization and proof-based traceability. Experimental results show substantial gaps to human performance and highlight weaknesses in negation and universal quantification.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
(logical) reasoning
|
Yes
|
Authors define FOL reasoning as multi‑step inference using the seven foundational logical operators applied to simple propositions expressed in natural language.
|
Subset
| null |
Given a set of natural-language facts and rules, the model is expected to predict whether a hypothesis is entailment, contradiction, neutral, or paradox relative to the premise.
|
premise texts (facts + rules), hypothesis statement, ground-truth label, proofs
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Procedurally-generated task examples (e.g. Creating instances from a template)
|
2000
|
Yes
|
label balance, hop counts, subject/predicate/vocab sizes, sequence lengths
|
Random sample (creators defined a task space and sampled from it), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
No, link is broken
| null | null |
Test, Train, Validation
|
16000 (train), 2000 (validation)
| null |
Simple Mean
|
Yes
|
robustness, generalization, traceability
| null |
https://github.com/tianyikillua/LogicNLI
|
LogicNLI
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
Yes
|
Yes
|
complementary test suites and analyses per operator
|
simple mean
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
Not applicable
| null |
No
|
Reasoning
|
Logical
| null |
['Author-crafted', 'Procedurally-generated']
|
['Random', 'Criterion']
|
['Multiple choice']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Mean']
|
zhouRICAEvaluatingRobust2021
|
RICA: Evaluating Robust Inference Capabilities Based on Commonsense Axioms
|
Include
| null | null |
RICA is a benchmark for evaluating language models' ability to make robust commonsense inferences despite linguistic variations and logical perturbations. Starting from first‑order‑logic templates that encode commonsense axioms, the authors automatically crawl ConceptNet and ATOMIC, then apply 24 perturbation types (negation, antonym, paraphrase, etc.). Experiments show that pre-trained language models perform poorly on robust inference tasks even after fine-tuning, highlighting a significant gap between current AI systems and human-level understanding.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
natural language inference (for commonsense knowledge), first-order logic
|
Yes
|
Authors formalize each commonsense axiom as a first‑order‑logic implication and deem a model successful only if it solves all logically‑equivalent probes for that axiom
|
Subset
| null |
To test language models' commonsense reasoning and robustness to linguistic variations, the authors transform first-order logic axioms into multiple syntactically different statements expressing the same inferential relationship, then evaluate models through task-specific probes, considering models successful only if they perform like humans across all variations of the axioms.
|
Premise + conclusion statement (with a [MASK] token) + a pair of novel entity strings
| null |
Crowd-sourced task examples (e.g. Prolific-created tasks), Procedurally-generated task examples (e.g. Creating instances from a template)
|
2600
|
No
| null |
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice, Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
No, link is broken
| null | null |
Test, Train, Validation
|
8000 (train)
| null |
Simple Mean
|
No
| null | null |
https://sites.google.com/usc.edu/rica
|
RICA
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
Yes
|
Yes
|
Ablation studies, perturbation analysis, knowledge control
|
Simple mean + standard deviations
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
Not applicable
|
2,600 test items (jointly human-verified and curated), 8,000 training items (plus optional 100,000 noisy or 257,000 raw items)
| null |
Reasoning
|
Logical
| null |
['Crowd-sourced', 'Procedurally-generated']
|
['Targeted']
|
['Multiple choice', 'Short free response']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Mean', 'Std']
|
agrawalLargeLanguageModels2022
|
Large Language Models are Few-Shot Clinical Information Extractors
|
Include
| null | null |
Datasets for benchmarking LLMs on five different clinical NLP tasks: clinical sense disambiguation, biomedical evidence extraction, coreference resolution, medication status extraction, and medication attribute extraction. They show that LLMs perform well on these tasks despite not being specifically trained for the clinical domain.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
clinical information extraction
|
Yes
|
Clinical information extraction as "the extraction of important variables trapped in clinical notes" and the structuring of clinical variables from unstructured text (handling ambiguous jargon and nonstandard phrasal structure specific to clinical text).
|
Subset
| null |
Tasks in clinical information extraction: 1) clinical sense disambiguation, 2) biomedical evidence extraction, 3) coreference resolution, 4) medication status extraction, and 5) medication attribute extraction.
|
For example, in clinical sense disambiguation, an item consists of a clinical note and an abbreviation to be expanded.
| null |
Real task examples (e.g. GitHub issues), Expert-crafted task examples (e.g. hand-written examples)
| null |
Yes
|
medication type, status, relation types, antecedent-pronoun pairs
|
Convenience sample (creators found a set of tasks that was readily accessible), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
| null |
Clinical text primarily from the CASI dataset (Clinical Acronym Sense Inventory), with new annotations created on this text.
|
Academia
|
Yes
| null | null |
Test, Train, Validation
| null | null |
Simple Mean
|
Yes
|
Scores for each of the five tasks.
| null |
https://huggingface.co/datasets/mitclinicalml/clinical-ie
| null |
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
No
| null |
Primary metrics used are F1 scores, accuracy, recall, and precision. In some cases, micro and macro averages are reported.
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people), Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null |
Yes
|
NLP
|
Extraction
|
Medicine
|
['Real task', 'Expert-crafted']
|
['Convenience', 'Criterion']
|
['Short free response', 'Structured']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial', 'Constructed']
|
['Mean']
|
guhaLegalBenchCollaborativelyBuilt2023
|
LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models
|
Include
| null | null |
LEGALBENCH is a comprehensive benchmark for evaluating language models' legal reasoning capabilities, consisting of 162 tasks spanning six distinct categories of legal reasoning that were designed and hand-crafted by legal professionals. The benchmark provides evaluation code, prompts, and a common vocabulary bridging legal frameworks and AI development, enabling rigorous assessment of both open-source and commercial language models on practically useful legal reasoning skills.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
(legal) reasoning
|
Yes
|
Six reasoning types: issue‑spotting, rule‑recall, rule‑application, rule‑conclusion, interpretation, rhetorical‑understanding.
|
Subset
| null |
input text (from single sentences to two‑page documents) + prompt + label (classification/ generation).
|
one legal question, text snippet, gold answer/label.
| null |
Real task examples (e.g. GitHub issues), Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
91000
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Short free response (e.g. single word or number), Free response (e.g. summary paragraph, executable code), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
| null |
LEGALBENCH was constructed from a mix of existing legal datasets (restructured for the few-shot LLM paradigm), and hand-crafted datasets created and contributed by legal professionals (included as authors on this work).
|
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null |
LEGALBENCH tasks also span different formats: multiple-choice questions (35 tasks), open-generation (7 tasks), binary classification (112 tasks), and multi-class/multi-label classification (8 tasks). Tasks range from 50–2,000 samples (avg. approximately 563 samples per task).
|
Weighted Mean
|
Yes
|
162 individual task scores (task structure/reasoning types/legal domains/language variation)
| null |
https://huggingface.co/datasets/nguha/legalbench
|
LEGALBENCH
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
The benchmark is itself realistic
|
No
|
Partially; addressed their own limitations.
|
We note that the purpose of this work isn’t to evaluate whether computational systems should replace lawyers and legal officers, or to understand the positive and negative impacts of that replacement. Our goal is to construct artifacts that enable the relevant stakeholders and affected communities to better understand, empirically, the capacity for LLMs to perform different types of legal tasks. Given the proliferation of computational legal tools, we believe that answering this question is vital for ensuring their safe and ethical usage. (section 1)
|
Mean and Standard deviation
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
|
Clause classification, supply-chain disclosure compliance, privacy-policy entailment
|
Composite phenomenon
|
Yes
| null | null |
Law
| null | null |
['Real task', 'Another benchmark', 'Procedurally-generated', 'LLM-generated']
|
['Convenience', 'Targeted', 'Criterion']
|
['Multiple choice', 'Short free response', 'Free response', 'Structured']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Representative']
|
['Mean', 'Std']
|
chenToMBenchBenchmarkingTheory2024
|
ToMBench: Benchmarking Theory of Mind in Large Language Models
|
Include
| null | null |
ToMBENCH is a bilingual benchmark that evaluates language models' Theory of Mind (ToM) capabilities through classic psychology tasks measuring distinct social cognition abilities in a multiple-choice format. Built entirely from scratch to avoid data contamination, the benchmark reveals that even advanced models like GPT-4 still significantly underperform humans in understanding and attributing mental states, particularly when subjected to coherence stress tests. Their aim with ToMBENCH is to enable an efficient and effective evaluation of LLMs’ ToM capabilities, thereby facilitating the development of LLMs with inherent social intelligence.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
reasoning, Theory-of-mind
|
Yes
| null |
Comprehensive
|
Bilingual to avoid training-data leakage and enable unbiased evaluation.
|
Given a short social story, a question, and 4 answer options, the model is expected to choose a single correct option.
|
story, question, option A to D, correct answer
| null |
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language)
|
2,860 questions (934 stories) per English/Chinese
|
Yes
|
language, task type, ability dimension, story length, annotator agreement
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null |
scenarios from social media posts
|
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean, Weighted Mean
|
Yes
|
8 classic psychology tasks, 31 distinct social cognition abilities
| null | null |
ToMBench
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
Yes
|
Yes
|
contamination checks, coherence stress test
|
Simple mean, reported in percentage points
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null |
No
|
Theory of Mind
| null | null |
['Real task', 'Author-crafted', 'Another benchmark']
|
['Targeted', 'Criterion']
|
['Multiple choice']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative']
|
['Mean']
|
wuDetectRLBenchmarkingLLMgenerated2024
|
DetectRL: Benchmarking LLM-Generated Text Detection in Real-World Scenarios
|
Include
| null | null |
Evaluates LLM-generated text detection in realistic scenarios. The benchmark collects human-written texts from high-risk domains, generates comparable texts using popular LLMs, and applies various attack methods to simulate real-world conditions.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
LLM-generated text detection
|
Yes
|
Defines LLM-generated text detection as the task of discriminating between human-written and LLM-generated texts with a focus on real-world scenarios.
|
Subset
| null |
Detect whether text is human-written or LLM-generated, with a focus on real-world scenarios that include various prompt usages, human revisions, and writing noises.
|
Text sample that is either human-written or generated by an LLM, along with its corresponding label.
| null |
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
100,800 human-written samples
|
Yes
|
domain, LLM type, attack type, text length
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall), Distribution (perplexity, calibration, correlation)
| null | null |
Academia
|
Yes
| null | null |
Test, Train
|
Task-dependent train and test split (Table 2, p. 4)
|
Requires a binary classification (human-written vs. LLM-generated) for each text sample.
|
Simple Mean
|
Yes
|
domain, LLM type, attack type, text length intervals
| null |
https://github.com/NLP2CT/DetectRL
|
DetectRL
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
The benchmark is itself realistic
|
No
|
Yes
|
Specifically discuss the validity of their benchmark by comparing it to existing benchmarks and explaining why DetectRL better represents real-world scenarios.
|
Simple mean to aggregate across different settings. For each detector, they report AUROC and F1 Score values for each specific condition and the average across those conditions.
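As a rough illustration of the aggregation described above, the sketch below computes AUROC and F1 per condition and then a simple average across conditions; the grouping keys, threshold, and function names are illustrative assumptions, not the paper's code.

    # Illustrative sketch only; assumes binary labels (1 = LLM-generated)
    # and a detector score per sample, grouped by evaluation condition.
    from sklearn.metrics import roc_auc_score, f1_score

    def summarize_detector(results_by_condition, threshold=0.5):
        per_condition = {}
        for cond, (y_true, y_score) in results_by_condition.items():
            y_pred = [int(s >= threshold) for s in y_score]
            per_condition[cond] = {
                "auroc": roc_auc_score(y_true, y_score),
                "f1": f1_score(y_true, y_pred),
            }
        avg = {m: sum(v[m] for v in per_condition.values()) / len(per_condition)
               for m in ("auroc", "f1")}
        return per_condition, avg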
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
|
DetectRL simulates realistic conditions under which LLM-generated text detection would need to operate, while not being a complete real task.
|
Composite phenomenon
|
Yes
| null |
No
|
NLP
|
Detection
| null |
['Real task', 'Author-crafted', 'Procedurally-generated', 'LLM-generated']
|
['Targeted', 'Criterion']
|
['Multiple choice']
|
['Exact match', 'Distribution']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Partial']
|
['Mean']
|
patelMultiLogiEvalEvaluatingMultistep2024
|
Multi-LogiEval: Towards Evaluating Multi-Step Logical Reasoning Ability of Large Language Models
|
Include
| null | null |
Multi-LogiEval evaluates language models' multi-step logical reasoning across propositional, first-order, and non-monotonic logic with varied inference rules and depths. Tests on leading models reveal substantial performance drops as reasoning complexity increases, exposing critical gaps in logical reasoning capabilities.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
reasoning, multi-step logic
|
Yes
|
The ability to perform multi-step reasoning – drawing conclusions from provided multiple premises – is a hallmark of human intelligence. Our work aims to bridge these gaps by creating a more comprehensive and logically complex evaluation dataset by incorporating varying numbers of reasoning depths (i.e., multi-steps) to reach conclusions. Our work systematically evaluates multi-hop logical reasoning over various inference rules and their combinations. (Section 1)
|
Subset
| null |
Given a context (story with logical statements) and a question (candidate conclusion), the model is expected to predict Yes/No. (Binary entailment classification)
|
Context, question, answer, metadata (e.g., logic type, logic depth)
| null |
Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
1552
|
No
| null |
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
15 types (3 logics × 5 depths)
| null |
https://github.com/Mihir3009/Multi-LogiEval
|
Multi-LogiEval
|
Widely-agreed
|
Yes
| null |
Yes
|
Yes
|
Yes
|
No
|
Yes
|
Yes
|
Error analysis, validity discussion
|
simple mean
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
Not applicable
| null |
Yes
|
Reasoning
|
Logical
| null |
['Procedurally-generated', 'LLM-generated']
|
['Targeted', 'Criterion']
|
['Multiple choice']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Mean']
|
luWebLINXRealworldWebsite2024
|
WEBLINX: Real-World Website Navigation with Multi-Turn Dialogue
|
Include
| null | null |
WebLinx introduces a large-scale benchmark of 100K interactions across 2,300 expert demonstrations of conversational web navigation. The authors develop a multimodal agent capable of interpreting both visual and textual input to complete web-based tasks with long-context understanding and planning capabilities.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Multimodal reasoning, web navigation
|
Yes
|
“We define the real-world problem of conversational web navigation: given the initial user instruction, an agent must complete a real-world task inside a web browser while communicating with the user via multi-turn dialogue.” (Page 1, Introduction)
|
Subset
| null |
The task is defined as conversational web navigation, where an agent must complete a user-specified goal on a real-world website by interacting with the web interface (e.g., clicking, typing, submitting forms) while engaging in multi-turn dialogue with the user. The agent receives inputs such as browser screenshots, DOM elements, and dialogue history to predict the next action at each turn.
|
Each item is one step in a task, where the agent sees the current web page, past actions, and what the user said, and must decide what to do next, such as clicking a button or typing text. Many of these steps together make up a full task.
|
Each task unfolds as a multi-turn dialogue between a user (called in the paper instructor) and an agent (called in the paper navigator), with actions done in a real browser environment. The task goal may not be fully known at the start and often evolves over the conversation, making long-term memory and contextual understanding important.
|
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Expert-crafted task examples (e.g. hand-written examples), Procedurally-generated task examples (e.g. Creating instances from a template)
|
"100K interactions across 2300 expert demonstrations of conversational web navigation."
|
Yes
|
website category, subcategory, geographic region, instructor visual access, AI assistance
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number), Extended interaction (e.g. conversation, calling an API and processing the response), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
|
The exact match metric is calculated for each turn in the task, comparing the predicted action (such as click, say, text input) with the ground truth action.
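A minimal sketch of turn-level exact match with micro-averaging, as described here and in the aggregation field below; the action representation is a simplification (the official scorer distinguishes element- and text-based actions), so treat this only as an illustration.

    # Sketch: micro-averaged turn-level exact match.
    # An action is assumed to be a simple tuple, e.g. ("click", "uid=42").
    def exact_match(pred_action, gold_action):
        return int(pred_action == gold_action)

    def micro_average_em(predictions, references):
        # Pool all turns from all dialogues and average over turns.
        scores = [exact_match(p, g) for p, g in zip(predictions, references)]
        return sum(scores) / len(scores) if scores else 0.0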
| null |
Academia
|
Yes
| null | null |
Test, Train, Validation
|
Train: 24,418; validation: 1,717
| null |
Simple Mean, aggregated using the micro-average of turn-level scores
|
Yes
|
Subscores are provided for element-based actions, text-based actions, and intent matching (whether the correct action type is predicted).
| null |
https://mcgill-nlp.github.io/weblinx/
|
WEBLINX
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No
|
The benchmark is itself realistic
|
No
|
No
| null | null |
Outputs alone
|
Complete real task (e.g. providing medical advice to real people interactively)
|
The benchmark is based on interacting with people, where the outcome can differ depending on how the conversation goes, which makes it somewhat of a grey area.
|
Composite phenomenon
|
No
|
It is quite difficult to determine the size of the set from the text, especially since it is based on interactions, which are a dependent variable. I am not fully sure the sizes in task_dataset_size_extra are correct; I based them on Table 8, as it is the only table showing active turns.
|
No
|
Agents
|
Web
| null |
['Real task', 'Author-crafted', 'Expert-crafted', 'Procedurally-generated']
|
['Targeted', 'Criterion']
|
['Short free response', 'Interaction', 'Structured']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Complete']
| null |
zhangMuCGECMultireferenceMultisource2022
|
MuCGEC: a Multi-Reference Multi-Source Evaluation Dataset for Chinese Grammatical Error Correction
|
Include
| null | null |
The paper introduces MuCGEC, a multi-reference multi-source evaluation dataset for Chinese Grammatical Error Correction. This dataset contains different Chinese-as-a-Second-Language (CSL) learner sources, with each sentence corrected by three independent annotators and reviewed by a senior annotator.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Chinese Grammatical Error Correction
|
Yes
|
"Given a potentially noisy input sentence, grammatical error correction (GEC) aims to detect and correct all errors and produce a clean sentence"
|
Comprehensive
| null |
Models should detect and correct all grammatical errors in a given Chinese sentence, while preserving the original meaning.
|
Potentially erroneous Chinese sentence (input) paired with multiple human-annotated grammatically correct reference sentences (outputs).
| null |
Real task examples (e.g. GitHub issues)
|
1,092,285 (Lang8); 95,320 (HSK)
|
Yes
|
error types, sentence source, number of references per sentence, number of edits, character counts per sentence
|
Convenience sample (creators found a set of tasks that was readily accessible), Specific criteria (items were taken from a larger set based on specified rules)
|
Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train, Validation
|
1,996 sentences for NLPCC18-test, 2,000 for CGED-test, and 1,942 for Lang8-test
| null |
Simple Mean
|
Yes
|
(1) different data sources, (2) different error types, and (3) different numbers of references
| null |
https://github.com/HillZhang1999/MuCGEC
|
MuCGEC
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
Yes
|
Yes
|
(1) establishing human performance baselines, (2) analyzing annotator error patterns, (3) demonstrating that multiple references improve evaluation accuracy, and (4) demonstrating their character-based metrics.
|
Simple mean, char-based F0.5 scores for overall performance, along with precision and recall.
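For reference, the standard edit-level F0.5 used here combines precision P and recall R while weighting precision more heavily:

    F_{0.5} = \frac{(1 + 0.5^2)\, P \cdot R}{0.5^2 \cdot P + R}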
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
|
The task uses real sentences written by CSL learners that contain actual grammatical errors.
|
Composite phenomenon
|
Yes
| null |
No
|
NLP
|
Understanding
| null |
['Real task']
|
['Convenience', 'Criterion']
|
['Free response']
|
['Exact match', 'Soft match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Partial']
|
['Mean']
|
spragueMuSRTestingLimits2024
|
MuSR: Testing the Limits of Chain-of-thought with Multistep Soft Reasoning
|
Include
| null | null |
The paper introduces MuSR, a dataset of long-form natural-language problems that require multistep “soft” reasoning combining commonsense, deductive, and theory-of-mind inference. Generated through a neurosymbolic pipeline, these long-form problems reveal critical gaps in current models' reasoning capabilities despite being consistently solvable by humans.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
reasoning, commonsense knowledge, multi-step deductive
|
Yes
|
“Multistep soft reasoning” = reasoning that integrates narrative facts with implicit commonsense rules over several inference steps to reach an answer. First, a number of prior benchmarks do not have natural text. Others do not blend commonsense and multistep reasoning. Finally, we want a dataset that contains ground-truth intermediate structure and which is not solvable with rules. (Section 2)
|
Subset
| null |
Given a narrative and a question (multiple‑choice), the model is expected to choose the correct answer by reconstructing hidden reasoning chains.
|
a narrative (~1,000 words), a question, and answer options
|
Each item links to an underlying reasoning tree of ~10 steps and 6‑9 implicit commonsense facts.
|
Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
756
|
Yes
|
logic depth, number of commonsense facts, ground-truth intermediate facts
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null |
Instances produced by GPT‑4 via a neurosymbolic tree‑construction and “chaptering” pipeline.
|
Academia
|
No, link is broken
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
domain types, prompt variants, model baselines, prompt conditions
| null |
https://github.com/Zayne-Sprague/MuSR
|
MuSR
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
Yes
|
Yes
|
human/rule baselines, ablations, dataset difficulties
|
Mean and standard deviation, significance tests, and proportions
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
|
Synthetic but linguistically natural expressions; not a real-world workflow or scenario.
|
Single cohesive phenomenon
|
Not applicable
| null |
Yes
|
Reasoning
|
Commonsense
| null |
['Procedurally-generated', 'LLM-generated']
|
['Targeted']
|
['Multiple choice']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Mean', 'Std']
|
caoWenMindComprehensiveBenchmark2024
|
WenMind: A Comprehensive Benchmark for Evaluating Large Language Models in Chinese Classical Literature and Language Arts
|
Include
| null | null |
WenMind is a benchmark for Chinese Classical Literature and Language Arts (CCLLA). It spans 42 tasks across three sub-domains (Ancient Prose, Ancient Poetry, Ancient Literary Culture), in both domain- and capability-oriented formats.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Understanding, generation, and knowledge of Chinese classical language and literature
|
Yes
|
Understanding, generating, and applying knowledge of ancient Chinese texts across prose, poetry, and literary culture
|
Comprehensive
| null |
42 distinct tasks derived from classical Chinese language skills, such as translation, comprehension, poetry writing, idiom interpretation, etc.
|
Each item is a QA pair with metadata, e.g., "Translate the following ancient Chinese sentence into modern Chinese."
|
Tasks categorized by sub-domain and cognitive ability (understanding, generation, knowledge).
|
Human exam questions (e.g. GRE questions), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
4875
|
Yes
|
Domain, capability, question format, task name (coarse/fine-grained), question, and answer
|
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Short free response (e.g. single word or number), Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics)
| null |
Internet exam databases, open-source Chinese text corpora (C2MChn, WYWEB), and LLM generations (ERNIE-3.5), all standardized and filtered into Q&A formats
|
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Scores per task, domain (Prose/Poetry/Culture), and capability (Understanding/Generation/Knowledge)
| null |
https://github.com/SCUT-DLVCLab/WenMind
|
WenMind
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
Yes
|
Authors report 89.4% agreement between ERNIE-3.5 scoring and human evaluation across representative models and tasks
|
Stratified human agreement evaluation on LLM-graded items; comparisons to BLEU/F1 for scoring validity.
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions), Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
|
Maybe representative of how LLMs may be used in educational and cultural settings, but mostly a "knowledge test".
|
Composite phenomenon
|
Yes
| null |
No
|
Knowledge
|
Cultural
| null |
['Human exams', 'Author-crafted', 'Another benchmark', 'LLM-generated']
|
['Convenience', 'Targeted', 'Criterion']
|
['Multiple choice', 'Short free response', 'Free response']
|
['Exact match', 'Soft match', 'LLM-as-a-Judge']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative', 'Constructed']
|
['Mean']
|
ouyangCliMedBenchLargeScaleChinese2024
|
CliMedBench: A Large-Scale Chinese Benchmark for Evaluating Medical Large Language Models in Clinical Scenarios
|
Include
| null | null |
CliMedBench is a Chinese clinical medical benchmark with 33.7k QA items across 14 core scenarios derived from real-world medical records and exams, measuring LLMs’ clinical reasoning and language abilities. It includes evaluations of 11 LLMs.
|
Proposes a novel adaptive testing method (agent-based CAT) grounded in Item Response Theory to alleviate cost concerns, but stresses that the "real" benchmark is still just evaluating all models across all questions.
|
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
medical reasoning, diagnostic accuracy, clinical QA
|
Yes
|
Defines clinical reasoning and related capabilities such as hallucination resistance, information retrieval, and instruction via the "personas" of real medical practitioners, e.g., radiographer or pharmacist
|
Subset
|
The selected "personas" feel a bit random: doctor, med student, patient, radiologist, pharmacist. Surely there are other people in a hospital.
|
Multiple-choice, sequencing, and open-ended questions based on real-world Chinese clinical scenarios.
|
One task item can be an MCQ from an EHR, a sequencing task like reordering surgical steps, or an open-ended response like writing a discharge summary
| null |
Human exam questions (e.g. GRE questions), Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
33735
|
Yes
|
clinical role, task type, scenario ID, source (e.g., EHR, exam), evaluation axes if relevant
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Free response (e.g. summary paragraph, executable code), Sequencing
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF), Human ratings (text quality, preference, NOT manual scoring of other metrics)
|
Mix of automatic metrics and expert review
|
Combination of electronic health records, exam data, expert rephrasing, and LLM generation with filtering
|
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
14 clinical scenarios and 7 evaluation axes
| null |
https://github.com/Optifine-TAT/CliMedBench
|
CliMedBench
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
Yes
|
Yes
|
Medical professionals rated benchmark scenarios and Spearman correlation to another benchmark (MedBench) was computed
|
Simple mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people), Representative task (e.g. answering medical licensing exam questions)
|
Evaluation of actual clinical decisions/tasks, but in a segmented way
|
Composite phenomenon
|
Yes
| null |
No
|
Medicine
| null | null |
['Human exams', 'Real task', 'Author-crafted', 'LLM-generated']
|
['Targeted', 'Criterion']
|
['Multiple choice', 'Free response', 'Free response']
|
['Exact match', 'Soft match', 'Human ratings']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Partial', 'Representative']
|
['Mean']
|
sabourEmoBenchEvaluatingEmotional2024
|
EmoBench: Evaluating the Emotional Intelligence of Large Language Models
|
Include
| null | null |
EmoBench evaluates emotional intelligence (EI) covering emotional understanding and application with 400 questions in English and Chinese.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Emotional Intelligence
|
Yes
|
Emotional Intelligence, ""the ability to monitor feelings of our own and understand
feelings of others, differentiate between them, and leverage this information to guide our thoughts and actions"". Authors use understanding and application as core components.
|
Comprehensive
|
Use both a breakdown of possible emotions and a breakdown into application and understanding
|
Scenario-based multiple-choice questions; some require only selecting the most effective action, others require action selection plus identifying the emotion in the scenario
|
Scenario + MCQ on emotions, causes, or best action
|
Task items are theory-grounded and include taxonomies of emotions
|
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions)
|
400
|
Yes
|
Emotion type, correct label, category
|
Unknown
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null |
GPT-4 used for inspiration in creating scenarios, final content human-authored
|
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
By task type, language, subcategory
| null |
https://github.com/Sahandfer/EmoBench
|
EmoBench
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
Yes
|
Yes
|
Exhaustive human baselining via online survey of 48 participants (humans outperform all LLMs on average)
|
Accuracy; Fleiss' kappa for human agreement
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
|
Fully fictional; while the grounding in psychological theory seems very solid, the task is built completely from scratch rather than drawn from existing personality or emotional-intelligence exams (should those exist)
|
Composite phenomenon
|
Yes
| null |
Yes
|
Psychology
| null | null |
['Author-crafted']
|
['Unknown']
|
['Multiple choice']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Mean', 'Other']
|
garcia-ferreroThisNotDataset2023
|
This is not a Dataset: A Large Negation Benchmark to Challenge Large Language Models
|
Include
| null | null |
The paper introduces a large, semi-automatically generated dataset of roughly 400,000 descriptive sentences about commonsense knowledge that can be true or false, in which negation is present in different forms in about two thirds of the corpus.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
negation probing
|
Yes
|
Although large language models (LLMs) have apparently acquired a certain level of grammatical knowledge and the ability to make generalizations, they fail to interpret negation, a crucial step in Natural Language Processing.
|
Subset
| null |
Take a natural language sentence, and classify its truth value.
|
A brother is never a female person who has the same parents as another person. True.
| null |
Expert-crafted task examples (e.g. hand-written examples)
|
90,281
|
Yes
|
types and amount of negative knowledge
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test, Train, Validation
|
Train: 268,505; validation: 2,514
| null | null |
Yes
|
By types and amount of negative knowledge
| null |
https://github.com/hitz-zentroa/This-is-not-a-Dataset
| null |
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
Yes
|
3.2 Dataset quality assessment: Human Evaluation addresses the validation of the generation process and the different templates used, that is to say, whether the sentences in the dataset are grammatical and overall represent true and false knowledge as expected.
| null |
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null |
No
|
NLP
|
Understanding
| null |
['Expert-crafted']
|
['Targeted']
|
['Short free response']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
| null |
josephFactPICOFactualityEvaluation2024
|
FACTPICO: Factuality Evaluation for Plain Language Summarization of Medical Evidence
|
Include
| null | null |
FACTPICO evaluates the factuality of LLM-generated plain language summaries of medical randomized controlled trials (RCTs). It features fine-grained expert annotations across five key dimensions and includes both human and LLM-generated rationales.
|
Rich analysis, but highly unscalable: all evaluation was done by expert humans, which explains why only 3 models were tested
|
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Factuality in plain language summarization
|
Yes
|
Factuality is defined as accurate representation of critical trial elements (Population, Intervention, Comparator, Outcome: PICO) and their results, with particular focus on the correctness of added explanatory content
|
Subset
| null |
Generate plain-language summaries of abstracts of randomized controlled trials.
|
Each item is an RCT abstract; the output is always its summary.
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language)
|
115
|
Yes
|
PICO annotations, human rationales.
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Free response (e.g. summary paragraph, executable code)
|
Human ratings (text quality, preference, NOT manual scoring of other metrics)
|
Time-intensive human scoring of free-text responses across the PICO dimensions
|
Based on RCT descriptions from the Evidence Inference 2.0 dataset, sampled to exclude those that already have human summaries
|
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Subscores for each PICO element and Evidence Inference
| null |
https://github.com/lilywchen/FactPICO
|
FACTPICO
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
Yes
|
Yes
|
Authors analyze inter-annotator agreement, correlation with expert judgments, and rationale similarity.
|
Flesch-Kincaid, ROUGE-L, Kendall's tau, Spearman's rho
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
|
Summarizing technical trial outcomes into accessible language is a real-world medical communication need.
|
Single cohesive phenomenon
|
Not applicable
| null |
No
|
NLP
|
Summarization
| null |
['Author-crafted', 'Another benchmark']
|
['Targeted', 'Criterion']
|
['Free response']
|
['Human ratings']
|
['Contested']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Partial']
|
['Other']
|
liuRevisitingDeIdentificationElectronic2023
|
Revisiting De-Identification of Electronic Medical Records: Evaluation of Within- and Cross-Hospital Generalization
|
Include
| null |
Boundary include - measures LLM capabilities in theory and does provide a benchmark, but tests pre-ChatGPT models including a self-trained CNN
|
Benchmark for de-identification of protected health information (PHI) in Chinese electronic medical records, with a focus on cross-hospital generalization. Constructs a multi-hospital dataset and evaluates various models and domain generalization (DG) techniques to assess performance under domain shift.
|
Pre-LLM, so it uses the dataset (which is still a valid benchmark) as training data for a CNN and a BERT fine-tuning run.
|
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Generalization on de-identification tasks
|
Yes
|
Anonymization
|
Subset
| null |
Detect and remove personal health information mentions (e.g., names, locations, dates) in clinical records from three Chinese hospitals.
|
Each item is a sentence or span from an electronic medical record, with relevant tokens labeled using tags corresponding to personal data categories, but with the labels hidden. The task is to recreate these token labels.
|
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions)
|
500
|
Yes
|
Sentence and mention counts, health information category counts per dataset
|
Random sample (creators defined a task space and sampled from it), Convenience sample (creators found a set of tasks that was readily accessible)
|
Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
|
Predicted span of tags must exactly match correct span and category
|
Data collected from three Chinese hospitals and hand-annotated; it is unclear how this was reconciled with data protection requirements, even with anonymization.
|
Academia
|
Yes
| null | null |
Test, Train, Validation
|
400
|
Sequence output with personal data tagged
|
Simple Mean
|
Yes
|
Per PII category (e.g., PERSON, DATE, ID)
| null |
https://github.com/lanyangyang93/Revisiting-De-Identification
| null |
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
No
| null | null |
Outputs alone
|
Complete real task (e.g. providing medical advice to real people interactively)
|
De-identification is a required step for real-world medical data sharing and many other data-sharing contexts.
|
Single cohesive phenomenon
|
Not applicable
| null |
No
|
Medicine
| null | null |
['Real task', 'Author-crafted']
|
['Random', 'Convenience']
|
['Structured']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Complete']
| null |
|
liuConvBenchMultiturnConversation2024
|
ConvBench: A Multi-Turn Conversation Evaluation Benchmark with Hierarchical Ablation Capability for Large Vision-Language Models
|
Include
| null | null |
This paper introduces ConvBench, a benchmark to evaluate LVLMs across hierarchical capabilities such as perception, reasoning, and creativity. It enables fine-grained error attribution and includes an automatic evaluation framework.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
multi-turn visual conversation
|
Yes
|
Open-ended multi-turn visual conversations
|
Comprehensive
| null |
Take an image and provide answers over a multi-turn conversation
|
<image> user question 1, model answer 1, user question 2, model answer 2
| null |
Expert-crafted task examples (e.g. hand-written examples), Modified from another benchmark (e.g. translation into another language), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
577
|
Yes
|
hierarchy of multimodal capabilities
|
Specific criteria (items were taken from a larger set based on specified rules)
|
Free response (e.g. summary paragraph, executable code)
|
LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean, Weighted Mean
|
No
| null | null |
https://github.com/shirlyliu64/ConvBench
|
ConvBench
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
No
| null |
simple mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people), Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null |
No
|
User Interaction
| null | null |
['Expert-crafted', 'Another benchmark', 'LLM-generated']
|
['Criterion']
|
['Free response']
|
['LLM-as-a-Judge']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial', 'Representative']
|
['Mean']
|
dinhSciExBenchmarkingLarge2024
|
SciEx: Benchmarking Large Language Models on Scientific Exams with Human Expert Grading and Automatic Grading
|
Include
| null | null |
SciEx is a multilingual, multimodal benchmark of university-level computer science exams. It includes freeform questions, containing both text and images, graded by human experts. It compares against a baseline of human students, as all questions are real exam questions.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
scientific reasoning, problem solving
|
Yes
|
"Solve scientific tasks" via exam-style problem solving, including reasoning, proof generation, and algorithmic thinking
|
Subset
| null |
Answer university CS exam questions, some of which are multimodal and freeform, with human-graded performance.
|
Each item consists of a CS exam question (text and possibly images), and the LLM must generate a free-text or structured response.
|
Questions span various formats, languages (English and German), and topics (e.g. AI, databases, algorithms)
|
Human exam questions (e.g. GRE questions)
|
154
|
Yes
|
max score, student average, gold reference answer, difficulty level, language, modality
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Free response (e.g. summary paragraph, executable code), Structured response (e.g. valid JSON, API call alone)
|
Human ratings (text quality, preference, NOT manual scoring of other metrics)
|
Expert graders score each answer; LLM-as-a-judge methods are evaluated via correlation with these expert scores but not used in the benchmark
|
CS exams from Karlsruhe Institute of Technology (2022–2024), authored by instructors
|
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Grouped by difficulty, modality, and language
| null |
https://github.com/TuAnh23/SciEx
|
SciEx
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
Yes
|
Yes
|
Comparison of expert vs LLM grading; examined factors like image use, question difficulty, language; measured grader bias. Detailed comparison to student performance, though number of students is not listed.
|
Pearson correlation, RMSE, differences to student baselines
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
|
Reflects typical LLM use cases in academic or tutoring settings, but the authors specifically claim the benchmark is for scientific work, which makes QA on exam questions representative at best
|
Composite phenomenon
|
No
| null |
No
|
General Science
| null | null |
['Human exams']
|
['Convenience']
|
['Free response', 'Structured']
|
['Human ratings']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Other']
|
zhangUnveilingTapestryConsistency2024
|
Unveiling the Tapestry of Consistency in Large Vision-Language Models
|
Include
| null | null |
This paper introduces the ConBench benchmark to evaluate the consistency of LVLMs across prompts with varying solution spaces centered on the same knowledge point. The authors reveal key patterns in LVLM behavior and propose a trigger-based diagnostic refinement method to improve consistency and indirectly enhance captioning performance.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
consistency
|
Yes
|
Although LVLMs can generate high-quality responses to task prompts, we discover that for correctly answered cases, simply modifying the prompt will result in LLMs providing contradictory responses.
|
Comprehensive
| null |
Take an image and question, provide a short-form answer
|
<image> Question: How many real cats in the image. A) One B) Two C) Three D) Four. Answer: A
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language)
|
4000
|
Yes
|
Hierarchical Core Capability, Question Type
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Hierarchical Core Capability, Question Type
| null |
https://github.com/foundation-multimodal-models/ConBench
| null |
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people), Representative task (e.g. answering medical licensing exam questions)
| null |
Single cohesive phenomenon
|
Not applicable
| null |
No
|
Language Modelling
|
Robustness
| null |
['Author-crafted', 'Another benchmark']
|
['Targeted', 'Criterion']
|
['Multiple choice', 'Short free response']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial', 'Representative']
|
['Mean']
|
fierroMuLanStudyFact2024
|
MULAN: A Study of Fact Mutability in Language Models
|
Include
| null | null |
The authors create MULAN, a benchmark for evaluating the ability of English language models to anticipate time-contingency, covering both 1:1 and 1:N relations.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
anticipating when facts are time-contingent
|
Yes
|
time awareness in LLMs, specifically for encoding of fact mutability in their representations and for the comparative ease of editing of mutable facts versus immutable ones.
|
Subset
| null |
Given a subject–relation query (input), the task is to predict the correct object(s) (output), where queries may involve either immutable or mutable facts.
|
subject–relation query (input), correct object(s) (output).
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions)
|
Test: 35000 / 31410.
|
Yes
|
Immutable or not.
|
Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train, Validation
|
Train: 6230 / 6820; Validation: 5780 / 5910.
| null | null |
Yes
|
Immutability-1 and Immutability-N.
| null |
https://github.com/coastalcph/fact_mutability
|
MULAN
|
Contested
|
Yes
|
Yes
| null |
No
|
No comparisons made
|
No
|
No
|
No
| null | null |
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
No
| null |
No
|
Reasoning
|
Temporal
| null |
['Author-crafted']
|
['Criterion']
|
['Short free response']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
| null |
bitton-guettaVisualRiddlesCommonsense2024
|
Visual Riddles: a Commonsense and World Knowledge Challenge for Large Vision and Language Models
|
Include
| null | null |
This paper introduces Visual Riddles, a benchmark designed to evaluate LLMs on complex visual reasoning tasks that require commonsense and world knowledge. The dataset includes 400 carefully crafted riddles, each combining images, questions, and textual hints, revealing significant performance gaps between current models and human reasoning abilities.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
knowledge, visual understanding
|
Yes
|
While humans easily recognize such contextual nuances, existing image-understanding models struggle to integrate visual cues with world knowledge stemming from cultural aspects, life-experiences, and physical or social knowledge
|
Comprehensive
| null |
(1) Main Task: Solve open-ended questions.
(2) Utilizing Hints: Use textual aids to identify key visual clues in riddles.
(3) Employing Attributions: Apply web-sourced attributions to improve world-knowledge.
(4) Multiple Choice: Select the correct answer to the riddle from five options.
(5) Automatic Evaluation: Evaluate open-ended answers in two scenarios— Reference-Free, assessing the correctness of a candidate answer (CA) based only on the visual riddle, and Reference-Based, comparing CAs to the ground truth answer (GTA).
| null |
It covers five different subtasks
|
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Expert-crafted task examples (e.g. hand-written examples)
|
500
|
No
| null |
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Short free response (e.g. single word or number), Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), Human ratings (text quality, preference, NOT manual scoring of other metrics), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics)
|
They use human rating as primary evaluation method, while also compare the performance between LLM-as-a-judge and human rating in ablation study
| null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null | null |
Yes
|
Open-ended VQA, Multiple-choice VQA, Open-ended VQA Automatic Evaluation
| null |
https://huggingface.co/datasets/visual-riddles/visual_riddles
|
Visual Riddles
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions), Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
Not applicable
| null |
No
|
VQA
| null | null |
['Author-crafted', 'Expert-crafted']
|
['Targeted', 'Criterion']
|
['Multiple choice', 'Short free response', 'Free response']
|
['Exact match', 'Human ratings', 'LLM-as-a-Judge']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Representative', 'Constructed']
|
['Mean']
|
liQuantifyingAdaptabilityPretrained2022
|
Quantifying Adaptability in Pre-trained Language Models with 500 Tasks
|
Include
| null | null |
TaskBench500 is a benchmark designed to systematically measure how LLMs adapt to new tasks. It comprises "500 procedurally generated sequence modeling tasks" spanning "lexical semantics, sequence processing, memorization, logical reasoning, and world knowledge" (4696). The benchmark is publicly available.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
adaptability, compositional adaptability
|
Yes
|
Language model (LM) adaptability is how LMs are tuned to "perform" a "new task" they were not trained to complete, with adaption techniques like "fine-tuning or prompt-tuning" (4696).
|
Comprehensive
|
Task adaptation is split into the domains of measuring memory, composition, and distribution matching.
|
The benchmark defines 500 tasks total tasks. It first defines atomic tasks that are then "combined using a set of composition operators to produce more complex tasks" (4698). Atomic tasks span lexical tasks, factual tasks, and random relation tasks, and composition operators include word-level and sequential compositions.
|
Although there are 500 distinct tasks, "every task takes as input a word or word sequence, and outputs either a boolean value or a set of words/word sequences" (4698).
| null |
Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template)
| null |
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number), Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), Distribution (perplexity, calibration, correlation), Correlation (Matthew's correlation, Pearson's r)
|
To measure memorization, accuracy is used. To measure compositions of atomic tasks, Pearson correlation is used. Both metrics are referred to as the adaptability metric for their task. To measure how models learn new distributions, the paper defines a custom metric to produce "an aggregated probability mass assigned to all easier task and all harder tasks in a task pair," so it should be easier to adapt to an easier task than a harder task (4703).
| null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Atomic, Word-level Comp, Seq Comp
| null |
https://github.com/facebookresearch/task_bench
|
TaskBench500
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
Yes
|
The authors distinguish between previous attempts to study "generalization to new examples" and the paper's "systematic study of adaption to new tasks" (4698). The authors identify that "new pre-training adaption schemes are evaluated using small suites of curated tasks" which are "poorly suited for answering larger, structural questions" like "can we predict how quickly (and how effectively) pre-trained LMs can be adapted to perform it" (4696).
|
Simple mean, average
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions), Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
No
| null |
No
|
Language Modelling
|
In-context Learning
| null |
['Another benchmark', 'Procedurally-generated']
|
['Convenience', 'Targeted', 'Criterion']
|
['Short free response', 'Free response']
|
['Exact match', 'Distribution', 'Correlation']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative', 'Constructed']
|
['Mean']
|
duEmbSpatialbenchBenchmarkingSpatial2024
|
EmbSpatial-Bench: Benchmarking Spatial Understanding for Embodied Tasks with Large Vision-Language Models
|
Include
| null | null |
This paper introduces the EmbSpatial-Bench benchmark to evaluate the spatial understanding capabilities of LVLMs in embodied environments. The authors also propose EmbSpatial-SFT, an instruction-tuning dataset aimed at enhancing LVLMs' spatial reasoning from an egocentric perspective
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
spatial understanding
|
Yes
|
understand spatial relationships between objects in embodied scenarios
|
Subset
| null |
Take a spatial image and multiple-choice question, provide the answer
|
<spatial image> Question: How are television and shelf positioned in relation to each other in the image? A: ..., B: ..., C: ..., D: ....
| null |
Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
3640
|
No
| null |
Random sample (creators defined a task space and sampled from it)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
|
For the likelihood strategy, which selects the option with the highest probability under the model, the model's output logits are needed.
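A minimal sketch of such a likelihood strategy, assuming a Hugging Face causal LM; tokenization-boundary handling and option formatting are simplified assumptions, not the benchmark's actual evaluation code.

    # Sketch: pick the option whose tokens get the highest summed log-probability.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer  # assumed setup

    def pick_option(model, tokenizer, prompt, options):
        scores = []
        for opt in options:
            enc = tokenizer(prompt + " " + opt, return_tensors="pt")
            prompt_len = len(tokenizer(prompt + " ")["input_ids"])  # simplification
            with torch.no_grad():
                logits = model(**enc).logits
            log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
            target = enc["input_ids"][0, 1:]
            idx = torch.arange(prompt_len - 1, target.shape[0])
            scores.append(log_probs[idx, target[idx]].sum().item())
        return max(range(len(options)), key=lambda i: scores[i])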
| null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
No
| null | null |
https://github.com/mengfeidu/EmbSpatial-Bench
|
EmbSpatial-Bench
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
No
| null |
simple mean
|
Model access required (e.g. logits)
|
Representative task (e.g. answering medical licensing exam questions), Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
Not applicable
| null |
No
|
VQA
| null | null |
['Procedurally-generated', 'LLM-generated']
|
['Random']
|
['Multiple choice']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Representative', 'Constructed']
|
['Mean']
|
chungCanVisualLanguage2024
|
Can visual language models resolve textual ambiguity with visual cues? Let visual puns tell you!
|
Include
| null | null |
This paper introduces the UNPIE benchmark to evaluate multimodal understanding in machines using puns. By pairing puns with explanatory images, the study tests models on tasks like grounding, disambiguation, and reconstruction, showing that visual context significantly enhances performance over text-only approaches.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
visual understanding
|
Yes
|
Humans possess multimodal literacy, allowing them to actively integrate information from various modalities to form reasoning. Faced with challenges like lexical ambiguity in text, we supplement this with other modalities, such as thumbnail images or textbook illustrations. Is it possible for machines to achieve a similar multimodal understanding capability?
|
Comprehensive
| null |
(1) pun grounding: to identify the specific phrase in a sentence that forms a pun
(2) pun disambiguation: to choose the translation that best matches the image provided as a pun disambiguator.
(3) pun reconstruction: to recreate the original English pun sentence using a translated version with potentially no ambiguity.
| null |
Three different tasks are proposed
|
Expert-crafted task examples (e.g. hand-written examples), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
1000
|
Yes
|
language
|
Specific criteria (items were taken from a larger set based on specified rules)
|
image
|
Exact Match (accuracy, F1, precision, recall)
| null |
To generate pun explanation images, the expert annotators prompt the DALL-E 3 model to create images; the paper also designs a cooperative framework between machines and humans for pun translation
|
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
The report scores for different tasks (subsets) separately
| null |
https://github.com/JiwanChung/VisualPun_UNPIE
|
UNPIE
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
simple mean/sum
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
Not applicable
| null |
No
|
Grounding
| null | null |
['Expert-crafted', 'LLM-generated']
|
['Criterion']
|
['Free response']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
|
['Mean']
|
kumarVisionlanguageModelsUnderstand2024
|
Do Vision-Language Models Understand Compound Nouns?
|
Include
| null | null |
Compun is a multimodal benchmark to assess how models understand compound nouns using text-to-image retrieval tasks. The dataset is publicly available, manually curated, and focuses on noun+noun compound nouns.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
compound nouns
|
Yes
|
"A compound noun (CN) is a noun formed from two or more words combined to create a single noun with a new meaning." (519).
|
Subset
|
Compun focuses "primarily on the noun + noun type" of compound nouns (519).
|
The task is "text-to-image retrieval where, given a text prompt with a CN [compound noun]" the model must "select the correct image that shows the CN among a pair of distractor images that show the constituent nouns that make up the CN" (519).
|
"Each instance in Compun corresponds to a unique compound noun and includes one image representing the compound noun, along with two additional distractor images" (520).
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
400
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
|
The paper defines a binary accuracy metric, where the score is 1 if the cosine similarity of the prompt and positive caption is greater than the similarities between the prompt and the negative captions, and 0 otherwise.
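A minimal sketch of that binary accuracy, assuming precomputed prompt and image embeddings (e.g. from a CLIP-style encoder); the embedding pipeline and variable names are placeholders, not the paper's code.

    # Sketch: an instance scores 1 iff the prompt is closer to the positive image
    # than to every distractor; accuracy is the mean over instances.
    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def instance_score(prompt_emb, positive_emb, distractor_embs):
        pos = cosine(prompt_emb, positive_emb)
        return int(all(pos > cosine(prompt_emb, d) for d in distractor_embs))

    def accuracy(instances):
        # instances: iterable of (prompt_emb, positive_emb, [distractor_embs])
        scores = [instance_score(p, pos, negs) for p, pos, negs in instances]
        return sum(scores) / len(scores)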
|
The paper defines a new pipeline to generate prompts for text-to-image retrieval, where given a compound noun, an LLM generates "multiple diverse captions" where "each caption describes a scene with the compound noun as a key object in it. Finally, the captions are used to construct a custom prompt for text-to-image retrieval" (520). Expert annotators manually collect the images per instance, and MTurk was used to decide the most common meaning if a compound noun had several interpretations.
|
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
No
| null | null |
https://github.com/sonalkum/Compun
|
Compun
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
Yes
|
The authors ground the text-to-image retrieval task in previous cognitive science research, and discuss how the benchmark can be improved by expanding to different types of compound nouns and by using novel metrics for retrieval.
|
Average
|
Model access required (e.g. logits)
|
Representative task (e.g. answering medical licensing exam questions), Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
No
| null |
No
|
NLP
|
Understanding
| null |
['Author-crafted', 'Another benchmark', 'Procedurally-generated', 'LLM-generated']
|
['Convenience', 'Criterion']
|
['Multiple choice']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative', 'Constructed']
|
['Mean']
|
flachsGrammaticalErrorCorrection2020
|
Grammatical Error Correction in Low Error Density Domains: A New Benchmark and Analyses
|
Include
| null | null |
CWEB is a benchmark for grammatical error correction that is publicly available and manually annotated by experts. It contains website data from Common Crawl, and includes sentences with low and high error density.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Grammatical error correction
|
Yes
|
"Grammatical error correction (GEC) is the task of automatically editing text to remove grammatical errors" (8467).
|
Comprehensive
| null |
The model is given text and must identify and correct grammatical errors.
|
A single item contains a sentence with in-line corrections.
| null |
Modified from another benchmark (e.g. translation into another language)
|
13574
|
No
| null |
Random sample (creators defined a task space and sampled from it)
|
Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), Distribution (perplexity, calibration, correlation)
|
The paper uses F_{0.5} and ERRANT, standard grammar error correction metrics, cited on page 8471, to assess the correctness of the correction. Perplexity and semantic similarity are used to measure the semantic change in a sentence after the edit.
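For reference, a minimal sketch of the F_{0.5} combination from precision and recall; the ERRANT toolkit computes the edit-level precision and recall itself, so this only shows the final step:

```python
def f_beta(precision: float, recall: float, beta: float = 0.5) -> float:
    """F-beta score; beta=0.5 weights precision twice as heavily as recall,
    the standard choice for grammatical error correction."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```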
|
The websites are in English and derived from the first 18 dumps of Common Crawl. Text is filtered to remove non-English and incomplete sentences using justText. The data is manually corrected by expert annotators. The dataset is split into CWEB-S (sponsored websites) and CWEB-G (generic) websites.
|
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train
|
Development/Test 6729/6845
| null | null |
No
| null | null |
https://github.com/SimonHFL/CWEB
|
CWEB (Corrected WEBsites)
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
The benchmark is itself realistic
|
No
|
Yes
|
The authors detail that grammar error correction (GEC) models must "perform well in the open-domain setting and generalize, not only to writing produced in the educational context, but also to language production 'in the wild'" (8647). The authors also highlight that a strong GEC benchmark must evaluate "domain adaptation and low precision" in texts with low error density (8647).
|
Simple mean
|
Model access required (e.g. logits)
|
Complete real task (e.g. providing medical advice to real people interactively)
| null |
Single cohesive phenomenon
|
No
| null |
No
|
NLP
|
Understanding
| null |
['Another benchmark']
|
['Random']
|
['Free response']
|
['Exact match', 'Distribution']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Complete']
|
['Mean']
|
joshiILTURBenchmarkIndian2024
|
IL-TUR: Benchmark for Indian Legal Text Understanding and Reasoning
|
Include
| null | null |
This paper introduces IL-TUR, a benchmark designed to evaluate NLP models for legal text understanding and reasoning in the Indian legal context. The benchmark covers both monolingual (English, Hindi) and multilingual tasks across 9 Indian languages. It provides baseline models and a leaderboard for comparing approaches to automating legal document processing.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Legal text understanding and reasoning in Indian and multilingual context
|
Yes
|
The tasks should cater exclusively to the legal domain. Solving a task should require in-depth knowledge and understanding of the law and its associated areas. [...] Moreover, solving legal tasks should require knowledge about the law as well as commonsense knowledge and societal norms about the world.
| null | null |
Tasks fall into several types: sequence classification, multi-class classification, classification and extraction, classification, multi-label classification, retrieval, and generation.
|
An input text (short text, paragraph, or sentences) paired with the expected output label or text.
| null |
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Expert-crafted task examples (e.g. hand-written examples), Crowd-sourced task examples (e.g. Prolific-created tasks), Procedurally-generated task examples (e.g. Creating instances from a template)
|
~15,100, though size information is only provided for some tasks and datasets
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number), Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF), BERTScore, GLEU
| null | null |
Academia
|
Yes
| null | null |
Test, Train, Validation
|
Split sizes are reported for some tasks/datasets but not others
| null |
Simple Mean
|
No
| null |
majority@k (majority vote over k trials)
|
https://exploration-lab.github.io/IL-TUR/
|
IL-TUR
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
Yes
|
"The tasks should cater exclusively to the legal domain. Solving a task should require in-depth knowledge and understanding of the law and its associated areas [...] as well as commonsense knowledge and societal norms."
|
simple mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people), Representative task (e.g. answering medical licensing exam questions), Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
|
CJPE (Court Judgment Prediction with Explanation) and BAIL (Bail Prediction) are framed as supervised classification tasks using labeled court documents, which makes them constructed tasks.
Legal Named Entity Recognition (L-NER) and Rhetorical Role Prediction (RR), which model foundational capabilities for downstream legal NLP applications, are representative tasks that model the legal system. The rest are partial real tasks, since the use case involves real legal documents.
|
Composite phenomenon
|
Yes
| null |
No
|
Law
| null | null |
['Real task', 'Author-crafted', 'Expert-crafted', 'Crowd-sourced', 'Procedurally-generated']
|
['Convenience', 'Targeted', 'Criterion']
|
['Short free response', 'Free response']
|
['Exact match', 'Soft match', 'Soft match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Partial', 'Representative', 'Constructed']
|
['Mean']
|
royBenchCLAMPBenchmarkEvaluating2023
|
BenchCLAMP: A Benchmark for Evaluating Language Models on Syntactic and Semantic Parsing
|
Include
| null | null |
BenchCLAMP evaluates Constrained LAnguage Model Parsing on syntactic and semantic parsing tasks. It provides context-free grammars using both prompt-based learning and fine-tuning across different data resource settings.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
syntactic parsing, semantic parsing
|
Yes
|
"Parsing tasks are generally not considered a testbed for such evaluation. The outputs of parsing tasks are structured objects such as parse trees or code. State-of-the-art systems thus involve task- or dataset-specific model architectures and target representation constraints. Evaluating language models on parsing tasks test capabilities not captured by commonly used evaluation tasks."
|
Comprehensive
| null |
Generating syntactic or semantic representations from natural language inputs, which includes context-free grammars for semantic and syntactic parsing datasets, as well as a constrained decoding interface to generate only valid outputs covered by these grammars.
|
Natural language utterance; structured output representation (constituency parse tree, dependency relations, formal meaning representation)
| null |
Real task examples (e.g. GitHub issues), Modified from another benchmark (e.g. translation into another language)
|
train: 5500
|
Yes
|
dataset type, output formalism, evaluation metric
|
Convenience sample (creators found a set of tasks that was readily accessible), Specific criteria (items were taken from a larger set based on specified rules)
|
Free response (e.g. summary paragraph, executable code), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Industry
|
Yes
| null | null |
Test, Train
|
train:5500, dev: 550
| null |
Simple Mean
|
Yes
|
Data resource settings; parsing datasets; constraint settings
| null |
https://github.com/microsoft/semantic_parsing_with_constrained_lm
| null |
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
No
| null |
Mean scores across different data splits and standard deviation for low-resource settings
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people), Representative task (e.g. answering medical licensing exam questions)
|
While not all examples are derived from real user interactions, they are designed to represent realistic use cases for parsing.
|
Composite phenomenon
|
Yes
| null |
Yes
|
NLP
|
Understanding
| null |
['Real task', 'Another benchmark']
|
['Convenience', 'Criterion']
|
['Free response', 'Structured']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial', 'Representative']
|
['Mean', 'Std']
|
ryanRevisitingNonEnglishText2023
|
Revisiting non-English Text Simplification: A Unified Multilingual Benchmark
|
Include
| null | null |
MULTI-SIM is a benchmark for multilingual text simplification containing complex-simple sentence pairs across 12 languages. The authors show improvements from multilingual training for non-English languages and strong performance of Russian for cross-lingual transfer.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
automatic text simplification, multilingual
|
Yes
|
"Automatic text simplification (ATS) is the task of reducing the complexity of a text without chang- ing its original content and meaning"
|
Comprehensive
| null |
Automatic Text Simplification transforms complex sentences into simpler versions that maintain the original meaning but reduce linguistic complexity.
|
Pair of sentences: a complex sentence and its corresponding simpler version that preserves the original meaning.
| null |
Real task examples (e.g. GitHub issues), Expert-crafted task examples (e.g. hand-written examples), Modified from another benchmark (e.g. translation into another language)
|
train: 653,468
|
Yes
|
language, script, domain, collection approach, simplification type
|
Convenience sample (creators found a set of tasks that was readily accessible), Specific criteria (items were taken from a larger set based on specified rules)
|
Free response (e.g. summary paragraph, executable code)
|
n-gram (BLEU, ROUGE, chrF), Human ratings (text quality, preference, NOT manual scoring of other metrics)
| null | null |
Academia
|
Yes
| null | null |
Test, Train
|
test: 6,306; dev: 6,728
| null |
Simple Mean
|
Yes
|
Scores for each language dataset separately, single-dataset fine-tuning, joint language fine-tuning, zero-shot cross-lingual transfer
| null |
https://github.com/XenonMolecule/MultiSim
|
MULTI-SIM
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
Yes
|
Yes
|
Yes
|
(1) human evaluation to assess the quality of model outputs on their benchmark;
(2) inter-annotator agreement using Krippendorff's alpha; (3) analysis of corpus statistics to understand dataset quality; (4) acknowledgement of limitations: "we cannot guarantee the quality of each resource or validate the methods that the original authors used to create them"
|
Automatic Evaluation Metrics (SARI; BLEU); Measure inter-annotator agreement using Krippendorff's alpha
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
|
Sentence pairs come from real-world simplification efforts intended for actual audiences with lower literacy levels.
|
Composite phenomenon
|
No
| null |
Yes
|
NLP
|
Summarization
| null |
['Real task', 'Expert-crafted', 'Another benchmark']
|
['Convenience', 'Criterion']
|
['Free response']
|
['Soft match', 'Human ratings']
|
['Contested']
|
['Yes']
|
['Yes']
|
['Comparison made']
|
['Yes']
|
['Partial']
|
['Mean']
|
siREADINChineseMultitask2023
|
READIN: A Chinese Multi-Task Benchmark with Realistic and Diverse Input Noises
|
Include
| null | null |
READIN: a Chinese multi-task benchmark with REalistic And Diverse Input Noises. READIN contains four diverse tasks and requests annotators to re-enter the original test data with two commonly used Chinese input methods: Pinyin input and speech input.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Robustness to realistic input noises
|
Yes
|
Robustness to "realistic and diverse input noises" in Chinese NLP, specifically focusing on user-generated inputs in real-world applications.
|
Subset
| null |
Comparing performance on clean data versus data with keyboard and speech input noises.
|
Clean test example from an existing Chinese NLP dataset paired with multiple noisy versions of the same example, along with the original expected output.
| null |
Crowd-sourced task examples (e.g. Prolific-created tasks), Modified from another benchmark (e.g. translation into another language)
|
train: 34,371
|
Yes
|
Input noise type, error rate, annotator information, original task type
|
Convenience sample (creators found a set of tasks that was readily accessible), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Short free response (e.g. single word or number), Free response (e.g. summary paragraph, executable code), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF)
| null | null |
Academia
|
Yes
| null | null |
Test, Train, Validation
|
dev: 4,430; test: 8,570
|
Response format varies depending on the task
|
Micro-Average, Worst-Average
|
Yes
|
input noise type (keyboard/speech)
| null |
https://github.com/thunlp/READIN
|
READIN
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
The benchmark is itself realistic
|
No
|
Yes
|
(1) Human evaluations showing plausibility of crowd-sourced noises; (2) Diversity analysis of collected noises; (3) Character-level error rates to quantify noise in the test sets; (4) Qualitative case studies
|
Micro-Average, Worst-Average
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
|
Simulates real-world user input scenarios where people make typing errors or speech recognition systems misinterpret accented speech.
|
Composite phenomenon
|
Yes
| null |
No
|
Language Modelling
|
Robustness
| null |
['Crowd-sourced', 'Another benchmark']
|
['Convenience', 'Criterion']
|
['Multiple choice', 'Short free response', 'Free response', 'Structured']
|
['Exact match', 'Soft match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Partial']
|
['Mean']
|
schwettmannFINDFunctionDescription2023
|
FIND: A Function Description Benchmark for Evaluating Interpretability Methods
|
Include
| null |
Borderline. Evaluates LLM-based automated interpretability methods. Ultimately it does evaluate capabilities of LLMs, so it is included.
|
Evaluating the ability of LLMs as interpretability agents, as a proxy for how well they might perform in automated interpretability pipelines, i.e., whether LLMs can recover functions from input/output data alone.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Automated interpretability capabilities
|
No
| null |
Comprehensive
|
Very broad phenomenon: "evaluating automated interpretability methods".
|
Recover the details of a program (e.g. a maths or string program) from examples of just its inputs and outputs.
|
A ground truth function and an operator that might add some noise to the data. Operationalised in an agentic way where the model can call the function many times and examine its behaviour.
|
A very simplified version of the real-life phenomenon.
|
Procedurally-generated task examples (e.g. Creating instances from a template)
| null |
Yes
|
Type of function (numeric, string, and synthetic neural modules)
|
Random sample (creators defined a task space and sampled from it), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics), Execution-based (unit tests)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Function type (numeric, string, neural modules)
| null |
https://github.com/multimodal-interpretability/FIND
|
FIND
|
Contested
|
Highly simplified version of the phenomena
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
Yes
|
Discuss that the benchmark is a test of "necessary, but not sufficient, capabilities for automated interpretation." "The ultimate test of these interpretation methods’ effectiveness must be their ability to generate actionable insights about real models, which FIND does not evaluate."
|
Mean
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
|
2,275
|
Single cohesive phenomenon
|
Not applicable
| null |
No
|
Reasoning
| null | null |
['Procedurally-generated']
|
['Random', 'Targeted']
|
['Free response']
|
['Exact match', 'LLM-as-a-Judge', 'Reward']
|
['Contested']
|
['Partially']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Mean']
|
zuoPatentEvalUnderstandingErrors2024
|
PatentEval: Understanding Errors in Patent Generation
|
Include
| null | null |
PatentEval is a benchmark annotated by human experts, tailored for assessing language models of different sizes and capacities. It includes pairwise comparisons and a detailed analysis of error types in each output.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
patent text evaluation
|
Yes
|
Ability to generate high-quality patent text for two distinct tasks in machine-generated patent texts: claims-to-abstract generation, and the generation of the next claim given previous ones.
|
Subset
| null |
"Evaluating two distinct tasks in machine-generated patent texts: claims-to-abstract generation, and the generation of the next claim given previous ones"
|
Patent claims as input, paired with two outputs (machine-generated or human-written claims) that are evaluated through pairwise comparison by human experts.
| null |
Real task examples (e.g. GitHub issues)
|
400
|
Yes
|
patent domain, pairwise comparison, claim dependency
|
Specific criteria (items were taken from a larger set based on specified rules)
|
Free response (e.g. summary paragraph, executable code)
|
Human ratings (text quality, preference, NOT manual scoring of other metrics)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Win and Draw Rates
|
Yes
|
Error types, model, domain
| null |
https://github.com/ZoeYou/PatentEval
|
PatentEval
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
Yes
|
(1) Expert annotation from patent lawyers; (2) Correlation analysis between human judgments and automated metrics; (3) Statistical evaluation of pairwise annotations; (4) Detailed error analysis
|
Win and draw rates from pairwise comparisons. For automated metrics and human judgment evaluation: Kendall's Tau
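A minimal sketch of the correlation step, assuming paired lists of automatic-metric scores and human judgments for the same outputs; the names are illustrative:

```python
from scipy.stats import kendalltau

def metric_human_agreement(metric_scores, human_scores):
    """Kendall's tau between automatic metric scores and human judgments;
    higher tau indicates stronger rank agreement."""
    tau, p_value = kendalltau(metric_scores, human_scores)
    return tau, p_value
```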
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
|
Real subtasks within the patent drafting process, though they don't cover the complete patent drafting workflow.
|
Composite phenomenon
|
Yes
| null |
No
|
Law
| null | null |
['Real task']
|
['Criterion']
|
['Free response']
|
['Human ratings']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Partial']
|
['Mean', 'Other']
|
yinNaturalLanguageCode2023
|
Natural Language to Code Generation in Interactive Data Science Notebooks
|
Include
| null | null |
A benchmark for data science tasks (natural language to code) in Jupyter notebooks, featuring "realistic NL intents, rich notebook context, and a series of interrelated problems".
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Data science code generation
|
Yes
|
Clearly defines what a computational notebook is and that the aim of the phenomenon is to generate code for the next cell that satisfies the user's intent. Scope of intended code falls within "data wrangling" or "EDA"
|
Comprehensive
| null |
Generate code to fulfil the user intent for a specific cell, provided with the notebook history in-context. Code length is usually ~1-3 lines (pretty basic)
|
A natural language question specifying the user intent for the following cell.
|
All tasks involve pandas manipulations.
|
Real task examples (e.g. GitHub issues), Expert-crafted task examples (e.g. hand-written examples)
|
1,078
|
Yes
|
Data source (existing task vs newly created task)
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall), Execution-based evaluation (unit tests)
| null | null |
Industry
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Task source (existing task vs new task)
|
pass@k (any correct answer in k trials)
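The paper reports pass@k; a minimal sketch of the standard unbiased estimator (n generations per problem, c of them correct), which may differ in detail from the authors' implementation:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k samples drawn from n generations
    (c of which are correct) passes the tests."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```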
|
https://github.com/google-research/arcade-nl2code/
|
ARCADE
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
The benchmark is itself realistic
|
No
|
Yes
|
- All tasks use pandas, which is representative of real data science notebooks but still not full coverage of the phenomenon (e.g., plotting problems are not considered).
|
Mean, error bars on figures in appendix.
|
Outputs alone
|
Complete real task (e.g. providing medical advice to real people interactively), Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Not applicable
| null |
Yes
|
Code Generation
| null | null |
['Real task', 'Expert-crafted']
|
['Convenience']
|
['Structured']
|
['Exact match', 'Reward']
|
['Contested']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Complete', 'Partial']
|
['Mean', 'Std']
|
zhangCABComprehensiveAttention2023
|
Quantifying Adaptability in Pre-trained Language Models with 500 Tasks
|
Include
| null | null |
We present a large-scale empirical study on LM adaptability using TASKBENCH500, a benchmark of 500 procedurally generated sequence modeling tasks.
We evaluate three facets of adaptability, finding that: (1) adaptation methods vary in memorizing small datasets; (2) some show compositional adaptability to complex tasks; and (3) label distribution mismatches arise from differences in intrinsic label difficulty.
Our results show that adaptability to new tasks can be systematically analyzed, similar to generalization.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
LM Adaptability
|
Yes
|
Adaptability: 'Adapting pre-trained language models (LMs) by finetuning their parameters or input prompts for downstream tasks' (Page 1)
|
Comprehensive
| null |
They define a set of atomic tasks, which are combined using a set of composition functions to produce more complex tasks.
The atomic tasks include lexical tasks, factual tasks, and random relation tasks. The composition functions are word-level and sequential compositions.
|
For each task f, they construct a dataset D(f) = {(x_i, y_i)}, where x_i is sampled from the task's input distribution (e.g., most common words), and y_i is uniformly sampled from the set of valid outputs f(x_i), i.e., y_i ~ Unif(f(x_i)).
|
For evaluation, they measure the model’s average per-token accuracy on both training and test splits of the dataset D(f).
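A minimal sketch of per-token accuracy for a single example, assuming the prediction and reference are already aligned token lists; the paper's exact tokenization may differ:

```python
def per_token_accuracy(pred_tokens, gold_tokens):
    """Fraction of positions where the predicted token matches the reference."""
    assert len(pred_tokens) == len(gold_tokens)
    correct = sum(p == g for p, g in zip(pred_tokens, gold_tokens))
    return correct / len(gold_tokens)
```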
|
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions)
| null |
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Short free response (e.g. single word or number), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null |
Every task takes as input a word or word sequence, and outputs either a boolean value or a set of words/word sequences.
|
Simple Mean
|
No
| null | null |
https://github.com/facebookresearch/task_bench
|
TASKBENCH500
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
Yes
|
The authors explicitly justify their synthetic task generation approach as a more controlled, interpretable, and generalizable way to study model adaptability, enabling them to identify which task attributes make learning easier or more difficult, something that is difficult to study with existing real-world datasets.
'For benchmarks built from collections of real-world datasets, the makeup and difficulty of these datasets is often difficult to characterize precisely: differences in annotation standards, annotation quality, and dataset size mean that models often exhibit very different performance on datasets designed to evaluate model performance on the same abstract task.' (Page 3)
| null |
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
No
| null |
No
|
NLP
|
Long Context
| null |
['Author-crafted']
|
['Convenience', 'Targeted']
|
['Short free response', 'Structured']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
| null |
kumarVisionlanguageModelsUnderstand2024
|
Do Vision-Language Models Understand Compound Nouns?
|
Include
| null | null |
We curate Compun, a novel benchmark with 400 unique and commonly used Compound Nouns (CN), to evaluate the effectiveness of VLMs in interpreting CNs.
We perform an in-depth analysis to highlight CLIPs’ limited understanding of certain types of CNs.
We present an alternative framework that moves beyond hand-written templates for text prompts widely used by CLIP-like models. We employ a Large Language Model to generate multiple diverse captions that include the CN as an object in the scene described by the caption. Our proposed method improves CN understanding of CLIP by 8.25% on Compun.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
VLMs' ability to understand compound nouns
|
Yes
|
A compound noun (CN) is a noun formed from two or more words combined to create a single noun with a new meaning.
Interpreting the meaning of CNs by decoding the implicit semantic relation between their constituent nouns has attracted interest in NLP for decades.
Though extensively studied in NLP, whether modern vision-language models (VLMs) understand CNs is under-explored. Their paper fills in this gap.
|
Comprehensive
| null |
Each instance in Compun corresponds to a unique compound noun and includes one image representing the compound noun (CN), along with two additional distractor images. These distractor images depict the individual constituent nouns that form the CN.
Given the class name (or the CN), the task of a VLM is to retrieve (or select) the correct image among the distractors.
|
A unique compound noun, one image representing the compound noun (CN), two additional distractor images that depict the individual constituent nouns that form the CN.
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions)
|
400
|
No
| null |
Random sample (creators defined a task space and sampled from it)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null |
Multiple choice, in that the VLM picks one image out of three options.
| null |
No
| null | null |
https://github.com/sonalkum/Compun
|
Compun
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null | null |
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Single cohesive phenomenon
|
No
| null |
No
|
NLP
|
Understanding
| null |
['Author-crafted']
|
['Random']
|
['Multiple choice']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Representative']
| null |
asaiBUFFETBenchmarkingLarge2024
|
BUFFET: Benchmarking Large Language Models for Few-shot Cross-lingual Transfer
|
Include
| null | null | null | null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Few-shot Cross-lingual Transfer
|
Yes
|
Few-shot cross-lingual transfer is defined as the ability to adapt models to a task in a new language using a limited number of training data in the target language.
|
Comprehensive
| null |
Converting all tasks into a unified text-to-text format, where models must generate appropriate outputs given inputs with k-shot examples in target languages.
|
Instruction, k-shot training and validation examples, test examples (input text and expected output)
| null |
Modified from another benchmark (e.g. translation into another language)
| null |
Yes
|
language, task type, data curation method, output format, languages per task
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Short free response (e.g. single word or number), Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null | null | null | null |
hierarchical averaging
|
Yes
|
language, dataset, task type, resource level, fine-tuning/ICL
| null |
https://huggingface.co/datasets/BuffetFS/BUFFET
|
BUFFET
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
Dataset score is calculated as a macro-average of the per-language score.
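A minimal sketch of the macro-average, assuming a dict of per-language scores for one dataset (illustrative only):

```python
def dataset_score(per_language_scores: dict) -> float:
    """Macro-average: each language contributes equally, regardless of test size."""
    return sum(per_language_scores.values()) / len(per_language_scores)

# e.g. dataset_score({"sw": 0.41, "hi": 0.55, "fi": 0.62})
```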
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null |
Yes
|
Language Modelling
|
In-context Learning
|
Multilinguality
|
['Another benchmark']
|
['Targeted', 'Criterion']
|
['Multiple choice', 'Short free response', 'Free response']
|
['Exact match', 'Soft match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Representative']
|
['Mean']
|
hareshClevrSkillsCompositionalLanguage2024
|
ClevrSkills: Compositional Language And Visual Reasoning in Robotics
|
Include
| null | null |
We ask the question: if the models are taught the low-level capabilities, can they compose them in novel ways to achieve high-level tasks like cleaning the table without having to be explicitly taught so?
To this end, we present ClevrSkills - a benchmark suite for compositional reasoning in robotics. The dataset contains trajectories generated on a range of robotics tasks with language and visual annotations as well as multi-modal prompts as task specification.
We benchmark multiple different VLM baselines on ClevrSkills and show that even after being pre-trained on many tasks, these models fail on compositional reasoning in robotics tasks.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Compositional reasoning/generalization in robotics
|
Yes
|
'Compositional generalization is a hallmark feature of human intelligence. Unlike any other animals, humans can receive instructions in natural language and successfully perform previously unseen tasks with minimal to no task-specific learning or adaptation.' (Page 1)
| null | null |
In ClevrSkills, they benchmark robotic models on a set of simple manipulation tasks, such as pick, place, throw, touch and push, and evaluate their ability to generalize to complex tasks based on these low-level capabilities.
Tasks are organized into three levels (L0 → L1 → L2), where higher-level tasks build on skills from lower levels to assess compositional reasoning.
|
A task prompt (plain text or multi-modal), a sequence of action labels (skill traces and language annotations), corresponding RGB observations from multiple camera views, key visual steps, and dense rewards over time.
It represents one complete trajectory for solving a specific manipulation task.
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions)
|
330k trajectories on 33 tasks
|
Yes
|
For each of the 330k trajectories, it contains many types of annotation, including language, action classes, bounding boxes for objects, visibility annotations, key-steps, rewards (for offline RL), camera parameters and more.
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Extended interaction (e.g. conversation, calling an API and processing the response)
|
Exact Match (accuracy, F1, precision, recall)
|
Per-task success rates
| null |
Industry
|
Yes
| null | null |
Test
| null |
A series of robot actions
|
Simple Mean
|
Yes
|
Success rates on L0, L1 and L2 tasks
| null |
https://github.com/Qualcomm-AI-research/ClevrSkills
|
ClevrSkills
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null | null |
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Composite phenomenon
|
Yes
| null |
No
|
Reasoning
|
Compositional
| null |
['Author-crafted']
|
['Targeted']
|
['Interaction']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
| null |
hanMedSafetyBenchEvaluatingImproving2024
|
MedSafetyBench: Evaluating and Improving the Medical Safety of Large Language Models
|
Include
| null | null |
The paper introduces the first benchmark dataset designed to measure the medical safety of LLMs. It uses the dataset to evaluate and improve the medical safety of LLMs using fine-tuning.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
medical safety
|
Yes
|
"we define an LLM to be aligned with medical safety standards if its output is not only accurate but also consistent with the AMA (American Medical Association)'s Principles of Medical Ethics." (page 3)
|
Comprehensive
| null |
They prompt LLMs with harmful medical requests and evaluate the harmfulness of their responses
|
harmful request, category of harm
| null |
LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
900
|
Yes
|
principle violated
|
Random sample (creators defined a task space and sampled from it)
|
Free response (e.g. summary paragraph, executable code)
|
LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics)
| null |
They validate the LLM-generated dataset by conducting a user study with 25 doctors to check that the generated prompts violate one of the nine principles of medical ethics.
|
Academia
|
Yes
| null | null |
Test, Train
|
Train: 900
| null |
Simple Mean
|
Yes
|
By LLM used to generate the prompts
| null |
https://github.com/AI4LIFE-GROUP/med-safety-bench
|
MedSafetyBench
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
| null |
They indirectly address it by noting in their discussion: "In practice, one could consider introducing nuance to the definition. For example, levels of acceptable risk may vary among medical subspecialties (e.g., emergency medicine vs. neurological surgery vs. dermatology) and based on a patient’s condition and personal preference (e.g., a patient with a condition that has no established treatment options may be more willing to try risky experimental procedures). Aligning LLMs to account for different levels of acceptable risk and be tailored to different medical subspecialties is a future research direction"
|
simple mean, standard error of the mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Composite phenomenon
|
Yes
| null |
No
|
Medicine
| null | null |
['LLM-generated']
|
['Random']
|
['Free response']
|
['LLM-as-a-Judge']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['']
|
['Partial']
|
['Mean']
|
shivagundeLargerProbesTell2023
|
Larger Probes Tell a Different Story: Extending Psycholinguistic Datasets Via In-Context Learning
|
Include
| null | null |
Psycholinguistic datasets for negation and role reversal, which extend existing smaller benchmarks using GPT-3. Evaluation of multiple LMs on these extended benchmarks reveals performance drops.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
negation, role reversal
|
Yes
|
Negation sensitivity (LMs' ability to understand negation);
Role reversal (ability to understand reversing semantic roles)
|
Subset
| null |
Accurately predicting masked/target words in constructed sentence pairs under negation and role reversal.
| null |
Pair of sentences, e.g., an affirmative sentence and its negated counterpart.
|
Modified from another benchmark (e.g. translation into another language), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
3000
|
No
| null |
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall), Distribution (perplexity, calibration, correlation)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
negation, role reversal, models, dataset versions
|
pass@k (any correct answer in k trials)
|
https://github.com/text-machine-lab/extending_psycholinguistic_dataset
|
NEG-1500-SIMP, ROLE-1500
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
Yes
|
Yes
|
"We validated our new datasets through human evaluation. For this, we randomly selected 100 samples from each of the extended datasets"
|
Simple means, McNemar test, Minimum detectable effect (MDE)
|
Model access required (e.g. logits)
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null |
No
|
NLP
|
Understanding
| null |
['Another benchmark', 'LLM-generated']
|
['Targeted', 'Criterion']
|
['Short free response']
|
['Exact match', 'Distribution']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative']
|
['Mean', 'Other']
|
wuSmartPlayBenchmarkLLMs2024
|
SMARTPLAY : A BENCHMARK FOR LLMS AS INTELLIGENT AGENTS
|
Include
| null | null |
This paper introduces SmartPlay, a benchmark for assessing LLMs as intelligent agents using 6 different games, including Rock-Paper-Scissors, Tower of Hanoi, and Minecraft. Each game features a unique setting, providing up to 20 evaluation settings and infinite environment variations. Each game in SmartPlay uniquely challenges a subset of 9 important capabilities of an intelligent LLM agent, including reasoning with object dependencies, planning ahead, spatial reasoning, learning from history, and understanding randomness.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
LLMs as intelligent agents
|
No
|
While the authors do not explicitly define intelligent agents, they describe key properties that the target phenomenon aims to capture: LLM agents as systems capable of long-horizon planning, probabilistic reasoning, spatial reasoning to understand the 3D world, and learning from interactions or mistakes. This is further decomposed into 9 measurable abilities: long-text understanding, reasoning, instruction following, planning, generalization, understanding the odds, learning from interactions, error/mistake handling, and spatial reasoning.
|
Comprehensive
| null |
An LLM is provided with environment-specific inputs, either textual descriptions or visual descriptions rendered in natural language, along with manuals containing background knowledge, rules, and examples. The LLM must then interact with the environment by selecting actions from a predefined action space to achieve task objectives across multiple trials or rollouts.
|
A task item is a description of a game, the rules, actions, environment and expected behaviour.
|
A task item consists of a description of the game, its rules, available actions, environment state, and the expected agent behavior.
|
Real task examples (e.g. GitHub issues)
|
6 games with 20 different evaluation settings
|
Yes
|
Each task item is annotated with the abilities required i.e. long text understanding, reasoning, instruction following, planning, generalization, understanding the odds, learning from interactions, error/mistake handling and spatial reasoning.
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Extended interaction (e.g. conversation, calling an API and processing the response)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Across each game in the task
| null |
https://github.com/microsoft/SmartPlay
|
SMARTPLAY
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
Yes
|
Somewhat
|
The authors use real-world-inspired games to evaluate LLM capabilities as intelligent agents, but the extent to which success in these environments generalizes to real-world agentic behavior remains an open question.
|
Scores are normalised relative to human performance.
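A minimal sketch of one plausible normalization, assuming per-game human and random baselines; the exact scheme used by SmartPlay may differ:

```python
def human_normalized(model_score, human_score, random_score=0.0):
    """Scale a raw game score so that the random baseline maps to 0 and
    human performance maps to 1 (assumes human_score > random_score)."""
    return (model_score - random_score) / (human_score - random_score)
```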
|
Outputs alone
|
Complete real task (e.g. providing medical advice to real people interactively)
| null |
Composite phenomenon
|
Yes
| null |
No
|
Agents
| null | null |
['Real task']
|
['Convenience']
|
['Interaction']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Complete']
|
['Other']
|
liWMDPBenchmarkMeasuring2024
|
The WMDP Benchmark: Measuring and Reducing Malicious Use with Unlearning
|
Include
| null | null |
The paper introduces a benchmark with questions that serve as a proxy measurement of hazardous knowledge in biosecurity, cybersecurity, and chemical security. It also introduces a state-of-the-art unlearning method to reduce model performance on the benchmark.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
hazardous capabilities related to biosecurity, cybersecurity, and chemistry knowledge
|
No
|
Knowledge of biosecurity, cybersecurity, and chemistry that could be hazardous
|
Subset
| null |
MCQ questions with "knowledge that is a precursor, neighbor, or component of the hazardous knowledge we wish to remove" (page 4)
|
MCQ question, options, correct answer
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Expert-crafted task examples (e.g. hand-written examples)
|
3,668
|
Yes
|
topic area (biology, cyber, chemistry)
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null | null | null | null |
Simple Mean
|
Yes
|
topic area (bio, cyber)
| null |
https://huggingface.co/datasets/cais/wmdp
|
The WMDP Benchmark
|
Contested
|
No
|
Yes
|
No
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null |
No
|
Alignment
|
Safety
| null |
['Author-crafted', 'Expert-crafted']
|
['Targeted']
|
['Multiple choice']
|
['Exact match']
|
['Contested']
|
['No']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Representative']
|
['Mean']
|
renValueBenchComprehensivelyEvaluating2024
|
ValueBench: Towards Comprehensively Evaluating Value Orientations and Understanding of Large Language Models
|
Include
| null | null |
This paper introduces ValueBench, a benchmark for evaluating value orientations and value understanding in LLMs, grounded in realistic human-AI interactions to test for value orientations, along with new tasks for evaluating value understanding in an open-ended value space.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Value orientation and value understanding
|
Yes
|
Values are concepts or beliefs about desirable end states or behaviors that transcend specific situations. Value Orientation is the extent to which an LLM exhibits preferences or inclinations toward specific human values — i.e., how aligned its responses are with particular value statements or stances. Value Understanding is the extent to which an LLM can recognise, interpret, and reason about human values — including identifying relationships between values, inferring values from behaviours or statements, and generating expressions that reflect particular values.
|
Comprehensive
| null |
In value orientation, an LLM is given a value-laden question converted from psychometric statements and asked to respond with advice. In value understanding, an LLM is prompted to identify relevant values on both positive and negative samples. For each value pair, the LLMs are required to sequentially output the definition of both values, a brief explanation of their relationship, the corresponding relationship label, and a final assessment of relevance (1 if relevant and 0 otherwise). An LLM can also be presented with a value name and required to generate a value-reflecting statement, or with a pair of values and required to identify their semantic relationship.
|
For value orientation, a single task item is a value statement rephrased as an advice-seeking question. For value understanding, task items can be a behavioural statement, a value name, or a pair of values.
| null |
Real task examples (e.g. GitHub issues)
|
44 psychometric inventories, 453 value dimensions and 1989 value orientation questions
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall), LLM post-processing (extracting answers, reformatting for automated scoring)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Across each value orientation and subtask dataset
| null |
https://github.com/Value4AI/ValueBench
|
ValueBench
|
Widely-agreed
|
Yes
|
No
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
Yes
|
The benchmark contains questions sourced from psychometric analysis tests and shows correlations in the results for tests that examine similar behaviours.
| null |
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Composite phenomenon
|
Yes
| null |
No
|
Alignment
|
Alignment
| null |
['Real task']
|
['Convenience']
|
['Short free response']
|
['Exact match', 'LLM post-processing']
|
['Widely-agreed']
|
['Yes']
|
['No']
|
['No comparison made']
|
['Yes']
|
['Partial']
| null |
wangDecodingTrustComprehensiveAssessment2023
|
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
|
Include
| null | null |
The paper introduces a comprehensive trustworthiness evaluation for large language models with a focus on GPT-4 and GPT-3.5. The benchmark considers diverse perspectives including toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Trustworthiness
|
No
|
"toxicity, stereotype bias, adversarial robustness, out-of-distribution robustness, robustness on adversarial demonstrations, privacy, machine ethics, and fairness." (page 2)
|
Comprehensive
| null |
Presenting LLMs with different scenarios that invoke one of the subsets of trustworthiness (e.g., stereotypical input) and assessing output.
|
Varies by task (the benchmark includes many tasks and datasets) but seems to include user prompt, system prompt, choices (when prompts are MCQ)
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language), LLM-generated task examples (e.g. Filtered from responses to a prompt)
| null |
Yes
|
system prompt, prompt template
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice, Short free response (e.g. single word or number), Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), Distribution (perplexity, calibration, correlation)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null | null | null | null |
Simple Mean, Weighted Mean
|
Yes
|
different system prompts, normal vs challenging prompts, different adversarial demonstrations, different adversarial text generation strategies, different demographic groups and stereotype topics, different types of PII, different sensitive attributes
|
pass@k (any correct answer in k trials)
|
https://github.com/AI-secure/DecodingTrust
|
DecodingTrust
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
indirectly address it
|
In the limitations section: "Subjectivity. Trustworthiness perspectives such as toxicity, stereotype bias, machine ethics, and fairness involve subjectivity and should be human-centric in their definitions and evaluations. Here we aim to provide our objective observations, and leave the analysis of how these metrics are aligned with human as an important future work to explore model behaviors based on human understanding"
|
simple mean, weighted mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people), Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
|
They provide the total # of prompts used for each trustworthiness subset (in the context of calculating computational costs) but since they run a lot of variations (e.g., different system prompts), it's unclear what the size of the bare benchmark is. This is in Appendix K.
|
Yes
|
Alignment
|
Alignment
| null |
['Author-crafted', 'Another benchmark', 'LLM-generated']
|
['Targeted']
|
['Multiple choice', 'Short free response', 'Free response']
|
['Exact match', 'Distribution']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Partial', 'Representative']
|
['Mean']
|
chakrabortyCounterTuringTest2023
|
Counter Turing Test (CT2): AI-Generated Text Detection is Not as Easy as You May Think – Introducing AI Detectability Index
|
Include
| null | null |
Counter Turing Test (CT2) is a benchmark to evaluate the robustness of AI-generated text detection techniques. The AI Detectability Index (ADI) is a metric to rank LLMs according to how detectable their outputs are as machine-generated versus human-written.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
AI-generated text detection
|
Yes
|
"If a work’s traditional elements of authorship were produced by a machine, the work lacks human authorship", Detectability of AI-generated text
|
Comprehensive
| null |
The Counter Turing Test (CT2) evaluates the robustness of AI-generated text detection techniques: (i) watermarking, (ii) perplexity estimation, (iii) burstiness estimation, (iv) negative log-likelihood curvature, (v) stylometric variation, and (vi) classifier-based approaches.
|
Pair of texts on the same topic (one human-written and one AI-generated).
| null |
Real task examples (e.g. GitHub issues), Procedurally-generated task examples (e.g. Creating instances from a template)
|
100000
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall), Distribution (perplexity, calibration, correlation)
| null | null |
Mix (multiple authors from industry and academia)
|
No, no link is provided
| null | null |
Test
| null | null |
Formula combining perplexity and burstiness
|
Yes
|
Detection method, LLM used
| null | null |
CT2
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
Yes
|
No
| null |
mean, standard deviation, entropy calculations, z-scores, p-values, bootstrapping, Le Cam's lemma, multiplicative damping factors
|
Model access required (e.g. logits)
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null |
No
|
NLP
|
Detection
| null |
['Real task', 'Procedurally-generated']
|
['Convenience', 'Criterion']
|
['Multiple choice']
|
['Exact match', 'Distribution']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Representative']
|
['Mean', 'Std', 'Tests', 'Other']
|
tuWaterBenchHolisticEvaluation2024
|
WaterBench: Towards Holistic Evaluation of Watermarks for Large Language Models
|
Include
| null | null |
WaterBench is a benchmark for evaluating LLM watermarks across detection and generation quality. The paper also presents a hyper-parameter search method to control watermarking strength, and automatic evaluation using GPT4-Judge. The dataset is publicly available and human-validated.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
LLM watermarks
|
Yes
|
A watermarked LLM generates texts "with a biased distribution of tokens, which distinguishes it from unwatermarked texts, ... , the goal of watermarking is to achieve high detection accuracy while maintaining the generation quality" (1517).
|
Comprehensive
| null |
WaterBench consists of 9 tasks with 5 unique task settings, spanning "a wide range of input and output length" (1520). The first setting is Short Input, Short Answer, and has two tasks to evaluate factual knowledge. The second setting is Short Input, Long Answer, with two Long-form QA tasks. The third category is Long Input, Short Answer, with reasoning and code completion tasks. The fourth setting is Long Input, Long Answer, with two summarization tasks. The last setting is open-ended generation, where the task is instruction-following.
|
Each task is sourced from a different dataset and has its own features.
| null |
Real task examples (e.g. GitHub issues), Modified from another benchmark (e.g. translation into another language)
|
2405
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number), Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall), Human ratings (text quality, preference, NOT manual scoring of other metrics), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics), Generation Metric and Generation Quality Drop are never explicitly defined in the paper.
|
The paper defines watermarking strength as the true positive rate, to ensure all watermarks are of similar intensity during evaluation. WaterBench also uses GPT4-Judge, which "measures which model's output the GPT-4 system prefers when shown two responses for the same instruction" (1521). The paper reports the "True Positive Rate, True Negative Rate, Generation Metric, and Generation Quality Drop for all tasks" (1523). 100 responses are also randomly sampled for human annotation.
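A minimal sketch of the detection-rate side of this evaluation, assuming boolean detector outputs on watermarked and unwatermarked texts; the names are illustrative:

```python
def detection_rates(flags_on_watermarked, flags_on_unwatermarked):
    """True positive rate on watermarked texts and true negative rate on
    unwatermarked texts, given detector decisions (True = 'watermarked')."""
    tpr = sum(flags_on_watermarked) / len(flags_on_watermarked)
    tnr = sum(not f for f in flags_on_unwatermarked) / len(flags_on_unwatermarked)
    return tpr, tnr
```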
|
Each task is sourced from a different dataset.
|
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
No
| null | null |
https://github.com/THU-KEG/WaterBench
|
WaterBench
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
Yes
|
The authors highlight that watermark evaluation must "evaluate the generation and detection" methods and ensure fair comparisons between watermarking methods (1517). Additionally, the authors highlight that the tasks must be diverse, go beyond "text completion," and measure "generation quality" and alignment with "human preferences" (1517-1518).
|
True Positive Rate, True Negative Rate, Generation Metric and Generation Quality Drop
|
Outputs alone
|
Complete real task (e.g. providing medical advice to real people interactively)
| null |
Single cohesive phenomenon
|
No
| null |
No
|
NLP
|
Detection
| null |
['Real task', 'Another benchmark']
|
['Convenience', 'Targeted', 'Criterion']
|
['Short free response', 'Free response']
|
['Exact match', 'Human ratings', 'LLM-as-a-Judge']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Complete']
|
['Mean']
|
luoMMMRSMultimodalMultiGSD2024
|
MMM-RS: A Multi-modal, Multi-GSD, Multi-scene Remote Sensing Dataset and Benchmark for Text-to-Image Generation
|
Include
| null | null |
MMM-RS is a large, multi-modal multi-GSD, and multi-scene remote sensing text-to-image generation benchmark. It is publicly available, aggregated and filtered from existing datasets, and contains information-rich captions.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
remote sensing text-to-image generation
|
No
|
Remote sensing image generation is the ability of a multimodal model to generate high-quality remote sensing images from a text prompt.
|
Comprehensive
| null |
A multimodal model is given an information-rich text prompt and must generate the described remote sensing image.
|
A remote sensing image and an information-rich text prompt specific to its image modality. For example, the prompt may contain satellite type, weather type, category, resolution, subject, etc.
| null |
Modified from another benchmark (e.g. translation into another language), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
2,103,273
|
No
|
null
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Image
|
Distribution (perplexity, calibration, correlation)
|
The paper uses Frechet Inception Distance (FID) and Inception Score (IS).
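For reference, a minimal FID sketch in the standard formulation (squared mean difference plus a covariance term over Inception features); feature extraction with an Inception network is assumed and omitted, and this is not the authors' exact implementation.

```python
import numpy as np
from scipy import linalg

def frechet_inception_distance(feats_real, feats_gen):
    # FID between two arrays of Inception features, shape (n_samples, dim).
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary parts from numerical error
    return float(np.sum((mu_r - mu_g) ** 2) + np.trace(cov_r + cov_g - 2 * covmean))
```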
|
"The MMM-RS dataset is derived from 9 publicly available RS datasets: MRSSC2.0 [16], Inria [19], NaSC-TG2 [45], GID [28], WHU-OPT-SAR [14], HRSC2016 [40], TGRS-HRRSD [42], fMoW [5], and SEN1-2 [25]" (4). It contains images across three modalities: RGB, Synthetic Aperture Radar, and Near Infrared. Multi-scene remote sensing images are synthesized at different scales and weather conditions using physics models and multimodal models. The process is outlined in Figure 4 on Page 6.
|
Academia
|
Yes
| null | null |
Test, Train
|
An optional train set of 200,000 samples is defined.
| null |
Simple Mean
|
No
| null | null |
https://github.com/ljl5261/MMM-RS
|
MMM-RS (Multi-modal, Multi-GSD, Multi-scene Remote Sensing)
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
Yes
|
The authors highlight that a remote sensing text-to-image generation dataset should be multimodal across data and image types, and be information-rich.
|
Simple mean
|
Outputs alone
|
Complete real task (e.g. providing medical advice to real people interactively)
| null |
Single cohesive phenomenon
|
Not applicable
| null |
No
|
Grounding
| null | null |
['Another benchmark', 'LLM-generated']
|
['Targeted', 'Criterion']
|
['Free response']
|
['Distribution']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Complete']
|
['Mean']
|
devriesDUMBBenchmarkSmart2023
|
DUMB: A Benchmark for Smart Evaluation of Dutch Models
|
Include
| null | null |
DUMB is a benchmark for evaluating Dutch language models across nine tasks. The authors propose Relative Error Reduction (RER) for better cross-task comparison and evaluate pre-trained models, finding that current Dutch models underperform while identifying strategies for future model improvements.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Dutch language understanding
|
No
| null |
Subset
| null |
Word-level tasks (POS tagging, NER), word-pair tasks (word sense disambiguation, pronoun resolution), sentence-pair tasks (causal reasoning, natural language inference), and document-level tasks (sentiment analysis, abusive language detection, question answering).
|
Task-specific, e.g., input text (word, sentence, or document) with corresponding label/target output.
| null |
Modified from another benchmark (e.g. translation into another language)
|
train (sum): 283,112
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number), Free response (e.g. summary paragraph, executable code)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test, Train, Validation
|
test (sum): 23,406 ; dev (sum): 17,152
| null |
Relative Error Reduction (RER)
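A hedged sketch of Relative Error Reduction, assuming the usual form (reduction in error relative to a baseline's error); the paper's exact baseline choice is not reproduced here.

```python
def relative_error_reduction(model_score, baseline_score):
    # Scores are task accuracies/F1 in [0, 1]; errors are 1 - score.
    baseline_error = 1.0 - baseline_score
    model_error = 1.0 - model_score
    return (baseline_error - model_error) / baseline_error

# Example: baseline 0.80 (error 0.20), model 0.85 (error 0.15)
# -> RER = (0.20 - 0.15) / 0.20 = 0.25, a 25% relative error reduction.
```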
|
Yes
|
Scores for each of the nine individual tasks; grouped average scores by model type, model size, pre-training language
| null |
https://github.com/wietsedv/dumb
|
DUMB
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
No
| null |
Binomial mixed effects regression models
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions), Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
|
The tasks are standard NLP evaluation benchmarks rather than complete real-world applications, but they are relevant to real-world Dutch language processing applications.
|
Composite phenomenon
|
Yes
| null |
No
|
NLP
|
Understanding
|
Multilinguality
|
['Another benchmark']
|
['Convenience', 'Criterion']
|
['Short free response', 'Free response']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Representative', 'Constructed']
|
['Tests']
|
hsiehSugarCrepeFixingHackable2023
|
SugarCrepe: Fixing Hackable Benchmarks for Vision-Language Compositionality
|
Include
| null | null |
'We introduce SUGARCREPE, a new benchmark for vision-language compositionality evaluation.
We employ large language models, instead of rule-based templates used in previous benchmarks, to generate fluent and sensical hard negatives, and utilize an adversarial refinement mechanism to maximally reduce biases.' (abstract)
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Compositional understanding of vision-language models
|
Yes
|
'Through compositional reasoning, humans can comprehend new scenes and describe those scenes by composing known atoms.
For instance, compositionality allows people to differentiate between a photo of “a girl in white facing a man in black” and “a girl in black facing a man in white”.
Vision-language research has sought to develop models that can similarly comprehend scenes and express them through compositional language.' (page 1)
|
Comprehensive
| null |
This is formulated as an image-to-text retrieval task.
It evaluates a vision-language model’s ability to distinguish the correct caption for an image from a closely matched, compositionally altered hard negative, thereby testing its compositional understanding of visual scenes.
|
An image paired with two captions: one positive caption that correctly describes the image, one hard negative caption that is compositionally similar but incorrect.
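A minimal sketch of how such an item can be scored, assuming a CLIP-style contrastive model that embeds the image and both captions; an item counts as correct when the positive caption outscores the hard negative (the embedding model is an assumption, not specified by this record).

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def picks_positive(image_emb, pos_caption_emb, neg_caption_emb):
    # Correct if the positive caption is closer to the image than the hard negative.
    return cosine(image_emb, pos_caption_emb) > cosine(image_emb, neg_caption_emb)

def accuracy(examples):
    # examples: iterable of (image_emb, pos_caption_emb, neg_caption_emb) triples.
    hits = [picks_positive(*ex) for ex in examples]
    return sum(hits) / len(hits)
```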
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions)
|
7512
|
Yes
|
It provides the types of compositional perturbations applied to generate hard negatives. These include REPLACE-OBJ, REPLACE-ATT, REPLACE-REL, SWAP-OBJ, SWAP-ATT, ADD-OBJ, and ADD-ATT, which specify the type of atomic concept (object, attribute, relation) and the operation used (replace, swap, add).
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null |
Multiple choice: each image presents two captions (a positive and a hard negative), and the model chooses which one correctly describes the image.
| null |
Yes
|
Seven fine-grained hard negative types under three categories of REPLACE, SWAP, and ADD:
REPLACE: REPLACE-OBJ (object substitutions), REPLACE-ATT (attribute substitutions); REPLACE-REL (relation substitutions);
SWAP: SWAP-OBJ (object swaps), SWAP-ATT (attribute swaps);
ADD: ADD-OBJ (adding an object), ADD-ATT (adding an attribute).
| null |
https://github.com/RAIVNLab/sugar-crepe
|
SugarCrepe
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
Yes
|
The benchmark was proposed to address construct validity failures in prior benchmarks (e.g., ARO, CREPE), which the authors expose by showing that a blind model can succeed on them without using visual input.
They address this in SugarCrepe by improving hard negative generation with ChatGPT, followed by adversarial refinement, to ensure captions differ only in compositional meaning and to remove biases.
| null |
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Single cohesive phenomenon
|
Not applicable
| null |
No
|
Reasoning
|
Compositional
| null |
['Author-crafted']
|
['Targeted']
|
['Multiple choice']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative']
| null |