Columns (one annotation record per benchmark paper, values given in this order): bibkey, title, inclusion, exclusion_criteria, exclusion_criteria_detail, short_summary, contribution, phenomenon_short, target_phenomenon, phenomenon_defined, phenomenon_definition, definition_scope, purpose_extra, task_definition, task_item_definition, task_definition_detail, task_source, task_dataset_size, task_dataset_metadata, dataset_metadata_detail, dataset_sampling_method, response_format, metric_definition, metric_definition_detail, task_source_detail, authorship, benchmark_availability, procedural_extra, notes_extra, task_train_val, task_dataset_size_extra, response_format_detail, metric_aggregation, metric_subscores, metric_subscores_detail, metric_metascoring, benchmark_location, benchmark, phenomenon_contested, task_face_validity, metric_face_validity, result_interpretation, results_comparison, results_comparison_explanation, results_realism, results_human_baseline, results_author_validity, results_author_validity_detail, metric_statistics, metric_access, task_ecology, task_ecology_detail, definition_integrity, definition_integrity_detail, task_dataset_size_detail, metric_fewshot, phenomenon_taxonomy_root, phenomenon_taxonomy_leaf, phenomenon_taxonomy_alternate, task_source_clean, dataset_sampling_method_clean, response_format_clean, metric_definition_clean, phenomenon_contested_clean, task_face_validity_clean, metric_face_validity_clean, results_realism_clean, results_author_validity_clean, task_ecology_clean, metric_statistics_clean.

Each record below lists one paper's values for these columns in order, separated by `|`, with `null` marking empty fields.
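As a rough illustration of how records with this schema might be inspected, here is a minimal sketch assuming the table has been exported to CSV; the file name is a placeholder, not part of this dataset.

```python
# Minimal sketch: load the annotation records and slice them by column.
# Assumes a CSV export with the columns listed above; "benchmark_annotations.csv"
# is a placeholder file name.
import pandas as pd

df = pd.read_csv("benchmark_annotations.csv")

# Count included papers per top-level phenomenon taxonomy node.
print(df.groupby("phenomenon_taxonomy_root")["bibkey"].count())

# Pull a single record (e.g. the SWT-Bench row) and inspect a few cleaned fields.
row = df[df["bibkey"] == "mundlerSWTBenchTestingValidating2024"].iloc[0]
print(row[["benchmark", "task_source_clean", "metric_definition_clean"]])
```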
mundlerSWTBenchTestingValidating2024
|
SWT-Bench: Testing and Validating Real-World Bug-Fixes with Code Agents
|
Include
| null | null |
A benchmark for generating code tests (unit tests) from natural language user GitHub issues.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Automatic code test generation (i.e. generating unit tests for issues)
|
Yes
|
The ability to generate valid tests to reproduce an issue in a codebase.
|
Comprehensive
| null |
Given a GitHub issue in natural language, write tests that reproduce the described issue.
|
A GitHub issue (taken from SWE-Bench), code that contains the issue and code with a 'golden patch' that has the issue fixed. The goal is to write unit tests that fail on the faulty code but pass after the patch is added.
|
Very comprehensive details about task definition.
|
Real task examples (e.g. GitHub issues), Modified from another benchmark (e.g. translation into another language)
|
1900
|
Yes
|
Length of the GitHub issue in tokens, original GitHub repository
|
Specific criteria (items were taken from a larger set based on specified rules)
|
Structured response (e.g. valid JSON, API call alone)
|
Whether the faulty code fails on the test and the gold-standard code passes it.
| null |
SWE-bench, which originates from real GitHub issues
|
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Description length in tokens, original GitHub repository
| null |
https://github.com/logic-star-ai/SWT-Bench
|
SWT-Bench
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
The benchmark is itself realistic
|
No
|
Yes
|
Limitations in how the phenomenon was operationalised - all problems are in Python.
|
simple mean
|
Outputs alone
|
Complete real task (e.g. providing medical advice to real people interactively)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Agents
|
Coding
| null |
['Real task', 'Another benchmark']
|
['Criterion']
|
['Structured']
|
['Reward']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Complete']
|
['Mean']
|
davidsonEvaluatingLanguageModel2024
|
Evaluating Language Model Agency through Negotiations
|
Include
| null | null |
The paper introduces a dynamic framework for evaluating LLMs using negotiation games in self-play and cross-play settings. They find that only closed-source models are able to successfully complete the task and that stronger LLMs don't always win over weaker opponents.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Alignment
|
Yes
|
Alignment metrics of interest are internal and external faithfulness as defined in Section 2.3, and the ability to follow instructions. [...] We measure instruction-following behavior of staying within the maximum number of words allowed to generate notes/messages (note/msg instruct) and the ability to correctly format internal offer indications using valid JSON (format instruct). [... (from 2.3) ...] In natural language processing (NLP), faithfulness is a concept used to describe how accurately a model’s reasoning explains its answers/actions. To measure internal faithfulness, agents are asked to summarize acceptable offers for each Issue in their mental notes. [...] If Alice makes an offer to Bob for fewer slices than she stated as acceptable, we register this as an instance of internal unfaithfulness.
|
Subset
|
The paper is a bit unfocused in what it measures. The title says "Agency", the authors mainly note "Alignment" as motivation, and there is also a degree of "Negotiation skill" and "Theory of Mind".
|
The task is a series of negotiation games, where LLMs are given rules, a persona, protocols, and goals. Agents do both internal deliberation and external negotiation, and the game ends when a completion criterion is reached.
|
A single task is one round of a negotiation game that is either self-play or against another model.
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions)
| null |
Yes
|
prompts, game settings, issues
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Extended interaction (e.g. conversation, calling an API and processing the response)
|
Exact Match (accuracy, F1, precision, recall), Number of rounds completed
| null |
The authors generate a list of games and issues. It seems these were crafted manually.
|
Academia
|
Yes
| null |
This "benchmark" defines too many phenomena to fit neatly in the framework
|
Test
| null |
Negotiation
|
Simple Mean
|
Yes
|
Scores are reported for different types of games.
| null |
https://github.com/epfl-dlab/LAMEN/
| null |
Contested
|
Partially
|
Partially
|
Yes
|
No
|
No comparisons made
|
It is an entirely constructed scenario (no available realistic setting)
|
No
|
No
| null |
mean with variance
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
|
The task simulates agent negotiations (so no humans are involved)
|
Composite phenomenon
|
Yes
| null | null |
Alignment
|
Alignment
| null |
['Author-crafted']
|
['Targeted']
|
['Interaction']
|
['Exact match', 'Reward']
|
['Contested']
|
['Partially']
|
['Partially']
|
['Not possible']
|
['No']
|
['Constructed']
|
['Mean', 'Std']
|
helweMAFALDABenchmarkComprehensive2024
|
MAFALDA: A Benchmark and Comprehensive Study of Fallacy Detection and Classification
|
Include
| null | null |
The paper introduces MAFALDA, a benchmark that provides a unified classification of fallacies organized into a taxonomy. It features manually annotated data with explanations, a tailored annotation scheme, and an evaluation method for subjective NLP tasks. Various language models and human performance are evaluated on fallacy detection and classification in a zero-shot learning setting.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
fallacies in reasoning
|
Yes
|
A fallacy is an erroneous or invalid way of reasoning. A fallacy is an argument where the premises do not entail the conclusion. Sub-elements: Fallacy of credibility, fallacy of logic, appeal to emotion
|
Comprehensive
| null |
Given a text, detect fallacies and classify them
|
Level 0: binary classification (fallacy or not), Level 1: groups fallacies into Aristotle’s categories: ‘Pathos’ (appeals to emotion), ‘Ethos’ (fallacies of credibility), and ‘Logos’ (fallacies of logic, relevance, or evidence), Level 2 contains fine-grained fallacies within the
broad categories of Level 1. For instance, under fallacy of credibility, we have specific fallacies such as appeal to tradition, ad populum, and guilt by association.
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Crowd-sourced task examples (e.g. Prolific-created tasks), Modified from another benchmark (e.g. translation into another language)
|
9735
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
3 levels (different granularity)
| null |
GitHub
|
MAFALDA
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
No
| null |
simple mean/sum
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null | null |
Reasoning
|
Logical
| null |
['Author-crafted', 'Crowd-sourced', 'Another benchmark']
|
['Convenience', 'Targeted']
|
['Multiple choice']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Representative']
|
['Mean']
|
niuRAGTruthHallucinationCorpus2024
|
RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models
|
Include
| null | null |
This paper targets word-level hallucinations in various tasks and domains in the RAG setting. It presents approximately 18,000 responses generated using RAG from diverse LLMs which are annotated at the word level for hallucination intensity. Hallucination frequencies are benchmarked across various LLMs, and hallucination detection methods are assessed versus a small LLM fine-tuned using the proposed dataset, RAGTruth.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
hallucination detection, specifically for RAG applications
|
Yes
|
"Hallucination in the context of LLMs usually refers to a situation where the
model generates content that is not based on factual or accurate information"
|
Subset
| null |
For a given reference-response pair, determine if it contains hallucinated content at the response level and span level.
|
A single item consists of source information (reference), an LLM-generated response (which may contain various degrees of hallucination), annotation of the location and type of hallucination (if any), and a brief annotated explanation of the hallucination observed.
|
Additional meta-data regarding the model and inference hyperparameters used to generate each sample is provided, along with details regarding the source and task type for the reference texts.
|
Real task examples (e.g. GitHub issues), Crowd-sourced task examples (e.g. Prolific-created tasks), Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
2700
|
Yes
|
source information index, generating model, temperature, whether quality issues are present in the sample, task type of the data, source of the original content, prompt used to generate the response, base content for RAG
|
Random sample (creators defined a task space and sampled from it), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Short free response (e.g. single word or number), Free response (e.g. summary paragraph), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train
|
15090 (train)
| null |
Simple Mean
|
Yes
|
by task type (QA, summarization, data-to-text writing)
| null |
https://github.com/ParticleMedia/RAGTruth
|
RAGTruth
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
Yes
|
Benchmark statistics and quality checking are described. Hallucination density is assessed across models used to generate the data, in relation to context length, and versus position in the text.
| null |
Outputs alone
|
Complete real task (e.g. providing medical advice to real people interactively)
| null |
Composite phenomenon
|
Yes
| null | null |
Retrieval
| null |
Factuality
|
['Real task', 'Crowd-sourced', 'Another benchmark', 'Procedurally-generated', 'LLM-generated']
|
['Random', 'Targeted']
|
['Short free response', 'Free response', 'Structured']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Complete']
| null |
wangIELMOpenInformation2022
|
IELM: An Open Information Extraction Benchmark for Pre-Trained Language Models
|
Include
| null | null |
They introduce a new open information extraction (OIE) benchmark designed to evaluate the relational knowledge stored in pre-trained language models (LMs) such as BERT and GPT (published in 2022). Their method involves transforming these pre-trained LMs into zero-shot OIE systems to assess their performance on both existing and novel factual OIE datasets. Their results show that pre-trained LMs achieve competitive performance, even surpassing state-of-the-art supervised OIE methods on certain datasets without any additional training data.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
open information extraction, i.e. answering “fill-in-the-blank” questions without being given a pre-defined relation category
|
Yes
|
"In this work, we set up a new open information extraction (OIE) benchmark, called IELM, towards testing the general and open relational information stored in pre-trained LMs."
|
Comprehensive
|
For definition_integrity - the paper looks at both standard OIE and factual OIE.
|
"In this work, we set up a new open information extraction (OIE) benchmark, called IELM, towards testing the general and open relational information stored in pre-trained LMs. We refer to OIE as it is a task that is designed to extract open relations from massive corpora without requiring a pre-defined relation category."
|
"For open information extraction (OIE), we take an input as a NP-chunked sentence and output a set of triples. Below is an example.
Input DylanNP was born in MinnesotaNP, and was awarded Nobel PrizeNP.
Output (Dylan; born in; Minnesota), (Dylan; awarded; Nobel Prize).
NP denotes the noun phrase."
| null |
Crowd-sourced task examples (e.g. Prolific-created tasks), Based on knowledge graphs (KG) e.g. Wikidata
|
27,400,440 triples; 6,096,709 arguments; 5,418 predicates; 9,925,937 documents
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible)
|
Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
No, link is broken
| null | null |
Test
|
The dataset size above is summed over 4 datasets in Table 2.
|
Output is a set of triples
| null |
Yes
|
Metrics are reported for each OIE dataset (CaRB(existing), Re-OIE206 (existing), TAC KBP-OIE (novel), Wikidata-OIE (novel)).
| null |
https://github.com/cgraywang/IELM - This repository is empty.
|
IELM
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
Yes
|
They carry out an error analysis:
"We argue that we are measuring a lower bound for what LMs know. To further understand the shortcomings of the current method, we conduct an error analysis of the errors in precision on all datasets. We choose BERTLARGE for the study. We sample 100 documents from the Wikidata-OIE dataset, and manually check the reasons for the errors."
They find errors arising from incorrect arguments, missing pairs in predicate mapping, correct triples that are not covered by Wikidata, and incorrect predicate phrases.
|
The authors carry out some error analysis: "We argue that we are measuring a lower bound for what LMs know. To further understand the shortcomings of the current method, we conduct an error analysis of the errors in precision on all datasets. We choose BERTLARGE for the study. We sample 100 documents from the Wikidata-OIE dataset, and manually check the reasons for the errors"
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
No
| null | null |
NLP
|
Extraction
| null |
['Crowd-sourced', 'Procedurally-generated']
|
['Convenience']
|
['Structured']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative']
|
['Other']
|
heTGEAErrorAnnotatedDataset2021
|
TGEA: An Error-Annotated Dataset and Benchmark Tasks for Text Generation from Pretrained Language Models
|
Include
| null | null |
TGEA (Text Generation Error Annotation) is an error-annotated dataset with multiple benchmark tasks for text generation. Following the authors' hierarchical error taxonomy, crowdsourced workers manually labeled 12k erroneous sentences with semantic information, including error types, associated text spans, error corrections, and rationales behind the errors.
|
Validation: Crowdsourced workers manually checked each of those sentences and detected 12k erroneous sentences.
|
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Text generation error analysis
|
Yes
|
"The key interest of this dataset is detecting and annotating text generation errors from PLMs."
|
Subset
| null |
The task requires models to analyze machine-generated Chinese text to detect, locate, classify, correct, and explain generation errors according to a comprehensive taxonomy of error types.
|
A single item consists of machine-generated Chinese text with annotations marking error spans, associated spans, corrections, error type classifications, and explanatory rationales.
| null |
LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
47,058
|
Yes
|
error type classification, token counts, error span locations, span distances, error distribution
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number), Free response (e.g. summary paragraph)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF), Distribution (perplexity, calibration, correlation)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train
|
train (37,646), Dev (4,706), test (4,706)
| null |
None, Separate metrics for each sub-task with no single aggregated score
|
Yes
|
Erroneous text detection, Erroneous and associated span detection, Error type classification, Error correction, Rationale generation
| null |
https://download.mindspore.cn/dataset/TGEA/
|
TGEA
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
Yes
|
The authors validate their benchmark with inter-annotator agreement statistics for different tasks, Cohen's Kappa coefficients, a rigorous quality control protocol, annotation verification on sampled texts, and human performance baselines.
|
Simple means for performance metrics; agreement percentages and Cohen's Kappa for annotation reliability.
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null | null |
Factuality
| null | null |
['LLM-generated']
|
['Targeted', 'Criterion']
|
['Short free response', 'Free response']
|
['Exact match', 'Soft match', 'Distribution']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative']
|
['Mean', 'Other']
|
huangCEvalMultiLevelMultiDiscipline2023
|
C-EVAL: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models
|
Include
| null | null |
The paper introduces the C-EVAL evaluation suite for assessing advanced knowledge and reasoning abilities of foundation models in Chinese. It spans four difficulty levels and 52 disciplines. It also introduces C-EVAL HARD, a subset of challenging subjects that require advanced reasoning.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Knowledge and reasoning in Mandarin Chinese and on questions situated in the Chinese context
|
No
| null |
Comprehensive
| null |
Multiple choice questions from real-world human exams in China at different difficulty levels (e.g., high school, college) and different disciplines (e.g., STEM, humanities).
|
An MCQ with four possible answers.
| null |
Human exam questions (e.g. GRE questions)
|
12342
|
Yes
|
topic area (e.g., STEM, humanities) and difficulty level (e.g., middle school)
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test, Train, Validation
|
Dev: 260, Valid: 1346
| null |
Simple Mean
|
Yes
|
Subject/exam (and by extension difficulty)
| null |
https://github.com/hkust-nlp/ceval/tree/main
|
C-EVAL
|
Contested
|
They follow the lead of popular knowledge and reasoning benchmarks, so it's hard to say here.
|
Not sure about this. Compared to other similar benchmarks, yes. In general, probably not.
|
Yes
|
Yes
|
Yes
|
No
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null | null |
Knowledge
|
Cultural
| null |
['Human exams']
|
['Convenience']
|
['Multiple choice']
|
['Exact match']
|
['Contested']
|
['Partially']
|
['Partially']
|
['No comparison made']
|
['No']
|
['Representative']
|
['Mean']
|
myungBLEnDBenchmarkLLMs2024
|
BLEnD: A Benchmark for LLMs on Everyday Knowledge in Diverse Cultures and Languages
|
Include
| null | null |
The paper introduces BLEnD, a novel benchmark comprising hand-crafted question-answer pairs designed to evaluate LLMs on everyday cultural knowledge across 16 countries/regions and 13 languages, including low-resource ones. It demonstrates significant performance disparities among models, showing cultural and linguistic biases, especially in underrepresented regions.
|
answer format: short-answer and MCQ, 52.6k question-answer pairs, BLEND includes 500 question templates that reflect daily life aspects across six socio-cultural categories: food, sports, family, education, holidays/celebrations/leisure, and work-life.
|
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
cultural knowledge and multilingual cultural commonsense understanding
|
Yes
|
knowledge of everyday cultural practices that are specific to different countries and regions. This includes understanding what people commonly do, eat, or experience in their daily lives within a specific cultural and linguistic context. Specifically, dimensions such as food, sports, celebrations, education, family, and work-life are considered.
|
Subset
| null |
The task is to evaluate large language models on their ability to correctly answer short-answer and multiple-choice questions about everyday cultural practices from various countries and regions, using either local languages or English. Human evaluation is conducted on short-answer questions with annotators coming from the tested regions.
|
"Al-en-06": {
"question": "대한민국 학교 급식에서 흔히 볼 수 있는 음식은 무엇인가요?",
"en_question": "What is a common school cafeteria food in your country?",
"annotations": [
{
"answers": [
"김치"
],
"en_answers": [
"kimchi"
],
"count": 4
},
{
"answers": [
"밥",
"쌀밥",
"쌀"
],
"en_answers": [
"rice"
],
"count": 3
},
...
],
"idks": {
"idk": 0,
"no-answer": 0,
"not-applicable": 0,
"others": []
}
},
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Crowd-sourced task examples (e.g. Prolific-created tasks), Procedurally-generated task examples (e.g. Creating instances from a template)
|
52.6k
|
Yes
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall), LLM post-processing (extracting answers, reformatting for automated scoring)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
by language (native and English)/country (region)
| null |
https://github.com/nlee0212/BLEnD
|
BLEnD
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
for short-answer questions, there is a human evaluation, which to some extent can represent the validity of the questions
| null |
simple mean, ANOVA for p-values, Tukey HSD
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null | null |
Knowledge
|
Cultural
| null |
['Author-crafted', 'Crowd-sourced', 'Procedurally-generated']
|
['Convenience', 'Targeted', 'Criterion']
|
['Multiple choice', 'Short free response']
|
['Exact match', 'LLM post-processing']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Mean', 'Tests']
|
yaoWebShopScalableRealWorld2022
|
WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents
|
Include
| null | null |
The paper introduces WebShop, a simulated online shopping environment where agents try to follow natural language instructions to find and buy the right products. The WebShop benchmark is designed to test how well agents can search, navigate, and make decisions on the web. The authors train models using imitation and reinforcement learning, and show that the best ones can even handle similar tasks on real sites like Amazon and eBay.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Natural language understanding and sequential decision-making in web environments.
|
No
|
To evaluate agents that can understand human-provided natural language instructions and perform grounded actions in a realistic web environment, e.g., generating search queries, navigating results, selecting options, and (at the end, if successful) purchasing a product that matches the instruction.
|
Subset
| null |
The task is to follow a natural language instruction to find and purchase a product in a simulated e-commerce environment. The agent must search, navigate pages, select product options, and choose the best match based on the instruction.
|
A natural language instruction specifying a desired product (including attributes, options, and price constraints), together with the starting state of the simulated shopping environment. The agent must then complete the task by navigating and interacting with the website to find and purchase a suitable product.
| null |
Real task examples (e.g. GitHub issues), Crowd-sourced task examples (e.g. Prolific-created tasks)
|
500
|
Yes
|
product category, product attributes, product options, product price
|
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Free response (e.g. summary paragraph), Extended interaction (e.g. conversation, calling an API and processing the response)
|
Reward is computed based on the final product chosen by the agent, compared against the known attributes, options, and price of the target product.
| null | null |
Academia
|
Yes
| null |
Here the evaluation is fully automated, which allows for easier reproduction; this seems like a significant advantage compared to other benchmarks.
| null |
“[...] a total of 12,087 instructions into an i.i.d. distributed train / development / test split of 10,587 / 1,000 / 500 instances"
| null |
Simple Mean
|
Yes
|
Paper reports breakdowns by reward components: attribute match score, option match score, price match, and type match.
| null |
https://webshop-pnlp.github.io/
|
WebShop
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
Yes
|
Yes
|
Yes
|
They discuss the performance gap between models and humans, give a fairly detailed analysis of error types (e.g., failures in option matching or limited exploration), and present evidence of sim-to-real transfer to Amazon and eBay aimed at indicating external validity, as well as component-wise ablations and choice-oracle experiments (where the model does not have to choose) to diagnose bottlenecks.
|
The authors report average task score and success rate across trials. They also include standard deviation/error bars in some result plots (e.g. Figure 4), mainly to show the variation across multiple runs.
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
|
WebShop simulates online shopping using real product data and a realistic UX, but it operates in a custom environment with a simplified interface and a deterministic search engine. So while the core interactions reflect a real-world activity, it doesn't capture the full complexity or variability of actual web browsing with a human properly in the loop, or of real user behaviour.
|
Composite phenomenon
|
No
| null | null |
Agents
|
Web
| null |
['Real task', 'Crowd-sourced']
|
['Convenience', 'Targeted', 'Criterion']
|
['Multiple choice', 'Free response', 'Interaction']
|
['Reward']
|
['Contested']
|
['Yes']
|
['Yes']
|
['Comparison made']
|
['Yes']
|
['Partial']
|
['Mean', 'Std']
|
sanyalRobustLRDiagnosticBenchmark2022
|
ROBUSTLR: A Diagnostic Benchmark for Evaluating Logical Robustness of Deductive Reasoners
|
Include
| null | null |
Deductive reasoning is an important skill that modern language models should possess. However, small logical perturbations of deductive reasoning problems can lead to inconsistent model responses. To test this consistency, the paper introduces RobustLR, a benchmark consisting of logical problems ("theories") and variations thereof that should be consistently answered correctly by models.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
robustness of deductive reasoning against small shifts in logical operators or rephrasing.
|
Yes
|
"We consider a deductive reasoner (language model) to be logically robust if the model behavior is consistent across various logical perturbations."
|
Comprehensive
|
Consistency here can be misinterpreted: The perturbations applied to problems cause different conclusions. Consistency is here defined as being accurate across perturbations, i.e. changing the label when the input changes. This is in contrast to many other works that regard consistency as invariance.
|
The task has 2 levels: the underlying task is conducting deductive reasoning, a classification problem with labels "True", "False", and "Unknown". The "meta-task" is being consistent across a set of related problems.
|
One item in the benchmark is a set: "original problem" + a set of perturbations on the problem. Each problem is a set of facts, rules and deduction.
| null |
Procedurally-generated task examples (e.g. Creating instances from a template)
| null |
No
| null |
Random sample (creators defined a task space and sampled from it), Convenience sample (creators found a set of tasks that was readily accessible)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null |
The synthetic nature of the benchmark strongly limits its ecological validity for real user interaction, but the authors are very clear and transparent about this. The lack of ecological validity is compensated by internal validity.
|
Test
| null |
yes
|
Simple Mean
|
Yes
|
different kinds of perturbations of the problem.
| null |
https://github.com/INK-USC/RobustLR
|
RobustLR
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
Yes
|
The authors clearly state limitations due to the simple composition of rules used for perturbations and the synthetic, toy nature of the dataset. They also validate that humans can achieve good scores on the problems while language models do not.
|
mean of weighted-F1 scores
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null | null |
Reasoning
|
Logical
|
Robustness
|
['Procedurally-generated']
|
['Random', 'Convenience']
|
['Multiple choice']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative']
|
['Mean']
|
albalakFETABenchmarkFewSample2022
|
FETA: A Benchmark for Few-Sample Task Transfer in Open-Domain Dialogue
|
Include
| null | null |
Examines few-sample task transfer across 17 subtasks (e.g., utterance-level classification, dialogue-level classification, span extraction, multiple-choice) in open-domain dialogue with diverse properties (dyadic vs. multi-party, anonymized vs. recurring speaker, varying dialogue lengths).
|
Claims to be "the first large-scale benchmark for task transfer in dialogue, with 132 source-target task pairs"
|
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Task transfer, transferring knowledge contained in related tasks, in few-sample settings (10% of original instance set)
|
Yes
|
Task transfer, transferring knowledge contained in related tasks. Definition 3 (Task Transfer): given a source task T_S = {Y_S, f_S(X_S)} and a target task T_T = {Y_T, f_T(X_T)}, task transfer is the use of a learning algorithm, A, to improve the learning of f_T by using the knowledge in T_S.
They also define few-sample: for this reason, we focus on the few-sample setting, defined in FETA as 10% of the original instance set. Out of 10%, 5%, and 1%, 10% was empirically determined to be the smallest percentage that retains labels from all label sets in both the train and development partitions.
|
Subset
|
They define separately: (1) cross-dataset task transfer, when X_S ≠ X_T, we also have P(X_S) ≠ P(X_T) and D_S ≠ D_T (domain shift); vs. (2) intra-dataset task transfer, when X_S = X_T, there is no domain shift.
|
The tasks are classic NLP tasks subsumed in dialogue, e.g., emotion recognition during chit-chat conversations, or character identification from a TV transcript.
|
Input = a dialogue (from DailyDialog); Subtask = Emotion Recognition; Output = Happiness; OR Input = a transcript from a TV Show (from Friends); Subtask = QA, question = How long did Rachael train for?; Output = 2 weeks.
|
They focus on intra-dataset transfer but not cross-domain transfer.
|
Modified from another benchmark (e.g. translation into another language), Human TV show; Human chitchat dialogues
|
71,212
|
Yes
|
They provide the data source (e.g., DailyDialog, Friends), the task name (e.g., emotion recognition or QA), and a categorisation of task type (e.g., utterance classification vs. span extraction)
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Depends on the subtask category (Utterance Classification, Dialogue Classification, Multiple Choice, Span Extraction)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
|
Was originally run as a challenge for an ACL 2023 workshop
| null |
Test, Train, Validation
|
Train=28,261, Dev = 5,132
| null |
Simple Mean
|
Yes
|
They provide results over the task categories - Utterance Classification, Dialogue Classification, Multiple Choice, Span Extraction
|
They calculate a top-1 score "to understand how models and algorithms perform if the best source task is known ahead of time. This score is calculated as the maximum score over source tasks averaged over target tasks"
|
https://alon-albalak.github.io/feta-website/
|
FETA
|
Widely-agreed
|
Partially
|
Partially
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
Mean, and they show a delta (for change in aggregate sources across all tasks). It is unclear whether this is a range or a standard deviation; I think it is a range.
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
|
Using the model for various tasks contained in dialogue seems a more general, ecologically valid use case than understanding the Friends transcripts, but the latter could also be an applied use case.
|
Composite phenomenon
|
Yes
| null | null |
Language Modelling
|
Adaptability
| null |
['Another benchmark', 'Author-crafted']
|
['Convenience']
|
['Short free response']
|
['Exact match']
|
['Widely-agreed']
|
['Partially']
|
['Partially']
|
['No comparison made']
|
['No']
|
['Partial']
|
['Mean']
|
beanLINGOLYBenchmarkOlympiadLevel2024
|
LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low Resource and Extinct Languages
|
Include
| null | null |
The paper introduces LINGOLY, a new benchmark built on Linguistics Olympiad puzzles in low-resource and extinct languages to test genuine reasoning capabilities in LLMs. The benchmark is crafted to cover diverse reasoning complexity, linguistic subject areas, instruction types, and high/low resource levels. The paper uncovers error patterns between high- and low-resource settings and shows the ongoing challenges in multi-step, out-of-domain reasoning.
|
The important contribution is to define reasoning tasks via necessity and sufficiency: the task cannot be done without reasoning and can be done via reasoning. For fair evaluation, the paper proposes to use low-resource languages to learn the linguistic and grammatical patterns (necessity) that are rare online (sufficiency). The error patterns show that LLMs still struggle with complex (multi-step, out-of-domain) reasoning tasks.
|
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Multi-step, out-of-domain linguistic reasoning, low-resource languages,
|
Yes
|
We argue that a benchmark task measures reasoning if the task 1) cannot be
done without reasoning (necessity) and 2) can be done via reasoning (sufficiency). However, the combination of these features is difficult to achieve in practice since memorisation and contamination may reduce the necessity of reasoning, and in tasks which draw on background knowledge, as in most ‘commonsense’ benchmarks[7], reasoning itself is insufficient to complete the task.
|
Subset
|
No-context baseline: evaluate whether model performance drops when the context is removed. This is meant to assess whether the model relied on memorization or on reasoning from the linguistic clues in the context.
|
The task is to assess the genuine reasoning capabilities of LLMs by providing context containing low-resource linguistic information and questions to solve based on that context (or without context, to penalize memorized knowledge). The expected output is a concise textual answer that can be matched against ground-truth labels.
|
Below is a problem sheet…
{PREAMBLE}
{CONTEXT}
{QUESTIONS}
{SUBQUESTIONS}
Now respond to the following…
{REPEAT 1 QUESTION}
Format your response as…
{FORMAT TEMPLATE}
|
Compare the model performance with and without contextual information to penalize the memorized knowledge and evaluate the genuine reasoning abilities of LLMs using the linguistic cues from the given knowledge.
|
Human exam questions (e.g. GRE questions), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions)
| null |
Yes
|
human difficulty, linguistic subjects, task format, language
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Multiple choice, Short free response (e.g. single word or number), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
|
The metric reports exact match using a standard script, with LLMs outputting JSON in a single pass. This is different from employing "LLM post-processing" in the sense of an additional LLM-based step to reformat and judge the responses.
They exclude all questions where the answer is "fuzzy" (i.e., accepts synonyms or a free-text response) because they cannot automate the evaluation of synonym similarity across languages.
|
The task from LINGOLY is adapted from official Linguistics Olympiads puzzle sets rather than everyday language usage scenarios or standard benchmarking corpora.
|
Academia
|
Yes
| null |
One critical point is whether language models perform poorly due to the unfamiliar format or due to out-of-domain reasoning: the mismatch between the puzzle's presentation style and the distribution of model instruction templates may cause certain reasoning failures depending on the model type. It would be nice to see whether the benchmark shows particular patterns across model types.
|
Test
|
1,133 questions all for testing.
|
Free-response items exist but are excluded from evaluation (the only case where an instance has a missing answer is when the intended answer was a free response, e.g., "explain your reasoning"; these questions are included in the dataset but removed from scoring as they are not compatible with being machine-scored).
|
Simple Mean
|
Yes
|
Human difficulty, puzzle format, linguistic subject, language resourcedness
| null |
The Hugging Face dataset works, while the GitHub zip file requires a passcode for access.
|
LINGOLY
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
Yes
|
Across models, performance is consistently higher on problems with easier human difficulty and higher-resource languages than on those with harder difficulty and lower-resource languages. (The LLMs tested have limited reasoning abilities in low-resource languages and do not achieve the multi-step reasoning required by the harder questions, in addition to making instruction-following errors alongside the core reasoning tasks.)
|
The authors use a weighted mean in calculating an approximate human performance threshold but not for model performance. They take a weighted average of the annual medal thresholds for ‘Advanced’ problems.
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
|
While the benchmark comes from authentic Linguistics Olympiad puzzles, these are still competition-style questions rather than real-world usage scenarios. Hence it can be categorized as a representative task of a specialized exam setting.
|
Single cohesive phenomenon
|
No
| null | null |
Reasoning
|
Logical
| null |
['Human exams', 'Author-crafted']
|
['Convenience']
|
['Multiple choice', 'Short free response', 'Structured']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative']
|
['Mean']
|
nasirGameTraversalBenchmarkEvaluatingPlanning2024
|
GameTraversalBenchmark: Evaluating Planning Abilities Of Large Language Models Through Traversing 2D Game Maps
|
Include
| null | null |
The paper investigates the planning capabilities of LLMs by proposing GameTraversalBenchmark (GTB), a benchmark consisting of diverse 2D grid-based game maps. The paper also provides metrics to give insights into the planning abilities of LLMs.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Planning abilities of LLMs
|
No
| null |
Subset
| null |
The task is a game based on 2D maps. They consider a generated map as one data point for the benchmark. The map's generated objective coordinates are the points the LLM agent needs to traverse to attain the most reward.
|
Each item is a 2D grid-based map of alphanumeric characters.
| null |
LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
150
|
No
| null |
Unknown
|
Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall), The paper defines a reward score
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
No
| null | null |
https://github.com/umair-nasir14/Game-Traversal-Benchmark/
|
GameTraversalBenchmark (GTB)
|
Not defined
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
simple mean and STD
|
Outputs alone
| null | null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Reasoning
|
Planning
| null |
['LLM-generated']
|
['Unknown']
|
['Structured']
|
['Exact match', 'Reward']
|
['No definition']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['']
|
['Mean', 'Std']
|
feiLawBenchBenchmarkingLegal2024
|
LawBench: Benchmarking Legal Knowledge of Large Language Models
|
Include
| null | null |
LawBench tests 21 models on 20 Chinese legal tasks (500 instances each), which are classified along Bloom's taxonomy into knowledge memorization, understanding, and application. It is the first benchmark for the Chinese legal domain, and the first for civil law (vs. common law) jurisdictions.
|
Most of these tasks are compiled/sampled from existing benchmarks, notably JEC-QA and the CAIL series. However, some tasks are created originally, e.g., asking law students to choose suitable questions, or scraping a legal Q&A website.
|
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
legal knowledge memorization, understanding, and application
|
Yes
|
LawBench is the first evaluation benchmark developed for the Chinese legal domain. It defines the phenomenon in terms of legal knowledge capabilities mapped to cognitive levels from Bloom’s Taxonomy.
|
Subset
|
Bloom's taxonomy for task grouping
|
Perform 20 specific legal functions using text-based input and return a defined output (of various forms, including classification label, summary, number)
|
Varies strongly between the 20 tasks, but generally: a legal input (fact description, question, judgement) and a required output of various forms.
| null |
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
10000
|
Yes
|
Task ID, Bloom's taxonomy level (used to indicate difficulty), task type, metric
|
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Short free response (e.g. single word or number), Free response (e.g. summary paragraph)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF), LLM post-processing (extracting answers, reformatting for automated scoring)
|
Metrics are task-specific.
|
Most tasks adapted from existing legal datasets: CAIL, JEC_QA, and LEVEN.
|
Mostly academia, 1 research institute, 1 high school
|
Yes
| null | null |
Test
| null |
Response format varies by task. Dataset sampling above: mostly "convenience sampled"/rehashed from existing benchmarks.
|
Simple Mean
|
Yes
|
By task (each of 20), by Bloom's taxonomy level (each of memorization, understanding, application), by zero-shot vs. one-shot
| null |
https://github.com/open-compass/LawBench
|
LawBench
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
No
| null |
Simple means and macro-averaging (mean across tasks, which is identical here because each task has same # of instances)
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
|
Validity varies strongly between tasks. Memorization tasks (2/20) do not reflect real-world human work. Most others are taken from benchmarks in QA format. Some are "partial real tasks", e.g., answering legal questions scraped from a legal QA site.
|
Composite phenomenon
|
Yes
| null | null |
Law
| null | null |
['Real task', 'Author-crafted', 'Another benchmark', 'LLM-generated']
|
['Convenience', 'Targeted', 'Criterion']
|
['Multiple choice', 'Short free response', 'Free response']
|
['Exact match', 'Soft match', 'LLM post-processing']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Representative']
|
['Mean']
|
yuksekgonulWhenWhyVisionlanguage2023
|
When and Why Vision-Language Models Behave like Bags-Of-Words, and What to Do About It?
|
Include
| null | null |
This paper creates the Attribution, Relation, and Order (ARO) benchmark to systematically evaluate the ability of VLMs to understand different types of relationships, attributes, and order information. They demonstrate that VLMs can perform well on image-text retrieval over existing datasets without using the composition and order information.
|
The authors propose a simple finetuning method that improves model understanding of attributes and relations by introducing two types of composition-aware hard negatives: visually similar images to emphasize fine-grained differences, and captions with scrambled word order to enforce sensitivity to syntax.
|
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Compositional understanding in VLMs
|
No
| null |
Subset
| null |
ARO consists of Visual Genome Attribution, to test the understanding of objects’ properties; Visual Genome Relation, to test for relational understanding; and COCO-Order & Flickr30k-Order, to test for order sensitivity in VLMs.
|
A sample would be an image, one true and one false statement about the image, the two objects presented in the image, and the attributes of the objects
| null |
Modified from another benchmark (e.g. translation into another language)
|
28,700
|
No
| null |
Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
No
|
Stratification based on the four introduced tasks: 1) Visual Genome Attributions, 2) Visual Genome Relations, 3) COCO Order and 4) Flickr30k Order
| null |
https://huggingface.co/datasets/gowitheflow/ARO-Visual-Attribution
|
ARO
|
Not defined
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
No
| null |
macro-accuracy
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null | null |
Reasoning
|
Compositional
| null |
['Another benchmark']
|
['Criterion']
|
['Multiple choice', 'Short free response']
|
['Exact match']
|
['No definition']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
|
['Mean']
|
xieWhodunitBenchEvaluatingLarge2024
|
WhodunitBench: Evaluating Large Multimodal Agents via Murder Mystery Games
|
Include
| null | null |
The paper evaluates LLMs' ability to participate in (and answer questions about) murder mystery games. In the arena component (agents play as either detective or murderer in a multi-agent setting), the agents are tested on win rate against the other models. The QA component is split based on capability categories (Perception, Role-Play, Decision-Making, and Cognition).
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
The authors evaluate four distinct capabilities: multi-modal perception, interaction, reasoning and goal achievement.
|
Yes
|
• Multi-modal Perception is the most basic ability for LMAs, which requires LMAs to perceive information from the multimodal environment (e.g., vision and language).
• Interaction requires LMAs, whether through role-playing or direct engagement, to communicate with the environment or other agents to gather essential information for task completion.
• Reasoning requires LMAs to combine their internal knowledge with newly gathered information to perform long-chain, multi-step reasoning.
• Decision Making and Goal Achievement requires LMAs to establish clear goals and make independent decisions in response to environmental changes. This autonomous decision-making is crucial for effectively navigating and completing tasks in dynamic settings.
|
Subset
|
Since the benchmark evaluates many things, the level of detail differs between the constructs.
|
The agent arena component is based on "winning" in a murder mystery game, whereas the Chain-of-Evaluation component is based on a QA format.
|
In the arena setting, each task item is a single murder mystery game with a winner. In the CoE, each task is a multiple-choice question.
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Crowd-sourced task examples (e.g. Prolific-created tasks)
|
3000
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible)
|
Multiple choice, Extended interaction (e.g. conversation, calling an API and processing the response)
|
Exact Match (accuracy, F1, precision, recall), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics), Win rate
|
Some metrics (e.g., role-playing) are entirely LLM-derived
|
The arena is based on a script, and the questions are manually annotated. The murder game scripts come from real sources.
|
Academia
|
A repo is provided, but it contains no code.
| null | null |
Test
|
Only reported approximately
|
CoE is multiple choice, arena is extended interaction
|
Simple Mean
|
No
| null | null |
https://github.com/jun0wanan/WhodunitBench-Murder_Mystery_Games
|
WhodunitBench
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
Simple mean (no variance or standard deviation reported)
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
|
It is based on a purely fictional game, with the hope that the capabilities tested are general enough to transfer.
|
Composite phenomenon
|
Yes
| null | null |
Agents
| null | null |
['Author-crafted', 'Crowd-sourced']
|
['Convenience']
|
['Multiple choice', 'Interaction']
|
['Exact match', 'LLM-as-a-Judge', 'Reward']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
|
['Mean']
|
saparinaAMBROSIABenchmarkParsing2024
|
AMBROSIA: A Benchmark for Parsing Ambiguous Questions into Database Queries
|
Include
| null | null |
The paper introduces a new benchmark dataset designed to evaluate text-to-SQL parsers' ability to handle ambiguous user requests. The dataset includes questions demonstrating scope ambiguity, attachment ambiguity, and vagueness, along with their interpretations and corresponding SQL queries. The authors highlight that existing large language models (LLMs) struggle with these ambiguities, suggesting a need for improved parser development.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
text-to-SQL parsing
|
Yes
|
Evaluation of text-to-SQL parsers capable of recognizing and interpreting ambiguous
requests
|
Comprehensive
| null |
text-to-SQL parsing, generate database, validate generated databases
|
Question, prompt, SQL query, scope/ambiguity/vagueness, generated database, score (human annotation)
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
5093
| null | null |
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall), Human ratings (text quality, preference, NOT manual scoring of other metrics)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
| null | null |
https://ambrosia-benchmark.github.io/
|
AMBROSIA
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
No
|
No
| null |
mean and variance
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null | null |
Code Generation
|
Natural Language
| null |
['Author-crafted', 'LLM-generated']
|
['Targeted']
|
['Structured']
|
['Exact match', 'Human ratings']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Representative']
|
['Mean', 'Std']
|
augustyniakThisWayDesigning2022
|
This is the way: designing and compiling LEPISZCZE, a comprehensive NLP benchmark for Polish
|
Include
| null | null |
The authors introduce LEPISZCZE, a new, comprehensive benchmark for Polish NLP with a large variety of tasks and a high-quality operationalization of the benchmark. LEPISZCZE was designed with flexibility in mind: including new models, datasets, and tasks is as simple as possible while still offering data versioning and model tracking. In the first run of the benchmark, 13 experiments (task and dataset pairs) were tested based on the five most recent LMs for Polish. Five datasets from an existing Polish benchmark are reused and eight novel datasets are added.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
model performance on Polish language across various tasks (13)
| null |
The ability of language models to understand and process Polish language across a diverse range of NLP tasks, evaluated using 13 task-dataset pairs that include classification, natural language inference, and sequence labeling tasks.
|
Subset
| null |
Each task in the LEPISZCZE benchmark is defined as a standard NLP problem—such as classification, sequence labeling, or natural language inference—applied to Polish-language datasets. These tasks test specific linguistic capabilities of models, like sentiment analysis, named entity recognition, part-of-speech tagging, and others.
|
There are datasets for 13 tasks.
|
Entailment Classification, Q&A Classification, Sentiment Analysis, Paraphrase Classification, Abusive Clauses Detection, Aspect-based Sentiment Analysis, NER, POS Tagging, Political Advertising Detection, Punctuation Restoration, Dialogue Acts Classification
|
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Crowd-sourced task examples (e.g. Prolific-created tasks), Modified from another benchmark (e.g. translation into another language)
|
30,003
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test, Train, Validation
|
204,504 and 9,970
| null |
Simple Mean
|
No
| null | null |
https://huggingface.co/clarin-pl , https://github.com/CLARIN-PL/LEPISZCZE
|
LEPISZCZE
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
mean and standard deviation
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Composite phenomenon
|
Yes
| null | null |
Multilinguality
| null | null |
['Real task', 'Author-crafted', 'Crowd-sourced', 'Another benchmark']
|
['Convenience', 'Targeted', 'Criterion']
|
['Short free response', 'Structured']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
|
['Mean', 'Std']
|
huiUDABenchmarkSuite2024
|
UDA: A Benchmark Suite for Retrieval Augmented Generation in Real-world Document Analysis
|
Include
| null | null |
The paper introduces the UDA (Unstructured Document Analysis) benchmark. UDA questions are expert-annotated Q&A pairs on PDF and HTML documents, constructed from datasets of academic papers, financial reports, and Wikipedia pages.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Analysing unstructured documents
|
No
|
Vague and multifaceted: "we propose a benchmark suite that enables the evaluation of various components of RAG-based unstructured document analysis"
|
Subset
| null |
LLMs are given an unstructured document and a factual question about the contents of that document. The correct answer is some extracted text or figure from the document.
|
An unstructured document might be a financial report in PDF format, containing tabular data. The question might ask for the total value of some column, like "total vested shares during the 2012 fiscal year, in millions," and correct answers might be [1.46, 1.45972].
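A minimal sketch (not from the UDA paper) of how such a short numeric answer could be checked against multiple accepted gold values; the normalization and tolerance rules below are assumptions for illustration only:

```python
# Hypothetical scorer for short numeric answers with several accepted gold values.
# The normalization and tolerance rules are illustrative assumptions, not the
# matching logic actually used by UDA.
def numeric_exact_match(prediction: str, gold_answers: list[float],
                        rel_tol: float = 1e-2) -> bool:
    try:
        value = float(prediction.strip().rstrip("%").replace(",", ""))
    except ValueError:
        return False
    return any(abs(value - gold) <= rel_tol * max(abs(gold), 1.0)
               for gold in gold_answers)

# e.g. numeric_exact_match("1.46", [1.46, 1.45972]) -> True
```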
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language)
|
29,590
|
Yes
|
topic area
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Short free response (e.g. single word or number), Free response (e.g. summary paragraph)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF)
| null |
Hand-written answers are "expert annotated" by the authors of six Q&A datasets; the authors curate and filter these without changing the labels.
|
Academia
|
Yes
| null | null |
Test
| null |
"Free responses" are intended to be extracted from the provided file's text.
|
Simple Mean
|
Yes
|
Scores by underlying Q&A dataset, context type (whether document chunks are provided by RAG or by human annotators)
|
pass@k (any correct answer in k trials)
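For reference, pass@k is commonly estimated with the unbiased estimator of Chen et al. (2021), where n completions are sampled per item and c of them are correct; whether UDA uses this exact estimator is not stated here:

```latex
\mathrm{pass@}k \;=\; \mathbb{E}_{\text{items}}\!\left[\, 1 - \frac{\binom{n-c}{k}}{\binom{n}{k}} \,\right]
```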
|
https://github.com/qinchuanhui/UDA-Benchmark
|
UDA
|
Widely-agreed
|
Yes
|
No
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
Simple mean/sum; % improvement between contexts
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null | null |
Retrieval
| null | null |
['Author-crafted', 'Another benchmark']
|
['Convenience']
|
['Short free response', 'Free response']
|
['Exact match', 'Soft match']
|
['Widely-agreed']
|
['Yes']
|
['No']
|
['No comparison made']
|
['No']
|
['Representative']
|
['Mean', 'Other']
|
xiaFOFOBenchmarkEvaluate2024
|
FOFO: A Benchmark to Evaluate LLMs’ Format-Following Capability
|
Include
| null | null |
FOFO is a benchmark for domain-specific format-following capabilities. It evaluates a wide array of domains and subdomains across a diverse set of formats, from specific medical forms to Maple. The specific examples are generated using GPT-4 and human validation.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Format following
|
Yes
|
"precise adherence to specified formats given by humans"
|
Subset
| null |
The task is to generate dummy data in a specified format defined by detailed instructions within a given domain.
|
A single formatting instruction with a domain (e.g., Manufacturing), a subdomain (e.g., Optimization), and a format (e.g., "Standard Operating Procedures") with an example of the format.
| null |
LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
494
|
Yes
|
domain,subdomain,format
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Structured response (e.g. valid JSON, API call alone)
|
LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
| null | null |
https://github.com/SalesforceAIResearch/FoFo
|
FOFO
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
No
| null | null |
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
|
While following formatting instructions is real, the data is still dummy.
|
Composite phenomenon
|
Yes
| null | null |
Instruction Following
| null | null |
['LLM-generated']
|
['Convenience']
|
['Structured']
|
['LLM-as-a-Judge']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
| null |
wangMINTEvaluatingLLMs2024
|
MINT: EVALUATING LLMS IN MULTI-TURN INTERACTION WITH TOOLS AND LANGUAGE FEEDBACK
|
Include
| null | null |
MINT extends existing benchmarks to evaluate the effects of code interpreter usage and multi-turn feedback on LLM performance. It filters benchmark tasks to ones that benefit from feedback and multi-turn interactions, and evaluates different feedback types, from "lazy user" to "informative user", with and without tools.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Reasoning, coding, and decision-making
|
No
| null |
Subset
|
Each high-level phenomena is measured separately
|
The task measures how performance on existing benchmarks (QA) changes when the model is given access to GPT-4 feedback and/or a code interpreter.
|
The tasks come from different benchmarks. Most are in a QA format.
| null |
Modified from another benchmark (e.g. translation into another language)
|
586
|
Yes
|
source dataset
|
Random sample (creators defined a task space and sampled from it)
|
Short free response (e.g. single word or number), Extended interaction (e.g. conversation, calling an API and processing the response)
|
Exact Match (accuracy, F1, precision, recall)
| null |
The tasks are sampled from 8 different benchmarks.
|
Academia
|
Yes
| null | null |
Test
| null |
While the expected result is often a short free response, it can be created through interaction.
|
Simple Mean
|
Yes
|
Provided by number of turns of feedback
| null |
https://github.com/xingyaoww/mint-bench
|
MINT
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
They do a partial study with actual human feedback on the benchmark tasks.
|
No
|
No
| null | null |
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Agents
|
Coding
| null |
['Another benchmark']
|
['Random']
|
['Short free response', 'Interaction']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['Comparison made']
|
['No']
|
['Representative']
| null |
valmeekamPlanBenchExtensibleBenchmark2023
|
PlanBench: An Extensible Benchmark for Evaluating Large Language Models on Planning and Reasoning about Change
|
Include
| null | null |
PlanBench introduces a suite of tasks relevant to planning using similar formats to the International Planning Competition. The tasks are taken from either Blocksworld or logistics and also obfuscated to avoid reliance on common-sense knowledge.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Planning
|
Yes
|
planning involves coming up with a course of actions (policy) which when executed would take an agent from a certain initial state to a desired world state
|
Subset
| null |
The main task (planning) is: given a description of a state (e.g., a block configuration), rules, and a goal state, come up with a plan that transforms the initial state into the goal state. The sub-tasks are variations of these components.
|
A specified state, actions, and goal state, plus a query for what the LLM should do (come up with a plan, predict plan execution, etc.).
|
There are in total 8 different tasks with slightly different goals (e.g., direct planning, replanning, execution prediction)
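A hypothetical sketch of what a Blocksworld-style item could look like; the field names and wording are invented for illustration and are not taken from PlanBench:

```python
# Hypothetical illustration of a PlanBench-style item; field names and the
# exact phrasing are invented, not copied from the benchmark.
planbench_item_sketch = {
    "domain": "blocksworld",
    "initial_state": "Block B is on block A; block A is on the table; the hand is empty.",
    "actions": ["pick-up", "put-down", "stack", "unstack"],
    "goal_state": "Block A is on block B.",
    "query": "Provide a sequence of actions that achieves the goal from the initial state.",
}
```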
|
Procedurally-generated task examples (e.g. Creating instances from a template)
|
1910
|
Yes
|
domain
|
Random sample (creators defined a task space and sampled from it)
|
Free response (e.g. summary paragraph), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null |
The plan is a fairly structured set of actions, but not quite as structured as e.g., an API
|
Simple Mean
|
Yes
|
Domain, Obfuscated (Bool)
| null |
https://github.com/karthikv792/LLMs-Planning
|
PlanBench
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null | null |
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
|
The task is based on a real competition but has a level of gaminess.
|
Composite phenomenon
|
Yes
| null | null |
Reasoning
|
Planning
| null |
['Procedurally-generated']
|
['Random']
|
['Free response', 'Structured']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
| null |
zhangMELAMultilingualEvaluation2024
|
MELA: Multilingual Evaluation of Linguistic Acceptability
|
Include
| null | null |
The paper introduces a multilingual acceptability judgement benchmark covering a diverse set of 10 languages, all annotated by expert linguists. The acceptability judgment task tests a language model’s ability to distinguish syntactically acceptable sentences from unacceptable ones in a human language. The paper establishes LLM baselines on this benchmark, and investigates cross-lingual transfer in acceptability judgements with XLM-R.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Linguistic Acceptability
|
Yes
|
The acceptability judgment task tests a language model’s ability to distinguish syntactically acceptable sentences from unacceptable ones.
|
Comprehensive
| null |
The acceptability judgment task tests a language model’s ability to distinguish syntactically acceptable sentences from unacceptable ones.
|
a sentence
| null |
Hand-written by linguists in the respective languages, taken from textbooks, handbooks, and journal articles in theoretical syntax, plus some examples taken from previous benchmarks.
|
46k
|
No
| null |
Random sample (creators defined a task space and sampled from it)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall), Matthews Correlation Coefficient (MCC), a measure of association between two binary distributions that takes values from -1 to 1 and always yields 0 for any two uncorrelated distributions, regardless of class imbalance.
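For reference, the standard MCC formula over a binary confusion matrix (TP, TN, FP, FN) is:

```latex
\mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}
```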
| null | null |
Academia
|
Yes
| null | null |
Test, Train, Validation
|
Train set: 33,293; validation: 3,970
| null |
Simple Mean
|
No
| null | null |
https://github.com/sjtu-compling/MELA
|
MELA
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
| null |
No
|
No
|
No
| null |
simple mean and standard deviation
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Multilinguality
| null | null |
['Expert-crafted']
|
['Random']
|
['Multiple choice']
|
['Exact match', 'Correlation']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
|
['Mean', 'Std']
|
etxanizLatxaOpenLanguage2024
|
Latxa: An Open Language Model and Evaluation Suite for Basque
|
Include
| null | null |
The paper introduces 4 multiple-choice evaluation datasets for Basque: EusProficiency, comprising 5,169 questions from official language proficiency exams; EusReading, comprising 352 reading comprehension questions; EusTrivia, comprising 1,715 trivia questions from 5 knowledge areas; and EusExams, comprising 16,774 questions from public examinations.
|
Another contribution of the paper is Latxa, a family of large language models for Basque ranging from 7 to 70 billion parameters. Latxa is based on Llama 2, which they continue pretraining on a new Basque corpus comprising 4.3M documents and 4.2B tokens.
|
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
language proficiency, knowledge and reasoning
|
No
| null |
Subset
|
The benchmark includes 4 different tasks
|
There are 4 tasks: reading comprehension, language proficiency, MCQ questions on Basque language and culture, and MCQ questions on Basque government
|
An MCQ question
| null |
Human exam questions (e.g. GRE questions)
|
~7.5k
|
No
| null |
Unknown
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
No
| null | null |
https://github.com/hitz-zentroa/latxa?tab=readme-ov-file
| null |
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
The benchmark is itself realistic
|
No
|
No
| null |
accuracy, F1, standard deviation
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Multilinguality
| null | null |
['Human exams']
|
['Unknown']
|
['Multiple choice']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Representative']
|
['Mean', 'Std', 'Other']
|
tangStrucbenchAreLarge2024
|
Struc-Bench: Are Large Language Models Good at Generating Complex Structured Tabular Data?
|
Include
| null | null |
The paper introduces a new benchmark to assess LLMs’ proficiency in structuring tables and introduces a novel fine-tuning method, cognizant of data structures, to bolster their performance.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Generating structured tabular data
|
Yes
|
LLMs are tasked with generating complex structured tables, a process that involves understanding both the content and the specific format requirements, such as LaTeX syntax. This task extends beyond simple text generation as it demands precision not just in content creation but also in adhering to a detailed and precise structural format.
|
Comprehensive
| null |
The task is generating structured tabular data.
|
text tables, HTML tables, and LaTeX tables and their description
| null |
Modified from another benchmark (e.g. translation into another language)
|
~16k
|
No
| null |
Random sample (creators defined a task space and sampled from it)
|
Structured response (e.g. valid JSON, API call alone)
|
P-Score (Prompting Score) and H-Score (Heuristical Score)
| null | null |
Academia
|
Yes
| null | null |
Test, Train
|
Train: 14.1k, Test:1700
| null |
Simple Mean
|
No
| null | null |
https://github.com/gersteinlab/Struc-Bench?tab=readme-ov-file
|
Struc-Bench
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
No
| null | null |
Code Generation
| null | null |
['Another benchmark']
|
['Random']
|
['Structured']
|
['LLM-as-a-Judge']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
|
['Mean']
|
riemenschneiderExploringLargeLanguage2023
|
Exploring Large Language Models for Classical Philology
|
Include
| null | null |
They define two probing tasks to investigate the knowledge acquired by models pre-trained on Classical texts. The experiments provide the first benchmarking analysis of existing models of Ancient Greek.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
| null |
No
|
The tasks are supposed to assess semantic and world knowledge in LLMs.
|
Comprehensive
| null |
Measuring semantic and world knowledge in LLMs
|
A sentence
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions)
|
~550
|
No
| null |
Unknown
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Link is provided but the data is not there
| null | null |
Test, Train
| null | null | null |
No
| null | null |
https://github.com/Heidelberg-NLP/ancient-language-models/tree/main
| null |
Not defined
| null |
Yes
|
Yes
|
No
| null |
No
|
No
|
No
| null | null |
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null | null | null | null | null |
Multilinguality
| null | null |
['Author-crafted']
|
['Unknown']
|
['Multiple choice']
|
['Exact match']
|
['No definition']
|
['']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
| null |
qiPreservingKnowledgeInvariance2023
|
Preserving Knowledge Invariance: Rethinking Robustness Evaluation of Open Information Extraction
|
Include
| null | null |
The paper introduces ROBUST, a benchmark designed to evaluate open information extraction models by measuring their ability to generalize knowledge extraction across syntactically diverse sentences that share the same semantic content.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
the generalization of open information extraction
|
Yes
|
[...] each example is a knowledge-invariant clique that consists of sentences with structured
knowledge of the same meaning but with different syntactic and expressive forms. [...] a model is judged to be robust if its performance is consistently accurate on the overall cliques.
|
Comprehensive
| null |
Open Information Extraction (OpenIE) aims to extract n-ary knowledge tuples {(a1,p,a2,...,an)} consisting of n arguments and one predicate from the natural text.
|
Sentences with arguments and one predicate form a set (clique), where sentences are semantically invariant.
|
The base task is OpenIE. Each tuple of sentence+arguments+predicate within a clique is analyzed. The "meta-task" is doing well on the worst tuple within one clique.
|
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template)
|
1272 cliques, 4971 sentences
|
No
| null |
Random sample (creators defined a task space and sampled from it), Convenience sample (creators found a set of tasks that was readily accessible)
|
Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
|
I agree that studying the minimum score achieved by a set of semantically equivalent items captures a notion of robustness. However, the authors often mention "distribution shift". Unfortunately, it is not clear what the training distribution is and what the test distribution is in this work, and consequently it is not clear how the distribution shifts between the two. In my humble opinion, "distributional shift" is a misnomer; they just "enrich the existing data generating process", not change it.
| null |
Test
| null |
n-tuples of text are extracted from the response.
|
Simple Mean
|
No
| null |
minimum
|
https://github.com/qijimrc/ROBUST
|
ROBUST
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
No
| null |
For each tuple, the F1 is computed, then across a clique the minimum is computed and aggregated across the dataset as mean.
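A minimal sketch of this aggregation, assuming per-tuple F1 scores have already been computed and grouped by clique (the input format is an assumption for illustration):

```python
# Minimal sketch of ROBUST-style aggregation: per-tuple F1 scores are grouped
# by clique, the minimum within each clique is taken, and the minima are
# averaged over the dataset. The input format is an assumption for illustration.
def robust_score(clique_f1s: dict[str, list[float]]) -> float:
    """clique_f1s maps a clique id to the F1 score of each tuple in that clique."""
    minima = [min(scores) for scores in clique_f1s.values() if scores]
    return sum(minima) / len(minima) if minima else 0.0

# e.g. robust_score({"c1": [0.9, 0.4], "c2": [0.8]}) -> (0.4 + 0.8) / 2 = 0.6
```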
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
NLP
|
Extraction
| null |
['Author-crafted', 'Another benchmark', 'Procedurally-generated']
|
['Random', 'Convenience']
|
['Structured']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Representative']
|
['Mean']
|
shahWhenFLUEMeets2022
|
WHEN FLUE MEETS FLANG: Benchmarks and Large Pre-trained Language Model for Financial Domain
|
Include
| null | null |
The paper introduces the Financial Language Understanding Evaluation (FLUE), an open-source, comprehensive suite of benchmarks for the financial domain. These include new benchmarks across 5 NLP tasks in the financial domain as well as common benchmarks used in previous research.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
natural language understanding in the financial domain
|
Yes
|
The ability of LLMs to perform across 5 financial tasks such as financial sentiment analysis, news headline classification, named entity recognition, structure boundary detection, and question answering.
|
Subset
| null |
The task is defined as evaluating language models on a suite of five financial domain NLP tasks: financial sentiment analysis, news headline classification, named entity recognition, structure boundary detection, and question answering.
|
N/A, for every task there will be a respective item
| null |
Real task examples (e.g. GitHub issues), Modified from another benchmark (e.g. translation into another language)
|
969, 234, 2282, 302, 131, 333
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train, Validation
|
for all 5 tasks: 19,367 and 2,674
| null |
Simple Mean
|
No
| null | null |
https://salt-nlp.github.io/FLANG/
|
FLUE
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
No
| null |
Simple mean: F1 scores and accuracy; MSE; nDCG and MRR; perplexity.
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Composite phenomenon
|
Yes
| null | null |
Finance
| null | null |
['Real task', 'Another benchmark']
|
['Convenience', 'Targeted', 'Criterion']
|
['Short free response', 'Structured']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
|
['Mean', 'Other']
|
kalyanWikiDONewBenchmark2024
|
WikiDO: A New Benchmark Evaluating Cross-Modal Retrieval for Vision-Language Models
|
Include
| null | null |
The authors argue that current VLM benchmarks are insufficient to assess the OOD generalization capability of models due to high visual and linguistic similarity between the evaluation and finetuning datasets. They propose WikiDO, which consists of image-text data derived from the Wikipedia Diversity Observatory, a diverse source of Wikipedia articles spanning several diversity axes, including geography, gender, ethnicity, and domains/topics.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Generalization / OOD performance
|
No
| null |
Subset
| null |
The proposed dataset can be used for both image-to-text, i.e. retrieve the most relevant textual description(s) from a set, and text-to-image retrieval, i.e. retrieve the most relevant image(s) from a dataset.
|
A single row in the dataset will have the path of the image, the Wiki ID of the image, the reference text from Wikipedia, the title of the wikipedia article, the topic label from Wikipedia Diversity Observatory and the generated caption of the image
| null |
Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
train: 384K pairs, 2 test sets (ID and OOD) of size 3K each.
|
Yes
|
topic
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Retrieval
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train
|
train: 384K pairs, 2 test sets (ID and OOD) of size 3K each.
| null |
Simple Mean
|
Yes
|
In-distribution vs Out-of-distribution
|
pass@k (any correct answer in k trials)
|
https://huggingface.co/datasets/Pavankalyan/WikiDO
|
WikiDO
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
Yes
|
The authors show that across various settings, nearly all models perform better on in-distribution (ID) data than on out-of-distribution (OOD) data, except for CLIP, which performs equally well in both settings.
|
simple mean
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
No
| null | null |
Retrieval
| null | null |
['Another benchmark', 'Procedurally-generated', 'LLM-generated']
|
['Targeted']
|
['Short free response']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Mean']
|
marchisioUnderstandingMitigatingLanguage2024
|
Understanding and Mitigating Language Confusion in LLMs
|
Include
| null | null |
The paper introduces a benchmark to measure language confusion in LLMs. They investigate language confusion on the line and word level in two practical settings: a) Monolingual generation, where a user queries the LLM in a given language, implicitly requesting an answer in the same language; and b) cross-lingual generation, where a user explicitly instructs a model to generate text in a different language.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Language Confusion
|
Yes
|
LLMs are often unable to consistently generate text in the user’s desired language, or the
appropriate language given the context. They call this category of error “language confusion”.
|
Subset
| null |
They investigate language confusion on the line and word level in two practical settings: a) Monolingual generation, where a user queries the LLM in a given language, implicitly requesting an answer in the same language; and b) cross-lingual generation, where a user explicitly instructs a model to generate text in a different language.
|
A sentence (prompt)
| null |
Modified from another benchmark (e.g. translation into another language); for some part of the data they include human-generated prompts
|
7100
|
Yes
|
Language of the prompt and the original data source
|
Random sample (creators defined a task space and sampled from it)
|
Free response (e.g. summary paragraph)
|
The paper introduces two new metrics for language confusion: line-level pass rate (LPR) and word-level pass rate (WPR).
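A rough sketch of a line-level pass rate under one plausible reading (a response passes if every non-empty line is detected as the desired language); the exact definitions are given in the paper, and detect_language is a hypothetical stand-in for a language identifier:

```python
# Rough sketch of a line-level pass rate (LPR) under one plausible reading:
# a response passes if every non-empty line is detected as the target language.
# `detect_language` is a hypothetical stand-in for a language-identification
# function; the exact metric definitions are those given in the paper.
def line_level_pass_rate(responses: list[str], target_lang: str,
                         detect_language) -> float:
    passed = 0
    for response in responses:
        lines = [line for line in response.splitlines() if line.strip()]
        if lines and all(detect_language(line) == target_lang for line in lines):
            passed += 1
    return passed / len(responses) if responses else 0.0
```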
| null | null |
Industry
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
No
| null | null |
https://github.com/for-ai/language-confusion
|
LCB
|
Contested
|
Yes
|
Yes
|
Yes
|
No
| null |
The benchmark is itself realistic
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Multilinguality
| null | null |
['Another benchmark', 'Author-crafted']
|
['Random']
|
['Free response']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Constructed']
|
['Mean']
|
itoGeneralizationCapacityNeural2024
|
On the generalization capacity of neural networks during generic multimodal reasoning
|
Include
| null | null |
The paper introduces gCOG, a multimodal reasoning dataset designed to measure various types of OOD generalisation (distractor generalisation, systematic compositional, and productive compositional). The authors train various encoder architectures from scratch and compare their performances. Transformers can systematically generalise at scale, but no architectures can productively generalise.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Multimodal generalisation
|
Yes
|
"OOD generalization – the ability to perform tasks beyond the training distribution" (1)
|
Comprehensive
| null |
Models are given an 8x8 grid containing multicoloured letters at different indices, and must follow a binary tree of "if-then-else" instructions to answer a question like "Get the position of the orange 't'".
|
A query in natural language, an image of an 8x8 grid in some .jpg-like format, and a correct answer, which is either a shape ("d"), a colour ("orange"), or a location ((5, 4)).
|
The concrete dataset used for their evaluation is not provided; only a generator object in Python is given.
|
Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template)
| null |
Yes
|
task tree depth, num distractors
|
Random sample (creators defined a task space and sampled from it), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Industry
|
Yes
| null | null | null | null | null |
Simple Mean
|
Yes
|
IID and OOD accuracy on varying numbers of distractors and tree depths
| null |
https://github.com/IBM/gcog
|
Generic COG (gCOG)
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
Yes
|
"Identifying neural architectures that can robustly generalize OOD is a central goal in artificial intelligence. Compositional generalization benchmarks, which explicitly evaluate for generalization, provide a good testbed for measuring these capabilities" (9)
|
simple mean/sum
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null | null |
Language Modelling
|
Adaptability
| null |
['Another benchmark', 'Procedurally-generated']
|
['Random', 'Criterion']
|
['Short free response']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Mean']
|
liMultimodalArXivDataset2024
|
Multimodal ArXiv: A Dataset for Improving Scientific Comprehension of Large Vision-Language Models
|
Include
| null | null |
Multimodal ArXiv consists of ArXivCap, a figure-caption dataset sourced from scientific papers, and ArXivQA, a QA dataset generated by prompting GPT-4V for QA pairs on ArXivCap entries. Results show that fine-tuning on these datasets boosts performance on the MathVista benchmark, and that evaluation results for various scientific plot comprehension subtasks are poor.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
comprehending scientific plots
|
No
| null |
Subset
|
The phenomenon is vaguely defined but the tasks are precisely defined
|
Vision-to-text subtasks: caption a single (or multiple) scientific figure(s), including an in-context learning subtask, and generate paper titles given figures and captions.
|
A ground truth paper title and a list of scientific figures and corresponding captions
| null |
Real task examples (e.g. GitHub issues), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
100,000
|
Yes
|
arXiv domain, arXiv DOI
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Short free response (e.g. single word or number), Free response (e.g. summary paragraph)
|
n-gram (BLEU, ROUGE, chrF), LLM post-processing (extracting answers, reformatting for automated scoring)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
No
| null | null |
https://huggingface.co/datasets/MMInstruction/ArxivQA; https://huggingface.co/datasets/MMInstruction/ArxivCap
|
Multimodal ArXiv
|
Not defined
|
Yes
| null |
Yes
|
Yes
|
No
|
The benchmark is itself realistic
|
Yes
|
Yes
|
"after training the model on QA pairs from each domain... Most domains hurt the Figure QA task. This suggests that synthetic Figure QA might not be the best benchmark for assessing realistic reasoning ability." (14373-4)
"our Multimodal ArXiv dataset sources from ArXiv papers due to their accessibility and open-source licenses. This approach may overlook the diversity of disciplines and data modalities present in the broader scientific literature." (14378)
|
simple mean/sum
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Composite phenomenon
|
Yes
| null | null |
VQA
|
Understanding
| null |
['Real task', 'LLM-generated']
|
['Targeted']
|
['Short free response', 'Free response']
|
['Soft match', 'LLM post-processing']
|
['No definition']
|
['Yes']
|
['']
|
['Realistic']
|
['Yes']
|
['Partial']
|
['Mean']
|
zouVGBenchEvaluatingLarge2024
|
VGBench: Evaluating Large Language Models on Vector Graphics Understanding and Generation
|
Include
| null | null |
The paper introduces VGBench, a comprehensive benchmark for vector graphics images that tests both visual understanding and generation. Formats like SVG, TikZ, and Graphviz are included, and performance is generally strong, though LLMs do worse with the lower-level SVG format.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
processing vector graphics
|
No
| null |
Comprehensive
| null |
For the QA task (VGQA), models are given a vector graphics representation (in textual format) and a multiple choice question about a high-level feature of the image, like the colour of a depicted entity. For the generation task (VGen), models must generate vector graphics code from a textual description.
|
For VGQA: a snippet of vector graphics code, a question with multiple choice answers, and a correct answer.
For VGen: a textual description, the desired output format (e.g. SVG), and some ground truth vector graphics code.
| null |
Real task examples (e.g. GitHub issues), Modified from another benchmark (e.g. translation into another language), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
10,124
|
Yes
|
vector graphic format
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Multiple choice, Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF), LLM post-processing (extracting answers, reformatting for automated scoring)
| null | null |
Academia
|
Yes
| null | null |
Test
|
4,279 examples in VGQA, 5,845 examples in VGen
| null |
Simple Mean
|
Yes
|
vector graphics format and question subtype (e.g. "Domain", "Layout", "Relation" questions)
| null |
https://huggingface.co/datasets/vgbench/VGen; https://huggingface.co/datasets/vgbench/VGQA
|
VGBench
|
Widely-agreed
|
Yes
|
Yes
|
No
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
simple mean/sum
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Composite phenomenon
|
Yes
| null | null |
Instruction Following
| null | null |
['Real task', 'Another benchmark', 'LLM-generated']
|
['Convenience']
|
['Multiple choice', 'Structured']
|
['Exact match', 'Soft match', 'LLM post-processing']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
|
['Mean']
|
zhangXSemPLRCrosslingualSemantic2023
|
XSemPLR: Cross-Lingual Semantic Parsing in Multiple Natural Languages and Meaning Representations
|
Include
| null | null |
The paper introduces XSEMPLR, a unified benchmark for cross-lingual semantic parsing featuring 22 natural languages and 8 meaning representations by examining and selecting 9 existing datasets to cover 5 tasks and 164 domains. They use XSEMPLR to conduct a benchmark study on a wide range of multilingual language models, including encoder-based models (mBERT, XLM-R), encoder-decoder models (mBART, mT5), and
decoder-based models (Codex, BLOOM). The findings show that large multilingual
language models are still inadequate for performing CLSP tasks. They also find that the performance gap between monolingual training and cross-lingual transfer learning is still significant for multilingual models, though it can be mitigated by cross-lingual few-shot training.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
cross-lingual semantic parsing
|
Yes
|
Cross-Lingual Semantic Parsing (CLSP) aims to translate queries in multiple natural languages (NLs) into meaning representations (MRs).
|
Comprehensive
| null |
The task is to train a model to convert a sentence in natural language into a meaning representation (e.g., SQL, programming code, Prolog, Functional Query Language, etc.).
|
A pair of input and output, where the input is a text in natural language and the output is the text of its meaning representation
| null |
Modified from another benchmark (e.g. translation into another language)
|
Train set: ~42k, test set: ~7500, Dev set: ~5500
|
No
| null |
Random sample (creators defined a task space and sampled from it)
|
Free response (e.g. summary paragraph)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train, Validation
| null | null |
Simple Mean
|
No
| null | null |
https://github.com/psunlpgroup/XSemPLR
|
XSEMPLR
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
Simple mean
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Multilinguality
| null | null |
['Another benchmark']
|
['Random']
|
['Free response']
|
['Exact match', 'Soft match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
|
['Mean']
|
sunInformalLanguageProcessing2024
|
Toward Informal Language Processing: Knowledge of Slang in Large Language Models
|
Include
| null | null |
Using movie subtitles, the authors construct a dataset that supports evaluation on a diverse
set of tasks pertaining to the automatic processing of slang. For both evaluation and finetuning, they show the effectiveness of their dataset on two core applications: 1) slang detection, and 2) identification of regional and historical sources of slang from natural sentences.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
informal language processing (Knowledge of slang in LLMs)
|
No
|
They focus on two core tasks for informal language processing. First, they evaluate the extent to which LLMs can reliably detect slang usages in natural sentences. Second,
they assess whether LLMs can be used to identify regional-historical sources of slang via a text classification task.
|
Subset
| null |
Task 1: Given a set of sentences, they evaluate slang detection at both the sentence level and the word level.
Task 2: Given a sentence containing a slang usage, they ask the model to classify its source (e.g. region and age).
|
a sentence of natural language
| null |
Crowd-sourced task examples (e.g. Prolific-created tasks)
|
25,000
|
Yes
|
Annotator confidence, Movie ID, Region, Year
|
Random sample (creators defined a task space and sampled from it)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall); they also report two metrics to compare an LLM’s predictive confidence in slang usages relative to their literal counterparts.
| null |
The benchmark is built on top of the OpenSubtitles corpus.
|
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train
| null | null |
Simple Mean
|
No
| null | null |
https://github.com/amazon-science/slang-llm-benchmark
| null |
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null | null |
Multilinguality
| null | null |
['Crowd-sourced']
|
['Random']
|
['Multiple choice']
|
['Exact match', 'Correlation']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
|
['Mean']
|
wangPretrainingLanguageModel2023
|
ON PRE-TRAINED LANGUAGE MODELS FOR ANTIBODY
|
Include
| null | null |
This paper introduces the AnTibody Understanding Evaluation (ATUE) benchmark to systematically assess the representation capabilities of general and antibody-specific pre-trained language models across a range of antibody-related tasks. It also explores how incorporating biological mechanisms into pre-training can enhance model performance and evaluates the transferability of learned representations to real-world applications such as drug discovery and immune system analysis.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
LLMs' capability to do antibody representation learning and biological reasoning with sequence specificity
|
Yes
|
how LLMs perform in antibody tasks with different specificity and how introducing specific biological mechanisms to the pre-training process can benefit the model. Additionally, authors evaluate if the learned antibody pre-trained representations can be applied to real-world antibody problems, like drug discovery and immune process understanding.
|
Subset
| null |
Evaluate the ability of pre-trained language models to perform on four supervised antibody-related prediction tasks—antigen binding, paratope prediction, B cell maturation classification, and SARS-CoV-2 antibody discovery—each varying in antibody specificity. These tasks assess whether the models can capture biologically meaningful information from antibody sequences.
|
N/A; there are four tasks
| null |
Real task examples (e.g. GitHub issues)
|
3242, 1662, 88094, 22000
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall), Matthews Correlation Coefficient (MCC), and AUC (Area Under the ROC Curve)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train, Validation
|
15,128/3,242 , N/A
| null |
Simple Mean
|
No
| null | null |
https://github.com/dqwang122/EATLM
|
ATUE
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null | null |
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Composite phenomenon
|
Yes
| null | null |
Biology
| null | null |
['Real task']
|
['Convenience', 'Targeted', 'Criterion']
|
['Structured']
|
['Exact match', 'Correlation']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
| null |
bajpaiCanLLMsReplace2024
|
Can LLMs replace Neil deGrasse Tyson? Evaluating the Reliability of LLMs as Science Communicators
|
Include
| null | null |
This paper focuses on evaluating the reliability of current LLMs as science communicators. They introduce a dataset, SCiPS-QA, comprising 742 Yes/No queries embedded in complex
scientific concepts, along with a benchmarking suite that evaluates LLMs for correctness and consistency across various criteria. They also benchmark three proprietary LLMs from the OpenAI GPT family and 13 open-access LLMs from the Meta Llama-2, Llama-3, and Mistral families.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Reliability of LLMs as Science Communicators
|
No
|
Can existing LLMs successfully and faithfully answer scientific reasoning questions that require understanding the nuances of scientific knowledge?
|
Comprehensive
| null |
A binary (Yes/No) classification task where the model is asked to answer a scientific question.
|
A question in science
| null |
Not explained
|
742
|
Yes
|
topic, date
|
Unknown
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
No
| null | null |
https://github.com/Prasoon1207/llm-science-miscommunication/blob/main/data/data.csv
|
SCiPS-QA
|
Not defined
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
No
| null |
Simple mean and standard deviation
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
General Science
| null | null |
['Unknown']
|
['Unknown']
|
['Multiple choice']
|
['Exact match']
|
['No definition']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
|
['Mean', 'Std']
|
hauserLargeLanguageModelsExpertlevel2024
|
Large Language Models' Expert-level Global History Knowledge Benchmark (HiST-LLM)
|
Include
| null | null |
The paper introduces the History Seshat Test for LLMs (HiST-LLM), based on a subset of the Seshat Global History Databank, which provides a structured representation of human historical knowledge, containing 36,000 data points across 600 historical societies and over
2,700 scholarly references. Using this dataset, they benchmark a total of seven models from the Gemini, OpenAI, and Llama families.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
LLM's Expert-level Global History Knowledge
|
No
|
The ability of the model to answer expert-level history questions.
|
Comprehensive
| null |
The task is to ask the model a multiple-choice question about history.
|
A multiple-choice question
| null |
Human expert created the examples
|
36000
|
No
| null |
Random sample (creators defined a task space and sampled from it)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
No
| null | null |
https://github.com/seshat-db/HiST-LLM
|
HiST-LLM
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
Mean and standard deviation
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
History
| null | null |
['Expert-crafted']
|
['Random']
|
['Multiple choice']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
|
['Mean', 'Std']
|
sadatMSciNLIDiverseBenchmark2024
|
MSciNLI: A Diverse Benchmark for Scientific Natural Language Inference
|
Include
| null | null |
This paper introduces MSCINLI, a new dataset comprising 132,320 sentence pairs from five diverse scientific domains to enhance the study of scientific Natural Language Inference (NLI). Baseline models, including fine-tuned and prompted LLMs, reveal the dataset's challenging nature, as well as performance degradation due to domain shifts, highlighting the unique characteristics of each domain. Additionally, employing both scientific NLI datasets in intermediate task transfer learning showcases improvements in downstream scientific tasks.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Natural language inference (semantic relationship between two sentences), scientific domains
|
Yes
|
predicting the semantic relation between two sentences extracted from research articles
|
Comprehensive
| null |
sentence pairs, multiple choice on semantic relation between sentences
| null |
question, prompt, domain, class, difficulty, response correct/score
|
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language)
|
127,320
|
Yes
|
difficulty, domain
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test, Train
| null | null |
Simple Mean
|
Yes
|
difficulty
| null |
GitHub, huggingface
|
MSciNLI
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
Yes
|
Yes
|
No
| null |
mean and variance, t-tests
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
General Science
| null | null |
['Author-crafted', 'Another benchmark']
|
['Targeted', 'Criterion']
|
['Multiple choice']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Comparison made']
|
['No']
|
['Representative']
|
['Mean', 'Std', 'Tests']
|
dengNewTermBenchmarkingRealtime2024
|
NewTerm: Benchmarking Real-Time New Terms for Large Language Models with Annual Updates
|
Include
| null | null |
This paper introduces NewTerm, an adaptive benchmark designed for the real-time evaluation of new terms in large language models (LLMs) to address their struggle with real-time information due to knowledge cutoffs. The benchmark is constructed using a highly automated method allowing flexible and minimal human effort updates, revealing a performance reduction of over 20% on various LLMs with new terms and highlighting difficulties in generalizing to distant new terms. Annual updates to NewTerm, starting with 2022 and 2023, are planned to continuously assess and analyze the evolving challenge of new terms in LLMs.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Updating of knowledge, real-time evaluation of new terms introduced after knowledge cutoff
|
Yes
|
flexible updates for real-time information
|
Comprehensive
| null |
Answer questions about new dictionary terms introduced after the knowledge cutoff
|
Question, multiple choice answers, response, correct
| null |
Real task examples (e.g. GitHub issues), Procedurally-generated task examples (e.g. Creating instances from a template)
| null |
Yes
|
Domains: The Choice of Multiple Alter (COMA), The Choice of Similar Terms (COST), Common Sense Judgement (CSJ)
|
Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Domains: The Choice of Multiple Alter (COMA), The Choice of Similar Terms (COST), Common Sense Judgement (CSJ)
| null |
GitHub
|
NewTerm
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No
|
The benchmark is itself realistic
|
No
|
No
| null |
simple mean/sum
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Language Modelling
|
Updating
| null |
['Real task', 'Procedurally-generated']
|
['Criterion']
|
['Multiple choice']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Representative']
|
['Mean']
|
yeRoTBenchMultilevelBenchmark2024
|
RoTBench: A Multi-Level Benchmark for Evaluating the Robustness of Large Language Models in Tool Learning
|
Include
| null | null |
LLMs are increasingly deployed in settings where they can use tools, e.g. call functions to retrieve real-time information on weather. This paper proposes a benchmark measuring the robustness of LLMs in selecting tools when these are specified under noise (e.g. the function name is perturbed).
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
tool use when tool names or arguments are mislabeled
|
No
|
LLMs should exhibit consistent tool use when tools or their arguments are mislabeled.
|
Subset
| null | null |
Prompt + list of available tools + ground-truth tool + ground-truth arguments
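A hypothetical sketch of what such an item could look like; the field names, tool, and arguments are invented for illustration and are not taken from RoTBench:

```python
# Hypothetical sketch of a RoTBench-style item; the field names, tool, and
# arguments are invented for illustration and not taken from the benchmark.
rotbench_item_sketch = {
    "prompt": "What is the weather in Paris tomorrow?",
    "available_tools": [
        {"name": "get_weather", "args": ["location", "date"]},
        {"name": "get_gps_coordinates", "args": ["location"]},
    ],
    "ground_truth_tool": "get_weather",
    "ground_truth_args": {"location": "Paris", "date": "tomorrow"},
}
```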
| null |
Procedurally-generated task examples (e.g. Creating instances from a template)
|
735
|
No
| null |
Random sample (creators defined a task space and sampled from it), Convenience sample (creators found a set of tasks that was readily accessible)
|
Free response (e.g. summary paragarph), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
| null |
existing benchmark + small perturbations
|
Academia
|
Yes
| null |
A) The noise induced in the benchmark significantly alters the *expected behaviour* of the model. For instance, imagine "Get_GPS_COORDINATES: This tool is used for fetching weather information for a specified location." as a perturbation of "Get_WEATHER: This tool is used for fetching weather information for a specified location." Clearly, the inconsistency between the function name and its docstring changes the expected behaviour of the model, and hence "consistent" behaviour is not necessarily a sign of robustness. This casts doubt on the construct validity of "Robust Tool Use". On a positive note, the authors test human performance and humans score between 69% and 89%, showing the task is still somewhat possible for humans.
B) The authors built their dataset by perturbing an existing dataset, but their explanation of that dataset is negligible. It should be best practice to at least explain what the original dataset's task is exactly, along with its size and limitations.
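As a minimal sketch of the kind of mislabelling described in point A), the snippet below reattaches tool names to the wrong descriptions while leaving the descriptions intact; the tool specs are invented and this is not RoTBench's actual perturbation pipeline.

```python
import copy

TOOLS = [
    {"name": "get_weather", "description": "Fetch weather information for a specified location."},
    {"name": "get_gps_coordinates", "description": "Return latitude/longitude for a place name."},
]

def perturb_tool_names(tools):
    """Rotate the tool names so each name ends up attached to the wrong description."""
    perturbed = copy.deepcopy(tools)
    names = [t["name"] for t in perturbed]
    rotated = names[1:] + names[:1]
    for tool, new_name in zip(perturbed, rotated):
        tool["name"] = new_name
    return perturbed

for tool in perturb_tool_names(TOOLS):
    print(tool["name"], "->", tool["description"])
```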
|
Test, Train
| null | null |
Simple Mean
|
Yes
|
different intermediate stages toward a full success.
| null |
https://github.com/Junjie-Ye/RoTBench
|
RoTBench
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
No
| null | null |
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null | null |
Agents
|
Tool Use
| null |
['Procedurally-generated']
|
['Random', 'Convenience']
|
['Free response', 'Structured']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
| null |
maMMLONGBENCHDOCBenchmarkingLongcontext2024
|
MMLONGBENCH-DOC: Benchmarking Long-context Document Understanding with Visualizations
|
Include
| null | null |
The paper presents a long-context multimodal benchmark dataset of more than 1k expert annotated questions over long PDFs which require aggregating evidence across multiple locations and evidence formats (text, image, charts, etc.) to answer. MMLongBench-Doc presents a challenge for strong models such as GPT-4o and other large vision language models (LVLMs), demonstrating the need for improved long-context LVLM capabilities.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
long-context document understanding
|
Yes
|
"the automatic understanding of [long-context] documents. The understanding of these lengthy documents brings new challenges for LVLMs", including localization and cross-page comprehension
|
Comprehensive
| null |
Give a document to a model and have it answer a question regarding information in the document.
|
Documents are PDF files. Questions are stored in json format with the following attributes: document ID, document type, question, answer, evidence pages, evidence sources, and answer format.
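A hypothetical question record in the JSON layout described above; the keys mirror the listed attributes, but the exact key names and values are invented.

```python
# Invented MMLongBench-Doc-style record; keys follow the attribute list above.
question_record = {
    "doc_id": "annual_report_2022.pdf",
    "doc_type": "financial report",
    "question": "What year-over-year revenue growth is reported in the highlights chart?",
    "answer": "12%",
    "evidence_pages": [3, 47],
    "evidence_sources": ["chart", "pure-text"],
    "answer_format": "string",
}
```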
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template)
|
1082
|
Yes
|
evidence source, answer format, question length statistics, answer length statistics, document length statistics
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
type of evidence source, number of evidence pages involved in answering the question, document type
| null |
https://github.com/mayubo2333/MMLongBench-Doc
|
MMLongBench-Doc
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null | null |
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
NLP
|
Long Context
| null |
['Author-crafted', 'Another benchmark', 'Procedurally-generated']
|
['Targeted', 'Criterion']
|
['Short free response']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
| null |
kuratovBABILongTestingLimits2024
|
BABILong: Testing the Limits of LLMs with Long Context Reasoning-in-a-Haystack
|
Include
| null | null |
The BABILong benchmark tests language models’ ability to reason across facts distributed in extremely long documents in the reasoning setting, scattering relevant facts among less relevant natural text. The paper finds LLMs only effectively use less than 20% of the context in such settings, with reasoning complexity negatively impacting performance. Multiple methods including in-context reasoning, retrieval augmented generation, and context extension are applied to profile model capabilities in these long-context tasks.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
language models’ ability to reason across facts distributed in extremely long documents
|
Yes
|
"language models’ ability to reason across facts distributed in extremely long documents"
|
Comprehensive
| null |
Perform one of 20 reasoning tasks (e.g., fact chaining, simple induction, deduction, counting, and handling lists/sets), generally presented in question format, given a long context with relevant and distracting articles.
|
A long-context input text, question, and the question's answer based on the input
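A rough sketch, under stated assumptions, of the "reasoning-in-a-haystack" construction: a few relevant facts are scattered among distractor sentences to reach a target context length. The fact/question pair below is invented; BABILong itself derives facts and questions from the bAbI tasks and pads them with natural background text.

```python
import random

def build_haystack(facts, distractors, seed=0):
    """Scatter the relevant facts at random positions among distractor sentences."""
    rng = random.Random(seed)
    sentences = list(distractors)
    for fact in facts:
        sentences.insert(rng.randrange(len(sentences) + 1), fact)
    return " ".join(sentences)

facts = ["Mary moved to the kitchen.", "Mary picked up the apple."]
distractors = [f"Unrelated filler sentence number {i}." for i in range(20)]
context = build_haystack(facts, distractors)
question = "Where is the apple?"  # expected short answer: "kitchen"
print(len(context.split()), "words of context")
```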
| null |
Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template)
| null |
Yes
|
facts per task, relevant facts per task, reasoning task type
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
No
|
input length, task type, context size
| null |
https://github.com/booydar/babilong
|
BABILong
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
Yes
|
Advantages of the benchmark over existing related benchmarks are discussed based on their design and a correlation study, and the content of the benchmark and the relation between model performance and capability are analyzed.
|
simple mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
NLP
|
Long Context
| null |
['Another benchmark', 'Procedurally-generated']
|
['Targeted', 'Criterion']
|
['Short free response']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Partial']
|
['Mean']
|
wangAdaLEvalEvaluatingLongcontext2024
|
Ada-LEval: Evaluating long-context LLMs with length-adaptable benchmarks
|
Include
| null | null |
Ada-LEval presents a length-adaptable benchmark for long-context understanding capabilities of LLMs, involving challenging questions for reliable evaluation and context lengths extending to the ultra-long setting. SOTA open and closed models are evaluated to demonstrate current limitations of LLMs in such settings.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
| null |
No
|
Context window is a notable factor in LLM performance and is critical to handling long texts. The effectiveness of LLMs in managing long text is still open for exploration and assessment.
|
Comprehensive
| null |
1. Take in a long text and arrange the text segments in the correct order.
2. Choose the best answer from multiple candidate answers to a question based on a given long text.
|
Not provided, but generally the task samples consist of either a question and many sample answers, or a series of texts to be rearranged (per the task definition).
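A small illustrative sketch of the text-sort setting from the task definition above: segments are shuffled, the model must recover the original order, and the prediction is scored by exact match on the full permutation. The scoring rule here is an assumption for illustration, not necessarily Ada-LEval's exact metric.

```python
import random

def make_ordering_item(segments, seed=0):
    """Shuffle segments and return (shuffled_segments, gold_order).
    gold_order[i] is the position in the shuffled list of the i-th original segment."""
    order = list(range(len(segments)))
    random.Random(seed).shuffle(order)
    shuffled = [segments[i] for i in order]
    gold_order = sorted(range(len(order)), key=order.__getitem__)
    return shuffled, gold_order

def score_ordering(predicted_order, gold_order):
    """All-or-nothing exact match on the predicted permutation (an assumption)."""
    return int(list(predicted_order) == list(gold_order))

segments = ["Paragraph A ...", "Paragraph B ...", "Paragraph C ...", "Paragraph D ..."]
shuffled, gold = make_ordering_item(segments)
print(shuffled, gold, score_ordering(gold, gold))
```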
| null |
Real task examples (e.g. GitHub issues), Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template)
|
over 80k
|
Yes
|
total samples per context length, max tokens, average number of tokens
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Free response (e.g. summary paragarph)
|
Exact Match (accuracy, F1, precision, recall), Distribution (perplexity, calibration, correlation), instruction following rate
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
context lengths from 2k to 16k
| null |
https://github.com/open-compass/Ada-LEval
|
Ada-LEval
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
Yes
|
Comparison with traditional long-context benchmarks such as GovReport demonstrates that Ada-LEval requires more overall text understanding to complete.
|
simple mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
NLP
|
Long Context
| null |
['Real task', 'Another benchmark', 'Procedurally-generated']
|
['Targeted', 'Criterion']
|
['Multiple choice', 'Free response']
|
['Exact match', 'Distribution', 'Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Partial']
|
['Mean']
|
zhangAnalyzingTemporalComplex2024
|
Analyzing Temporal Complex Events with Large Language Models? A Benchmark towards Temporal, Long Context Understanding
|
Include
| null | null |
TCELongBench assesses LLMs’ ability to leverage temporal dynamics when understanding extensive texts. Experiments find that retrieval-augmented generation and long-context modeling are fairly effective for handling such tasks.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
complex event analysis, handling temporal dynamics, understanding extensive text
|
Yes
|
"[Temporally complex events] consist of semantically related articles that together narrate the development of various entities over time... a TCE may span tens of news articles and then tens of thousands of tokens"
|
Subset
| null |
The task is defined in three specific QA settings:
1. Finding and understanding evidence across numerous articles
2. Understanding the order of temporal sequences
3. Predicting future events based on historical data
|
Each sample consists of the following fields: question, answer choices, answer, ground truth, and shuffled answer choices, along with meta-data concerning sample ID and the sample generation process.
| null |
Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
13124
|
Yes
|
question types, token counts, temporal duration
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Short free response (e.g. single word or number), Free response (e.g. summary paragarph)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF)
| null | null |
Academia
|
Yes
| null | null |
Test, Train, Validation
|
Train: 63050; Validation: 13334
| null |
Simple Mean
|
Yes
|
Metrics for the three different subtasks are provided, as well as results according to input length and input position.
| null |
https://github.com/Zhihan72/TCELongBench
|
TCELongBench
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
No
| null |
Simple mean
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
NLP
|
Long Context
| null |
['Another benchmark', 'Procedurally-generated', 'LLM-generated']
|
['Targeted', 'Criterion']
|
['Multiple choice', 'Short free response', 'Free response']
|
['Exact match', 'Soft match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Representative']
|
['Mean']
|
liLooGLECanLongcontext2024
|
LooGLE: Can Long-Context Language Models Understand Long Contexts?
|
Include
| null | null |
The paper presents a long-context benchmark over recent (post-2022) documents with new questions in diverse domains. LooGLE assesses LLMs’ long dependency capabilities and finds poor performance even with long context window LLMs.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
long-context processing and understanding
|
Yes
|
enabling LLMs to "process, comprehend, or even learn from long-context textual information"
|
Comprehensive
| null |
An extremely long text paired with a task direction for a long- or short-dependency understanding task, namely summarization, timeline reordering, calculation, multiple information retrieval, comprehension and reasoning, question answering, or cloze.
|
Each task item consists of the input text, document title, QA pairs, and output.
| null |
Crowd-sourced task examples (e.g. Prolific-created tasks), Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template)
|
6448
|
Yes
|
number of documents, avg # words, max # words, min # words, avg tokens, task type
|
Random sample (creators defined a task space and sampled from it), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Short free response (e.g. single word or number), Free response (e.g. summary paragarph)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics), Human accuracy evaluation
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
|
https://github.com/bigai-nlco/LooGLE
| null |
Test
| null | null |
Simple Mean
|
Yes
|
task type, context length
| null | null |
LooGLE
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
Simple mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
NLP
|
Long Context
| null |
['Crowd-sourced', 'Another benchmark', 'Procedurally-generated']
|
['Random', 'Targeted', 'Criterion']
|
['Multiple choice', 'Short free response', 'Free response']
|
['Exact match', 'Soft match', 'LLM-as-a-Judge', 'Human ratings']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
|
['Mean']
|
wangLeaveNoDocument2024
|
Leave No Document Behind: Benchmarking Long-Context LLMs with Extended Multi-Doc QA
|
Include
| null | null |
Loong is a long-context benchmark which aims to boost the realism of long-context capability evaluation by ensuring each document is relevant to the final answer, covering a range of context lengths and tasks. Various models are assessed on the benchmark, with RAG proving poor for improving performance.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
long-context understanding
|
Yes
|
"long-context understanding in real-world multi-document scenarios"
|
Comprehensive
| null |
An input is provided with a task instruction or question, which the model must answer by leveraging *all* context documents.
|
Each sample consists of a question, instruction, documents, and answer, along with meta-data regarding sample index, task type, and level.
| null |
Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
1600
|
Yes
|
length distribution, task type, avg tokens, language
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Short free response (e.g. single word or number), Free response (e.g. summary paragarph), Structured response (e.g. valid JSON, API call alone)
|
LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics), LLM post-processing (extracting answers, reformatting for automated scoring)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
task type, input length
| null |
https://github.com/MozerWang/Loong
|
Loong
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
NLP
|
Long Context
| null |
['Procedurally-generated', 'LLM-generated']
|
['Targeted', 'Criterion']
|
['Multiple choice', 'Short free response', 'Free response', 'Structured']
|
['LLM-as-a-Judge', 'LLM post-processing']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Representative']
|
['Mean']
|
senelCoDA21EvaluatingLanguage2022
|
CoDA21: Evaluating Language Understanding Capabilities of NLP Models With Context-Definition Alignment
|
Include
| null | null |
CoDA21 is a challenging benchmark to assess NLU capabilities of pretrained language models (PLMs). Performance of PLMs is assessed versus humans.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
natural language understanding
|
No
|
N/A -- not explicitly defined
|
Comprehensive
| null |
Given a set of contexts with masked target words and a set of definitions corresponding to these masked words, the task is to find the correct alignment between contexts and definitions.
|
Each sample consists of words and associated definitions.
| null |
Procedurally-generated task examples (e.g. Creating instances from a template)
| null |
Yes
|
statistics for groups of related words
|
Specific criteria (items were taken from a larger set based on specified rules)
|
Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall), cosine similarity, log generation probability
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
difficulty, clean vs. noisy
| null |
https://github.com/lksenel/CoDA21
|
CoDA21
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
No
| null |
simple mean
|
Model access required (e.g. logits)
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Authors' description is unclear
|
Not applicable
| null | null |
NLP
|
Understanding
| null |
['Procedurally-generated']
|
['Criterion']
|
['Structured']
|
['Exact match', 'Distribution']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
|
['Mean']
|
anLevalInstitutingStandardized2024
|
L-Eval: Instituting Standardized Evaluation for Long Context Language Models
|
Include
| null | null |
L-Eval presents a standardized evaluation suite for long-context language models consisting of 20 subtasks over long documents up to 200K tokens in length with diverse human-labeled query-response pairs. Evaluation metrics for long-context LLMs are compared for alignment with human judgment. Commercial and open-source LLMs are benchmarked.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
long-context capabilities
|
No
|
N/A -- phenomenon is only defined indirectly through details of the setting for the work
|
Comprehensive
| null |
Given a long input context, answer a relevant question.
|
Each sample consists of an input document, potential instructions, ground truth outputs, data source, and evaluation metrics.
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template)
| null |
Yes
|
avg tokens per input, max tokens per input, number of instructions per document, number of documents
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Short free response (e.g. single word or number), Free response (e.g. summary paragarph), Extended interaction (e.g. conversation, calling an API and processing the response)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF), Human ratings (text quality, preference, NOT manual scoring of other metrics), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics)
| null |
LLM filtering is used for quality control.
|
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
data source, input length
| null |
https://github.com/OpenLMLab/LEval
|
L-Eval
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
NLP
|
Long Context
| null |
['Author-crafted', 'Another benchmark', 'Procedurally-generated']
|
['Targeted', 'Criterion']
|
['Multiple choice', 'Short free response', 'Free response', 'Interaction']
|
['Exact match', 'Soft match', 'Human ratings', 'LLM-as-a-Judge']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
|
['Mean']
|
zhangMarathonRaceRealm2024
|
Marathon: A Race Through the Realm of Long Context with Large Language Models
|
Include
| null | null |
The paper presents the Marathon benchmark to evaluate comprehension and reasoning capabilities of LLMs over long texts. Marathon is used to assess SOTA LLMs and the efficacy of several existing long-context generation strategies.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
long-context comprehension and reasoning
|
Yes
|
"the capabilities of LLMs to comprehend long contexts"
|
Comprehensive
| null |
A long context is presented with a multiple-choice question.
|
Each sample is represented as the input context, question, and options.
| null |
Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
1530
|
Yes
|
distribution of context lengths per task
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
No
| null | null |
https://github.com/Hambaobao/Marathon
|
Marathon
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
NLP
|
Long Context
| null |
['Another benchmark', 'Procedurally-generated', 'LLM-generated']
|
['Targeted']
|
['Structured']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
|
['Mean']
|
zhangBenchExtendingLong2024
|
∞Bench: Extending Long Context Evaluation Beyond 100K Tokens
|
Include
| null | null |
The paper presents InfiniteBench, a new benchmark to evaluate LLMs’ ability to process, understand, and reason over ultra-long contexts over 100k tokens in length. InfiniteBench contains both real and synthetic tasks which present notable challenge to existing SOTA LLMs.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
long-context understanding and reasoning
|
Yes
|
"the ability to process long contexts is increasingly critical... Textual documents, historical
dialogues, complex instructions, and cumbersome workflows, which constitute the data most directly processed in daily tasks, must be input to LLMs as long contexts for effective processing."
|
Comprehensive
| null |
Take a long input context and task instruction and/or question and provide an answer.
|
Each sample is represented as the input context, task/question, answer options (if applicable), and ground truth answer.
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
3946
|
Yes
|
avg input length, avg output length, annotation method
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice, Short free response (e.g. single word or number), Free response (e.g. summary paragarph), Extended interaction (e.g. conversation, calling an API and processing the response)
|
LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics), LLM post-processing (extracting answers, reformatting for automated scoring)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
task type
| null |
https://github.com/OpenBMB/InfiniteBench
|
InfiniteBench
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
NLP
|
Long Context
| null |
['Author-crafted', 'Procedurally-generated', 'LLM-generated']
|
['Targeted']
|
['Multiple choice', 'Short free response', 'Free response', 'Interaction']
|
['LLM-as-a-Judge', 'LLM post-processing']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
|
['Mean']
|
xuStresstestingLongcontextLanguage2024
|
Stress-Testing Long-Context Language Models with Lifelong ICL and Task Haystack
|
Include
| null | null |
The paper introduces Lifelong ICL as a new long-context problem setting for LLMs and the Task Haystack evaluation suite to understand how LLMs utilize contexts in the Lifelong ICL task. Many long-context LMs are benchmarked, and contributors to failure cases are identified.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
lifelong in-context learning
|
Yes
|
"Lifelong ICL, a new problem setting that challenges these models to learn a sequence of tasks via in-context learning"
|
Comprehensive
| null |
Given a task instruction and test inputs, leverage the relevant demonstrations in the Lifelong ICL prompt, avoid distraction and interference from other tasks, and achieve test accuracies that are not significantly worse than those of the Single-task ICL baseline.
|
Each sample is represented by the input context and target answer.
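A rough sketch, with invented task names and demonstrations, of how a Lifelong ICL prompt of the kind described above could be assembled: demonstrations for several tasks are concatenated in sequence and the model is then queried on one of them.

```python
def lifelong_icl_prompt(task_demos, query_task, query_input):
    """Concatenate few-shot demos for several tasks, then append a query for one task."""
    parts = []
    for task, demos in task_demos.items():
        parts.append(f"Task: {task}")
        parts.extend(f"Input: {x}\nOutput: {y}" for x, y in demos)
    parts.append(f"Task: {query_task}\nInput: {query_input}\nOutput:")
    return "\n\n".join(parts)

demos = {
    "sentiment": [("great movie", "positive"), ("boring plot", "negative")],
    "topic": [("the team won the final", "sports")],
}
print(lifelong_icl_prompt(demos, "sentiment", "what a waste of time"))
```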
| null |
Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template)
| null |
No
| null |
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice, Short free response (e.g. single word or number), Free response (e.g. summary paragarph), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall), pass rate
| null | null |
Academia
|
Yes
| null | null |
Test, Train
| null | null |
Simple Mean
|
Yes
|
number of shots
| null |
https://github.com/INK-USC/Lifelong-ICL
|
Task Haystack
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null | null |
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Language Modelling
|
In-context Learning
| null |
['Another benchmark', 'Procedurally-generated']
|
['Targeted']
|
['Multiple choice', 'Short free response', 'Free response', 'Structured']
|
['Exact match', 'Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
| null |
kwanM4LEMultiabilityMultirange2024
|
M4LE: A Multi-Ability Multi-Range Multi-Task Multi-Domain Long-Context Evaluation Benchmark for Large Language Models
|
Include
| null | null |
The paper introduces a comprehensive multi-range, multi-ability, multi-task, multi-domain benchmark for long context processing in LLMs. Analysis confirms LLMs struggle to handle long contexts, especially when multiple input spans are involved. Several long context methods are compared.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
long-context processing
|
Yes
|
"processing instructions based on long sequences"
|
Comprehensive
| null |
Identify single or multiple spans in a long context to use to respond to an instruction.
|
Each sample consists of the task description, input context, instruction, and response.
| null |
Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
64800
|
No
| null |
Random sample (creators defined a task space and sampled from it), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice, Short free response (e.g. single word or number), Free response (e.g. summary paragarph)
|
Exact Match (accuracy, F1, precision, recall), Normalized score relative to GPT-3.5-Turbo-16K performance
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
context length, task type
| null |
https://github.com/KwanWaiChung/M4LE
|
M4LE
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null | null |
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
NLP
|
Long Context
| null |
['Another benchmark', 'Procedurally-generated', 'LLM-generated']
|
['Random', 'Targeted']
|
['Multiple choice', 'Short free response', 'Free response']
|
['Exact match', '']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
| null |
baiLongBenchBilingualMultitask2024
|
LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding
|
Include
| null | null |
LongBench is the first bilingual multi-task benchmark for long-context understanding. Benchmarking of open and closed source models suggests notable challenges for LLMs, with fine-tuning and scaled position embedding helping to improve long-context capabilities.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
long-context understanding
|
Yes
|
"the ability to understand and reason over a long context"
|
Comprehensive
| null |
Given a long context input and task instruction, produce an answer.
|
Each sample is represented in a standard format, consisting of the task input, context, ground truth answers, dataset source, language, ID, and meta-data including length and categories for classification tasks.
| null |
Real task examples (e.g. GitHub issues), Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt), Expert-annotated task examples (PhD students)
|
4750
|
Yes
|
avg length, data source, language, metric
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Short free response (e.g. single word or number), Free response (e.g. summary paragarph), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics), LLM post-processing (extracting answers, reformatting for automated scoring)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
context length, task type
| null |
https://github.com/THUDM/LongBench/tree/main
|
LongBench
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
NLP
|
Long Context
| null |
['Real task', 'Another benchmark', 'Procedurally-generated', 'LLM-generated', 'Expert-crafted']
|
['Targeted', 'Criterion']
|
['Multiple choice', 'Short free response', 'Free response', 'Structured']
|
['Exact match', 'Soft match', 'LLM-as-a-Judge', 'LLM post-processing']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
|
['Mean']
|
mahbubUnveilingEssencePoetry2023
|
Unveiling the Essence of Poetry: Introducing a Comprehensive Dataset and Benchmark for Poem Summarization
|
Include
| null | null |
The paper proposes the task of poem summarization for LLMs and presents the first benchmark, PoemSum, to evaluate such capability. SOTA summarization models are benchmarked and limitations of current models on the poem summarization task are discussed.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
poem summarization
|
Yes
|
"In recent years, there has been notable research conducted on text summarization in the field of Natural Language Processing (NLP). However, to the best of our knowledge, no such work has been done in the domain of poem summarization yet. While the summarization process of poems seems quite similar to the generic text summarization, there are some major differences between the two... Summarizing literary work poses lots of challenges."
|
Comprehensive
| null |
A poem is given and a summary must be generated.
|
Each sample is represented by the poem title, poet name, poem text, poem link, and poem summary.
| null |
Real task examples (e.g. GitHub issues)
|
301
|
Yes
|
number of poets, max poem length, max summary length, avg poem length, avg summary length, avg # poems per poet
|
Specific criteria (items were taken from a larger set based on specified rules)
|
Free response (e.g. summary paragarph)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train, Validation
|
Train: 2409; Validation: 301
| null |
Simple Mean
|
No
| null | null |
https://github.com/Ridwan230/PoemSum
|
PoemSum
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Complete real task (e.g. providing medical advice to real people interactively)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
NLP
|
Summarization
| null |
['Real task']
|
['Criterion']
|
['Free response']
|
['Exact match', 'Soft match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Complete']
|
['Mean']
|
fernandezSyllabusQACourseLogistics2024
|
SyllabusQA: A Course Logistics Question Answering Dataset
|
Include
| null | null |
The paper introduces a new dataset consisting of real-world syllabi for question-answering. Strong LLMs are benchmarked on the dataset, SyllabusQA.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
course logistics-related question-answering
|
Yes
|
"answering student questions on logistics whose answers can be directly found or inferred from the syllabus"
|
Comprehensive
| null |
Take a syllabus and question and respond using information from the syllabus.
|
Each sample is represented with the syllabus name, question type, question, and answer, along with meta-data indicating the sample index, answer spans (if applicable), and reasoning steps (if applicable).
| null |
Real task examples (e.g. GitHub issues), Crowd-sourced task examples (e.g. Prolific-created tasks), Procedurally-generated task examples (e.g. Creating instances from a template)
|
1103
|
Yes
|
pages per syllabus, tokens per syllabus, tokens per question, tokens per answer
|
Unknown
|
Short free response (e.g. single word or number), Free response (e.g. summary paragarph)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics), LLM post-processing (extracting answers, reformatting for automated scoring)
| null | null |
Academia
|
Yes
| null | null |
Test, Train, Validation
|
Train: 3018; Validation: 957
| null |
Simple Mean
|
Yes
|
question type, answer source type
| null |
https://github.com/umass-ml4ed/SyllabusQA
|
SyllabusQA
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
No
| null |
simple mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Retrieval
| null | null |
['Real task', 'Crowd-sourced', 'Procedurally-generated']
|
['Unknown']
|
['Short free response', 'Free response']
|
['Exact match', 'Soft match', 'LLM-as-a-Judge', 'LLM post-processing']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
|
['Mean']
|
suLivingMomentCan2024
|
Living in the Moment: Can Large Language Models Grasp Co-Temporal Reasoning?
|
Include
| null | null |
This paper addresses the task of reasoning across intricate temporal interconnections and introduces CoTempQA as a comprehensive co-temporal question answering benchmark. Current LLMs exhibit significant deficiencies versus humans in co-temporal comprehension and reasoning, even with Chain of Thought. Mathematical reasoning is found to play a notable role in handling co-temporal events, and a strategy to boost co-temporal reasoning in LLMs which leverages this insight is proposed.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
co-temporal comprehension and reasoning
|
Yes
|
Temporal reasoning "is fundamental for humans to comprehend the world and distinguish daily events, requiring a complex integration of capabilities, involving implicit arithmetic calculations, understanding logical implications, and leveraging extensive world knowledge." Yet "reality might present a more intricate and multifaceted nature, involving concurrent events and complex temporal interconnections over time." Co-temporal reasoning focuses on "the concurrent nature of time and co-temporal relations in real-world situations".
|
Comprehensive
| null |
1. Take a question and generate the answer without relying on external texts.
2. Take a question and relevant temporal facts and generate the answer.
|
Each sample consists of the context, question, and target answer.
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Procedurally-generated task examples (e.g. Creating instances from a template), Wikidata
|
4748
|
Yes
|
# questions per mode, # subjects per mode, average number of facts per mode, average number of answers per mode
|
Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
temporal mode
| null |
https://github.com/zhaochen0110/Cotempqa
|
CoTempQA
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Reasoning
|
Temporal
| null |
['Author-crafted', 'Procedurally-generated', 'Crowd-sourced']
|
['Criterion']
|
['Short free response']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
|
['Mean']
|
krojerImageRetrievalContextual2022
|
Image Retrieval from Contextual Descriptions
|
Include
| null | null |
The paper proposes a new multimodal challenge, Image Retrieval from Contextual Descriptions (ImageCoDe), to assess vision-and-language models’ ability to integrate context cues into interpretation of linguistic utterances. Models such as ViLBERT and CLIP are evaluated and found to lag significantly behind human performance.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
retrieve images based on textual descriptions
|
Yes
|
"we present a new challenge that requires multimodal models to leverage context to retrieve images from text. In particular, given a contextual description and a set of minimally contrastive candidate images, i.e. differing only in some details, the model has to retrieve the target image."
|
Comprehensive
| null |
Retrieving the correct image from a set of minimally contrastive candidates based on a contextual description.
|
Each sample consists of a brief textual description, ten candidate images, and the index of the target response.
| null |
Real task examples (e.g. GitHub issues), Crowd-sourced task examples (e.g. Prolific-created tasks), Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template)
|
2306
|
Yes
|
average length, average # sentences, number of word types
|
Random sample (creators defined a task space and sampled from it), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train, Validation
|
Train: 16594; Validation: 2302
| null |
Simple Mean
|
No
|
video frames, static pictures
| null |
https://github.com/McGill-NLP/imagecode
|
ImageCoDe
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Retrieval
| null | null |
['Real task', 'Crowd-sourced', 'Another benchmark', 'Procedurally-generated']
|
['Random', 'Criterion']
|
['Short free response']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
|
['Mean']
|
rayColaBenchmarkCompositional2023
|
Cola: A Benchmark for Compositional Text-to-image Retrieval
|
Include
| null | null |
This paper looks at compositional visual reasoning in LLMs, presenting the COLA benchmark which targets text-to-image retrieval to compose objects with localized attributes. Strategies to adapt pre-trained vision-language models for compositional reasoning are assessed, and the authors find training with multimodal layers to be highly promising. COLA is compared to the CREPE benchmark, demonstrating greater difficulty than this contemporary counterpart.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
compositional reasoning
|
Yes
|
"Compositionality is a fundamental characteristic of human intelligence, allowing us to elicit 'the meaning of the whole [as] a function of the meanings of its parts'. In vision, the whole is an image made up of visual elements like objects and attributes. Recent work has consistently identified that this type of compositionality—that between objects and their attributes—is something existing vision-language models struggle to represent."
|
Subset
| null |
Given a query and set of objects, associate the objects in the query with the correct attributes and ignore difficult distractor compositions where the query attributes are attached to distractor objects.
|
In the multi-object setting, each sample is represented as a pair of images and captions. In the single-object setting, samples are represented by an image and a dictionary of objects in the image and relevant attributes. Additional 0/1 labels indicate whether each of 320 label classes is present in the image, along with similar labels indicating whether the image is counted within a difficult set for each label class.
| null |
Crowd-sourced task examples (e.g. Prolific-created tasks), Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template)
|
Unclear how to compute based on description in the text
|
No
| null |
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train
|
Unclear how to compute based on description in the text
| null |
Simple Mean
|
Yes
|
data source, single-object compounds, multi-object compounds
| null |
https://github.com/arijitray1993/COLA
|
COLA
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Reasoning
|
Compositional
| null |
['Crowd-sourced', 'Another benchmark', 'Procedurally-generated']
|
['Targeted', 'Criterion']
|
['Multiple choice', 'Short free response']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
|
['Mean']
|
bhaskarBenchmarkingImprovingTexttoSQL2023
|
Benchmarking and Improving Text-to-SQL Generation under Ambiguity
|
Include
| null | null |
Previous research on Text-to-SQL conversion has relied on datasets with unambiguous mappings, despite real-world queries frequently having multiple valid SQL interpretations due to schema overlaps and confusing relationships. To address this gap, the authors created AmbiQT, a benchmark featuring 3000+ examples with dual valid SQL interpretations. This reveals that even SOTA LLMs struggle to generate all valid interpretations, because beam search algorithms produce token-level diversity rather than semantic alternatives.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Models ability to generate all valid interpretations to an ambiguous text-to-SQL query.
| null |
It "tests performance under ambiguity in the context of current models. AmbiQT includes over 3000 examples, each associating a natural question on a database with two valid SQLs."
|
Subset
|
Ambiguity is defined as four kinds: column ambiguity, table ambiguity, join ambiguity and precomputed aggregates.
|
AmbiQT tasks are natural language questions with two valid SQL solutions. The system is expected to output all valid options in its top-k SQL outputs, for user review.
|
A natural language question (with two valid SQL solutions).
| null |
Modified from another benchmark (e.g. translation into another language), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
3000 tasks
|
Yes
|
ambiguity type
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Free response (e.g. summary paragarph)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null |
The paper also introduces Logical Beam, which performs better on the benchmark than the other evaluated models.
|
Test, Train, Validation
| null |
The model is prompted for its top-k answers.
|
Simple Mean
|
Yes
|
by ambiguity type
|
EitherInTopK or BothInTopK (%)
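A sketch of the two coverage metrics named above, using a normalised string comparison as a stand-in for AmbiQT's SQL-equivalence check (an assumption made to keep the example self-contained); the example question and queries are invented.

```python
def normalise(sql: str) -> str:
    return " ".join(sql.lower().split())

def top_k_coverage(top_k_preds, gold_pair):
    """Return (EitherInTopK, BothInTopK) for one example as 0/1 indicators."""
    preds = {normalise(p) for p in top_k_preds}
    hits = [normalise(g) in preds for g in gold_pair]
    return int(any(hits)), int(all(hits))

# Column-ambiguity example: "name" could refer to the song or the artist.
gold = ("SELECT song_name FROM songs", "SELECT artist_name FROM songs")
preds = ["SELECT song_name FROM songs", "SELECT title FROM albums"]
print(top_k_coverage(preds, gold))  # -> (1, 0)
```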
|
https://github.com/testzer0/AmbiQT/tree/master
|
AmbiQT
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
Yes
|
"In this work, we curated a benchmark of ambiguous queries by perturbing SPIDER, an existing dataset. While we believe that our benchmark is a good measure of performance under ambiguity, real-life databases may exhibit more numerous as well as varied forms of ambiguity. In addition, AmbiQT only consists of examples with questions in English. Ambiguity may manifest differently based on the choice of natural language, and a corresponding study should make for interesting future work"
|
simple mean (as percentage)
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null | null |
Code Generation
|
Natural Language
| null |
['Another benchmark', 'LLM-generated']
|
['Convenience']
|
['Free response']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative']
|
['Mean']
|
xuPEERComprehensiveMultitask2022
|
PEER: A Comprehensive and Multi-Task Benchmark for Protein Sequence Understanding
|
Include
| null | null |
The paper introduces PEER, a comprehensive and multi-task benchmark for Protein sEquence undERstanding. PEER provides a set of diverse protein understanding tasks including protein function prediction, protein localization prediction, protein structure prediction, protein-protein interaction prediction, and protein-ligand interaction prediction.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
The capability being measured in the PEER benchmark is protein sequence understanding. The benchmark evaluates model performance across a range of biologically relevant tasks, which include: Protein function prediction, Protein localization prediction, Protein structure prediction, Protein-protein interaction prediction, Protein-ligand interaction prediction
|
Yes
|
The PEER benchmark includes seventeen biologically relevant tasks that cover diverse aspects of protein understanding, including protein function prediction, protein structure prediction, protein localization prediction, protein-protein interaction prediction and protein-ligand interaction prediction.
We represent a protein x as a sequence of amino acids (a.k.a. residues) x = (x₁, x₂, ..., x_L) of length L. For each task, we list the task name and its acronym, task category, data source, protein sequence statistics, dataset statistics, and evaluation metric.
|
Subset
| null |
The task is defined as evaluating language models on a set of 17 biologically relevant benchmarks that test their ability to understand protein sequences. This includes predicting various properties and interactions of proteins, such as their function, structure, localization, and interactions with other proteins or ligands
|
A single item in the task dataset typically consists of a protein sequence (a string of amino acids) and a corresponding label or target value, which varies by task—e.g., a fitness score (regression), a structural class (classification), or a binary interaction label.
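Hypothetical examples of the (protein sequence, label) pairs described above; the sequences are short invented amino-acid strings and the label type varies with the task.

```python
# Invented PEER-style items: the label is a regression score, a class index,
# or a binary interaction flag depending on the task.
fitness_item = ("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", 0.73)   # fitness prediction (regression)
localization_item = ("MALWMRLLPLLALLALWGPDPAAA", 3)          # localization (class index)
ppi_item = (("MKTAYIAKQR", "MALWMRLLPL"), 1)                 # protein-protein interaction (binary)
print(fitness_item, localization_item, ppi_item)
```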
| null |
Real task examples (e.g. GitHub issues), Modified from another benchmark (e.g. translation into another language)
|
across 17 tasks: 115,271
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall), Spearman’s ρ, L/5 precision, RMSE
| null | null |
Academia
|
Yes
| null | null |
Test, Train, Validation
|
274,179 and 28,743
| null |
Simple Mean
|
No
| null | null |
https://github.com/DeepGraphLearning/PEER_Benchmark
|
PEER
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
simple mean, std
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Composite phenomenon
|
Yes
| null | null |
Biology
| null | null |
['Real task', 'Another benchmark']
|
['Convenience', 'Targeted', 'Criterion']
|
['Structured']
|
['Exact match', 'Correlation']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
|
['Mean', 'Std']
|
jangTemporalWikiLifelongBenchmark2022
|
TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models
|
Include
| null | null |
Most LLM benchmarks are static, yet real-world factual knowledge changes, grows, and depreciates. TemporalWiki addresses language models' temporal misalignment by providing a benchmark derived from consecutive Wikipedia snapshots to assess how well models adapt to evolving knowledge. The findings demonstrate that updating models using only the differences between snapshots achieves comparable or better perplexity than retraining on entire snapshots, while reducing computational costs by 12x.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Temporal Misalignment
|
Yes
|
"temporal misalignment, which refers to neural networks showing poor performance due to misalignment in time between the train and test data"
|
Subset
| null |
TWIKI-PROBES: "factual phrases synthetically generated from a naive concatenation of Subject, Relation, and Object" from English Wikipedia and Wikidata to evaluate temporal misalignment.
|
A naive concatenation of Subject and Relation from English Wikipedia and Wikidata, e.g. [Subject: Mario Chalmers] [Relation: member of sports team], where the model should generate [Object: Indios de Mayagüez] based on the following sentence in Wikipedia: "On September 27, 2021, Chalmers signed with Indios de Mayagüez of the Baloncesto Superior Nacional"
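A minimal sketch of a TWIKI-PROBES-style probe built from the triple quoted above: the Subject and Relation are concatenated into a prompt and the continuation is compared against the gold Object. The simple containment check below is a stand-in for the paper's perplexity-based evaluation.

```python
def build_probe(subject: str, relation: str) -> str:
    return f"{subject} {relation}"

def object_match(generated: str, gold_object: str) -> int:
    """1 if the gold object appears in the generated continuation (a simplification)."""
    return int(gold_object.lower() in generated.lower())

prompt = build_probe("Mario Chalmers", "member of sports team")
print(prompt)                                                    # "Mario Chalmers member of sports team"
print(object_match("Indios de Mayagüez", "Indios de Mayagüez"))  # -> 1
```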
| null |
Real task examples (e.g. GitHub issues)
|
It is an evolving dataset so there is no fixed size.
|
Yes
|
Changed/Unchanged Facts
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall), Distribution (perplexity, calibration, correlation)
| null |
Tasks are sourced from English Wikipedia and English Wikidata.
| null | null | null | null |
Test, Train
| null | null |
Simple Mean
|
Yes
|
For changed/unchanged facts and for different snapshots of the Wikipedia data.
| null |
https://github.com/joeljang/temporalwiki
|
TEMPORALWIKI
|
Contested
|
It is evaluating temporal misalignment through the specific lens of factual information on Wikipedia.
|
Prima facie, there is reason to believe that perplexity on factual completions is a valid metric for benchmarking a language model's ability to adapt to changing knowledge over time (the target phenomenon of temporal misalignment), but the task format is very synthetic.
|
Yes
|
No
| null |
No
|
No
| null |
Authors acknowledge that Wikipedia and Wikidata are not true reflections of real-world knowledge. They do not directly discuss the impact of their synthetic task format.
|
Simple average of perplexity across different snapshots of the Wikipedia data.
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
|
While the data to build the tasks is sourced from English Wikipedia and English Wikidata, the task itself is a naive concatenation of the Subject and Relation from a real Wikipedia sentence, where the Object is the model output that is evaluated.
|
Single cohesive phenomenon
|
Not applicable
| null | null |
Language Modelling
|
Updating
| null |
['Real task']
|
['Convenience']
|
['Short free response']
|
['Exact match', 'Distribution']
|
['Contested']
|
['Partially']
|
['Yes']
|
['No comparison made']
|
['']
|
['Representative']
|
['Mean']
|
liuAgentBenchEvaluatingLLMs2024
|
AGENTBENCH: EVALUATING LLMS AS AGENTS
|
Include
| null | null |
AgentBench presents a holistic benchmark for evaluating LLMs as agents. It is structured across three domains (code, game, and web) and aims to evaluate a wide range of abilities.
|
While it is a well-respected benchmark, it's also vague in what it actually measures.
|
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
"core" agentic capabilities: following instructions, coding, knowledge acquisition, logical reasoning, and commonsense grounding.
|
No
|
They broadly define agent capabilities as the ability to do reasoning and decision-making but do not define those further.
|
Comprehensive
| null |
The overall tasks are either coding, text-based games/puzzles or web browsing. Each is predominantly evaluated based on successfully solving a problem.
|
Each task has an objective/prompt, a text-based environment, and a success state. Sometimes the success state involves a "gold action sequence".
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
1014
|
No
| null |
Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number), Free response (e.g. summary paragraph)
|
Exact Match (accuracy, F1, precision, recall), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics)
| null | null |
Academia
|
Yes
| null | null |
Test, Train
|
269
|
The interaction is extended but the output is often short.
|
Weighted Mean
|
Yes
|
Specific subtasks within the broader categories (e.g., "Operating System" within coding)
| null |
https://github.com/THUDM/AgentBench
|
AgentBench
|
Contested
|
Too vaguely defined phenomenon
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
|
Curiously, the authors perform a "validity analysis" of the models' responses but not of the actual tasks.
|
Aggregated scores (no additional stats)
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people), Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
|
The sub-benchmarks are quite heterogeneous in their realism. The coding tasks are relatively more realistic and the game tasks are quite synthetic.
|
Authors' description is unclear
|
Not applicable
| null | null |
Agents
| null | null |
['Author-crafted', 'Another benchmark', 'LLM-generated']
|
['Criterion']
|
['Short free response', 'Free response']
|
['Exact match', 'LLM-as-a-Judge']
|
['Contested']
|
['No']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial', 'Constructed']
|
['Mean']
|
huangMetaToolBenchmarkLarge2024
|
METATOOL BENCHMARK FOR LARGE LANGUAGE
MODELS: DECIDING WHETHER TO USE TOOLS AND
WHICH TO USE
|
Include
| null | null |
MetaTool proposes a benchmark for tool selection. It encompasses a diverse set of scenarios and four different settings (Similar tools, multi-tool, scenario, and reliability). The benchmark only focuses on tool selection and not actual execution.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Tool selection
|
Yes
|
They break it down into tool awareness, i.e., "whether LLMs can resort to external tools when they encounter problems they cannot solve" and actual tool selection, which they define as a knowledge retrieval task given a set of tools and a query.
|
Comprehensive
| null |
The task is broadly to select the relevant tool(s) (if any) given a query.
|
A query with a set of "correct" tools to use.
| null |
LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
975
|
No
| null |
Random sample (creators defined a task space and sampled from it)
|
Free response (e.g. summary paragraph)
|
Exact Match (accuracy, F1, precision, recall)
| null |
Tool descriptions are sourced from OpenAI plugins but the actual queries are LLM-generated
|
Academia
|
Yes
| null | null |
Test, Train
|
Train: 21127
|
The responses are specifically a set of tools
|
Simple Mean
|
Yes
|
For multi-tool, it's reported for different levels of strictness (e.g., "only one of two correct").
| null |
https://github.com/HowieHwong/MetaTool
|
MetaTool
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
|
They do human validation of the benchmark and of whether the queries reliably trigger tools, but no more than that.
| null |
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
|
The task is deliberately a narrow aspect of "real" QA tasks. Still, it's unclear how realistic the queries are.
|
Composite phenomenon
|
Yes
| null | null |
Agents
|
Tool Use
| null |
['LLM-generated']
|
['Random']
|
['Free response']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Representative']
| null |
huangMLAgentBenchEvaluatingLanguage2024
|
MLAgentBench: Evaluating Language Agents on Machine Learning Experimentation
|
Include
| null | null |
MLAgentBench benchmarks the ability of LLM agents to perform machine learning experiments. The benchmark comprises different tasks from canonical classification to code optimization. A success is beating the baseline by more than 10%.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
ML Experimentation
|
No
|
competence in accomplishing the task, i.e., the fraction of time that the agent was able to improve the performance metric
|
Subset
|
While the definition is very high-level (i.e., "ML experimentation"), the authors make no claim that their benchmark is comprehensive.
|
A task is broadly to improve on some starter code either in terms of performance of the trained model (e.g., classification accuracy) or code efficiency (e.g., clock speed). Each task has a description with instructions and goals as well as a set of starter files.
|
A dataset (e.g., CIFAR), a starter model (defined in a `train.py`), and a metric (e.g., `test accuracy`).
| null |
Real task examples (e.g. GitHub issues)
|
13
|
No
| null |
Targeted items (creators defined a task space and chose tasks within it strategically)
|
functioning code (i.e., a .py script or model artifacts)
|
Score improvement of script
|
At a high level, all metrics are "did the model improve $SCORE by more than 10%?", averaged over 8 trials.
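A minimal sketch of this success criterion (assumed, not the authors' implementation; for lower-is-better metrics such as runtime the comparison direction would flip):

```python
def success_rate(baseline: float, trial_scores: list[float], margin: float = 0.10) -> float:
    """Fraction of trials in which the agent beat the baseline score by more than the margin."""
    successes = [score > baseline * (1 + margin) for score in trial_scores]
    return sum(successes) / len(successes)

# e.g. a baseline accuracy of 0.80 and eight agent runs:
rate = success_rate(0.80, [0.85, 0.92, 0.79, 0.90, 0.88, 0.81, 0.95, 0.89])
```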
| null |
Academia
|
Yes
| null | null |
Test
| null | null | null |
Yes
|
Measure for each task
| null |
https://github.com/snap-stanford/MLAgentBench/
|
MLAgentBench
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
No
| null |
mean over 8 runs.
|
Outputs alone
|
Complete real task (e.g. providing medical advice to real people interactively)
|
The task of improving an existing codebase/doing a Kaggle challenge has a degree of gamification but is still quite realistic.
|
Authors' description is unclear
|
Not applicable
| null | null |
Agents
|
Coding
| null |
['Real task']
|
['Targeted']
|
['Free response']
|
['Reward']
|
['Contested']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Complete']
|
['Mean']
|
yeGlobeSummChallengingBenchmark2024
|
GlobeSumm: A Challenging Benchmark Towards Unifying Multi-lingual, Cross-lingual and Multi-document News Summarization
|
Include
| null | null |
Proposes GLOBESUMM and introduces a prompting method for silver summary annotation; validates the quality and difficulty of the dataset.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Text summarization
|
Yes
|
The goal of Multi-lingual, Cross-lingual and Multi-document Summarization (MCMS) is to succinctly capture the key information from a collection of documents written in various languages and present a cohesive summary in the target language. Notably, the MCMS task has three distinctive features: (1) the input consists of multiple documents, (2) the multiple documents are in different languages, and (3) the multiple documents revolve around the same event.
|
Subset
| null |
(a) Single-turn Summarization summarizes a document set within a single-turn generation; (b) Chronological Recurrent Summarization iteratively summarizes two documents at a time in a time-ordered manner
|
The model is given a set of articles and asked to summarize them in one or multiple turns.
| null |
Real task examples (e.g. GitHub issues)
|
74 events, 942 documents, 868 summaries
|
Yes
|
language
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Free response (e.g. summary paragraph)
|
n-gram (BLEU, ROUGE, chrF), LLM post-processing (extracting answers, reformatting for automated scoring)
|
On top of ROUGE, the authors also use Red (Chen et al., 2021) for redundancy, Normalized Inverse of Coverage (NIC) for omission, and Conflict Resolution Effectiveness (CRE) for conflict.
| null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train, Validation
|
Training set: 222 events, 2,848 documents, and 2,626 summaries; validation set: 74 events, 897 documents, and 823 summaries
| null | null |
Yes
|
for different languages
| null |
https://github.com/YYF-Tommy/GlobeSumm
|
GLOBESUMM
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
Yes
|
The authors conduct extensive human validation in the annotation process. They also validated their annotation method against another benchmark (XQuAD specifically).
|
simple mean
|
Outputs alone
|
Complete real task (e.g. providing medical advice to real people interactively)
|
People would use chatbots to summarize news articles, in my opinion.
|
Composite phenomenon
|
Yes
| null | null |
NLP
|
Summarization
|
Multilinguality
|
['Real task']
|
['Targeted']
|
['Free response']
|
['Soft match', 'LLM post-processing']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Complete']
|
['Mean']
|
huSportsMetricsBlendingText2024
|
SportsMetrics: Blending Text and Numerical Data to Understand Information Fusion in LLMs
|
Include
| null | null |
SportsMetrics evaluates LLMs' numerical reasoning abilities within a sports domain. Specifically, it tasks LLMs with filling in information based on play-by-play descriptions from different games. SportsMetrics also includes adversarial examples with scrambled rules.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Numerical reasoning and (numerical) information fusion
|
Yes
|
"Information fusion focuses on synthesizing information from multiple textual sources to derive meaningful conclusions" (numerical reasoning is more vaguely defined as ability to "tackle mathematical word problems")
|
Subset
|
The authors narrow down the scope by focusing specifically on the domain of sports.
|
The task is generally to keep track of either the points or comprehensive game statistics given a partial play-by-play description of the game.
|
Each task has a game recap (play-by-play) and a description of target statistics (e.g., the final score, and the rebounds for a specific player) in cloze style.
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions)
|
200
|
No
| null |
Random sample (creators defined a task space and sampled from it)
|
Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
|
Specifically, the metric is delta $TARGET, where target can be, e.g., the ground truth point score. Note, there is no discussion of how this relates to information fusion.
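A hedged sketch of this delta-style metric (the field names and the use of an absolute difference are illustrative assumptions):

```python
def mean_delta(predictions: list[dict], references: list[dict], key: str = "final_score") -> float:
    """Mean absolute difference between the predicted and ground-truth value of one target statistic."""
    deltas = [abs(pred[key] - ref[key]) for pred, ref in zip(predictions, references)]
    return sum(deltas) / len(deltas)

# e.g. comparing predicted vs. ground-truth final scores over two games:
score = mean_delta([{"final_score": 101}, {"final_score": 97}],
                   [{"final_score": 104}, {"final_score": 97}])
```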
| null |
Mix (multiple authors from industry and academia)
|
No, no link is provided
| null | null |
Test, Train
|
34359
|
There is some flexibility in the exact internal organisation of the data structure, but it has to be JSON
|
Simple Mean
|
Yes
|
For both individual and aggregated metrics.
| null | null |
SportsMetrics
|
Contested
|
No
|
No
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
Simple summary stats.
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
|
Reconstructing summary statistics from a game is likely to be relatively automatable without LLMs. Still, the general idea of extracting numerical data from long texts is fairly realistic.
|
Single cohesive phenomenon
|
Not applicable
| null | null |
Reasoning
|
Mathematical
| null |
['Author-crafted']
|
['Random']
|
['Structured']
|
['Exact match']
|
['Contested']
|
['No']
|
['No']
|
['No comparison made']
|
['No']
|
['Constructed']
|
['Mean']
|
choiLoTabenchBenchmarkingLanguageoriented2024
|
LOTA-BENCH: BENCHMARKING LANGUAGE-ORIENTED TASK PLANNERS FOR EMBODIED AGENTS
|
Include
| null | null |
LoTa-Bench is a benchmark for task planning for home-service agents. It proposes a quantitative and automated evaluation framework for language-based agents to complete different home-making tasks like placing an apple in a microwave.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
embodied task planning
|
No
|
The ability to create high-level plans for an action sequence resulting in a specified goal state in an embodied home-making context.
|
Comprehensive
| null |
The task is to achieve a specified home-making goal (e.g., put the plate and forks in the dishwasher) by interacting with a simulator. The end state is evaluated.
|
A simulator and high-level instructions chosen from one of the overall task types (e.g., `Put groceries`).
| null |
Crowd-sourced task examples (e.g. Prolific-created tasks), Modified from another benchmark (e.g. translation into another language)
|
308
|
No
| null |
Random sample (creators defined a task space and sampled from it)
|
Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
| null |
The crowd-sourced component is for translating one of the sub benchmarks to natural language instructions.
|
Academia
|
Yes
| null | null |
Test, Validation
|
943
| null |
Simple Mean
|
No
| null | null |
https://github.com/lbaa2022/LLMTaskPlanning
|
LoTa-Bench
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
Success rate
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Reasoning
|
Planning
| null |
['Crowd-sourced', 'Another benchmark']
|
['Random']
|
['Structured']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Constructed']
|
['Mean']
|
songSLINGSinoLinguistic2022
|
SLING: Sino Linguistic Evaluation of Large Language Models
|
Include
| null | null |
The SLING benchmark is introduced to evaluate the linguistic knowledge of pretrained Chinese language models, featuring 38,000 minimal sentence pairs in Mandarin Chinese that highlight syntactic and semantic phenomena. These sentences are naturally occurring and annotated, drawn from the Chinese Treebank 9.0. Evaluating 18 LMs, the study found that their average accuracy is significantly lower than human performance (69.7% vs. 97.1%), with BERT-base-zh achieving the highest accuracy at 84.8%.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Chinese language (Mandarin)
| null |
"To understand what kinds of linguistic knowledge are encoded by pretrained Chinese language models (LMs)"
|
Subset
| null |
The tasks consist of short sentence pairs in Mandarin Chinese, classified into nine major linguistic categories. Each pair highlights the difference in acceptability for a particular syntactic or semantic phenomenon (e.g., "The keys are lost" vs. "The keys is lost").
|
short sentence pairs in Mandarin Chinese
| null |
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language)
|
38,000
|
Yes
|
Linguistic Phenomena
|
Convenience sample (creators found a set of tasks that was readily accessible), Specific criteria (items were taken from a larger set based on specified rules)
|
Choice of one input sentence
|
Exact Match (accuracy, F1, precision, recall), Human ratings (text quality, preference, NOT manual scoring of other metrics)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Linguistic phenomena
| null |
https://github.com/Yixiao-Song/SLING_Data_Code
|
SLING
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
Comprehensiveness: "there are still phenomena that are important but not included in the current work: for example, the ba and bei constructions. For those structures, unacceptability can have different sources (e.g., syntax or pragmatics).19 Simple syntactic structure restrictions are not enough. When deciding which phenomena to include in SLING, we deliberately avoid such cases because the (un)acceptability of these phenomena can be mitigated by contextual or world knowledge. As a result, human judgement can vary significantly"
| null |
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null | null |
Multilinguality
| null | null |
['Real task', 'Author-crafted', 'Another benchmark']
|
['Convenience', 'Criterion']
|
['Multiple choice']
|
['Exact match', 'Human ratings']
|
['Contested']
|
['Yes']
|
['Yes']
|
['Comparison made']
|
['Yes']
|
['Constructed']
| null |
athiwaratkunMultilingualEvaluationCode2023
|
Multi-lingual Evaluation of Code Generation Models
|
Include
| null | null |
Measures code generation capabilities across 12 programming languages (Java, JavaScript, TypeScript, Go, Ruby, Kotlin, PHP, C#, Scala, C++, Swift, and Perl). Transforms existing Python benchmarks into other languages.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Code generation
|
No
| null |
Comprehensive
| null |
Generating code to complete a function given a docstring.
|
Each example contains a function signature and a docstring. The docstring is detailed and contains examples of the desired behaviour.
|
Fairly limited discussion given it was transpiled from existing benchmarks.
|
Modified from another benchmark (e.g. translation into another language)
| null |
Yes
|
Programming language.
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Structured response (e.g. valid JSON, API call alone)
|
Accuracy when the generated function is executed.
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null | null | null | null |
Simple Mean
|
Yes
|
Programming language
|
pass@k (any correct answer in k trials)
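The pass@k figure reported here is commonly estimated with the unbiased estimator popularised by the HumanEval paper; whether this exact estimator is used for these benchmarks is an assumption:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Estimated probability that at least one of k sampled completions passes,
    given n generated samples per problem, of which c pass the unit tests."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 20 samples per problem, 5 passing, evaluated at k=1:
estimate = pass_at_k(n=20, c=5, k=1)
```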
|
https://github.com/amazon-science/mxeval
|
MBXP and Multilingual HumanEval (two benchmarks)
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
The benchmark is itself realistic
|
No
|
Yes
|
They discuss this briefly in the limitations, saying that they assume this is representative of all code completion problems.
|
Simple mean
|
Outputs alone
|
Complete real task (e.g. providing medical advice to real people interactively)
| null |
Composite phenomenon
|
Yes
| null | null |
Code Generation
| null | null |
['Another benchmark']
|
['Convenience']
|
['Structured']
|
['Reward']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Complete']
|
['Mean']
|
pengCOPENProbingConceptual2022
|
COPEN: Probing Conceptual Knowledge in Pre-trained Language Models
|
Include
| null | null |
The paper introduces COPEN, a benchmark designed to probe conceptual knowledge in pre-trained language models (PLMs). It includes three tasks evaluating whether PLMs can group entities by concepts, understand concept properties, and identify concepts in context. Results show that PLMs struggle with conceptual reasoning and often rely on spurious correlations.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
conceptual knowledge
|
Yes
|
"implicit commonsense behind texts"
|
Subset
| null |
Assessing whether PLMs can judge conceptual similarity, recognize conceptual properties, and conceptualize entities based on context.
|
A single item represents one probe instance for a specific conceptual task. E.g., in CPJ, an item includes a statement about a property and a concept or concept chain, along with a true/false label.
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Crowd-sourced task examples (e.g. Prolific-created tasks), Procedurally-generated task examples (e.g. Creating instances from a template)
|
11,035
|
Yes
|
Task types: judging conceptual similarity, recognizing conceptual properties, and conceptualizing entities based on context.
|
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train, Validation
|
Train 10,624, Validation: 2,661
| null |
Simple Mean
|
Yes
|
Task Types
| null |
https://github.com/THU-KEG/COPEN
|
COPEN
|
Contested
|
Yes
|
Yes
|
Yes
|
No
| null |
No
|
Yes
|
Yes
|
The authors explicitly connect each probing task to specific cognitive functions and conceptual structures - grounding their design in existing literature.
|
simple mean
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null | null |
NLP
|
Understanding
| null |
['Author-crafted', 'Crowd-sourced', 'Procedurally-generated']
|
['Convenience', 'Targeted']
|
['Short free response']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Mean']
|
hardalovBgGLUEBulgarianGeneral2023
|
bgGLUE: A Bulgarian General Language Understanding Evaluation Benchmark
|
Include
| null | null |
The paper introduces bgGLUE (Bulgarian General Language Understanding Evaluation), a benchmark for evaluating language models on Natural Language Understanding (NLU) tasks in Bulgarian. The benchmark includes NLU tasks targeting a variety of NLP problems (e.g., natural language inference, fact-checking, named entity recognition, sentiment analysis, question answering, etc.) and machine learning tasks (sequence labeling, document-level classification, and regression).
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
NLU for the Bulgarian language
|
Yes
|
We present bgGLUE (Bulgarian General Language Understanding Evaluation), a benchmark for evaluating language models on Natural Language Understanding (NLU) tasks in Bulgarian. Our benchmark includes NLU tasks targeting a variety of NLP problems (e.g., natural language inference, fact-checking, named entity recognition, sentiment analysis, question answering, etc.) and machine learning tasks (sequence labeling, document-level classification, and regression).
|
Subset
| null |
The task is defined as the evaluation of language models on a benchmark suite of nine NLU tasks in Bulgarian, covering areas such as token classification, regression/ranking, and text classification. Each task is designed to test specific language understanding capabilities, including named entity recognition, sentiment analysis, fact-checking, natural language inference, and question answering
|
A single item would consist of a text input (e.g., sentence, paragraph, tweet, or document) along with its associated label or target output, depending on the task type.
| null |
Human exam questions (e.g. GRE questions), Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language)
|
total 32,448
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Short free response (e.g. single word or number), Free response (e.g. summary paragraph)
|
Exact Match (accuracy, F1, precision, recall), Pearson/Spearman correlation, Average Precision
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Train, Validation
|
Total train: 452,449; total validation: 20,930
| null |
Simple Mean
|
No
| null | null |
https://bgglue.github.io/
|
bgGLUE
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
Simple mean; for tasks with more than one metric (like Pearson and Spearman correlation for sentiment regression), scores are averaged to get a single task score.
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people), Representative task (e.g. answering medical licensing exam questions), Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null | null |
NLP
|
Understanding
|
Multilinguality
|
['Human exams', 'Real task', 'Author-crafted', 'Another benchmark']
|
['Convenience', 'Targeted', 'Criterion']
|
['Multiple choice', 'Short free response', 'Free response']
|
['Exact match', 'Correlation']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial', 'Representative', 'Constructed']
|
['Mean']
|
kwanMTevalMultiturnCapabilities2024
|
MT-Eval: A Multi-Turn Capabilities Evaluation Benchmark for Large Language Models
|
Include
| null | null |
This paper introduces MT-Eval, a benchmark to evaluate the multi-turn conversational abilities of LLMs. The paper's analysis of interactions in LMSYS-Chat1M reveals four predominant patterns when users interact with AI assistants: Recollection, where the assistant must recall information from earlier turns; Expansion, involving the exploration of varied topics within the main subject; Refinement, where initial instructions are clarified or revised; and Follow-up, consisting of questions based on the assistant's previous responses. The authors then construct evaluation sets for each interaction type by augmenting existing datasets or creating new ones to cover real-world applications.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
LLMs' capabilities in multi-turn interactions
|
No
|
The ability to perform coherent multi-turn interactions
|
Subset
| null |
Multi-turn conversation (given a context, the model is asked to answer some questions)
|
A multi-turn query (multiple sentences)
| null |
Modified from another benchmark (e.g. translation into another language), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
168
|
No
| null |
Random sample (creators defined a task space and sampled from it)
|
Free response (e.g. summary paragraph)
|
Exact Match (accuracy, F1, precision, recall), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Difficulty
| null |
https://github.com/KwanWaiChung/MT-Eval
|
MT-Eval
|
Contested
|
Yes
|
No
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
simple mean
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
User Interaction
| null | null |
['Another benchmark', 'LLM-generated']
|
['Random']
|
['Free response']
|
['Exact match', 'LLM-as-a-Judge']
|
['Contested']
|
['Yes']
|
['No']
|
['No comparison made']
|
['No']
|
['Constructed']
|
['Mean']
|
naousReadMeBenchmarkingMultilingual2024
|
README++: Benchmarking Multilingual Language Models for
Multi-Domain Readability Assessment
|
Include
| null | null |
ReadMe++ is a multilingual and multi-domain dataset for readability assessment according to the Common European Framework of Reference for Languages (CEFR) scale in Arabic, English, French, Hindi, and Russian. The dataset is human-annotated and publicly available. The dataset can benchmark supervised, unsupervised, and few-shot approaches, and is measured by the Pearson Correlation between predictions and ground-truth labels (supervised, few-shot) or the Ranged Sentence Readability Score (unsupervised).
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Readability assessment
|
Yes
|
Readability assessment is the task of determining how difficult it is for a specific audience to read and comprehend a piece of text.
|
Comprehensive
| null |
The model must classify the readability of a sentence according to the 6-point Common European Framework of Reference for Languages (CEFR). The scale proceeds as 1 (A1), 2 (A2), 3 (B1), 4 (B2), 5 (C1), 6 (C2), where A is for basic, B is for independent, and C is for proficient; the paper provides the full annotation criteria in the appendix.
|
A single item is a sentence with its associated language, domain, sub-domain, paragraph, context, and readability assessment label. The paragraph and context are optional and provided for human annotators to aid in manual labeling.
| null |
Human exam questions (e.g. GRE questions), Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language)
|
9757
|
Yes
|
Language, Domain, Sub-Domain, Context, Paragraph
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice
|
Distribution (perplexity, calibration, correlation)
|
The benchmark has two metrics. Pearson correlation requires just model output, but the Ranged Sentence Readability Score requires access to the LLM's distribution.
|
Data is sourced from 21 types of text (e.g. textbooks, legal documents, etc.) from various open-source datasets or open-access resources.
|
Academia
|
Yes
| null | null |
Test, Train, Validation
|
60/10/30 train/validation/test
| null |
Simple Mean
|
Yes
|
Unseen Domains per Data Source, Cross-Lingual Transfer
| null |
https://github.com/tareknaous/readme/tree/main
|
ReadMe++
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
The benchmark is itself realistic
|
No
|
Yes
|
Authors assess their construct validity when justifying the originality or contribution of their benchmark. They expand an existing scale grounded in literary research to be multilingual and balance several domains, which current assessments fail to do, to ensure the most reliable assessment of readability.
|
Min, max, average
|
Outputs alone
|
Complete real task (e.g. providing medical advice to real people interactively)
|
The task would probably be integrated into user applications, but not directly asked for by the user. Provided real-world applications of readability assessment were controllable text-simplification, ranking search engine results by their level of difficulty, and selecting appropriate reading material for language learners.
|
Single cohesive phenomenon
|
Not applicable
| null | null |
NLP
| null | null |
['Human exams', 'Real task', 'Author-crafted', 'Another benchmark']
|
['Targeted', 'Criterion']
|
['Multiple choice']
|
['Distribution']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Complete']
|
['Mean', 'Other']
|
hengleStillNotQuite2024
|
Still Not Quite There! Evaluating Large Language Models for Comorbid Mental Health Diagnosis
|
Include
| null | null |
ANGST is a benchmark for depression-anxiety comorbidity classification from social media posts. The dataset has multi-class labeling for anxiety, depression, both, or none, and the samples are neutrally seeded from Reddit and human-annotated by expert psychologists. Additionally, the paper presents ANGST-SILVER, a more extensive and silver-labeled dataset by GPT-3.5-turbo to support few-shot learning or supervised fine-tuning.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
depression-anxiety comorbidity classification
|
Yes
|
Depression-anxiety comorbidity is the phenomenon of depression and anxiety manifesting concurrently, and requiring a dual diagnosis/multiple labels of depression and anxiety.
|
Subset
| null |
The benchmark supports three classification tasks: multi-label classification of a Reddit post as showing anxiety, depression, comorbid (both), or control (none), and two binary classification tasks to identify a post as exhibiting depression or non-depression, and anxiety or non-anxiety.
|
A single item would be a Reddit post and its label as anxiety, depression, comorbid (both), or control (none).
| null |
Real task examples (e.g. GitHub issues)
|
ANGST: 2876, ANGST-SILVER: 7667
|
No
| null |
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Mix (multiple authors from industry and academia)
|
No, link is broken
| null | null |
Test
| null | null |
Weighted Mean
|
Yes
|
Depression vs Control, Anxiety vs Control
| null |
https://github.com/AmeyHengle/ANGST
|
ANGST (ANxiety-Depression Comorbidity DiaGnosis in Reddit PoST)
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
Yes
|
The authors compared the construction of ANGST against SDCNL, Depression Reddit, Dreaddit, and DATD. They measured the inter-class similarity of each benchmark by Jensen-Shannon Divergence (JSD) and Maximum Mean Discrepancy (MMD), and found that ANGST had the lowest pairwise JSD, indicating that ANGST is more challenging to classify, and thus more representative of the minute but vital differences between anxiety and depression. The authors also compared the data drift of ANGST against the other benchmarks, calculated by accuracy, macro-F1, ROC_AUC scores, and Matthews Correlation Coefficient. The results are between 0.904 and 1.0 for ROC-AUC, and 0.990 and 0.875 for MCC, indicating a distinct and inherent difference from existing datasets, claimed to result from its meticulous data curation and gold annotation scheme.
|
Weighted Precision, Recall, F1 scores, and macro-F1 scores for binary and multi-class classification. Hamming loss is also reported for multi-class classification.
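A minimal sketch (not the authors' script) of these metrics using scikit-learn; the label strings are illustrative:

```python
from sklearn.metrics import precision_recall_fscore_support, f1_score, hamming_loss

y_true = ["depression", "anxiety", "comorbid", "control", "depression"]
y_pred = ["depression", "comorbid", "comorbid", "control", "anxiety"]

# Weighted precision/recall/F1 and macro-F1 for the classification tasks.
precision_w, recall_w, f1_weighted, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted", zero_division=0)
f1_macro = f1_score(y_true, y_pred, average="macro", zero_division=0)
h_loss = hamming_loss(y_true, y_pred)   # reported for the multi-class setting
```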
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people), Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
|
The benchmark utilizes publicly available Reddit data, so the task chooses a "diagnosis" based upon data from real people. However, the data has been heavily filtered from mental-health-related subreddits, so the benchmark is somewhat constructed or artificial.
|
Composite phenomenon
|
Yes
| null | null |
Mental Health
| null | null |
['Real task']
|
['Targeted', 'Criterion']
|
['Multiple choice']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Partial', 'Constructed']
|
['Mean', 'Other']
|
tanDevBenchMultimodalDevelopmental2024
|
DevBench: A multimodal developmental benchmark for language learning
|
Include
| null | null |
DevBench is a multimodal benchmark for assessing how LLMs compare to human language development across seven language evaluation tasks spanning lexical, syntactic, and semantic domains. Each task contains item-level human baseline data to facilitate human-model language development comparison using a novel metric: softmax-optimized Kullback-Leibler divergence. The goal of the benchmark is to measure whether developmentally realistic data leads to human-like learning in LLMs.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
language evaluation, language development, cognitive evaluation
|
No
|
Language development evaluation is assessing whether the language ability gained by machine learning models matches the language ability gained by children when exposed to similar developmental data.
|
Subset
| null |
The benchmark consists of 7 multi-modal language evaluations. The lexical tasks consist of Looking-while-listening (LWL) and the Visual vocabulary task (VV), the syntactic tasks consist of the Test of Receptive Grammar (TROG) and Winoground-NoTag (WG), and the semantic tasks consist of the Free word association task (WAT), Visual object categorization (VOC), and THINGS similarity ratings.
|
For each task, a single sample would consist of the task prompt, a correct label if applicable, and the associated human response and human age range. Several tasks (LWL, VOC) are quantitative and measured by the looking time response, while the rest are categorical.
| null |
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions)
|
22212
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number)
|
Distribution (perplexity, calibration, correlation)
| null |
The experiments are sourced from child development literature, hence the choice of real task examples. Several task samples were modified to ensure that the images used in multimodal prompts had the correct licensing.
|
Academia
|
Yes
|
For attribution and licensing reasons, not all assets and data are hosted in the repo.
| null |
Test
| null | null | null |
No
|
Scores are provided per task, and the benchmark itself consists of 7 distinct tasks
| null |
https://github.com/alvinwmtan/dev-bench
|
DevBench
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
Yes
|
Yes
|
The authors define the desiderata for an ideal benchmark of developmentally appropriate evaluation of language models as (1) a wide dynamic range of difficulty (2) multiple levels of linguistic representations (3) corresponding data from children, and (4) high similarity in evaluation method between models and humans. These desiderata are based on child development literature and seek to overcome the limitations of existing benchmarks. Namely, current benchmarks are either unimodal, when cognitive language evaluations for children and infants are multimodal to accommodate pointing or looking responses, or current benchmarks compare language models to exclusively adult performance. DevBench seeks to fulfill all four criteria.
|
Visual semantic tasks were measured with representational similarity analysis (RSA), while the other tasks were measured with a novel metric: softmax-optimized Kullback-Leibler divergence
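One possible reading of the "softmax-optimized KL divergence" (offered as an interpretation and sketch, not the authors' definition; the divergence direction and temperature parameterisation are assumptions): model scores over the response options are softmaxed with a temperature chosen to minimise the KL divergence to the human response distribution, and that minimum divergence is reported.

```python
import numpy as np
from scipy.special import softmax
from scipy.optimize import minimize_scalar

def softmax_optimized_kl(model_scores: np.ndarray, human_probs: np.ndarray) -> float:
    """Minimum KL(human || softmax(model_scores / T)) over the temperature T (assumed direction)."""
    def kl_at(log_temp: float) -> float:
        p = softmax(model_scores / np.exp(log_temp))
        return float(np.sum(human_probs * np.log((human_probs + 1e-12) / (p + 1e-12))))
    result = minimize_scalar(kl_at, bounds=(-5.0, 5.0), method="bounded")
    return result.fun

# e.g. four response options for one item:
kl = softmax_optimized_kl(np.array([2.0, 0.5, -1.0, 0.1]),
                          np.array([0.6, 0.25, 0.05, 0.1]))
```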
|
Model access required (e.g. logits)
|
Complete real task (e.g. providing medical advice to real people interactively)
| null |
Composite phenomenon
|
Yes
| null | null |
Language Modelling
| null | null |
['Real task', 'Author-crafted']
|
['Convenience', 'Targeted', 'Criterion']
|
['Short free response']
|
['Distribution']
|
['Contested']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Complete']
|
['Other']
|
shavrinaRussianSuperGLUERussianLanguage2020
|
RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark
|
Include
| null | null |
In this paper, we introduce an advanced Russian general language understanding evaluation benchmark – RussianGLUE. This benchmark consists of nine tasks, collected and organised analogically to the SuperGLUE methodology (Wang et al., 2019), and was developed from scratch for the Russian language. We provide baselines, human-level evaluation, an open-source framework for evaluating models, and an overall leaderboard of transformer models for the Russian language.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
natural language understanding
|
Yes
| null |
Subset
| null |
The RussianSuperGLUE benchmark evaluates LMs on a set of nine diverse natural language understanding tasks in Russian. These include diagnostics, commonsense reasoning, natural language inference, machine reading comprehension, and world knowledge.
|
A single item in the dataset consists of a natural language input (e.g. a sentence, paragraph, or question) and a corresponding label or output (e.g. classification label, entailment judgment, or text). The exact format varies by task.
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Crowd-sourced task examples (e.g. Prolific-created tasks), Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template)
|
total test 22,119
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall), exact match, MCC (Matthews Correlation Coefficient)
| null | null |
Mix (multiple authors from industry and academia)
|
it is not in the paper, but available online
| null | null |
Test, Train, Validation
|
Total (some tasks have none): 97,090 and 14,104
| null |
Simple Mean
|
No
| null | null |
https://russiansuperglue.com
|
RussianSuperGLUE
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
No
| null |
simple mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Composite phenomenon
|
Yes
| null | null |
NLP
|
Understanding
|
Multilinguality
|
['Author-crafted', 'Crowd-sourced', 'Another benchmark', 'Procedurally-generated']
|
['Convenience', 'Targeted', 'Criterion']
|
['Multiple choice', 'Short free response']
|
['Exact match', 'Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
|
['Mean']
|
taktashevaRuBLiMPRussianBenchmark2024
|
RuBLiMP: Russian Benchmark of Linguistic Minimal Pairs
|
Include
| null | null |
Minimal pairs are a well-established approach to evaluating the grammatical knowledge of language models. This paper introduces the Russian Benchmark of Linguistic Minimal Pairs (RuBLiMP), which includes 45k pairs of sentences that differ in grammaticality and isolate a morphological, syntactic, or semantic phenomenon. In contrast to existing benchmarks of linguistic minimal pairs, RuBLiMP is created by applying linguistic perturbations to automatically annotated sentences from open text corpora and decontaminating test data.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
grammatical knowledge, specifically across morphological, syntactic, and semantic phenomena in the Russian language.
|
Yes
|
This paper introduces the Russian Benchmark of Linguistic Minimal Pairs (RuBLiMP), which includes 45k pairs of sentences that differ in grammaticality and isolate a morphological, syntactic, or semantic phenomenon. Our benchmark covers morphological, syntactic, and semantic phenomena well-represented in Russian theoretical linguistics.
|
Subset
| null |
The task is defined as a forced-choice acceptability judgment between two sentences in a minimal pair, where the model must assign a higher probability to the grammatical sentence over the ungrammatical one.
|
A pair of sentences, one grammatically correct and the other incorrect, with the respective labels
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template)
|
45k
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall), Distribution (perplexity, calibration, correlation)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
No
| null | null |
https://github.com/RussianNLP/RuBLiMP
|
RuBLiMP
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
No
| null |
Simple mean; inter-annotator agreement with WAWA and the Dawid-Skene method for vote aggregation; delta scores to measure performance differences between models under different dataset filtering conditions.
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions), Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
|
The task is a constructed benchmark using linguistic minimal pairs to test grammatical knowledge in LMs. This setup is a representative proxy for evaluating capabilities that are critical in applications like machine translation, dialogue systems, and text generation.
|
Composite phenomenon
|
Yes
| null | null |
NLP
| null |
Multilinguality
|
['Author-crafted', 'Another benchmark', 'Procedurally-generated']
|
['Convenience', 'Targeted', 'Criterion']
|
['Short free response']
|
['Exact match', 'Distribution']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Representative', 'Constructed']
|
['Mean', 'Other']
|
liInfiBenchEvaluatingQuestionanswering2024
|
InfiBench: Evaluating the Question-Answering Capabilities of Code Large Language Models
|
Include
| null | null |
Freeform question-answering (QA) benchmark for code across 15 programming languages.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Code question-answering.
|
No
| null |
Comprehensive
| null |
Providing responses to Stack Overflow questions.
|
A modified Stack Overflow question in a certain programming language.
| null |
Real task examples (e.g. GitHub issues)
|
234
|
Yes
|
15 programming languages; 5 topic areas (e.g. front-end, back-end, etc.)
|
Random sample (creators defined a task space and sampled from it), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Free response (e.g. summary paragraph)
|
n-gram (BLEU, ROUGE, chrF); unit tests are also considered for some questions.
|
Uses 4 different metrics, with weights for each metric per question, and provides a weighted average.
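A hedged sketch of this per-question weighted scoring (the metric names and weights are illustrative assumptions):

```python
def question_score(metric_scores: dict[str, float], metric_weights: dict[str, float]) -> float:
    """Weighted average of the metric scores a question uses, normalised by the total weight."""
    total_weight = sum(metric_weights.values())
    return sum(metric_weights[m] * metric_scores[m] for m in metric_weights) / total_weight

# e.g. a question scored with ROUGE-L, keyword matching, and unit tests:
score = question_score({"rouge_l": 0.42, "keyword": 1.0, "unit_test": 1.0},
                       {"rouge_l": 0.3, "keyword": 0.3, "unit_test": 0.4})
```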
| null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Difficulty level and question topic (not question language)
| null |
https://infi-coder.github.io/infibench/
|
InfiBench
|
Contested
|
Yes
|
Mixed. Keywords/n-grams are a limited way of assessing performance.
|
Yes
|
Yes
|
Yes
|
The benchmark is itself realistic
|
No
|
No
| null |
Mean, standard deviation.
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Code Generation
| null | null |
['Real task']
|
['Random', 'Targeted', 'Criterion']
|
['Free response']
|
['Soft match', 'Reward']
|
['Contested']
|
['Yes']
|
['Partially']
|
['Realistic']
|
['No']
|
['Partial']
|
['Mean', 'Std']
|
duMercuryCodeEfficiency2024
|
Mercury: A Code Efficiency Benchmark for Code Large Language Models
|
Include
| null | null |
Introduces the first code efficiency benchmark for Code LLMs. Benchmarks functional correctness and code efficiency simultaneously.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Code Efficiency
|
Yes
|
Code efficiency refers to the performance measure of time and space complexity to accomplish a specific task (they explicitly say they focus on the time dimension only)
|
Subset
|
They define code efficiency over time and memory elements but focus only on the time element in this benchmark.
|
Code generation problems from Leetcode. Natural language to code tasks.
|
A Python Leetcode question.
| null |
Real task examples (e.g. GitHub issues)
|
256
|
Yes
|
Difficulty level.
|
Convenience sample (creators found a set of tasks that was readily accessible), Specific criteria (items were taken from a larger set based on specified rules)
|
Structured response (e.g. valid JSON, API call alone)
|
Runtime percentile of the LLM-generated code on the runtime distribution formed by the corresponding reference solutions (the Leetcode solutions)
|
The average question has 18.4 reference solutions (to form the runtime distribution)
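A minimal sketch of the runtime-percentile idea (the exact formula and tie handling are assumptions): the generated solution's runtime is placed on the distribution formed by the reference solutions' runtimes, so beating more reference solutions yields a higher score.

```python
def runtime_percentile(generated_runtime: float, reference_runtimes: list[float]) -> float:
    """Fraction of reference solutions that are at least as slow as the generated code."""
    slower_or_equal = sum(ref >= generated_runtime for ref in reference_runtimes)
    return slower_or_equal / len(reference_runtimes)

# e.g. generated code runs in 0.120s against a distribution of reference runtimes (seconds):
score = runtime_percentile(0.120, [0.095, 0.110, 0.130, 0.150, 0.180,
                                   0.090, 0.140, 0.125, 0.160, 0.175])
```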
| null |
Academia
|
Yes
| null | null |
Test, Train
|
1,633
| null |
Weighted Mean
|
Yes
|
Difficulty level.
|
Mean score @ k
|
https://github.com/Elfsong/Mercury
|
Mercury
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
No
|
No
| null | null |
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Composite phenomenon
|
Yes
| null | null |
Code Generation
| null | null |
['Real task']
|
['Convenience', 'Criterion']
|
['Structured']
|
['Reward']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial']
| null |
linghuMultimodalSituatedReasoning2024
|
Multi-modal Situated Reasoning in 3D Scenes
|
Include
| null | null |
Introduces MSQA, a large-scale dataset (251K pairs) for multi-modal situated reasoning in 3D scenes, and two corresponding benchmarks: Multi-modal Situated Question Answering (MSQA) and Multi-modal Situated Next-step Navigation (MSNN). The MSQA dataset was collected scalably using 3D scene graphs and vision-language models, while the benchmarks use a novel interleaved input setting (text, image, point cloud) to improve situation awareness and resolve ambiguity present in single-modality approaches.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Situated Reasoning or Situation Awareness within 3D scenes.
|
No
| null |
Subset
| null |
The primary tasks require a model to either answer diverse, multi-modal situated questions about a 3D scene (MSQA) or predict the immediate next navigation action towards a goal based on the current situation (MSNN), using interleaved text, image, and point cloud context.
|
A single data instance includes the 3D scene point cloud, a specific situation (location, orientation, multi-modal description), an interleaved multi-modal question (for MSQA) or goal description (for MSNN), and the ground truth answer (for MSQA) or the correct next-step navigation action (for MSNN).
|
A key feature is the use of interleaved multi-modal inputs (text, images embedded within text, point clouds) for both defining the situation and the question/goal, aimed at resolving ambiguity found in single-modality descriptions. Additionally, the MSNN task deliberately focuses only on the immediate next navigation step to isolate situated understanding from complex, long-horizon planning.
|
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
1413 (This is the total test set size for the MSQA benchmark, calculated by summing the test set items reported for ScanNet (832), 3RScan (315), and ARKitScenes (266) in Appendix Table 12. The specific test set size for the MSNN task (total size 34K) is not explicitly stated in the reviewed sections/tables.)
|
Yes
|
Question type, Situation location, Situation orientation, Situation multi-modal description components, Source scene ID, Referenced object attributes, Goal description (for MSNN task).
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number), Free response (e.g. summary paragraph)
|
Exact Match (accuracy, F1, precision, recall), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics)
|
For the open-ended MSQA task, the authors employ a "GPT-score," an LLM-as-a-judge approach following OpenEQA, to evaluate response correctness on a 1-5 scale, as they argue standard metrics like Exact Match are unsuitable. For the MSNN next-step prediction task, standard Accuracy is used.
|
The task generation is a multi-stage process: Situations (location/orientation) are sampled procedurally within real-world 3D scene datasets (ScanNet, 3RScan, ARKitScenes). Situated scene graphs are created, which are then used with author-designed prompts to generate question-answer pairs (for MSQA) or navigation goals (for MSNN) via LLMs (GPT-3.5/GPT-4V). Finally, author-led refinement and balancing steps were applied to the generated data.
|
Academia
|
Yes
|
It utilises three existing real-world 3D scan datasets (ScanNet, 3RScan, ARKitScenes) as base environments. The data generation and evaluation processes significantly use specific LLMs (GPT-3.5, GPT-4V).
|
A key contribution highlighted is the novel interleaved multi-modal input format (text, images, point clouds) designed to resolve ambiguity inherent in situated tasks. The paper also emphasises the large scale of the generated MSQA dataset (251K pairs) and includes a human study specifically assessing the quality of this LLM-generated data compared to human annotations.
|
Test, Train, Validation
|
MSQA Train: 248,328; MSQA Validation: 2,147 (Justification: Calculated by summing the respective splits reported for ScanNet, 3RScan, and ARKitScenes in Appendix Table 12. Train/Val split sizes for the separate MSNN dataset are not explicitly stated.)
|
For MSQA, the expected output is open-ended text, ranging from short answers (like "yes", "no", counts) to brief descriptive sentences (e.g., explaining spatial relationships or object attributes). For MSNN, the output is a short textual command representing the immediate next navigation action (e.g., "Turn right", "Move forward").
|
Simple Mean
|
Yes
|
Scores are provided broken down by: question category (for MSQA, e.g., Counting, Spatial, Navigation), source domain (ScanNet, 3RScan, ARKitScenes), presence/location of images in the input (situation vs. question), and specific question properties (e.g., ground truth count value for counting questions, questions involving directional answers).
| null |
https://msr3d.github.io
|
Multi-modal Situated Question Answering (MSQA), Multi-modal Situated Next-step Navigation (MSNN)
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
Yes
|
The authors provide evidence for validity by: 1) Justifying the need based on limitations of prior benchmarks (scope, scale, ambiguity). 2) Arguing their interleaved multi-modal task design resolves ambiguity and is more versatile. 3) Conducting a human study showing the quality (clarity, correctness) of their generated data is comparable to human-annotated data. 4) Demonstrating benchmark utility and internal consistency through model performance analysis (e.g., showing tasks are challenging, situation modeling matters, MSQA pre-training benefits MSNN).
|
Simple mean/average scores (MSQA Correctness Score C, MSNN Accuracy) are used to aggregate results. Different models or settings are compared directly based on these mean scores presented in tables.
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people), Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null | null |
Reasoning
| null | null |
['Author-crafted', 'Procedurally-generated', 'LLM-generated']
|
['Targeted', 'Criterion']
|
['Short free response', 'Free response']
|
['Exact match', 'LLM-as-a-Judge']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Partial', 'Representative']
|
['Mean']
|
wuSTaRKBenchmarkingLLM2024
|
STaRK: Benchmarking LLM Retrieval on Textual and Relational Knowledge Bases
|
Include
| null | null |
STaRK is a large-scale benchmark for evaluating LLM-based retrieval systems on semi-structured knowledge bases (SKBs) that integrate textual and relational information. It covers product search, academic paper search, and precision medicine domains. A novel pipeline synthesizes realistic queries and ground truth answers, supplemented by human-generated queries, revealing significant challenges for current retrieval systems.
|
Key contributions include the first large-scale benchmark specifically for retrieval on SKBs integrating text and relations, a novel query synthesis pipeline using LLMs, the construction of three domain-specific SKBs and corresponding datasets, and extensive experiments evaluating various retrieval models including LLMs.
|
General Capability (A broadly useful ability, which could be relevant to multiple applications), Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
LLM retrieval capability on semi-structured knowledge bases (SKBs), involving reasoning over combined textual and relational information.
|
Yes
|
The task is defined as: Given a semi-structured knowledge base (SKB) comprising a knowledge graph G=(V,E) and associated text documents D, and a query Q, the goal is to retrieve a set of nodes (entities) A ⊆ V that satisfy both the relational requirements implied by G and the textual requirements specified in Q, based on their associated documents.
|
Subset
|
The benchmark specifically targets the gap left by prior work that treated textual and relational retrieval separately, aiming to evaluate systems on more realistic, integrated knowledge sources.
|
Given a query combining textual descriptions and relational constraints, retrieve the correct entities (nodes) from a semi-structured knowledge base (SKB) that satisfy both aspects.
|
A single item consists of a natural language query (potentially simulating different user roles or contexts) and a set of ground-truth entity identifiers (nodes) from the corresponding SKB that correctly answer the query.
|
Queries are designed to be natural-sounding, incorporate diverse relational patterns (including multi-hop) and textual properties, cover three distinct domains, and include both synthesised and human-generated questions.
|
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Crowd-sourced task examples (e.g. Prolific-created tasks), Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
Test set sizes: Synthesized: STARK-AMAZON ≈ 1638, STARK-MAG ≈ 2665, STARK-PRIME ≈ 2801. Human-generated: STARK-AMAZON = 81, STARK-MAG = 84, STARK-PRIME = 98. Total Test Queries ≈ 7367.
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall), LLM post-processing (extracting answers, reformatting for automated scoring), Distribution (perplexity, calibration, correlation)
|
Primary metrics are Hit@k (k=1, 5), Recall@k (k=20, chosen because max answer set size ≤ 20), and Mean Reciprocal Rank (MRR).
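A minimal sketch of these ranking metrics for a single query; the helper names are illustrative, not the STaRK evaluation code. Benchmark scores are simple means of these per-query values over the test set.

```python
# Hit@k, Recall@k and MRR for one query.
# `ranked` lists retrieved node ids in rank order; `gold` is the ground-truth answer set.

def hit_at_k(ranked, gold, k):
    """1.0 if any gold node appears in the top-k results, else 0.0."""
    return float(any(node in gold for node in ranked[:k]))

def recall_at_k(ranked, gold, k):
    """Fraction of gold nodes recovered within the top-k results."""
    return sum(node in gold for node in ranked[:k]) / len(gold)

def mrr(ranked, gold):
    """Reciprocal rank of the first gold node, 0.0 if none is retrieved."""
    for rank, node in enumerate(ranked, start=1):
        if node in gold:
            return 1.0 / rank
    return 0.0

ranked, gold = ["n1", "n3", "n9", "n7"], {"n3", "n7"}
print(hit_at_k(ranked, gold, 1), recall_at_k(ranked, gold, 20), mrr(ranked, gold))  # 0.0 1.0 0.5
```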
|
A novel pipeline samples relational templates, extracts textual properties from a 'gold' entity using LLMs, synthesizes natural language queries using LLMs (incorporating roles and context), and filters candidate answers using LLMs to create the synthesized dataset. Additionally, human participants generated queries using an interactive platform exploring the SKBs.
|
Mix (multiple authors from industry and academia)
|
Yes
|
Human query generation involved volunteers acknowledged in the paper. Detailed prompts and LLM versions used for the synthesis pipeline are documented in the appendix. Data sources and licenses are mentioned. An interactive data explorer is provided.
|
The benchmark demonstrates that even advanced LLM-based retrieval and re-ranking systems face significant challenges with complex SKB retrieval, indicated by relatively low performance on metrics like Hit@1 and Recall@20 across all domains, especially STARK-PRIME. Retrieval latency is identified as a major practical hurdle for the best-performing (re-ranker) models.
|
Test, Train, Validation
|
Synthesized Train/Validation sizes: STARK-AMAZON: Train≈5915, Val≈1547; STARK-MAG: Train≈7994, Val≈2665; STARK-PRIME: Train≈6162, Val≈2241.
|
Systems are expected to return a ranked list of entity nodes (V) from the knowledge base that satisfies the query's textual and relational constraints.
|
Simple Mean
|
No
| null | null |
https://github.com/snap-stanford/STARK
|
STaRK (Semi-structure retrieval benchmark on Textual and Relational Knowledge Bases)
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
Yes
|
Conducted human evaluation with 63 participants validating synthesized query naturalness (94.1% ≥ neutral), diversity (85.3% ≥ neutral), and practicality (89.4% ≥ neutral). Analyzed dataset statistics: query/answer lengths, lexical diversity (Shannon Entropy, TTR), and ratio of relational/textual information. Assessed the precision of the LLM-based answer filtering step in the synthesis pipeline (high verification rates for gold answers). Compared synthesized vs. human-generated queries.
|
Simple mean/average of Hit@k, Recall@k, and MRR over the test sets.
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
|
The benchmark simulates queries from different user roles (customers, researchers, doctors, patients) and includes complex contexts. Human evaluations confirmed the naturalness, diversity, and practicality of the synthesized queries.
|
Composite phenomenon
|
Yes
| null | null |
Retrieval
| null | null |
['Real task', 'Author-crafted', 'Crowd-sourced', 'Procedurally-generated', 'LLM-generated']
|
['Convenience', 'Targeted', 'Criterion']
|
['Short free response']
|
['Exact match', 'LLM post-processing', 'Distribution']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Partial']
|
['Mean']
|
krumdickBizBenchQuantitativeReasoning2024
|
BizBench: A Quantitative Reasoning Benchmark for Business and Finance
|
Include
| null | null |
This paper introduces BizBench, a benchmark for evaluating models’ ability to reason about realistic financial problems. BizBench comprises eight quantitative reasoning tasks, focusing on question answering (QA) over financial data via program synthesis.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Financial quantitative reasoning
|
Yes
|
This paper proposes a benchmark for evaluating models’ ability to reason about realistic financial problems as the ability to perform question-answering over structured and unstructured financial data.
|
Subset
| null |
BizBench consists of three interrelated types of tasks for assessing transparent and accurate financial reasoning: program synthesis, quantity extraction, and domain knowledge.
|
The benchmark comprises three separate sub-tasks. The task items for each sub-task are described below:
- Program Synthesis: Each example contains a natural language question, optionally a text or structured data source, and a Python program that produces a numeric answer to the question.
- Quantity Extraction: Given a document snippet and a target label as input, the expected output is the quantity span from the snippet corresponding to the label.
- Domain Knowledge: Multiple-choice QA and code completion from a function stub including a docstring and type hints.
| null |
Human exam questions (e.g. GRE questions), Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
5,448
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Short free response (e.g. single word or number), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Industry
|
Yes
| null | null |
Test, Train
|
14,394
| null |
Simple Mean
|
Yes
|
Scores are provided for each sub-task, sub-task dataset, and number of few-shot examples.
| null |
https://huggingface.co/datasets/kensho/bizbench
|
BizBench
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
Yes
|
No
|
No
|
Somewhat
|
The authors attempt to demonstrate construct validity by stating that the questions used in the benchmark "are written by financial professionals using real-world data and financial knowledge. As such, they are closer to the kinds of questions that business and financial professionals answer as part of their workflows." However, they do not empirically validate this with any extensive experiments.
|
simple mean
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null | null |
Finance
| null | null |
['Human exams', 'Real task', 'Author-crafted', 'Procedurally-generated', 'LLM-generated']
|
['Convenience', 'Targeted', 'Criterion']
|
['Multiple choice', 'Short free response', 'Structured']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative']
|
['Mean']
|
ghoshEPiCEmployingProverbs2022
|
ePiC: Employing Proverbs in Context as a Benchmark for Abstract Language Understanding
|
Include
| null | null |
This paper introduces ePiC, a high-quality crowdsourced dataset designed to benchmark abstract language understanding and analogical reasoning in LLMs. The dataset pairs narratives with proverbs, featuring fine-grained span alignments and minimal lexical overlap. Three tasks are proposed: proverb recommendation/alignment, narrative generation, and identifying similar narrative motifs. Experiments show that current LLMs struggle with these tasks compared to humans, indicating significant challenges in abstract reasoning.
|
Introduced a high-quality, manually curated dataset (ePiC) specifically for benchmarking abstract reasoning using proverbs, featuring fine-grained span alignments and intentionally low lexical overlap. Proposed three challenging tasks (proverb recommendation/alignment, narrative generation, similar motif identification) designed to test reasoning beyond surface patterns. Provided benchmark results for several LLMs, demonstrating a significant performance gap compared to humans on these tasks.
|
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Abstract language understanding, complex analogical reasoning.
|
Yes
|
The ability for abstract language understanding and complex analogical reasoning, demonstrated by correctly associating proverbs with illustrative narrative contexts and identifying underlying motifs, requiring reasoning beyond surface lexical features.
|
Subset
|
The benchmark uses proverbs because they require understanding analogies, cultural context, and reasoning beyond literal meanings, posing a challenge distinct from many standard NLU tasks.
|
The benchmark includes three main tasks: (1) Proverb & Alignment Prediction: Given a narrative, predict the most fitting proverb from 250 options and identify corresponding text spans between the narrative and proverb. (2) Narrative Generation: Given a proverb and topic keywords, generate a relevant narrative. (3) Identifying Similar Motifs: Given a narrative, identify other narratives that illustrate the same underlying proverb/motif.
|
A proverb paired with 10 distinct, crowdsourced narratives. Each narrative-proverb pair includes annotations of aligned text spans (up to 5) indicating semantic correspondences.
|
Narratives are short (avg. 64 words), intended as realistic stories, and intentionally written with minimal lexical overlap with the corresponding proverb to prevent reliance on surface cues.
|
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Crowd-sourced task examples (e.g. Prolific-created tasks)
|
Total dataset: 250 proverbs, 2500 narratives. Test set: 1000 proverb-narrative pairs (exact narratives depend on 'seen' vs 'unseen' split setup).
|
Yes
|
Fine-grained aligned spans between proverbs and narratives (up to 5 pairs per item, linking contiguous text spans).
|
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice, Free response (e.g. summary paragraph), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF), Human ratings (text quality, preference, NOT manual scoring of other metrics), Distribution (perplexity, calibration, correlation)
|
Proverb Prediction: Accuracy, MRR. Alignment Prediction: Token-level Precision, Recall, F1. Narrative Generation: BLEU, ROUGE-L, Keyword Recall, Human Likert ratings (1-5) for Relatedness, Interesting/Creative, Fluency, Overall. Motif Identification: Accuracy.
|
Proverbs were collected from public online sources (The Phrase Finder, WikiQuotes) and manually curated. Narratives and alignments were generated by paid crowdworkers on Amazon Mechanical Turk following specific instructions to ensure quality and low lexical overlap.
|
Academia
|
Yes
|
Detailed appendices cover additional data analysis (sentiment, gender, complexity, hate speech), human evaluation specifics (MCQ task design, error analysis), generated narrative examples, and detailed training parameters (models, hyperparameters, hardware, software). Ethical considerations including data bias (gender, cultural), turker compensation and selection are discussed.
|
A key feature is the fine-grained span alignment annotations, intended to support interpretability and more sophisticated modeling approaches. The paper explicitly acknowledges the limitation of focusing only on English proverbs and suggests future work to broaden cultural representation. The low performance of models, especially compared to humans, strongly suggests these tasks capture reasoning abilities beyond current LLM capabilities.
|
Test, Train
|
Train set: 1500 proverb-narrative pairs. No validation set mentioned.
|
Proverb prediction is classification/MCQ. Alignment prediction involves outputting span indices. Narrative generation produces free text. Motif identification ranks narratives based on similarity.
|
Simple Mean
|
Yes
|
Results are reported separately for 'seen proverbs' and 'unseen proverbs' test conditions.
| null |
https://epic-benchmark.github.io
|
ePiC (Employing Proverbs in Context)
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
Yes
|
Yes
|
Analyses demonstrated minimal lexical overlap between proverbs/narratives and high diversity among narratives for the same proverb. Sentiment analysis showed narrative sentiment diversity. The dataset contains diverse events and reading complexity levels. Human evaluations confirmed high quality for narratives (Overall 3.68/5) and alignments (3.91/5), surpassing prior related datasets. Potential gender bias was identified and discussed.
|
Accuracy, MRR, Precision, Recall, F1, BLEU, ROUGE-L, Keyword Recall, Mean Likert scores.
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
|
The dataset consists of narrative stories intended to be realistic, but the tasks themselves (classification, generation from keywords, similarity based on shared proverbs) are primarily evaluation constructs.
|
Composite phenomenon
|
Yes
| null | null |
Reasoning
|
Logical
| null |
['Author-crafted', 'Crowd-sourced']
|
['Convenience', 'Targeted']
|
['Multiple choice', 'Free response', 'Structured']
|
['Exact match', 'Soft match', 'Human ratings', 'Distribution']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
|
['Mean', 'Other']
|
yuanUnlockingMarketsMultilingual2024
|
Unlocking Markets: A Multilingual Benchmark to Cross-Market Question Answering
|
Include
| null | null |
The paper introduces Multilingual Cross-market Product-based Question Answering (MCPQA), a novel task where information from a resource-rich market (e.g., US) is used to answer product questions in a resource-scarce market, potentially in a different language. It presents a large-scale dataset derived from 17 Amazon marketplaces (11 languages), with a translated subset for Electronics called McMarket. Experiments on review-based answer generation (AG) and question ranking (QR) benchmark various models, demonstrating that leveraging cross-market information significantly boosts performance.
|
Key contributions include: (1) Proposing the novel MCPQA task framework. (2) Constructing a large-scale, multilingual, cross-market PQA dataset, including the translated McMarket subset. (3) Demonstrating the use of LLMs (GPT-4) for annotating high-quality subsets (McMarket_r, McMarket_q) for specific tasks, validated by human assessment. (4) Providing extensive benchmarks comparing single-market vs. cross-market approaches using models from lexical methods to LLMs, verifying the benefit of cross-market data.
|
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Product-related Question Answering (PQA), specifically focusing on cross-market information leveraging in a multilingual context.
|
Yes
|
Multilingual Cross-market Product-based Question Answering (MCPQA) is defined as "providing answers to product-related questions in a main marketplace by utilizing information from another resource-rich auxiliary marketplace in a multilingual context". This involves using resources like reviews or QA pairs from an auxiliary market to address questions in a main market.
|
Subset
|
The work addresses the practical issue of data scarcity in smaller e-commerce marketplaces by proposing methods to leverage data from larger, resource-rich marketplaces, even across language barriers.
|
The paper defines two subtasks within MCPQA: (1) Review-based Answer Generation (AG): Predict if a question is answerable using reviews from main and auxiliary markets, and if so, generate the answer. (2) Product-related Question Ranking (QR): Rank existing QA pairs from main and auxiliary markets based on their relevance for answering a given question in the main market.
|
The base dataset contains products with metadata, user questions, answers, and reviews from 17 Amazon marketplaces. The McMarket subset includes English translations. LLM-annotated subsets contain specific labels: McMarket_r has (Question, Reviews, Answerability, Generated Answer/Reason); McMarket_q has (Query Question, Candidate QA pair, Relevance Score, Reason).
|
Key aspects are leveraging cross-market data (from a resource-rich auxiliary market like US) and handling multilingual information (via translation in McMarket).
|
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
McMarket (Electronics category subset): Over 2.2 million questions total. Test set sizes used in experiments: AG Test Set = 49,958; QR Test Set (McMarket_q) = 360.
|
Yes
|
Includes marketplace origin, language, product identifiers/metadata, question text, answer text, review text, English translations (for McMarket), and LLM-generated annotations (answerability, generated answers, relevance scores, reasons) for the specific subsets. Timestamps are implicitly available based on analysis in Figure 3.
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Free response (e.g. summary paragraph)
|
n-gram (BLEU, ROUGE, chrF), Human ratings (text quality, preference, NOT manual scoring of other metrics), Distribution (perplexity, calibration, correlation)
|
AG: BLEU-4, ROUGE-L. QR: Mean Reciprocal Rank (MRR), Precision@3
|
Product metadata and reviews originate from the XMarket dataset. Question-answer pairs were collected via web crawling from Amazon. Translations for McMarket were done using DeepL and NLLB models. Subsets McMarket_r and McMarket_q were annotated using GPT-4 prompts defined by the authors. Human validation of LLM annotations was performed by crowdworkers via Appen.
|
Academia
|
Yes
|
Dataset built upon XMarket. Used DeepL and NLLB for translations. Used GPT-4 (gpt-4-1106-preview) for annotations, with prompts provided. Human validation via Appen. Data licensed under CCO 1.0 DEED for academic research. Baseline model details provided.
|
The work highlights the utility of LLMs for dataset creation/annotation in specialized domains. It confirms the value of cross-context information transfer (cross-market, cross-product) for improving QA performance. Future work directions include improving multilingual handling without translation and exploring cross-lingual transfer techniques.
|
Test, Train, Validation
|
AG Train/Validation sizes: 183,092 / 24,973. QR Train/Validation sizes (using McMarket_q): 1260 / 180.
|
Task AG involves generating natural language answers. Task QR involves producing a ranked list of relevant questions.
|
Simple Mean
|
Yes
|
Results are reported per marketplace, for single-market vs. cross-market settings, and for translated vs. original language data in multilingual analysis. Performance is also compared between the main McMarket dataset and the LLM-annotated subsets.
| null |
https://github.com/yfyuan01/MCPQA
|
McMarket (specifically, the automatically translated Electronics category subset of a larger collected dataset for the MCPQA task)
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
The benchmark is itself realistic
|
No
|
Yes
|
Cross-market data significantly increased the percentage of review-answerable questions across markets. Temporal analysis showed auxiliary market data often pre-dates main market questions. The human evaluation confirmed the high quality of GPT-4 annotations for AG (e.g., 88% correctness) and QR (97.6% F1), with LLM answers often preferred.
|
BLEU-4, ROUGE-L, MRR, Precision@3. Mean scores are reported, sometimes with standard deviation (e.g., for text lengths in Table 2).
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
|
The core task addresses answering real user questions on e-commerce platforms using available user-generated content like reviews and existing QAs.
|
Composite phenomenon
| null | null | null |
Retrieval
| null | null |
['Real task', 'Author-crafted', 'LLM-generated']
|
['Convenience']
|
['Free response']
|
['Soft match', 'Human ratings', 'Distribution']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Partial']
|
['Mean', 'Std']
|
berdicevskisSuperlimSwedishLanguage2023
|
Superlim: A Swedish Language Understanding Evaluation Benchmark
|
Include
| null | null |
We present Superlim, a multi-task NLP benchmark and analysis platform for evaluating Swedish language models, a counterpart to the English-language (Super)GLUE suite. The reported experiments show that the benchmark is quite challenging for the evaluated models.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
natural language understanding
|
Yes
|
NLU includes a wide range of subtasks such as sentiment analysis, argumentation classification, grammatical error detection, semantic similarity, natural language inference, coreference resolution, word similarity and relatedness, analogy, synonym detection, and diagnostics for linguistic phenomena and gender bias.
|
Subset
| null |
The Superlim benchmark defines its tasks as a set of 15 NLU tasks for Swedish, covering text-level tasks (e.g., sentiment analysis, NLI, paraphrase detection), word-level tasks (e.g., similarity, analogy), and diagnostic tasks (e.g., gender bias detection, linguistic phenomenon inference).
|
A single item in a task dataset typically consists of text inputs (such as a sentence, sentence pair, or word pair) with the respective label or target output specific to the task—e.g., a sentiment score or a classification label.
| null |
Human exam questions (e.g. GRE questions), Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template)
|
36,118 (the range is from 109 examples to 18,593)
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Short free response (e.g. single word or number), Free response (e.g. summary paragraph), predicted label
|
Krippendorff’s α
| null | null |
Mix (multiple authors from industry and academia)
|
There is no link in the paper, but the resource can be found online.
| null | null |
Test, Train, Validation
|
Total: 479,571 (train) and 22,527 (validation)
| null |
Simple Mean
|
No
| null | null |
https://spraakbanken.gu.se/en/resources/superlim
|
Superlim
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
simple mean, std
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people), Representative task (e.g. answering medical licensing exam questions), Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null | null |
NLP
|
Understanding
| null |
['Human exams', 'Real task', 'Author-crafted', 'Another benchmark', 'Procedurally-generated']
|
['Convenience', 'Targeted', 'Criterion']
|
['Multiple choice', 'Short free response', 'Free response']
|
['Correlation']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial', 'Representative', 'Constructed']
|
['Mean', 'Std']
|
wangMAVENARGCompletingPuzzle2024
|
MAVEN-ARG: Completing the Puzzle of All-in-One Event Understanding Dataset with Event Argument Annotation.
|
Include
| null | null |
This paper introduces MAVEN-ARG, an augmentation of the MAVEN dataset with event argument annotations, creating the first large-scale, all-in-one resource for event detection, argument extraction (EAE), and relation extraction. MAVEN-ARG features a comprehensive schema (162 event types, 612 argument roles), substantial data scale (over 290k annotated arguments), and exhaustive annotations (document-level, entity & non-entity args). Experiments show MAVEN-ARG poses significant challenges for existing EAE models and LLMs.
|
The primary contribution is the creation and release of MAVEN-ARG, the largest EAE dataset and the first dataset integrating ED, EAE, and ERE annotations. Other contributions include the development of a comprehensive event argument schema with detailed definitions, the exhaustive annotation methodology, benchmarking results showing the dataset's difficulty, and a demonstration of its utility for downstream tasks like future event prediction.
|
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Event Argument Extraction (EAE); Event Understanding.
|
Yes
|
Event Argument Extraction (EAE) is defined as the task of extracting event arguments (participants, attributes) for identified event occurrences (triggers) and classifying their specific semantic roles according to a predefined schema.
|
Comprehensive
|
A main motivation was to create a unified, large-scale dataset covering the full spectrum of event understanding (ED, EAE, ERE) to overcome limitations of previous fragmented datasets and enable end-to-end modeling and applications.
|
Event Argument Extraction (EAE): For a given event trigger in a document, identify all text spans (both entity mentions and non-entity spans) that function as arguments for that event, and assign the correct argument role label to each identified span based on the event schema.
|
An event trigger (a word or phrase indicating an event) within a document, linked to its event type. Associated with this trigger are annotated arguments, each consisting of a text span within the document and an assigned argument role label. Entity arguments are linked via coreference IDs.
|
The annotation scope is document-level (arguments can be anywhere in the document, not just the trigger's sentence), includes arguments for all fine-grained event mentions (not just a single topic event), and covers both entity and non-entity arguments.
|
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Crowd-sourced task examples (e.g. Prolific-created tasks), Modified from another benchmark (e.g. translation into another language)
|
Test set: 857 documents, 18,112 events, 53,676 arguments.
|
Yes
|
Event Type, Event Trigger Span, Argument Role, Argument Span, Entity Annotations (span, type, coreference cluster ID).
|
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall)
|
Bag-of-words F1 and Exact Match (EM) scores. These are calculated at three levels: Mention Level, Entity Coreference Level, and Event Coreference Level.
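A minimal sketch of the span-level comparison underlying these scores, assuming a SQuAD-style bag-of-words F1 over whitespace tokens; this is an illustrative reading, not the official MAVEN-ARG scorer, which additionally aggregates at the mention and coreference levels.

```python
from collections import Counter

def exact_match(pred_span: str, gold_span: str) -> float:
    """1.0 if the predicted argument span equals the gold span exactly."""
    return float(pred_span.strip() == gold_span.strip())

def bow_f1(pred_span: str, gold_span: str) -> float:
    """Bag-of-words F1 between predicted and gold argument spans."""
    pred, gold = pred_span.split(), gold_span.split()
    overlap = sum((Counter(pred) & Counter(gold)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Berlin", "Berlin"), bow_f1("the city of Berlin", "Berlin"))  # 1.0 0.4
```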
|
The dataset builds on the MAVEN dataset's Wikipedia text and event trigger/type annotations. The argument schema was manually created by experts, adapting concepts from FrameNet. Entity and argument annotations were collected through a three-phase human annotation process involving ordinary, senior, and expert annotators using a custom platform.
| null |
Yes
|
Dataset builds on MAVEN/MAVEN-ERE. Uses coarse-grained entity types from Few-NERD guidelines. Custom annotation platform developed. Test set annotations withheld for online leaderboard evaluation. Annotation cost ~85k USD. Detailed model hyperparameters and LLM prompts provided in appendices.
|
MAVEN-ARG completes the MAVEN trilogy, enabling research on integrated event understanding. Its exhaustive annotation style (document-level, all events, entity/non-entity args) is a key differentiator. Error analysis pinpoints argument identification as the primary difficulty for models.
|
Test, Train, Validation
|
Train set: 2,913 documents, 64,923 events, 190,479 arguments. Dev set: 710 documents, 15,556 events, 46,458 arguments.
|
The standard output format involves identifying argument text spans and assigning a role label from the schema for each argument associated with an event trigger.
|
Simple Mean
|
Yes
|
Performance is analysed based on trigger-argument distance, separately for entity vs. non-entity arguments, and using varying proportions of training data.
| null |
https://github.com/THU-KEG/MAVEN-Argument
|
MAVEN-ARG
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
No
|
Yes
|
Yes
|
Schema developed over 3 years by experts with definitions and examples. Multi-phase annotation included checks by senior annotators and experts. Satisfactory inter-annotator agreement (Fleiss' kappa 68.6% for arguments) achieved. Dataset statistics confirm largest scale and comprehensive schema/annotation style compared to predecessors. Data analysis revealed diverse distributions and challenges like long-distance dependencies.
|
Precision, Recall, F1 score, Exact Match (EM)
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
|
The task focuses on extracting structured event information from Wikipedia articles, representing a common information extraction goal.
|
Composite phenomenon
|
Yes
| null | null |
NLP
|
Extraction
| null |
['Real task', 'Author-crafted', 'Crowd-sourced', 'Another benchmark']
|
['Convenience', 'Targeted']
|
['Structured']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative']
|
['Mean', 'Other']
|
jiangFollowBenchMultilevelFinegrained2024
|
FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models
|
Include
| null | null |
The paper presents FollowBench, a benchmark for multi-level, fine-grained constraint-following evaluation. It assesses five different constraint types (content, situation, style, format, and example). The paper evaluates 13 LLMs with FollowBench, highlighting weaknesses in LLMs' instruction-following capabilities.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
instruction following
|
Yes
|
"To precisely estimate the difficulty degree to which LLMs can follow instructions"
|
Subset
| null |
The task is to generate responses that satisfy all the constraints specified in the given instructions. The model must interpret multiple fine-grained constraints and produce an output that follows every constraint simultaneously.
|
an instruction with multiple constraints (ranging from 1 to 5)
| null |
Real task examples (e.g. GitHub issues), Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template)
|
820
|
Yes
|
difficulty (L1-L5, based on the number of constraints)
|
Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Free response (e.g. summary paragraph)
|
Exact Match (accuracy, F1, precision, recall), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Difficulty (L1-L5)
| null |
https://github.com/YJiangcm/FollowBench
|
FollowBench
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
Yes
|
They use human expert annotators to assess LLM-as-a-Judge performance, and they conduct a diversity analysis to ensure the comprehensiveness of the benchmark.
| null |
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
No
| null | null |
Instruction Following
| null | null |
['Real task', 'Another benchmark', 'Procedurally-generated']
|
['Targeted', 'Criterion']
|
['Free response']
|
['Exact match', 'LLM-as-a-Judge']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Constructed']
| null |
romanouCRABAssessingStrength2023
|
CRAB: Assessing the Strength of Causal Relationships Between Real-World Events.
|
Include
| null | null |
This paper introduces CRAB, a new benchmark to evaluate the causal reasoning abilities of language models on real-world events presented in news narratives. It contains approximately 2,700 event pairs derived from 20 news stories, annotated with fine-grained causality scores (0-100) based on context. Experiments using large language models reveal poor performance, particularly when reasoning about complex causal structures (like causal frames and chains) versus simple ones.
|
The main contributions are: (1) The creation of the CRAB benchmark with fine-grained, contextual causality annotations for real-world event pairs. (2) A data construction pipeline leveraging causal principles and involving LLMs for event extraction followed by human annotation and expert validation. (3) Benchmarking state-of-the-art LLMs on causal reasoning tasks derived from CRAB. (4) Analysis of model performance based on causal structures (frames and chains) and context (in-document vs. cross-document).
|
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Causal reasoning between real-world events; Understanding actual causality in narratives.
|
Yes
|
The paper focuses on assessing the understanding of 'actual causality' - the causal relationship between specific, real-world events as perceived by humans based on context. This is operationalized by collecting graded (0-100) human judgments about the causal strength between pairs of events extracted from news narratives.
|
Subset
|
The benchmark aims to address limitations in existing causal reasoning datasets by focusing on real-world events, contextual dependence (including multi-document context), and graded (non-binary) causality judgments. It draws on principles from cognitive science and actual causality research.
|
To assess the strength of the causal relationship between a pair of real-world events, given the context from news articles. This involves predicting a scalar score (0-100) or classifying the relationship into discrete levels (e.g., High/Medium/Low/No, or Binary Yes/No), potentially within specific structural contexts like causal frames or chains.
|
A pair of event descriptions, the source news document(s) providing context, and a human-annotated causality score (0-100) indicating the perceived causal strength from the first to the second event. Event pairs are also grouped into causal frames and chains.
|
The benchmark includes event pairs where both events originate from the same document ('in-doc') and pairs where events come from different documents ('cross-doc'). It uses a continuous 0-100 score for annotation, often mapped to 4 classes for evaluation. Events are based on real news stories from the past decade.
|
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Crowd-sourced task examples (e.g. Prolific-created tasks), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
2,730 event pairs in total. Test set size is not applicable in the main zero-shot evaluation setup.
|
Yes
|
Event pair descriptions, Source document(s), Story identifier, Temporal order (implicit in timeline), Pairwise causality score (0-100), Causality class (derived), Causal frame structure type, Causal chain structure type.
|
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice
|
Exact Match (accuracy, F1, precision, recall)
|
Macro F1 score (for binary and 4-class classification), Exact Match (EM) score (for causal structure analysis).
|
News articles related to 20 selected stories were scraped (Google News API). Events were extracted using GPT-3 prompts, followed by expert filtering and validation. Timelines were manually constructed. Pairwise causality scores were annotated by AMT workers (7 per pair) and validated/adjusted by experts for ambiguous cases.
|
Academia
|
Yes
|
Document sources from Google News API via SerpApi. Event extraction used GPT-3 (text-davinci-003). Annotation via AMT with specific qualification/payment details. Detailed prompts provided in appendix. Discussion of limitations and ethics provided. Fine-tuning experiments detailed in appendix.
|
A key aspect is the focus on graded causal strength (0-100 score) rather than just binary causality. The analysis highlighting poorer performance on complex causal structures (e.g., mixed frames, colliders) and cross-document pairs is significant. The study also attempts to disentangle reasoning ability from memorization by analyzing performance based on event dates relative to model training cutoffs.
|
Test
| null |
Depending on the specific task setup, models output a scalar score (0-100), a class label (e.g., High, Medium, Low, No), a binary label (Yes/No), or a choice from multiple options (MCQ).
|
Simple Mean
|
Yes
|
Performance broken down by: in-document vs. cross-document pairs; pre- vs. post-Jan 2022 events (model knowledge cutoff); causal frame type; causal chain type; individual causality classes (High/Medium/Low/No).
| null |
https://github.com/epfl-nlp/CRAB
|
CRAB (Causal Reasoning Assessment Benchmark)
|
Contested
|
Yes
|
Yes
|
Yes
|
Yes
|
Yes
|
The benchmark is itself realistic
|
No
|
Yes
|
Dataset creation motivated by causal principles. Event extraction pipeline included expert validation. Causality annotations used multiple AMT workers plus expert review for ambiguous cases. Inter-rater agreement was measured (Krippendorff's alpha), showing reasonable agreement for extreme classes and among experts. Analysis based on theoretically grounded causal frames/chains.
|
Macro F1 score, Exact Match (EM)
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
|
The task uses real events reported in news media and requires reasoning about their causal connections based on the provided context, mirroring how humans interpret such narratives.
|
Composite phenomenon
|
Yes
| null | null |
Reasoning
| null | null |
['Real task', 'Author-crafted', 'Crowd-sourced', 'LLM-generated']
|
['Convenience', 'Targeted']
|
['Multiple choice']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['Yes']
|
['Representative']
|
['Mean']
|
zhaoFinanceMATHKnowledgeintensiveMath2024
|
FinanceMATH: Knowledge-Intensive Math Reasoning in Finance Domains
|
Include
| null | null |
This paper introduces FinanceMath; a novel benchmark designed to evaluate LLMs’ capabilities in solving knowledge-intensive math reasoning problems. These problems require college-level knowledge in the finance domain for effective resolution.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Knowledge-intensive math reasoning in finance domains
|
Yes
|
The phenomenon is defined as the ability of LLMs to solve math problems requiring: 1) college-level knowledge in the finance domain, 2) interpretation of both textual and tabular data, and 3) integration of domain-specific knowledge.
|
Subset
| null |
The task is defined as requiring LLMs to understand specialized financial terms, interpret tabular data to find relevant information, and then either perform step-by-step reasoning (Chain-of-Thought) or generate a structured program to solve the math question.
|
A math question comprising the question text, a table that the model must interpret to extract relevant numerical information, an executable Python program with the solution, and a topic label
| null |
Crowd-sourced task examples (e.g. Prolific-created tasks)
|
1000
|
Yes
|
Topic related to the question
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Free response (e.g. summary paragraph), Structured response (e.g. valid JSON, API call alone)
|
Exact Match (accuracy, F1, precision, recall), LLM post-processing (extracting answers, reformatting for automated scoring)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Validation
|
200
| null |
Simple Mean
|
Yes
|
The paper presents results across different topics and prompting strategies, e.g. CoT and PoT.
| null |
https://financemath-acl2024.github.io/
|
FinanceMATH
|
Widely-agreed
|
No
|
Yes
|
Yes
|
No
|
No
|
No
|
Yes
|
No
| null |
Simple Mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people), Representative task (e.g. answering medical licensing exam questions)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Reasoning
|
Mathematics
|
Finance
|
['Crowd-sourced']
|
['Targeted']
|
['Free response', 'Structured']
|
['Exact match', 'LLM post-processing']
|
['Widely-agreed']
|
['No']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial', 'Representative']
|
['Mean']
|
zhaoFinDVerExplainableClaim2024
|
FINDVER: Explainable Claim Verification over Long and Hybrid-Content Financial Documents
|
Include
| null | null |
A comprehensive benchmark designed to evaluate the explainable claim verification capabilities of LLMs in the context of understanding and analyzing long, hybrid-content financial documents. FINDVER contains 2,400 expert-annotated examples, divided into three subsets: information extraction, numerical reasoning, and knowledge-intensive reasoning, each addressing common scenarios encountered in real-world financial contexts.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Claim verification over long financial documents
|
Yes
|
Given a financial document and a claim, a model is expected to provide a label of whether the claim is refuted or entailed based on the evidence in the document, followed by a rationale explanation of its prediction.
|
Subset
| null |
Consider a single financial document d, containing textual data P and tabular data T, associated with a claim c that requires verification. The task is defined as follows:
1. Entailment Classification: The language model must determine the entailment label ℓ ∈ L = {“entailed”, “refuted”}, based on the hybrid-content financial document (P and T).
2. Reasoning-Process Explanation Generation: The model must generate a natural language explanation e, which articulates the reasoning process behind the validity of the claim c, relying solely on the textual (P) and tabular (T) content of the document d.
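A minimal sketch of the expected two-part output for one (document, claim) pair, assuming a simple JSON convention for the label and explanation; the serialization format and example values are assumptions for illustration, not the paper's prescribed format.

```python
import json

# Hypothetical model output; the JSON convention and the figures quoted in the
# explanation are illustrative assumptions only.
raw_output = '{"label": "refuted", "explanation": "The 2022 revenue in Table 3 is $1.2B, not $2.1B as claimed."}'

parsed = json.loads(raw_output)
assert parsed["label"] in {"entailed", "refuted"}
print(parsed["label"], "-", parsed["explanation"])
```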
|
A financial document, a claim, and a label (i.e. refuted or entailed)
| null |
Real task examples (e.g. GitHub issues), Domain expert annotators
|
1700
| null |
Subset task (e.g. FDV-IE, FDV-MATH, FDV-KNOW), relevant context, report
|
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Free response (e.g. summary paragraph)
|
Exact Match (accuracy, F1, precision, recall), LLM post-processing (extracting answers, reformatting for automated scoring)
| null | null |
Academia
|
Yes
| null | null |
Test, Validation
|
700
| null |
Simple Mean
|
Yes
|
Metrics across each subset task, e.g. FDV-IE, FDV-MATH, FDV-KNOW
| null |
https://github.com/yilunzhao/FinDVer/tree/main
|
FINDVER
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
Yes
|
No comparisons made
|
Yes
|
Yes
|
Somewhat
|
The authors engage with domain experts during dataset design: "To identify the common reasoning-intensive scenarios in claim verification based on financial documents, we engage with domain experts and conducted a preliminary study. This helped us determine three key types of scenarios that frequently arise in real-world settings: information extraction, numerical reasoning, and knowledge-intensive reasoning"
|
Simple mean
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Composite phenomenon
|
Yes
| null | null |
Finance
| null | null |
['Real task', 'Expert-crafted']
|
['Convenience', 'Targeted']
|
['Free response']
|
['Exact match', 'LLM post-processing']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Comparison made']
|
['Yes']
|
['Partial']
|
['Mean']
|
magnussonPalomaBenchmarkEvaluating2024
|
Paloma: A Benchmark for Evaluating Language Model Fit
|
Include
| null | null |
Evaluations of language models typically use a single dataset for measuring perplexity, but this dataset comprises various domains with different language distributions. PALOMA introduces a new benchmark to assess language model performance across distinct English and code domains, including two new datasets from top subreddits and popular programming languages, providing a more detailed and domain-specific analysis of model fit.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Perplexity analysis to assess LM fit to different domains
|
Yes
|
perplexity
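A minimal sketch of perplexity computed from per-token log-probabilities; the numbers are illustrative, not Paloma data. Paloma reports such fit scores separately per source and domain rather than over a single mixed corpus.

```python
import math

# Per-token natural-log probabilities a language model assigns to a held-out
# sequence (illustrative values only).
token_logprobs = [-2.1, -0.4, -3.3, -1.0, -0.7]

# Perplexity is the exponential of the mean negative log-likelihood per token.
nll = -sum(token_logprobs) / len(token_logprobs)
perplexity = math.exp(nll)
print(round(perplexity, 2))  # -> 4.48
```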
|
Comprehensive
| null |
Predict text from different data sources
|
Source, domain, validation and test tokens, tokens per split per domain
| null |
Modified from another benchmark (e.g. translation into another language)
|
123,683,201 tokens
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Free response (e.g. summary paragraph)
|
Distribution (perplexity, calibration, correlation)
| null | null |
Academia
|
Yes
| null | null |
Test, Train, Validation
| null | null |
Simple Mean
|
Yes
|
domains, sources
| null |
HuggingFace
|
Paloma
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
No
| null |
simple mean/sum
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Language Modelling
| null | null |
['Another benchmark']
|
['Convenience', 'Targeted']
|
['Free response']
|
['Distribution']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Constructed']
|
['Mean']
|
tangTofuEvalEvaluatingHallucinations2024
|
TofuEval: Evaluating Hallucinations of LLMs on Topic-Focused Dialogue Summarization
|
Include
| null | null |
Proposes a summarization dataset with LLM-generated summaries and human annotations of factual consistency. Shows that LLMs hallucinate with diverse error types, and that non-LLM evaluators can capture these errors better than LLMs.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
text summarization
|
Yes
|
(1) are LLMs up to the task of evaluating model outputs? (2) can LLMs generate factually consistent summaries without hallucinations for non-news domains?
|
Comprehensive
| null |
Topic-focused dialogue summarization; evaluation of factual consistency
|
a document and a topic for summarization; summary and the corresponding document for evaluation
| null |
Real task examples (e.g. GitHub issues), Modified from another benchmark (e.g. translation into another language), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
1,479 summaries split 70%/30% into development/test, so the test set is approximately 444 summaries
|
Yes
|
topic area
|
Random sample (creators defined a task space and sampled from it), Targeted items (creators defined a task space and chose tasks within it strategically)
|
Short free response (e.g. single word or number), Free response (e.g. summary paragraph)
|
Human ratings (text quality, preference, NOT manual scoring of other metrics), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics), LLM post-processing (extracting answers, reformatting for automated scoring)
| null | null |
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Validation
|
1,479 summaries split 70%/30% into development/test, so the development set is approximately 1,035 summaries
| null | null |
Yes
|
for different data sources
| null |
https://github.com/amazon-science/tofueval
|
TOFUEVAL
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
Yes
|
They conduct extensive human experiments.
|
simple mean
|
Model access required (e.g. logits)
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Composite phenomenon
|
Yes
| null | null |
NLP
|
Summarization
| null |
['Real task', 'Another benchmark', 'LLM-generated']
|
['Random', 'Targeted']
|
['Short free response', 'Free response']
|
['Human ratings', 'LLM-as-a-Judge', 'LLM post-processing']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Partial']
|
['Mean']
|
casolaMultiPICoMultilingualPerspectivist2024
|
MultiPICo: Multilingual Perspectivist Irony Corpus
|
Include
| null | null |
Perspectivism in NLP models different individual perspectives by leveraging data annotated with subjective opinions. The proposed MultiPICo corpus includes multilingual ironic short conversations from Twitter and Reddit, along with annotator sociodemographic information, allowing for the analysis of demographic influences on irony perception and the benchmarking of large language models' ability to recognize irony across different groups and languages.
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
Irony detection
|
Yes
|
benchmark the ability of large language models to recognize irony, their positionality with respect to sociodemographic groups, and the efficacy of perspective-taking prompting for irony detection in multiple languages
|
Comprehensive
| null |
Detect irony in text
|
Text, language, LLM, detection score, positionality of LLM with respect to age
| null |
Crowd-sourced task examples (e.g. Prolific-created tasks)
|
18,778
|
Yes
|
language, annotator demographics, sources, human annotation
|
Convenience sample (creators found a set of tasks that was readily accessible)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
| null | null | null |
Test
| null | null |
Simple Mean
|
Yes
|
positionality with respect to age, demographics of annotators
| null |
HuggingFace
|
MultiPICo
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
Yes
|
No
| null |
simple mean/sum
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
NLP
| null | null |
['Crowd-sourced']
|
['Convenience']
|
['Short free response']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Representative']
|
['Mean']
|
jinRWKUBenchmarkingRealworld2024
|
RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models
|
Include
| null | null |
Large language models often memorize sensitive or harmful information from their training data, necessitating methods to erase this knowledge. The Real-World Knowledge Unlearning (RWKU) benchmark is proposed to address this challenge by evaluating the ability of LLMs to forget specific knowledge without access to the original training data, using real-world famous people as unlearning targets, and employing rigorous evaluation methods for both forgetting and retaining relevant information in various applications.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
real-world knowledge unlearning
|
Yes
|
Effectively removing specific memorized content from trained machine-learning models
|
Comprehensive
| null |
Given an unlearning target, a model g_θ with parameters θ is updated with a certain unlearning method, which results in an unlearned model with new parameters θ'.
|
Subject, Query, level, type, answer
| null |
Procedurally-generated task examples (e.g. Creating instances from a template), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
3270
|
No
| null |
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
Forget Set, Neighbour Set, MIA Set, Utility Set
| null |
GitHub
|
RWKU
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
No
| null |
simple mean/sum
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Language Modelling
|
Unlearning
| null |
['Procedurally-generated', 'LLM-generated']
|
['Targeted']
|
['Short free response']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Constructed']
|
['Mean']
|
jiangXFACTRMultilingualFactual2020
|
X-FACTR: Multilingual Factual Knowledge Retrieval from Pretrained Language Models
|
Include
| null | null |
Language models have effectively captured factual knowledge through cloze-style fill-in-the-blank questions, but evaluations have mostly focused on English. To assess factual knowledge retrieval across different languages, a multilingual benchmark for cloze-style probes covering 23 diverse languages is created, along with expanded methods and decoding algorithms for multi-word entities. The study also introduces a code-switching method to enhance multilingual models' knowledge access, demonstrating its effectiveness across several languages.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
Factual knowledge retrieval
|
Yes
|
factual knowledge retrieval in LMs in different languages than English
|
Comprehensive
| null |
The cloze-style prompts used therein are manually created and consist of a sequence of tokens, where [X] and [Y] are placeholders for subjects and objects (e.g. “[X] is a [Y] by profession.”). To assess the existence of a certain fact, [X] is replaced with the actual subject and the model predicts the object in the blank
|
subject, object/fact, answer, scores
| null |
Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions)
|
>500,000 facts
|
Yes
|
language, percentage in dataset
|
Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
|
language, independence, order, confidence
| null |
GitHub
|
X-FACTR
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
No
| null |
simple mean/sum
|
Outputs alone
|
Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Knowledge
|
General
|
Multilinguality
|
['Author-crafted']
|
['Criterion']
|
['Short free response']
|
['Exact match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Constructed']
|
['Mean']
|
yuKoLACarefullyBenchmarking2024
|
KoLA: Carefully Benchmarking World Knowledge of Large Language Models
|
Include
| null | null |
This paper introduces the Knowledge-oriented LLM Assessment benchmark (KoLA), which aims to carefully benchmark the world knowledge of LLMs through meticulous design along three factors: ability modeling, known and evolving data sources, and a contrastive evaluation system.
|
The paper provides a detailed motivation for the design considerations of its dataset, which is well grounded in learning theory. To what extent this further anthropomorphises LLMs is up for debate, as the grounding assumes that LLMs acquire and consume human knowledge in a manner similar to humans and, as such, should be evaluated in a similar way.
|
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
World knowledge
|
Yes
|
Benchmarking the world knowledge of LLMs across four levels: Knowledge Memorization, Knowledge Understanding, Knowledge Applying, and Knowledge Creating.
|
Subset
| null |
Given a question probing for world knowledge, provide an answer
|
Consists of a question
| null |
Crowd-sourced task examples (e.g. Prolific-created tasks), Modified from another benchmark (e.g. translation into another language), Procedurally-generated task examples (e.g. Creating instances from a template)
|
2138
|
Yes
| null |
Random sample (creators defined a task space and sampled from it), Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Short free response (e.g. single word or number), Free response (e.g. summary paragraph)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF)
| null | null |
Academia
|
Unclear
| null | null |
Test
| null | null |
Simple Mean, Rank
|
Yes
|
For each subtask dataset
| null |
https://github.com/THU-KEG/KoLA/tree/main
|
KoLA
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
Somewhat
|
The dataset design is grounded in human cognitive processes from learning theory, seeking to simulate the acquisition and application of knowledge across different stages
| null |
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Composite phenomenon
|
Yes
| null | null |
Knowledge
|
General
| null |
['Crowd-sourced', 'Another benchmark', 'Procedurally-generated']
|
['Random', 'Convenience', 'Targeted', 'Criterion']
|
['Short free response', 'Free response']
|
['Exact match', 'Soft match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Representative']
| null |
subbiahSTORYSUMMEvaluatingFaithfulness2024
|
STORYSUMM: Evaluating Faithfulness in Story Summarization
|
Include
| null | null |
Proposes a dataset and shows that one human annotation protocol is likely to miss inconsistencies and that recent automatic metrics do not perform well either
| null |
Specific Application (A single use case, where the benchmark is likely to be examples of that use case)
|
"LLM summaries often contain subtle errors, particularly for narrative text which requires nu- anced interpretation...By focusing on faithfulness in narrative summarization and using real-world data from LLMs and Reddit, STORYSUMM poses a realistic but hard benchmark to push our methods forward." -p9989
|
Yes
|
"We define a consistent summary as: The events and details described in the summary should not misrepresent details from the story or include de- tails that are unsupported by the story."-p9990
|
Subset
| null |
"Is the information in the summary consistent with the story?"-p9990
|
given a story and a summary, the model/human has to decide whether the summary is faithful to the story.
| null |
Real task examples (e.g. GitHub issues), LLM-generated task examples (e.g. Filtered from responses to a prompt)
|
63 stories
|
Yes
|
difficulty
|
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Short free response (e.g. single word or number), Free response (e.g. summary paragraph)
|
Human ratings (text quality, preference, NOT manual scoring of other metrics), LLM-as-a-Judge (text quality, preferences, NOT extracting answers for other metrics), LLM post-processing (extracting answers, reformatting for automated scoring)
| null |
original data is sourced from Reddit (-p9989)
|
Mix (multiple authors from industry and academia)
|
Yes
| null | null |
Test, Validation
|
val: 33 stories
| null | null |
Yes
|
difficulty
| null |
https://github.com/melaniesubbiah/storysumm
|
STORYSUMM
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
Yes
|
Yes
|
the authors conduct extensive human experiments
| null |
Model access required (e.g. logits)
|
Partial real task (e.g. answering medical questions collected from real people)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
NLP
|
Summarization
| null |
['Real task', 'LLM-generated']
|
['Targeted']
|
['Short free response', 'Free response']
|
['Human ratings', 'LLM-as-a-Judge', 'LLM post-processing']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['Yes']
|
['Partial']
| null |
zhengNEOBENCHEvaluatingRobustness2024
|
NEO-BENCH: Evaluating Robustness of Large Language Models with Neologisms
|
Include
| null | null |
The performance of Large Language Models (LLMs) declines due to temporal drift between training data and newer texts, notably with the emergence of neologisms. A resource of recent English neologisms is created and analyzed, revealing that introducing new words significantly reduces model performance in tasks like machine translation. To address this, a benchmark is constructed to evaluate LLMs' ability to handle neologisms across various natural language understanding tasks, showing that models trained on more recent data perform better and highlighting the complexity neologisms pose for static LLMs.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
LLM performance degradation due to temporal drift between data used for model training and newer text seen during inference
|
Yes
|
language change causing data drift due to the emergence of neologisms – new word forms
|
Subset
| null |
Answer multiple-choice cloze questions based on example text with a masked word; machine translation; definition generation; perplexity comparison of individual words
|
Text, answer, score
| null |
Crowd-sourced task examples (e.g. Prolific-created tasks)
|
2162
|
No
| null |
Targeted items (creators defined a task space and chose tasks within it strategically)
|
Multiple choice, Short free response (e.g. single word or number)
|
Exact Match (accuracy, F1, precision, recall), n-gram (BLEU, ROUGE, chrF)
| null | null |
Academia
|
Yes
| null | null |
Test
| null | null |
Simple Mean
|
Yes
| null | null |
GitHub
|
NEO-BENCH
|
Widely-agreed
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
The benchmark is itself realistic
|
No
|
No
| null |
simple mean/sum
|
Outputs alone
|
Representative task (e.g. answering medical licensing exam questions)
| null |
Single cohesive phenomenon
|
Not applicable
| null | null |
Language Modelling
|
Updating
| null |
['Crowd-sourced']
|
['Targeted']
|
['Multiple choice', 'Short free response']
|
['Exact match', 'Soft match']
|
['Widely-agreed']
|
['Yes']
|
['Yes']
|
['Realistic']
|
['No']
|
['Representative']
|
['Mean']
|
pfisterSuperGLEBerGermanLanguage2024
|
SuperGLEBer: German Language Understanding Evaluation Benchmark
|
Include
| null | null |
This is a broad NLU benchmark suite for the German language. The benchmark consists of 29 different tasks ranging over different types such as document classification, sequence tagging, sentence similarity, and question answering, on which 10 different German-pretrained models are evaluated.
| null |
General Capability (A broadly useful ability, which could be relevant to multiple applications)
|
NLU
|
Yes
|
Our benchmark evaluation suite thus aims for both: 1. aggregating a diverse set of available German Natural Language Understanding (NLU) tasks, 2. identifying commonly used German-pretrained LLMs and evaluating the models on this benchmark.
|
Subset
| null |
The task is defined as the evaluation of German language models across 29 NLU tasks, covering four task types: text classification, sequence tagging, sentence similarity, and question answering.
|
This is a combination of a text input (a sentence, sentence pair, paragraph, or short text) and the corresponding label, text, or related answer
| null |
Real task examples (e.g. GitHub issues), Author-crafted task examples (e.g. hand-written examples, manual transformation of existing data into questions), Modified from another benchmark (e.g. translation into another language)
|
> 50k
|
No
| null |
Convenience sample (creators found a set of tasks that was readily accessible), Targeted items (creators defined a task space and chose tasks within it strategically), Specific criteria (items were taken from a larger set based on specified rules)
|
Multiple choice, Short free response (e.g. single word or number), Free response (e.g. summary paragraph)
|
Exact Match (accuracy, F1, precision, recall)
| null | null |
Academia
|
Yes
| null | null |
Test, Train, Validation
|
train >200k, validation >20k
| null |
Simple Mean
|
No
| null | null |
https://supergleber.professor-x.de/
|
SuperGLEBer
|
Contested
|
Yes
|
Yes
|
Yes
|
No
|
No comparisons made
|
No
|
No
|
No
| null |
simple mean, mean and std, averaging across multiple metrics
|
Outputs alone
|
Partial real task (e.g. answering medical questions collected from real people), Representative task (e.g. answering medical licensing exam questions), Constructed task (e.g. predicting medical diagnoses from clinicians' notes)
| null |
Composite phenomenon
|
Yes
| null | null |
NLP
|
Understanding
| null |
['Real task', 'Author-crafted', 'Another benchmark']
|
['Convenience', 'Targeted', 'Criterion']
|
['Multiple choice', 'Short free response', 'Free response']
|
['Exact match']
|
['Contested']
|
['Yes']
|
['Yes']
|
['No comparison made']
|
['No']
|
['Partial', 'Representative', 'Constructed']
|
['Mean', 'Std']
|