( name: str prompt_function: typing.Callable[[dict, str], lighteval.tasks.requests.Doc] hf_repo: str hf_subset: str metrics: list[lighteval.metrics.utils.metric_utils.Metric] | tuple[lighteval.metrics.utils.metric_utils.Metric, ...] hf_revision: str | None = None hf_filter: typing.Optional[typing.Callable[[dict], bool]] = None hf_avail_splits: list[str] | tuple[str, ...] = <factory> evaluation_splits: list[str] | tuple[str, ...] = <factory> few_shots_split: str | None = None few_shots_select: str | None = None generation_size: int | None = None generation_grammar: huggingface_hub.inference._generated.types.text_generation.TextGenerationInputGrammarType | None = None stop_sequence: list[str] | tuple[str, ...] | None = None num_samples: list[int] | None = None suite: list[str] | tuple[str, ...] = <factory> original_num_docs: int = -1 effective_num_docs: int = -1 must_remove_duplicate_docs: bool = False num_fewshots: int = 0 truncate_fewshots: bool = False version: int = 0 )
Parameters
Stored configuration of a given LightevalTask. The prompt_function builds one Doc sample from each line of the evaluation dataset.
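For illustration, a minimal sketch of a task configuration. The prompt function, dataset repository, and metric choice are placeholders (assumptions), not built-in tasks of the library:

from lighteval.metrics.metrics import Metrics
from lighteval.tasks.lighteval_task import LightevalTaskConfig
from lighteval.tasks.requests import Doc

def my_prompt_fn(line: dict, task_name: str) -> Doc:
    # Turn one dataset line into one evaluation Doc.
    return Doc(
        task_name=task_name,
        query=line["question"],
        choices=line["choices"],
        gold_index=line["answer"],
    )

my_task = LightevalTaskConfig(
    name="my_task",
    prompt_function=my_prompt_fn,
    hf_repo="my-org/my-dataset",  # hypothetical dataset repository
    hf_subset="default",
    metrics=[Metrics.loglikelihood_acc],  # assumes this member exists in the Metrics enum
    evaluation_splits=["test"],
    few_shots_split="train",
    suite=["community"],
)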
Returns a dict mapping each metric name to its aggregation function, for all of the task's metrics.
Worker function to download a dataset from the HuggingFace Hub. Used for parallel dataset loading.
Returns the evaluation documents.
( ) → list[Doc]
Returns
list[Doc]
Documents that will be used as few-shot examples. One document = one few-shot example.
Returns the few-shot documents. If they are not available, they are taken from the few-shot split or the evaluation split.
( available_splits: list[str] | tuple[str, ...] ) → str
Returns
str
The first available few-shot split, or None if nothing is available.
Parses the possible few-shot split keys in order (train, then validation) and matches them against the available splits. Returns the first match.
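Conceptually, the selection order can be sketched as follows (illustrative only; the helper name is hypothetical):

def pick_fewshot_split(available_splits):
    # Check candidate keys in priority order and return the first match.
    for candidate in ("train", "validation"):
        for split in available_splits:
            if candidate in split:
                return split
    return None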
( tasks: dict dataset_loading_processes: int = 1 )
Load datasets from the HuggingFace Hub for the given tasks.
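A usage sketch, assuming the task mapping comes from the registry; whether this is exposed as a static method on LightevalTask or as a module-level helper may depend on the library version:

# task_dict maps task names to LightevalTask instances (e.g. built by the Registry)
LightevalTask.load_datasets(task_dict, dataset_loading_processes=4)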
( use_chat_template: bool = False tokenizer = None system_prompt: str | None = None )
Prepare a prompt from a document, either using chat template or plain text format.
Prepare a prompt for API calls, using a chat-like format. Will not tokenize the message because APIs will usually handle this.
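To make the two formats concrete, here is an illustrative sketch (not the library's internal representation) of a plain-text prompt versus the chat-style message list that a tokenizer chat template or an API backend would format:

# Plain text: few-shot examples and the query concatenated into a single string.
plain_prompt = "Question: 2+2?\nAnswer: 4\n\nQuestion: 3+3?\nAnswer:"

# Chat format: a list of messages; the chat template or the API handles formatting.
chat_prompt = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Question: 3+3?\nAnswer:"},
]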
( custom_tasks: str | pathlib.Path | module | None = None )
The Registry class is used to manage the task registry and get task classes.
( custom_tasks: str | pathlib.Path | module ) → ModuleType
Creates a custom task module to load tasks defined by the user in their own file.
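As a sketch of the expected flow (the file name is hypothetical; TASKS_TABLE is the usual convention for exposing user-defined configurations):

# my_tasks.py -- user file defining custom tasks
TASKS_TABLE = [my_task]  # list of LightevalTaskConfig objects, e.g. the sketch above

# Loading the file through the registry:
from lighteval.tasks.registry import Registry
registry = Registry(custom_tasks="my_tasks.py")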
( meta_table: list[lighteval.tasks.lighteval_task.LightevalTaskConfig] | None = None ) → Dict[str, LightevalTask]
Parameters
Returns
Dict[str, LightevalTask]
A dictionary of task names mapped to their corresponding LightevalTask classes.
Create configuration tasks based on the provided meta_table.
( task_definition: str ) → list[str]
task_definition is a string of the form "suite|task|few_shot|truncate_few_shots,suite|task|few_shot|truncate_few_shots".
Returns a LightevalTaskConfig object based on the task name and the few_shot and truncate_few_shots values.
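For example, a task definition string of this form might look like the following (suite and task names are illustrative):

task_definition = "lighteval|gsm8k|5|0,leaderboard|arc:challenge|25|0"
# first entry: suite=lighteval, task=gsm8k, few_shot=5, truncate_few_shots=0 (False)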
( suites: str | None = None )
Print all the tasks in the task registry.
( tasks: str ) → tuple[list[str], dict[str, list[tuple[int, bool]]]]
Parameters
Returns
tuple[list[str], dict[str, list[tuple[int, bool]]]]
A tuple containing: a list of unique task names, and a dict mapping each task name to its list of (few_shot, truncate_few_shots) pairs.
Converts an input string of task names into task information usable by lighteval.
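As an illustration of the expected shape of the result (task names are placeholders):

tasks = "lighteval|gsm8k|5|0,lighteval|gsm8k|0|0"
# Possible result shape:
# (
#     ["lighteval|gsm8k"],                            # unique task names
#     {"lighteval|gsm8k": [(5, False), (0, False)]},  # (few_shot, truncate_few_shots) per task
# )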
( query: str choices: list gold_index: typing.Union[int, list[int]] instruction: str | None = None images: list['Image'] | None = None specific: dict | None = None unconditioned_query: str | None = None original_query: str | None = None id: str = '' task_name: str = '' num_asked_few_shots: int = 0 num_effective_few_shots: int = 0 fewshot_samples: list = <factory> sampling_methods: list = <factory> fewshot_sorting_class: str | None = None generation_size: int | None = None stop_sequences: list[str] | None = None use_logits: bool = False num_samples: int = 1 generation_grammar: None = None )
Parameters
Dataclass representing a single evaluation sample for a benchmark.
This class encapsulates all the information needed to evaluate a model on a single task instance. It contains the input query, expected outputs, metadata, and configuration parameters for different types of evaluation tasks.
Required Fields:
query: The input prompt or question
choices: Available answer choices (for multiple choice tasks)
gold_index: Index(es) of the correct answer(s)
Optional Fields:
instruction: System prompt, task specific. Will be appended to the model-specific system prompt.
images: Visual inputs for multimodal tasks.
Methods:
get_golds(): Returns the correct answer(s) as strings based on gold_index. Handles both single and multiple correct answers.
Usage Examples:
Multiple Choice Question:
doc = Doc(
query="What is the capital of France?",
choices=["London", "Paris", "Berlin", "Madrid"],
gold_index=1, # Paris is the correct answer
instruction="Answer the following geography question:",
)
Generative Task:
doc = Doc(
query="Write a short story about a robot.",
choices=[], # No predefined choices for generative tasks
gold_index=0, # Not used for generative tasks
generation_size=100,
stop_sequences=["
End"],
)
Few-shot Learning:
doc = Doc(
query="Translate 'Hello world' to Spanish.",
choices=["Hola mundo", "Bonjour monde", "Ciao mondo"],
gold_index=0,
fewshot_samples=[
Doc(query="Translate 'Good morning' to Spanish.",
choices=["Buenos días", "Bonjour", "Buongiorno"],
gold_index=0),
Doc(query="Translate 'Thank you' to Spanish.",
choices=["Gracias", "Merci", "Grazie"],
gold_index=0)
],
)
Multimodal Task:
doc = Doc(
query="What is shown in this image?",
choices=["A cat"],
gold_index=0,
images=[pil_image], # PIL Image object
)
Return gold targets extracted from the target dict
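A small sketch of get_golds() on a multiple-choice Doc:

doc = Doc(
    query="What is the capital of France?",
    choices=["London", "Paris", "Berlin", "Madrid"],
    gold_index=1,
)
doc.get_golds()  # ["Paris"]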
( new_arr: list ) → list
Get the original order of the data.
Iterator that yields the dataset splits based on the split limits.
( num_dataset_splits ) → type
Initialises the split limits based on generation parameters. The splits are used to estimate time remaining when evaluating, and in the case of generative evaluations, to group similar samples together.
For generative tasks, self._sorting_criteria outputs the generation parameters used for grouping. In the current function, we create evaluation groups by generation parameters (logits and eos) so that samples with similar properties get batched together; the samples are then further organised by length within each split.
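An illustrative sketch of the grouping idea (not the actual implementation), assuming each sample exposes use_logits, stop_sequences, and query as on Doc:

from itertools import groupby

def group_by_generation_params(samples):
    # Group samples that share generation parameters, then sort each group
    # by length so that batches contain samples of similar size.
    key = lambda s: (s.use_logits, tuple(s.stop_sequences or ()))
    for _, group in groupby(sorted(samples, key=key), key=key):
        yield sorted(group, key=lambda s: len(s.query), reverse=True)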
( requests: list num_dataset_splits: int )
( dataset: Dataset num_replicas: typing.Optional[int] = None rank: typing.Optional[int] = None shuffle: bool = True seed: int = 0 drop_last: bool = False )
A distributed sampler that copies the last element only when drop_last is False, so we keep a small amount of padding in the batches, as our samples are sorted by length.
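A sketch of the padding behaviour described above (illustrative, not the sampler's actual code): when drop_last is False, the last index is repeated instead of wrapping around to the start, so length-ordered batches stay homogeneous.

import math

def pad_indices(num_samples: int, num_replicas: int) -> list[int]:
    indices = list(range(num_samples))
    total_size = math.ceil(num_samples / num_replicas) * num_replicas
    # Repeat the last element rather than cycling back to the shorter samples
    # at the other end of the length-sorted dataset.
    indices += [indices[-1]] * (total_size - len(indices))
    return indices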