Model configs define the model to evaluate and its parameters. All parameters can be set either through the model-args command-line argument or in a model YAML file (see the example files in the lighteval repository).
ModelConfig(
    generation_parameters: GenerationParameters = GenerationParameters(num_blocks=None, block_size=None, early_stopping=None, repetition_penalty=None, frequency_penalty=None, length_penalty=None, presence_penalty=None, max_new_tokens=None, min_new_tokens=None, seed=None, stop_tokens=None, temperature=0, top_k=None, min_p=None, top_p=None, truncate_prompt=None, cache_implementation=None, response_format=None),
    system_prompt: str | None = None,
    cache_dir: str = '~/.cache/huggingface/lighteval'
)
Base configuration class for all model types in Lighteval.
This is the foundation class that all specific model configurations inherit from. It provides common functionality for parsing configuration from files and command-line arguments, as well as shared attributes that are used by all models like generation parameters and system prompts.
Methods:
from_path(path: str): Load configuration from a YAML file.
from_args(args: str): Parse configuration from a command-line argument string.
_parse_args(args: str): Static method to parse argument strings into configuration dictionaries.
Example:
# Load from YAML file
config = ModelConfig.from_path("model_config.yaml")
# Load from command line arguments
config = ModelConfig.from_args("model_name=meta-llama/Llama-3.1-8B-Instruct,system_prompt='You are a helpful assistant.',generation_parameters={temperature=0.7}")
# Direct instantiation
config = ModelConfig(
    model_name="meta-llama/Llama-3.1-8B-Instruct",
    generation_parameters=GenerationParameters(temperature=0.7),
    system_prompt="You are a helpful assistant."
)
TransformersModelConfig(
    generation_parameters: GenerationParameters = GenerationParameters(num_blocks=None, block_size=None, early_stopping=None, repetition_penalty=None, frequency_penalty=None, length_penalty=None, presence_penalty=None, max_new_tokens=None, min_new_tokens=None, seed=None, stop_tokens=None, temperature=0, top_k=None, min_p=None, top_p=None, truncate_prompt=None, cache_implementation=None, response_format=None),
    system_prompt: str | None = None,
    cache_dir: str = '~/.cache/huggingface/lighteval',
    model_name: str,
    tokenizer: str | None = None,
    subfolder: str | None = None,
    revision: str = 'main',
    batch_size: typing.Optional[typing.Annotated[int, Gt(gt=0)]] = None,
    max_length: typing.Optional[typing.Annotated[int, Gt(gt=0)]] = None,
    model_loading_kwargs: dict = <factory>,
    add_special_tokens: bool = True,
    skip_special_tokens: bool = True,
    model_parallel: bool | None = None,
    dtype: str | None = None,
    device: typing.Union[int, str] = 'cuda',
    trust_remote_code: bool = False,
    compile: bool = False,
    multichoice_continuations_start_space: bool | None = None,
    pairwise_tokenization: bool = False,
    continuous_batching: bool = False,
    override_chat_template: bool = None
)
Configuration class for HuggingFace Transformers models. The model_name value is passed as the pretrained_model_name_or_path argument to HuggingFace's from_pretrained method, and model_loading_kwargs holds extra keyword arguments forwarded to from_pretrained (defaults to an empty dict).
This configuration is used to load and configure models from the HuggingFace Transformers library.
Example:
config = TransformersModelConfig(
    model_name="meta-llama/Llama-3.1-8B-Instruct",
    batch_size=4,
    dtype="float16",
    generation_parameters=GenerationParameters(
        temperature=0.7,
        max_new_tokens=100
    )
)
Note: This configuration supports quantization (4-bit and 8-bit) through the dtype parameter. When using quantization, ensure you have the required dependencies installed (bitsandbytes for 4-bit/8-bit quantization).
AdapterModelConfig(
    generation_parameters: GenerationParameters = GenerationParameters(num_blocks=None, block_size=None, early_stopping=None, repetition_penalty=None, frequency_penalty=None, length_penalty=None, presence_penalty=None, max_new_tokens=None, min_new_tokens=None, seed=None, stop_tokens=None, temperature=0, top_k=None, min_p=None, top_p=None, truncate_prompt=None, cache_implementation=None, response_format=None),
    system_prompt: str | None = None,
    cache_dir: str = '~/.cache/huggingface/lighteval',
    model_name: str,
    tokenizer: str | None = None,
    subfolder: str | None = None,
    revision: str = 'main',
    batch_size: typing.Optional[typing.Annotated[int, Gt(gt=0)]] = None,
    max_length: typing.Optional[typing.Annotated[int, Gt(gt=0)]] = None,
    model_loading_kwargs: dict = <factory>,
    add_special_tokens: bool = True,
    skip_special_tokens: bool = True,
    model_parallel: bool | None = None,
    dtype: str | None = None,
    device: typing.Union[int, str] = 'cuda',
    trust_remote_code: bool = False,
    compile: bool = False,
    multichoice_continuations_start_space: bool | None = None,
    pairwise_tokenization: bool = False,
    continuous_batching: bool = False,
    override_chat_template: bool = None,
    base_model: str
)
Configuration class for PEFT (Parameter-Efficient Fine-Tuning) adapter models.
This configuration is used to load models that have been fine-tuned using PEFT adapters, such as LoRA, AdaLoRA, or other parameter-efficient fine-tuning methods. The adapter weights are merged with the base model during loading for efficient inference.
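Example (a minimal instantiation sketch built from the fields in the signature above; the class name and repository ids are illustrative assumptions, with model_name taken here to be the adapter weights and base_model the base checkpoint):
config = AdapterModelConfig(
    model_name="your-username/llama-3.1-8b-lora",      # hypothetical adapter weights repository
    base_model="meta-llama/Llama-3.1-8B-Instruct",     # base model the adapter is merged into
    dtype="float16",
    generation_parameters=GenerationParameters(
        temperature=0.7,
        max_new_tokens=100
    )
)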
Note: Adapter models require the peft library to be installed: pip install lighteval[adapters]
DeltaModelConfig(
    generation_parameters: GenerationParameters = GenerationParameters(num_blocks=None, block_size=None, early_stopping=None, repetition_penalty=None, frequency_penalty=None, length_penalty=None, presence_penalty=None, max_new_tokens=None, min_new_tokens=None, seed=None, stop_tokens=None, temperature=0, top_k=None, min_p=None, top_p=None, truncate_prompt=None, cache_implementation=None, response_format=None),
    system_prompt: str | None = None,
    cache_dir: str = '~/.cache/huggingface/lighteval',
    model_name: str,
    tokenizer: str | None = None,
    subfolder: str | None = None,
    revision: str = 'main',
    batch_size: typing.Optional[typing.Annotated[int, Gt(gt=0)]] = None,
    max_length: typing.Optional[typing.Annotated[int, Gt(gt=0)]] = None,
    model_loading_kwargs: dict = <factory>,
    add_special_tokens: bool = True,
    skip_special_tokens: bool = True,
    model_parallel: bool | None = None,
    dtype: str | None = None,
    device: typing.Union[int, str] = 'cuda',
    trust_remote_code: bool = False,
    compile: bool = False,
    multichoice_continuations_start_space: bool | None = None,
    pairwise_tokenization: bool = False,
    continuous_batching: bool = False,
    override_chat_template: bool = None,
    base_model: str
)
Configuration class for delta models (weight difference models).
This configuration is used to load models that represent the difference between a fine-tuned model and its base model. The delta weights are added to the base model during loading to reconstruct the full fine-tuned model.
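Example (a minimal sketch; the class name and repository ids are placeholders, and it is assumed that model_name points at the delta weights while base_model is the original model the delta is added to):
config = DeltaModelConfig(
    model_name="your-username/llama-3.1-8b-delta",   # hypothetical delta-weights repository
    base_model="meta-llama/Llama-3.1-8B",            # base model the delta weights are applied to
    dtype="float16"
)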
VLLMModelConfig(
    generation_parameters: GenerationParameters = GenerationParameters(num_blocks=None, block_size=None, early_stopping=None, repetition_penalty=None, frequency_penalty=None, length_penalty=None, presence_penalty=None, max_new_tokens=None, min_new_tokens=None, seed=None, stop_tokens=None, temperature=0, top_k=None, min_p=None, top_p=None, truncate_prompt=None, cache_implementation=None, response_format=None),
    system_prompt: str | None = None,
    cache_dir: str = '~/.cache/huggingface/lighteval',
    model_name: str,
    revision: str = 'main',
    dtype: str = 'bfloat16',
    tensor_parallel_size: typing.Annotated[int, Gt(gt=0)] = 1,
    data_parallel_size: typing.Annotated[int, Gt(gt=0)] = 1,
    pipeline_parallel_size: typing.Annotated[int, Gt(gt=0)] = 1,
    gpu_memory_utilization: typing.Annotated[float, Ge(ge=0)] = 0.9,
    max_model_length: typing.Optional[typing.Annotated[int, Gt(gt=0)]] = None,
    quantization: str | None = None,
    load_format: str | None = None,
    swap_space: typing.Annotated[int, Gt(gt=0)] = 4,
    seed: typing.Annotated[int, Ge(ge=0)] = 1234,
    trust_remote_code: bool = False,
    add_special_tokens: bool = True,
    multichoice_continuations_start_space: bool = True,
    pairwise_tokenization: bool = False,
    max_num_seqs: typing.Annotated[int, Gt(gt=0)] = 128,
    max_num_batched_tokens: typing.Annotated[int, Gt(gt=0)] = 2048,
    subfolder: str | None = None,
    is_async: bool = False,
    override_chat_template: bool = None
)
Configuration class for VLLM inference engine.
This configuration is used to load and configure models using the VLLM inference engine, which provides high-performance inference for large language models with features like PagedAttention, continuous batching, and efficient memory management.
vllm doc: https://docs.vllm.ai/en/v0.7.1/serving/engine_args.html
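Example (a minimal instantiation sketch using fields from the signature above, assuming the class name VLLMModelConfig; the model id and values are illustrative only):
config = VLLMModelConfig(
    model_name="meta-llama/Llama-3.1-8B-Instruct",
    dtype="bfloat16",
    tensor_parallel_size=2,        # shard the model across 2 GPUs
    gpu_memory_utilization=0.9,
    max_model_length=4096,
    generation_parameters=GenerationParameters(
        temperature=0.7,
        max_new_tokens=100
    )
)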
SGLangModelConfig(
    generation_parameters: GenerationParameters = GenerationParameters(num_blocks=None, block_size=None, early_stopping=None, repetition_penalty=None, frequency_penalty=None, length_penalty=None, presence_penalty=None, max_new_tokens=None, min_new_tokens=None, seed=None, stop_tokens=None, temperature=0, top_k=None, min_p=None, top_p=None, truncate_prompt=None, cache_implementation=None, response_format=None),
    system_prompt: str | None = None,
    cache_dir: str = '~/.cache/huggingface/lighteval',
    model_name: str,
    load_format: str = 'auto',
    dtype: str = 'auto',
    tp_size: typing.Annotated[int, Gt(gt=0)] = 1,
    dp_size: typing.Annotated[int, Gt(gt=0)] = 1,
    context_length: typing.Optional[typing.Annotated[int, Gt(gt=0)]] = None,
    random_seed: typing.Optional[typing.Annotated[int, Gt(gt=0)]] = 1234,
    trust_remote_code: bool = False,
    device: str = 'cuda',
    skip_tokenizer_init: bool = False,
    kv_cache_dtype: str = 'auto',
    add_special_tokens: bool = True,
    pairwise_tokenization: bool = False,
    sampling_backend: str | None = None,
    attention_backend: str | None = None,
    mem_fraction_static: typing.Annotated[float, Gt(gt=0)] = 0.8,
    chunked_prefill_size: typing.Annotated[int, Gt(gt=0)] = 4096,
    override_chat_template: bool = None
)
Configuration class for SGLang inference engine.
This configuration is used to load and configure models using the SGLang inference engine, which provides high-performance inference.
sglang doc: https://docs.sglang.ai/index.html#
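Example (a minimal instantiation sketch using fields from the signature above, assuming the class name SGLangModelConfig; the model id and values are illustrative only):
config = SGLangModelConfig(
    model_name="meta-llama/Llama-3.1-8B-Instruct",
    dtype="bfloat16",
    tp_size=2,                     # tensor parallelism across 2 GPUs
    mem_fraction_static=0.8,
    context_length=4096
)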
DummyModelConfig(
    generation_parameters: GenerationParameters = GenerationParameters(num_blocks=None, block_size=None, early_stopping=None, repetition_penalty=None, frequency_penalty=None, length_penalty=None, presence_penalty=None, max_new_tokens=None, min_new_tokens=None, seed=None, stop_tokens=None, temperature=0, top_k=None, min_p=None, top_p=None, truncate_prompt=None, cache_implementation=None, response_format=None),
    system_prompt: str | None = None,
    cache_dir: str = '~/.cache/huggingface/lighteval',
    model_name: str = 'dummy',
    seed: int = 42
)
Configuration class for dummy models used for testing and baselines.
This configuration is used to create dummy models that generate random responses or baselines for evaluation purposes. Useful for testing evaluation pipelines without requiring actual model inference.
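Example (a minimal sketch, assuming the class name DummyModelConfig; model_name defaults to 'dummy', so only the random seed is set here):
config = DummyModelConfig(seed=42)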
InferenceProvidersModelConfig(
    generation_parameters: GenerationParameters = GenerationParameters(num_blocks=None, block_size=None, early_stopping=None, repetition_penalty=None, frequency_penalty=None, length_penalty=None, presence_penalty=None, max_new_tokens=None, min_new_tokens=None, seed=None, stop_tokens=None, temperature=0, top_k=None, min_p=None, top_p=None, truncate_prompt=None, cache_implementation=None, response_format=None),
    system_prompt: str | None = None,
    cache_dir: str = '~/.cache/huggingface/lighteval',
    model_name: str,
    provider: str,
    timeout: int | None = None,
    proxies: typing.Optional[typing.Any] = None,
    org_to_bill: str | None = None,
    parallel_calls_count: typing.Annotated[int, Ge(ge=0)] = 10
)
Configuration class for HuggingFace’s inference providers (like Together AI, Anyscale, etc.).
inference providers doc: https://huggingface.co/docs/inference-providers/en/index
Example:
config = InferenceProvidersModelConfig(
    model_name="deepseek-ai/DeepSeek-R1-0528",
    provider="together",
    parallel_calls_count=5,
    generation_parameters=GenerationParameters(
        temperature=0.7,
        max_new_tokens=100
    )
)
InferenceEndpointModelConfig(
    generation_parameters: GenerationParameters = GenerationParameters(num_blocks=None, block_size=None, early_stopping=None, repetition_penalty=None, frequency_penalty=None, length_penalty=None, presence_penalty=None, max_new_tokens=None, min_new_tokens=None, seed=None, stop_tokens=None, temperature=0, top_k=None, min_p=None, top_p=None, truncate_prompt=None, cache_implementation=None, response_format=None),
    system_prompt: str | None = None,
    cache_dir: str = '~/.cache/huggingface/lighteval',
    endpoint_name: str | None = None,
    model_name: str | None = None,
    reuse_existing: bool = False,
    accelerator: str = 'gpu',
    dtype: str | None = None,
    vendor: str = 'aws',
    region: str = 'us-east-1',
    instance_size: str | None = None,
    instance_type: str | None = None,
    framework: str = 'pytorch',
    endpoint_type: str = 'protected',
    add_special_tokens: bool = True,
    revision: str = 'main',
    namespace: str | None = None,
    image_url: str | None = None,
    env_vars: dict | None = None,
    batch_size: int = 1
)
Configuration class for HuggingFace Inference Endpoints (dedicated infrastructure).
This configuration is used to create and manage dedicated inference endpoints on HuggingFace’s infrastructure. These endpoints provide dedicated compute resources and can handle larger batch sizes and higher throughput.
Methods:
model_post_init(): Validates configuration and ensures proper parameter combinations.
get_dtype_args(): Returns environment variables for dtype configuration.
get_custom_env_vars(): Returns custom environment variables for the endpoint.
Example:
config = InferenceEndpointModelConfig(
    model_name="microsoft/DialoGPT-medium",
    instance_type="nvidia-a100",
    instance_size="x1",
    vendor="aws",
    region="us-east-1",
    dtype="float16",
    generation_parameters=GenerationParameters(
        temperature=0.7,
        max_new_tokens=100
    )
)
ServerlessEndpointModelConfig(
    generation_parameters: GenerationParameters = GenerationParameters(num_blocks=None, block_size=None, early_stopping=None, repetition_penalty=None, frequency_penalty=None, length_penalty=None, presence_penalty=None, max_new_tokens=None, min_new_tokens=None, seed=None, stop_tokens=None, temperature=0, top_k=None, min_p=None, top_p=None, truncate_prompt=None, cache_implementation=None, response_format=None),
    system_prompt: str | None = None,
    cache_dir: str = '~/.cache/huggingface/lighteval',
    model_name: str,
    add_special_tokens: bool = True,
    batch_size: int = 1
)
Configuration class for the HuggingFace Inference API (serverless endpoints).
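Example (a minimal instantiation sketch; the class name ServerlessEndpointModelConfig and the model id are assumptions based on the fields in the signature above):
config = ServerlessEndpointModelConfig(
    model_name="meta-llama/Llama-3.1-8B-Instruct",
    batch_size=1,
    generation_parameters=GenerationParameters(
        temperature=0.7,
        max_new_tokens=100
    )
)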
TGIModelConfig(
    generation_parameters: GenerationParameters = GenerationParameters(num_blocks=None, block_size=None, early_stopping=None, repetition_penalty=None, frequency_penalty=None, length_penalty=None, presence_penalty=None, max_new_tokens=None, min_new_tokens=None, seed=None, stop_tokens=None, temperature=0, top_k=None, min_p=None, top_p=None, truncate_prompt=None, cache_implementation=None, response_format=None),
    system_prompt: str | None = None,
    cache_dir: str = '~/.cache/huggingface/lighteval',
    inference_server_address: str | None = None,
    inference_server_auth: str | None = None,
    model_name: str | None,
    model_info: dict | None = None,
    batch_size: int = 1
)
Configuration class for Text Generation Inference (TGI) backend.
doc: https://huggingface.co/docs/text-generation-inference/en/index
This configuration is used to connect to TGI servers that serve HuggingFace models using the text-generation-inference library. TGI provides high-performance inference with features like continuous batching and efficient memory management.
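Example (a minimal sketch for pointing lighteval at an already running TGI server, assuming the class name TGIModelConfig; the server address and model id are placeholders):
config = TGIModelConfig(
    inference_server_address="http://localhost:8080",   # address of a running TGI server (placeholder)
    inference_server_auth=None,                          # optional auth token for the server
    model_name="meta-llama/Llama-3.1-8B-Instruct"        # model served by that TGI instance (placeholder)
)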
LiteLLMModelConfig(
    generation_parameters: GenerationParameters = GenerationParameters(num_blocks=None, block_size=None, early_stopping=None, repetition_penalty=None, frequency_penalty=None, length_penalty=None, presence_penalty=None, max_new_tokens=None, min_new_tokens=None, seed=None, stop_tokens=None, temperature=0, top_k=None, min_p=None, top_p=None, truncate_prompt=None, cache_implementation=None, response_format=None),
    system_prompt: str | None = None,
    cache_dir: str = '~/.cache/huggingface/lighteval',
    model_name: str,
    provider: str | None = None,
    base_url: str | None = None,
    api_key: str | None = None,
    concurrent_requests: int = 10
)
Configuration class for LiteLLM unified API client.
This configuration is used to connect to various LLM providers through the LiteLLM unified API. LiteLLM provides a consistent interface to multiple providers including OpenAI, Anthropic, Google, and many others.
litellm doc: https://docs.litellm.ai/docs/
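Example (a minimal instantiation sketch using the fields from the signature above, assuming the class name LiteLLMModelConfig; the model id and provider are placeholders, and credential handling is not covered here):
config = LiteLLMModelConfig(
    model_name="gpt-4o",            # placeholder LiteLLM model identifier
    provider="openai",              # placeholder provider name
    base_url=None,                  # optional custom API base URL
    concurrent_requests=10,
    generation_parameters=GenerationParameters(
        temperature=0.7,
        max_new_tokens=100
    )
)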
CustomModelConfig(
    generation_parameters: GenerationParameters = GenerationParameters(num_blocks=None, block_size=None, early_stopping=None, repetition_penalty=None, frequency_penalty=None, length_penalty=None, presence_penalty=None, max_new_tokens=None, min_new_tokens=None, seed=None, stop_tokens=None, temperature=0, top_k=None, min_p=None, top_p=None, truncate_prompt=None, cache_implementation=None, response_format=None),
    system_prompt: str | None = None,
    cache_dir: str = '~/.cache/huggingface/lighteval',
    model_name: str,
    model_definition_file_path: str
)
Configuration class for loading custom model implementations in Lighteval.
This config allows users to define and load their own model implementations by specifying a Python file containing a custom model class that inherits from LightevalModel.
The custom model file should contain exactly one class that inherits from LightevalModel. This class will be automatically detected and instantiated when loading the model.
Example usage:
# Define config
config = CustomModelConfig(
    model_name="my-custom-model",
    model_definition_file_path="path/to/my_model.py"
)
# Example custom model file (my_model.py):
from lighteval.models.abstract_model import LightevalModel
from lighteval.models.model_output import ModelResponse  # import paths may vary with the lighteval version
from lighteval.tasks.requests import Doc

class MyCustomModel(LightevalModel):
    def __init__(self, config, env_config):
        super().__init__(config, env_config)
        # Custom initialization...

    def greedy_until(self, docs: list[Doc]) -> list[ModelResponse]:
        # Custom generation logic...
        pass

    def loglikelihood(self, docs: list[Doc]) -> list[ModelResponse]:
        pass

An example of a custom model can be found in examples/custom_models/google_translate_model.py.