The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error.
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 2 new columns ({'config.backend.half', 'config.backend.reshape'}) and 95 missing columns ({'report.decode.energy.total', 'report.decode.latency.unit', 'report.load.memory.max_process_vram', 'config.backend.auto_calibration', 'report.prefill.memory.max_reserved', 'report.per_token.latency.p95', 'report.load.energy', 'report.decode.efficiency.unit', 'report.per_token.energy', 'report.load.memory.max_global_vram', 'report.prefill.latency.p95', 'report.prefill.latency.stdev', 'report.prefill.throughput.value', 'report.decode.latency.values', 'report.decode.energy.ram', 'report.load.latency.count', 'report.decode.latency.p95', 'report.prefill.latency.stdev_', 'report.load.latency.values', 'report.load.latency.p99', 'report.per_token.latency.stdev_', 'report.decode.latency.count', 'report.load.latency.unit', 'report.decode.memory.unit', 'report.decode.latency.stdev', 'report.decode.throughput.value', 'report.per_token.latency.count', 'report.load.latency.p95', 'report.load.latency.p90', 'report.decode.energy.cpu', 'report.per_token.throughput.value', 'report.prefill.energy.unit', 'report.per_token.latency.p50', 'report.decode.energy.gpu', 'report.per_token.efficiency', 'report.load.efficiency', 'report.per_token.latency.p90', 'report.prefill.latency.unit', 'report.load.throughput', 'report.per_token.latency.p99', 'report.decode.memory.max_reserved', 'report.load.latency.stdev_', 'report.prefill.latency.p99', 'report.decode.latency.stdev_', 'report.load.latency.mean', 'report.load.memory.max_allocated', 'report.decode.energy.unit', 'report.prefill.energy.gpu', 'report.pr
...
'report.per_token.memory', 'report.load.latency.stdev', 'config.backend.optimization', 'report.load.memory.max_reserved', 'report.prefill.memory.max_ram', 'report.prefill.latency.values', 'report.per_token.throughput.unit', 'report.prefill.memory.unit', 'report.prefill.energy.total', 'report.prefill.latency.p50', 'report.prefill.efficiency.value', 'report.per_token.latency.total', 'report.prefill.memory.max_process_vram', 'report.prefill.latency.mean', 'report.prefill.memory.max_allocated', 'report.prefill.latency.total', 'report.per_token.latency.unit', 'report.decode.memory.max_global_vram', 'config.backend.use_io_binding', 'report.prefill.energy.ram', 'report.decode.latency.mean', 'report.load.latency.total', 'report.decode.throughput.unit', 'report.decode.latency.p50', 'report.decode.latency.p99', 'report.prefill.memory.max_global_vram', 'report.load.memory.max_ram', 'report.decode.latency.p90', 'report.prefill.latency.p90', 'report.decode.latency.total', 'config.backend.torch_dtype', 'config.backend.auto_quantization', 'report.prefill.throughput.unit', 'report.per_token.latency.stdev', 'report.load.latency.p50', 'report.load.memory.unit', 'report.decode.efficiency.value', 'report.prefill.efficiency.unit', 'report.decode.memory.max_ram', 'config.backend.auto_optimization', 'report.per_token.latency.mean', 'config.backend.provider', 'report.decode.memory.max_allocated', 'report.decode.memory.max_process_vram', 'report.prefill.energy.cpu', 'report.per_token.latency.values'}).
This happened while the csv dataset builder was generating data using
hf://datasets/optimum-benchmark/llm-perf-leaderboard/data/perf-df-openvino-cpu-unquantized-32vCPU-C7i.csv (at revision 94a9713e1842c87029a1dcc829e0003533a6d275)
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1870, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 622, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2292, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2240, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              config.name: string
              config.backend.name: string
              config.backend.version: string
              config.backend._target_: string
              config.backend.task: string
              config.backend.library: string
              config.backend.model_type: string
              config.backend.model: string
              config.backend.processor: string
              config.backend.device: string
              config.backend.device_ids: int64
              config.backend.seed: int64
              config.backend.inter_op_num_threads: double
              config.backend.intra_op_num_threads: double
              config.backend.model_kwargs.trust_remote_code: bool
              config.backend.no_weights: bool
              config.backend.export: bool
              config.backend.use_cache: bool
              config.backend.use_merged: bool
              config.backend.half: bool
              config.backend.reshape: bool
              config.backend.quantization: bool
              config.backend.calibration: bool
              config.scenario.name: string
              config.scenario._target_: string
              config.scenario.iterations: int64
              config.scenario.duration: int64
              config.scenario.warmup_runs: int64
              config.scenario.input_shapes.batch_size: int64
              config.scenario.input_shapes.num_choices: double
              config.scenario.input_shapes.sequence_length: int64
              config.scenario.new_tokens: double
              config.scenario.memory: bool
              config.scenario.latency: bool
              config.scenario.energy: bool
              config.scenario.generate_kwargs.max_new_tokens: int64
              config.scenario.generate_kwargs.min_new_tokens: int64
              config.launcher.name: string
              config.launcher._target_: string
              config.launcher.device_isolation: bool
              config.launcher.device_isolation_action: double
              config.launcher.numactl: bool
              config.launcher.start_method: string
              config.environment.cpu: string
              config.environment.cpu_count: int64
              config.environment.cpu_ram_mb: double
              config.environment.system: string
              config.environment.machine: string
              config.environment.platform: string
              config.environment.processor: string
              config.environment.python_version: string
              config.environment.optimum_benchmark_version: string
              config.environment.optimum_benchmark_commit: double
              config.environment.transformers_version: string
              config.environment.transformers_commit: double
              config.environment.accelerate_version: string
              config.environment.accelerate_commit: double
              config.environment.diffusers_version: double
              config.environment.diffusers_commit: double
              config.environment.optimum_version: string
              config.environment.optimum_commit: double
              config.environment.timm_version: double
              config.environment.timm_commit: double
              config.environment.peft_version: double
              config.environment.peft_commit: double
              config.print_report: bool
              config.log_report: bool
              report.traceback: string
              -- schema metadata --
              pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 10798
              to
              {'config.name': Value(dtype='string', id=None), 'config.backend.name': Value(dtype='string', id=None), 'config.backend.version': Value(dtype='string', id=None), 'config.backend._target_': Value(dtype='string', id=None), 'config.backend.task': Value(dtype='string', id=None), 'config.backend.library': Value(dtype='string', id=None), 'config.backend.model_type': Value(dtype='string', id=None), 'config.backend.model': Value(dtype='string', id=None), 'config.backend.processor': Value(dtype='string', id=None), 'config.backend.device': Value(dtype='string', id=None), 'config.backend.device_ids': Value(dtype='int64', id=None), 'config.backend.seed': Value(dtype='int64', id=None), 'config.backend.inter_op_num_threads': Value(dtype='float64', id=None), 'config.backend.intra_op_num_threads': Value(dtype='float64', id=None), 'config.backend.model_kwargs.trust_remote_code': Value(dtype='bool', id=None), 'config.backend.no_weights': Value(dtype='bool', id=None), 'config.backend.export': Value(dtype='bool', id=None), 'config.backend.use_cache': Value(dtype='bool', id=None), 'config.backend.use_merged': Value(dtype='bool', id=None), 'config.backend.torch_dtype': Value(dtype='string', id=None), 'config.backend.provider': Value(dtype='string', id=None), 'config.backend.use_io_binding': Value(dtype='bool', id=None), 'config.backend.auto_optimization': Value(dtype='float64', id=None), 'config.backend.auto_quantization': Value(dtype='float64', id=None), 'config.backend.auto_calibration': Value(dt
              ...
              ', id=None), 'report.decode.energy.unit': Value(dtype='string', id=None), 'report.decode.energy.cpu': Value(dtype='float64', id=None), 'report.decode.energy.ram': Value(dtype='float64', id=None), 'report.decode.energy.gpu': Value(dtype='float64', id=None), 'report.decode.energy.total': Value(dtype='float64', id=None), 'report.decode.efficiency.unit': Value(dtype='string', id=None), 'report.decode.efficiency.value': Value(dtype='float64', id=None), 'report.per_token.memory': Value(dtype='float64', id=None), 'report.per_token.latency.unit': Value(dtype='string', id=None), 'report.per_token.latency.values': Value(dtype='string', id=None), 'report.per_token.latency.count': Value(dtype='float64', id=None), 'report.per_token.latency.total': Value(dtype='float64', id=None), 'report.per_token.latency.mean': Value(dtype='float64', id=None), 'report.per_token.latency.p50': Value(dtype='float64', id=None), 'report.per_token.latency.p90': Value(dtype='float64', id=None), 'report.per_token.latency.p95': Value(dtype='float64', id=None), 'report.per_token.latency.p99': Value(dtype='float64', id=None), 'report.per_token.latency.stdev': Value(dtype='float64', id=None), 'report.per_token.latency.stdev_': Value(dtype='float64', id=None), 'report.per_token.throughput.unit': Value(dtype='string', id=None), 'report.per_token.throughput.value': Value(dtype='float64', id=None), 'report.per_token.energy': Value(dtype='float64', id=None), 'report.per_token.efficiency': Value(dtype='float64', id=None)}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1417, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1049, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 924, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1000, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1741, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1872, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              
              All the data files must have the same columns, but at some point there are 2 new columns ({'config.backend.half', 'config.backend.reshape'}) and 95 missing columns ({'report.decode.energy.total', 'report.decode.latency.unit', 'report.load.memory.max_process_vram', 'config.backend.auto_calibration', 'report.prefill.memory.max_reserved', 'report.per_token.latency.p95', 'report.load.energy', 'report.decode.efficiency.unit', 'report.per_token.energy', 'report.load.memory.max_global_vram', 'report.prefill.latency.p95', 'report.prefill.latency.stdev', 'report.prefill.throughput.value', 'report.decode.latency.values', 'report.decode.energy.ram', 'report.load.latency.count', 'report.decode.latency.p95', 'report.prefill.latency.stdev_', 'report.load.latency.values', 'report.load.latency.p99', 'report.per_token.latency.stdev_', 'report.decode.latency.count', 'report.load.latency.unit', 'report.decode.memory.unit', 'report.decode.latency.stdev', 'report.decode.throughput.value', 'report.per_token.latency.count', 'report.load.latency.p95', 'report.load.latency.p90', 'report.decode.energy.cpu', 'report.per_token.throughput.value', 'report.prefill.energy.unit', 'report.per_token.latency.p50', 'report.decode.energy.gpu', 'report.per_token.efficiency', 'report.load.efficiency', 'report.per_token.latency.p90', 'report.prefill.latency.unit', 'report.load.throughput', 'report.per_token.latency.p99', 'report.decode.memory.max_reserved', 'report.load.latency.stdev_', 'report.prefill.latency.p99', 'report.decode.latency.stdev_', 'report.load.latency.mean', 'report.load.memory.max_allocated', 'report.decode.energy.unit', 'report.prefill.energy.gpu', 'report.pr
              ...
              'report.per_token.memory', 'report.load.latency.stdev', 'config.backend.optimization', 'report.load.memory.max_reserved', 'report.prefill.memory.max_ram', 'report.prefill.latency.values', 'report.per_token.throughput.unit', 'report.prefill.memory.unit', 'report.prefill.energy.total', 'report.prefill.latency.p50', 'report.prefill.efficiency.value', 'report.per_token.latency.total', 'report.prefill.memory.max_process_vram', 'report.prefill.latency.mean', 'report.prefill.memory.max_allocated', 'report.prefill.latency.total', 'report.per_token.latency.unit', 'report.decode.memory.max_global_vram', 'config.backend.use_io_binding', 'report.prefill.energy.ram', 'report.decode.latency.mean', 'report.load.latency.total', 'report.decode.throughput.unit', 'report.decode.latency.p50', 'report.decode.latency.p99', 'report.prefill.memory.max_global_vram', 'report.load.memory.max_ram', 'report.decode.latency.p90', 'report.prefill.latency.p90', 'report.decode.latency.total', 'config.backend.torch_dtype', 'config.backend.auto_quantization', 'report.prefill.throughput.unit', 'report.per_token.latency.stdev', 'report.load.latency.p50', 'report.load.memory.unit', 'report.decode.efficiency.value', 'report.prefill.efficiency.unit', 'report.decode.memory.max_ram', 'config.backend.auto_optimization', 'report.per_token.latency.mean', 'config.backend.provider', 'report.decode.memory.max_allocated', 'report.decode.memory.max_process_vram', 'report.prefill.energy.cpu', 'report.per_token.latency.values'}).
              
              This happened while the csv dataset builder was generating data using
              
              hf://datasets/optimum-benchmark/llm-perf-leaderboard/data/perf-df-openvino-cpu-unquantized-32vCPU-C7i.csv (at revision 94a9713e1842c87029a1dcc829e0003533a6d275)
              
              Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).
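Of the two fixes suggested above, the first (give every data file the same columns) can be done offline before re-uploading; the second (separate configurations) is declared in the dataset card, as described at the manual-configuration link. Below is a minimal sketch of the first option, assuming pandas and local copies of the CSVs; the reference file name is a hypothetical placeholder for any data file that already carries the full shared schema.

```python
# Hedged sketch: align the OpenVINO CSV named in the error with the shared schema.
# "reference.csv" is a hypothetical stand-in for any CSV that already has the full
# column set; reindex() adds the 95 missing columns as empty values and drops the
# two extra ones (config.backend.half, config.backend.reshape).
import pandas as pd

reference = pd.read_csv("data/reference.csv")  # hypothetical template file
outlier = pd.read_csv("data/perf-df-openvino-cpu-unquantized-32vCPU-C7i.csv")

aligned = outlier.reindex(columns=reference.columns)
aligned.to_csv("data/perf-df-openvino-cpu-unquantized-32vCPU-C7i.csv", index=False)
```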
Preview schema (column: dtype):

config.name: string
config.backend: name string, version string, _target_ string, task string, library string, model_type string, model string, processor string, device string, device_ids int64, seed int64, inter_op_num_threads null, intra_op_num_threads null, model_kwargs.trust_remote_code bool, no_weights bool, export bool, use_cache bool, use_merged bool, torch_dtype string, provider string, use_io_binding bool, auto_optimization null, auto_quantization null, auto_calibration null, optimization bool, quantization bool, calibration bool
config.scenario: name string, _target_ string, iterations int64, duration int64, warmup_runs int64, input_shapes.batch_size int64, input_shapes.num_choices int64, input_shapes.sequence_length int64, new_tokens null, memory bool, latency bool, energy bool, generate_kwargs.max_new_tokens int64, generate_kwargs.min_new_tokens int64
config.launcher: name string, _target_ string, device_isolation bool, device_isolation_action null, numactl bool, start_method string
config.environment: cpu string, cpu_count int64, cpu_ram_mb float64, system string, machine string, platform string, processor string, python_version string, optimum_benchmark_version string, optimum_benchmark_commit string, transformers_version string, transformers_commit null, accelerate_version string, accelerate_commit null, diffusers_version null, diffusers_commit null, optimum_version string, optimum_commit null, timm_version null, timm_commit null, peft_version null, peft_commit null
config.print_report: bool
config.log_report: bool
report.traceback: string
report.load, report.prefill, report.decode, report.per_token: every metric sub-column (memory.*, latency.* including p50/p90/p95/p99/stdev, throughput.*, energy.*, efficiency.*) is typed null in this preview.
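Before the rows, a quick way to compare the offending file's columns against this schema is to read it straight from the Hub; a hedged sketch, assuming pandas and huggingface_hub (which provides the hf:// filesystem) are installed:

```python
# Hedged sketch: list the columns of the CSV named in the error above, reading it
# directly from the Hub. Requires pandas plus huggingface_hub for hf:// support.
import pandas as pd

url = "hf://datasets/optimum-benchmark/llm-perf-leaderboard/data/perf-df-openvino-cpu-unquantized-32vCPU-C7i.csv"
df = pd.read_csv(url)
print(len(df.columns))
print(sorted(df.columns))
```

The preview rows follow.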
Row 1
config.name: float32-sdpa-onnxruntime
config.backend: name=onnxruntime, version=ort:1.19.2, _target_=optimum_benchmark.backends.onnxruntime.backend.ORTBackend, task=text-generation, library=transformers, model_type=llama, model=meta-llama/Meta-Llama-3-8B, processor=meta-llama/Meta-Llama-3-8B, device=cpu, device_ids=0, seed=42, inter_op_num_threads=null, intra_op_num_threads=null, model_kwargs.trust_remote_code=true, no_weights=true, export=true, use_cache=true, use_merged=false, torch_dtype=float32, provider=CPUExecutionProvider, use_io_binding=true, auto_optimization=null, auto_quantization=null, auto_calibration=null, optimization=false, quantization=false, calibration=false
config.scenario: name=inference, _target_=optimum_benchmark.scenarios.inference.scenario.InferenceScenario, iterations=10, duration=10, warmup_runs=10, input_shapes.batch_size=1, input_shapes.num_choices=2, input_shapes.sequence_length=256, new_tokens=null, memory=true, latency=true, energy=true, generate_kwargs.max_new_tokens=64, generate_kwargs.min_new_tokens=64
config.launcher: name=process, _target_=optimum_benchmark.launchers.process.launcher.ProcessLauncher, device_isolation=false, device_isolation_action=null, numactl=false, start_method=spawn
config.environment: cpu=Intel(R) Xeon(R) Platinum 8488C, cpu_count=32, cpu_ram_mb=66326.188032, system=Linux, machine=x86_64, platform=Linux-5.10.226-214.880.amzn2.x86_64-x86_64-with-glibc2.35, processor=x86_64, python_version=3.10.12, optimum_benchmark_version=0.5.0.dev0, optimum_benchmark_commit=08c9f59440cf4e5a5d6711ec19e8329ab2de652d, transformers_version=4.45.2, transformers_commit=null, accelerate_version=1.0.1, accelerate_commit=null, diffusers_version=null, diffusers_commit=null, optimum_version=1.23.2, optimum_commit=null, timm_version=null, timm_commit=null, peft_version=null, peft_commit=null
config.print_report: false
config.log_report: true
report.traceback:
	Traceback (most recent call last):
  File "/workspace/src/common/benchmark_runner.py", line 118, in execute_and_log_benchmark
    benchmark_report = Benchmark.launch(benchmark_config)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 51, in launch
    report = launcher.launch(worker=Benchmark.run, worker_args=[config])
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 66, in launch
    raise ChildProcessError(response["traceback"])
ChildProcessError: Traceback (most recent call last):
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 103, in target
    report = worker(*worker_args)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 78, in run
    report = scenario.run(backend)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/scenarios/inference/scenario.py", line 89, in run
    self.run_model_loading_tracking(backend)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/scenarios/inference/scenario.py", line 168, in run_model_loading_tracking
    backend.load()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 86, in load
    self.load_ortmodel_with_no_weights()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 133, in load_ortmodel_with_no_weights
    self.load_ortmodel_from_pretrained()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 118, in load_ortmodel_from_pretrained
    self.pretrained_model = self.ort_model_loader.from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/optimum/onnxruntime/modeling_ort.py", line 737, in from_pretrained
    return super().from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/optimum/modeling_base.py", line 438, in from_pretrained
    return from_pretrained_method(
  File "/usr/local/lib/python3.10/dist-packages/optimum/onnxruntime/modeling_decoder.py", line 653, in _from_transformers
    main_export(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/__main__.py", line 373, in main_export
    onnx_export_from_model(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 1193, in onnx_export_from_model
    _, onnx_outputs = export_models(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 783, in export_models
    export(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 888, in export
    export_output = export_pytorch(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 584, in export_pytorch
    onnx_export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/__init__.py", line 375, in export
    export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 502, in export
    _export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1564, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1117, in _model_to_graph
    graph = _optimize_graph(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 663, in _optimize_graph
    _C._jit_pass_onnx_graph_shape_type_inference(
RuntimeError: The serialized model is larger than the 2GiB limit imposed by the protobuf library. Therefore the output file must be a file path, so that the ONNX external data can be written to the same directory. Please specify the output file name.
All other report.* columns in this row: null.
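The RuntimeError in this row (and in most rows below) comes from protobuf's 2 GiB ceiling on a single serialized ONNX file: the exporter needs a real output path so oversized weights can be written as external data files next to the model. A minimal illustration of that requirement, using a tiny placeholder module rather than the failing Llama export:

```python
# Hedged illustration of what the 2 GiB RuntimeError asks for: pass torch.onnx.export
# a filesystem path (not an in-memory buffer) so weights too large for one protobuf
# can be stored as external data in the same directory. Placeholder module and input only.
import torch

model = torch.nn.Linear(16, 16)
example_input = torch.randn(1, 16)

torch.onnx.export(model, (example_input,), "model.onnx")
```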
Row 2
config.name: float16-eager-onnxruntime
config.backend.model / config.backend.processor: meta-llama/Llama-3.1-8B-Instruct; config.backend.torch_dtype: float16
All other config.* fields: identical to Row 1.
report.traceback:
	Traceback (most recent call last):
  File "/workspace/src/common/benchmark_runner.py", line 118, in execute_and_log_benchmark
    benchmark_report = Benchmark.launch(benchmark_config)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 51, in launch
    report = launcher.launch(worker=Benchmark.run, worker_args=[config])
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 66, in launch
    raise ChildProcessError(response["traceback"])
ChildProcessError: Traceback (most recent call last):
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 103, in target
    report = worker(*worker_args)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 78, in run
    report = scenario.run(backend)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/scenarios/inference/scenario.py", line 89, in run
    self.run_model_loading_tracking(backend)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/scenarios/inference/scenario.py", line 168, in run_model_loading_tracking
    backend.load()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 86, in load
    self.load_ortmodel_with_no_weights()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 133, in load_ortmodel_with_no_weights
    self.load_ortmodel_from_pretrained()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 118, in load_ortmodel_from_pretrained
    self.pretrained_model = self.ort_model_loader.from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/optimum/onnxruntime/modeling_ort.py", line 737, in from_pretrained
    return super().from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/optimum/modeling_base.py", line 438, in from_pretrained
    return from_pretrained_method(
  File "/usr/local/lib/python3.10/dist-packages/optimum/onnxruntime/modeling_decoder.py", line 653, in _from_transformers
    main_export(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/__main__.py", line 373, in main_export
    onnx_export_from_model(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 1193, in onnx_export_from_model
    _, onnx_outputs = export_models(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 783, in export_models
    export(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 888, in export
    export_output = export_pytorch(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 584, in export_pytorch
    onnx_export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/__init__.py", line 375, in export
    export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 502, in export
    _export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1564, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1117, in _model_to_graph
    graph = _optimize_graph(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 663, in _optimize_graph
    _C._jit_pass_onnx_graph_shape_type_inference(
RuntimeError: The serialized model is larger than the 2GiB limit imposed by the protobuf library. Therefore the output file must be a file path, so that the ONNX external data can be written to the same directory. Please specify the output file name.
All other report.* columns in this row: null.
Row 3
config.name: float32-sdpa-onnxruntime
config.backend.model / config.backend.processor: meta-llama/Meta-Llama-3-8B-Instruct; config.backend.torch_dtype: float32
All other config.* fields: identical to Row 1.
report.traceback:
	Traceback (most recent call last):
  File "/workspace/src/common/benchmark_runner.py", line 118, in execute_and_log_benchmark
    benchmark_report = Benchmark.launch(benchmark_config)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 51, in launch
    report = launcher.launch(worker=Benchmark.run, worker_args=[config])
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 66, in launch
    raise ChildProcessError(response["traceback"])
ChildProcessError: Traceback (most recent call last):
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 103, in target
    report = worker(*worker_args)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 78, in run
    report = scenario.run(backend)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/scenarios/inference/scenario.py", line 89, in run
    self.run_model_loading_tracking(backend)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/scenarios/inference/scenario.py", line 168, in run_model_loading_tracking
    backend.load()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 86, in load
    self.load_ortmodel_with_no_weights()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 133, in load_ortmodel_with_no_weights
    self.load_ortmodel_from_pretrained()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 118, in load_ortmodel_from_pretrained
    self.pretrained_model = self.ort_model_loader.from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/optimum/onnxruntime/modeling_ort.py", line 737, in from_pretrained
    return super().from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/optimum/modeling_base.py", line 438, in from_pretrained
    return from_pretrained_method(
  File "/usr/local/lib/python3.10/dist-packages/optimum/onnxruntime/modeling_decoder.py", line 653, in _from_transformers
    main_export(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/__main__.py", line 373, in main_export
    onnx_export_from_model(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 1193, in onnx_export_from_model
    _, onnx_outputs = export_models(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 783, in export_models
    export(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 888, in export
    export_output = export_pytorch(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 584, in export_pytorch
    onnx_export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/__init__.py", line 375, in export
    export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 502, in export
    _export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1564, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1117, in _model_to_graph
    graph = _optimize_graph(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 663, in _optimize_graph
    _C._jit_pass_onnx_graph_shape_type_inference(
RuntimeError: The serialized model is larger than the 2GiB limit imposed by the protobuf library. Therefore the output file must be a file path, so that the ONNX external data can be written to the same directory. Please specify the output file name.
All other report.* columns in this row: null.
Row 4
config.name: bfloat16-sdpa-onnxruntime
config.backend.model / config.backend.processor: meta-llama/Llama-3.1-405B; config.backend.torch_dtype: bfloat16
All other config.* fields: identical to Row 1.
report.traceback:
	Traceback (most recent call last):
  File "/workspace/src/common/benchmark_runner.py", line 118, in execute_and_log_benchmark
    benchmark_report = Benchmark.launch(benchmark_config)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 51, in launch
    report = launcher.launch(worker=Benchmark.run, worker_args=[config])
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 57, in launch
    raise RuntimeError(f"Isolated process exited with non-zero code {isolated_process.exitcode}")
RuntimeError: Isolated process exited with non-zero code -9
All other report.* columns in this row: null.
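Unlike the export errors in the other rows, this row's launcher reports that the isolated worker exited with code -9 instead of raising. As a hedged aside, a negative exitcode from Python's multiprocessing is the negated signal number, so -9 means the worker was killed by SIGKILL; on Linux that is most often the OOM killer, which is plausible for a 405B-parameter checkpoint on a 64 GB host, though the log itself does not say so.

```python
# Hedged aside: map the negative multiprocessing exitcode reported above to a signal
# name; -9 corresponds to SIGKILL (commonly the Linux OOM killer for oversized loads).
import signal

exitcode = -9
print(signal.Signals(-exitcode).name)  # prints "SIGKILL"
```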
Row 5
config.name: bfloat16-eager-onnxruntime
config.backend.model / config.backend.processor: meta-llama/Meta-Llama-3-8B-Instruct; config.backend.torch_dtype: bfloat16
All other config.* fields: identical to Row 1.
report.traceback:
	Traceback (most recent call last):
  File "/workspace/src/common/benchmark_runner.py", line 118, in execute_and_log_benchmark
    benchmark_report = Benchmark.launch(benchmark_config)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 51, in launch
    report = launcher.launch(worker=Benchmark.run, worker_args=[config])
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 66, in launch
    raise ChildProcessError(response["traceback"])
ChildProcessError: Traceback (most recent call last):
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 103, in target
    report = worker(*worker_args)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 78, in run
    report = scenario.run(backend)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/scenarios/inference/scenario.py", line 89, in run
    self.run_model_loading_tracking(backend)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/scenarios/inference/scenario.py", line 168, in run_model_loading_tracking
    backend.load()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 86, in load
    self.load_ortmodel_with_no_weights()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 133, in load_ortmodel_with_no_weights
    self.load_ortmodel_from_pretrained()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 118, in load_ortmodel_from_pretrained
    self.pretrained_model = self.ort_model_loader.from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/optimum/onnxruntime/modeling_ort.py", line 737, in from_pretrained
    return super().from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/optimum/modeling_base.py", line 438, in from_pretrained
    return from_pretrained_method(
  File "/usr/local/lib/python3.10/dist-packages/optimum/onnxruntime/modeling_decoder.py", line 653, in _from_transformers
    main_export(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/__main__.py", line 373, in main_export
    onnx_export_from_model(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 1193, in onnx_export_from_model
    _, onnx_outputs = export_models(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 783, in export_models
    export(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 888, in export
    export_output = export_pytorch(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 584, in export_pytorch
    onnx_export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/__init__.py", line 375, in export
    export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 502, in export
    _export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1564, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1117, in _model_to_graph
    graph = _optimize_graph(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 663, in _optimize_graph
    _C._jit_pass_onnx_graph_shape_type_inference(
RuntimeError: The serialized model is larger than the 2GiB limit imposed by the protobuf library. Therefore the output file must be a file path, so that the ONNX external data can be written to the same directory. Please specify the output file name.
All other report.* columns in this row: null.
Row 6
config.name: float32-eager-onnxruntime
config.backend.model / config.backend.processor: meta-llama/Llama-3.1-405B; config.backend.torch_dtype: float32
All other config.* fields: identical to Row 1.
report.traceback:
	Traceback (most recent call last):
  File "/workspace/src/common/benchmark_runner.py", line 118, in execute_and_log_benchmark
    benchmark_report = Benchmark.launch(benchmark_config)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 51, in launch
    report = launcher.launch(worker=Benchmark.run, worker_args=[config])
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 57, in launch
    raise RuntimeError(f"Isolated process exited with non-zero code {isolated_process.exitcode}")
RuntimeError: Isolated process exited with non-zero code -9
All other report.* columns in this row: null.
Row 7
config.name: float16-sdpa-onnxruntime
config.backend.model / config.backend.processor: meta-llama/Meta-Llama-3-8B-Instruct; config.backend.torch_dtype: float16
All other config.* fields: identical to Row 1.
report.traceback:
	Traceback (most recent call last):
  File "/workspace/src/common/benchmark_runner.py", line 118, in execute_and_log_benchmark
    benchmark_report = Benchmark.launch(benchmark_config)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 51, in launch
    report = launcher.launch(worker=Benchmark.run, worker_args=[config])
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 66, in launch
    raise ChildProcessError(response["traceback"])
ChildProcessError: Traceback (most recent call last):
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 103, in target
    report = worker(*worker_args)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 78, in run
    report = scenario.run(backend)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/scenarios/inference/scenario.py", line 89, in run
    self.run_model_loading_tracking(backend)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/scenarios/inference/scenario.py", line 168, in run_model_loading_tracking
    backend.load()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 86, in load
    self.load_ortmodel_with_no_weights()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 133, in load_ortmodel_with_no_weights
    self.load_ortmodel_from_pretrained()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 118, in load_ortmodel_from_pretrained
    self.pretrained_model = self.ort_model_loader.from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/optimum/onnxruntime/modeling_ort.py", line 737, in from_pretrained
    return super().from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/optimum/modeling_base.py", line 438, in from_pretrained
    return from_pretrained_method(
  File "/usr/local/lib/python3.10/dist-packages/optimum/onnxruntime/modeling_decoder.py", line 653, in _from_transformers
    main_export(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/__main__.py", line 373, in main_export
    onnx_export_from_model(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 1193, in onnx_export_from_model
    _, onnx_outputs = export_models(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 783, in export_models
    export(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 888, in export
    export_output = export_pytorch(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 584, in export_pytorch
    onnx_export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/__init__.py", line 375, in export
    export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 502, in export
    _export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1564, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1117, in _model_to_graph
    graph = _optimize_graph(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 663, in _optimize_graph
    _C._jit_pass_onnx_graph_shape_type_inference(
RuntimeError: The serialized model is larger than the 2GiB limit imposed by the protobuf library. Therefore the output file must be a file path, so that the ONNX external data can be written to the same directory. Please specify the output file name.
All other report.* columns in this row: null.
Row 8
config.name: float16-sdpa-onnxruntime
config.backend.model / config.backend.processor: meta-llama/Meta-Llama-3-8B; config.backend.torch_dtype: float16
All other config.* fields: identical to Row 1.
report.traceback:
	Traceback (most recent call last):
  File "/workspace/src/common/benchmark_runner.py", line 118, in execute_and_log_benchmark
    benchmark_report = Benchmark.launch(benchmark_config)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 51, in launch
    report = launcher.launch(worker=Benchmark.run, worker_args=[config])
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 66, in launch
    raise ChildProcessError(response["traceback"])
ChildProcessError: Traceback (most recent call last):
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 103, in target
    report = worker(*worker_args)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 78, in run
    report = scenario.run(backend)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/scenarios/inference/scenario.py", line 89, in run
    self.run_model_loading_tracking(backend)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/scenarios/inference/scenario.py", line 168, in run_model_loading_tracking
    backend.load()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 86, in load
    self.load_ortmodel_with_no_weights()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 133, in load_ortmodel_with_no_weights
    self.load_ortmodel_from_pretrained()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 118, in load_ortmodel_from_pretrained
    self.pretrained_model = self.ort_model_loader.from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/optimum/onnxruntime/modeling_ort.py", line 737, in from_pretrained
    return super().from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/optimum/modeling_base.py", line 438, in from_pretrained
    return from_pretrained_method(
  File "/usr/local/lib/python3.10/dist-packages/optimum/onnxruntime/modeling_decoder.py", line 653, in _from_transformers
    main_export(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/__main__.py", line 373, in main_export
    onnx_export_from_model(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 1193, in onnx_export_from_model
    _, onnx_outputs = export_models(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 783, in export_models
    export(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 888, in export
    export_output = export_pytorch(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 584, in export_pytorch
    onnx_export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/__init__.py", line 375, in export
    export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 502, in export
    _export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1564, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1117, in _model_to_graph
    graph = _optimize_graph(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 663, in _optimize_graph
    _C._jit_pass_onnx_graph_shape_type_inference(
RuntimeError: The serialized model is larger than the 2GiB limit imposed by the protobuf library. Therefore the output file must be a file path, so that the ONNX external data can be written to the same directory. Please specify the output file name.
 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 
| 
	float16-sdpa-onnxruntime | 
	onnxruntime | 
	ort:1.19.2 | 
	optimum_benchmark.backends.onnxruntime.backend.ORTBackend | 
	text-generation | 
	transformers | 
	llama | 
	meta-llama/Llama-3.1-8B-Instruct | 
	meta-llama/Llama-3.1-8B-Instruct | 
	cpu | 0 | 42 | null | null | true | true | true | true | false | 
	float16 | 
	CPUExecutionProvider | true | null | null | null | false | false | false | 
	inference | 
	optimum_benchmark.scenarios.inference.scenario.InferenceScenario | 10 | 10 | 10 | 1 | 2 | 256 | null | true | true | true | 64 | 64 | 
	process | 
	optimum_benchmark.launchers.process.launcher.ProcessLauncher | false | null | false | 
	spawn | 
	 Intel(R) Xeon(R) Platinum 8488C | 32 | 66,326.188032 | 
	Linux | 
	x86_64 | 
	Linux-5.10.226-214.880.amzn2.x86_64-x86_64-with-glibc2.35 | 
	x86_64 | 
	3.10.12 | 
	0.5.0.dev0 | 
	08c9f59440cf4e5a5d6711ec19e8329ab2de652d | 
	4.45.2 | null | 
	1.0.1 | null | null | null | 
	1.23.2 | null | null | null | null | null | false | true | 
	Traceback (most recent call last):
  File "/workspace/src/common/benchmark_runner.py", line 118, in execute_and_log_benchmark
    benchmark_report = Benchmark.launch(benchmark_config)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 51, in launch
    report = launcher.launch(worker=Benchmark.run, worker_args=[config])
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 66, in launch
    raise ChildProcessError(response["traceback"])
ChildProcessError: Traceback (most recent call last):
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 103, in target
    report = worker(*worker_args)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 78, in run
    report = scenario.run(backend)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/scenarios/inference/scenario.py", line 89, in run
    self.run_model_loading_tracking(backend)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/scenarios/inference/scenario.py", line 168, in run_model_loading_tracking
    backend.load()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 86, in load
    self.load_ortmodel_with_no_weights()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 133, in load_ortmodel_with_no_weights
    self.load_ortmodel_from_pretrained()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 118, in load_ortmodel_from_pretrained
    self.pretrained_model = self.ort_model_loader.from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/optimum/onnxruntime/modeling_ort.py", line 737, in from_pretrained
    return super().from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/optimum/modeling_base.py", line 438, in from_pretrained
    return from_pretrained_method(
  File "/usr/local/lib/python3.10/dist-packages/optimum/onnxruntime/modeling_decoder.py", line 653, in _from_transformers
    main_export(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/__main__.py", line 373, in main_export
    onnx_export_from_model(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 1193, in onnx_export_from_model
    _, onnx_outputs = export_models(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 783, in export_models
    export(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 888, in export
    export_output = export_pytorch(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 584, in export_pytorch
    onnx_export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/__init__.py", line 375, in export
    export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 502, in export
    _export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1564, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1117, in _model_to_graph
    graph = _optimize_graph(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 663, in _optimize_graph
    _C._jit_pass_onnx_graph_shape_type_inference(
RuntimeError: The serialized model is larger than the 2GiB limit imposed by the protobuf library. Therefore the output file must be a file path, so that the ONNX external data can be written to the same directory. Please specify the output file name.
 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 
| 
	float16-eager-onnxruntime | 
	onnxruntime | 
	ort:1.19.2 | 
	optimum_benchmark.backends.onnxruntime.backend.ORTBackend | 
	text-generation | 
	transformers | 
	llama | 
	meta-llama/Meta-Llama-3-8B-Instruct | 
	meta-llama/Meta-Llama-3-8B-Instruct | 
	cpu | 0 | 42 | null | null | true | true | true | true | false | 
	float16 | 
	CPUExecutionProvider | true | null | null | null | false | false | false | 
	inference | 
	optimum_benchmark.scenarios.inference.scenario.InferenceScenario | 10 | 10 | 10 | 1 | 2 | 256 | null | true | true | true | 64 | 64 | 
	process | 
	optimum_benchmark.launchers.process.launcher.ProcessLauncher | false | null | false | 
	spawn | 
	 Intel(R) Xeon(R) Platinum 8488C | 32 | 66,326.188032 | 
	Linux | 
	x86_64 | 
	Linux-5.10.226-214.880.amzn2.x86_64-x86_64-with-glibc2.35 | 
	x86_64 | 
	3.10.12 | 
	0.5.0.dev0 | 
	08c9f59440cf4e5a5d6711ec19e8329ab2de652d | 
	4.45.2 | null | 
	1.0.1 | null | null | null | 
	1.23.2 | null | null | null | null | null | false | true | 
	Traceback (most recent call last):
  File "/workspace/src/common/benchmark_runner.py", line 118, in execute_and_log_benchmark
    benchmark_report = Benchmark.launch(benchmark_config)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 51, in launch
    report = launcher.launch(worker=Benchmark.run, worker_args=[config])
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 66, in launch
    raise ChildProcessError(response["traceback"])
ChildProcessError: Traceback (most recent call last):
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 103, in target
    report = worker(*worker_args)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 78, in run
    report = scenario.run(backend)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/scenarios/inference/scenario.py", line 89, in run
    self.run_model_loading_tracking(backend)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/scenarios/inference/scenario.py", line 168, in run_model_loading_tracking
    backend.load()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 86, in load
    self.load_ortmodel_with_no_weights()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 133, in load_ortmodel_with_no_weights
    self.load_ortmodel_from_pretrained()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 118, in load_ortmodel_from_pretrained
    self.pretrained_model = self.ort_model_loader.from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/optimum/onnxruntime/modeling_ort.py", line 737, in from_pretrained
    return super().from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/optimum/modeling_base.py", line 438, in from_pretrained
    return from_pretrained_method(
  File "/usr/local/lib/python3.10/dist-packages/optimum/onnxruntime/modeling_decoder.py", line 653, in _from_transformers
    main_export(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/__main__.py", line 373, in main_export
    onnx_export_from_model(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 1193, in onnx_export_from_model
    _, onnx_outputs = export_models(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 783, in export_models
    export(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 888, in export
    export_output = export_pytorch(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 584, in export_pytorch
    onnx_export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/__init__.py", line 375, in export
    export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 502, in export
    _export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1564, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1117, in _model_to_graph
    graph = _optimize_graph(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 663, in _optimize_graph
    _C._jit_pass_onnx_graph_shape_type_inference(
RuntimeError: The serialized model is larger than the 2GiB limit imposed by the protobuf library. Therefore the output file must be a file path, so that the ONNX external data can be written to the same directory. Please specify the output file name.
 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 
| 
	float32-eager-onnxruntime | 
	onnxruntime | 
	ort:1.19.2 | 
	optimum_benchmark.backends.onnxruntime.backend.ORTBackend | 
	text-generation | 
	transformers | 
	llama | 
	meta-llama/Meta-Llama-3-8B-Instruct | 
	meta-llama/Meta-Llama-3-8B-Instruct | 
	cpu | 0 | 42 | null | null | true | true | true | true | false | 
	float32 | 
	CPUExecutionProvider | true | null | null | null | false | false | false | 
	inference | 
	optimum_benchmark.scenarios.inference.scenario.InferenceScenario | 10 | 10 | 10 | 1 | 2 | 256 | null | true | true | true | 64 | 64 | 
	process | 
	optimum_benchmark.launchers.process.launcher.ProcessLauncher | false | null | false | 
	spawn | 
	 Intel(R) Xeon(R) Platinum 8488C | 32 | 66,326.188032 | 
	Linux | 
	x86_64 | 
	Linux-5.10.226-214.880.amzn2.x86_64-x86_64-with-glibc2.35 | 
	x86_64 | 
	3.10.12 | 
	0.5.0.dev0 | 
	08c9f59440cf4e5a5d6711ec19e8329ab2de652d | 
	4.45.2 | null | 
	1.0.1 | null | null | null | 
	1.23.2 | null | null | null | null | null | false | true | 
	Traceback (most recent call last):
  File "/workspace/src/common/benchmark_runner.py", line 118, in execute_and_log_benchmark
    benchmark_report = Benchmark.launch(benchmark_config)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 51, in launch
    report = launcher.launch(worker=Benchmark.run, worker_args=[config])
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 66, in launch
    raise ChildProcessError(response["traceback"])
ChildProcessError: Traceback (most recent call last):
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 103, in target
    report = worker(*worker_args)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 78, in run
    report = scenario.run(backend)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/scenarios/inference/scenario.py", line 89, in run
    self.run_model_loading_tracking(backend)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/scenarios/inference/scenario.py", line 168, in run_model_loading_tracking
    backend.load()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 86, in load
    self.load_ortmodel_with_no_weights()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 133, in load_ortmodel_with_no_weights
    self.load_ortmodel_from_pretrained()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 118, in load_ortmodel_from_pretrained
    self.pretrained_model = self.ort_model_loader.from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/optimum/onnxruntime/modeling_ort.py", line 737, in from_pretrained
    return super().from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/optimum/modeling_base.py", line 438, in from_pretrained
    return from_pretrained_method(
  File "/usr/local/lib/python3.10/dist-packages/optimum/onnxruntime/modeling_decoder.py", line 653, in _from_transformers
    main_export(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/__main__.py", line 373, in main_export
    onnx_export_from_model(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 1193, in onnx_export_from_model
    _, onnx_outputs = export_models(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 783, in export_models
    export(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 888, in export
    export_output = export_pytorch(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 584, in export_pytorch
    onnx_export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/__init__.py", line 375, in export
    export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 502, in export
    _export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1564, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1117, in _model_to_graph
    graph = _optimize_graph(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 663, in _optimize_graph
    _C._jit_pass_onnx_graph_shape_type_inference(
RuntimeError: The serialized model is larger than the 2GiB limit imposed by the protobuf library. Therefore the output file must be a file path, so that the ONNX external data can be written to the same directory. Please specify the output file name.
 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 
| 
	bfloat16-eager-onnxruntime | 
	onnxruntime | 
	ort:1.19.2 | 
	optimum_benchmark.backends.onnxruntime.backend.ORTBackend | 
	text-generation | 
	transformers | 
	llama | 
	meta-llama/Llama-3.1-8B-Instruct | 
	meta-llama/Llama-3.1-8B-Instruct | 
	cpu | 0 | 42 | null | null | true | true | true | true | false | 
	bfloat16 | 
	CPUExecutionProvider | true | null | null | null | false | false | false | 
	inference | 
	optimum_benchmark.scenarios.inference.scenario.InferenceScenario | 10 | 10 | 10 | 1 | 2 | 256 | null | true | true | true | 64 | 64 | 
	process | 
	optimum_benchmark.launchers.process.launcher.ProcessLauncher | false | null | false | 
	spawn | 
	 Intel(R) Xeon(R) Platinum 8488C | 32 | 66,326.188032 | 
	Linux | 
	x86_64 | 
	Linux-5.10.226-214.880.amzn2.x86_64-x86_64-with-glibc2.35 | 
	x86_64 | 
	3.10.12 | 
	0.5.0.dev0 | 
	08c9f59440cf4e5a5d6711ec19e8329ab2de652d | 
	4.45.2 | null | 
	1.0.1 | null | null | null | 
	1.23.2 | null | null | null | null | null | false | true | 
	Traceback (most recent call last):
  File "/workspace/src/common/benchmark_runner.py", line 118, in execute_and_log_benchmark
    benchmark_report = Benchmark.launch(benchmark_config)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 51, in launch
    report = launcher.launch(worker=Benchmark.run, worker_args=[config])
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 66, in launch
    raise ChildProcessError(response["traceback"])
ChildProcessError: Traceback (most recent call last):
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 103, in target
    report = worker(*worker_args)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 78, in run
    report = scenario.run(backend)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/scenarios/inference/scenario.py", line 89, in run
    self.run_model_loading_tracking(backend)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/scenarios/inference/scenario.py", line 168, in run_model_loading_tracking
    backend.load()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 86, in load
    self.load_ortmodel_with_no_weights()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 133, in load_ortmodel_with_no_weights
    self.load_ortmodel_from_pretrained()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 118, in load_ortmodel_from_pretrained
    self.pretrained_model = self.ort_model_loader.from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/optimum/onnxruntime/modeling_ort.py", line 737, in from_pretrained
    return super().from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/optimum/modeling_base.py", line 438, in from_pretrained
    return from_pretrained_method(
  File "/usr/local/lib/python3.10/dist-packages/optimum/onnxruntime/modeling_decoder.py", line 653, in _from_transformers
    main_export(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/__main__.py", line 373, in main_export
    onnx_export_from_model(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 1193, in onnx_export_from_model
    _, onnx_outputs = export_models(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 783, in export_models
    export(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 888, in export
    export_output = export_pytorch(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 584, in export_pytorch
    onnx_export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/__init__.py", line 375, in export
    export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 502, in export
    _export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1564, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1117, in _model_to_graph
    graph = _optimize_graph(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 663, in _optimize_graph
    _C._jit_pass_onnx_graph_shape_type_inference(
RuntimeError: The serialized model is larger than the 2GiB limit imposed by the protobuf library. Therefore the output file must be a file path, so that the ONNX external data can be written to the same directory. Please specify the output file name.
 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 
| 
	bfloat16-sdpa-onnxruntime | 
	onnxruntime | 
	ort:1.19.2 | 
	optimum_benchmark.backends.onnxruntime.backend.ORTBackend | 
	text-generation | 
	transformers | 
	llama | 
	meta-llama/Llama-3.1-8B-Instruct | 
	meta-llama/Llama-3.1-8B-Instruct | 
	cpu | 0 | 42 | null | null | true | true | true | true | false | 
	bfloat16 | 
	CPUExecutionProvider | true | null | null | null | false | false | false | 
	inference | 
	optimum_benchmark.scenarios.inference.scenario.InferenceScenario | 10 | 10 | 10 | 1 | 2 | 256 | null | true | true | true | 64 | 64 | 
	process | 
	optimum_benchmark.launchers.process.launcher.ProcessLauncher | false | null | false | 
	spawn | 
	 Intel(R) Xeon(R) Platinum 8488C | 32 | 66,326.188032 | 
	Linux | 
	x86_64 | 
	Linux-5.10.226-214.880.amzn2.x86_64-x86_64-with-glibc2.35 | 
	x86_64 | 
	3.10.12 | 
	0.5.0.dev0 | 
	08c9f59440cf4e5a5d6711ec19e8329ab2de652d | 
	4.45.2 | null | 
	1.0.1 | null | null | null | 
	1.23.2 | null | null | null | null | null | false | true | 
	Traceback (most recent call last):
  File "/workspace/src/common/benchmark_runner.py", line 118, in execute_and_log_benchmark
    benchmark_report = Benchmark.launch(benchmark_config)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 51, in launch
    report = launcher.launch(worker=Benchmark.run, worker_args=[config])
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 66, in launch
    raise ChildProcessError(response["traceback"])
ChildProcessError: Traceback (most recent call last):
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 103, in target
    report = worker(*worker_args)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 78, in run
    report = scenario.run(backend)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/scenarios/inference/scenario.py", line 89, in run
    self.run_model_loading_tracking(backend)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/scenarios/inference/scenario.py", line 168, in run_model_loading_tracking
    backend.load()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 86, in load
    self.load_ortmodel_with_no_weights()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 133, in load_ortmodel_with_no_weights
    self.load_ortmodel_from_pretrained()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 118, in load_ortmodel_from_pretrained
    self.pretrained_model = self.ort_model_loader.from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/optimum/onnxruntime/modeling_ort.py", line 737, in from_pretrained
    return super().from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/optimum/modeling_base.py", line 438, in from_pretrained
    return from_pretrained_method(
  File "/usr/local/lib/python3.10/dist-packages/optimum/onnxruntime/modeling_decoder.py", line 653, in _from_transformers
    main_export(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/__main__.py", line 373, in main_export
    onnx_export_from_model(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 1193, in onnx_export_from_model
    _, onnx_outputs = export_models(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 783, in export_models
    export(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 888, in export
    export_output = export_pytorch(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 584, in export_pytorch
    onnx_export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/__init__.py", line 375, in export
    export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 502, in export
    _export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1564, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1117, in _model_to_graph
    graph = _optimize_graph(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 663, in _optimize_graph
    _C._jit_pass_onnx_graph_shape_type_inference(
RuntimeError: The serialized model is larger than the 2GiB limit imposed by the protobuf library. Therefore the output file must be a file path, so that the ONNX external data can be written to the same directory. Please specify the output file name.
 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 
| 
	bfloat16-eager-onnxruntime | 
	onnxruntime | 
	ort:1.19.2 | 
	optimum_benchmark.backends.onnxruntime.backend.ORTBackend | 
	text-generation | 
	transformers | 
	llama | 
	meta-llama/Llama-3.1-405B | 
	meta-llama/Llama-3.1-405B | 
	cpu | 0 | 42 | null | null | true | true | true | true | false | 
	bfloat16 | 
	CPUExecutionProvider | true | null | null | null | false | false | false | 
	inference | 
	optimum_benchmark.scenarios.inference.scenario.InferenceScenario | 10 | 10 | 10 | 1 | 2 | 256 | null | true | true | true | 64 | 64 | 
	process | 
	optimum_benchmark.launchers.process.launcher.ProcessLauncher | false | null | false | 
	spawn | 
	 Intel(R) Xeon(R) Platinum 8488C | 32 | 66,326.188032 | 
	Linux | 
	x86_64 | 
	Linux-5.10.226-214.880.amzn2.x86_64-x86_64-with-glibc2.35 | 
	x86_64 | 
	3.10.12 | 
	0.5.0.dev0 | 
	08c9f59440cf4e5a5d6711ec19e8329ab2de652d | 
	4.45.2 | null | 
	1.0.1 | null | null | null | 
	1.23.2 | null | null | null | null | null | false | true | 
	Traceback (most recent call last):
  File "/workspace/src/common/benchmark_runner.py", line 118, in execute_and_log_benchmark
    benchmark_report = Benchmark.launch(benchmark_config)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 51, in launch
    report = launcher.launch(worker=Benchmark.run, worker_args=[config])
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 57, in launch
    raise RuntimeError(f"Isolated process exited with non-zero code {isolated_process.exitcode}")
RuntimeError: Isolated process exited with non-zero code -9
 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 
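This record fails differently: the launcher's isolated worker dies before it can report back, and exit code -9 means the child received SIGKILL, which on a Linux host with roughly 66 GB of RAM is almost certainly the kernel OOM killer; a 405B-parameter checkpoint needs on the order of 800 GB even in bfloat16. A small sketch of how such a negative exit code maps to a signal name, assuming the standard multiprocessing convention the process launcher relies on:

import signal

# Minimal sketch: multiprocessing reports a child killed by a signal as a
# negative exitcode. -9 therefore means SIGKILL, which on Linux is typically
# the kernel OOM killer terminating an over-committed worker process.
exitcode = -9  # value reported by the isolated benchmark process above
if exitcode < 0:
    sig = signal.Signals(-exitcode)
    print(f"worker terminated by {sig.name} (signal {sig.value})")
# prints: worker terminated by SIGKILL (signal 9)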
| 
	float32-eager-onnxruntime | 
	onnxruntime | 
	ort:1.19.2 | 
	optimum_benchmark.backends.onnxruntime.backend.ORTBackend | 
	text-generation | 
	transformers | 
	llama | 
	meta-llama/Meta-Llama-3-8B | 
	meta-llama/Meta-Llama-3-8B | 
	cpu | 0 | 42 | null | null | true | true | true | true | false | 
	float32 | 
	CPUExecutionProvider | true | null | null | null | false | false | false | 
	inference | 
	optimum_benchmark.scenarios.inference.scenario.InferenceScenario | 10 | 10 | 10 | 1 | 2 | 256 | null | true | true | true | 64 | 64 | 
	process | 
	optimum_benchmark.launchers.process.launcher.ProcessLauncher | false | null | false | 
	spawn | 
	 Intel(R) Xeon(R) Platinum 8488C | 32 | 66,326.188032 | 
	Linux | 
	x86_64 | 
	Linux-5.10.226-214.880.amzn2.x86_64-x86_64-with-glibc2.35 | 
	x86_64 | 
	3.10.12 | 
	0.5.0.dev0 | 
	08c9f59440cf4e5a5d6711ec19e8329ab2de652d | 
	4.45.2 | null | 
	1.0.1 | null | null | null | 
	1.23.2 | null | null | null | null | null | false | true | 
	Traceback (most recent call last):
  File "/workspace/src/common/benchmark_runner.py", line 118, in execute_and_log_benchmark
    benchmark_report = Benchmark.launch(benchmark_config)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 51, in launch
    report = launcher.launch(worker=Benchmark.run, worker_args=[config])
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 66, in launch
    raise ChildProcessError(response["traceback"])
ChildProcessError: Traceback (most recent call last):
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 103, in target
    report = worker(*worker_args)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 78, in run
    report = scenario.run(backend)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/scenarios/inference/scenario.py", line 89, in run
    self.run_model_loading_tracking(backend)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/scenarios/inference/scenario.py", line 168, in run_model_loading_tracking
    backend.load()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 86, in load
    self.load_ortmodel_with_no_weights()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 133, in load_ortmodel_with_no_weights
    self.load_ortmodel_from_pretrained()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 118, in load_ortmodel_from_pretrained
    self.pretrained_model = self.ort_model_loader.from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/optimum/onnxruntime/modeling_ort.py", line 737, in from_pretrained
    return super().from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/optimum/modeling_base.py", line 438, in from_pretrained
    return from_pretrained_method(
  File "/usr/local/lib/python3.10/dist-packages/optimum/onnxruntime/modeling_decoder.py", line 653, in _from_transformers
    main_export(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/__main__.py", line 373, in main_export
    onnx_export_from_model(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 1193, in onnx_export_from_model
    _, onnx_outputs = export_models(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 783, in export_models
    export(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 888, in export
    export_output = export_pytorch(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 584, in export_pytorch
    onnx_export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/__init__.py", line 375, in export
    export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 502, in export
    _export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1564, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1117, in _model_to_graph
    graph = _optimize_graph(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 663, in _optimize_graph
    _C._jit_pass_onnx_graph_shape_type_inference(
RuntimeError: The serialized model is larger than the 2GiB limit imposed by the protobuf library. Therefore the output file must be a file path, so that the ONNX external data can be written to the same directory. Please specify the output file name.
 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 
| 
	float32-eager-onnxruntime | 
	onnxruntime | 
	ort:1.19.2 | 
	optimum_benchmark.backends.onnxruntime.backend.ORTBackend | 
	text-generation | 
	transformers | 
	llama | 
	meta-llama/Llama-3.1-8B-Instruct | 
	meta-llama/Llama-3.1-8B-Instruct | 
	cpu | 0 | 42 | null | null | true | true | true | true | false | 
	float32 | 
	CPUExecutionProvider | true | null | null | null | false | false | false | 
	inference | 
	optimum_benchmark.scenarios.inference.scenario.InferenceScenario | 10 | 10 | 10 | 1 | 2 | 256 | null | true | true | true | 64 | 64 | 
	process | 
	optimum_benchmark.launchers.process.launcher.ProcessLauncher | false | null | false | 
	spawn | 
	 Intel(R) Xeon(R) Platinum 8488C | 32 | 66,326.188032 | 
	Linux | 
	x86_64 | 
	Linux-5.10.226-214.880.amzn2.x86_64-x86_64-with-glibc2.35 | 
	x86_64 | 
	3.10.12 | 
	0.5.0.dev0 | 
	08c9f59440cf4e5a5d6711ec19e8329ab2de652d | 
	4.45.2 | null | 
	1.0.1 | null | null | null | 
	1.23.2 | null | null | null | null | null | false | true | 
	Traceback (most recent call last):
  File "/workspace/src/common/benchmark_runner.py", line 118, in execute_and_log_benchmark
    benchmark_report = Benchmark.launch(benchmark_config)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 51, in launch
    report = launcher.launch(worker=Benchmark.run, worker_args=[config])
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 66, in launch
    raise ChildProcessError(response["traceback"])
ChildProcessError: Traceback (most recent call last):
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 103, in target
    report = worker(*worker_args)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 78, in run
    report = scenario.run(backend)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/scenarios/inference/scenario.py", line 89, in run
    self.run_model_loading_tracking(backend)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/scenarios/inference/scenario.py", line 168, in run_model_loading_tracking
    backend.load()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 86, in load
    self.load_ortmodel_with_no_weights()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 133, in load_ortmodel_with_no_weights
    self.load_ortmodel_from_pretrained()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 118, in load_ortmodel_from_pretrained
    self.pretrained_model = self.ort_model_loader.from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/optimum/onnxruntime/modeling_ort.py", line 737, in from_pretrained
    return super().from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/optimum/modeling_base.py", line 438, in from_pretrained
    return from_pretrained_method(
  File "/usr/local/lib/python3.10/dist-packages/optimum/onnxruntime/modeling_decoder.py", line 653, in _from_transformers
    main_export(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/__main__.py", line 373, in main_export
    onnx_export_from_model(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 1193, in onnx_export_from_model
    _, onnx_outputs = export_models(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 783, in export_models
    export(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 888, in export
    export_output = export_pytorch(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 584, in export_pytorch
    onnx_export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/__init__.py", line 375, in export
    export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 502, in export
    _export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1564, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1117, in _model_to_graph
    graph = _optimize_graph(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 663, in _optimize_graph
    _C._jit_pass_onnx_graph_shape_type_inference(
RuntimeError: The serialized model is larger than the 2GiB limit imposed by the protobuf library. Therefore the output file must be a file path, so that the ONNX external data can be written to the same directory. Please specify the output file name.
 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 
| 
	float16-eager-onnxruntime | 
	onnxruntime | 
	ort:1.19.2 | 
	optimum_benchmark.backends.onnxruntime.backend.ORTBackend | 
	text-generation | 
	transformers | 
	llama | 
	meta-llama/Meta-Llama-3-8B | 
	meta-llama/Meta-Llama-3-8B | 
	cpu | 0 | 42 | null | null | true | true | true | true | false | 
	float16 | 
	CPUExecutionProvider | true | null | null | null | false | false | false | 
	inference | 
	optimum_benchmark.scenarios.inference.scenario.InferenceScenario | 10 | 10 | 10 | 1 | 2 | 256 | null | true | true | true | 64 | 64 | 
	process | 
	optimum_benchmark.launchers.process.launcher.ProcessLauncher | false | null | false | 
	spawn | 
	 Intel(R) Xeon(R) Platinum 8488C | 32 | 66,326.188032 | 
	Linux | 
	x86_64 | 
	Linux-5.10.226-214.880.amzn2.x86_64-x86_64-with-glibc2.35 | 
	x86_64 | 
	3.10.12 | 
	0.5.0.dev0 | 
	08c9f59440cf4e5a5d6711ec19e8329ab2de652d | 
	4.45.2 | null | 
	1.0.1 | null | null | null | 
	1.23.2 | null | null | null | null | null | false | true | 
	Traceback (most recent call last):
  File "/workspace/src/common/benchmark_runner.py", line 118, in execute_and_log_benchmark
    benchmark_report = Benchmark.launch(benchmark_config)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 51, in launch
    report = launcher.launch(worker=Benchmark.run, worker_args=[config])
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 66, in launch
    raise ChildProcessError(response["traceback"])
ChildProcessError: Traceback (most recent call last):
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 103, in target
    report = worker(*worker_args)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 78, in run
    report = scenario.run(backend)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/scenarios/inference/scenario.py", line 89, in run
    self.run_model_loading_tracking(backend)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/scenarios/inference/scenario.py", line 168, in run_model_loading_tracking
    backend.load()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 86, in load
    self.load_ortmodel_with_no_weights()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 133, in load_ortmodel_with_no_weights
    self.load_ortmodel_from_pretrained()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 118, in load_ortmodel_from_pretrained
    self.pretrained_model = self.ort_model_loader.from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/optimum/onnxruntime/modeling_ort.py", line 737, in from_pretrained
    return super().from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/optimum/modeling_base.py", line 438, in from_pretrained
    return from_pretrained_method(
  File "/usr/local/lib/python3.10/dist-packages/optimum/onnxruntime/modeling_decoder.py", line 653, in _from_transformers
    main_export(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/__main__.py", line 373, in main_export
    onnx_export_from_model(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 1193, in onnx_export_from_model
    _, onnx_outputs = export_models(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 783, in export_models
    export(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 888, in export
    export_output = export_pytorch(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 584, in export_pytorch
    onnx_export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/__init__.py", line 375, in export
    export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 502, in export
    _export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1564, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1117, in _model_to_graph
    graph = _optimize_graph(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 663, in _optimize_graph
    _C._jit_pass_onnx_graph_shape_type_inference(
RuntimeError: The serialized model is larger than the 2GiB limit imposed by the protobuf library. Therefore the output file must be a file path, so that the ONNX external data can be written to the same directory. Please specify the output file name.
 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 
| 
	float32-sdpa-onnxruntime | 
	onnxruntime | 
	ort:1.19.2 | 
	optimum_benchmark.backends.onnxruntime.backend.ORTBackend | 
	text-generation | 
	transformers | 
	llama | 
	meta-llama/Llama-3.1-405B | 
	meta-llama/Llama-3.1-405B | 
	cpu | 0 | 42 | null | null | true | true | true | true | false | 
	float32 | 
	CPUExecutionProvider | true | null | null | null | false | false | false | 
	inference | 
	optimum_benchmark.scenarios.inference.scenario.InferenceScenario | 10 | 10 | 10 | 1 | 2 | 256 | null | true | true | true | 64 | 64 | 
	process | 
	optimum_benchmark.launchers.process.launcher.ProcessLauncher | false | null | false | 
	spawn | 
	 Intel(R) Xeon(R) Platinum 8488C | 32 | 66,326.188032 | 
	Linux | 
	x86_64 | 
	Linux-5.10.226-214.880.amzn2.x86_64-x86_64-with-glibc2.35 | 
	x86_64 | 
	3.10.12 | 
	0.5.0.dev0 | 
	08c9f59440cf4e5a5d6711ec19e8329ab2de652d | 
	4.45.2 | null | 
	1.0.1 | null | null | null | 
	1.23.2 | null | null | null | null | null | false | true | 
	Traceback (most recent call last):
  File "/workspace/src/common/benchmark_runner.py", line 118, in execute_and_log_benchmark
    benchmark_report = Benchmark.launch(benchmark_config)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 51, in launch
    report = launcher.launch(worker=Benchmark.run, worker_args=[config])
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 57, in launch
    raise RuntimeError(f"Isolated process exited with non-zero code {isolated_process.exitcode}")
RuntimeError: Isolated process exited with non-zero code -9
 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 
| 
	bfloat16-eager-onnxruntime | 
	onnxruntime | 
	ort:1.19.2 | 
	optimum_benchmark.backends.onnxruntime.backend.ORTBackend | 
	text-generation | 
	transformers | 
	llama | 
	meta-llama/Meta-Llama-3-8B | 
	meta-llama/Meta-Llama-3-8B | 
	cpu | 0 | 42 | null | null | true | true | true | true | false | 
	bfloat16 | 
	CPUExecutionProvider | true | null | null | null | false | false | false | 
	inference | 
	optimum_benchmark.scenarios.inference.scenario.InferenceScenario | 10 | 10 | 10 | 1 | 2 | 256 | null | true | true | true | 64 | 64 | 
	process | 
	optimum_benchmark.launchers.process.launcher.ProcessLauncher | false | null | false | 
	spawn | 
	 Intel(R) Xeon(R) Platinum 8488C | 32 | 66,326.188032 | 
	Linux | 
	x86_64 | 
	Linux-5.10.226-214.880.amzn2.x86_64-x86_64-with-glibc2.35 | 
	x86_64 | 
	3.10.12 | 
	0.5.0.dev0 | 
	08c9f59440cf4e5a5d6711ec19e8329ab2de652d | 
	4.45.2 | null | 
	1.0.1 | null | null | null | 
	1.23.2 | null | null | null | null | null | false | true | 
	Traceback (most recent call last):
  File "/workspace/src/common/benchmark_runner.py", line 118, in execute_and_log_benchmark
    benchmark_report = Benchmark.launch(benchmark_config)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 51, in launch
    report = launcher.launch(worker=Benchmark.run, worker_args=[config])
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 66, in launch
    raise ChildProcessError(response["traceback"])
ChildProcessError: Traceback (most recent call last):
  File "/workspace/src/optimum-benchmark/optimum_benchmark/launchers/process/launcher.py", line 103, in target
    report = worker(*worker_args)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/benchmark/base.py", line 78, in run
    report = scenario.run(backend)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/scenarios/inference/scenario.py", line 89, in run
    self.run_model_loading_tracking(backend)
  File "/workspace/src/optimum-benchmark/optimum_benchmark/scenarios/inference/scenario.py", line 168, in run_model_loading_tracking
    backend.load()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 86, in load
    self.load_ortmodel_with_no_weights()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 133, in load_ortmodel_with_no_weights
    self.load_ortmodel_from_pretrained()
  File "/workspace/src/optimum-benchmark/optimum_benchmark/backends/onnxruntime/backend.py", line 118, in load_ortmodel_from_pretrained
    self.pretrained_model = self.ort_model_loader.from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/optimum/onnxruntime/modeling_ort.py", line 737, in from_pretrained
    return super().from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/optimum/modeling_base.py", line 438, in from_pretrained
    return from_pretrained_method(
  File "/usr/local/lib/python3.10/dist-packages/optimum/onnxruntime/modeling_decoder.py", line 653, in _from_transformers
    main_export(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/__main__.py", line 373, in main_export
    onnx_export_from_model(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 1193, in onnx_export_from_model
    _, onnx_outputs = export_models(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 783, in export_models
    export(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 888, in export
    export_output = export_pytorch(
  File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py", line 584, in export_pytorch
    onnx_export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/__init__.py", line 375, in export
    export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 502, in export
    _export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1564, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1117, in _model_to_graph
    graph = _optimize_graph(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 663, in _optimize_graph
    _C._jit_pass_onnx_graph_shape_type_inference(
RuntimeError: The serialized model is larger than the 2GiB limit imposed by the protobuf library. Therefore the output file must be a file path, so that the ONNX external data can be written to the same directory. Please specify the output file name.
 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 
End of preview.