Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    ArrowNotImplementedError
Message:      Cannot write struct type 'task_hashes' with no child field to Parquet. Consider adding a dummy child field.
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1870, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 620, in write_table
                  self._build_writer(inferred_schema=pa_table.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 441, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'task_hashes' with no child field to Parquet. Consider adding a dummy child field.
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1886, in _prepare_split_single
                  num_examples, num_bytes = writer.finalize()
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 639, in finalize
                  self._build_writer(self.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 441, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'task_hashes' with no child field to Parquet. Consider adding a dummy child field.
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1438, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1050, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 924, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1000, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1741, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1897, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
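The root cause is the `task_hashes` column: in every preview row it is an empty object (`{}`), so Arrow infers a struct type with no child fields, which the Parquet writer cannot serialize. One possible workaround (a sketch of my own, not part of the dataset's tooling) is to drop columns that are empty structs in every record before converting to Parquet:

```python
def drop_empty_struct_columns(rows):
    """Drop keys whose value is an empty dict ({}) in every row.

    Arrow infers such a column as a struct with no child fields,
    a type the Parquet writer refuses to serialize (the error above).
    Returns (cleaned_rows, sorted list of dropped keys).
    """
    if not rows:
        return rows, []
    empty = {
        key
        for key in rows[0]
        if all(isinstance(r.get(key), dict) and not r[key] for r in rows)
    }
    cleaned = [{k: v for k, v in r.items() if k not in empty} for r in rows]
    return cleaned, sorted(empty)


# Two rows shaped like the preview below: `task_hashes` is {} everywhere.
rows = [
    {"model_name": "EleutherAI/pythia-14m", "git_hash": "51a7ca9", "task_hashes": {}},
    {"model_name": "EleutherAI/pythia-14m", "git_hash": "51a7ca9", "task_hashes": {}},
]
cleaned, dropped = drop_empty_struct_columns(rows)
```

Alternatively, following the hint in the error message, one could insert a dummy child field (e.g. replace `{}` with `{"_": None}`) so the struct has at least one column.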


Column                         Type
results                        dict
groups                         dict
group_subtasks                 dict
configs                        dict
versions                       dict
n-shot                         dict
n-samples                      dict
config                         dict
git_hash                       string
date                           float64
pretty_env_info                string
transformers_version           string
upper_git_hash                 null
task_hashes                    dict
model_source                   string
model_name                     string
model_name_sanitized           string
start_time                     float64
end_time                       float64
total_evaluation_time_seconds  string
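The `results` column holds a nested JSON object keyed by task name, with metric keys like `acc,none` and `acc_stderr,none`. As a minimal sketch (assuming the raw result files are read as plain JSON, since the Parquet conversion fails), extracting the aggregate BLiMP accuracy from a row looks like:

```python
import json

# A trimmed `results` value from the first preview row (blimp @ step0).
raw = '{"blimp": {"acc,none": 0.5102835820895523, "acc_stderr,none": 0.0018749288807151285, "alias": "blimp"}}'

results = json.loads(raw)
acc = results["blimp"]["acc,none"]         # aggregate BLiMP accuracy
err = results["blimp"]["acc_stderr,none"]  # its standard error
print(f"blimp: {acc:.4f} +/- {err:.4f}")   # prints: blimp: 0.5103 +/- 0.0019
```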
Row 1 — blimp @ step0:
  results: { "blimp": { "acc,none": 0.5102835820895523, "acc_stderr,none": 0.0018749288807151285, "alias": "blimp" }, "blimp_adjunct_island": { "acc,none": 0.596, "acc_stderr,none": 0.015524980677122437, "alias": " - blimp_adjunct_island" }, "blimp_anaphor_gender_agreement": { "acc,none": 0.6...
  groups: { "blimp": { "acc,none": 0.5102835820895523, "acc_stderr,none": 0.0018749288807151285, "alias": "blimp" } }
  group_subtasks: { "blimp": [ "blimp_causative", "blimp_determiner_noun_agreement_irregular_2", "blimp_passive_1", "blimp_wh_questions_subject_gap", "blimp_irregular_plural_subject_verb_agreement_1", "blimp_wh_vs_that_no_gap", "blimp_complex_NP_island", "blimp_principle_A_reconstruction", "blimp_an...
  configs: { "blimp_adjunct_island": { "task": "blimp_adjunct_island", "group": "blimp", "dataset_path": "blimp", "dataset_name": "adjunct_island", "validation_split": "train", "doc_to_text": "", "doc_to_target": 0, "doc_to_choice": "{{[sentence_good, sentence_bad]}}", "description": "", ...
  versions: { "blimp_adjunct_island": 1, "blimp_anaphor_gender_agreement": 1, "blimp_anaphor_number_agreement": 1, "blimp_animate_subject_passive": 1, "blimp_animate_subject_trans": 1, "blimp_causative": 1, "blimp_complex_NP_island": 1, "blimp_coordinate_structure_constraint_complex_left_branch": 1, "blimp_coordi...
  n-shot: { "blimp": 0, "blimp_adjunct_island": 0, "blimp_anaphor_gender_agreement": 0, "blimp_anaphor_number_agreement": 0, "blimp_animate_subject_passive": 0, "blimp_animate_subject_trans": 0, "blimp_causative": 0, "blimp_complex_NP_island": 0, "blimp_coordinate_structure_constraint_complex_left_branch": 0, ...
  n-samples: { "blimp_causative": { "original": 1000, "effective": 1000 }, "blimp_determiner_noun_agreement_irregular_2": { "original": 1000, "effective": 1000 }, "blimp_passive_1": { "original": 1000, "effective": 1000 }, "blimp_wh_questions_subject_gap": { "original": 1000, "effective...
  config: { "model": "hf", "model_args": "pretrained=EleutherAI/pythia-14m,revision=step0", "model_num_parameters": 14067712, "model_dtype": "torch.float16", "model_revision": "step0", "model_sha": "bc249c50b4a381f99e9296f08a69b8562a31fde4", "batch_size": "1024", "batch_sizes": [], "device": "cuda", "use_cach...
  git_hash: 51a7ca9
  date: 1720429466.869086
  pretty_env_info: PyTorch version: 2.3.0+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A OS: CentOS Linux release 7.9.2009 (Core) (x86_64) GCC version: (GCC) 12.1.0 Clang version: Could not collect CMake version: Could not collect Libc version: glibc-2.17 Python version: 3.12.1 (main, Jan 1...
  transformers_version: 4.40.2
  upper_git_hash: null
  task_hashes: {}
  model_source: hf
  model_name: EleutherAI/pythia-14m
  model_name_sanitized: EleutherAI__pythia-14m
  start_time: 753492.986263
  end_time: 753904.208267
  total_evaluation_time_seconds: 411.22200435004197
Row 2 — lambada_openai @ step0:
  results: { "lambada_openai": { "perplexity,none": 3507781.8105202965, "perplexity_stderr,none": 338907.2426000304, "acc,none": 0, "acc_stderr,none": 0, "alias": "lambada_openai" } }
  groups: null
  group_subtasks: { "lambada_openai": [] }
  configs: { "lambada_openai": { "task": "lambada_openai", "group": [ "lambada" ], "dataset_path": "EleutherAI/lambada_openai", "dataset_name": "default", "dataset_kwargs": { "trust_remote_code": true }, "test_split": "test", "doc_to_text": "{{text.split(' ')[:-1]|join(' ')}}", ...
  versions: { "lambada_openai": 1 }
  n-shot: { "lambada_openai": 0 }
  n-samples: { "lambada_openai": { "original": 5153, "effective": 5153 } }
  config: { "model": "hf", "model_args": "pretrained=EleutherAI/pythia-14m,revision=step0,", "model_num_parameters": 14067712, "model_dtype": "torch.float16", "model_revision": "step0", "model_sha": "bc249c50b4a381f99e9296f08a69b8562a31fde4", "batch_size": "128", "batch_sizes": [], "device": "cuda", "use_cach...
  git_hash: 51a7ca9
  date: 1723751264.258567
  pretty_env_info: PyTorch version: 2.3.0+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A OS: CentOS Linux release 7.9.2009 (Core) (x86_64) GCC version: (GCC) 12.1.0 Clang version: Could not collect CMake version: Could not collect Libc version: glibc-2.17 Python version: 3.12.1 (main, Jan 1...
  transformers_version: 4.40.2
  upper_git_hash: null
  task_hashes: {}
  model_source: hf
  model_name: EleutherAI/pythia-14m
  model_name_sanitized: EleutherAI__pythia-14m
  start_time: 1133831.651543
  end_time: 1133915.107704
  total_evaluation_time_seconds: 83.45616106898524
Row 3 — simple_cooccurrence_bias @ step0:
  results: { "simple_cooccurrence_bias": { "likelihood_diff,none": -0.31463995695410724, "likelihood_diff_stderr,none": 0.01165886599756874, "pct_male_preferred,none": 0.8746438746438746, "pct_male_preferred_stderr,none": 0.017699230587944016, "alias": "simple_cooccurrence_bias" } }
  groups: null
  group_subtasks: { "simple_cooccurrence_bias": [] }
  configs: { "simple_cooccurrence_bias": { "task": "simple_cooccurrence_bias", "group": [ "social_bias" ], "dataset_path": "oskarvanderwal/simple-cooccurrence-bias", "test_split": "test", "doc_to_text": "{{sentence}}", "doc_to_target": [ 0, 1 ], "doc_to_choice": [ "fem...
  versions: { "simple_cooccurrence_bias": 1 }
  n-shot: { "simple_cooccurrence_bias": 0 }
  n-samples: { "simple_cooccurrence_bias": { "original": 351, "effective": 351 } }
  config: { "model": "hf", "model_args": "pretrained=EleutherAI/pythia-14m,revision=step0", "model_num_parameters": 14067712, "model_dtype": "torch.float16", "model_revision": "step0", "model_sha": "bc249c50b4a381f99e9296f08a69b8562a31fde4", "batch_size": "1024", "batch_sizes": [], "device": "cuda", "use_cach...
  git_hash: 51a7ca9
  date: 1724408842.463424
  pretty_env_info: PyTorch version: 2.4.0+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A OS: CentOS Linux release 7.9.2009 (Core) (x86_64) GCC version: (GCC) 12.1.0 Clang version: Could not collect CMake version: Could not collect Libc version: glibc-2.17 Python version: 3.9.0 (default, Oct...
  transformers_version: 4.44.0
  upper_git_hash: null
  task_hashes: {}
  model_source: hf
  model_name: EleutherAI/pythia-14m
  model_name_sanitized: EleutherAI__pythia-14m
  start_time: 4697201.426428
  end_time: 4697253.040466
  total_evaluation_time_seconds: 51.61403867881745
Row 4 — blimp @ step1:
  results: {"blimp":{"acc,none":0.5102835820895523,"acc_stderr,none":0.0018749288807151285,"alias":"blimp"},"bl(...TRUNCATED)
  groups: { "blimp": { "acc,none": 0.5102835820895523, "acc_stderr,none": 0.0018749288807151285, "alias": "blimp" } }
  group_subtasks: {"blimp":["blimp_causative","blimp_determiner_noun_agreement_irregular_2","blimp_passive_1","blimp_w(...TRUNCATED)
  configs: {"blimp_adjunct_island":{"task":"blimp_adjunct_island","group":"blimp","dataset_path":"blimp","datas(...TRUNCATED)
  versions: {"blimp_adjunct_island":1.0,"blimp_anaphor_gender_agreement":1.0,"blimp_anaphor_number_agreement":1.(...TRUNCATED)
  n-shot: {"blimp":0,"blimp_adjunct_island":0,"blimp_anaphor_gender_agreement":0,"blimp_anaphor_number_agreeme(...TRUNCATED)
  n-samples: {"blimp_causative":{"original":1000,"effective":1000},"blimp_determiner_noun_agreement_irregular_2":(...TRUNCATED)
  config: {"model":"hf","model_args":"pretrained=EleutherAI/pythia-14m,revision=step1","model_num_parameters":(...TRUNCATED)
  git_hash: 51a7ca9
  date: 1720429852.278012
  pretty_env_info: "PyTorch version: 2.3.0+cu121\nIs debug build: False\nCUDA used to build PyTorch: 12.1\nROCM used to(...TRUNCATED)
  transformers_version: 4.40.2
  upper_git_hash: null
  task_hashes: {}
  model_source: hf
  model_name: EleutherAI/pythia-14m
  model_name_sanitized: EleutherAI__pythia-14m
  start_time: 753927.127344
  end_time: 754281.841991
  total_evaluation_time_seconds: 354.71464768901933
Row 5 — lambada_openai @ step1:
  results: {"lambada_openai":{"perplexity,none":3507781.8105202965,"perplexity_stderr,none":338907.2426000304,"(...TRUNCATED)
  groups: null
  group_subtasks: { "lambada_openai": [] }
  configs: {"lambada_openai":{"task":"lambada_openai","group":["lambada"],"dataset_path":"EleutherAI/lambada_op(...TRUNCATED)
  versions: { "lambada_openai": 1 }
  n-shot: { "lambada_openai": 0 }
  n-samples: { "lambada_openai": { "original": 5153, "effective": 5153 } }
  config: {"model":"hf","model_args":"pretrained=EleutherAI/pythia-14m,revision=step1,","model_num_parameters"(...TRUNCATED)
  git_hash: 51a7ca9
  date: 1723751353.851652
  pretty_env_info: "PyTorch version: 2.3.0+cu121\nIs debug build: False\nCUDA used to build PyTorch: 12.1\nROCM used to(...TRUNCATED)
  transformers_version: 4.40.2
  upper_git_hash: null
  task_hashes: {}
  model_source: hf
  model_name: EleutherAI/pythia-14m
  model_name_sanitized: EleutherAI__pythia-14m
  start_time: 1133922.894151
  end_time: 1134001.333326
  total_evaluation_time_seconds: 78.43917500483803
Row 6 — simple_cooccurrence_bias @ step1:
  results: {"simple_cooccurrence_bias":{"likelihood_diff,none":-0.31463995695410724,"likelihood_diff_stderr,non(...TRUNCATED)
  groups: null
  group_subtasks: { "simple_cooccurrence_bias": [] }
  configs: {"simple_cooccurrence_bias":{"task":"simple_cooccurrence_bias","group":["social_bias"],"dataset_path(...TRUNCATED)
  versions: { "simple_cooccurrence_bias": 1 }
  n-shot: { "simple_cooccurrence_bias": 0 }
  n-samples: { "simple_cooccurrence_bias": { "original": 351, "effective": 351 } }
  config: {"model":"hf","model_args":"pretrained=EleutherAI/pythia-14m,revision=step1","model_num_parameters":(...TRUNCATED)
  git_hash: 51a7ca9
  date: 1724408896.377194
  pretty_env_info: "PyTorch version: 2.4.0+cu121\nIs debug build: False\nCUDA used to build PyTorch: 12.1\nROCM used to(...TRUNCATED)
  transformers_version: 4.44.0
  upper_git_hash: null
  task_hashes: {}
  model_source: hf
  model_name: EleutherAI/pythia-14m
  model_name_sanitized: EleutherAI__pythia-14m
  start_time: 4697261.07943
  end_time: 4697297.183277
  total_evaluation_time_seconds: 36.10384631436318
Row 7 — blimp @ step1000:
  results: {"blimp":{"acc,none":0.577134328358209,"acc_stderr,none":0.0016256946286422691,"alias":"blimp"},"bli(...TRUNCATED)
  groups: { "blimp": { "acc,none": 0.577134328358209, "acc_stderr,none": 0.0016256946286422691, "alias": "blimp" } }
  group_subtasks: {"blimp":["blimp_causative","blimp_determiner_noun_agreement_irregular_2","blimp_passive_1","blimp_w(...TRUNCATED)
  configs: {"blimp_adjunct_island":{"task":"blimp_adjunct_island","group":"blimp","dataset_path":"blimp","datas(...TRUNCATED)
  versions: {"blimp_adjunct_island":1.0,"blimp_anaphor_gender_agreement":1.0,"blimp_anaphor_number_agreement":1.(...TRUNCATED)
  n-shot: {"blimp":0,"blimp_adjunct_island":0,"blimp_anaphor_gender_agreement":0,"blimp_anaphor_number_agreeme(...TRUNCATED)
  n-samples: {"blimp_causative":{"original":1000,"effective":1000},"blimp_determiner_noun_agreement_irregular_2":(...TRUNCATED)
  config: {"model":"hf","model_args":"pretrained=EleutherAI/pythia-14m,revision=step1000","model_num_parameter(...TRUNCATED)
  git_hash: 51a7ca9
  date: 1720432542.210486
  pretty_env_info: "PyTorch version: 2.3.0+cu121\nIs debug build: False\nCUDA used to build PyTorch: 12.1\nROCM used to(...TRUNCATED)
  transformers_version: 4.40.2
  upper_git_hash: null
  task_hashes: {}
  model_source: hf
  model_name: EleutherAI/pythia-14m
  model_name_sanitized: EleutherAI__pythia-14m
  start_time: 756656.804734
  end_time: 756983.172127
  total_evaluation_time_seconds: 326.3673931409139
Row 8 — lambada_openai @ step1000:
  results: {"lambada_openai":{"perplexity,none":195988.90526637834,"perplexity_stderr,none":12193.7268807034,"a(...TRUNCATED)
  groups: null
  group_subtasks: { "lambada_openai": [] }
  configs: {"lambada_openai":{"task":"lambada_openai","group":["lambada"],"dataset_path":"EleutherAI/lambada_op(...TRUNCATED)
  versions: { "lambada_openai": 1 }
  n-shot: { "lambada_openai": 0 }
  n-samples: { "lambada_openai": { "original": 5153, "effective": 5153 } }
  config: {"model":"hf","model_args":"pretrained=EleutherAI/pythia-14m,revision=step1000,","model_num_paramete(...TRUNCATED)
  git_hash: 51a7ca9
  date: 1723752224.187968
  pretty_env_info: "PyTorch version: 2.3.0+cu121\nIs debug build: False\nCUDA used to build PyTorch: 12.1\nROCM used to(...TRUNCATED)
  transformers_version: 4.40.2
  upper_git_hash: null
  task_hashes: {}
  model_source: hf
  model_name: EleutherAI/pythia-14m
  model_name_sanitized: EleutherAI__pythia-14m
  start_time: 1134793.204587
  end_time: 1134878.555077
  total_evaluation_time_seconds: 85.35048984992318
Row 9 — simple_cooccurrence_bias @ step1000:
  results: {"simple_cooccurrence_bias":{"likelihood_diff,none":0.48672151134867353,"likelihood_diff_stderr,none(...TRUNCATED)
  groups: null
  group_subtasks: { "simple_cooccurrence_bias": [] }
  configs: {"simple_cooccurrence_bias":{"task":"simple_cooccurrence_bias","group":["social_bias"],"dataset_path(...TRUNCATED)
  versions: { "simple_cooccurrence_bias": 1 }
  n-shot: { "simple_cooccurrence_bias": 0 }
  n-samples: { "simple_cooccurrence_bias": { "original": 351, "effective": 351 } }
  config: {"model":"hf","model_args":"pretrained=EleutherAI/pythia-14m,revision=step1000","model_num_parameter(...TRUNCATED)
  git_hash: 51a7ca9
  date: 1724409271.13693
  pretty_env_info: "PyTorch version: 2.4.0+cu121\nIs debug build: False\nCUDA used to build PyTorch: 12.1\nROCM used to(...TRUNCATED)
  transformers_version: 4.44.0
  upper_git_hash: null
  task_hashes: {}
  model_source: hf
  model_name: EleutherAI/pythia-14m
  model_name_sanitized: EleutherAI__pythia-14m
  start_time: 4697640.102159
  end_time: 4697671.799938
  total_evaluation_time_seconds: 31.697778502479196
Row 10 — blimp @ step10000:
  results: {"blimp":{"acc,none":0.6514925373134328,"acc_stderr,none":0.00155966533870429,"alias":"blimp"},"blim(...TRUNCATED)
  groups: { "blimp": { "acc,none": 0.6514925373134328, "acc_stderr,none": 0.00155966533870429, "alias": "blimp" } }
  group_subtasks: {"blimp":["blimp_causative","blimp_determiner_noun_agreement_irregular_2","blimp_passive_1","blimp_w(...TRUNCATED)
  configs: {"blimp_adjunct_island":{"task":"blimp_adjunct_island","group":"blimp","datas(...TRUNCATED)
  versions: {"blimp_adjunct_island":1.0,"blimp_anaphor_gender_agreement":1.0,"blimp_anaphor_number_agreement":1.(...TRUNCATED)
  n-shot: {"blimp":0,"blimp_adjunct_island":0,"blimp_anaphor_gender_agreement":0,"blimp_anaphor_number_agreeme(...TRUNCATED)
  n-samples: {"blimp_causative":{"original":1000,"effective":1000},"blimp_determiner_noun_agreement_irregular_2":(...TRUNCATED)
  config: {"model":"hf","model_args":"pretrained=EleutherAI/pythia-14m,revision=step10000","model_num_paramete(...TRUNCATED)
  git_hash: 51a7ca9
  date: 1720435689.384633
  pretty_env_info: "PyTorch version: 2.3.0+cu121\nIs debug build: False\nCUDA used to build PyTorch: 12.1\nROCM used to(...TRUNCATED)
  transformers_version: 4.40.2
  upper_git_hash: null
  task_hashes: {}
  model_source: hf
  model_name: EleutherAI/pythia-14m
  model_name_sanitized: EleutherAI__pythia-14m
  start_time: 759789.583742
  end_time: 760113.082642
  total_evaluation_time_seconds: 323.4988997380715
End of preview.
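The preview covers the same tasks evaluated at successive Pythia-14m checkpoints (`revision=step0`, `step1`, `step1000`, `step10000`). To trace a metric across training, one can key each row by the revision embedded in its `config.model_args` string. A sketch, assuming rows are loaded as plain dicts shaped like the columns above:

```python
import re

def revision_of(row):
    """Pull the `revision=stepN` checkpoint tag out of config.model_args."""
    m = re.search(r"revision=(step\d+)", row["config"]["model_args"])
    return m.group(1) if m else None

def metric_by_step(rows, task, metric):
    """Map checkpoint revision -> metric value for one task across rows."""
    out = {}
    for row in rows:
        if task in row["results"]:
            out[revision_of(row)] = row["results"][task][metric]
    return out

# Tiny sample mirroring two of the preview rows above.
rows = [
    {"config": {"model_args": "pretrained=EleutherAI/pythia-14m,revision=step0"},
     "results": {"blimp": {"acc,none": 0.5102835820895523}}},
    {"config": {"model_args": "pretrained=EleutherAI/pythia-14m,revision=step1000"},
     "results": {"blimp": {"acc,none": 0.577134328358209}}},
]
curve = metric_by_step(rows, "blimp", "acc,none")
```

In the preview, BLiMP accuracy rises from roughly chance (0.510 at step0/step1) to 0.577 at step1000 and 0.651 at step10000.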

No dataset card yet · Downloads last month: 53 · Collection including EleutherAI/polypythias-evals