umarbutler committed on
Commit 633695d · verified · 1 Parent(s): 0c063eb

Shaped up README

Files changed (1)
README.md +14 -16
README.md CHANGED
@@ -1,8 +1,8 @@
 ---
 license: cc0-1.0
 task_categories:
-- text-retrieval
 - summarization
+- text-retrieval
 language:
 - en
 tags:
@@ -11,7 +11,7 @@ tags:
 size_categories:
 - n<1K
 source_datasets:
-- https://huggingface.co/datasets/FiscalNote/billsum
+- FiscalNote/billsum
 dataset_info:
 - config_name: default
   features:
@@ -57,34 +57,30 @@ configs:
   data_files:
   - split: queries
     path: data/queries.jsonl
-pretty_name: BillSum MTEB Benchmark
+pretty_name: BillSumUS MTEB Benchmark
 ---
-# BillSum MTEB Benchmark 🍋
-This is the US (federal) split of the [BillSum](https://huggingface.co/datasets/FiscalNote/billsum) dataset formatted in the [Massive Text Embedding Benchmark (MTEB)](https://github.com/embeddings-benchmark/mteb) information retrieval dataset format.
+# BillSumUS MTEB Benchmark 🍋
+This is the federal US split of the [BillSum](https://huggingface.co/datasets/FiscalNote/billsum) dataset formatted in the [Massive Text Embedding Benchmark (MTEB)](https://github.com/embeddings-benchmark/mteb) information retrieval dataset format.
 
-The Californian mteb-formatted split is available via: https://huggingface.co/datasets/isaacus/mteb-BillSumCA
+This dataset is intended to facilitate the consistent and reproducible evaluation of information retrieval models on BillSum with the [`mteb`](https://github.com/embeddings-benchmark/mteb) embedding model evaluation framework.
 
-This dataset is intended to facilitate the consistent and reproducible evaluation of summarization and information retrieval models on BillSum with the [`mteb`](https://github.com/embeddings-benchmark/mteb) embedding model evaluation framework.
-
-More specifically, this dataset tests the ability of information retrieval models to identify legislation given a prompts formatted as a summary.
+More specifically, this dataset tests the ability of information retrieval models to retrieve US congressional bills based on their summaries.
 
 This dataset has been processed into the MTEB format by [Isaacus](https://isaacus.com/), a legal AI research company.
 
 ## Methodology 🧪
-To understand how BillSum was created, refer to its [documentation](https://github.com/FiscalNote/BillSum).
-
-This dataset was formatted by assigning the summaries as queries (or anchors), and treating the `text` column as relevant (or positive) passages.
+To understand how BillSum itself was created, refer to its [documentation](https://github.com/FiscalNote/BillSum).
 
-This benchmarking dataset pulls 500 random entries from the BillSum test split.
+This dataset was formatted by taking the federal US split of BillSum, treating summaries as queries (or anchors) and bills as relevant (or positive) passages, and randomly sampling 500 examples (as per MTEB guidelines, to keep the size of this evaluation set manageable).
 
 ## Structure 🗂️
 As per the MTEB information retrieval dataset format, this dataset comprises three splits, `default`, `corpus` and `queries`.
 
 The `default` split pairs summary (`query-id`) with the raw text of the bills (`corpus-id`), each pair having a `score` of 1.
 
-The `corpus` split contains extracts from congressional bills, with the `text` key containg the raw text and its id being stored in the `_id` key.
+The `corpus` split contains bills, with the text of a bill being stored in the `text` key and its id being stored in the `_id` key.
 
-The `queries` split contains the summaries, with the text of a query being stored in the `text` key and its id being stored in the `_id` key.
+The `queries` split contains summaries, with the text of a summary being stored in the `text` key and its id being stored in the `_id` key.
 
 ## License 📜
 To the extent that any intellectual property rights reside in the contributions made by Isaacus in formatting and processing this dataset, Isaacus licenses those contributions under the same license terms as the source dataset. You are free to use this dataset without citing Isaacus.
@@ -101,5 +97,7 @@ The source dataset is licensed under [CC0](https://creativecommons.org/public-do
   publisher={Association for Computational Linguistics},
   author={Eidelman, Vladimir},
   year={2019},
-  pages={48–56} }
+  pages={48–56},
+  eprint={1910.00523}
+}
 ```
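For reference, the three-split layout described in the updated README can be sketched as below. The field names (`_id`, `text`, `query-id`, `corpus-id`, `score`) come from the README; the sample records themselves are hypothetical stand-ins, not actual BillSum entries.

```python
# Minimal sketch of the MTEB information-retrieval layout (hypothetical records).
corpus = [  # `corpus` split: one record per bill
    {"_id": "bill-1", "text": "A bill to authorize appropriations for fiscal year 1995."},
]
queries = [  # `queries` split: one record per summary
    {"_id": "query-1", "text": "Authorizes appropriations for fiscal year 1995."},
]
# `default` split: pairs each summary with its bill at a relevance score of 1.
qrels = [{"query-id": "query-1", "corpus-id": "bill-1", "score": 1}]

def relevant_bills(query_id, qrels, corpus):
    """Return the texts of corpus documents judged relevant to a query."""
    relevant_ids = {
        r["corpus-id"] for r in qrels
        if r["query-id"] == query_id and r["score"] > 0
    }
    return [doc["text"] for doc in corpus if doc["_id"] in relevant_ids]

print(relevant_bills("query-1", qrels, corpus))
```

In practice each split would be read from its JSONL file (e.g. `data/queries.jsonl`, per the YAML `configs` above) rather than built inline.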