---
license: cc0-1.0
task_categories:
- summarization
- text-retrieval
language:
- en
tags:
- legal
- law
size_categories:
- n<1K
source_datasets:
- FiscalNote/billsum
dataset_info:
- config_name: default
  features:
  - name: query-id
    dtype: string
  - name: corpus-id
    dtype: string
  - name: score
    dtype: float64
  splits:
  - name: test
    num_examples: 500
- config_name: corpus
  features:
  - name: _id
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: corpus
    num_examples: 500
- config_name: queries
  features:
  - name: _id
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: queries
    num_examples: 500
configs:
- config_name: default
  data_files:
  - split: test
    path: data/default.jsonl
- config_name: corpus
  data_files:
  - split: corpus
    path: data/corpus.jsonl
- config_name: queries
  data_files:
  - split: queries
    path: data/queries.jsonl
pretty_name: BillSumUS (MTEB format)
---
# BillSumUS (MTEB format)
This is the federal US test split of the [BillSum](https://huggingface.co/datasets/FiscalNote/billsum) dataset formatted in the [Massive Text Embedding Benchmark (MTEB)](https://github.com/embeddings-benchmark/mteb) information retrieval dataset format.

This dataset is intended to facilitate the consistent and reproducible evaluation of information retrieval models on BillSum with the [`mteb`](https://github.com/embeddings-benchmark/mteb) embedding model evaluation framework.

More specifically, this dataset tests the ability of information retrieval models to retrieve US congressional bills based on their summaries.

This dataset has been processed into the MTEB format by [Isaacus](https://isaacus.com/), a legal AI research company.

## Methodology 🧪
To understand how BillSum itself was created, refer to its [documentation](https://github.com/FiscalNote/BillSum).

This dataset was formatted by taking the federal US split of BillSum, treating summaries as queries (or anchors) and bills as relevant (or positive) passages, and randomly sampling 500 examples (as per MTEB guidelines, to keep the size of this evaluation set manageable).
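Below is a minimal, illustrative sketch of that conversion (not the exact script used by Isaacus). It assumes BillSum's federal `test` split with its `text`, `summary`, and `title` columns; the ID scheme shown is a synthetic placeholder, as the actual IDs used in this dataset are not documented here.

```python
# Illustrative sketch of the conversion described above (assumptions noted in comments).
import json
import random

from datasets import load_dataset

# Federal US test split of BillSum (columns: text, summary, title).
billsum_test = load_dataset("FiscalNote/billsum", split="test")

# Randomly sample 500 examples, per MTEB guidance on evaluation set size.
random.seed(0)
indices = random.sample(range(len(billsum_test)), 500)

with open("data/corpus.jsonl", "w") as corpus_f, \
     open("data/queries.jsonl", "w") as queries_f, \
     open("data/default.jsonl", "w") as qrels_f:
    for i in indices:
        example = billsum_test[i]
        corpus_id = f"corpus-{i}"  # placeholder ID scheme
        query_id = f"query-{i}"    # placeholder ID scheme
        # Bills are the corpus documents.
        corpus_f.write(json.dumps(
            {"_id": corpus_id, "title": example["title"], "text": example["text"]}) + "\n")
        # Summaries act as queries (anchors).
        queries_f.write(json.dumps(
            {"_id": query_id, "text": example["summary"]}) + "\n")
        # Each summary-bill pair is relevant, with a score of 1.
        qrels_f.write(json.dumps(
            {"query-id": query_id, "corpus-id": corpus_id, "score": 1.0}) + "\n")
```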

## Structure 🗂️
As per the MTEB information retrieval dataset format, this dataset comprises three splits: `default`, `corpus`, and `queries`.

The `default` split pairs summaries (`query-id`) with the raw text of the bills (`corpus-id`), each pair having a `score` of 1.

The `corpus` split contains bills, with the text of a bill being stored in the `text` key and its id being stored in the `_id` key.

The `queries` split contains summaries, with the text of a summary being stored in the `text` key and its id being stored in the `_id` key.
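The three splits can be loaded with the Hugging Face `datasets` library, as in the sketch below. The repository ID shown is a placeholder and should be replaced with this dataset's actual ID on the Hub.

```python
# Minimal loading sketch using the `datasets` library.
from datasets import load_dataset

REPO_ID = "isaacus/billsum-us-mteb"  # placeholder; substitute the real repository ID

qrels = load_dataset(REPO_ID, "default", split="test")       # query-id, corpus-id, score
corpus = load_dataset(REPO_ID, "corpus", split="corpus")      # _id, title, text
queries = load_dataset(REPO_ID, "queries", split="queries")   # _id, text

print(qrels[0], corpus[0], queries[0], sep="\n")
```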

## License 📜
This dataset is licensed under [CC0](https://creativecommons.org/public-domain/cc0/).

## Citation 🔖
```bibtex
@inproceedings{Eidelman_2019,
   title={BillSum: A Corpus for Automatic Summarization of US Legislation},
   url={http://dx.doi.org/10.18653/v1/D19-5406},
   DOI={10.18653/v1/d19-5406},
   booktitle={Proceedings of the 2nd Workshop on New Frontiers in Summarization},
   publisher={Association for Computational Linguistics},
   author={Eidelman, Vladimir},
   year={2019},
   pages={48--56},
   eprint={1910.00523}
}
```