ConTEB evaluation datasets

Evaluation datasets of the ConTEB benchmark. Use the "test" split where available, otherwise "validation", otherwise "train".

- illuin-conteb/covid-qa — 4.46k rows, updated Jun 2
- illuin-conteb/geography — 11.4k rows, updated May 30
- illuin-conteb/esg-reports — 3.74k rows, updated May 30
- illuin-conteb/insurance — 180 rows, updated May 30
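The split-selection rule above (prefer "test", fall back to "validation", then "train") can be sketched as a small helper. This is an illustrative sketch, not part of the ConTEB tooling; `pick_split` is a hypothetical name, and the commented lines assume the Hugging Face `datasets` library.

```python
def pick_split(available_splits):
    """Return the preferred ConTEB split: "test" if present,
    otherwise "validation", otherwise "train"."""
    for split in ("test", "validation", "train"):
        if split in available_splits:
            return split
    raise ValueError(f"no usable split among {available_splits!r}")

# With the `datasets` library this would be used as (not executed here,
# since it downloads data):
#   from datasets import get_dataset_split_names, load_dataset
#   splits = get_dataset_split_names("illuin-conteb/covid-qa")
#   data = load_dataset("illuin-conteb/covid-qa", split=pick_split(splits))
```

Applying the same helper to each evaluation dataset keeps the split choice consistent across the benchmark.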
ConTEB training datasets

Training data for the InSeNT method.

- illuin-conteb/narrative-qa — 47.3k rows, updated Jun 2
- illuin-conteb/squad-conteb-train — 91.8k rows, updated Jun 2
- illuin-conteb/mldr-conteb-train — 566k rows, updated Jun 2
ConTEB models

Our models trained with the InSeNT approach. These are the checkpoints we used to run the evaluations reported in our paper.

- illuin-conteb/modern-colbert-insent — feature extraction, 0.1B params, updated Jun 2
- illuin-conteb/modernbert-large-insent — sentence similarity, 0.4B params, updated Jun 2