This repository contains the model described in the paper "Rank-DistiLLM: Closing the Effectiveness Gap Between Cross-Encoders and LLMs for Passage Re-Ranking".
The code for training and evaluation can be found at https://github.com/webis-de/msmarco-llm-distillation.
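As a rough illustration of how a cross-encoder like this one re-ranks passages, here is a minimal sketch using the `transformers` library. It assumes the checkpoint loads as a sequence-classification model that scores a query-passage pair with a single relevance logit; the authors' own training and evaluation tooling (see the linked repository) may load the model differently.

```python
# Hedged sketch: passage re-ranking with a cross-encoder.
# Assumption: webis/monoelectra-large loads via AutoModelForSequenceClassification
# and emits one relevance logit per query-passage pair.
from typing import List, Tuple


def rank(passages: List[str], scores: List[float]) -> List[Tuple[str, float]]:
    """Sort passages by descending relevance score."""
    return sorted(zip(passages, scores), key=lambda pair: pair[1], reverse=True)


def main() -> None:
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("webis/monoelectra-large")
    model = AutoModelForSequenceClassification.from_pretrained("webis/monoelectra-large")
    model.eval()

    query = "how do cross-encoders rank passages?"
    passages = [
        "Cross-encoders score a query-passage pair jointly in one forward pass.",
        "The weather tomorrow is expected to be sunny.",
    ]
    # Each passage is encoded together with the query (cross-encoding).
    inputs = tokenizer(
        [query] * len(passages), passages,
        padding=True, truncation=True, return_tensors="pt",
    )
    with torch.no_grad():
        scores = model(**inputs).logits.squeeze(-1).tolist()
    for passage, score in rank(passages, scores):
        print(f"{score:+.3f}  {passage}")


if __name__ == "__main__":
    main()
```

The pure `rank` helper is hypothetical glue code; the heavy lifting is the joint encoding of query and passage, which is what distinguishes a cross-encoder from a bi-encoder retriever.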
The model is fine-tuned from the google/electra-large-discriminator base model.