---
license: apache-2.0
base_model:
- Qwen/Qwen3-4B
tags:
- transformers
- sentence-similarity
- feature-extraction
- text-embeddings-inference
- reranking
pipeline_tag: feature-extraction
datasets:
- Alibaba-NLP/E2Rank_ranking_datasets
---

<!-- <div align="center">
<p align="center">
<img src="assets/overall.jpg" width="50%" height="50%" />
</p>
</div> -->

<div align="center">
<h1>E2Rank: Your Text Embedding can Also be an Effective and Efficient Listwise Reranker</h1>

<a href="https://Alibaba-NLP.github.io/E2Rank/">🤖 Website</a> |
<a href="https://arxiv.org/abs/2510.22733">📄 Arxiv Paper</a> |
<a href="https://huggingface.co/collections/Alibaba-NLP/e2rank">🤗 Huggingface Collection</a> |
<a href="#🚩-citation">🚩 Citation</a>

</div>

# 📌 Introduction

We introduce E2Rank, meaning **E**fficient **E**mbedding-based **Rank**ing (also meaning **Embedding-to-Rank**), which extends a single text embedding model to perform both high-quality retrieval and listwise reranking, thereby achieving strong effectiveness with remarkable efficiency.

By applying cosine similarity between the query and document embeddings as a unified ranking function, the listwise ranking prompt, which is constructed from the original query and its candidate documents, serves as an enhanced query enriched with signals from the top-K documents, akin to pseudo-relevance feedback (PRF) in traditional retrieval models. This design preserves the efficiency and representational quality of the base embedding model while significantly improving its reranking performance.
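
In other words, retrieval and reranking share one scoring function and differ only in what plays the role of the query. Using $E(\cdot)$ to denote the shared embedding model (the notation below is ours, not from the paper):

$$
s_{\text{retrieval}}(q, d_i) = \cos\big(E(q),\, E(d_i)\big), \qquad
s_{\text{rerank}}(q, d_i) = \cos\big(E(\mathrm{ListwisePrompt}(q, d_1, \ldots, d_K)),\, E(d_i)\big)
$$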

Empirically, E2Rank achieves state-of-the-art results on the BEIR reranking benchmark and demonstrates competitive performance on the reasoning-intensive BRIGHT benchmark, with very low reranking latency. We also show that the ranking training process improves embedding performance on the MTEB benchmark. Our findings indicate that a single embedding model can effectively unify retrieval and reranking, offering both computational efficiency and competitive ranking accuracy.

**Our work highlights the potential of single embedding models to serve as unified retrieval-reranking engines, offering a practical, efficient, and accurate alternative to complex multi-stage ranking systems.**

# 🚀 Quick Start

## Model List

| Supported Task | Model Name | Size | Layers | Sequence Length | Embedding Dimension | Instruction Aware |
|---------------------------|------------|------|--------|-----------------|---------------------|-------------------|
| **Embedding + Reranking** | [Alibaba-NLP/E2Rank-0.6B](https://huggingface.co/Alibaba-NLP/E2Rank-0.6B) | 0.6B | 28 | 32K | 1024 | Yes |
| **Embedding + Reranking** | [Alibaba-NLP/E2Rank-4B](https://huggingface.co/Alibaba-NLP/E2Rank-4B) | 4B | 36 | 32K | 2560 | Yes |
| **Embedding + Reranking** | [Alibaba-NLP/E2Rank-8B](https://huggingface.co/Alibaba-NLP/E2Rank-8B) | 8B | 36 | 32K | 4096 | Yes |
| Embedding Only | [Alibaba-NLP/E2Rank-0.6B-Embedding-Only](https://huggingface.co/Alibaba-NLP/E2Rank-0.6B-Embedding-Only) | 0.6B | 28 | 32K | 1024 | Yes |
| Embedding Only | [Alibaba-NLP/E2Rank-4B-Embedding-Only](https://huggingface.co/Alibaba-NLP/E2Rank-4B-Embedding-Only) | 4B | 36 | 32K | 2560 | Yes |
| Embedding Only | [Alibaba-NLP/E2Rank-8B-Embedding-Only](https://huggingface.co/Alibaba-NLP/E2Rank-8B-Embedding-Only) | 8B | 36 | 32K | 4096 | Yes |

> **Note**:
> - `Embedding Only` indicates that the model is trained only with contrastive learning and supports only embedding tasks, while `Embedding + Reranking` indicates the **full E2Rank model** trained with both embedding and reranking objectives (for more details, please refer to the [paper](https://arxiv.org/abs/2510.22733)).
> - `Instruction Aware` indicates whether the model supports customizing the input instruction according to different tasks.
<!-- > - For `Listwise Reranking` models, they are supervised fine-tuned from the Qwen3 Models in the paradigm of RankGPT and support only the reranking task. -->

## Usage

### Embedding Model

The usage of E2Rank as an embedding model is similar to [Qwen3-Embedding](https://github.com/QwenLM/Qwen3-Embedding). The only difference is that Qwen3-Embedding automatically appends an EOS token, while E2Rank requires users to manually append the special token `<|endoftext|>` to the end of each input text.

**vLLM Usage (recommended)**

```python
# Requires vllm>=0.8.5
import torch
from vllm import LLM
from vllm.config import PoolerConfig

def get_detailed_instruct(task_description: str, query: str) -> str:
    return f'Instruct: {task_description}\nQuery:{query}'

# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'

queries = [
    get_detailed_instruct(task, 'What is the capital of China?'),
    get_detailed_instruct(task, 'Explain gravity')
]
# No need to add instruction for retrieval documents
documents = [
    "The capital of China is Beijing.",
    "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun."
]
input_texts = queries + documents
input_texts = [t + "<|endoftext|>" for t in input_texts]

model = LLM(
    model="Alibaba-NLP/E2Rank-4B",
    task="embed",
    override_pooler_config=PoolerConfig(pooling_type="LAST", normalize=True)
)

outputs = model.embed(input_texts)
embeddings = torch.tensor([o.outputs.embedding for o in outputs])
scores = (embeddings[:2] @ embeddings[2:].T)
print(scores.tolist())
```

<details>
<summary><b>Transformers Usage</b></summary>

```python
# Requires transformers>=4.51.0
import torch
import torch.nn.functional as F

from torch import Tensor
from transformers import AutoTokenizer, AutoModel


def last_token_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor:
    left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
    if left_padding:
        return last_hidden_states[:, -1]
    else:
        sequence_lengths = attention_mask.sum(dim=1) - 1
        batch_size = last_hidden_states.shape[0]
        return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]


def get_detailed_instruct(task_description: str, query: str) -> str:
    return f'Instruct: {task_description}\nQuery:{query}'

# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'

queries = [
    get_detailed_instruct(task, 'What is the capital of China?'),
    get_detailed_instruct(task, 'Explain gravity')
]
# No need to add instruction for retrieval documents
documents = [
    "The capital of China is Beijing.",
    "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun."
]
input_texts = queries + documents
input_texts = [t + "<|endoftext|>" for t in input_texts]

tokenizer = AutoTokenizer.from_pretrained('Alibaba-NLP/E2Rank-4B', padding_side='left')
model = AutoModel.from_pretrained('Alibaba-NLP/E2Rank-4B')

max_length = 8192

# Tokenize the input texts
batch_dict = tokenizer(
    input_texts,
    padding=True,
    truncation=True,
    max_length=max_length,
    return_tensors="pt",
)
batch_dict.to(model.device)
with torch.no_grad():
    outputs = model(**batch_dict)
    embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T)

print(scores.tolist())
```
</details>

### Reranking

To use E2Rank as a reranker, the only additional step is to build a *listwise prompt* from the query and (part of) the documents to be reranked; everything else is the same as using the embedding model.

**vLLM Usage (recommended)**

```python
# Requires vllm>=0.8.5
import torch
from vllm import LLM
from vllm.config import PoolerConfig

model = LLM(
    model="Alibaba-NLP/E2Rank-4B",
    task="embed",
    override_pooler_config=PoolerConfig(pooling_type="LAST", normalize=True)
)
tokenizer = model.get_tokenizer()

def get_listwise_prompt(task_description: str, query: str, documents: list[str], num_input_docs: int = 20) -> str:
    input_docs = documents[:num_input_docs]
    input_docs = "\n".join([f"[{i}] {doc}" for i, doc in enumerate(input_docs, start=1)])
    messages = [{
        "role": "user",
        "content": f'{task_description}\nDocuments:\n{input_docs}Search Query:{query}'
    }]
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True,
        enable_thinking=False,
    )
    return text

task = 'Given a web search query and some relevant documents, rerank the documents that answer the query:'

queries = [
    'What is the capital of China?',
    'Explain gravity'
]

# No need to add instruction for retrieval documents
documents = [
    "The capital of China is Beijing.",
    "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun."
]
documents = [doc + "<|endoftext|>" for doc in documents]

pseudo_queries = [
    get_listwise_prompt(task, queries[0], documents),
    get_listwise_prompt(task, queries[1], documents)
]  # no need to add the EOS token here

input_texts = pseudo_queries + documents

outputs = model.embed(input_texts)
embeddings = torch.tensor([o.outputs.embedding for o in outputs])
scores = (embeddings[:2] @ embeddings[2:].T)
print(scores.tolist())
```

<details>
<summary><b>Transformers Usage</b></summary>

```python
# Requires transformers>=4.51.0
import torch
import torch.nn.functional as F

from torch import Tensor
from transformers import AutoTokenizer, AutoModel


tokenizer = AutoTokenizer.from_pretrained('Alibaba-NLP/E2Rank-4B', padding_side='left')
model = AutoModel.from_pretrained('Alibaba-NLP/E2Rank-4B')


def last_token_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor:
    left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
    if left_padding:
        return last_hidden_states[:, -1]
    else:
        sequence_lengths = attention_mask.sum(dim=1) - 1
        batch_size = last_hidden_states.shape[0]
        return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]


def get_listwise_prompt(task_description: str, query: str, documents: list[str], num_input_docs: int = 20) -> str:
    input_docs = documents[:num_input_docs]
    input_docs = "\n".join([f"[{i}] {doc}" for i, doc in enumerate(input_docs, start=1)])
    messages = [{
        "role": "user",
        "content": f'{task_description}\nDocuments:\n{input_docs}Search Query:{query}'
    }]
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True,
        enable_thinking=False,
    )
    return text

task = 'Given a web search query and some relevant documents, rerank the documents that answer the query:'

queries = [
    'What is the capital of China?',
    'Explain gravity'
]

# No need to add instruction for retrieval documents
documents = [
    "The capital of China is Beijing.",
    "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun."
]
documents = [doc + "<|endoftext|>" for doc in documents]

pseudo_queries = [
    get_listwise_prompt(task, queries[0], documents),
    get_listwise_prompt(task, queries[1], documents)
]  # no need to add the EOS token here

input_texts = pseudo_queries + documents

max_length = 8192

# Tokenize the input texts
batch_dict = tokenizer(
    input_texts,
    padding=True,
    truncation=True,
    max_length=max_length,
    return_tensors="pt",
)
batch_dict.to(model.device)
with torch.no_grad():
    outputs = model(**batch_dict)
    embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T)

print(scores.tolist())
```
</details>

### End-to-end search

Since E2Rank extends a single text embedding model to perform both high-quality retrieval and listwise reranking, you can use it directly to build an end-to-end search system. By reusing the document embeddings computed during the retrieval stage, E2Rank only needs to compute the pseudo query's embedding at reranking time, so it can rerank the retrieved documents with minimal additional computational overhead.

Official example code is coming soon.
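
In the meantime, the snippet below is a minimal, illustrative sketch of this two-stage flow based on the vLLM examples above; the toy corpus, the top-k cutoff, and the exact wiring are our assumptions, not the official recipe.

```python
# End-to-end sketch: retrieve with the query embedding, then rerank the top-k
# documents with a listwise pseudo query, reusing the same document embeddings.
import torch
from vllm import LLM
from vllm.config import PoolerConfig

model = LLM(
    model="Alibaba-NLP/E2Rank-4B",
    task="embed",
    override_pooler_config=PoolerConfig(pooling_type="LAST", normalize=True)
)
tokenizer = model.get_tokenizer()

def get_listwise_prompt(task_description: str, query: str, documents: list[str], num_input_docs: int = 20) -> str:
    input_docs = "\n".join([f"[{i}] {doc}" for i, doc in enumerate(documents[:num_input_docs], start=1)])
    messages = [{"role": "user", "content": f'{task_description}\nDocuments:\n{input_docs}Search Query:{query}'}]
    return tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True, enable_thinking=False)

query = 'What is the capital of China?'
corpus = [
    "The capital of China is Beijing.",
    "Beijing is a large city in northern China.",
    "Gravity is a force that attracts two bodies towards each other.",
]

# Retrieval stage: embed the instructed query and every document once.
retrieval_task = 'Given a web search query, retrieve relevant passages that answer the query'
retrieval_query = f'Instruct: {retrieval_task}\nQuery:{query}' + "<|endoftext|>"
documents = [doc + "<|endoftext|>" for doc in corpus]

outputs = model.embed([retrieval_query] + documents)
query_emb = torch.tensor(outputs[0].outputs.embedding)
doc_embs = torch.tensor([o.outputs.embedding for o in outputs[1:]])

retrieval_scores = doc_embs @ query_emb
top_k = torch.topk(retrieval_scores, k=min(2, len(corpus))).indices.tolist()

# Reranking stage: only the listwise pseudo query needs a new embedding;
# the document embeddings from the retrieval stage are reused as-is.
rerank_task = 'Given a web search query and some relevant documents, rerank the documents that answer the query:'
pseudo_query = get_listwise_prompt(rerank_task, query, [documents[i] for i in top_k])
pseudo_query_emb = torch.tensor(model.embed([pseudo_query])[0].outputs.embedding)

rerank_scores = doc_embs[top_k] @ pseudo_query_emb
order = rerank_scores.argsort(descending=True).tolist()
print([corpus[top_k[i]] for i in order])
```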

# 📊 Evaluation

## Reranking Benchmark

### BEIR

| Model | Covid | NFCorpus | Touche | DBPedia | SciFact | Signal | News | Robust | Avg. |
|------------------------------------------------|:-----:|:--------:|:------:|:-------:|:-------:|:------:|:-----:|:------:|:---------:|
| BM25 | 59.47 | 30.75 | 44.22 | 31.80 | 67.89 | 33.05 | 39.52 | 40.70 | 43.43 |
| *Zero-shot Listwise Reranker* | | | | | | | | | |
| RankGPT-4o | 83.41 | 39.67 | 32.26 | 45.56 | 77.41 | 34.20 | 51.92 | 60.25 | 53.09 |
| RankGPT-4o-mini | 80.03 | 38.73 | 30.91 | 44.54 | 73.14 | 33.64 | 50.91 | 57.41 | 51.16 |
| RankQwen3-14B | 84.45 | 38.94 | 38.30 | 44.52 | 78.64 | 33.58 | 51.24 | 59.66 | 53.67 |
| RankQwen3-32B | 83.48 | 39.22 | 37.13 | 45.00 | 78.22 | 32.12 | 51.08 | 60.74 | 53.37 |
| *Fine-tuned Listwise Reranker based on Qwen3* | | | | | | | | | |
| RankQwen3-0.6B | 78.35 | 36.41 | 37.54 | 39.19 | 71.01 | 30.96 | 44.43 | 46.31 | 48.03 |
| RankQwen3-4B | 83.91 | 39.88 | 32.66 | 43.91 | 76.37 | 32.15 | 50.81 | 59.36 | 52.38 |
| RankQwen3-8B | 85.37 | 40.05 | 31.73 | 45.44 | 78.96 | 32.48 | 52.36 | 60.72 | 53.39 |
| *Ours* | | | | | | | | | |
| **E2Rank-0.6B** | 79.17 | 38.60 | 41.91 | 41.96 | 73.43 | 35.26 | 52.75 | 53.67 | 52.09 |
| **E2Rank-4B** | 83.30 | 39.20 | 43.16 | 42.95 | 77.19 | 34.48 | 52.71 | 60.16 | 54.14 |
| **E2Rank-8B** | 84.09 | 39.08 | 42.06 | 43.44 | 77.49 | 34.01 | 54.25 | 60.34 | **54.35** |

## Embedding Benchmark

### MTEB (Eng, v1)

| Models | Retr. | Rerank. | Clust. | PairClass. | Class. | STS | Summ. | Avg. |
|------------------------------------|:-----:|:-------:|:------:|:----------:|:------:|:-----:|:-----:|:---------:|
| Instructor-xl | 49.26 | 57.29 | 44.74 | 86.62 | 73.12 | 83.06 | 32.32 | 61.79 |
| BGE-large-en-v1.5 | 54.29 | 60.03 | 46.08 | 87.12 | 75.97 | 83.11 | 31.61 | 64.23 |
| GritLM-7B | 53.10 | 61.30 | 48.90 | 86.90 | 77.00 | 82.80 | 29.40 | 64.70 |
| E5-Mistral-7b-v1 | 52.78 | 60.38 | 47.78 | 88.47 | 76.80 | 83.77 | 31.90 | 64.56 |
| Echo-Mistral-7b-v1 | 55.52 | 58.14 | 46.32 | 87.34 | 77.43 | 82.56 | 30.73 | 64.68 |
| LLM2Vec-Mistral-7B | 55.99 | 58.42 | 45.54 | 87.99 | 76.63 | 84.09 | 29.96 | 64.80 |
| LLM2Vec-Meta-LLaMA-3-8B | 56.63 | 59.68 | 46.45 | 87.80 | 75.92 | 83.58 | 30.94 | 65.01 |
| **E2Rank-0.6B** | 51.74 | 55.97 | 40.85 | 83.93 | 73.66 | 81.41 | 30.90 | 61.25 |
| **E2Rank-4B** | 55.33 | 59.10 | 44.27 | 87.14 | 77.08 | 84.03 | 30.06 | 64.47 |
| **E2Rank-8B** | 56.89 | 59.58 | 44.75 | 86.96 | 76.81 | 84.52 | 30.23 | **65.03** |

> Note: For baselines, we only compare with models trained on public datasets.

# 🚩 Citation

If you find this work helpful, please kindly cite it as:

```bibtex
@misc{liu2025e2rank,
      title={E2Rank: Your Text Embedding can Also be an Effective and Efficient Listwise Reranker},
      author={Qi Liu and Yanzhao Zhang and Mingxin Li and Dingkun Long and Pengjun Xie and Jiaxin Mao},
      year={2025},
      eprint={2510.22733},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2510.22733},
}
```

If you have any questions, feel free to contact us via qiliu6777[AT]gmail.com or create an issue.