HJUNN committed (verified)
Commit 399d5e9 · Parent(s): 8a79ec1

Add new CrossEncoder model
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,529 @@
---
license: mit
language:
- en
- zh
tags:
- mteb
- text-embeddings-inference
model-index:
- name: bge-reranker-base
  results:
  - task:
      type: Reranking
    dataset:
      type: C-MTEB/CMedQAv1-reranking
      name: MTEB CMedQAv1
      config: default
      split: test
      revision: None
    metrics:
    - type: map
      value: 81.27206722525007
    - type: mrr
      value: 84.14238095238095
  - task:
      type: Reranking
    dataset:
      type: C-MTEB/CMedQAv2-reranking
      name: MTEB CMedQAv2
      config: default
      split: test
      revision: None
    metrics:
    - type: map
      value: 84.10369934291236
    - type: mrr
      value: 86.79376984126984
  - task:
      type: Reranking
    dataset:
      type: C-MTEB/Mmarco-reranking
      name: MTEB MMarcoReranking
      config: default
      split: dev
      revision: None
    metrics:
    - type: map
      value: 35.4600511272538
    - type: mrr
      value: 34.60238095238095
  - task:
      type: Reranking
    dataset:
      type: C-MTEB/T2Reranking
      name: MTEB T2Reranking
      config: default
      split: dev
      revision: None
    metrics:
    - type: map
      value: 67.27728847727172
    - type: mrr
      value: 77.1315192743764
pipeline_tag: text-classification
library_name: sentence-transformers
---

**We have released a [new reranker](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_reranker), supporting longer inputs and more languages, and achieving better performance.**

<h1 align="center">FlagEmbedding</h1>


<h4 align="center">
    <p>
        <a href=#model-list>Model List</a> |
        <a href=#frequently-asked-questions>FAQ</a> |
        <a href=#usage>Usage</a> |
        <a href="#evaluation">Evaluation</a> |
        <a href="#train">Train</a> |
        <a href="#citation">Citation</a> |
        <a href="#license">License</a>
    <p>
</h4>

**For more details, please refer to our GitHub repo: [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding).**


[English](README.md) | [中文](https://github.com/FlagOpen/FlagEmbedding/blob/master/README_zh.md)


FlagEmbedding focuses on retrieval-augmented LLMs and currently consists of the following projects:

- **Long-Context LLM**: [Activation Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon)
- **Fine-tuning of LM**: [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail)
- **Embedding Model**: [Visualized-BGE](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/visual), [BGE-M3](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3), [LLM Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), [BGE Embedding](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/baai_general_embedding)
- **Reranker Model**: [LLM rerankers](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_reranker), [BGE Reranker](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker)
- **Benchmark**: [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB)

## News
- 3/18/2024: Release of new [rerankers](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_reranker), built upon powerful M3 and LLM (GEMMA and MiniCPM, not so large actually) backbones, supporting multilingual processing and larger inputs, with massive improvements in ranking performance on BEIR, C-MTEB/Retrieval, MIRACL, and LlamaIndex Evaluation.
- 3/18/2024: Release of [Visualized-BGE](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/visual), equipping BGE with visual capabilities. Visualized-BGE can be used to generate embeddings for hybrid image-text data.
- 1/30/2024: Release of **BGE-M3**, a new member of the BGE model series! M3 stands for **M**ulti-linguality (100+ languages), **M**ulti-granularity (input length up to 8192), and **M**ulti-functionality (unification of dense, lexical, and multi-vector/ColBERT retrieval).
It is the first embedding model that supports all three retrieval methods, achieving new SOTA on multilingual (MIRACL) and cross-lingual (MKQA) benchmarks.
[Technical Report](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/BGE_M3/BGE_M3.pdf) and [Code](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3). :fire:
- 1/9/2024: Release of [Activation-Beacon](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/activation_beacon), an effective, efficient, compatible, and low-cost (training) method to extend the context length of LLMs. [Technical Report](https://arxiv.org/abs/2401.03462) :fire:
- 12/24/2023: Release of **LLaRA**, a LLaMA-7B-based dense retriever, achieving state-of-the-art performance on MS MARCO and BEIR. Model and code will be open-sourced. Please stay tuned. [Technical Report](https://arxiv.org/abs/2312.15503)
- 11/23/2023: Release of [LM-Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail), a method to maintain general capabilities during fine-tuning by merging multiple language models. [Technical Report](https://arxiv.org/abs/2311.13534) :fire:
- 10/12/2023: Release of [LLM-Embedder](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/llm_embedder), a unified embedding model to support diverse retrieval-augmentation needs for LLMs. [Technical Report](https://arxiv.org/pdf/2310.07554.pdf)
- 09/15/2023: The [technical report](https://arxiv.org/pdf/2309.07597.pdf) of BGE has been released.
- 09/15/2023: The [massive training data](https://data.baai.ac.cn/details/BAAI-MTP) of BGE has been released.
- 09/12/2023: New models:
    - **New reranker model**: release of the cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than the embedding models. We recommend using/fine-tuning them to re-rank the top-k documents returned by embedding models.
    - **Updated embedding model**: release of the `bge-*-v1.5` embedding models to alleviate the issue of the similarity distribution and enhance retrieval ability without instruction.


<details>
<summary>More</summary>
<!-- ### More -->

- 09/07/2023: Update of the [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): add a script to mine hard negatives and support adding an instruction during fine-tuning.
- 08/09/2023: BGE models are integrated into **Langchain**; you can use them like [this](#using-langchain). The C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard).
- 08/05/2023: Release of base-scale and small-scale models, **best performance among models of the same size 🤗**
- 08/02/2023: Release of the `bge-large-*` (short for BAAI General Embedding) models, **ranked 1st on the MTEB and C-MTEB benchmarks!** :tada: :tada:
- 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test datasets.

</details>


## Model List

`bge` is short for `BAAI general embedding`.

| Model | Language | | Description | query instruction for retrieval [1] |
|:-------------------------------|:--------:| :--------:| :--------:|:--------:|
| [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) | Multilingual | [Inference](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3#usage) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/BGE_M3) | Multi-Functionality(dense retrieval, sparse retrieval, multi-vector(colbert)), Multi-Linguality, and Multi-Granularity(8192 tokens) | |
| [BAAI/llm-embedder](https://huggingface.co/BAAI/llm-embedder) | English | [Inference](./FlagEmbedding/llm_embedder/README.md) [Fine-tune](./FlagEmbedding/llm_embedder/README.md) | a unified embedding model to support diverse retrieval augmentation needs for LLMs | See [README](./FlagEmbedding/llm_embedder/README.md) |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with more reasonable similarity distribution | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-base-en](https://huggingface.co/BAAI/bge-base-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-en` | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-small-en](https://huggingface.co/BAAI/bge-small-en) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) |a small-scale model but with competitive performance | `Represent this sentence for searching relevant passages: ` |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | :trophy: rank **1st** in [C-MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB) benchmark | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a base-scale model but with similar ability to `bge-large-zh` | `为这个句子生成表示以用于检索相关文章:` |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model but with competitive performance | `为这个句子生成表示以用于检索相关文章:` |


[1\]: If you need to search for passages relevant to a query, we suggest adding the instruction to the query; in other cases, no instruction is needed, just use the original query directly. In all cases, **no instruction** needs to be added to passages.

[2\]: Different from the embedding model, the reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding. To balance accuracy and time cost, cross-encoders are widely used to re-rank the top-k documents retrieved by other, simpler models.
For example, use the bge embedding model to retrieve the top 100 relevant documents, and then use the bge reranker to re-rank those 100 documents to get the final top-3 results.

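A minimal sketch of this retrieve-then-rerank pipeline, using the `FlagModel` and `FlagReranker` classes described in the [Usage](#usage) section (the model ids, toy corpus, and top-k values below are placeholders):

```python
from FlagEmbedding import FlagModel, FlagReranker

corpus = ["The giant panda is a bear species endemic to China.",
          "Paris is the capital of France.",
          "Pandas mainly eat bamboo."]
query = "what is a panda?"

# Stage 1: embedding-based retrieval keeps the top-k candidates (top-2 here for brevity).
embedder = FlagModel('BAAI/bge-large-en-v1.5',
                     query_instruction_for_retrieval="Represent this sentence for searching relevant passages: ")
q_emb = embedder.encode_queries([query])
p_emb = embedder.encode(corpus)
candidate_ids = (q_emb @ p_emb.T)[0].argsort()[::-1][:2]

# Stage 2: the cross-encoder re-scores only the candidates and produces the final order.
reranker = FlagReranker('BAAI/bge-reranker-base')
scores = reranker.compute_score([[query, corpus[i]] for i in candidate_ids])
for i, s in sorted(zip(candidate_ids, scores), key=lambda x: x[1], reverse=True):
    print(s, corpus[i])
```
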
All models have been uploaded to the Huggingface Hub, and you can see them at https://huggingface.co/BAAI.
If you cannot access the Huggingface Hub, you can also download the models at https://model.baai.ac.cn/models .


## Frequently asked questions

<details>
<summary>1. How to fine-tune bge embedding model?</summary>

<!-- ### How to fine-tune bge embedding model? -->
Follow this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) to prepare data and fine-tune your model.
Some suggestions:
- Mine hard negatives following this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune#hard-negatives), which can improve the retrieval performance.
- If you pre-train bge on your data, the pre-trained model cannot be used directly to calculate similarity; it must be fine-tuned with contrastive learning before computing similarity.
- If the accuracy of the fine-tuned model is still not high, we recommend using/fine-tuning the cross-encoder model (bge-reranker) to re-rank the top-k results.
Hard negatives are also needed to fine-tune the reranker; refer to this [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) for reranker fine-tuning.


</details>

<details>
<summary>2. The similarity score between two dissimilar sentences is higher than 0.5</summary>

<!-- ### The similarity score between two dissimilar sentences is higher than 0.5 -->
**We suggest using bge v1.5, which alleviates the issue of the similarity distribution.**

Since we fine-tune the models by contrastive learning with a temperature of 0.01,
the similarity distribution of the current BGE model lies roughly in the interval \[0.6, 1\].
So a similarity score greater than 0.5 does not indicate that the two sentences are similar.

For downstream tasks, such as passage retrieval or semantic similarity,
**what matters is the relative order of the scores, not the absolute value.**
If you need to filter similar sentences based on a similarity threshold,
please select an appropriate threshold based on the similarity distribution on your data (such as 0.8, 0.85, or even 0.9); a small sketch of such threshold-based filtering follows.

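A minimal sketch, assuming normalized bge-v1.5 embeddings and an illustrative 0.85 cut-off (pick yours from the score distribution on your own data):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
threshold = 0.85  # illustrative value; tune it on your own data
pairs = [("今天天气很好", "今天天气不错"), ("今天天气很好", "我喜欢读书")]
for a, b in pairs:
    # normalized embeddings, so the dot product equals the cosine similarity
    emb = model.encode([a, b], normalize_embeddings=True)
    score = float(emb[0] @ emb[1])
    print(f"{score:.3f}", "similar" if score >= threshold else "not similar")
```
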
</details>

<details>
<summary>3. When does the query instruction need to be used</summary>

<!-- ### When does the query instruction need to be used -->

For `bge-*-v1.5`, we improved its retrieval ability when no instruction is used.
Using no instruction causes only a slight degradation in retrieval performance compared with using the instruction,
so for convenience you can generate embeddings without an instruction in all cases.

For a retrieval task that uses short queries to find long related documents,
it is recommended to add instructions to these short queries.
**The best way to decide whether to add instructions to queries is to choose the setting that achieves better performance on your task.**
In all cases, no instruction needs to be added to the documents/passages.

</details>


## Usage

### Usage for Embedding Model

Here are some examples of using the `bge` models with
[FlagEmbedding](#using-flagembedding), [Sentence-Transformers](#using-sentence-transformers), [Langchain](#using-langchain), or [Huggingface Transformers](#using-huggingface-transformers).

#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```
If this doesn't work for you, see [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md) for alternative ways to install FlagEmbedding.

```python
from FlagEmbedding import FlagModel
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = FlagModel('BAAI/bge-large-zh-v1.5',
                  query_instruction_for_retrieval="为这个句子生成表示以用于检索相关文章:",
                  use_fp16=True)  # Setting use_fp16 to True speeds up computation with a slight performance degradation
embeddings_1 = model.encode(sentences_1)
embeddings_2 = model.encode(sentences_2)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)

# For an s2p (short query to long passage) retrieval task, we suggest using encode_queries(), which automatically adds the instruction to each query.
# The corpus in a retrieval task can still use encode() or encode_corpus(), since passages don't need the instruction.
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
q_embeddings = model.encode_queries(queries)
p_embeddings = model.encode(passages)
scores = q_embeddings @ p_embeddings.T
```
For the value of the argument `query_instruction_for_retrieval`, see [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list).

By default, FlagModel will use all available GPUs when encoding. Please set `os.environ["CUDA_VISIBLE_DEVICES"]` to select specific GPUs.
You can also set `os.environ["CUDA_VISIBLE_DEVICES"]=""` to make all GPUs unavailable.

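For example, a minimal sketch of pinning encoding to a single GPU (the environment variable must be set before the model is constructed):

```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"   # use GPU 0 only; set to "" to force CPU

from FlagEmbedding import FlagModel
model = FlagModel('BAAI/bge-large-zh-v1.5', use_fp16=True)
embeddings = model.encode(["样例数据-1"])
```
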

#### Using Sentence-Transformers

You can also use the `bge` models with [sentence-transformers](https://www.SBERT.net):

```
pip install -U sentence-transformers
```
```python
from sentence_transformers import SentenceTransformer
sentences_1 = ["样例数据-1", "样例数据-2"]
sentences_2 = ["样例数据-3", "样例数据-4"]
model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
embeddings_1 = model.encode(sentences_1, normalize_embeddings=True)
embeddings_2 = model.encode(sentences_2, normalize_embeddings=True)
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
```
For an s2p (short query to long passage) retrieval task,
each short query should start with an instruction (see the [Model List](https://github.com/FlagOpen/FlagEmbedding/tree/master#model-list) for the instructions),
but the instruction is not needed for passages.
```python
from sentence_transformers import SentenceTransformer
queries = ['query_1', 'query_2']
passages = ["样例文档-1", "样例文档-2"]
instruction = "为这个句子生成表示以用于检索相关文章:"

model = SentenceTransformer('BAAI/bge-large-zh-v1.5')
q_embeddings = model.encode([instruction + q for q in queries], normalize_embeddings=True)
p_embeddings = model.encode(passages, normalize_embeddings=True)
scores = q_embeddings @ p_embeddings.T
```

#### Using Langchain

You can use `bge` in langchain like this:
```python
from langchain.embeddings import HuggingFaceBgeEmbeddings
model_name = "BAAI/bge-large-en-v1.5"
model_kwargs = {'device': 'cuda'}
encode_kwargs = {'normalize_embeddings': True}  # set True to compute cosine similarity
model = HuggingFaceBgeEmbeddings(
    model_name=model_name,
    model_kwargs=model_kwargs,
    encode_kwargs=encode_kwargs,
    query_instruction="为这个句子生成表示以用于检索相关文章:"
)
model.query_instruction = "为这个句子生成表示以用于检索相关文章:"
```
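The wrapper is then used through LangChain's standard `Embeddings` interface; a short usage sketch:

```python
# Query embeddings get the query instruction prepended; document embeddings do not.
query_vec = model.embed_query("what is a panda?")
doc_vecs = model.embed_documents([
    "The giant panda is a bear species endemic to China.",
    "Paris is the capital of France.",
])
print(len(query_vec), len(doc_vecs))
```
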


#### Using HuggingFace Transformers

With the transformers package, you can use the model like this: first, pass your input through the transformer model, then select the last hidden state of the first token (i.e., [CLS]) as the sentence embedding.

```python
from transformers import AutoTokenizer, AutoModel
import torch
# Sentences we want sentence embeddings for
sentences = ["样例数据-1", "样例数据-2"]

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-large-zh-v1.5')
model = AutoModel.from_pretrained('BAAI/bge-large-zh-v1.5')
model.eval()

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# For an s2p (short query to long passage) retrieval task, add an instruction to each query (but not to passages):
# encoded_input = tokenizer([instruction + q for q in queries], padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
    # Perform pooling. In this case, cls pooling.
    sentence_embeddings = model_output[0][:, 0]
# normalize embeddings
sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
print("Sentence embeddings:", sentence_embeddings)
```

### Usage for Reranker

Different from the embedding model, the reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding.
You can get a relevance score by feeding a query and a passage to the reranker.
The reranker is optimized with cross-entropy loss, so the relevance score is not bounded to a specific range.


#### Using FlagEmbedding
```
pip install -U FlagEmbedding
```

Get relevance scores (higher scores indicate more relevance):
```python
from FlagEmbedding import FlagReranker
reranker = FlagReranker('BAAI/bge-reranker-large', use_fp16=True)  # Setting use_fp16 to True speeds up computation with a slight performance degradation

score = reranker.compute_score(['query', 'passage'])
print(score)

scores = reranker.compute_score([['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']])
print(scores)
```
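Because the raw scores are unbounded logits, one option for getting values in [0, 1] is to squash them with a sigmoid yourself; a minimal sketch (post-processing on top of the output above, not a FlagEmbedding option):

```python
import torch

raw = reranker.compute_score([['what is panda?', 'hi'],
                              ['what is panda?', 'The giant panda is a bear species endemic to China.']])
normalized = torch.sigmoid(torch.tensor(raw))  # maps each logit into (0, 1)
print(normalized)
```
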


#### Using Huggingface transformers

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-large')
model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-large')
model.eval()

pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]
with torch.no_grad():
    inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt', max_length=512)
    scores = model(**inputs, return_dict=True).logits.view(-1, ).float()
    print(scores)
```
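This repository is tagged with `library_name: sentence-transformers` and ships as a CrossEncoder, so the same pairs can also be scored through sentence-transformers. A minimal sketch (the model id below is the upstream BAAI checkpoint; substitute this repository's id if you load it instead):

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder('BAAI/bge-reranker-base', max_length=512)
scores = model.predict([
    ['what is panda?', 'hi'],
    ['what is panda?', 'The giant panda (Ailuropoda melanoleuca) is a bear species endemic to China.'],
])
print(scores)
```
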

#### Using the reranker with ONNX files

```python
from optimum.onnxruntime import ORTModelForSequenceClassification  # type: ignore

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-reranker-base')
model = AutoModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-base')
model_ort = ORTModelForSequenceClassification.from_pretrained('BAAI/bge-reranker-base', file_name="onnx/model.onnx")

# Query-passage pairs we want relevance scores for
pairs = [['what is panda?', 'hi'], ['what is panda?', 'The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear or simply panda, is a bear species endemic to China.']]

# Tokenize the pairs
encoded_input = tokenizer(pairs, padding=True, truncation=True, return_tensors='pt')

scores_ort = model_ort(**encoded_input, return_dict=True).logits.view(-1, ).float()
# Compute the scores with the PyTorch model for comparison
with torch.inference_mode():
    scores = model(**encoded_input, return_dict=True).logits.view(-1, ).float()

# scores and scores_ort are identical
```
#### Using the reranker with infinity

It is also possible to deploy the ONNX/torch files with the [infinity_emb](https://github.com/michaelfeil/infinity) pip package.
```python
import asyncio
from infinity_emb import AsyncEmbeddingEngine, EngineArgs

query = 'what is a panda?'
docs = ['The giant panda (Ailuropoda melanoleuca), sometimes called a panda bear', "Paris is in France."]

engine = AsyncEmbeddingEngine.from_args(
    EngineArgs(model_name_or_path="BAAI/bge-reranker-base", device="cpu", engine="torch")  # or engine="optimum" for ONNX
)

async def main():
    async with engine:
        ranking, usage = await engine.rerank(query=query, docs=docs)
        print(list(zip(ranking, docs)))
asyncio.run(main())
```

## Evaluation

`baai-general-embedding` models achieve **state-of-the-art performance on both the MTEB and C-MTEB leaderboards!**
For more details and evaluation tools, see our [scripts](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md).

- **MTEB**:

| Model Name | Dimension | Sequence Length | Average (56) | Retrieval (15) |Clustering (11) | Pair Classification (3) | Reranking (4) | STS (10) | Summarization (1) | Classification (12) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 1024 | 512 | **64.23** | **54.29** | 46.08 | 87.12 | 60.03 | 83.11 | 31.61 | 75.97 |
| [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 768 | 512 | 63.55 | 53.25 | 45.77 | 86.55 | 58.86 | 82.4 | 31.07 | 75.53 |
| [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | 384 | 512 | 62.17 |51.68 | 43.82 | 84.92 | 58.36 | 81.59 | 30.12 | 74.14 |
| [bge-large-en](https://huggingface.co/BAAI/bge-large-en) | 1024 | 512 | 63.98 | 53.9 | 46.98 | 85.8 | 59.48 | 81.56 | 32.06 | 76.21 |
| [bge-base-en](https://huggingface.co/BAAI/bge-base-en) | 768 | 512 | 63.36 | 53.0 | 46.32 | 85.86 | 58.7 | 81.84 | 29.27 | 75.27 |
| [gte-large](https://huggingface.co/thenlper/gte-large) | 1024 | 512 | 63.13 | 52.22 | 46.84 | 85.00 | 59.13 | 83.35 | 31.66 | 73.33 |
| [gte-base](https://huggingface.co/thenlper/gte-base) | 768 | 512 | 62.39 | 51.14 | 46.2 | 84.57 | 58.61 | 82.3 | 31.17 | 73.01 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1024| 512 | 62.25 | 50.56 | 44.49 | 86.03 | 56.61 | 82.05 | 30.19 | 75.24 |
| [bge-small-en](https://huggingface.co/BAAI/bge-small-en) | 384 | 512 | 62.11 | 51.82 | 44.31 | 83.78 | 57.97 | 80.72 | 30.53 | 74.37 |
| [instructor-xl](https://huggingface.co/hkunlp/instructor-xl) | 768 | 512 | 61.79 | 49.26 | 44.74 | 86.62 | 57.29 | 83.06 | 32.32 | 61.79 |
| [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 768 | 512 | 61.5 | 50.29 | 43.80 | 85.73 | 55.91 | 81.05 | 30.28 | 73.84 |
| [gte-small](https://huggingface.co/thenlper/gte-small) | 384 | 512 | 61.36 | 49.46 | 44.89 | 83.54 | 57.7 | 82.07 | 30.42 | 72.31 |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | 1536 | 8192 | 60.99 | 49.25 | 45.9 | 84.89 | 56.32 | 80.97 | 30.8 | 70.93 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-base-v2) | 384 | 512 | 59.93 | 49.04 | 39.92 | 84.67 | 54.32 | 80.39 | 31.16 | 72.94 |
| [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 768 | 512 | 59.51 | 42.24 | 43.72 | 85.06 | 56.42 | 82.63 | 30.08 | 73.42 |
| [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 768 | 514 | 57.78 | 43.81 | 43.69 | 83.04 | 59.36 | 80.28 | 27.49 | 65.07 |
| [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 4096 | 2048 | 57.59 | 48.22 | 38.93 | 81.9 | 55.65 | 77.74 | 33.6 | 66.19 |



- **C-MTEB**:
We created the benchmark C-MTEB for Chinese text embedding, which consists of 31 datasets from 6 tasks.
Please refer to [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/README.md) for a detailed introduction.

| Model | Embedding dimension | Avg | Retrieval | STS | PairClassification | Classification | Reranking | Clustering |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| [**BAAI/bge-large-zh-v1.5**](https://huggingface.co/BAAI/bge-large-zh-v1.5) | 1024 | **64.53** | 70.46 | 56.25 | 81.6 | 69.13 | 65.84 | 48.99 |
| [BAAI/bge-base-zh-v1.5](https://huggingface.co/BAAI/bge-base-zh-v1.5) | 768 | 63.13 | 69.49 | 53.72 | 79.75 | 68.07 | 65.39 | 47.53 |
| [BAAI/bge-small-zh-v1.5](https://huggingface.co/BAAI/bge-small-zh-v1.5) | 512 | 57.82 | 61.77 | 49.11 | 70.41 | 63.96 | 60.92 | 44.18 |
| [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | 1024 | 64.20 | 71.53 | 54.98 | 78.94 | 68.32 | 65.11 | 48.39 |
| [bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct) | 1024 | 63.53 | 70.55 | 53 | 76.77 | 68.58 | 64.91 | 50.01 |
| [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh) | 768 | 62.96 | 69.53 | 54.12 | 77.5 | 67.07 | 64.91 | 47.63 |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 1024 | 58.79 | 63.66 | 48.44 | 69.89 | 67.34 | 56.00 | 48.23 |
| [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | 512 | 58.27 | 63.07 | 49.45 | 70.35 | 63.64 | 61.48 | 45.09 |
| [m3e-base](https://huggingface.co/moka-ai/m3e-base) | 768 | 57.10 | 56.91 | 50.47 | 63.99 | 67.52 | 59.34 | 47.68 |
| [m3e-large](https://huggingface.co/moka-ai/m3e-large) | 1024 | 57.05 | 54.75 | 50.42 | 64.3 | 68.2 | 59.66 | 48.88 |
| [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 768 | 55.48 | 61.63 | 46.49 | 67.07 | 65.35 | 54.35 | 40.68 |
| [multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) | 384 | 55.38 | 59.95 | 45.27 | 66.45 | 65.85 | 53.86 | 45.26 |
| [text-embedding-ada-002(OpenAI)](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings) | 1536 | 53.02 | 52.0 | 43.35 | 69.56 | 64.31 | 54.28 | 45.68 |
| [luotuo](https://huggingface.co/silk-road/luotuo-bert-medium) | 1024 | 49.37 | 44.4 | 42.78 | 66.62 | 61 | 49.25 | 44.39 |
| [text2vec-base](https://huggingface.co/shibing624/text2vec-base-chinese) | 768 | 47.63 | 38.79 | 43.41 | 67.41 | 62.19 | 49.45 | 37.66 |
| [text2vec-large](https://huggingface.co/GanymedeNil/text2vec-large-chinese) | 1024 | 47.36 | 41.94 | 44.97 | 70.86 | 60.66 | 49.16 | 30.02 |


- **Reranking**:
See [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB/) for the evaluation script.

| Model | T2Reranking | T2RerankingZh2En\* | T2RerankingEn2Zh\* | MMarcoReranking | CMedQAv1 | CMedQAv2 | Avg |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| text2vec-base-multilingual | 64.66 | 62.94 | 62.51 | 14.37 | 48.46 | 48.6 | 50.26 |
| multilingual-e5-small | 65.62 | 60.94 | 56.41 | 29.91 | 67.26 | 66.54 | 57.78 |
| multilingual-e5-large | 64.55 | 61.61 | 54.28 | 28.6 | 67.42 | 67.92 | 57.4 |
| multilingual-e5-base | 64.21 | 62.13 | 54.68 | 29.5 | 66.23 | 66.98 | 57.29 |
| m3e-base | 66.03 | 62.74 | 56.07 | 17.51 | 77.05 | 76.76 | 59.36 |
| m3e-large | 66.13 | 62.72 | 56.1 | 16.46 | 77.76 | 78.27 | 59.57 |
| bge-base-zh-v1.5 | 66.49 | 63.25 | 57.02 | 29.74 | 80.47 | 84.88 | 63.64 |
| bge-large-zh-v1.5 | 65.74 | 63.39 | 57.03 | 28.74 | 83.45 | 85.44 | 63.97 |
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | 67.28 | 63.95 | 60.45 | 35.46 | 81.26 | 84.1 | 65.42 |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | 67.6 | 64.03 | 61.44 | 37.16 | 82.15 | 84.18 | 66.09 |

\* : T2RerankingZh2En and T2RerankingEn2Zh are cross-language retrieval tasks

## Train

### BAAI Embedding

We pre-train the models using [RetroMAE](https://github.com/staoxiao/RetroMAE) and train them on large-scale pair data using contrastive learning.
**You can fine-tune the embedding model on your data following our [examples](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune).**
We also provide a [pre-train example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/pretrain).
Note that the goal of pre-training is to reconstruct the text; the pre-trained model cannot be used for similarity calculation directly and needs to be fine-tuned.
For more training details for bge, see [baai_general_embedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md).



### BGE Reranker

The cross-encoder performs full attention over the input pair,
which is more accurate than the embedding model (i.e., bi-encoder) but more time-consuming.
Therefore, it can be used to re-rank the top-k documents returned by the embedding model.
We train the cross-encoder on multilingual pair data;
the data format is the same as for the embedding model, so you can fine-tune it easily following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker).
For more details, please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker)

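The linked example is the authoritative reference for the data schema. As a rough sketch under that assumption, the fine-tuning examples in the FlagEmbedding repo consume JSON-lines files in which each record holds a query, positive passages, and (mined) negative passages:

```python
import json

# Rough sketch only: check the fine-tuning example linked above for the exact schema.
examples = [
    {"query": "what is a panda?",
     "pos": ["The giant panda is a bear species endemic to China."],
     "neg": ["Paris is the capital of France.", "I like reading books."]},
]
with open("toy_finetune_data.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")
```
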


## Citation

If you find this repository useful, please consider giving it a star :star: and a citation:

```
@misc{bge_embedding,
      title={C-Pack: Packaged Resources To Advance General Chinese Embedding},
      author={Shitao Xiao and Zheng Liu and Peitian Zhang and Niklas Muennighoff},
      year={2023},
      eprint={2309.07597},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## License
FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.
config.json ADDED
@@ -0,0 +1,37 @@
{
  "architectures": [
    "XLMRobertaForSequenceClassification"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "classifier_dropout": null,
  "dtype": "float32",
  "eos_token_id": 2,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "id2label": {
    "0": "LABEL_0"
  },
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "label2id": {
    "LABEL_0": 0
  },
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
  "model_type": "xlm-roberta",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "output_past": true,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "sentence_transformers": {
    "activation_fn": "torch.nn.modules.activation.Sigmoid",
    "version": "5.1.2"
  },
  "transformers_version": "4.57.3",
  "type_vocab_size": 1,
  "use_cache": true,
  "vocab_size": 250002
}
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2bcaef174f9139b7f500b0543eba4cf8a600785dffd6b6dff10668a1db20b945
size 1112201932
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
{
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "cls_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "<mask>",
    "lstrip": true,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<pad>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4a8d0b7573869188be52cca17a27a84f3cfbc0a5536c28ee1eca82903e8c68c6
size 17083051
tokenizer_config.json ADDED
@@ -0,0 +1,56 @@
{
  "added_tokens_decoder": {
    "0": {
      "content": "<s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<pad>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "3": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "250001": {
      "content": "<mask>",
      "lstrip": true,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": true,
  "cls_token": "<s>",
  "eos_token": "</s>",
  "extra_special_tokens": {},
  "mask_token": "<mask>",
  "model_max_length": 512,
  "pad_token": "<pad>",
  "sep_token": "</s>",
  "sp_model_kwargs": {},
  "tokenizer_class": "XLMRobertaTokenizer",
  "unk_token": "<unk>"
}