Upload README.md with huggingface_hub
README.md CHANGED
@@ -30,8 +30,8 @@ And the score can be mapped to a float value in [0,1] by the sigmoid function.

| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) | Chinese and English | - | Lightweight reranker model, easy to deploy, with fast inference. |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | [xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) | Chinese and English | - | Lightweight reranker model, easy to deploy, with fast inference. |
| [BAAI/bge-reranker-v2-m3](https://huggingface.co/BAAI/bge-reranker-v2-m3) | [bge-m3](https://huggingface.co/BAAI/bge-m3) | Multilingual | - | Lightweight reranker model, possesses strong multilingual capabilities, easy to deploy, with fast inference. |
- | [BAAI/bge-reranker-v2-gemma](https://huggingface.co/BAAI/bge-reranker-v2-gemma) |
- | [BAAI/bge-reranker-v2-minicpm-layerwise](https://huggingface.co/BAAI/bge-reranker-v2-minicpm-layerwise) | [

You can select a model according to your scenario and resources.
@@ -267,7 +267,7 @@ You can fine-tune the reranker with the following code:

torchrun --nproc_per_node {number of gpus} \
-m FlagEmbedding.llm_reranker.finetune_for_instruction.run \
--output_dir {path to save model} \
- --model_name_or_path
--train_data ./toy_finetune_data.jsonl \
--learning_rate 2e-4 \
--num_train_epochs 1 \
@@ -298,7 +298,7 @@ torchrun --nproc_per_node {number of gpus} \

torchrun --nproc_per_node {number of gpus} \
-m FlagEmbedding.llm_reranker.finetune_for_layerwise.run \
--output_dir {path to save model} \
- --model_name_or_path
--train_data ./toy_finetune_data.jsonl \
--learning_rate 2e-4 \
--num_train_epochs 1 \
@@ -326,7 +326,7 @@ torchrun --nproc_per_node {number of gpus} \

--head_type simple
```

- Our rerankers are initialized from [google/gemma-2b](https://huggingface.co/google/gemma-2b) (for llm-based reranker) and [openbmb/MiniCPM-2B-dpo-

- [bge-m3-data](https://huggingface.co/datasets/Shitao/bge-m3-data)
- [quora train data](https://huggingface.co/datasets/quora)
| Model | Base model | Language | layerwise | feature |
|:------|:-----------|:---------|:----------|:--------|
| [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) | Chinese and English | - | Lightweight reranker model, easy to deploy, with fast inference. |
| [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | [xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) | Chinese and English | - | Lightweight reranker model, easy to deploy, with fast inference. |
| [BAAI/bge-reranker-v2-m3](https://huggingface.co/BAAI/bge-reranker-v2-m3) | [bge-m3](https://huggingface.co/BAAI/bge-m3) | Multilingual | - | Lightweight reranker model, possesses strong multilingual capabilities, easy to deploy, with fast inference. |
| [BAAI/bge-reranker-v2-gemma](https://huggingface.co/BAAI/bge-reranker-v2-gemma) | [gemma-2b](https://huggingface.co/google/gemma-2b) | Multilingual | - | Suitable for multilingual contexts; performs well in both English and multilingual settings. |
| [BAAI/bge-reranker-v2-minicpm-layerwise](https://huggingface.co/BAAI/bge-reranker-v2-minicpm-layerwise) | [MiniCPM-2B-dpo-bf16](https://huggingface.co/openbmb/MiniCPM-2B-dpo-bf16) | Multilingual | 8-40 | Suitable for multilingual contexts; performs well in both English and Chinese; allows selecting which layers to use for output, facilitating accelerated inference. |

You can select a model according to your scenario and resources.
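As the hunk header above notes, a reranker's raw relevance score can be mapped to a float in [0,1] with the sigmoid function. A minimal sketch of that mapping (the raw scores below are made-up illustrations, not real model outputs):

```python
import math

def sigmoid(x: float) -> float:
    """Map an unbounded relevance score to the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative raw reranker scores (logits), not real model outputs.
raw_scores = [-5.6, 0.0, 4.2]
normalized = [sigmoid(s) for s in raw_scores]
# Sigmoid is monotonic, so the ranking induced by the raw scores is preserved.
```

Because the mapping is monotonic, it only changes the scale of the scores, never the ranking.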
torchrun --nproc_per_node {number of gpus} \
-m FlagEmbedding.llm_reranker.finetune_for_instruction.run \
--output_dir {path to save model} \
--model_name_or_path google/gemma-2b \
--train_data ./toy_finetune_data.jsonl \
--learning_rate 2e-4 \
--num_train_epochs 1 \
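The `--train_data` argument points at a JSON Lines file. A minimal sketch of producing such a file, assuming the common FlagEmbedding record shape of a query with positive and negative passages (the `query`/`pos`/`neg` field names are our assumption here; check the repository's toy file for the authoritative format):

```python
import json
import os
import tempfile

# Assumed record shape (hypothetical field names): one JSON object per line
# with a query, a list of positive passages, and a list of negative passages.
records = [
    {
        "query": "what is a reranker?",
        "pos": ["A reranker scores query-passage pairs directly."],
        "neg": ["Paris is the capital of France."],
    }
]

path = os.path.join(tempfile.gettempdir(), "toy_finetune_data.jsonl")
with open(path, "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")
```

Each line is an independent JSON object, which is what lets the trainer stream the file without loading it whole.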
torchrun --nproc_per_node {number of gpus} \
-m FlagEmbedding.llm_reranker.finetune_for_layerwise.run \
--output_dir {path to save model} \
--model_name_or_path openbmb/MiniCPM-2B-dpo-bf16 \
--train_data ./toy_finetune_data.jsonl \
--learning_rate 2e-4 \
--num_train_epochs 1 \
--head_type simple
```

Our rerankers are initialized from [google/gemma-2b](https://huggingface.co/google/gemma-2b) (for the llm-based reranker) and [openbmb/MiniCPM-2B-dpo-bf16](https://huggingface.co/openbmb/MiniCPM-2B-dpo-bf16) (for the llm-based layerwise reranker), and we train them on a mixture of multilingual datasets:

- [bge-m3-data](https://huggingface.co/datasets/Shitao/bge-m3-data)
- [quora train data](https://huggingface.co/datasets/quora)
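The "8-40" in the layerwise column above means the minicpm-layerwise reranker can read out a relevance score from any of those intermediate layers rather than always running to the deepest one. A toy sketch of that idea (the per-layer scores are invented for illustration; the real model exposes layer selection through its own scoring API):

```python
# Toy model of layerwise scoring: pretend deeper layers refine the score
# toward its final value, so a shallower cutoff trades accuracy for speed.
def layerwise_scores(depths):
    """Return an invented score for each readable layer depth."""
    n = len(depths)
    return {d: 3.0 * (d - depths.start + 1) / n for d in depths}

scores = layerwise_scores(range(8, 41))  # layers 8..40, as in the table
cutoff = 28                              # hypothetical early-exit choice
approx, final = scores[cutoff], scores[40]
```

The point of the sketch: every layer in the 8-40 range yields a usable score, and stopping early skips the remaining layers' compute.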