---
language:
  - multilingual
license: apache-2.0
tags:
  - sentence-transformers
  - cross-encoder
  - reranker
  - generated_from_trainer
  - dataset_size:16862
  - loss:BinaryCrossEntropyLoss
base_model: Alibaba-NLP/gte-multilingual-reranker-base
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
  - map
  - mrr@10
  - ndcg@10
model-index:
  - name: cometadata/gte-multilingual-reranker-affiliations
    results:
      - task:
          type: cross-encoder-reranking
          name: Cross Encoder Reranking
        dataset:
          name: affiliation val
          type: affiliation-val
        metrics:
          - type: map
            value: 0.9666
            name: Map
          - type: mrr@10
            value: 0.9666
            name: Mrr@10
          - type: ndcg@10
            value: 0.9753
            name: Ndcg@10
---

cometadata/gte-multilingual-reranker-affiliations

This is a Cross Encoder model finetuned from Alibaba-NLP/gte-multilingual-reranker-base using the sentence-transformers library. It computes scores for pairs of texts, which can be used for text reranking and semantic search. In this model the pairs are institutional affiliation strings, and the score indicates whether the two strings refer to the same organization.

Model Details

Model Description

  • Model Type: Cross Encoder
  • Base model: Alibaba-NLP/gte-multilingual-reranker-base
  • Language: multilingual
  • License: apache-2.0

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("cometadata/gte-multilingual-reranker-affiliations")
# Get scores for pairs of texts
pairs = [
    ['Université Toulouse', 'a  Université de Toulouse, Mines Albi, CNRS, Centre RAPSODEE ,  Albi ,  France'],
    ['Université Toulouse', 'National Polytechnic Institute of Toulouse'],
    ['School of Fundamental Science and Technology, Keio University 1 , 3-14-1 Hiyoshi, Kohoku-ku, Yokohama 223-8522, Japan', 'Center for Supercentenarian Research, Keio University, Tokyo, Japan'],
    ['School of Fundamental Science and Technology, Keio University 1 , 3-14-1 Hiyoshi, Kohoku-ku, Yokohama 223-8522, Japan', 'g    Toin Human Science and Technology Center, Department of Materials Science and Technology, Toin University of Yokohama, 1614 Kurogane-cho, Aoba-ku, Yokohama 225, Japan'],
    ['Division of Pulmonary and Critical Care Medicine, University of North Carolina School of Medicine, Chapel Hill, North Carolina', 'Lineberger Comprehensive Cancer Center, University of North Carolina at Chapel Hill, 101 Manning Drive, CB# 7295, Chapel Hill, NC 27599, USA'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'Université Toulouse',
    [
        'a  Université de Toulouse, Mines Albi, CNRS, Centre RAPSODEE ,  Albi ,  France',
        'National Polytechnic Institute of Toulouse',
        'Center for Supercentenarian Research, Keio University, Tokyo, Japan',
        'g    Toin Human Science and Technology Center, Department of Materials Science and Technology, Toin University of Yokohama, 1614 Kurogane-cho, Aoba-ku, Yokohama 225, Japan',
        'Lineberger Comprehensive Cancer Center, University of North Carolina at Chapel Hill, 101 Manning Drive, CB# 7295, Chapel Hill, NC 27599, USA',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
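
For single-label cross encoders the library typically passes the raw logits through a sigmoid, so scores land in [0, 1] and can be thresholded into match decisions. A minimal sketch continuing the example above (the 0.5 cutoff is an assumption for illustration, not a tuned value from this card):

```python
# Hypothetical post-processing: treat a score >= 0.5 as "same affiliation".
threshold = 0.5
for (query, document), score in zip(pairs, scores):
    verdict = "match" if score >= threshold else "no match"
    print(f"{score:.3f} [{verdict}] {query!r} <-> {document!r}")
```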

Evaluation

Metrics

Cross Encoder Reranking

| Metric  | Value            |
|:--------|:-----------------|
| map     | 0.9666 (-0.0334) |
| mrr@10  | 0.9666 (-0.0334) |
| ndcg@10 | 0.9753 (-0.0247) |
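
These numbers come from reranking on the affiliation-val split. A minimal sketch of how such metrics can be computed with the library's CrossEncoderRerankingEvaluator (the single sample below is illustrative; the actual evaluation uses the 808-pair set described under Evaluation Dataset):

```python
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderRerankingEvaluator

model = CrossEncoder("cometadata/gte-multilingual-reranker-affiliations")

# One sample per query: documents that should rank high ("positive")
# and distractors ("negative"). Strings reused from the usage example.
samples = [
    {
        "query": "Université Toulouse",
        "positive": ["a  Université de Toulouse, Mines Albi, CNRS, Centre RAPSODEE ,  Albi ,  France"],
        "negative": ["National Polytechnic Institute of Toulouse"],
    },
]

evaluator = CrossEncoderRerankingEvaluator(samples, at_k=10, name="affiliation-val")
results = evaluator(model)
print(results)  # keys such as 'affiliation-val_map' and 'affiliation-val_ndcg@10'
```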

Training Details

Training Dataset

Unnamed Dataset

  • Size: 16,862 training samples
  • Columns: query, document, and label
  • Approximate statistics based on the first 1000 samples:

    |         | query | document | label |
    |:--------|:------|:---------|:------|
    | type    | string | string | int |
    | details | min: 6 characters, mean: 95.73 characters, max: 505 characters | min: 8 characters, mean: 92.11 characters, max: 393 characters | 0: ~50.00%, 1: ~50.00% |

  • Samples:

    | query | document | label |
    |:------|:---------|:------|
    | Nanjing University of Science and Technology,Computer Science and Engineering,Nanjing,China | Nanjing University of Science And Technology, China | 1 |
    | Nanjing University of Science and Technology,Computer Science and Engineering,Nanjing,China | Nanjing university of finance & economics, China. | 0 |
    | University of Bonn, Bonn, Germany | Department of Geophysics, University of Bonn, 53115 Bonn, Germany | 1 |
  • Loss: BinaryCrossEntropyLoss with these parameters:
    {
        "activation_fn": "torch.nn.modules.linear.Identity",
        "pos_weight": null
    }
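
The dataset itself is unnamed, but its column layout is fully specified. A minimal sketch of assembling data in that shape with the datasets library (rows reused from the samples above; reading label 1 as "the two strings name the same affiliation" is an assumption based on those examples):

```python
from datasets import Dataset

train_dataset = Dataset.from_dict({
    "query": [
        "Nanjing University of Science and Technology,Computer Science and Engineering,Nanjing,China",
        "Nanjing University of Science and Technology,Computer Science and Engineering,Nanjing,China",
        "University of Bonn, Bonn, Germany",
    ],
    "document": [
        "Nanjing University of Science And Technology, China",
        "Nanjing university of finance & economics, China.",
        "Department of Geophysics, University of Bonn, 53115 Bonn, Germany",
    ],
    "label": [1, 0, 1],  # 1 = same affiliation (assumed), 0 = different
})
```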
    

Evaluation Dataset

Unnamed Dataset

  • Size: 808 evaluation samples
  • Columns: query, document, and label
  • Approximate statistics based on the first 808 samples:

    |         | query | document | label |
    |:--------|:------|:---------|:------|
    | type    | string | string | int |
    | details | min: 14 characters, mean: 80.47 characters, max: 394 characters | min: 15 characters, mean: 109.87 characters, max: 500 characters | 0: ~50.00%, 1: ~50.00% |

  • Samples:

    | query | document | label |
    |:------|:---------|:------|
    | Université Toulouse | a Université de Toulouse, Mines Albi, CNRS, Centre RAPSODEE , Albi , France | 1 |
    | Université Toulouse | National Polytechnic Institute of Toulouse | 0 |
    | School of Fundamental Science and Technology, Keio University 1 , 3-14-1 Hiyoshi, Kohoku-ku, Yokohama 223-8522, Japan | Center for Supercentenarian Research, Keio University, Tokyo, Japan | 1 |
  • Loss: BinaryCrossEntropyLoss with these parameters:
    {
        "activation_fn": "torch.nn.modules.linear.Identity",
        "pos_weight": null
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • learning_rate: 2e-05
  • num_train_epochs: 2
  • warmup_ratio: 0.1
  • load_best_model_at_end: True
  • hub_model_id: cometadata/gte-multilingual-reranker-affiliations
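
With these settings, the run can be sketched using the library's CrossEncoderTrainer API (a minimal sketch: train_dataset and eval_dataset are assumed to be built with the query/document/label columns described above, and output_dir is illustrative):

```python
from sentence_transformers.cross_encoder import (
    CrossEncoder,
    CrossEncoderTrainer,
    CrossEncoderTrainingArguments,
)
from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss

# The GTE base model uses a custom architecture and may need trust_remote_code=True.
model = CrossEncoder("Alibaba-NLP/gte-multilingual-reranker-base", trust_remote_code=True)
loss = BinaryCrossEntropyLoss(model)  # matches the loss reported in this card

args = CrossEncoderTrainingArguments(
    output_dir="models/gte-multilingual-reranker-affiliations",  # illustrative
    num_train_epochs=2,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    eval_strategy="steps",
    load_best_model_at_end=True,
)

trainer = CrossEncoderTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,  # columns: query, document, label
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```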

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 2
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • parallelism_config: None
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • project: huggingface
  • trackio_space_id: trackio
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: cometadata/gte-multilingual-reranker-affiliations
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: no
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: True
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

| Epoch      | Step     | Training Loss | Validation Loss | affiliation-val_ndcg@10 |
|:----------:|:--------:|:-------------:|:---------------:|:-----------------------:|
| -1         | -1       | -             | -               | 0.8392 (-0.1608)        |
| 0.0019     | 1        | 0.6145        | -               | -                       |
| 0.1898     | 100      | 0.4534        | -               | -                       |
| 0.3795     | 200      | 0.2997        | -               | -                       |
| 0.5693     | 300      | 0.2428        | -               | -                       |
| 0.7590     | 400      | 0.2213        | -               | -                       |
| 0.9488     | 500      | 0.2311        | 0.4316          | 0.9653 (-0.0347)        |
| 1.1385     | 600      | 0.162         | -               | -                       |
| 1.3283     | 700      | 0.167         | -               | -                       |
| 1.5180     | 800      | 0.1712        | -               | -                       |
| 1.7078     | 900      | 0.1617        | -               | -                       |
| **1.8975** | **1000** | **0.1511**    | **0.4495**      | **0.9753 (-0.0247)**    |
| -1         | -1       | -             | -               | 0.9753 (-0.0247)        |

  • The bold row denotes the saved checkpoint.

Framework Versions

  • Python: 3.12.12
  • Sentence Transformers: 5.2.0
  • Transformers: 4.57.3
  • PyTorch: 2.9.1+cu128
  • Accelerate: 1.12.0
  • Datasets: 4.4.2
  • Tokenizers: 0.22.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}