JobGTE-7b-Lora: Alibaba-NLP/gte-Qwen2-7B-instruct fine-tuned for job-to-job matching

The best-performing model on TalentCLEF 2025 Task A. Use it for multilingual job title matching.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Alibaba-NLP/gte-Qwen2-7B-instruct
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 3584 dimensions
  • Similarity Function: Cosine Similarity
  • Training Datasets:
    • full_en
    • full_de
    • full_es
    • full_zh
    • mix

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: Qwen2Model 
  (1): Pooling({'word_embedding_dimension': 3584, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': True, 'include_prompt': True})
  (2): Normalize()
)
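
The pooling module keeps only the hidden state of the last token of each sequence, and the Normalize module scales each embedding to unit length, so cosine similarity reduces to a dot product. Below is a minimal sketch of that logic in plain PyTorch, as an illustration of modules (1) and (2) above rather than the library internals; it assumes right-padded inputs:

import torch
import torch.nn.functional as F

def last_token_pool(hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # hidden_states: (batch, seq_len, 3584); attention_mask: (batch, seq_len)
    last_idx = attention_mask.sum(dim=1) - 1    # index of each sequence's final real token
    batch_idx = torch.arange(hidden_states.size(0), device=hidden_states.device)
    pooled = hidden_states[batch_idx, last_idx]  # (batch, 3584)
    return F.normalize(pooled, p=2, dim=1)       # unit vectors, so cosine == dot product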

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("pj-mathematician/JobGTE-7b-Lora")
# Run inference
sentences = [
    'Volksvertreter',     # elected representative
    'Parlamentarier',     # member of parliament
    'Oberbürgermeister',  # lord mayor
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 3584)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
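
For retrieval-style matching, encode one query title against a pool of candidates and rank by cosine score. A short illustrative extension of the snippet above (the candidate list is made up for the example):

# Rank candidate job titles against a query title
query_embedding = model.encode(["Volksvertreter"])
candidates = ["Parlamentarier", "Oberbürgermeister", "Abgeordneter"]
candidate_embeddings = model.encode(candidates)

scores = model.similarity(query_embedding, candidate_embeddings)[0]
for title, score in sorted(zip(candidates, scores.tolist()), key=lambda p: p[1], reverse=True):
    print(f"{score:.3f}  {title}")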

Training Details

Training Datasets

full_en

  • Dataset: full_en
  • Size: 28,880 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 2, mean 4.4, max 9 tokens
    • positive (string): min 2, mean 4.42, max 10 tokens
  • Samples (anchor → positive):
    • air commodore → flight lieutenant
    • command and control officer → flight officer
    • air commodore → command and control officer
  • Loss: CachedGISTEmbedLoss with these parameters:
    {'guide': SentenceTransformer(
      (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel 
      (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
      (2): Normalize()
    ), 'temperature': 0.01, 'mini_batch_size': 64, 'margin_strategy': 'absolute', 'margin': 0.0}
    
full_de

  • Dataset: full_de
  • Size: 23,023 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 2, mean 9.11, max 33 tokens
    • positive (string): min 2, mean 9.41, max 33 tokens
  • Samples (anchor → positive):
    • Staffelkommandantin → Kommodore
    • Luftwaffenoffizierin → Luftwaffenoffizier/Luftwaffenoffizierin
    • Staffelkommandantin → Luftwaffenoffizierin
  • Loss: CachedGISTEmbedLoss with these parameters:
    {'guide': SentenceTransformer(
      (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel 
      (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
      (2): Normalize()
    ), 'temperature': 0.01, 'mini_batch_size': 64, 'margin_strategy': 'absolute', 'margin': 0.0}
    
full_es

  • Dataset: full_es
  • Size: 20,724 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 3, mean 9.42, max 35 tokens
    • positive (string): min 3, mean 9.18, max 35 tokens
  • Samples (anchor → positive):
    • jefe de escuadrón → instructor
    • comandante de aeronave → instructor de simulador
    • instructor → oficial del Ejército del Aire
  • Loss: CachedGISTEmbedLoss with these parameters:
    {'guide': SentenceTransformer(
      (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel 
      (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
      (2): Normalize()
    ), 'temperature': 0.01, 'mini_batch_size': 64, 'margin_strategy': 'absolute', 'margin': 0.0}
    
full_zh

  • Dataset: full_zh
  • Size: 30,401 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 3, mean 4.7, max 12 tokens
    • positive (string): min 3, mean 5.04, max 19 tokens
  • Samples (anchor → positive):
    • 技术总监 → 技术和运营总监
    • 技术总监 → 技术主管
    • 技术总监 → 技术艺术总监
  • Loss: CachedGISTEmbedLoss with these parameters:
    {'guide': SentenceTransformer(
      (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel 
      (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
      (2): Normalize()
    ), 'temperature': 0.01, 'mini_batch_size': 64, 'margin_strategy': 'absolute', 'margin': 0.0}
    
mix

  • Dataset: mix
  • Size: 21,760 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
    • anchor (string): min 1, mean 4.98, max 14 tokens
    • positive (string): min 1, mean 7.22, max 27 tokens
  • Samples (anchor → positive):
    • technical manager → Technischer Direktor für Bühne, Film und Fernsehen
    • head of technical → directora técnica
    • head of technical department → 技术艺术总监
  • Loss: CachedGISTEmbedLoss with these parameters:
    {'guide': SentenceTransformer(
      (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel 
      (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
      (2): Normalize()
    ), 'temperature': 0.01, 'mini_batch_size': 64, 'margin_strategy': 'absolute', 'margin': 0.0}
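
All five datasets share the same anchor/positive column layout and the same CachedGISTEmbedLoss configuration, in which a small 384-dimensional BERT-style guide model filters false negatives out of the in-batch negatives while gradient caching keeps the effective batch large. A hedged sketch of how such a setup is assembled with the sentence-transformers API; the guide checkpoint name and the inline pairs are assumptions for illustration:

from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import CachedGISTEmbedLoss

model = SentenceTransformer("Alibaba-NLP/gte-Qwen2-7B-instruct")
# The card only shows the guide's shape (384-d BERT, mean pooling);
# all-MiniLM-L6-v2 is an assumed stand-in that matches it.
guide = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Illustrative rows in the anchor/positive layout used by every dataset above
train_dataset = Dataset.from_dict({
    "anchor": ["air commodore", "command and control officer"],
    "positive": ["flight lieutenant", "flight officer"],
})

loss = CachedGISTEmbedLoss(
    model,
    guide=guide,
    temperature=0.01,
    mini_batch_size=64,          # gradient caching processes the batch in chunks of 64
    margin_strategy="absolute",
    margin=0.0,
)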
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 128
  • per_device_eval_batch_size: 128
  • gradient_accumulation_steps: 2
  • num_train_epochs: 2
  • warmup_ratio: 0.05
  • log_on_each_node: False
  • fp16: True
  • dataloader_num_workers: 4
  • fsdp: ['full_shard', 'auto_wrap']
  • fsdp_config: {'transformer_layer_cls_to_wrap': ['Qwen2DecoderLayer'], 'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • ddp_find_unused_parameters: True
  • gradient_checkpointing: True
  • batch_sampler: no_duplicates

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 128
  • per_device_eval_batch_size: 128
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 2
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 2
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.05
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: False
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: True
  • dataloader_num_workers: 4
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: ['full_shard', 'auto_wrap']
  • fsdp_min_num_params: 0
  • fsdp_config: {'transformer_layer_cls_to_wrap': ['Qwen2DecoderLayer'], 'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: True
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: True
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
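
The non-default hyperparameters above map directly onto SentenceTransformerTrainingArguments. A minimal sketch of the training launch under that assumption (output_dir is a placeholder, and the FSDP/DDP settings are omitted because they depend on the multi-GPU launch environment):

from sentence_transformers import (
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="jobgte-7b-lora",                # placeholder path
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    gradient_accumulation_steps=2,
    num_train_epochs=2,
    warmup_ratio=0.05,
    fp16=True,
    dataloader_num_workers=4,
    gradient_checkpointing=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # no repeated texts within a batch
)

trainer = SentenceTransformerTrainer(
    model=model,                  # base model from the loss sketch above
    args=args,
    train_dataset=train_dataset,  # anchor/positive pairs as described above
    loss=loss,
)
trainer.train()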

Training Logs

Epoch Step Training Loss
0.0165 1 4.5178
0.0331 2 3.8803
0.0496 3 2.8882
0.0661 4 4.5362
0.0826 5 3.6406
0.0992 6 3.5285
0.1157 7 4.1398
0.1322 8 4.1543
0.1488 9 4.4487
0.1653 10 4.7408
0.1818 11 2.1874
0.1983 12 3.3176
0.2149 13 2.8286
0.2314 14 2.87
0.2479 15 2.4834
0.2645 16 2.7856
0.2810 17 3.1948
0.2975 18 2.1755
0.3140 19 1.9861
0.3306 20 2.0536
0.3471 21 2.7626
0.3636 22 1.6489
0.3802 23 2.078
0.3967 24 1.5864
0.4132 25 1.8815
0.4298 26 1.8041
0.4463 27 1.7482
0.4628 28 1.191
0.4793 29 1.4166
0.4959 30 1.3215
0.5124 31 1.2907
0.5289 32 1.1294
0.5455 33 1.1586
0.5620 34 1.551
0.5785 35 1.3628
0.5950 36 0.9899
0.6116 37 1.1846
0.6281 38 1.2721
0.6446 39 1.1261
0.6612 40 0.9535
0.6777 41 1.2086
0.6942 42 0.7472
0.7107 43 1.0324
0.7273 44 1.0397
0.7438 45 1.185
0.7603 46 1.2112
0.7769 47 0.84
0.7934 48 0.9286
0.8099 49 0.8689
0.8264 50 0.9546
0.8430 51 0.8283
0.8595 52 0.757
0.8760 53 0.9199
0.8926 54 0.7404
0.9091 55 1.0995
0.9256 56 0.8231
0.9421 57 0.6297
0.9587 58 0.9869
0.9752 59 0.9597
0.9917 60 0.7025
1.0 61 0.4866

Framework Versions

  • Python: 3.11.11
  • Sentence Transformers: 4.1.0
  • Transformers: 4.51.3
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.6.0
  • Datasets: 3.5.0
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}