SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'BertModel'})
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
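
This is the standard all-MiniLM-L6-v2 pipeline: a BERT encoder, mean pooling over token embeddings, then L2 normalization, so cosine similarity and dot product coincide on the outputs. As a quick sanity check, the loaded model exposes the sequence length and embedding dimensionality directly; a minimal sketch:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("LamaDiab/MiniLM-SemanticEngine")
print(model.max_seq_length)                      # 256
print(model.get_sentence_embedding_dimension())  # 384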

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("LamaDiab/MiniLM-SemanticEngine")
# Run inference
sentences = [
    'hiit biker shorts - black',
    'black shorts',
    'winter slippers for ladies christmas themed',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000,  0.7103, -0.0705],
#         [ 0.7103,  1.0000, -0.0356],
#         [-0.0705, -0.0356,  1.0000]])
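
Since the final Normalize() module makes every embedding unit-length, the model is also convenient for top-k retrieval over a product catalog. A minimal semantic-search sketch using the library's util.semantic_search (the catalog contents here are hypothetical):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("LamaDiab/MiniLM-SemanticEngine")

# Hypothetical product catalog
catalog = [
    "hiit biker shorts - black",
    "winter slippers for ladies christmas themed",
    "zinnia ice box vivid gen.2 - blue",
]
catalog_embeddings = model.encode(catalog, convert_to_tensor=True)

# Embed the query and retrieve the two closest products
query_embedding = model.encode("black shorts", convert_to_tensor=True)
hits = util.semantic_search(query_embedding, catalog_embeddings, top_k=2)[0]
for hit in hits:
    print(catalog[hit["corpus_id"]], round(hit["score"], 4))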

Evaluation

Metrics

Triplet

Metric            Value
cosine_accuracy   0.9472
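
cosine_accuracy is triplet accuracy: the fraction of (anchor, positive, negative) triples for which the anchor is closer, by cosine similarity, to the positive than to the negative. A minimal sketch of computing it with the library's TripletEvaluator (the triples below are hypothetical):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("LamaDiab/MiniLM-SemanticEngine")

# Hypothetical (anchor, positive, negative) triples
evaluator = TripletEvaluator(
    anchors=["black shorts"],
    positives=["hiit biker shorts - black"],
    negatives=["winter slippers for ladies christmas themed"],
    name="dev",
)
print(evaluator(model))  # e.g. {'dev_cosine_accuracy': 1.0}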

Training Details

Training Dataset

Unnamed Dataset

  • Size: 169,967 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 3 tokens, mean: 8.82 tokens, max: 237 tokens
    • positive: string; min: 3 tokens, mean: 14.99 tokens, max: 256 tokens
  • Samples:
    • anchor: orasi barista almond milk is a premium, plant-based milk designed specifically for coffee lovers. crafted to create the perfect froth, it delivers a smooth and creamy texture that enhances the flavor of your lattes, cappuccinos, and other coffee drinks.
      positive: groceries
    • anchor: this toy is a "modern fashion" doll, combining beauty and innovation in its design. the doll has long and pink hair that adds a modern and attractive character to it. it comes with a wide variety of clothes and cool accessories that allow children to switch outfits and try different looks.
      features:
      modern and attractive design: the doll has a stylish and modern design that suits the tastes of children of different ages.
      long and colorful hair: long and colorful hair gives the doll a distinctive and beautiful look, enhancing the possibilities of play and creativity.
      wide range of clothes: the game has a large assortment of clothes that allow children to choose the appropriate outfits for the doll character according to their imagination.
      multiple accessories: it comes with various accessories that add a touch of distinction and elegance to the doll, allowing to experiment with different styles.
      stimulate creativity and imagination: the game helps enhance children's imagination by...
      positive: kids
    • anchor: zinnia ice box vivid gen.2 - blue
      positive: blue ice box
  • Loss: MultipleNegativesSymmetricRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": true
    }
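
This is the symmetric variant of in-batch negatives ranking loss: with batch size B, each anchor is trained to rank its own positive above the other B-1 positives in the batch, and each positive is likewise trained to rank its own anchor above the other anchors. A minimal sketch of constructing the loss as configured above (gather_across_devices is a recent library addition and only matters for multi-GPU training):

from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Scaled cosine similarity with symmetric in-batch negatives
loss = losses.MultipleNegativesSymmetricRankingLoss(
    model,
    scale=20.0,
    similarity_fct=util.cos_sim,
    gather_across_devices=True,  # per the parameters above; multi-GPU only
)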
    

Evaluation Dataset

Unnamed Dataset

  • Size: 16,216 evaluation samples
  • Columns: anchor, positive, and negative
  • Approximate statistics based on the first 1000 samples:
    • anchor: string; min: 3 tokens, mean: 9.79 tokens, max: 52 tokens
    • positive: string; min: 2 tokens, mean: 19.21 tokens, max: 256 tokens
    • negative: string; min: 3 tokens, mean: 9.76 tokens, max: 67 tokens
  • Samples:
    • anchor: dosado ring
      positive: dosado or dos-à-dos: a wavy movement of two people around each other, without turning & facing the same direction. material: 18k gold plated hammered brass. size: one size, adjustable. care instructions: to keep the jewelry pieces looking as good as new, please make sure that you store them in an airtight container. they should not come in contact with sweat, water or perfume, alcohol, sanitizers etc. polish with a microfiber cloth.
      negative: kiprun ks light men's running shoes - black
    • anchor: puzzle city of fog
      positive: this amazing puzzle offers a unique opportunity to explore the beauty of san francisco, also known as the "city by the bay," through assembling a 2000-piece jigsaw. you'll immerse yourself in a world full of colors and details, as your eyes wander across the iconic golden gate bridge, towering buildings, distinctive hilly streets, and sailing ships in the harbor. it's a panoramic depiction of san francisco, providing a comprehensive view of the city and its landmarks.
      features:
      explore san francisco: enjoy a virtual exploration of san francisco without leaving your home. get up close with famous landmarks such as the golden gate bridge and the harbor.
      improves cognitive skills: assembling the puzzle enhances focus, memory, and fine motor skills while boosting problem-solving and decision-making abilities.
      relaxation and stress relief: puzzle assembly is a fun and engaging activity that helps to relax and reduce stress, especially when concentrating on the appealing details of san franc...
      negative: unicorn
    • anchor: my fault series
      positive: mercedes ron book
      negative: sophie's world
  • Loss: MultipleNegativesSymmetricRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim",
        "gather_across_devices": true
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • weight_decay: 0.01
  • num_train_epochs: 5
  • warmup_ratio: 0.2
  • fp16: True
  • dataloader_num_workers: 2
  • dataloader_prefetch_factor: 2
  • push_to_hub: True
  • hub_model_id: LamaDiab/MiniLM-SemanticEngine
  • batch_sampler: no_duplicates
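
These map directly onto the library's SentenceTransformerTrainingArguments; a minimal sketch of reproducing the non-default configuration (output_dir is a placeholder):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="minilm-semantic-engine",  # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    weight_decay=0.01,
    num_train_epochs=5,
    warmup_ratio=0.2,
    fp16=True,
    dataloader_num_workers=2,
    dataloader_prefetch_factor=2,
    push_to_hub=True,
    hub_model_id="LamaDiab/MiniLM-SemanticEngine",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)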

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.01
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.2
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 2
  • dataloader_prefetch_factor: 2
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: True
  • resume_from_checkpoint: None
  • hub_model_id: LamaDiab/MiniLM-SemanticEngine
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch    Step     Training Loss    Validation Loss    cosine_accuracy
0.0004   1        1.6989           -                  -
0.1883   500      1.6103           1.4441             0.9124
0.3765   1000     1.1942           1.3155             0.9233
0.5648   1500     0.9831           1.2584             0.9257
0.7530   2000     0.8867           1.2368             0.9254
0.9413   2500     0.8094           1.1874             0.9274
1.1295   3000     0.5818           1.1431             0.9348
1.3178   3500     0.6978           1.1291             0.9374
1.5060   4000     0.6652           1.0936             0.9389
1.6943   4500     0.6287           1.0889             0.9369
1.8825   5000     0.5986           1.0780             0.9404
2.0708   5500     0.4376           1.0783             0.9386
2.2590   6000     0.5110           1.0674             0.9405
2.4473   6500     0.4997           1.0412             0.9427
2.6355   7000     0.4985           1.0160             0.9441
2.8238   7500     0.4798           1.0264             0.9434
3.0120   8000     0.3477           1.0153             0.9455
3.2003   8500     0.4117           1.0177             0.9461
3.3886   9000     0.4302           1.0071             0.9451
3.5768   9500     0.4046           1.0171             0.9460
3.7651   10000    0.4140           0.9819             0.9474
3.9533   10500    0.3786           0.9982             0.9463
4.1416   11000    0.2952           0.9920             0.9461
4.3298   11500    0.3655           0.9959             0.9455
4.5181   12000    0.3655           0.9961             0.9464
4.7063   12500    0.3662           0.9826             0.9467
4.8946   13000    0.3545           0.9864             0.9472

Framework Versions

  • Python: 3.11.13
  • Sentence Transformers: 5.1.2
  • Transformers: 4.53.3
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.9.0
  • Datasets: 4.4.1
  • Tokenizers: 0.21.2

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}