This model is a 34.5% smaller version of nomic-ai/nomic-embed-text-v2-moe for the Portuguese language, created using (a modded version of) the mtem-pruner space.
This pruned model should perform similarly to the original for Portuguese-language tasks while having a much smaller memory footprint. However, it may not perform well for the other languages covered by the original multilingual model, since tokens rarely used in Portuguese were removed from its vocabulary.
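For context, this kind of vocabulary pruning keeps only the embedding rows for tokens that actually occur in the target language. The sketch below illustrates the idea on the FacebookAI/xlm-roberta-base encoder; it is not the actual mtem-pruner code, uses a tiny placeholder corpus, and omits rebuilding the tokenizer, which the real tool also handles:

import torch
from transformers import AutoModel, AutoTokenizer

# Illustration only: prune the input-embedding rows of a multilingual encoder
# down to the tokens seen in a (placeholder) Portuguese corpus.
tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-base")
model = AutoModel.from_pretrained("FacebookAI/xlm-roberta-base")

corpus = ["Exemplo de frase em português.", "Outra frase do corpus de referência."]

# Collect the ids actually used by the corpus, plus the tokenizer's special tokens.
kept_ids = set(tokenizer.all_special_ids)
for text in corpus:
    kept_ids.update(tokenizer(text, add_special_tokens=False)["input_ids"])
kept_ids = sorted(kept_ids)

# Slice the embedding matrix to the kept rows; token ids would still need to be
# remapped in the tokenizer for the pruned model to be usable afterwards.
old_embeddings = model.get_input_embeddings().weight.data
new_embeddings = torch.nn.Embedding(len(kept_ids), old_embeddings.size(1))
new_embeddings.weight.data.copy_(old_embeddings[kept_ids])
model.set_input_embeddings(new_embeddings)
print(f"Kept {len(kept_ids)} of {old_embeddings.size(0)} vocabulary entries")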
You can use this model with the Transformers library:
from transformers import AutoModel, AutoTokenizer

model_name = "cnmoro/portuguese-nomic-embed-text-v2-moe"

# trust_remote_code is required because the model uses a custom architecture.
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, use_fast=True)
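To get sentence embeddings with plain Transformers, you still need to run the encoder and pool the token states. The following is a minimal sketch, reusing the model and tokenizer loaded above and assuming the standard mean-pooling recipe and the "search_query: " / "search_document: " prefixes of the original Nomic model (check the base model card for the exact recommended usage):

import torch
import torch.nn.functional as F

sentences = ["search_query: O que é aprendizado de máquina?"]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool over non-padding tokens, then L2-normalize.
mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)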
Or with the sentence-transformers library:
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("cnmoro/portuguese-nomic-embed-text-v2-moe")
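Encoding then works like any other Sentence Transformers model. A minimal sketch, assuming the base model's "query" / "passage" prompts carry over to the pruned model (trust_remote_code=True may also be needed when loading, since the underlying architecture is custom):

queries = ["O que é aprendizado de máquina?"]
documents = ["Aprendizado de máquina é um campo da inteligência artificial."]

# prompt_name selects the instruction prefix configured in the model, if any.
query_embeddings = model.encode(queries, prompt_name="query")
document_embeddings = model.encode(documents, prompt_name="passage")

# Cosine similarity matrix between queries and documents.
print(model.similarity(query_embeddings, document_embeddings))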
Credits: cc @antoinelouis
Base model: FacebookAI/xlm-roberta-base