Collection: POTION • These are the flagship POTION models. Load them and use them with model2vec (https://github.com/MinishLab/model2vec) or sentence-transformers. • 6 items • Updated May 23 • 12
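The collection description says these models load through model2vec or sentence-transformers; below is a minimal sketch of what that looks like, assuming the model id minishlab/potion-base-8M (a placeholder here; any model id from the collection should slot in the same way).

# Minimal sketch: loading a POTION static-embedding model with model2vec.
# "minishlab/potion-base-8M" is an assumed example id from this collection.
from model2vec import StaticModel

model = StaticModel.from_pretrained("minishlab/potion-base-8M")
embeddings = model.encode(["It's dangerous to go alone!", "It's a secret to everybody."])
print(embeddings.shape)

# If the repo ships a sentence-transformers config (an assumption here),
# the same id should also load via sentence-transformers:
# from sentence_transformers import SentenceTransformer
# st_model = SentenceTransformer("minishlab/potion-base-8M")
# st_embeddings = st_model.encode(["It's dangerous to go alone!"])

Because these are static (non-transformer) embeddings, encoding is a simple lookup-and-pool step, which is what makes the models fast enough to run on CPU.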
Paper: REINFORCE++: A Simple and Efficient Approach for Aligning Large Language Models • 2501.03262 • Published Jan 4 • 100
Collection: Reward Bench • Datasets, spaces, and models for the reward model benchmark! • 5 items • Updated Apr 30 • 9
Article: Accelerated Inference with Optimum and Transformers Pipelines • By philschmid • May 10, 2022 • 2
Article: 🪆 Introduction to Matryoshka Embedding Models • By tomaarsen and 2 others • Feb 23, 2024 • 150
Article: TGI Multi-LoRA: Deploy Once, Serve 30 Models • By derek-thomas and 2 others • Jul 18, 2024 • 59