---
license: mit
datasets:
- liuwenhan/reasonrank_data_sft
- liuwenhan/reasonrank_data_rl
- liuwenhan/reasonrank_data_13k
language:
- en
base_model:
- Qwen/Qwen2.5-32B-Instruct
library_name: transformers
tags:
- passage-ranking
- text-ranking
- reasoning
- information-retrieval
---

**📢 Update: On September 4, 2025, we merged the LoRA parameters of ReasonRank (32B) into the model's checkpoint shards, so now everyone only needs to load the model shards, without the LoRA adapter.**

## Introduction

This is the model trained in our paper ReasonRank: Empowering Passage Ranking with Strong Reasoning Ability ([📝arXiv](https://arxiv.org/abs/2508.07050)). Please refer to our [🧩GitHub repository](https://github.com/8421BCD/ReasonRank) for the usage of reasonrank-32B.

## Model Performance

*(performance results figure)*

🌹 If you use this model, please ✨star our [GitHub repository](https://github.com/8421BCD/ReasonRank) to support us. Your star means a lot!
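Since the LoRA parameters are already merged into the checkpoint shards, the model can be loaded directly with `transformers`, with no adapter step. Below is a minimal sketch; the Hub model ID and the listwise prompt format are illustrative assumptions (see the GitHub repository for the exact usage and prompt templates).

```python
# Sketch: loading the merged ReasonRank checkpoint and building a listwise
# reranking prompt. The model ID and prompt layout are assumptions, not the
# official ReasonRank usage -- consult the GitHub repository for specifics.
from transformers import AutoModelForCausalLM, AutoTokenizer


def load_reasonrank(model_id: str = "liuwenhan/reasonrank-32B"):
    """Load tokenizer and model; LoRA is merged, so no adapter is needed."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype="auto",   # use the dtype stored in the checkpoint
        device_map="auto",    # shard across available GPUs
    )
    return tokenizer, model


def build_listwise_prompt(query: str, passages: list[str]) -> str:
    """Assemble a simple listwise reranking prompt (format is an assumption)."""
    lines = [f"Query: {query}", "Rank the following passages by relevance to the query:"]
    for i, passage in enumerate(passages, start=1):
        lines.append(f"[{i}] {passage}")
    return "\n".join(lines)


if __name__ == "__main__":
    tokenizer, model = load_reasonrank()
    prompt = build_listwise_prompt(
        "what is passage ranking?",
        ["Passage ranking orders retrieved texts by relevance.", "A recipe for soup."],
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=512)
    # Decode only the newly generated tokens (the model's reasoning + ranking).
    print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

The heavy model load is kept behind the `__main__` guard so the prompt-building helper can be reused or tested without downloading the 32B checkpoint.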