# zerank-1-GGUF
zerank-1 is a state-of-the-art reranker developed by ZeroEntropy to significantly improve retrieval accuracy in search engines. Unlike most SOTA rerankers, which are closed-source and proprietary, zerank-1 is openly available, and it outperforms top models such as Cohere-Rerank-v3.5 and Salesforce/LlamaRank-v1 across diverse domains, including finance, legal, code, STEM, medical, and conversational data. This repository contains GGUF quantizations of zerank-1.
## Model files
| File | Size | Format |
| --- | --- | --- |
| zerank-1.BF16.gguf | 8.05 GB | BF16 |
| zerank-1.F16.gguf | 8.05 GB | F16 |
| zerank-1.F32.gguf | 16.1 GB | F32 |
| zerank-1.Q2_K.gguf | 1.67 GB | Q2_K |
| zerank-1.Q3_K_L.gguf | 2.24 GB | Q3_K_L |
| zerank-1.Q3_K_M.gguf | 2.08 GB | Q3_K_M |
| zerank-1.Q3_K_S.gguf | 1.89 GB | Q3_K_S |
| zerank-1.Q4_K_M.gguf | 2.5 GB | Q4_K_M |
| zerank-1.Q4_K_S.gguf | 2.38 GB | Q4_K_S |
| zerank-1.Q5_K_M.gguf | 2.89 GB | Q5_K_M |
| zerank-1.Q5_K_S.gguf | 2.82 GB | Q5_K_S |
| zerank-1.Q6_K.gguf | 3.31 GB | Q6_K |
| zerank-1.Q8_0.gguf | 4.28 GB | Q8_0 |
| .gitattributes | 2.39 kB | - |
| README.md | 462 B | - |
| config.json | 29 B | - |
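
If you only need one of the quantizations, you can fetch a single file instead of cloning the whole repository. Below is a minimal sketch using `huggingface_hub`; the repo id is a placeholder you should replace with this repository's actual id, and Q4_K_M is picked only as an example mid-size quant from the table above.

```python
# Minimal sketch: download one quant file with huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="your-namespace/zerank-1-GGUF",  # placeholder -- use this repo's actual id
    filename="zerank-1.Q4_K_M.gguf",         # example mid-size quant from the table above
)
print(path)  # local path to the cached GGUF file
```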
## Quants Usage

The files above are sorted by size, which does not necessarily reflect quality; IQ-quants are often preferable to non-IQ quants of similar size. ikawrakow has published a handy graph comparing some of the lower-quality quant types (lower is better).
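
GGUF rerankers of this kind can be served locally with llama.cpp. The sketch below assumes a `llama-server` build with reranking support, started with something like `llama-server -m zerank-1.Q4_K_M.gguf --reranking --port 8080`; the flag name, the `/v1/rerank` endpoint, and the response fields follow recent llama.cpp server builds and may differ in your version.

```python
# Sketch: score candidate documents against a query via a local llama.cpp server.
# Assumes the server was started with reranking enabled, e.g.:
#   llama-server -m zerank-1.Q4_K_M.gguf --reranking --port 8080
# Endpoint path, request schema, and response fields are assumptions based on
# recent llama.cpp server builds and may need adjusting for your version.
import requests

resp = requests.post(
    "http://localhost:8080/v1/rerank",
    json={
        "query": "What is the capital of France?",
        "documents": [
            "Paris is the capital and largest city of France.",
            "The Eiffel Tower was completed in 1889.",
            "Berlin is the capital of Germany.",
        ],
    },
    timeout=60,
)
resp.raise_for_status()

# Each result carries the index of a document and its relevance score;
# higher scores mean the model judges the document more relevant to the query.
results = sorted(resp.json()["results"], key=lambda r: r["relevance_score"], reverse=True)
for r in results:
    print(r["index"], r["relevance_score"])
```

In a retrieval pipeline, the candidates returned by a first-stage retriever are simply re-sorted by these scores before being passed downstream.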