deepseek-qwen-7b-gguf
deepseek-qwen-7b-gguf is a GGUF Q4_K_M (int4) quantized version of deepseek-ai/DeepSeek-R1-Distill-Qwen-7B, providing a fast inference implementation optimized for AI PCs.
Model Description
- Developed by: deepseek-ai / qwen
- Model type: qwen2.5
- Parameters: 7 billion
- Model Parent: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
- Language(s) (NLP): English
- License: Apache 2.0
- Uses: Chat, general-purpose LLM
- Quantization: int4
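As a GGUF file, the model can be run locally with any llama.cpp-based runtime. Below is a minimal sketch using the llama-cpp-python bindings; the local filename and the sampling parameters are assumptions for illustration, not part of this card.

```python
# Minimal sketch: running the Q4_K_M GGUF locally with llama-cpp-python.
# The model_path filename is an assumption -- point it at the actual .gguf
# file downloaded from the llmware/deepseek-qwen-7b-gguf repo.
from llama_cpp import Llama

llm = Llama(
    model_path="deepseek-qwen-7b-q4_k_m.gguf",  # assumed local filename
    n_ctx=4096,       # context window; tune for your hardware
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Explain int4 quantization in one paragraph."}
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```

Note that since the parent model is a DeepSeek-R1 distill, responses typically open with a `<think>` reasoning block before the final answer; strip it in post-processing if you only want the answer text.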