deepseek-qwen-7b-gguf

deepseek-qwen-7b-gguf is a GGUF Q4_K_M (int4) quantized version of deepseek-ai/DeepSeek-R1-Distill-Qwen-7B, providing a fast inference implementation optimized for AI PCs.

Model Description

  • Developed by: deepseek-ai / qwen
  • Model type: qwen2.5
  • Parameters: 7 billion
  • Model Parent: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
  • Language(s) (NLP): English
  • License: Apache 2.0
  • Uses: Chat, general-purpose LLM
  • Quantization: int4
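
Usage

A minimal sketch of loading the quantized model for local inference. The runtime (llama-cpp-python) and the GGUF filename pattern are assumptions, since the card does not name them; check the repository's file list for the actual file name.

```python
# Minimal sketch, assuming llama-cpp-python as the runtime.
# The filename glob below is a guess; verify against the repo's files.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="llmware/deepseek-qwen-7b-gguf",
    filename="*q4_k_m.gguf",  # glob pattern; actual file name may differ
    n_ctx=4096,               # context window; adjust as needed
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain int4 quantization in one sentence."}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```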

Model Card Contact

  • llmware on github
  • llmware on hf
  • llmware website
