# Explora-0.6B-GGUF

Explora-0.6B is a lightweight, efficient general-purpose reasoning model, fine-tuned from Qwen3-0.6B on the first 100,000 entries of the Open-Omega-Explora-2.5M dataset. It is tailored for science- and code-focused reasoning tasks, combining symbolic clarity with fluent instruction following, making it well suited to exploratory workflows in STEM domains.

## Model Files

| File Name | Format | Size | Precision | Description |
|---|---|---|---|---|
| Explora-0.6B.F32.gguf | GGUF | 2.39 GB | 32-bit Float | Full-precision model, highest quality |
| Explora-0.6B.F16.gguf | GGUF | 1.2 GB | 16-bit Float | Half precision, good balance of size and quality |
| Explora-0.6B.BF16.gguf | GGUF | 1.2 GB | 16-bit BFloat | Brain floating point, optimized for inference |
| Explora-0.6B.Q8_0.gguf | GGUF | 639 MB | 8-bit Quantized | High-quality quantized model |
| Explora-0.6B.Q6_K.gguf | GGUF | 495 MB | 6-bit Quantized | Very good quality at a smaller size |
| Explora-0.6B.Q5_K_M.gguf | GGUF | 444 MB | 5-bit Quantized (Medium) | Good quality, balanced compression |
| Explora-0.6B.Q5_K_S.gguf | GGUF | 437 MB | 5-bit Quantized (Small) | Good quality, higher compression |
| Explora-0.6B.Q4_K_M.gguf | GGUF | 397 MB | 4-bit Quantized (Medium) | Decent quality with good compression |
| Explora-0.6B.Q4_K_S.gguf | GGUF | 383 MB | 4-bit Quantized (Small) | Decent quality, higher compression |
| Explora-0.6B.Q3_K_L.gguf | GGUF | 368 MB | 3-bit Quantized (Large) | Lower quality but very compact |
| Explora-0.6B.Q3_K_M.gguf | GGUF | 347 MB | 3-bit Quantized (Medium) | Lower quality, more compact |
| Explora-0.6B.Q3_K_S.gguf | GGUF | 323 MB | 3-bit Quantized (Small) | Lower quality, most compact |
| Explora-0.6B.Q2_K.gguf | GGUF | 296 MB | 2-bit Quantized | Minimal quality, maximum compression |
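
To fetch a single quant instead of cloning the whole repository, the `huggingface_hub` client can download one file by name. The sketch below is a minimal example (it assumes `pip install huggingface_hub`; Q4_K_M is just one sensible starting point from the table above):

```python
# Minimal sketch: download one quantized file from this repo.
# Q4_K_M is an arbitrary but reasonable size/quality starting point.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="prithivMLmods/Explora-0.6B-GGUF",
    filename="Explora-0.6B.Q4_K_M.gguf",
)
print(model_path)  # local path to the cached .gguf file
```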

## Configuration Files

| File Name | Size | Description |
|---|---|---|
| config.json | 29 Bytes | Model configuration parameters |
| .gitattributes | 2.3 kB | Git LFS configuration for large files |
| README.md | 280 Bytes | Project documentation |

## Quants Usage

(Sorted by size, not necessarily by quality. IQ-quants are often preferable to similar-sized non-IQ quants.)

ikawrakow's graph comparing some lower-quality quant types (lower is better) is a handy reference when choosing among them.

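Once a quant is chosen, it can be run locally with llama-cpp-python. The sketch below is a minimal example (it assumes `pip install llama-cpp-python`; the model path comes from the download step above, and `n_ctx` is an illustrative context-window choice):

```python
# Minimal inference sketch with llama-cpp-python.
# model_path points at the downloaded .gguf file; n_ctx is illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="Explora-0.6B.Q4_K_M.gguf",
    n_ctx=4096,
    verbose=False,
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "State the quadratic formula and when it applies."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```

Swapping in a smaller file from the table trades answer quality for memory; the filename is the only change needed.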

