# SAM3-GGUF

GGUF weights for SAM 3 (Segment Anything Model 3), converted for use with sam3.cpp.

## Available Variants

| File | Precision | Size | Inference (M4 Max) |
|------|-----------|------|--------------------|
| sam3-image-f32.gguf | F32 | 3.2 GB | 3.5 s |
| sam3-image-f16.gguf | F16 | 1.7 GB | 3.5 s |
| sam3-image-q8_0.gguf | Q8_0 | 1.0 GB | 3.2 s |
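To illustrate why the Q8_0 file is roughly a third the size of the F32 one, here is a minimal NumPy sketch of GGUF's Q8_0 scheme (an illustration of the format, not sam3.cpp's actual code): weights are split into blocks of 32, and each block stores one float16 scale plus 32 int8 values, about 8.5 bits per weight versus 32 for F32.

```python
import numpy as np

def quantize_q8_0(w: np.ndarray):
    # Split into blocks of 32; one float16 scale per block, int8 values in [-127, 127].
    blocks = w.reshape(-1, 32)
    scale = (np.abs(blocks).max(axis=1) / 127.0).astype(np.float16)
    s = scale.astype(np.float32)[:, None]
    safe = np.where(s == 0.0, 1.0, s)  # avoid division by zero for all-zero blocks
    q = np.round(blocks / safe).astype(np.int8)
    return q, scale

def dequantize_q8_0(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    # Reconstruct approximate weights: int8 value times per-block scale.
    return (q.astype(np.float32) * scale.astype(np.float32)[:, None]).reshape(-1)

w = np.random.default_rng(0).standard_normal(1024).astype(np.float32)
q, s = quantize_q8_0(w)
restored = dequantize_q8_0(q, s)
# Storage: 1024 int8 bytes + 32 blocks * 2-byte scale = 1088 bytes, vs 4096 for F32.
```

The per-element round-trip error is at most half of the block scale, which is why Q8_0 typically costs little accuracy relative to its size savings.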

## Usage

```python
import sam3cpp

model = sam3cpp.Sam3Model("hf://rob-laz/sam3-gguf/sam3-image-f16.gguf")
pred = model.predict("photo.jpg", "person")
print(f"{pred.count} detections")
```

See sam3.cpp for build instructions and full documentation.

## Conversion

Converted from the original SAM 3 MLX weights using `tools/convert_mlx_sam3_to_gguf.py`.

## Model Details

- Format: GGUF
- Model size: 0.8B params
- Architecture: sam3
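The 0.8B parameter count roughly explains the file sizes in the variants table. A quick back-of-the-envelope check (note the count is a rounded figure, and real GGUF files add metadata and may keep some tensors at higher precision, which is why the actual Q8_0 file is somewhat larger than this estimate):

```python
params = 0.8e9  # rounded parameter count from the model card

# Bytes per weight: F32 = 4, F16 = 2, Q8_0 ~ 8.5 bits = 1.0625 bytes.
for name, bpw in [("F32", 4.0), ("F16", 2.0), ("Q8_0", 8.5 / 8)]:
    print(f"{name}: ~{params * bpw / 1e9:.2f} GB")
```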