This model was converted to GGUF and quantized using the official tools provided by llama.cpp.
Serving:

Run llama.cpp's `llama-server`, pairing the quantized model with either the F16 or the Q8_0 multimodal projector (`--mmproj`). `-ngl 999` offloads all layers to the GPU.

```shell
# F16 multimodal projector (higher fidelity, more VRAM)
./build/bin/llama-server --mmproj /models/mmproj-MiMo-VL-7B-RL-f16.gguf \
  -m /models/MiMo-VL-7B-RL-q4_k_m.gguf -ngl 999

# Q8_0 multimodal projector (smaller footprint)
./build/bin/llama-server --mmproj /models/mmproj-MiMo-VL-7B-RL-q8_0.gguf \
  -m /models/MiMo-VL-7B-RL-q4_k_m.gguf -ngl 999
```
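Once the server is up, it exposes an OpenAI-compatible chat endpoint. A minimal client sketch, assuming the default address `http://localhost:8080` and a local PNG image (the helper names here are illustrative, not part of llama.cpp):

```python
import base64
import json
from urllib import request

def build_chat_payload(image_bytes: bytes, prompt: str) -> dict:
    """Build an OpenAI-style multimodal chat payload: one user message
    containing a base64 data-URI image part followed by a text part."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{b64}"}},
                    {"type": "text", "text": prompt},
                ],
            }
        ],
        "max_tokens": 256,
    }

def ask(image_path: str, prompt: str,
        url: str = "http://localhost:8080/v1/chat/completions") -> str:
    """POST the request to llama-server and return the model's reply text."""
    with open(image_path, "rb") as f:
        payload = build_chat_payload(f.read(), prompt)
    req = request.Request(url, data=json.dumps(payload).encode("utf-8"),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

For example, `ask("photo.png", "Describe this image.")` would return the model's description as a string.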