This model was converted from zai-org/GLM-4.6V to GGUF format using the `convert_hf_to_gguf.py` script from llama.cpp.
To use it:

```shell
llama-server -hf ggml-org/GLM-4.6V-GGUF
```
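Once the server is running, llama-server exposes an OpenAI-compatible API (by default on `127.0.0.1:8080`). Below is a minimal sketch of a chat-completion request payload you could POST to `/v1/chat/completions`; the prompt text and `max_tokens` value are illustrative, and actually sending the request requires the server to be up.

```python
import json

# Endpoint assumed to be llama-server's default host and port.
url = "http://127.0.0.1:8080/v1/chat/completions"

# OpenAI-compatible chat payload; the "model" field is informational
# for llama-server, which serves whatever model it was launched with.
payload = {
    "model": "GLM-4.6V-GGUF",
    "messages": [
        {"role": "user", "content": "Describe this image in one sentence."}
    ],
    "max_tokens": 128,
}

body = json.dumps(payload)
print(body)
```

The serialized `body` can then be sent with any HTTP client (e.g. `curl -d @- "$url"` or Python's `urllib.request`).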