LLMic

#1340
by razvanab - opened

It's queued! :D
This model uses the LlamaForCausalLM architecture class, which is often used for modified Llama architectures that may or may not still be compatible with llama.cpp, so let's hope for the best.
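If you want to see what the converter will key off, the architecture class is declared in the repo's config.json. A minimal sketch of that check; the repo id below is a placeholder, not the actual path:

```python
# Minimal sketch: inspect the "architectures" field that llama.cpp's
# convert_hf_to_gguf.py uses to pick a model class.
# The repo id is a placeholder -- substitute the actual LLMic repo.
import json
from huggingface_hub import hf_hub_download

config_path = hf_hub_download("<org>/LLMic", "config.json")
with open(config_path, encoding="utf-8") as f:
    config = json.load(f)

print(config.get("architectures"))  # expected: ["LlamaForCausalLM"]
```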

You can check for progress at http://hf.tst.eu/status.html or regularly check the model
summary page at https://hf.tst.eu/model#LLMic-GGUF

The BPE pre-tokenizer of this model is unfortunately not supported by llama.cpp. If you know which supported pre-tokenizer we could use instead, please let me know, but given that it seems to be a unique model based on the Llama architecture rather than a finetune, I'm doubtful that reusing an existing pre-tokenizer would be feasible.
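For anyone who wants to dig in: llama.cpp's convert_hf_to_gguf.py identifies BPE pre-tokenizers by hashing how the tokenizer splits a fixed check string, and an unrecognized hash aborts the conversion. A rough sketch of how one could compare this model's pre-tokenizer rules against an already-supported model before proposing a mapping; both repo ids are placeholders:

```python
# Sketch only, not the conversion script: compare the "pre_tokenizer" block of
# tokenizer.json between LLMic and a model llama.cpp already supports.
# Repo ids are placeholders -- substitute real ones.
import json
from huggingface_hub import hf_hub_download

def pre_tokenizer_rules(repo_id: str) -> dict:
    """Return the 'pre_tokenizer' section of a repo's tokenizer.json."""
    path = hf_hub_download(repo_id, "tokenizer.json")
    with open(path, encoding="utf-8") as f:
        return json.load(f).get("pre_tokenizer", {})

llmic_rules = pre_tokenizer_rules("<org>/LLMic")
reference_rules = pre_tokenizer_rules("<org>/<supported-llama-model>")

# If the split regexes / steps differ, mapping LLMic onto the reference
# pre-tokenizer would silently change tokenization.
print(json.dumps(llmic_rules, indent=2))
print(json.dumps(reference_rules, indent=2))
```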
