By freezing one or more self-attention heads of the LLaVA 1.5-7b-hf model, I hope to develop a method for correcting or mitigating hallucinations in previously generated answers.
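Because attention heads share the fused q/k/v/o projection matrices, "freezing a head" amounts to masking the gradient slices that belong to that head during fine-tuning. Below is a minimal sketch of one way this could be done with gradient hooks in `transformers`; the `freeze_heads` helper, the chosen layer, and the head indices are illustrative assumptions, not the training code actually used for this checkpoint.

```python
# A minimal sketch, assuming the standard Hugging Face transformers API.
# The helper, layer choice, and head indices below are illustrative.
import torch
from transformers import LlavaForConditionalGeneration

model = LlavaForConditionalGeneration.from_pretrained(
    "llava-hf/llava-1.5-7b-hf", torch_dtype=torch.float16
)

def freeze_heads(layer, head_indices, num_heads=32):
    """Mask gradient updates for the given heads in one decoder layer.

    Assumes plain multi-head attention (no grouped-query attention),
    which holds for the 7B Vicuna backbone (32 heads, head_dim 128).
    """
    attn = layer.self_attn
    head_dim = attn.q_proj.weight.shape[0] // num_heads
    mask = torch.ones(num_heads * head_dim, 1)
    for h in head_indices:
        mask[h * head_dim : (h + 1) * head_dim] = 0.0

    def zero_rows(grad):  # q/k/v: a head's weights occupy a block of rows
        return grad * mask.to(grad.device, grad.dtype)

    def zero_cols(grad):  # o_proj: a head's output feeds a block of columns
        return grad * mask.to(grad.device, grad.dtype).T

    for proj in (attn.q_proj, attn.k_proj, attn.v_proj):
        proj.weight.register_hook(zero_rows)
    attn.o_proj.weight.register_hook(zero_cols)

# The decoder attribute path differs across transformers versions.
lm = model.language_model
layers = lm.model.layers if hasattr(lm, "model") else lm.layers

# Example: freeze heads 0 and 5 of the first decoder layer (illustrative choice).
freeze_heads(layers[0], head_indices=[0, 5])
```

With hooks like these, the masked gradient slices are zeroed on every backward pass, so the selected heads keep their pretrained behavior while the rest of the model adapts.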

Model size: 7B parameters · Tensor type: F16 · Format: Safetensors

Model tree for JanEbigt/LLaVA-att-test: finetuned from LLaVA 1.5-7b-hf.