- vLLM Compatibility Issue with Unsloth's 4-bit Quantized Models - Shape Mismatch During Weight Loading (#9, opened 3 months ago by varun12345)
- Update model type in config.json (#8, opened 3 months ago by RTannous)
- Broken in transformers 4.53 > (#7, opened 4 months ago by Sorenmc)
- Bnb breaks the function calls (#5, opened 9 months ago by Rexschwert)
- generation_config.json (#4, opened 9 months ago by iqdddd)
- Help getting the example working (#3, opened 9 months ago by salamanders)
- Update preprocessor_config.json (#2, opened 10 months ago by TahirC)