# Phi 3.5 Mini LoRA Fine-tuned with MLX

This is a LoRA (Low-Rank Adaptation) fine-tune of Phi 3.5 Mini, trained with Apple's MLX framework.

## Usage with MLX

```python
from mlx_lm import load, generate

# Load the model and tokenizer from the Hugging Face Hub
model, tokenizer = load("kacperbb/phi-3.5-mlx-finetuned")

response = generate(model, tokenizer, prompt="Hello, how are you?", max_tokens=100)
print(response)
```
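Since Phi 3.5 Mini is an instruct-tuned model, it typically responds best when the prompt follows its chat format. Below is a minimal sketch of building such a prompt by hand; it assumes the standard Phi-3 chat-template tokens (`<|user|>`, `<|end|>`, `<|assistant|>`). In practice you can let the tokenizer do this via `tokenizer.apply_chat_template` and pass the result as the `prompt` argument to `generate`.

```python
# Sketch: hand-building a Phi-3-style chat prompt.
# Assumes the Phi-3 template tokens <|user|>, <|end|>, <|assistant|>;
# prefer tokenizer.apply_chat_template when using mlx_lm directly.
def build_phi3_prompt(messages):
    parts = []
    for m in messages:
        # Each turn is wrapped as <|role|>\n{content}<|end|>\n
        parts.append(f"<|{m['role']}|>\n{m['content']}<|end|>\n")
    # Trailing assistant tag cues the model to start its reply
    parts.append("<|assistant|>\n")
    return "".join(parts)

prompt = build_phi3_prompt([{"role": "user", "content": "Hello, how are you?"}])
print(prompt)
```

The resulting string can then be passed as `prompt=` in the `generate` call above.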