Quazim0t0 committed · verified · Commit 2501faa · Parent: fd1cf48

Update README.md

Files changed (1): README.md (+68 −1)
If you are using this model with Open WebUI, here is a simple function to organize the model's responses: https://openwebui.com/f/quaz93/phi4_turn_r1_distill_thought_function_v1
# Phi4 Turn R1Distill LoRA Adapters

## Overview
These **LoRA adapters** were trained on diverse **reasoning datasets** that pair structured **Thought** and **Solution** responses to strengthen logical inference. The project was designed to **test the R1 dataset** on **Phi-4**, with the goal of producing a **lightweight, fast, and efficient reasoning model**.

All adapters were fine-tuned on an **NVIDIA A800 GPU** and are suitable for continued training, merging, or direct deployment.
As part of an open-source initiative, all resources are **publicly available** for unrestricted research and development.

---
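Because the adapters target structured **Thought**/**Solution** outputs, downstream code usually needs to separate the reasoning from the final answer. A minimal sketch of such a splitter, assuming the model wraps each part in `<Thought>`/`<Solution>` tags (the tag names are an assumption based on the dataset structure, not confirmed by this README):

```python
import re


def split_reasoning(text: str) -> tuple[str, str]:
    """Split a model response into (thought, solution) parts.

    Assumes <Thought>...</Thought> and <Solution>...</Solution> tags;
    if no Solution tag is found, the whole text is treated as the answer.
    """
    thought = re.search(r"<Thought>(.*?)</Thought>", text, re.DOTALL)
    solution = re.search(r"<Solution>(.*?)</Solution>", text, re.DOTALL)
    return (
        thought.group(1).strip() if thought else "",
        solution.group(1).strip() if solution else text.strip(),
    )
```

The Open WebUI function linked above performs a similar role inside the chat interface; this sketch is only for standalone scripts.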
## LoRA Adapters
Below are the currently available LoRA fine-tuned adapters (**as of January 30, 2025**):

- [Phi4.Turn.R1Distill-Lora1](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora1)
- [Phi4.Turn.R1Distill-Lora2](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora2)
- [Phi4.Turn.R1Distill-Lora3](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora3)
- [Phi4.Turn.R1Distill-Lora4](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora4)
- [Phi4.Turn.R1Distill-Lora5](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora5)
- [Phi4.Turn.R1Distill-Lora6](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora6)
- [Phi4.Turn.R1Distill-Lora7](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora7)
- [Phi4.Turn.R1Distill-Lora8](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill-Lora8)

---
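For standalone deployment, an adapter can be folded into the base weights with `peft`'s `merge_and_unload`. A minimal sketch, assuming Lora1 (any of the adapters above should work the same way; the output directory name is hypothetical):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model, then attach one of the LoRA adapters
base = AutoModelForCausalLM.from_pretrained("microsoft/Phi-4")
model = PeftModel.from_pretrained(base, "Quazim0t0/Phi4.Turn.R1Distill-Lora1")

# Fold the LoRA weights into the base model and save a standalone copy
merged = model.merge_and_unload()
merged.save_pretrained("phi4-turn-r1distill-merged")
```

The merged checkpoint can then be used without `peft` at inference time, or converted to GGUF for the runtimes below.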
## GGUF Full & Quantized Models
To facilitate broader testing and real-world inference, **GGUF full and quantized versions** are provided for evaluation on **Open WebUI** and other LLM interfaces.

### **Version 1**
- [Phi4.Turn.R1Distill.Q8_0](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill.Q8_0)
- [Phi4.Turn.R1Distill.Q4_k](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill.Q4_k)
- [Phi4.Turn.R1Distill.16bit](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill.16bit)

### **Version 1.1**
- [Phi4.Turn.R1Distill_v1.1_Q4_k](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill_v1.1_Q4_k)

### **Version 1.2**
- [Phi4.Turn.R1Distill_v1.2_Q4_k](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill_v1.2_Q4_k)

### **Version 1.3**
- [Phi4.Turn.R1Distill_v1.3_Q4_k-GGUF](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill_v1.3_Q4_k-GGUF)

### **Version 1.4**
- [Phi4.Turn.R1Distill_v1.4_Q4_k-GGUF](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill_v1.4_Q4_k-GGUF)

### **Version 1.5**
- [Phi4.Turn.R1Distill_v1.5_Q4_k-GGUF](https://huggingface.co/Quazim0t0/Phi4.Turn.R1Distill_v1.5_Q4_k-GGUF)

---
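The GGUF files can be run directly with llama.cpp (or any GGUF-compatible runtime such as Ollama or Open WebUI). A minimal sketch, assuming llama.cpp's `llama-cli` binary is on your `PATH` and a Q4_k file has already been downloaded from one of the repositories above (the local file name is hypothetical):

```shell
# Run a one-off prompt against the quantized model;
# -m selects the model file, -p the prompt, -n caps generated tokens
llama-cli -m Phi4.Turn.R1Distill_v1.5_Q4_k.gguf \
  -p "Explain why the sky is blue." \
  -n 256
```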
## Usage

### **Loading LoRA Adapters with `transformers` and `peft`**
To load and apply a LoRA adapter on top of Phi-4, use the following approach:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "microsoft/Phi-4"
lora_adapter = "Quazim0t0/Phi4.Turn.R1Distill-Lora1"

# Load the base model and tokenizer, then attach the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)
model = PeftModel.from_pretrained(model, lora_adapter)

# Switch to inference mode (disables dropout)
model.eval()
```