---
license_name: tongyi-qianwen-research
license_link: https://huggingface.co/Qwen/CodeQwen1.5-7B/blob/main/LICENSE
tags:
- code
- mlx
- mlx-my-repo
pipeline_tag: text-generation
license: other
base_model: NTQAI/Nxcode-CQ-7B-orpo
---

# imgrind/Nxcode-CQ-7B-orpo-mlx-4Bit

The model [imgrind/Nxcode-CQ-7B-orpo-mlx-4Bit](https://huggingface.co/imgrind/Nxcode-CQ-7B-orpo-mlx-4Bit) was converted to MLX format from [NTQAI/Nxcode-CQ-7B-orpo](https://huggingface.co/NTQAI/Nxcode-CQ-7B-orpo) using mlx-lm version **0.28.3**.
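
A 4-bit conversion like this one can typically be reproduced with the mlx-lm conversion CLI. This is a hedged sketch; the exact command used for this upload is not recorded here, and flag names may vary slightly between mlx-lm releases:

```bash
# Convert the original weights to MLX format and quantize them to 4-bit
mlx_lm.convert --hf-path NTQAI/Nxcode-CQ-7B-orpo -q --q-bits 4
```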

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the 4-bit quantized model and its tokenizer from the Hub
model, tokenizer = load("imgrind/Nxcode-CQ-7B-orpo-mlx-4Bit")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
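
mlx-lm also ships a command-line entry point, so the same model can be queried without writing any Python. A minimal sketch, assuming a recent mlx-lm release that installs the `mlx_lm.generate` console script:

```bash
# Generate a response directly from the terminal
mlx_lm.generate --model imgrind/Nxcode-CQ-7B-orpo-mlx-4Bit --prompt "hello"
```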