N8Programs committed
Commit d659408 · verified · 1 Parent(s): e2a75b4

Create README.md

Files changed (1): README.md ADDED (+35, -0)
---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-30B-A3B
tags:
- mlx
---

# mlx-community/Qwen3-30B-A3B-4bit-DWQ

This model is a custom DWQ (distilled weight quantization) quant of [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B), made by distilling its 6-bit quantization into the 4-bit quantization.
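
For context, here is a minimal sketch of the idea behind DWQ: a higher-precision quant acts as a teacher whose next-token distribution the lower-precision student is trained to match on calibration text, by updating the student's quantization parameters. This is illustrative only, not the exact recipe used for this model, and the 6-bit/4-bit repo names below are assumptions.

```python
import mlx.core as mx
from mlx_lm import load

# Assumed repo names for the teacher (6-bit) and student (4-bit) quants.
teacher, tokenizer = load("mlx-community/Qwen3-30B-A3B-6bit")
student, _ = load("mlx-community/Qwen3-30B-A3B-4bit")

def log_softmax(x):
    # Numerically stable log-probabilities over the vocabulary axis.
    return x - mx.logsumexp(x, axis=-1, keepdims=True)

# A toy calibration batch; a real DWQ run uses many sequences.
tokens = mx.array(tokenizer.encode("The quick brown fox jumps over the lazy dog."))[None]

t_logp = log_softmax(teacher(tokens))
s_logp = log_softmax(student(tokens))

# KL(teacher || student): the distillation objective that would be minimized
# with respect to the student's quantization parameters.
kl = (mx.exp(t_logp) * (t_logp - s_logp)).sum(axis=-1).mean()
print(kl.item())
```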

## Use with mlx

```bash
pip install mlx-lm
```
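
For a quick check from the shell, `mlx-lm` also installs a `mlx_lm.generate` command-line entry point:

```bash
mlx_lm.generate --model mlx-community/Qwen3-30B-A3B-4bit-DWQ --prompt "hello"
```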

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen3-30B-A3B-4bit-DWQ")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
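
For interactive use, `mlx-lm` also provides `stream_generate`, which yields the response incrementally rather than waiting for the full completion. A minimal sketch; the `.text` field on the yielded chunks assumes a recent mlx-lm release:

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("mlx-community/Qwen3-30B-A3B-4bit-DWQ")

messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Print each chunk as soon as it is generated.
for chunk in stream_generate(model, tokenizer, prompt, max_tokens=512):
    print(chunk.text, end="", flush=True)
print()
```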