huihui-ai committed
Commit ff4f9fe · verified · 1 Parent(s): 73c0cf9

Update README.md

Files changed (1): README.md (+122 −116)
---
library_name: transformers
license: other
license_name: qwen
license_link: https://huggingface.co/huihui-ai/Qwen2.5-72B-Instruct-abliterated/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-72B-Instruct
tags:
- chat
- abliterated
- uncensored
---

# huihui-ai/Qwen2.5-72B-Instruct-abliterated

This is an uncensored version of [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) for details).
It is a crude, proof-of-concept implementation that removes refusals from an LLM without using TransformerLens.
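Abliteration can be illustrated at toy scale: estimate a "refusal direction" as the difference between mean hidden states on refused vs. answered prompts, then project that direction out of a weight matrix so the model can no longer write activations along it. The sketch below is purely hypothetical (tiny dimensions, random stand-ins for real hidden states); see the linked repository for the actual procedure.

```python
import math
import random

random.seed(0)
HIDDEN = 8  # toy hidden size; Qwen2.5-72B actually uses 8192

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def matvec(m, v):
    return [dot(row, v) for row in m]

# Random stand-ins for mean hidden states over two prompt sets
mean_refused = [random.gauss(0, 1) for _ in range(HIDDEN)]
mean_answered = [random.gauss(0, 1) for _ in range(HIDDEN)]

# Refusal direction: normalized difference of the means
diff = [a - b for a, b in zip(mean_refused, mean_answered)]
norm = math.sqrt(dot(diff, diff))
r = [x / norm for x in diff]

# Ablate the direction from a toy weight matrix: W' = (I - r r^T) W,
# so every output of W' is orthogonal to r
W = [[random.gauss(0, 1) for _ in range(HIDDEN)] for _ in range(HIDDEN)]
rTW = [dot(r, col) for col in zip(*W)]  # the row vector r^T W
W_abl = [[W[i][j] - r[i] * rTW[j] for j in range(HIDDEN)]
         for i in range(HIDDEN)]

# Any input now yields an output with (numerically) zero refusal component
x = [random.gauss(0, 1) for _ in range(HIDDEN)]
```

In the real model this projection is applied to the layers' output weights, removing the component along the refusal direction while leaving everything else intact.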

## ollama

You can use [huihui_ai/qwen2.5-abliterate:72b](https://ollama.com/huihui_ai/qwen2.5-abliterate:72b) directly:
```
ollama run huihui_ai/qwen2.5-abliterate:72b
```

## Usage
You can use this model in your applications by loading it with Hugging Face's `transformers` library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
model_name = "huihui-ai/Qwen2.5-72B-Instruct-abliterated"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Initialize conversation context
initial_messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."}
]
messages = initial_messages.copy()  # Copy the initial conversation context

# Enter conversation loop
while True:
    # Get user input
    user_input = input("User: ").strip()  # Strip leading and trailing spaces

    # If the user types '/exit', end the conversation
    if user_input.lower() == "/exit":
        print("Exiting chat.")
        break

    # If the user types '/clean', reset the conversation context
    if user_input.lower() == "/clean":
        messages = initial_messages.copy()  # Reset conversation context
        print("Chat history cleared. Starting a new conversation.")
        continue

    # If input is empty, prompt the user and continue
    if not user_input:
        print("Input cannot be empty. Please enter something.")
        continue

    # Add user input to the conversation
    messages.append({"role": "user", "content": user_input})

    # Build the chat template
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )

    # Tokenize input and prepare it for the model
    model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

    # Generate a response from the model
    generated_ids = model.generate(
        **model_inputs,
        max_new_tokens=8192
    )

    # Extract model output, removing special tokens
    generated_ids = [
        output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
    ]
    response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

    # Add the model's response to the conversation
    messages.append({"role": "assistant", "content": response})

    # Print the model's response
    print(f"Qwen: {response}")
```
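One caveat with the loop above: `messages` grows without bound, so a long chat will eventually overflow the model's context window. Below is a minimal sketch of one way to cap the history before calling `apply_chat_template`; the function name and the turn-count heuristic are illustrative, and a production version would budget by token count using the tokenizer instead.

```python
def trim_history(messages, max_turns=10):
    """Keep the system prompt plus only the most recent turns.

    `messages` is a list of {"role": ..., "content": ...} dicts whose
    first entry is the system prompt; `max_turns` caps the number of
    retained user/assistant messages (an illustrative heuristic).
    """
    system, rest = messages[:1], messages[1:]
    return system + rest[-max_turns:]

# Usage: trim before building the chat template
history = [{"role": "system", "content": "You are a helpful assistant."}]
for i in range(20):
    history.append({"role": "user", "content": f"question {i}"})
    history.append({"role": "assistant", "content": f"answer {i}"})

trimmed = trim_history(history, max_turns=6)  # system prompt + last 6 messages
```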

## Evaluations

![image/png](https://cdn-uploads.huggingface.co/production/uploads/66228dc4c8920ec3513dc81a/5X_nBBpH-oHeVD-h_dqoz.png)

Benchmark results from the [open-llm-leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/).