lbourdois committed
Commit a9de9f6 · verified · 1 Parent(s): 68f298d

Improve language tag


Hi! As the model is multilingual, this is a PR to add languages other than English to the `language` tag to improve referencing. Note that 29 languages are announced in the README, but only 13 are explicitly listed, so I was only able to add those 13 languages.
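For reference, once this change is merged, the updated tags can be read back programmatically. Here is a minimal sketch using the `huggingface_hub` library (the repo id and the expected codes are taken from the diff below):

```python
from huggingface_hub import ModelCard

# Load the model card and read the `language` field from its YAML front matter
card = ModelCard.load("huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2")
print(card.data.language)
# Expected after this PR: ['zho', 'eng', 'fra', 'spa', 'por', 'deu', 'ita',
#                          'rus', 'jpn', 'kor', 'vie', 'tha', 'ara']
```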

Files changed (1)
  1. README.md +120 -108
README.md CHANGED
@@ -1,108 +1,120 @@
- ---
- library_name: transformers
- license: apache-2.0
- license_link: https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2/blob/main/LICENSE
- language:
- - en
- pipeline_tag: text-generation
- base_model: Qwen/Qwen2.5-14B-Instruct
- tags:
- - chat
- - abliterated
- - uncensored
- ---
-
- # huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2
-
- This is an uncensored version of [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) created with abliteration (see [this article](https://huggingface.co/blog/mlabonne/abliteration) to learn more about it).
-
- Special thanks to [@FailSpy](https://huggingface.co/failspy) for the original code and technique. Please follow him if you're interested in abliterated models.
-
- **Important Note:** This version is an improvement over the previous [Qwen2.5-14B-Instruct-abliterated](https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated).
-
- ## ollama
-
- You can use [huihui_ai/qwen2.5-abliterate:14b](https://ollama.com/huihui_ai/qwen2.5-abliterate:14b) directly:
- ```
- ollama run huihui_ai/qwen2.5-abliterate:14b
- ```
-
- ## Usage
- You can use this model in your applications by loading it with Hugging Face's `transformers` library:
-
- ```python
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- # Load the model and tokenizer
- model_name = "huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2"
- model = AutoModelForCausalLM.from_pretrained(
-     model_name,
-     torch_dtype="auto",
-     device_map="auto"
- )
- tokenizer = AutoTokenizer.from_pretrained(model_name)
-
- # Initialize conversation context
- initial_messages = [
-     {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."}
- ]
- messages = initial_messages.copy()  # Copy the initial conversation context
-
- # Enter conversation loop
- while True:
-     # Get user input
-     user_input = input("User: ").strip()  # Strip leading and trailing spaces
-
-     # If the user types '/exit', end the conversation
-     if user_input.lower() == "/exit":
-         print("Exiting chat.")
-         break
-
-     # If the user types '/clean', reset the conversation context
-     if user_input.lower() == "/clean":
-         messages = initial_messages.copy()  # Reset conversation context
-         print("Chat history cleared. Starting a new conversation.")
-         continue
-
-     # If input is empty, prompt the user and continue
-     if not user_input:
-         print("Input cannot be empty. Please enter something.")
-         continue
-
-     # Add user input to the conversation
-     messages.append({"role": "user", "content": user_input})
-
-     # Build the chat template
-     text = tokenizer.apply_chat_template(
-         messages,
-         tokenize=False,
-         add_generation_prompt=True
-     )
-
-     # Tokenize input and prepare it for the model
-     model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
-
-     # Generate a response from the model
-     generated_ids = model.generate(
-         **model_inputs,
-         max_new_tokens=8192
-     )
-
-     # Extract model output, removing special tokens
-     generated_ids = [
-         output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
-     ]
-     response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
-
-     # Add the model's response to the conversation
-     messages.append({"role": "assistant", "content": response})
-
-     # Print the model's response
-     print(f"Qwen: {response}")
- ```
-
- ## Evaluations
- Evaluations are ongoing; results will be added later.
+ ---
+ library_name: transformers
+ license: apache-2.0
+ license_link: https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2/blob/main/LICENSE
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ pipeline_tag: text-generation
+ base_model: Qwen/Qwen2.5-14B-Instruct
+ tags:
+ - chat
+ - abliterated
+ - uncensored
+ ---
+
+ # huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2
+
+ This is an uncensored version of [Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) created with abliteration (see [this article](https://huggingface.co/blog/mlabonne/abliteration) to learn more about it).
+
+ Special thanks to [@FailSpy](https://huggingface.co/failspy) for the original code and technique. Please follow him if you're interested in abliterated models.
+
+ **Important Note:** This version is an improvement over the previous [Qwen2.5-14B-Instruct-abliterated](https://huggingface.co/huihui-ai/Qwen2.5-14B-Instruct-abliterated).
+
+ ## ollama
+
+ You can use [huihui_ai/qwen2.5-abliterate:14b](https://ollama.com/huihui_ai/qwen2.5-abliterate:14b) directly:
+ ```
+ ollama run huihui_ai/qwen2.5-abliterate:14b
+ ```
+
+ ## Usage
+ You can use this model in your applications by loading it with Hugging Face's `transformers` library:
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Load the model and tokenizer
+ model_name = "huihui-ai/Qwen2.5-14B-Instruct-abliterated-v2"
+ model = AutoModelForCausalLM.from_pretrained(
+     model_name,
+     torch_dtype="auto",
+     device_map="auto"
+ )
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+
+ # Initialize conversation context
+ initial_messages = [
+     {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."}
+ ]
+ messages = initial_messages.copy()  # Copy the initial conversation context
+
+ # Enter conversation loop
+ while True:
+     # Get user input
+     user_input = input("User: ").strip()  # Strip leading and trailing spaces
+
+     # If the user types '/exit', end the conversation
+     if user_input.lower() == "/exit":
+         print("Exiting chat.")
+         break
+
+     # If the user types '/clean', reset the conversation context
+     if user_input.lower() == "/clean":
+         messages = initial_messages.copy()  # Reset conversation context
+         print("Chat history cleared. Starting a new conversation.")
+         continue
+
+     # If input is empty, prompt the user and continue
+     if not user_input:
+         print("Input cannot be empty. Please enter something.")
+         continue
+
+     # Add user input to the conversation
+     messages.append({"role": "user", "content": user_input})
+
+     # Build the chat template
+     text = tokenizer.apply_chat_template(
+         messages,
+         tokenize=False,
+         add_generation_prompt=True
+     )
+
+     # Tokenize input and prepare it for the model
+     model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
+
+     # Generate a response from the model
+     generated_ids = model.generate(
+         **model_inputs,
+         max_new_tokens=8192
+     )
+
+     # Extract model output, removing special tokens
+     generated_ids = [
+         output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
+     ]
+     response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+
+     # Add the model's response to the conversation
+     messages.append({"role": "assistant", "content": response})
+
+     # Print the model's response
+     print(f"Qwen: {response}")
+ ```
+
+ ## Evaluations
+ Evaluations are ongoing; results will be added later.