---
license: mit
---
## Usage

### Chat format

> **IMPORTANT**: This model is **sensitive** to the chat template used. Make sure you format prompts exactly as follows:

```
<s>system
[System message]</s>
<s>user
[Your question or message]</s>
<s>assistant
[The model's response]</s>
```
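If you are not using `tokenizer.apply_chat_template` (shown below), the same layout can be assembled by hand. This is a minimal sketch; the `build_prompt` helper is illustrative, not part of the model's API, and for generation the prompt should end after the `<s>assistant` line so the model continues from there:

```python
def build_prompt(system: str, user: str) -> str:
    # Assemble the prompt in the exact <s>role ... </s> layout shown above.
    # The string ends right after "<s>assistant" so generation continues from there.
    return (
        f"<s>system\n{system}</s>\n"
        f"<s>user\n{user}</s>\n"
        f"<s>assistant\n"
    )

prompt = build_prompt("You are a helpful assistant.", "How long does a typical cycle last?")
print(prompt)
```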
### Example usage with Hugging Face Transformers

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Use the GPU if available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the model and tokenizer, then move the model to the chosen device
model = AutoModelForCausalLM.from_pretrained("adi2606/MenstrualQA").to(device)
tokenizer = AutoTokenizer.from_pretrained("adi2606/MenstrualQA")

def generate_response(message: str, temperature: float = 0.4, repetition_penalty: float = 1.1) -> str:
    """Generate a chatbot response for a single user message."""
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": message},
    ]
    # Apply the chat template and convert to PyTorch tensors
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(device)

    # Generate the response
    output = model.generate(
        input_ids,
        max_new_tokens=512,
        temperature=temperature,
        repetition_penalty=repetition_penalty,
        do_sample=True,
    )

    # Decode only the newly generated tokens, skipping the echoed prompt
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

# Example usage
message = "How can I relieve pain during menstruation?"
response = generate_response(message)
print(response)
```