Nuo Chen committed on

Commit 121bdf1 · verified · 1 Parent(s): 4d3b355

Update README.md

Files changed (1):
  1. README.md +52 -66

README.md CHANGED
@@ -17,11 +17,61 @@ XtraGPT is a series of LLMs for Human-AI Collaboration on Controllable Scientifi

  ## Requirements

- The code of Qwen2.5 has been in the latest Hugging face `transformers` and we advise you to use the latest version of `transformers`.

  ## Quickstart

- Here provides a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and how to generate contents.
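The Quickstart snippets in this diff assemble the refinement prompt inline from `paper_content`, `selected_content`, and `prompt`. As a sketch, the same wrapper format (taken verbatim from the diff) can be factored into a helper; the function name `build_refinement_prompt` is ours for illustration, not part of the README:

```python
def build_refinement_prompt(paper_content: str, selected_content: str, question: str) -> str:
    """Wrap the paper, the selected span, and the user question in the
    tag format used by the XtraGPT Quickstart snippets."""
    return (
        "Please improve the selected content based on the following. "
        "Act as an expert model for improving articles **PAPER_CONTENT**.\n"
        "The output needs to answer the **QUESTION** on **SELECTED_CONTENT** in the input. "
        "Avoid adding unnecessary length, unrelated details, overclaims, or vague statements.\n"
        "Focus on clear, concise, and evidence-based improvements that align "
        "with the overall context of the paper.\n"
        f"<PAPER_CONTENT> {paper_content} </PAPER_CONTENT>\n"
        f"<SELECTED_CONTENT> {selected_content} </SELECTED_CONTENT>\n"
        f"<QUESTION> {question} </QUESTION>\n"
    )

# Same inputs as the README snippet
content = build_refinement_prompt(
    paper_content="markdown",
    selected_content="After that, we define CAT-score to measure the matching "
                     "degree between the filtered attention matrix and the distance matrix.",
    question="help me redefine cat-score based on the context.",
)
print(content)
```

The resulting string can be passed as the single user message in either the `transformers` or the OpenAI-client snippet below.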
  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer
@@ -48,7 +98,6 @@ Finally, the CAT-score is used to interpret how CodePTMs attend code structure,
  selected_content="""
  After that, we define CAT-score to measure the matching degree between the filtered attention matrix and the distance matrix.
  """
-
  prompt ="""
  help me redefine cat-score based on the context.
  """
@@ -92,68 +141,5 @@ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
  ```

- ```python
- from openai import OpenAI
- model_name = "Xtra-Computing/XtraGPT-7B"
- client = OpenAI(
-     base_url="http://localhost:8088/v1",
-     api_key="sk-1234567890"
- )
-
- paper_content="""
- markdown
- """
- selected_content="""
- After that, we define CAT-score to measure the matching degree between the filtered attention matrix and the distance matrix.
- """
-
- prompt ="""
- help me redefine cat-score based on the context.
- """
-
- content = f"""
- Please improve the selected content based on the following. Act as an expert model for improving articles **PAPER_CONTENT**.\n
- The output needs to answer the **QUESTION** on **SELECTED_CONTENT** in the input. Avoid adding unnecessary length, unrelated details, overclaims, or vague statements.
- Focus on clear, concise, and evidence-based improvements that align with the overall context of the paper.\n
-
- <PAPER_CONTENT>
- {paper_content}
- </PAPER_CONTENT>\n
-
- <SELECTED_CONTENT>
- {selected_content}
- </SELECTED_CONTENT>\n
-
- <QUESTION>
- {prompt}
- </QUESTION>\n
- """
-
- response = client.chat.completions.create(
-     model="xtragpt",
-     messages=[{"role": "user", "content": content}],
-     temperature=0.7,
-     max_tokens=16384
- )
- print(response.choices[0].message.content)
- ```
-
- ## Citation
-
- If you find our work helpful, feel free to give us a cite.
-
- ```
- @misc{xtracomputing2025xtraqa,
-   title = {XtraQA},
-   url = {https://huggingface.co/Xtra-Computing/XtraGPT-7B},
-   author = {Xtra Computing Group},
-   year = {2025}
- }
- @article{xtracomputing2025xtragpt,
-   title={XtraGPT: LLMs for Human-AI Collaboration on Controllable Scientific Paper Refinement},
-   author={Xtra Computing Group},
-   journal={arXiv preprint arXiv:abcdefg},
-   year={2025}
- }
- ```
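The OpenAI-client snippets in this diff point at `http://localhost:8088/v1` with served model name `xtragpt`, but the README does not show how that endpoint is started. One common way to provide such an OpenAI-compatible endpoint (an assumption on our part, using vLLM; port, name, and key mirror the snippet) would be:

```shell
# Assumed launch command, not specified in the README: serve XtraGPT-7B
# through vLLM's OpenAI-compatible server on port 8088 under the name
# "xtragpt", matching the base_url, model, and api_key used in the snippet.
vllm serve Xtra-Computing/XtraGPT-7B \
  --port 8088 \
  --served-model-name xtragpt \
  --api-key sk-1234567890
```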

  ## Requirements

+ The code of XtraGPT is supported in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.

  ## Quickstart

+ ```python
+ from openai import OpenAI
+
+ model_name = "Xtra-Computing/XtraGPT-7B"
+ client = OpenAI(
+     base_url="http://localhost:8088/v1",
+     api_key="sk-1234567890"
+ )
+
+ paper_content = "markdown"
+ selected_content = "After that, we define CAT-score to measure the matching degree between the filtered attention matrix and the distance matrix."
+
+ prompt = "help me redefine cat-score based on the context."
+
+ content = f"""
+ Please improve the selected content based on the following. Act as an expert model for improving articles **PAPER_CONTENT**.\n
+ The output needs to answer the **QUESTION** on **SELECTED_CONTENT** in the input. Avoid adding unnecessary length, unrelated details, overclaims, or vague statements.
+ Focus on clear, concise, and evidence-based improvements that align with the overall context of the paper.\n
+ <PAPER_CONTENT> {paper_content} </PAPER_CONTENT>\n <SELECTED_CONTENT> {selected_content} </SELECTED_CONTENT>\n <QUESTION> {prompt} </QUESTION>\n
+ """
+
+ response = client.chat.completions.create(
+     model="xtragpt",
+     messages=[{"role": "user", "content": content}],
+     temperature=0.7,
+     max_tokens=16384
+ )
+ print(response.choices[0].message.content)
+ ```
+
+ ## Citation
+
+ If you find our work helpful, feel free to cite us.
+
+ ```
+ @misc{xtracomputing2025xtraqa,
+   title  = {XtraQA},
+   url    = {https://huggingface.co/Xtra-Computing/XtraGPT-7B},
+   author = {Xtra Computing Group},
+   year   = {2025}
+ }
+ @article{xtracomputing2025xtragpt,
+   title   = {XtraGPT: LLMs for Human-AI Collaboration on Controllable Scientific Paper Refinement},
+   author  = {Xtra Computing Group},
+   journal = {arXiv preprint arXiv:abcdefg},
+   year    = {2025}
+ }
+ ```

  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer
 
  selected_content="""
  After that, we define CAT-score to measure the matching degree between the filtered attention matrix and the distance matrix.
  """

  prompt ="""
  help me redefine cat-score based on the context.
  """

  ```
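For context on the `tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]` line quoted in the hunk header above: in this style of snippet, `generated_ids` is normally produced by first stripping the prompt tokens off each generated sequence. A minimal illustration of that trimming step with plain lists (no model or tokenizer needed; the variable names and token values are ours):

```python
# Toy stand-ins for the tensors in the README snippet: one prompt and the
# full sequence the model returned for it (prompt tokens + new tokens).
prompt_token_ids = [[101, 7, 42]]
full_output_ids = [[101, 7, 42, 900, 901, 902]]

# Keep only the newly generated tokens, mirroring the usual
# "output[len(input):]" trimming done before batch_decode.
generated_ids = [
    output[len(prompt):]
    for prompt, output in zip(prompt_token_ids, full_output_ids)
]

print(generated_ids)  # [[900, 901, 902]]
```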