ToastyPigeon committed
Commit afaadd6 · verified · 1 parent: b88ab40

Training in progress, step 47, checkpoint
.gitattributes CHANGED
@@ -34,3 +34,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
  tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ last-checkpoint/tokenizer.json filter=lfs diff=lfs merge=lfs -text
last-checkpoint/README.md ADDED
@@ -0,0 +1,202 @@
+ ---
+ base_model: THUDM/GLM-4-32B-0414
+ library_name: peft
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.15.2
last-checkpoint/adapter_config.json ADDED
@@ -0,0 +1,39 @@
+ {
+   "alpha_pattern": {},
+   "auto_mapping": null,
+   "base_model_name_or_path": "THUDM/GLM-4-32B-0414",
+   "bias": "none",
+   "corda_config": null,
+   "eva_config": null,
+   "exclude_modules": null,
+   "fan_in_fan_out": null,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layer_replication": null,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 16,
+   "lora_bias": false,
+   "lora_dropout": 0.25,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 16,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "up_proj",
+     "o_proj",
+     "k_proj",
+     "down_proj",
+     "q_proj",
+     "gate_proj",
+     "v_proj"
+   ],
+   "task_type": "CAUSAL_LM",
+   "trainable_token_indices": null,
+   "use_dora": false,
+   "use_rslora": false
+ }
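
The adapter above is a rank-16 LoRA (lora_alpha 16, dropout 0.25) over the seven attention and MLP projection matrices of GLM-4-32B-0414. Below is a minimal sketch, not part of the commit, of attaching this checkpoint for inference with PEFT; the local path `last-checkpoint` is illustrative.

```python
# A minimal sketch, assuming "last-checkpoint" is the directory from this commit.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# base_model_name_or_path from adapter_config.json above
base = AutoModelForCausalLM.from_pretrained(
    "THUDM/GLM-4-32B-0414", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "last-checkpoint")
tokenizer = AutoTokenizer.from_pretrained("last-checkpoint")
```

Since the saved config has `"inference_mode": true`, the adapter loads with its weights frozen; calling `model.merge_and_unload()` would fold them into the base weights for adapter-free serving.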
last-checkpoint/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c85daf97f910553f93f5e8469874a597fcce937b7a35a9d46b7beb1743ae0548
+ size 259932816
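
This three-line blob, like those for the binaries below, is a Git LFS pointer: the repository stores only the spec version, the SHA-256 OID, and the byte size, and `git lfs pull` fetches the actual payload. A minimal sketch of reading such a pointer, assuming only the key-value format shown above:

```python
# A minimal sketch: parse a Git LFS pointer (format shown above) into a dict
# with keys "version", "oid", and "size".
def parse_lfs_pointer(path: str) -> dict:
    with open(path) as f:
        return dict(line.strip().split(" ", 1) for line in f if line.strip())

ptr = parse_lfs_pointer("last-checkpoint/adapter_model.safetensors")
print(f"{int(ptr['size']) / 2**20:.1f} MiB")  # ~247.9 MiB of adapter weights
```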
last-checkpoint/optimizer.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0e2923bfa21757d379d312beadadd29647e2e253182ce6a43e25801cd24c8a70
+ size 520248073
last-checkpoint/pytorch_model_fsdp.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c4b2666f8d0cf90543161c7b8d96d6259f88ab8febd0fe96aa80faee312bfbbe
+ size 260079091
last-checkpoint/rng_state_0.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e526cd12c76e8840854e966ece8c8820e0f4618735f6e96f362d88a6365c1a60
+ size 14917
last-checkpoint/rng_state_1.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4b0effed55bc424df6d67efdeb209700514defb0a945060a56799c3dd8c9d777
+ size 14917
last-checkpoint/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c04bbc30d4c439b799a307b6abdd0261060113e7e6a8f5337b4393ae3fc4e682
+ size 1529
last-checkpoint/special_tokens_map.json ADDED
@@ -0,0 +1,32 @@
+ {
+   "additional_special_tokens": [
+     "<|endoftext|>",
+     "[MASK]",
+     "[gMASK]",
+     "[sMASK]",
+     "<sop>",
+     "<eop>",
+     "<|system|>",
+     "<|user|>",
+     "<|assistant|>",
+     "<|observation|>",
+     "<|begin_of_image|>",
+     "<|end_of_image|>",
+     "<|begin_of_video|>",
+     "<|end_of_video|>"
+   ],
+   "eos_token": {
+     "content": "<|user|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
last-checkpoint/tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:76ebeac0d8bd7879ead7b43c16b44981f277e47225de2bd7de9ae1a6cc664a8c
+ size 19966496
last-checkpoint/tokenizer_config.json ADDED
@@ -0,0 +1,146 @@
+ {
+   "added_tokens_decoder": {
+     "151329": {
+       "content": "<|endoftext|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151330": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151331": {
+       "content": "[gMASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151332": {
+       "content": "[sMASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151333": {
+       "content": "<sop>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151334": {
+       "content": "<eop>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151335": {
+       "content": "<|system|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151336": {
+       "content": "<|user|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151337": {
+       "content": "<|assistant|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151338": {
+       "content": "<|observation|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151339": {
+       "content": "<|begin_of_image|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151340": {
+       "content": "<|end_of_image|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151341": {
+       "content": "<|begin_of_video|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151342": {
+       "content": "<|end_of_video|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "additional_special_tokens": [
+     "<|endoftext|>",
+     "[MASK]",
+     "[gMASK]",
+     "[sMASK]",
+     "<sop>",
+     "<eop>",
+     "<|system|>",
+     "<|user|>",
+     "<|assistant|>",
+     "<|observation|>",
+     "<|begin_of_image|>",
+     "<|end_of_image|>",
+     "<|begin_of_video|>",
+     "<|end_of_video|>"
+   ],
+   "chat_template": "[gMASK]<sop>\n{%- if tools -%}\n<|system|>\n# 可用工具\n{% for tool in tools %}\n {%- set function = tool.function if tool.get(\"function\") else tool %}\n\n## {{ function.name }}\n\n{{ function | tojson(indent=4, ensure_ascii=False) }}\n在调用上述函数时,请使用 Json 格式表示调用的参数。\n{%- endfor %}\n{%- endif -%}\n\n{%- for msg in messages %}\n {%- if msg.role == 'system' %}\n<|system|>\n{{ msg.content }}\n {%- endif %}\n{%- endfor %}\n\n{%- for message in messages if message.role != 'system' %}\n {%- set role = message['role'] %}\n {%- set content = message['content'] %}\n {%- set meta = message.get(\"metadata\", \"\") %}\n\n {%- if role == 'user' %}\n<|user|>\n{{ content }}\n {%- elif role == 'assistant' and not meta %}\n<|assistant|>\n{{ content }}\n {%- elif role == 'assistant' and meta %}\n<|assistant|>{{ meta }}\n{{ content }}\n {%- elif role == 'observation' %}\n<|observation|>\n{{ content }}\n {%- endif %}\n{%- endfor %}\n{% if add_generation_prompt %}<|assistant|>{% endif %}",
+   "clean_up_tokenization_spaces": false,
+   "do_lower_case": false,
+   "eos_token": "<|user|>",
+   "extra_special_tokens": {},
+   "model_input_names": [
+     "input_ids",
+     "attention_mask"
+   ],
+   "model_max_length": 128000,
+   "pad_token": "<|endoftext|>",
+   "padding_side": "left",
+   "remove_space": false,
+   "tokenizer_class": "PreTrainedTokenizer"
+ }
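
The config above ships GLM-4's Jinja chat template (a `[gMASK]<sop>` prefix plus `<|system|>`/`<|user|>`/`<|assistant|>`/`<|observation|>` turn markers; the Chinese strings in the tool branch read "Available tools" and "When calling the functions above, express the arguments in JSON format") and sets `<|user|>` as the eos token, so generation stops where a new user turn would begin. A minimal sketch of rendering a prompt through the template, assuming the tokenizer is loaded from this checkpoint directory:

```python
# A minimal sketch: render a prompt with the chat_template from
# tokenizer_config.json above; "last-checkpoint" is the committed directory.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("last-checkpoint")
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Summarize Git LFS in one sentence."},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # starts with "[gMASK]<sop>" and ends with "<|assistant|>"
```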
last-checkpoint/trainer_state.json ADDED
@@ -0,0 +1,371 @@
+ {
+   "best_global_step": null,
+   "best_metric": null,
+   "best_model_checkpoint": null,
+   "epoch": 0.05079708186976493,
+   "eval_steps": 185,
+   "global_step": 47,
+   "is_hyper_param_search": false,
+   "is_local_process_zero": true,
+   "is_world_process_zero": true,
+   "log_history": [
+     {
+       "epoch": 0.0010807889759524452,
+       "grad_norm": 0.9461905360221863,
+       "learning_rate": 1.0000000000000002e-06,
+       "loss": 2.3777,
+       "step": 1
+     },
+     {
+       "epoch": 0.0010807889759524452,
+       "eval_loss": 2.449392318725586,
+       "eval_runtime": 439.9948,
+       "eval_samples_per_second": 0.227,
+       "eval_steps_per_second": 0.114,
+       "step": 1
+     },
+     {
+       "epoch": 0.0021615779519048904,
+       "grad_norm": 3.1373608112335205,
+       "learning_rate": 2.0000000000000003e-06,
+       "loss": 2.3452,
+       "step": 2
+     },
+     {
+       "epoch": 0.0032423669278573357,
+       "grad_norm": 1.4320189952850342,
+       "learning_rate": 3.0000000000000005e-06,
+       "loss": 2.3817,
+       "step": 3
+     },
+     {
+       "epoch": 0.004323155903809781,
+       "grad_norm": 0.6826081871986389,
+       "learning_rate": 4.000000000000001e-06,
+       "loss": 2.3642,
+       "step": 4
+     },
+     {
+       "epoch": 0.0054039448797622265,
+       "grad_norm": 1.0959327220916748,
+       "learning_rate": 5e-06,
+       "loss": 2.3326,
+       "step": 5
+     },
+     {
+       "epoch": 0.006484733855714671,
+       "grad_norm": 0.7357077598571777,
+       "learning_rate": 6.000000000000001e-06,
+       "loss": 2.3463,
+       "step": 6
+     },
+     {
+       "epoch": 0.007565522831667117,
+       "grad_norm": 1.5454062223434448,
+       "learning_rate": 7.000000000000001e-06,
+       "loss": 2.3717,
+       "step": 7
+     },
+     {
+       "epoch": 0.008646311807619562,
+       "grad_norm": 0.9271339178085327,
+       "learning_rate": 8.000000000000001e-06,
+       "loss": 2.4972,
+       "step": 8
+     },
+     {
+       "epoch": 0.009727100783572008,
+       "grad_norm": 3.468608856201172,
+       "learning_rate": 9e-06,
+       "loss": 2.4317,
+       "step": 9
+     },
+     {
+       "epoch": 0.010807889759524453,
+       "grad_norm": 1.2068188190460205,
+       "learning_rate": 1e-05,
+       "loss": 2.3907,
+       "step": 10
+     },
+     {
+       "epoch": 0.011888678735476898,
+       "grad_norm": 1.531826376914978,
+       "learning_rate": 9.999565004621828e-06,
+       "loss": 2.3206,
+       "step": 11
+     },
+     {
+       "epoch": 0.012969467711429343,
+       "grad_norm": 0.6466525197029114,
+       "learning_rate": 9.999129583288e-06,
+       "loss": 2.4577,
+       "step": 12
+     },
+     {
+       "epoch": 0.01405025668738179,
+       "grad_norm": 1.1580270528793335,
+       "learning_rate": 9.998693735372558e-06,
+       "loss": 2.4218,
+       "step": 13
+     },
+     {
+       "epoch": 0.015131045663334234,
+       "grad_norm": 0.8021056652069092,
+       "learning_rate": 9.998257460248313e-06,
+       "loss": 2.4092,
+       "step": 14
+     },
+     {
+       "epoch": 0.01621183463928668,
+       "grad_norm": 3.115946054458618,
+       "learning_rate": 9.997820757286844e-06,
+       "loss": 2.4307,
+       "step": 15
+     },
+     {
+       "epoch": 0.017292623615239124,
+       "grad_norm": 2.3331785202026367,
+       "learning_rate": 9.9973836258585e-06,
+       "loss": 2.4198,
+       "step": 16
+     },
+     {
+       "epoch": 0.01837341259119157,
+       "grad_norm": 0.889893651008606,
+       "learning_rate": 9.99694606533239e-06,
+       "loss": 2.3658,
+       "step": 17
+     },
+     {
+       "epoch": 0.019454201567144017,
+       "grad_norm": 1.715402364730835,
+       "learning_rate": 9.996508075076388e-06,
+       "loss": 2.4134,
+       "step": 18
+     },
+     {
+       "epoch": 0.02053499054309646,
+       "grad_norm": 2.0538547039031982,
+       "learning_rate": 9.996069654457121e-06,
+       "loss": 2.3369,
+       "step": 19
+     },
+     {
+       "epoch": 0.021615779519048906,
+       "grad_norm": 0.7021394968032837,
+       "learning_rate": 9.995630802839979e-06,
+       "loss": 2.3592,
+       "step": 20
+     },
+     {
+       "epoch": 0.022696568495001353,
+       "grad_norm": 1.9398308992385864,
+       "learning_rate": 9.995191519589096e-06,
+       "loss": 2.4132,
+       "step": 21
+     },
+     {
+       "epoch": 0.023777357470953796,
+       "grad_norm": 0.8852663636207581,
+       "learning_rate": 9.994751804067352e-06,
+       "loss": 2.4435,
+       "step": 22
+     },
+     {
+       "epoch": 0.024858146446906242,
+       "grad_norm": 1.335016131401062,
+       "learning_rate": 9.994311655636383e-06,
+       "loss": 2.4313,
+       "step": 23
+     },
+     {
+       "epoch": 0.025938935422858685,
+       "grad_norm": 0.6692914366722107,
+       "learning_rate": 9.993871073656562e-06,
+       "loss": 2.5476,
+       "step": 24
+     },
+     {
+       "epoch": 0.027019724398811132,
+       "grad_norm": 0.9945073127746582,
+       "learning_rate": 9.993430057487e-06,
+       "loss": 2.3929,
+       "step": 25
+     },
+     {
+       "epoch": 0.02810051337476358,
+       "grad_norm": 0.49665793776512146,
+       "learning_rate": 9.99298860648554e-06,
+       "loss": 2.3797,
+       "step": 26
+     },
+     {
+       "epoch": 0.02918130235071602,
+       "grad_norm": 0.8177111744880676,
+       "learning_rate": 9.992546720008768e-06,
+       "loss": 2.4478,
+       "step": 27
+     },
+     {
+       "epoch": 0.030262091326668468,
+       "grad_norm": 0.7002121806144714,
+       "learning_rate": 9.992104397411999e-06,
+       "loss": 2.4638,
+       "step": 28
+     },
+     {
+       "epoch": 0.03134288030262091,
+       "grad_norm": 0.9923571944236755,
+       "learning_rate": 9.991661638049263e-06,
+       "loss": 2.4516,
+       "step": 29
+     },
+     {
+       "epoch": 0.03242366927857336,
+       "grad_norm": 10.232260704040527,
+       "learning_rate": 9.991218441273328e-06,
+       "loss": 2.4958,
+       "step": 30
+     },
+     {
+       "epoch": 0.033504458254525804,
+       "grad_norm": 0.9542494416236877,
+       "learning_rate": 9.990774806435673e-06,
+       "loss": 2.4304,
+       "step": 31
+     },
+     {
+       "epoch": 0.03458524723047825,
+       "grad_norm": 0.958182692527771,
+       "learning_rate": 9.990330732886498e-06,
+       "loss": 2.3102,
+       "step": 32
+     },
+     {
+       "epoch": 0.0356660362064307,
+       "grad_norm": 1.6190742254257202,
+       "learning_rate": 9.989886219974718e-06,
+       "loss": 2.4043,
+       "step": 33
+     },
+     {
+       "epoch": 0.03674682518238314,
+       "grad_norm": 1.284880518913269,
+       "learning_rate": 9.989441267047957e-06,
+       "loss": 2.456,
+       "step": 34
+     },
+     {
+       "epoch": 0.03782761415833558,
+       "grad_norm": 0.9109122157096863,
+       "learning_rate": 9.988995873452545e-06,
+       "loss": 2.2928,
+       "step": 35
+     },
+     {
+       "epoch": 0.03890840313428803,
+       "grad_norm": 0.9100764989852905,
+       "learning_rate": 9.988550038533524e-06,
+       "loss": 2.3849,
+       "step": 36
+     },
+     {
+       "epoch": 0.039989192110240476,
+       "grad_norm": 7.713947772979736,
+       "learning_rate": 9.988103761634633e-06,
+       "loss": 2.4167,
+       "step": 37
+     },
+     {
+       "epoch": 0.04106998108619292,
+       "grad_norm": 3.86708402633667,
+       "learning_rate": 9.987657042098305e-06,
+       "loss": 2.327,
+       "step": 38
+     },
+     {
+       "epoch": 0.04215077006214537,
+       "grad_norm": 6.0242133140563965,
+       "learning_rate": 9.98720987926567e-06,
+       "loss": 2.1308,
+       "step": 39
+     },
+     {
+       "epoch": 0.04323155903809781,
+       "grad_norm": 0.6192132234573364,
+       "learning_rate": 9.98676227247656e-06,
+       "loss": 2.2282,
+       "step": 40
+     },
+     {
+       "epoch": 0.044312348014050255,
+       "grad_norm": 0.8091052770614624,
+       "learning_rate": 9.98631422106948e-06,
+       "loss": 2.4306,
+       "step": 41
+     },
+     {
+       "epoch": 0.045393136990002705,
+       "grad_norm": 0.5277882814407349,
+       "learning_rate": 9.985865724381627e-06,
+       "loss": 2.5304,
+       "step": 42
+     },
+     {
+       "epoch": 0.04647392596595515,
+       "grad_norm": 2.935492992401123,
+       "learning_rate": 9.985416781748882e-06,
+       "loss": 2.407,
+       "step": 43
+     },
+     {
+       "epoch": 0.04755471494190759,
+       "grad_norm": 0.41435685753822327,
+       "learning_rate": 9.984967392505804e-06,
+       "loss": 2.5168,
+       "step": 44
+     },
+     {
+       "epoch": 0.048635503917860035,
+       "grad_norm": 0.6153848171234131,
+       "learning_rate": 9.984517555985624e-06,
+       "loss": 2.4032,
+       "step": 45
+     },
+     {
+       "epoch": 0.049716292893812485,
+       "grad_norm": 0.7048211097717285,
+       "learning_rate": 9.984067271520248e-06,
+       "loss": 2.5226,
+       "step": 46
+     },
+     {
+       "epoch": 0.05079708186976493,
+       "grad_norm": 0.508848249912262,
+       "learning_rate": 9.983616538440251e-06,
+       "loss": 2.3653,
+       "step": 47
+     }
+   ],
+   "logging_steps": 1,
+   "max_steps": 1850,
+   "num_input_tokens_seen": 0,
+   "num_train_epochs": 2,
+   "save_steps": 47,
+   "stateful_callbacks": {
+     "TrainerControl": {
+       "args": {
+         "should_epoch_stop": false,
+         "should_evaluate": false,
+         "should_log": false,
+         "should_save": true,
+         "should_training_stop": false
+       },
+       "attributes": {}
+     }
+   },
+   "total_flos": 1.2779097730895053e+18,
+   "train_batch_size": 1,
+   "trial_name": null,
+   "trial_params": null
+ }
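
The state above logs every step (`logging_steps: 1`) plus the step-1 eval, and this save lands exactly on `save_steps: 47` of `max_steps: 1850`, i.e. about 2.5% of the planned two epochs. A minimal sketch of summarizing the file, assuming it is read from the committed checkpoint directory:

```python
# A minimal sketch: summarize progress from the trainer_state.json above.
import json

with open("last-checkpoint/trainer_state.json") as f:
    state = json.load(f)

train_logs = [e for e in state["log_history"] if "loss" in e]  # skip eval entries
print(
    f"step {state['global_step']}/{state['max_steps']} "
    f"({state['global_step'] / state['max_steps']:.1%}), "
    f"epoch {state['epoch']:.4f}, last train loss {train_logs[-1]['loss']}"
)
# -> step 47/1850 (2.5%), epoch 0.0508, last train loss 2.3653
```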
last-checkpoint/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:61fbda8f4d49945647ff922ff233377c45213a41e932d6f2f22126893384bd1e
+ size 7569