taronklm committed · verified
Commit c1cac3b · 1 Parent(s): c58df6e

taronklm/Qwen2.5-0.5B-Instruct-chatbot

Files changed (3):
1. README.md +50 -51
2. model.safetensors +1 -1
3. training_args.bin +1 -1
README.md CHANGED
@@ -1,61 +1,60 @@
 ---
-base_model: Qwen/Qwen2.5-0.5B-Instruct
-datasets:
-- generator
-library_name: peft
+library_name: transformers
 license: apache-2.0
+base_model: Qwen/Qwen2.5-0.5B-Instruct
 tags:
 - trl
 - sft
 - generated_from_trainer
+datasets:
+- generator
 model-index:
 - name: trained_model
   results: []
 ---
-
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
-# trained_model
-
-This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the generator dataset.
-
-## Model description
-
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
-
-## Training procedure
-
-### Training hyperparameters
-
-The following hyperparameters were used during training:
-- learning_rate: 0.0002
-- train_batch_size: 2
-- eval_batch_size: 8
-- seed: 42
-- gradient_accumulation_steps: 8
-- total_train_batch_size: 16
-- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
-- lr_scheduler_type: linear
-- num_epochs: 1
-- mixed_precision_training: Native AMP
-
-### Training results
-
-
-
-### Framework versions
-
-- PEFT 0.13.0
-- Transformers 4.45.1
-- Pytorch 2.5.1+cpu
-- Datasets 3.0.1
-- Tokenizers 0.20.0
+
+<!-- This model card has been generated automatically according to the information the Trainer had access to. You
+should probably proofread and complete it, then remove this comment. -->
+
+# trained_model
+
+This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the generator dataset.
+
+## Model description
+
+More information needed
+
+## Intended uses & limitations
+
+More information needed
+
+## Training and evaluation data
+
+More information needed
+
+## Training procedure
+
+### Training hyperparameters
+
+The following hyperparameters were used during training:
+- learning_rate: 0.0002
+- train_batch_size: 4
+- eval_batch_size: 8
+- seed: 42
+- gradient_accumulation_steps: 4
+- total_train_batch_size: 16
+- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- lr_scheduler_type: linear
+- num_epochs: 1
+- mixed_precision_training: Native AMP
+
+### Training results
+
+
+
+### Framework versions
+
+- Transformers 4.45.1
+- Pytorch 2.5.1+cpu
+- Datasets 3.0.1
+- Tokenizers 0.20.0
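The substantive change in the card body is the batch configuration: per-device train batch size rises from 2 to 4 while gradient accumulation drops from 8 to 4, so the effective batch size of 16 is unchanged (2 × 8 = 4 × 4 = 16). The metadata also switches library_name from peft to transformers and drops PEFT from the framework versions, consistent with publishing full model weights rather than a LoRA adapter. As a hedged illustration — assuming a standard transformers Trainer setup, not the author's actual script — the listed values map onto TrainingArguments like this:

```python
# Minimal sketch (assumed, not the author's training script): the updated
# card's hyperparameters expressed as a transformers TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="trained_model",      # hypothetical; name taken from model-index
    learning_rate=2e-4,
    per_device_train_batch_size=4,   # was 2 before this commit
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,   # was 8 before this commit
    num_train_epochs=1,
    lr_scheduler_type="linear",
    seed=42,
    # fp16=True,                     # card reports "Native AMP"; requires a GPU
)

# On a single device the effective batch size is unchanged by this commit:
# per_device_train_batch_size * gradient_accumulation_steps = 4 * 4 = 16.
```

The card's optimizer line (Adam with betas=(0.9,0.999), epsilon=1e-08) matches the Trainer's default AdamW settings, so no optimizer argument is needed in this sketch.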
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5566f89d7bb6399f8a7d2fee2a91e2c2fd3ad7f2bd0a39137e6dbd33939b14e0
+oid sha256:5fc0a24dce134d9086d64b21071e6dc2df5e075c250426c99b98eba31a13f8fa
 size 1976163472
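The binary files are stored through Git LFS, so the diff only touches the pointer file: the oid sha256 line is the content hash of the actual payload and size is its byte count. The size (1,976,163,472 bytes) is unchanged, which fits a ~0.5B-parameter model serialized in full precision; only the hash moves because the weights were retrained. A minimal verification sketch (an assumed workflow, not part of the repo):

```python
# Sketch (assumed workflow): recompute the SHA-256 of the downloaded weights
# and compare it with the oid recorded in the updated LFS pointer.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks so a ~2 GB file never sits in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# oid from the updated pointer in this commit
EXPECTED = "5fc0a24dce134d9086d64b21071e6dc2df5e075c250426c99b98eba31a13f8fa"

if sha256_of("model.safetensors") != EXPECTED:
    raise ValueError("model.safetensors does not match the LFS pointer oid")
```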
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:0030c1f3113bfe9bb995cfe03f520eacfc94f7be11d72eea85ca548cef83468b
+oid sha256:b93612e5f080373abe2115f82df55e7a6633a575113ed2345880f59464cec355
 size 5368
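The same pattern applies to training_args.bin: identical 5,368-byte size, new hash, reflecting the changed batch settings. Since the file is a torch-pickled TrainingArguments object, it can be inspected directly — a sketch, assuming you trust the repository, since pickle deserialization can execute arbitrary code:

```python
# Sketch: training_args.bin is a torch-pickled TrainingArguments object,
# so load it only from a source you trust (pickle can execute code).
import torch

args = torch.load("training_args.bin", weights_only=False)
print(args.per_device_train_batch_size)   # expected: 4 after this commit
print(args.gradient_accumulation_steps)   # expected: 4 after this commit
```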