Commit 8453dfd by BDAIO · verified · 1 Parent(s): 791fea9

End of training

Files changed (2):
  1. README.md +10 -24
  2. model.safetensors +2 -2
README.md CHANGED
@@ -3,11 +3,6 @@ license: apache-2.0
 base_model: google-bert/bert-base-multilingual-uncased
 tags:
 - generated_from_trainer
-metrics:
-- accuracy
-- precision
-- recall
-- f1
 model-index:
 - name: NLP_whole_dataseet_
   results: []
@@ -20,11 +15,16 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [google-bert/bert-base-multilingual-uncased](https://huggingface.co/google-bert/bert-base-multilingual-uncased) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.0925
-- Accuracy: 0.9817
-- Precision: 0.9797
-- Recall: 0.9828
-- F1: 0.9809
+- eval_loss: 0.0225
+- eval_accuracy: 0.9954
+- eval_precision: 0.9951
+- eval_recall: 0.9960
+- eval_f1: 0.9955
+- eval_runtime: 0.792
+- eval_samples_per_second: 275.256
+- eval_steps_per_second: 8.839
+- epoch: 5.0
+- step: 275
 
 ## Model description
 
@@ -51,20 +51,6 @@ The following hyperparameters were used during training:
 - lr_scheduler_type: cosine
 - num_epochs: 8
 
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
-|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
-| 0.3209        | 1.0   | 55   | 0.2928          | 0.9037   | 0.9020    | 0.8929 | 0.8950 |
-| 0.1962        | 2.0   | 110  | 0.1979          | 0.9450   | 0.9447    | 0.9353 | 0.9387 |
-| 0.2778        | 3.0   | 165  | 0.1383          | 0.9587   | 0.9530    | 0.9627 | 0.9560 |
-| 0.2216        | 4.0   | 220  | 0.1156          | 0.9679   | 0.9667    | 0.9640 | 0.9652 |
-| 0.2203        | 5.0   | 275  | 0.1061          | 0.9817   | 0.9797    | 0.9828 | 0.9809 |
-| 0.1948        | 6.0   | 330  | 0.0967          | 0.9817   | 0.9797    | 0.9828 | 0.9809 |
-| 0.2017        | 7.0   | 385  | 0.0902          | 0.9817   | 0.9797    | 0.9828 | 0.9809 |
-| 0.2384        | 8.0   | 440  | 0.0925          | 0.9817   | 0.9797    | 0.9828 | 0.9809 |
-
-
 ### Framework versions
 
 - Transformers 4.42.4
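
The hyperparameter hunk above only exposes `lr_scheduler_type: cosine` and `num_epochs: 8`; the rest of the training setup is outside the shown context. The sketch below maps just those two values onto `transformers.TrainingArguments`; the output directory, batch size, and learning rate are placeholders, not values taken from the card.

```python
# Hedged sketch of the scheduler/epoch settings visible in the diff above.
# Only lr_scheduler_type and num_epochs come from the card; everything else
# is a placeholder assumption.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="NLP_whole_dataseet_",   # placeholder output directory
    num_train_epochs=8,                  # from the card: num_epochs: 8
    lr_scheduler_type="cosine",          # from the card: lr_scheduler_type: cosine
    per_device_train_batch_size=16,      # placeholder, not stated in the shown hunk
    learning_rate=2e-5,                  # placeholder, not stated in the shown hunk
)
```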
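
Because the card reports only classification metrics (accuracy, precision, recall, F1), the checkpoint is presumably a sequence-classification fine-tune of bert-base-multilingual-uncased. A minimal inference sketch follows; the repo id `BDAIO/NLP_whole_dataseet_` is inferred from the committer and model name and is an assumption, as is the classification head.

```python
# Minimal inference sketch for the checkpoint described in this card.
# Assumptions: sequence-classification head (the card does not state the task),
# and the hypothetical repo id "BDAIO/NLP_whole_dataseet_".
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "BDAIO/NLP_whole_dataseet_"  # assumed repo id, not confirmed by the card

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

inputs = tokenizer("Example sentence to classify.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label.get(predicted_id, predicted_id))
```
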
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b307dab050a02c894fd48d5f1e2e01d6ea18e59a73079c0d5ba549193b4795a3
-size 2029642140
+oid sha256:a1b038790c04772b50d2b95cd38bba498954cfec839f37d4738363a34f675842
+size 2028069268
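
The model.safetensors change only swaps the Git LFS pointer (new sha256 oid and size). A small sketch of verifying a locally downloaded copy against that pointer, assuming the file sits in the current directory; the local path is a placeholder.

```python
# Sketch: check that a downloaded model.safetensors matches the new LFS pointer
# in this commit (oid and size are taken from the diff above; the path is assumed).
import hashlib
from pathlib import Path

EXPECTED_SHA256 = "a1b038790c04772b50d2b95cd38bba498954cfec839f37d4738363a34f675842"
EXPECTED_SIZE = 2028069268  # bytes, from the pointer file

path = Path("model.safetensors")  # placeholder path to the downloaded weights
assert path.stat().st_size == EXPECTED_SIZE, "size mismatch"

digest = hashlib.sha256()
with path.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)
assert digest.hexdigest() == EXPECTED_SHA256, "hash mismatch"
print("model.safetensors matches the LFS pointer")
```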