floflodebilbao committed on
Commit 7af1627 · verified · 1 Parent(s): fa3cd31

End of training
README.md CHANGED
@@ -22,21 +22,21 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on an unknown dataset.
 It achieves the following results on the evaluation set:
- - Loss: 2.8523
- - Rouge1: 0.306
- - Rouge2: 0.1081
- - Rougel: 0.2447
- - Rougelsum: 0.2447
- - Gen Len: 20.8533
- - Bleu: 0.05
- - Precisions: 0.1248
- - Brevity Penalty: 0.6089
- - Length Ratio: 0.6684
- - Translation Length: 2407.0
+ - Loss: 2.2405
+ - Rouge1: 0.3564
+ - Rouge2: 0.1444
+ - Rougel: 0.296
+ - Rougelsum: 0.2951
+ - Gen Len: 20.96
+ - Bleu: 0.0646
+ - Precisions: 0.1586
+ - Brevity Penalty: 0.5945
+ - Length Ratio: 0.6579
+ - Translation Length: 2369.0
 - Reference Length: 3601.0
- - Precision: 0.8812
- - Recall: 0.8701
- - F1: 0.8756
+ - Precision: 0.892
+ - Recall: 0.8776
+ - F1: 0.8846
 - Hashcode: roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1)
 
 ## Model description
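The evaluation block above mixes three metric families: ROUGE, BLEU (with its precisions, brevity penalty, and length statistics), and BERTScore (the Precision/Recall/F1 triple, whose hashcode names roberta-large, layer 17, no idf). The output keys match the Hugging Face `evaluate` library, so a minimal sketch of how such numbers can be reproduced looks like the following; the predictions and references are hypothetical placeholders, not data from this run:

```python
# Hedged sketch: computes the metric families reported in the card with the
# Hugging Face `evaluate` library. The example texts are placeholders.
import evaluate

predictions = ["a short generated summary"]  # hypothetical model outputs
references = ["the gold reference summary"]  # hypothetical references

rouge = evaluate.load("rouge")
print(rouge.compute(predictions=predictions, references=references))
# -> {"rouge1": ..., "rouge2": ..., "rougeL": ..., "rougeLsum": ...}

bleu = evaluate.load("bleu")
print(bleu.compute(predictions=predictions, references=references))
# -> bleu, precisions (four n-gram values; the card flattens them to one
#    number), brevity_penalty, length_ratio, translation_length,
#    reference_length

bertscore = evaluate.load("bertscore")
# lang="en" defaults to roberta-large (layer 17, no idf), which matches the
# hashcode reported in the card.
print(bertscore.compute(predictions=predictions, references=references, lang="en"))
# -> per-example precision/recall/f1 lists plus the hashcode string
```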
@@ -64,17 +64,22 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 16
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
- - num_epochs: 5
+ - num_epochs: 10
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Bleu | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length | Precision | Recall | F1 | Hashcode |
 |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|:------:|:----------:|:---------------:|:------------:|:------------------:|:----------------:|:---------:|:------:|:------:|:---------------------------------------------------------:|
- | No log | 1.0 | 19 | 4.1851 | 0.2835 | 0.088 | 0.2261 | 0.2267 | 20.6467 | 0.04 | 0.108 | 0.6311 | 0.6848 | 2466.0 | 3601.0 | 0.8845 | 0.8678 | 0.876 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | No log | 2.0 | 38 | 3.2829 | 0.2809 | 0.0891 | 0.2258 | 0.2252 | 20.64 | 0.041 | 0.1083 | 0.6093 | 0.6687 | 2408.0 | 3601.0 | 0.8795 | 0.8661 | 0.8726 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | No log | 3.0 | 57 | 3.0226 | 0.2883 | 0.096 | 0.2293 | 0.2302 | 20.7867 | 0.0454 | 0.1151 | 0.6093 | 0.6687 | 2408.0 | 3601.0 | 0.88 | 0.8682 | 0.874 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | No log | 4.0 | 76 | 2.8930 | 0.301 | 0.1068 | 0.2374 | 0.2371 | 20.78 | 0.049 | 0.1246 | 0.6017 | 0.6631 | 2388.0 | 3601.0 | 0.8817 | 0.8699 | 0.8757 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
- | No log | 5.0 | 95 | 2.8523 | 0.306 | 0.1081 | 0.2447 | 0.2447 | 20.8533 | 0.05 | 0.1248 | 0.6089 | 0.6684 | 2407.0 | 3601.0 | 0.8812 | 0.8701 | 0.8756 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | No log | 1.0 | 19 | 4.0849 | 0.2835 | 0.0826 | 0.2239 | 0.2243 | 20.5333 | 0.0374 | 0.105 | 0.6263 | 0.6812 | 2453.0 | 3601.0 | 0.8837 | 0.8674 | 0.8754 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | No log | 2.0 | 38 | 3.1922 | 0.2787 | 0.0853 | 0.2229 | 0.2223 | 20.7667 | 0.0396 | 0.1061 | 0.6161 | 0.6737 | 2426.0 | 3601.0 | 0.8767 | 0.8646 | 0.8705 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | No log | 3.0 | 57 | 2.8716 | 0.292 | 0.098 | 0.2328 | 0.2329 | 20.84 | 0.0461 | 0.1181 | 0.6082 | 0.6679 | 2405.0 | 3601.0 | 0.8812 | 0.8688 | 0.8749 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | No log | 4.0 | 76 | 2.6529 | 0.3166 | 0.1192 | 0.2573 | 0.2566 | 20.92 | 0.0576 | 0.1366 | 0.6104 | 0.6695 | 2411.0 | 3601.0 | 0.8861 | 0.8725 | 0.8792 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | No log | 5.0 | 95 | 2.5101 | 0.3441 | 0.1353 | 0.282 | 0.2813 | 20.94 | 0.0613 | 0.1495 | 0.593 | 0.6568 | 2365.0 | 3601.0 | 0.8895 | 0.8763 | 0.8828 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | No log | 6.0 | 114 | 2.3985 | 0.3501 | 0.1415 | 0.2909 | 0.2912 | 20.92 | 0.0616 | 0.1514 | 0.5983 | 0.6606 | 2379.0 | 3601.0 | 0.8913 | 0.8771 | 0.884 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | No log | 7.0 | 133 | 2.3215 | 0.3557 | 0.1398 | 0.295 | 0.2949 | 20.9667 | 0.0608 | 0.1545 | 0.593 | 0.6568 | 2365.0 | 3601.0 | 0.8919 | 0.8775 | 0.8846 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | No log | 8.0 | 152 | 2.2783 | 0.3494 | 0.1417 | 0.2922 | 0.2918 | 20.9333 | 0.0637 | 0.1561 | 0.588 | 0.6532 | 2352.0 | 3601.0 | 0.8907 | 0.8769 | 0.8837 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | No log | 9.0 | 171 | 2.2467 | 0.3566 | 0.145 | 0.297 | 0.2968 | 20.96 | 0.0649 | 0.1591 | 0.5926 | 0.6565 | 2364.0 | 3601.0 | 0.8921 | 0.8775 | 0.8846 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
+ | No log | 10.0 | 190 | 2.2405 | 0.3564 | 0.1444 | 0.296 | 0.2951 | 20.96 | 0.0646 | 0.1586 | 0.5945 | 0.6579 | 2369.0 | 3601.0 | 0.892 | 0.8776 | 0.8846 | roberta-large_L17_no-idf_version=0.3.12(hug_trans=4.53.1) |
 
 
 ### Framework versions
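The hyperparameters listed in this hunk map directly onto `transformers` training arguments. Below is a hedged sketch consistent with what the diff shows (num_epochs goes from 5 to 10 in this commit); the learning rate, per-device batch size, and gradient-accumulation split are assumptions, since only the total train batch size of 16 appears here:

```python
# Hedged sketch of training arguments consistent with the card; values marked
# "assumption" are not shown in this diff.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="led-base-16384-finetuned",  # hypothetical output path
    per_device_train_batch_size=4,          # assumption: 4 * 4 accumulation = 16 total
    gradient_accumulation_steps=4,          # assumption: only the total of 16 is shown
    learning_rate=5e-5,                     # assumption: not visible in this hunk
    optim="adamw_torch",                    # OptimizerNames.ADAMW_TORCH
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,                    # raised from 5 in this commit
    predict_with_generate=True,             # assumption: needed for ROUGE/BLEU eval
)
```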
 
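Since the card describes a fine-tuned allenai/led-base-16384 summarization checkpoint with an average generation length of about 21 tokens, inference follows the standard LED seq2seq pattern. A minimal sketch; the repo id is a placeholder, as the actual model id is not part of this diff:

```python
# Hedged inference sketch for an LED seq2seq checkpoint; repo_id is a placeholder.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo_id = "username/led-base-16384-finetuned"  # placeholder, not the real repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

text = "A long document to summarize ..."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=16384)

# LED expects global attention on at least the first token (see the LED docs).
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

# Gen Len above averages ~21 tokens, so generated summaries are short.
summary_ids = model.generate(
    **inputs, global_attention_mask=global_attention_mask, max_new_tokens=64
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```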
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:7f25a9d46079d13bf10debf56a2d332713ecff3973ac54c026c745fa1de0a556
+ oid sha256:47fe32bdfe1a6c39fe6e1a648757f87bbdcc40f003873b9f19d681c1ea0eaafe
 size 647614116
runs/Jul08_12-15-25_tardis/events.out.tfevents.1751969726.tardis.16633.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ca91902a62fe9b3bba77f0d6a517deeca1f0b0bc74530e7057028b7c335d0bdd
+ size 5609
runs/Jul08_12-17-37_tardis/events.out.tfevents.1751969858.tardis.16793.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:af2b0fe34b1dd9af239618d0921effb3ea604a62a58d65c5c700547e1dfd691a
+ size 17363
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:69962b27d31c44fea3d3a216cba8870d88cdabf004173350109864476f926475
+ oid sha256:48089e91074ad3d1b0252b19ff9a4475c377591e49ea8005c801bea3b4bc7040
 size 5905