---
library_name: transformers
license: mit
base_model: microsoft/git-large-r-coco
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: git-large-r-coco-IDB_ADv1_COCO
  results: []
---
# git-large-r-coco-IDB_ADv1_COCO
This model is a fine-tuned version of [microsoft/git-large-r-coco](https://huggingface.co/microsoft/git-large-r-coco) on the imagefolder dataset. It achieves the following results on the evaluation set:

- Loss: 0.1340
- METEOR: 0.5103395242652348
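
As a quick-start sketch, the checkpoint can be loaded with the standard GIT captioning pattern from `transformers`. The Hub repo id below is a placeholder (the card does not state the published namespace), and the example image URL is illustrative only:

```python
from PIL import Image
import requests
from transformers import AutoProcessor, AutoModelForCausalLM

# Placeholder repo id; substitute the actual Hub path of this checkpoint.
checkpoint = "your-namespace/git-large-r-coco-IDB_ADv1_COCO"

processor = AutoProcessor.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Any RGB image works; this COCO validation image is just an example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# GIT generates a caption conditioned only on the image pixels.
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```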
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 1024
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 200
- mixed_precision_training: Native AMP
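
The hyperparameters above map onto `transformers.TrainingArguments` as in the sketch below. `output_dir` is a placeholder, and `fp16=True` is an assumption for the "Native AMP" setting (bf16 would also satisfy "Native AMP"):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="git-large-r-coco-IDB_ADv1_COCO",  # placeholder path
    learning_rate=3e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    gradient_accumulation_steps=8,  # 128 * 8 = 1024 total train batch size
    num_train_epochs=200,
    lr_scheduler_type="cosine",
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
    fp16=True,  # assumption: "Native AMP" mixed precision as fp16
)
```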
### Training results
| Training Loss | Epoch | Step | Validation Loss | METEOR |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8755 | 5.0 | 5 | 4.6208 | 0.41234022080924504 |
| 4.577 | 10.0 | 10 | 4.2632 | 0.462553330633559 |
| 4.228 | 15.0 | 15 | 3.9351 | 0.4637035445872813 |
| 3.8968 | 20.0 | 20 | 3.6117 | 0.4716545164583618 |
| 3.5731 | 25.0 | 25 | 3.2943 | 0.4775515760416854 |
| 3.2551 | 30.0 | 30 | 2.9844 | 0.4827646049987953 |
| 2.9433 | 35.0 | 35 | 2.6819 | 0.4820540318646651 |
| 2.6406 | 40.0 | 40 | 2.3893 | 0.48387008867521647 |
| 2.348 | 45.0 | 45 | 2.1093 | 0.48688764217538394 |
| 2.0685 | 50.0 | 50 | 1.8438 | 0.48840003275775357 |
| 1.8052 | 55.0 | 55 | 1.5954 | 0.49229450416352066 |
| 1.5592 | 60.0 | 60 | 1.3681 | 0.49462336346473573 |
| 1.3335 | 65.0 | 65 | 1.1642 | 0.4943789886645904 |
| 1.1308 | 70.0 | 70 | 0.9838 | 0.4932081022324161 |
| 0.9511 | 75.0 | 75 | 0.8281 | 0.4949580448605414 |
| 0.7953 | 80.0 | 80 | 0.6959 | 0.4945236890902709 |
| 0.6629 | 85.0 | 85 | 0.5849 | 0.49613363555493917 |
| 0.5518 | 90.0 | 90 | 0.4970 | 0.49476521946537905 |
| 0.4599 | 95.0 | 95 | 0.4203 | 0.49694213467111825 |
| 0.3856 | 100.0 | 100 | 0.3609 | 0.5023234677593583 |
| 0.3248 | 105.0 | 105 | 0.3137 | 0.49291655975794224 |
| 0.2756 | 110.0 | 110 | 0.2757 | 0.49187607478517975 |
| 0.236 | 115.0 | 115 | 0.2432 | 0.4999076360911653 |
| 0.2039 | 120.0 | 120 | 0.2196 | 0.5054381333125716 |
| 0.1782 | 125.0 | 125 | 0.2006 | 0.4998272338605217 |
| 0.1576 | 130.0 | 130 | 0.1864 | 0.5095656338179543 |
| 0.1411 | 135.0 | 135 | 0.1755 | 0.5030929069103355 |
| 0.1279 | 140.0 | 140 | 0.1653 | 0.5097532440481348 |
| 0.1173 | 145.0 | 145 | 0.1576 | 0.5126165420799782 |
| 0.1088 | 150.0 | 150 | 0.1516 | 0.5168283983418568 |
| 0.1023 | 155.0 | 155 | 0.1462 | 0.5145210432669091 |
| 0.097 | 160.0 | 160 | 0.1424 | 0.5135483205500848 |
| 0.0929 | 165.0 | 165 | 0.1399 | 0.5099977164420265 |
| 0.0899 | 170.0 | 170 | 0.1384 | 0.5093303675700068 |
| 0.0876 | 175.0 | 175 | 0.1369 | 0.5097771482308939 |
| 0.086 | 180.0 | 180 | 0.1357 | 0.5080664663529372 |
| 0.085 | 185.0 | 185 | 0.1347 | 0.5101483486776783 |
| 0.0843 | 190.0 | 190 | 0.1342 | 0.5110798690668398 |
| 0.0839 | 195.0 | 195 | 0.1340 | 0.5102824562761434 |
| 0.0838 | 200.0 | 200 | 0.1340 | 0.5103395242652348 |
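
The METEOR values were originally logged as `{'meteor': <float>}` dicts, the output format of the `evaluate` library's `meteor` metric; they are shown above as plain floats. A minimal sketch of computing the score the same way, with hypothetical captions:

```python
import evaluate

# Load the METEOR metric (uses `nltk` as a backend).
meteor = evaluate.load("meteor")

# Hypothetical generated captions and ground-truth references.
predictions = ["a car drives down a city street in the rain"]
references = ["a vehicle driving along a wet city road"]

# compute() returns a dict of the form {'meteor': <float>}.
result = meteor.compute(predictions=predictions, references=references)
print(result["meteor"])
```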
### Framework versions

- Transformers 4.46.1
- PyTorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.20.2