# layoutlm-funsd
This model is a fine-tuned version of microsoft/layoutlm-base-uncased on the FUNSD dataset. It achieves the following results on the evaluation set:

- Loss: 0.6898
- Overall Precision: 0.7168
- Overall Recall: 0.7863
- Overall F1: 0.7499
- Overall Accuracy: 0.8057

Per-label results (values rounded to four decimals):

| Label | Precision | Recall | F1 | Support |
|---|---|---|---|---|
| Answer | 0.6988 | 0.7886 | 0.7410 | 809 |
| Header | 0.3158 | 0.3529 | 0.3333 | 119 |
| Question | 0.7781 | 0.8329 | 0.8045 | 1065 |
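LayoutLM consumes, alongside token ids, one bounding box per input token, with coordinates normalized to a 0–1000 range. The helpers below (`normalize_box`, `expand_boxes`) are illustrative names, not part of this repository; this is a minimal preprocessing sketch, using a toy tokenizer in place of the model's real WordPiece tokenizer:

```python
def normalize_box(box, width, height):
    """Scale a pixel-space box (x0, y0, x1, y1) to LayoutLM's 0-1000 range."""
    x0, y0, x1, y1 = box
    return [
        int(1000 * x0 / width),
        int(1000 * y0 / height),
        int(1000 * x1 / width),
        int(1000 * y1 / height),
    ]

def expand_boxes(words, word_boxes, tokenize):
    """Repeat each word's box for every subword piece, since LayoutLM
    expects exactly one box per input token."""
    tokens, boxes = [], []
    for word, box in zip(words, word_boxes):
        pieces = tokenize(word)
        tokens.extend(pieces)
        boxes.extend([box] * len(pieces))
    return tokens, boxes

# Example: a 760x1000-pixel page; the stand-in tokenizer splits words into
# 4-character chunks (a real run would use the model's WordPiece tokenizer).
page_w, page_h = 760, 1000
words = ["Date:", "06/12"]
pixel_boxes = [(74, 62, 118, 72), (120, 62, 180, 72)]
norm = [normalize_box(b, page_w, page_h) for b in pixel_boxes]
toy_tokenize = lambda w: [w[i:i + 4] for i in range(0, len(w), 4)]
tokens, boxes = expand_boxes(words, norm, toy_tokenize)
print(tokens)              # ['Date', ':', '06/1', '2']
print(len(tokens) == len(boxes))  # True
```

In a real pipeline the same expansion is done with the model's own tokenizer, and `[CLS]`/`[SEP]` positions get the conventional `[0, 0, 0, 0]` and `[1000, 1000, 1000, 1000]` boxes.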
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
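As a sketch (not the exact training script), the optimizer and linear schedule above can be reproduced in plain PyTorch; the `steps_per_epoch` value comes from the training log below, and the tiny stand-in model is a placeholder:

```python
import torch

# Hyperparameters from the list above.
learning_rate = 3e-05
num_epochs = 15
steps_per_epoch = 10  # the training log below shows 10 steps per epoch
total_steps = num_epochs * steps_per_epoch

model = torch.nn.Linear(4, 2)  # stand-in for the LayoutLM model

# AdamW with the listed betas/epsilon, plus a linear decay to zero
# (no warmup), matching lr_scheduler_type: linear.
optimizer = torch.optim.AdamW(
    model.parameters(), lr=learning_rate, betas=(0.9, 0.999), eps=1e-08
)
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lambda step: max(0.0, 1.0 - step / total_steps)
)

# One epoch of dummy steps, just to show the schedule advancing.
for _ in range(steps_per_epoch):
    optimizer.step()
    scheduler.step()
print(scheduler.get_last_lr()[0])  # 3e-05 * (1 - 10/150) ≈ 2.8e-05
```

Mixed precision (Native AMP) would additionally wrap the forward pass in `torch.autocast` with a `torch.cuda.amp.GradScaler`, omitted here for brevity.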
### Training results
| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|---|---|---|---|---|---|---|---|---|---|---|
1.8366 | 1.0 | 10 | 1.6180 | {'precision': 0.003418803418803419, 'recall': 0.002472187886279357, 'f1': 0.0028694404591104736, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.1913214990138067, 'recall': 0.09107981220657277, 'f1': 0.12340966921119592, 'number': 1065} | 0.0907 | 0.0497 | 0.0642 | 0.3481 |
1.4796 | 2.0 | 20 | 1.2677 | {'precision': 0.2348860257680872, 'recall': 0.29295426452410384, 'f1': 0.2607260726072607, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.448690728945506, 'recall': 0.5953051643192488, 'f1': 0.5117029862792575, 'number': 1065} | 0.3596 | 0.4370 | 0.3946 | 0.6123 |
1.112 | 3.0 | 30 | 0.9348 | {'precision': 0.46387832699619774, 'recall': 0.6032138442521632, 'f1': 0.5244492208490059, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.5677842565597667, 'recall': 0.7314553990610329, 'f1': 0.6393106278210915, 'number': 1065} | 0.5220 | 0.6357 | 0.5733 | 0.7072 |
0.8556 | 4.0 | 40 | 0.7811 | {'precision': 0.5760765550239234, 'recall': 0.7441285537700866, 'f1': 0.6494066882416398, 'number': 809} | {'precision': 0.043478260869565216, 'recall': 0.01680672268907563, 'f1': 0.024242424242424242, 'number': 119} | {'precision': 0.6784810126582278, 'recall': 0.7549295774647887, 'f1': 0.7146666666666667, 'number': 1065} | 0.6186 | 0.7065 | 0.6596 | 0.7636 |
0.679 | 5.0 | 50 | 0.7063 | {'precision': 0.6343519494204426, 'recall': 0.7441285537700866, 'f1': 0.6848691695108078, 'number': 809} | {'precision': 0.15294117647058825, 'recall': 0.1092436974789916, 'f1': 0.12745098039215685, 'number': 119} | {'precision': 0.6796812749003984, 'recall': 0.8009389671361502, 'f1': 0.7353448275862068, 'number': 1065} | 0.6413 | 0.7366 | 0.6857 | 0.7854 |
0.5692 | 6.0 | 60 | 0.6788 | {'precision': 0.6491769547325102, 'recall': 0.7799752781211372, 'f1': 0.708590679393599, 'number': 809} | {'precision': 0.26436781609195403, 'recall': 0.19327731092436976, 'f1': 0.22330097087378642, 'number': 119} | {'precision': 0.7327510917030567, 'recall': 0.787793427230047, 'f1': 0.7592760180995476, 'number': 1065} | 0.6774 | 0.7491 | 0.7115 | 0.7889 |
0.4895 | 7.0 | 70 | 0.6565 | {'precision': 0.6697722567287785, 'recall': 0.799752781211372, 'f1': 0.7290140845070422, 'number': 809} | {'precision': 0.29473684210526313, 'recall': 0.23529411764705882, 'f1': 0.2616822429906542, 'number': 119} | {'precision': 0.7526315789473684, 'recall': 0.8056338028169014, 'f1': 0.7782312925170067, 'number': 1065} | 0.6965 | 0.7692 | 0.7310 | 0.7999 |
0.441 | 8.0 | 80 | 0.6647 | {'precision': 0.6814345991561181, 'recall': 0.7985166872682324, 'f1': 0.7353443369379624, 'number': 809} | {'precision': 0.25196850393700787, 'recall': 0.2689075630252101, 'f1': 0.2601626016260163, 'number': 119} | {'precision': 0.7489177489177489, 'recall': 0.812206572769953, 'f1': 0.7792792792792793, 'number': 1065} | 0.6919 | 0.7742 | 0.7308 | 0.8008 |
0.3834 | 9.0 | 90 | 0.6705 | {'precision': 0.7025527192008879, 'recall': 0.7824474660074165, 'f1': 0.7403508771929823, 'number': 809} | {'precision': 0.31666666666666665, 'recall': 0.31932773109243695, 'f1': 0.3179916317991632, 'number': 119} | {'precision': 0.7519116397621071, 'recall': 0.8309859154929577, 'f1': 0.7894736842105263, 'number': 1065} | 0.7079 | 0.7807 | 0.7425 | 0.8010 |
0.3793 | 10.0 | 100 | 0.6591 | {'precision': 0.6965811965811965, 'recall': 0.8059332509270705, 'f1': 0.7472779369627507, 'number': 809} | {'precision': 0.3211009174311927, 'recall': 0.29411764705882354, 'f1': 0.30701754385964913, 'number': 119} | {'precision': 0.7831431079894644, 'recall': 0.8375586854460094, 'f1': 0.809437386569873, 'number': 1065} | 0.7230 | 0.7923 | 0.7560 | 0.8096 |
0.3189 | 11.0 | 110 | 0.6794 | {'precision': 0.6991247264770241, 'recall': 0.7898640296662547, 'f1': 0.7417295414973882, 'number': 809} | {'precision': 0.3111111111111111, 'recall': 0.35294117647058826, 'f1': 0.33070866141732286, 'number': 119} | {'precision': 0.779646017699115, 'recall': 0.8272300469483568, 'f1': 0.8027334851936219, 'number': 1065} | 0.7168 | 0.7837 | 0.7488 | 0.8043 |
0.3037 | 12.0 | 120 | 0.6780 | {'precision': 0.7, 'recall': 0.7873918417799752, 'f1': 0.7411285631180919, 'number': 809} | {'precision': 0.32558139534883723, 'recall': 0.35294117647058826, 'f1': 0.33870967741935487, 'number': 119} | {'precision': 0.7782646801051709, 'recall': 0.8338028169014085, 'f1': 0.8050770625566637, 'number': 1065} | 0.7188 | 0.7863 | 0.7510 | 0.8046 |
0.2878 | 13.0 | 130 | 0.6864 | {'precision': 0.7065934065934066, 'recall': 0.7948084054388134, 'f1': 0.748109365910413, 'number': 809} | {'precision': 0.33070866141732286, 'recall': 0.35294117647058826, 'f1': 0.34146341463414637, 'number': 119} | {'precision': 0.7889087656529516, 'recall': 0.828169014084507, 'f1': 0.8080622995877232, 'number': 1065} | 0.7271 | 0.7863 | 0.7555 | 0.8057 |
0.2626 | 14.0 | 140 | 0.6874 | {'precision': 0.7023153252480706, 'recall': 0.7873918417799752, 'f1': 0.7424242424242424, 'number': 809} | {'precision': 0.3181818181818182, 'recall': 0.35294117647058826, 'f1': 0.3346613545816733, 'number': 119} | {'precision': 0.7798245614035088, 'recall': 0.8347417840375587, 'f1': 0.8063492063492064, 'number': 1065} | 0.7196 | 0.7868 | 0.7517 | 0.8056 |
0.2683 | 15.0 | 150 | 0.6898 | {'precision': 0.6987951807228916, 'recall': 0.788627935723115, 'f1': 0.7409988385598142, 'number': 809} | {'precision': 0.3157894736842105, 'recall': 0.35294117647058826, 'f1': 0.33333333333333337, 'number': 119} | {'precision': 0.7780701754385965, 'recall': 0.8328638497652582, 'f1': 0.8045351473922903, 'number': 1065} | 0.7168 | 0.7863 | 0.7499 | 0.8057 |
### Framework versions

- Transformers 4.51.3
- PyTorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1