|
|
--- |
|
|
license: apache-2.0 |
|
|
tags: |
|
|
- generated_from_trainer |
|
|
datasets: |
|
|
- squad |
|
|
model-index: |
|
|
- name: mobilebert-uncased-squadv1-14blocks-structured39.8-int8 |
|
|
results: [] |
|
|
--- |
|
|
|
|
|
|
|
|
|
|
# mobilebert-uncased-squadv1-14blocks-structured39.8-int8 |
|
|
|
|
|
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the SQuAD dataset.
|
|
|
|
|
Note that this model keeps only the first 14 transformer blocks of the original architecture. It was quantized and structurally pruned with NNCF; the remaining linear layers have 39.8% sparsity.
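
Below is a minimal usage sketch with the `transformers` question-answering pipeline. The model id is only a placeholder for this repository's Hub path, and loading the INT8/pruned weights may additionally require the NNCF/OpenVINO tooling used to produce them.

```python
# Minimal sketch, assuming the checkpoint loads through the standard
# transformers question-answering pipeline. The model id below is a
# placeholder; replace it with the actual Hub repository path.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="mobilebert-uncased-squadv1-14blocks-structured39.8-int8",  # placeholder
)

output = qa(
    question="How sparse are the remaining linear layers?",
    context="The model is quantized and structurally pruned by NNCF. "
            "The sparsity in remaining linear layers is 39.8%.",
)
print(output["answer"], output["score"])
```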
|
|
|
|
|
- PyTorch F1: 90.15
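
For reference, SQuAD F1 scores like the one above are typically computed with the official SQuAD metric. The snippet below is a hedged sketch using the `evaluate` library with toy placeholder predictions, not the exact evaluation script used for this model.

```python
# Hedged sketch of the SQuAD exact-match/F1 computation with the evaluate library.
# The prediction and reference below are toy placeholders, not model output.
import evaluate

squad_metric = evaluate.load("squad")
predictions = [{"id": "q1", "prediction_text": "NNCF"}]
references = [{"id": "q1", "answers": {"text": ["NNCF"], "answer_start": [48]}}]
print(squad_metric.compute(predictions=predictions, references=references))
# -> {'exact_match': 100.0, 'f1': 100.0}
```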
|
|
|
|
|
### Framework versions |
|
|
|
|
|
- Transformers 4.25.1 |
|
|
- Pytorch 1.13.1+cu116 |
|
|
- Datasets 2.8.0 |
|
|
- Tokenizers 0.13.2 |
|
|
|