bnina-ayoub committed · verified
Commit 25a993b · 1 Parent(s): b231bbf

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -32,7 +32,7 @@ The model leverages the transformer architecture to process image patches and pr
 
 ## Intended uses & limitations
 
-- **Intented Uses:** This model can be used to demonstrate object detection with ViT. It can potentially be used in safety applications to identify individuals wearing or not wearing hardhats in construction sites or industrial environments.
+- **Intended Uses:** This model can be used to demonstrate object detection with ViT. It can potentially be used in safety applications to identify individuals wearing or not wearing hardhats in construction sites or industrial environments.
 - **Limitations:** This model has been limitedly trained and may not generalize well to images with significantly different characteristics, viewpoints, or lighting conditions. It is not intended for production use without further evaluation and validation.
 
 ## Training and evaluation data
@@ -50,7 +50,7 @@ The model leverages the transformer architecture to process image patches and pr
 - Weight decay: 1e-4
 - Batch size: 1
 - Epochs: 3
-- Max steps: 500
+- Max steps: 2500
 - Optimizer: AdamW
 - **Evaluation:** The model was evaluated on the test set using standard object detection metrics, including COCO metrics (Average Precision, Average Recall).
 - **Hardware:** Training was performed on Google Colab using GPU acceleration.
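
As a rough sketch (not part of the commit itself), the updated hyperparameters listed above could map onto a Hugging Face `TrainingArguments` configuration roughly as follows; the output directory and optimizer string are placeholder assumptions, and the learning rate is not stated in the README, so it is left at the library default.

```python
# Hypothetical sketch only: maps the README's listed hyperparameters onto
# Hugging Face TrainingArguments. output_dir is a placeholder, not from the commit.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-hardhat-detection",  # placeholder path
    per_device_train_batch_size=1,       # Batch size: 1
    num_train_epochs=3,                  # Epochs: 3
    max_steps=2500,                      # Max steps: 2500 (takes precedence over epochs when set)
    weight_decay=1e-4,                   # Weight decay: 1e-4
    optim="adamw_torch",                 # Optimizer: AdamW
)
```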