# Finetune SigLIP2 Image Classification (Notebook)
These notebooks demonstrate how to fine-tune SigLIP 2, a robust multilingual vision-language model, for single-label image classification. SigLIP 2 improves on its predecessor through captioning-based pretraining, self-distillation, and masked prediction, and the notebooks wrap the resulting model in a streamlined fine-tuning pipeline. The workflow supports datasets in both structured and unstructured forms, making it adaptable to various domains and resource levels.
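As a minimal sketch of the core setup (assuming the Hugging Face `transformers` auto classes and the `google/siglip2-base-patch16-224` checkpoint; the label set is a placeholder), loading the backbone with a fresh classification head looks roughly like this:

```python
# Minimal sketch: SigLIP 2 backbone with a new single-label classification head.
# The checkpoint id and label names below are illustrative placeholders.
from transformers import AutoImageProcessor, AutoModelForImageClassification

checkpoint = "google/siglip2-base-patch16-224"  # assumed SigLIP 2 checkpoint
labels = ["class_a", "class_b", "class_c"]      # hypothetical label set

processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForImageClassification.from_pretrained(
    checkpoint,
    num_labels=len(labels),
    id2label={i: name for i, name in enumerate(labels)},
    label2id={name: i for i, name in enumerate(labels)},
    ignore_mismatched_sizes=True,  # swap the pretrained head for a fresh one
)
```
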
---

| Notebook Name | Description | Notebook Link |
|-------------------------------------|--------------------------------------------------|----------------|
| notebook-siglip2-finetune-type1 | Train/Test Splits | [⬇️Download](https://huggingface.co/prithivMLmods/FineTuning-SigLIP2-Notebook/blob/main/Finetune-SigLIP2-Image-Classification/1.SigLIP2_Finetune_ImageClassification_TrainTest_Splits.ipynb) |
| notebook-siglip2-finetune-type2 | Only Train Split | [⬇️Download](https://huggingface.co/prithivMLmods/FineTuning-SigLIP2-Notebook/blob/main/Finetune-SigLIP2-Image-Classification/2.SigLIP2_Finetune_ImageClassification_OnlyTrain_Splits.ipynb) |

---
The notebooks cover two data-handling scenarios. In the first, the dataset includes predefined train and test splits, enabling conventional supervised training and an evaluation of generalization. In the second, only a training split is available; the training set is then either partially reserved for validation or reused entirely for evaluation. This flexibility supports experimentation in constrained or domain-specific settings where standard test annotations may not exist.
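A minimal sketch of both scenarios using the `datasets` library (the dataset id is a placeholder; `train_test_split` carves a validation set out of the train split when no test split exists):

```python
# Minimal sketch: handle both split scenarios with the `datasets` library.
# The dataset id is a hypothetical placeholder; substitute your own.
from datasets import load_dataset

ds = load_dataset("username/your-image-dataset")

if "test" in ds:
    # Scenario 1: predefined train/test splits.
    train_ds, eval_ds = ds["train"], ds["test"]
else:
    # Scenario 2: only a train split; reserve 10% of it for evaluation.
    split = ds["train"].train_test_split(test_size=0.1, seed=42)
    train_ds, eval_ds = split["train"], split["test"]
```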