---
license: apache-2.0
language:
- en
pipeline_tag: image-classification
library_name: transformers
tags:
- notebook
- colab
- siglip2
- image-to-text
---
<div style="
background: rgba(255, 193, 61, 0.15);
padding: 16px;
border-radius: 6px;
border: 1px solid rgba(255, 165, 0, 0.3);
margin: 16px 0;
">
Fine-tune SigLIP 2 for Image Classification (Notebook)
</div>
This notebook demonstrates how to fine-tune SigLIP 2, a robust multilingual vision-language model, for single-label image classification tasks. SigLIP 2 is pretrained with techniques such as captioning-based objectives, self-distillation, and masked prediction; the notebook adapts the resulting checkpoint with a streamlined fine-tuning pipeline. The workflow supports datasets in both structured and unstructured forms, making it adaptable to various domains and resource levels.
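As a rough sketch (not the notebook's exact code), a SigLIP 2 backbone can be loaded for single-label classification via `transformers`; the checkpoint id and label names below are placeholders.

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_name = "google/siglip2-base-patch16-224"   # placeholder checkpoint id
labels = ["class_a", "class_b", "class_c"]       # placeholder label set

processor = AutoImageProcessor.from_pretrained(model_name)
model = AutoModelForImageClassification.from_pretrained(
    model_name,
    num_labels=len(labels),
    id2label={i: name for i, name in enumerate(labels)},
    label2id={name: i for i, name in enumerate(labels)},
    ignore_mismatched_sizes=True,  # swap the pretrained head for a new classification head
)
```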
---
| Notebook Name | Description | Notebook Link |
|-------------------------------------|--------------------------------------------------|----------------|
| notebook-siglip2-finetune-type1 | Train/Test Splits | [⬇️Download](https://huggingface.co/prithivMLmods/FineTuning-SigLIP2-Notebook/blob/main/Finetune-SigLIP2-Image-Classification/1.SigLIP2_Finetune_ImageClassification_TrainTest_Splits.ipynb) |
| notebook-siglip2-finetune-type2 | Only Train Split | [⬇️Download](https://huggingface.co/prithivMLmods/FineTuning-SigLIP2-Notebook/blob/main/Finetune-SigLIP2-Image-Classification/2.SigLIP2_Finetune_ImageClassification_OnlyTrain_Splits.ipynb) |
> [!WARNING]
> To avoid notebook loading errors, please download and use the notebook.
---
The notebook outlines two data handling scenarios. In the first, datasets include predefined train and test splits, enabling conventional supervised learning and generalization evaluation. In the second scenario, only a training split is available; in such cases, the training set is either partially reserved for validation or reused entirely for evaluation. This flexibility supports experimentation in constrained or domain-specific settings, where standard test annotations may not exist.
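A minimal sketch of the two scenarios using the `datasets` library (the dataset id is a placeholder, not the one used in the notebooks):

```python
from datasets import load_dataset

# Type 1: the dataset ships with predefined train/test splits.
ds = load_dataset("username/my-image-dataset")          # placeholder dataset id
train_ds, test_ds = ds["train"], ds["test"]

# Type 2: only a train split exists; hold out part of it for evaluation.
full_train = load_dataset("username/my-image-dataset", split="train")
split = full_train.train_test_split(test_size=0.1, seed=42)
train_ds, eval_ds = split["train"], split["test"]
```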
```
Last updated: July 2025
```
---
<div style="
background: rgba(255, 193, 61, 0.15);
padding: 16px;
border-radius: 6px;
border: 1px solid rgba(255, 165, 0, 0.3);
margin: 16px 0;
">
| **Type 1: Train/Test Splits** | **Type 2: Only Train Split** |
|------------------------------|------------------------------|
|  |  |
</div>
---
| Platform | Link |
|----------|------|
| Huggingface Blog | [](https://huggingface.co/blog/prithivMLmods/siglip2-finetune-image-classification) |
| GitHub Repository | [](https://github.com/PRITHIVSAKTHIUR/FineTuning-SigLIP-2) |