---
license: cc-by-nc-4.0
task_categories:
  - image-classification
  - video-classification
  - object-detection
  - image-segmentation
tags:
  - benchmark
  - peft
  - parameter-efficient-finetuning
  - computer-vision
---

# V-PETL Bench: A Unified Visual Parameter-Efficient Transfer Learning Benchmark

This repository contains the V-PETL Bench, a unified benchmark designed to standardize the evaluation of Parameter-Efficient Fine-Tuning (PEFT) methods across a diverse set of vision tasks. It was introduced in the paper Parameter-Efficient Fine-Tuning for Pre-Trained Vision Models: A Survey and Benchmark.

- **Project Homepage:** https://v-petl-bench.github.io/
- **GitHub Repository:** https://github.com/synbol/Parameter-Efficient-Transfer-Learning-Benchmark


## Introduction

Pre-trained vision models (PVMs) have demonstrated remarkable adaptability across a wide range of downstream vision tasks. However, as these models scale to billions or even trillions of parameters, conventional full fine-tuning has become increasingly impractical. Parameter-efficient fine-tuning (PEFT) has emerged as a promising alternative, aiming to achieve performance comparable to full fine-tuning while making minimal adjustments to the model parameters.

To make the many proposed PEFT algorithms easier to apply and compare directly, V-PETL Bench constructs a unified benchmark for the computer vision (CV) domain. It selects 30 diverse, challenging, and comprehensive datasets spanning image recognition, video action recognition, and dense prediction tasks. On these datasets, it systematically evaluates 25 dominant PEFT algorithms and open-sources a modular and extensible codebase for fair evaluation.

## Data Preparation

The V-PETL Bench includes datasets for three main categories:

1. **Image Classification Datasets**
   - Fine-Grained Visual Classification (FGVC) tasks: comprises 5 datasets: CUB-200-2011, NABirds, Oxford Flowers, Stanford Dogs, and Stanford Cars.
     - Processed splits available on Hugging Face: FGVC
   - Visual Task Adaptation Benchmark (VTAB): comprises 19 diverse visual classification datasets.
     - Processed data available on Hugging Face: VTAB-1k
2. **Video Action Recognition Datasets**
   - Kinetics-400
   - Something-Something V2 (SSv2)
3. **Dense Prediction Datasets**
   - MS-COCO
   - ADE20K
   - PASCAL VOC

For detailed download links and preprocessing procedures for all datasets, please refer to the Data Preparation section in the official GitHub repository.
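As an informal illustration, the processed splits hosted on the Hugging Face Hub can be fetched with `huggingface_hub`. This is only a sketch, not part of the official pipeline, and the repository ID below is a placeholder; substitute the actual dataset repository linked above.

```python
# Minimal sketch: download processed splits from the Hugging Face Hub.
# NOTE: "<namespace>/FGVC" is a placeholder repo ID, not the actual repository name.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="<namespace>/FGVC",  # replace with the real dataset repo ID
    repo_type="dataset",
)
print("Processed FGVC splits downloaded to:", local_dir)
```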

## Pre-trained Model Preparation

Instructions for downloading and preparing various pre-trained Vision Transformer (ViT) and Swin Transformer backbones required for the benchmark experiments are provided in the Pre-trained Model Preparation section of the GitHub repository.
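As a rough sketch (not the benchmark's own tooling), comparable ViT and Swin Transformer backbones can be instantiated with `timm`; the exact checkpoints used in the experiments are listed in the GitHub repository and may differ from the defaults below.

```python
# Sketch: instantiate generic ViT / Swin Transformer backbones with timm.
# The benchmark's exact pre-trained checkpoints are documented in the GitHub repo.
import timm

vit = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=0)   # feature extractor
swin = timm.create_model("swin_base_patch4_window7_224", pretrained=True, num_classes=0)

print(f"ViT-B/16 backbone: {sum(p.numel() for p in vit.parameters()) / 1e6:.1f}M parameters")
print(f"Swin-B backbone:   {sum(p.numel() for p in swin.parameters()) / 1e6:.1f}M parameters")
```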

## Quick Start

To get started with the V-PETL Bench for training and evaluation of PEFT algorithms, please refer to the detailed instructions on environment setup, installation, and usage examples (including training and evaluation demos) in the Quick Start section of the GitHub repository.
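For intuition only, the snippet below shows the simplest PEFT baseline (linear probing): the pre-trained backbone is frozen and only the classification head is trained. It is a generic PyTorch/timm sketch, not the V-PETL Bench codebase API; the benchmarked algorithms (adapters, LoRA, prompt tuning, etc.) instead insert or select small trainable modules inside the backbone.

```python
# Generic PEFT illustration (linear probing), not the V-PETL Bench API:
# freeze the pre-trained backbone and train only the task-specific head.
import timm
import torch

model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=200)  # e.g. CUB-200-2011

for param in model.parameters():                    # freeze everything
    param.requires_grad = False
for param in model.get_classifier().parameters():   # unfreeze the classification head
    param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Trainable: {trainable / 1e6:.2f}M / {total / 1e6:.2f}M parameters "
      f"({100 * trainable / total:.2f}%)")

# Optimize only the trainable parameters.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
```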

## Benchmark Results and Checkpoints

The repository provides comprehensive benchmark results for various PEFT algorithms across the diverse datasets, including:

- Image classification on FGVC and VTAB.
- Video action recognition on SSv2 and HMDB51.
- Dense prediction on COCO, PASCAL VOC, and ADE20K.

Detailed tables and checkpoints are available for download from the Results and Checkpoints section in the GitHub repository.

## Citation

If you find our survey and repository useful for your research, please cite our work using the following BibTeX entry:

```bibtex
@article{xin2024bench,
  title={V-PETL Bench: A Unified Visual Parameter-Efficient Transfer Learning Benchmark},
  author={Yi Xin and Siqi Luo and Xuyang Liu and Haodi Zhou and Xinyu Cheng and Christina Luoluo Lee and Junlong Du and Yuntao Du and Haozhe Wang and MingCai Chen and Ting Liu and Guimin Hu and Zhongwei Wan and Rongchao Zhang and Aoxue Li and Mingyang Yi and Xiaohong Liu},
  year={2024}
}
```