
Explanatory Instructions: Towards Unified Vision Tasks Understanding and Zero-shot Generalization

Computer Vision (CV) has yet to achieve the zero-shot task generalization observed in Natural Language Processing (NLP), despite following many of the milestones established in NLP, such as large transformer models, extensive pre-training, and the auto-regression paradigm. In this paper, we reconsider the fact that CV adopts discrete and terminological task definitions (e.g., "image segmentation"), and conjecture that this is a key barrier to zero-shot task generalization. Our hypothesis is that, because of these terminological definitions, deep models never truly understand previously-seen tasks and therefore struggle to generalize to novel ones. To verify this, we introduce Explanatory Instructions, which define CV task objectives intuitively through detailed linguistic descriptions of the transformation from input images to outputs. We create a large-scale dataset comprising 12 million "image input → explanatory instruction → output" triplets, and train an auto-regressive vision-language model (AR-based VLM) that takes both images and explanatory instructions as input. By learning to follow these instructions, the AR-based VLM achieves instruction-level zero-shot capabilities on previously-seen tasks and demonstrates strong zero-shot generalization to unseen CV tasks.
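
For intuition, each record in these triplets pairs a source image, a free-form instruction describing the desired transformation, and the target output image. A minimal sketch of such a record is shown below; the field names and instruction wording are illustrative assumptions, not the released dataset schema.

# Hypothetical sketch of a single "image input → explanatory instruction → output" triplet.
# Field names and instruction text are illustrative, not the released schema.
triplet = {
    "input_image": "path/to/input.jpg",    # source image
    "instruction": (
        "Translate variations in scene distance into a grayscale map where "
        "nearby surfaces appear bright and far-away regions fade to dark."
    ),                                      # explanatory instruction (free-form text)
    "output_image": "path/to/output.png",  # target image after the described transformation
}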

📃Paper | 💻Github | 📚Dataset (Explanatory-based Vision Tasks) | 📚Dataset (Terminological-based Vision Tasks) | 🤗 Model (UVT-7B-448)

Simple Inference

Minimal inference code (please refer to the GitHub repository for full setup instructions):

from inference_solver import FlexARInferenceSolver
from PIL import Image
import os
import numpy as np
import torch
import random
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

def set_seed(seed):
    # Fix all RNG seeds (Python, NumPy, PyTorch) for reproducible generation.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

inference_solver = FlexARInferenceSolver(
    model_path="UVT_7B_448",  # path to your local UVT-7B-448 checkpoint
    precision="fp16",         # or "bf16"
    target_size=448,          # fixed at 448 for this model
)

max_out = 1  # number of outputs to sample
for i in range(max_out):
    set_seed(i)

    # qas is a list of [question, answer] pairs; the "<|image|>" token marks where the
    # input image is inserted into the prompt, and None leaves the answer to the model.
    qas = [["Acknowledge the spatial structure and identify variations in light intensity, translating these into a gradient scale representing distances. Accentuate regions where light diminishes gradually, enhancing the perception of depth by dimming peripheral areas. Adjust the distribution of luminance to highlight the central vanishing point, converting detailed textures into smooth transitions of grayscale." + " <|image|>", None]]
    images = [Image.open("./demo_input/rain_1.jpg")]

    generated = inference_solver.generate(
        images=images,
        qas=qas,
        max_gen_len=4096,
        temperature=1.0,
        logits_processor=inference_solver.create_logits_processor(cfg=1.0, image_top_k=2048),
    )
    # generated[0] is the text response; generated[1] is the list of generated images.
    new_image = generated[1][0]
    new_image.save(f'./test_output_{i}.png', format='PNG')
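
Different explanatory instructions steer the same solver toward different outputs. As a rough, untested sketch reusing the solver created above (the instruction wording below is an illustrative assumption, not taken from the released dataset), swapping in another transformation description works the same way:

# Hypothetical second instruction for the same input image; wording is illustrative only.
qas = [[
    "Separate the scene into its constituent objects and repaint each region "
    "with a single flat color, so that pixels belonging to the same object share "
    "one hue while boundaries between objects stay sharp."
    + " <|image|>", None
]]
generated = inference_solver.generate(
    images=[Image.open("./demo_input/rain_1.jpg")],
    qas=qas,
    max_gen_len=4096,
    temperature=1.0,
    logits_processor=inference_solver.create_logits_processor(cfg=1.0, image_top_k=2048),
)
generated[1][0].save("./test_output_variation.png", format="PNG")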