# vLLM Inference Scripts

Ready-to-run UV scripts for GPU-accelerated inference using vLLM.

These scripts use UV's inline script metadata to manage dependencies automatically - just run them with `uv run` and everything installs on the fly!

## Available Scripts
### classify-dataset.py

Batch text classification using BERT-style encoder models (e.g., BERT, RoBERTa, DeBERTa, ModernBERT) with vLLM's optimized inference engine.

**Note:** This script is specifically for encoder-only classification models, not generative LLMs.
**Features:**

- High-throughput batch processing
- Automatic label mapping from the model config
- Confidence scores for predictions
- Direct integration with the Hugging Face Hub
**Usage:**

```bash
# Local execution (requires a GPU)
uv run classify-dataset.py \
    davanstrien/ModernBERT-base-is-new-arxiv-dataset \
    username/input-dataset \
    username/output-dataset \
    --inference-column text \
    --batch-size 10000
```
**HF Jobs execution:**

```bash
hf jobs uv run \
    --flavor l4x1 \
    --image vllm/vllm-openai \
    https://huggingface.co/datasets/uv-scripts/vllm/resolve/main/classify-dataset.py \
    davanstrien/ModernBERT-base-is-new-arxiv-dataset \
    username/input-dataset \
    username/output-dataset \
    --inference-column text \
    --batch-size 100000
```
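Under the hood, the script relies on vLLM's classification ("pooling") support and reads the label names from the model config. A minimal sketch of the core loop, assuming a recent vLLM release (the model and dataset IDs are placeholders, and this is not the script's exact code):

```python
# Sketch of the classification core, not the full script.
# Assumes a recent vLLM release with task="classify" support.
from datasets import load_dataset
from transformers import AutoConfig
from vllm import LLM

model_id = "username/encoder-classifier"            # placeholder model ID
dataset = load_dataset("username/input-dataset", split="train")

llm = LLM(model=model_id, task="classify")          # encoder-only classifier
id2label = AutoConfig.from_pretrained(model_id).id2label

outputs = llm.classify(dataset["text"])             # batched inference
for out in outputs:
    probs = out.outputs.probs                       # per-class probabilities
    best = max(range(len(probs)), key=lambda i: probs[i])
    print(id2label[best], probs[best])              # predicted label + confidence
```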
### generate-responses.py

Generate responses for chat-formatted prompts using generative LLMs (e.g., Llama, Qwen, Mistral) with vLLM's high-performance inference engine.
**Features:**

- Automatic chat template application
- Multi-GPU tensor parallelism support
- Smart filtering for prompts exceeding the context length
- Comprehensive dataset cards with generation metadata
- HF Transfer enabled for fast model downloads
- Full control over sampling parameters
**Usage:**

```bash
# Local execution with the default Qwen model
uv run generate-responses.py \
    username/input-dataset \
    username/output-dataset \
    --messages-column messages \
    --max-tokens 1024

# With a custom model and sampling parameters
uv run generate-responses.py \
    username/input-dataset \
    username/output-dataset \
    --model-id meta-llama/Llama-3.1-8B-Instruct \
    --temperature 0.9 \
    --top-p 0.95 \
    --max-model-len 8192
```
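The core of the script is vLLM's chat interface, which applies the model's chat template before generating. A minimal sketch, assuming a recent vLLM release (dataset and model IDs are placeholders, not the script's exact code):

```python
# Sketch of the generation core, not the full script.
from datasets import load_dataset
from vllm import LLM, SamplingParams

dataset = load_dataset("username/input-dataset", split="train")
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct", max_model_len=8192)
params = SamplingParams(temperature=0.9, top_p=0.95, max_tokens=1024)

# Each row's "messages" column holds a chat-formatted list of
# {"role": ..., "content": ...} dicts; llm.chat() applies the
# model's chat template automatically.
outputs = llm.chat(dataset["messages"], params)
responses = [o.outputs[0].text for o in outputs]
```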
**HF Jobs execution (multi-GPU):**

```bash
hf jobs uv run \
    --flavor l4x4 \
    --image vllm/vllm-openai \
    -e UV_PRERELEASE=if-necessary \
    -s HF_TOKEN=hf_*** \
    https://huggingface.co/datasets/uv-scripts/vllm/raw/main/generate-responses.py \
    davanstrien/cards_with_prompts \
    davanstrien/test-generated-responses \
    --model-id Qwen/Qwen3-30B-A3B-Instruct-2507 \
    --gpu-memory-utilization 0.9 \
    --max-tokens 600 \
    --max-model-len 8000
```
#### Multi-GPU Tensor Parallelism

- Auto-detects available GPUs by default
- Use `--tensor-parallel-size` to set the GPU count manually (see the sketch below)
- Required for models larger than a single GPU's memory (e.g., 30B+ models)
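In vLLM's Python API, this flag corresponds to the `tensor_parallel_size` argument. A minimal sketch (the model ID is illustrative):

```python
# Sketch: shard one large model across all visible GPUs.
# Equivalent to passing --tensor-parallel-size on the command line.
import torch
from vllm import LLM

num_gpus = torch.cuda.device_count()               # mirrors the auto-detection
llm = LLM(
    model="Qwen/Qwen3-30B-A3B-Instruct-2507",      # illustrative 30B-class model
    tensor_parallel_size=num_gpus,                 # e.g., 4 on an l4x4 flavor
)
```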
#### Handling Long Contexts

The generate-responses.py script includes smart prompt filtering:

- Default behavior: skips prompts exceeding `max_model_len` (sketched below)
- Use `--max-model-len` to limit the context length and reduce memory usage
- Use `--no-skip-long-prompts` to fail on long prompts instead of skipping them
- Skipped prompts receive empty responses and are logged
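The filtering idea can be reproduced with the tokenizer alone: apply the chat template, count the tokens, and drop conversations that would not fit. A sketch of the approach, not the script's exact code (the model ID is illustrative):

```python
# Sketch of skip-long-prompts filtering.
from transformers import AutoTokenizer

MAX_MODEL_LEN = 8192
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

conversations = [
    [{"role": "user", "content": "Summarize attention in one sentence."}],
]

def fits(messages, budget=MAX_MODEL_LEN):
    # apply_chat_template (with its default tokenize=True) returns token IDs
    ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
    return len(ids) <= budget

kept = [m for m in conversations if fits(m)]
# In the script, skipped conversations get empty responses and are logged.
```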
## About vLLM

vLLM is a high-throughput inference engine optimized for:
- Fast model serving with PagedAttention
- Efficient batch processing
- Support for various model architectures
- Seamless integration with Hugging Face models
## Technical Details

**UV Script Benefits**
- Zero setup: Dependencies install automatically on first run
- Reproducible: Locked dependencies ensure consistent behavior
- Self-contained: Everything needed is in the script file
- Direct execution: Run from local files or URLs
**Dependencies**

Scripts use UV's inline metadata for automatic dependency management:

```python
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "datasets",
#     "flashinfer-python",
#     "huggingface-hub[hf_transfer]",
#     "torch",
#     "transformers",
#     "vllm",
# ]
# ///
```
For bleeding-edge features, set the `UV_PRERELEASE=if-necessary` environment variable to allow pre-release versions when needed.
**Docker Image**

For HF Jobs, we recommend the official vLLM Docker image: `vllm/vllm-openai`

This image includes:

- Pre-installed CUDA libraries
- vLLM and all of its dependencies
- The UV package manager
- Optimizations for GPU inference
**Environment Variables**

- `HF_TOKEN`: your Hugging Face authentication token (auto-detected if logged in)
- `UV_PRERELEASE=if-necessary`: allow pre-release packages when required
- `HF_HUB_ENABLE_HF_TRANSFER=1`: automatically enabled for faster downloads