BackgroundFX Pro Docker Deployment
This directory contains Docker configurations for deploying BackgroundFX Pro in various environments.
Quick Start
Prerequisites
- Docker 20.10+ installed
- Docker Compose 2.0+ installed
- NVIDIA Docker runtime (for GPU support)
- At least 16GB RAM
- 20GB+ free disk space
GPU Deployment (Recommended)
# Build and run with GPU support
make build
make run
# Or using docker-compose directly
docker-compose up -d backgroundfx-gpu
Access the application at http://localhost:7860
CPU-Only Deployment
# Build and run CPU version
make build-cpu
make run-cpu
# Or using docker-compose
docker-compose --profile cpu up -d backgroundfx-cpu
Access at http://localhost:7861
Available Docker Images
1. GPU Image (Dockerfile)
- Base: nvidia/cuda:12.1.0-cudnn8-runtime-ubuntu20.04
- Features: Full GPU acceleration, all models supported
- Use case: Production deployment with NVIDIA GPU
2. Production Image (Dockerfile.prod)
- Type: Multi-stage optimized build
- Features: Minimal size, pre-compiled Python, security hardening
- Use case: Production deployment with high performance requirements
3. CPU Image (Dockerfile.cpu)
- Base: python:3.10-slim
- Features: CPU-optimized, smaller footprint
- Use case: Development, testing, or CPU-only servers
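To build one of these variants without the Makefile, pass the matching Dockerfile to docker build directly. The docker/ path and image tags below are assumptions; adjust them to the actual repository layout:
docker build -f docker/Dockerfile.cpu -t backgroundfx-pro:cpu .
docker build -f docker/Dockerfile.prod -t backgroundfx-pro:prod .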
Docker Compose Services
Core Services
- backgroundfx-gpu: Main application with GPU support
- backgroundfx-cpu: CPU-only variant
- redis: Cache and job queue
- nginx: Reverse proxy (production profile)
Support Services
- model-downloader: Pre-download models (setup profile)
- prometheus: Metrics collection (monitoring profile)
- grafana: Metrics visualization (monitoring profile)
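The optional services only start when their Compose profile is requested, for example:
# Pre-download models once before the first run
docker-compose --profile setup up model-downloader
# Start the monitoring stack alongside the main services
docker-compose --profile monitoring up -d prometheus grafana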
Configuration
Environment Variables
Copy the example environment file and customize:
cp docker/.env.example docker/.env
Key settings:
- DEVICE: auto, cuda, or cpu
- MODEL_CACHE_DIR: Model storage location
- MAX_MEMORY_GB: Memory limit
- QUALITY_PRESET: low, medium, high, ultra
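For example, a minimal docker/.env might look like this (the values and the cache path are illustrative, not project defaults; see docker/.env.example for the full list):
DEVICE=auto                  # auto, cuda, or cpu
MODEL_CACHE_DIR=/app/models  # model storage location
MAX_MEMORY_GB=16
QUALITY_PRESET=high          # low, medium, high, ultra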
Volumes
- model-cache: Cached ML models (~2-5GB)
- uploads: Uploaded files
- outputs: Processed results
- redis-data: Redis persistence
Makefile Commands
# Building
make build # Build GPU image
make build-cpu # Build CPU image
make build-all # Build all variants
make build-nocache # Build without cache
# Running
make run # Run GPU version
make run-cpu # Run CPU version
make run-dev # Development mode
make run-prod # Production with monitoring
# Management
make stop # Stop containers
make restart # Restart containers
make clean # Clean up
make logs # View logs
make shell # Container shell
# Models
make download-models # Download all models
make list-models # List available models
# Monitoring
make status # Container status
make stats # Resource usage
make health # Health checks
Production Deployment
1. Build Production Image
make build-prod
2. Configure Environment
Edit docker/.env with production settings (see the sketch below):
- Set a secure AUTH_SECRET_KEY
- Configure REDIS_PASSWORD
- Adjust resource limits
- Enable authentication if needed
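A sketch of the security-related entries; variable names beyond those mentioned in this README are assumptions, so check docker/.env.example for the exact keys:
AUTH_ENABLED=true
AUTH_SECRET_KEY=change-me        # generate with: openssl rand -hex 32
REDIS_PASSWORD=change-me-too     # strong, unique password
MAX_MEMORY_GB=16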
3. Deploy with Monitoring
make run-prod
This starts:
- Main application (port 7860)
- API server (port 8000)
- Nginx proxy (ports 80/443)
- Redis cache
- Prometheus + Grafana monitoring
4. SSL/TLS Setup
Place certificates in nginx/ssl/:
nginx/ssl/cert.pem
nginx/ssl/key.pem
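For local testing you can generate a self-signed pair with openssl; production deployments should use certificates from a real CA:
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -subj "/CN=localhost" \
  -keyout nginx/ssl/key.pem \
  -out nginx/ssl/cert.pem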
5. Scaling
For horizontal scaling:
docker-compose up -d --scale backgroundfx-gpu=3
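Note that a fixed host port mapping (e.g. 7860:7860) can only be bound by one replica; either publish a host port range or route all traffic through the nginx proxy. A compose-level sketch of the port-range approach (the exact service definition is an assumption):
services:
  backgroundfx-gpu:
    ports:
      - "7860-7862:7860"   # one host port per replica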
GPU Support
Check GPU Availability
make gpu-check
# Or
docker run --rm --gpus all nvidia/cuda:12.1.0-base-ubuntu20.04 nvidia-smi
Install NVIDIA Docker Runtime
Ubuntu/Debian:
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-docker2
sudo systemctl restart docker
Troubleshooting
Out of Memory
Adjust memory limits in docker-compose.yml:
deploy:
  resources:
    limits:
      memory: 8G  # Reduce if needed
Models Not Loading
Pre-download models:
make download-models
Permission Issues
Fix ownership:
docker-compose exec -u root backgroundfx-gpu chown -R appuser:appuser /app
Slow Processing
- Ensure GPU is detected: make gpu-check
- Check resource usage: make stats
- Adjust quality preset in .env
Development
Local Development with Docker
# Mount local code for live reload
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
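The docker-compose.dev.yml override typically just bind-mounts the source tree over the image so code edits are picked up without rebuilding; a minimal sketch (paths are assumptions, check the file shipped in the repo):
# docker-compose.dev.yml (illustrative)
services:
  backgroundfx-gpu:
    volumes:
      - ./:/app   # mount local code over the installed copy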
Running Tests
make test
Building Custom Images
# With custom registry
make push REGISTRY=myregistry.com VERSION=1.0.0
Monitoring
Grafana Dashboard
Access at http://localhost:3000 (admin/admin)
Pre-configured dashboards:
- Container metrics
- GPU utilization
- Processing statistics
- Error rates
Prometheus Metrics
Access at http://localhost:9090
Available metrics:
- processing_time_seconds
- frames_processed_total
- model_load_time_seconds
- memory_usage_bytes
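A few illustrative PromQL queries over these metrics (the metric types are assumptions; adjust if, for example, processing_time_seconds is exported as a histogram):
rate(frames_processed_total[5m])             # processing throughput
avg_over_time(processing_time_seconds[5m])   # average processing time
memory_usage_bytes / 1024 / 1024 / 1024      # memory usage in GiB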
Backup and Restore
Backup Volumes
make backup
Creates timestamped backups in ./backups/
Restore from Backup
make restore BACKUP_FILE=models-20240101-120000.tar.gz
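If you need to back up a single volume without the Makefile target, the usual helper-container approach works; note the real volume name may carry the compose project prefix (e.g. docker_model-cache):
docker run --rm \
  -v model-cache:/data:ro \
  -v "$(pwd)/backups:/backup" \
  alpine tar czf "/backup/model-cache-$(date +%Y%m%d-%H%M%S).tar.gz" -C /data .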
Security Considerations
- Change default passwords in production
- Enable authentication via AUTH_ENABLED=true
- Configure CORS appropriately
- Use SSL/TLS in production
- Limit exposed ports using firewall rules
- Apply regular security updates: docker pull updated base images and rebuild
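For example, to limit exposed ports so that only the nginx proxy is reachable from outside (ufw itself is an assumption; use your firewall of choice):
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw deny 7860/tcp   # block direct access to the app
sudo ufw deny 9090/tcp   # keep Prometheus internal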
Performance Optimization
GPU Optimization
- Use TensorRT models when available
- Enable mixed precision with FP16
- Adjust batch size based on GPU memory
CPU Optimization
- Use quantized models
- Enable OpenMP threading
- Adjust worker count based on cores
Memory Optimization
- Enable swap for large videos
- Use frame skipping for preview
- Implement progressive processing
Support
For issues or questions:
- Check logs: make logs
- Verify health: make health
- Review configuration: docker-compose config
- Check system requirements: make gpu-check