# VAEEDOF - High-Resolution Multi-Focus Image Fusion

## Model Description
VAEEDOF is a deep learning model designed to address the Depth-of-Field (DOF) constraint in photography using Multi-Focus Image Fusion (MFIF). Built upon a distilled Variational Autoencoder (VAE) architecture, this model fuses up to 7 images with different focus points into a single, high-resolution, all-in-focus image.
It is trained to produce artifact-free and photorealistic fused outputs and demonstrates strong generalization across both synthetic and real-world datasets.
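
The inference scripts in the GitHub repository (linked under Resources below) are the authoritative entry point. The sketch below only illustrates the general shape of a focal-stack fusion call: the `VAEEDOF` class, its import path, the checkpoint filename `vaeedof.pth`, and the forward-call signature are hypothetical placeholders, not the actual API.

```python
# Illustrative sketch only. `VAEEDOF`, `vaeedof.pth`, and the forward-call
# signature are hypothetical placeholders; see the GitHub repository for the
# actual inference scripts and API.
import torch
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

from vaeedof import VAEEDOF  # hypothetical import path

device = "cuda" if torch.cuda.is_available() else "cpu"

model = VAEEDOF()  # hypothetical constructor
model.load_state_dict(torch.load("vaeedof.pth", map_location=device))
model.to(device).eval()

# Up to 7 shots of the same scene, each focused at a different depth.
paths = [f"stack/slice_{i}.png" for i in range(7)]
stack = torch.stack(
    [convert_image_dtype(read_image(p), torch.float32) for p in paths]
)  # shape: (N, 3, H, W)

with torch.no_grad():
    fused = model(stack.unsqueeze(0).to(device))  # (1, 3, H, W) all-in-focus image
```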
## Model Weights
This repository provides:
- Pretrained VAEEDOF weights used in our experiments
- Weights of the baseline state-of-the-art models used for comparison
## Training Data
The model is trained on the MattingMFIF dataset, a new high-quality 4K synthetic dataset built by applying matting techniques to real-world photographs to simulate realistic depth-of-field blur and focus patterns.
## Resources

- GitHub repository (code, training, and inference scripts): https://github.com/MalumaDev/VAEEDOF
## Citation
```bibtex
@article{piano2025addressing,
  title={Addressing the Depth-of-Field Constraint: A New Paradigm for High Resolution Multi-Focus Image Fusion},
  author={Piano, Luca and Huanwen, Peng and Bilcu, Radu Ciprian},
  journal={arXiv preprint arXiv:2510.19581},
  year={2025}
}
```