README.md CHANGED
@@ -1,234 +1,9 @@

<h1 align="center">EDGS: Eliminating Densification for Efficient Convergence of 3DGS</h1>

<p align="center">
  <a href="https://www.linkedin.com/in/dmitry-kotovenko-dl/">Dmytro Kotovenko</a><sup>*</sup> ·
  <a href="https://www.linkedin.com/in/grebenkovao/">Olga Grebenkova</a><sup>*</sup> ·
  <a href="https://ommer-lab.com/people/ommer/">Björn Ommer</a>
</p>

<p align="center">CompVis @ LMU Munich · Munich Center for Machine Learning (MCML)</p>
<p align="center">* equal contribution</p>

<p align="center">
  <a href="https://compvis.github.io/EDGS/"><img src="https://img.shields.io/badge/Project-Page-blue" alt="Project Page"></a>
  <a href="https://arxiv.org/pdf/2504.13204"><img src="https://img.shields.io/badge/arXiv-PDF-b31b1b" alt="Paper"></a>
  <a href="https://colab.research.google.com/github/CompVis/EDGS/blob/main/notebooks/fit_model_to_scene_full.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
  <a href="https://huggingface.co/spaces/CompVis/EDGS"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue" alt="Hugging Face"></a>
</p>

<p align="center">
  <img src="./assets/Teaser2.png" width="99%">
</p>

<p>
<strong>3DGS</strong> initializes with a sparse set of Gaussians and progressively adds more in under-reconstructed regions. In contrast, <strong>EDGS</strong> starts with a dense initialization from triangulated 2D correspondences across training image pairs, requiring only minimal refinement. This leads to <strong>faster convergence</strong> and <strong>higher rendering quality</strong>: our method reaches the original 3DGS <strong>LPIPS score in just 25% of the training time</strong> and uses only <strong>60% of the splats</strong>. Renderings become <strong>nearly indistinguishable from ground truth after only 3,000 steps — without any densification</strong>.
</p>

<h3 align="center">3D scene reconstruction using our method in 11 seconds.</h3>
<p align="center">
  <img src="assets/video_fruits_our_optimization.gif" width="480" alt="3D Reconstruction Demo">
</p>

## 📚 Table of Contents
- [🚀 Quickstart](#sec-quickstart)
- [🛠️ Installation](#sec-install)
- [📦 Data](#sec-data)
- [🏋️ Training](#sec-training)
- [🏗️ Reusing Our Model](#sec-reuse)
- [📄 Citation](#sec-citation)

<a id="sec-quickstart"></a>
## 🚀 Quickstart

The fastest way to try our model is through the [Hugging Face demo](https://huggingface.co/spaces/magistrkoljan/EDGS), which lets you upload images or a video and interactively rotate the resulting 3D scene. For broad accessibility, we currently support only **forward-facing scenes**.

#### Steps:
1. Upload a list of photos or a single video.
2. Click **📸 Preprocess Input** to estimate 3D positions using COLMAP.
3. Click **🚀 Start Reconstruction** to run the model.

You can also **explore the reconstructed scene in 3D** directly in the browser.

> ⚡ Runtime: EDGS typically takes just **10–20 seconds**, plus **5–10 seconds** for COLMAP processing. Additional time may be needed to save outputs (model, video, 3D preview).

You can also run the same app locally with the command:
```bash
CUDA_VISIBLE_DEVICES=0 python gradio_demo.py --port 7862 --no_share
```
Without the `--no_share` flag, Gradio prints a shareable address for the app, allowing others to process their data on your server.

Alternatively, check our [Colab notebook](https://colab.research.google.com/github/CompVis/EDGS/blob/main/notebooks/fit_model_to_scene_full.ipynb).

<a id="sec-install"></a>
## 🛠️ Installation

You can either run `install.sh` or install manually using the following:

```bash
git clone git@github.com:CompVis/EDGS.git --recursive
cd EDGS
git submodule update --init --recursive

conda create -y -n edgs python=3.10 pip
conda activate edgs

# Set the path to your CUDA installation. In our experience, similar versions such as 12.2 also work well.
export CUDA_HOME=/usr/local/cuda-12.1
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$LD_LIBRARY_PATH
export PATH=$CUDA_HOME/bin:$PATH

conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia -y
conda install nvidia/label/cuda-12.1.0::cuda-toolkit -y

pip install -e submodules/gaussian-splatting/submodules/diff-gaussian-rasterization
pip install -e submodules/gaussian-splatting/submodules/simple-knn

# For COLMAP and pycolmap.
# Optionally install the original COLMAP, but pycolmap usually suffices.
# conda install conda-forge/label/colmap_dev::colmap
pip install pycolmap

pip install wandb hydra-core tqdm torchmetrics lpips matplotlib rich plyfile imageio imageio-ffmpeg
conda install numpy=1.26.4 -y -c conda-forge --override-channels

pip install -e submodules/RoMa
conda install anaconda::jupyter --yes

# Packages needed for the Gradio app and visualizations
pip install gradio
pip install plotly scikit-learn moviepy==2.1.1 ffmpeg
pip install open3d
```
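
After installing, a quick sanity check can save debugging time later. The sketch below (assuming the `edgs` environment is active) verifies that PyTorch sees your GPU and that the two extensions compiled from the gaussian-splatting submodules import cleanly:

```python
# Minimal post-install check; assumes the "edgs" conda env is active.
import torch

print("torch:", torch.__version__, "| CUDA build:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())

# Both extensions are compiled by the "pip install -e submodules/..." steps above;
# an ImportError here usually means those builds failed.
from diff_gaussian_rasterization import GaussianRasterizationSettings  # noqa: F401
from simple_knn._C import distCUDA2  # noqa: F401
print("3DGS extensions import OK")
```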

<a id="sec-data"></a>
## 📦 Data

We evaluated on the following datasets:

- **MipNeRF360** — download [here](https://jonbarron.info/mipnerf360/). Unzip "Dataset Pt. 1" and "Dataset Pt. 2", then merge the scenes.
- **Tanks & Temples + Deep Blending** — from the [original 3DGS repo](https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/datasets/input/tandt_db.zip).

### Using Your Own Dataset

You can use the same data format as the [3DGS project](https://github.com/graphdeco-inria/gaussian-splatting?tab=readme-ov-file#processing-your-own-scenes). Please follow their guide to prepare your scene.

Expected folder structure:
```
scene_folder
|---images
|   |---<image 0>
|   |---<image 1>
|   |---...
|---sparse
    |---0
        |---cameras.bin
        |---images.bin
        |---points3D.bin
```
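
If you assemble this layout by hand, a quick check like the following can catch missing files before training. This is a hypothetical helper, not part of the repo:

```python
# Hypothetical helper (not part of this repo): verify a scene folder
# matches the COLMAP layout shown above before starting training.
from pathlib import Path

def check_scene(folder: str) -> None:
    root = Path(folder)
    assert (root / "images").is_dir(), "missing images/ directory"
    for name in ("cameras.bin", "images.bin", "points3D.bin"):
        path = root / "sparse" / "0" / name
        assert path.is_file(), f"missing {path}"

check_scene("scene_folder")
```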

The NeRF synthetic format is also supported.

You can also use the functions provided in our code to convert a collection of images or a single video into the expected format; a rough pycolmap sketch is shown below. Note that this may require some tweaking, and processing can take a long time for large image collections with little overlap.
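
For reference, here is one way to produce that layout from a plain image folder. This sketch uses pycolmap's standard reconstruction pipeline rather than this repo's own conversion helpers, and the paths are placeholders:

```python
# Sketch: build the COLMAP sparse model for an image folder with pycolmap.
from pathlib import Path
import pycolmap

scene = Path("scene_folder")
database = scene / "database.db"
(scene / "sparse").mkdir(parents=True, exist_ok=True)

pycolmap.extract_features(database, scene / "images")
pycolmap.match_exhaustive(database)
# On success this writes sparse/0/{cameras,images,points3D}.bin.
pycolmap.incremental_mapping(database, scene / "images", scene / "sparse")
```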

<a id="sec-training"></a>
## 🏋️ Training

To optimize a single scene in COLMAP format, run:
```bash
python train.py \
  train.gs_epochs=30000 \
  train.no_densify=True \
  gs.dataset.source_path=<scene folder> \
  gs.dataset.model_path=<output folder> \
  init_wC.matches_per_ref=20000 \
  init_wC.nns_per_ref=3 \
  init_wC.num_refs=180
```
<details>
<summary><span style="font-weight: bold;">Command Line Arguments for train.py</span></summary>

* `train.gs_epochs`
  Number of training iterations (steps) for Gaussian Splatting.
* `train.no_densify`
  Disables densification. True by default.
* `gs.dataset.source_path`
  Path to your input dataset directory. This should follow the same format as the original 3DGS dataset structure.
* `gs.dataset.model_path`
  Output directory where the trained model, logs, and renderings will be saved.
* `init_wC.matches_per_ref`
  Number of 2D feature correspondences to extract per reference view for initialization. More matches yield more Gaussians.
* `init_wC.nns_per_ref`
  Number of nearest-neighbor images used per reference during matching.
* `init_wC.num_refs`
  Total number of reference views sampled.
* `wandb.mode`
  Specifies how Weights & Biases (W&B) logging is handled.

  - Default: `"disabled"`
  - Options:
    - `"online"` — log to the W&B server in real time
    - `"offline"` — save logs locally to sync later
    - `"disabled"` — turn off W&B logging entirely

  If you want to enable W&B logging, make sure to also configure:

  - `wandb.project` — the name of your W&B project
  - `wandb.entity` — your W&B username or team name

  Example override:
  ```bash
  wandb.mode=online wandb.project=EDGS wandb.entity=your_username train.gs_epochs=15_000 init_wC.matches_per_ref=15_000
  ```
</details>
<br>
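
After training, the optimized splats are saved under `gs.dataset.model_path`. Assuming the standard 3DGS output layout of `point_cloud/iteration_<N>/point_cloud.ply` (an assumption worth verifying in your output folder), you can inspect the result with `plyfile`, which is installed above:

```python
# Inspect a trained model. The path follows the standard 3DGS output layout
# (an assumption; check your actual output folder).
from plyfile import PlyData

model_path = "<output folder>"  # as passed to gs.dataset.model_path
ply = PlyData.read(f"{model_path}/point_cloud/iteration_30000/point_cloud.ply")
vertices = ply["vertex"]
print("number of Gaussians:", vertices.count)
print("first fields:", [p.name for p in vertices.properties][:8])
```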

To run the full evaluation on all datasets:

```bash
python full_eval.py -m360 <mipnerf360 folder> -tat <tanks and temples folder> -db <deep blending folder>
```
<a id="sec-reuse"></a>
## 🏗️ Reusing Our Model

Our model is essentially a better **initialization module** for Gaussian Splatting. You can integrate it into your pipeline by calling:

```python
source.corr_init.init_gaussians_with_corr(...)
```
### Input arguments:
- A GaussianModel and Scene instance
- A configuration namespace `cfg.init_wC` specifying parameters such as the number of matches, neighbors, and reference views
- A RoMa model (automatically instantiated if not provided)
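
A rough integration sketch follows. The argument names and order are illustrative (check `source/corr_init.py` for the exact signature), and the parameter values mirror the training example above:

```python
# Illustrative sketch; see source/corr_init.py for the exact signature.
from omegaconf import OmegaConf
from source.corr_init import init_gaussians_with_corr

def densely_initialize(gaussians, scene):
    """Replace the sparse SfM initialization with EDGS correspondences.

    `gaussians` is a GaussianModel and `scene` a Scene, both constructed
    as in train.py.
    """
    cfg = OmegaConf.create(
        {"init_wC": {"matches_per_ref": 20_000, "nns_per_ref": 3, "num_refs": 180}}
    )
    # A RoMa matcher is instantiated internally when none is passed.
    init_gaussians_with_corr(gaussians, scene, cfg.init_wC)
    # Continue with standard 3DGS optimization, keeping densification off.
```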

<a id="sec-citation"></a>
## 📄 Citation
```bibtex
@misc{kotovenko2025edgseliminatingdensificationefficient,
      title={EDGS: Eliminating Densification for Efficient Convergence of 3DGS},
      author={Dmytro Kotovenko and Olga Grebenkova and Björn Ommer},
      year={2025},
      eprint={2504.13204},
      archivePrefix={arXiv},
      primaryClass={cs.GR},
      url={https://arxiv.org/abs/2504.13204},
}
```

---
title: Final de Moviles-3dgs
emoji: 🚀
colorFrom: blue
colorTo: purple
sdk: docker
app_file: main.py
pinned: false
---