🧊 FrozenRecon: Pose-free 3D Scene Reconstruction with Frozen Depth Models

ICCV 2023


Guangkai Xu1*, Wei Yin2*, Hao Chen3, Chunhua Shen3,4, Kai Cheng1, Feng Zhao1

1University of Science and Technology of China    2DJI Technology    3Zhejiang University    4Ant Group
* denotes equal contribution

Abstract


⏳ Reconstruct a 3D scene from your pose-free video with 🧊 FrozenRecon in around a quarter of an hour! ⌛
FrozenRecon demo

3D scene reconstruction is a long-standing vision task. Existing approaches can be categorized into geometry-based and learning-based methods. The former leverages multi-view geometry but can suffer catastrophic failures because it relies on accurate pixel correspondences across views. The latter was proposed to mitigate these issues by learning 2D or 3D representations directly. However, without large-scale video or 3D training data, it can hardly generalize to diverse real-world scenarios, owing to the tens of millions or even billions of parameters that must be optimized in the deep network. Recently, robust monocular depth estimation models trained on large-scale datasets have been shown to possess a weak 3D geometry prior, but they are insufficient for reconstruction because of unknown camera parameters, the affine-invariant property of their predictions, and inter-frame inconsistency. Here, we propose a novel test-time optimization approach that transfers the robustness of affine-invariant depth models such as LeReS to challenging, diverse scenes while ensuring inter-frame consistency, with only dozens of parameters to optimize per video frame. Specifically, our approach freezes the pre-trained affine-invariant depth model's predictions, rectifies them by optimizing their unknown scale-shift values with a geometric consistency alignment module, and employs the resulting scale-consistent depth maps to robustly recover camera poses and achieve dense scene reconstruction, even in low-texture regions. Experiments show that our method achieves state-of-the-art cross-dataset reconstruction on five zero-shot testing datasets.


Pipeline overview


FrozenRecon Pipeline

Given a monocular video, we use a frozen, robust monocular depth estimation model to obtain estimated depths for all frames. We then propose a geometric consistency alignment module that optimizes a sparse set of parameters (i.e., scale, shift, and weight factors) to achieve multi-view geometrically consistent depths across all frames. The camera intrinsic parameters and poses are optimized simultaneously. Finally, we achieve high-quality dense 3D reconstruction with the optimized depths and camera parameters. A minimal sketch of this sparse per-frame optimization is given below.
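The sketch below illustrates the core idea only: each frame's affine-invariant depth prediction stays frozen, and only a per-frame scale and shift are optimized at test time. The function and parameter names are hypothetical, and the consistency loss is a simplified stand-in; in FrozenRecon, depths are warped across views using the jointly optimized camera intrinsics and poses before being compared.

```python
import torch

def optimize_scale_shift(depths: torch.Tensor, num_iters: int = 500, lr: float = 1e-2):
    """Hypothetical sketch: rectify frozen affine-invariant depths (N, H, W)
    by optimizing one scale and one shift per frame, i.e. only a few dozen
    parameters per video rather than the weights of the depth network."""
    n = depths.shape[0]
    scale = torch.ones(n, requires_grad=True)   # per-frame scale factors
    shift = torch.zeros(n, requires_grad=True)  # per-frame shift offsets
    optimizer = torch.optim.Adam([scale, shift], lr=lr)

    for _ in range(num_iters):
        optimizer.zero_grad()
        # Apply the per-frame affine correction; the depth predictions
        # themselves are never updated (the depth model stays frozen).
        aligned = depths * scale.view(-1, 1, 1) + shift.view(-1, 1, 1)
        # Simplified placeholder consistency term between neighboring frames.
        # The actual geometric consistency alignment module reprojects depths
        # across views with the optimized intrinsics and poses.
        loss = (aligned[1:] - aligned[:-1]).abs().mean()
        loss.backward()
        optimizer.step()

    return scale.detach(), shift.detach()
```

Because only the sparse scale-shift (and, in the full method, pose and intrinsic) parameters are updated, the optimization remains lightweight enough to run on a single video in minutes.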


Citation


@inproceedings{xu2023frozenrecon,
  title={FrozenRecon: Pose-free 3D Scene Reconstruction with Frozen Depth Models},
  author={Xu, Guangkai and Yin, Wei and Chen, Hao and Shen, Chunhua and Cheng, Kai and Zhao, Feng},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={9310--9320},
  year={2023}
}

Recommended works from our group


Welcome to check out our other works on monocular depth estimation (AdelaiDepth). Our work Metric3D is the winner of the 2nd Monocular Depth Estimation Challenge workshop.