TINKER: Diffusion's Gift to 3D--Multi-View Consistent Editing From Sparse Inputs without Per-Scene Optimization

1Zhejiang University, 2Zhejiang University of Technology
*Equal contribution

TINKER achieves generalizable 3D editing from one or a few inputs without per-scene optimization.

Abstract

We introduce Tinker, a versatile framework for high-fidelity 3D editing that operates in both one-shot and few-shot regimes without any per-scene finetuning. Unlike prior techniques that demand extensive per-scene optimization to ensure multi-view consistency, or that require dozens of consistently edited input views, Tinker delivers robust, multi-view consistent edits from as few as one or two images. This capability stems from repurposing pretrained diffusion models, which unlocks their latent 3D awareness. To drive research in this space, we curate the first large-scale multi-view editing dataset and data pipeline, spanning diverse scenes and styles. Building on this dataset, we develop a framework that generates multi-view consistent edited views without per-scene training, consisting of two novel components: (1) Referring multi-view editor: enables precise, reference-driven edits that remain coherent across all viewpoints. (2) Any-view-to-video synthesizer: leverages spatial-temporal priors from video diffusion to perform high-quality scene completion and novel-view generation even from sparse inputs. Extensive experiments show that Tinker significantly reduces the barrier to generalizable 3D content creation, achieving state-of-the-art performance on editing, novel-view synthesis, and rendering enhancement tasks. We believe that Tinker represents a key step towards truly scalable, zero-shot 3D editing.
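The two-stage design described above can be sketched in pseudocode. This is a minimal illustrative sketch only; the function names, signatures, and string stand-ins are assumptions, not the authors' actual API, and the real components are diffusion models operating on images and camera poses.

```python
# Hypothetical sketch of a Tinker-style two-stage pipeline.
# All names below (edit_reference_views, complete_novel_views, etc.)
# are illustrative assumptions, not the paper's actual interface.

def edit_reference_views(views, instruction):
    """Stage 1 (referring multi-view editor): apply the edit
    instruction to the sparse input views so the edits stay
    mutually consistent. Strings stand in for edited images."""
    return [f"edited({v}, {instruction!r})" for v in views]

def complete_novel_views(edited_views, target_poses):
    """Stage 2 (any-view-to-video synthesizer): propagate the
    edited views to every requested camera pose, standing in for
    the video-diffusion scene completion."""
    return {pose: f"render({pose}, from={edited_views})" for pose in target_poses}

def tinker_pipeline(views, instruction, target_poses):
    # One or a few input views in, a full set of consistent
    # edited novel views out -- no per-scene optimization step.
    edited = edit_reference_views(views, instruction)
    return complete_novel_views(edited, target_poses)

outputs = tinker_pipeline(
    ["view_0"],
    "turn the scene into an oil painting",
    ["pose_0", "pose_1", "pose_2"],
)
print(len(outputs))  # one edited render per requested pose
```

The key point the sketch captures is that per-scene cost is replaced by two feed-forward passes: editing a handful of reference views, then completing the remaining viewpoints from them.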

Comparisons with existing methods

Tinker achieves high-quality 3D editing results with only one or a few inputs, even for scenes with substantial overall style changes, such as oil paintings or black-and-white comics. No per-scene finetuning is required.


Additional Features

Video Reconstruction

Tinker supports high-quality video reconstruction with only the first frame and video depth as inputs.

Video comparison: ground truth (GT) vs. our reconstruction.

Enhancing 3DGS Quality

Tinker can also enhance the rendering quality of 3D Gaussian Splatting (3DGS), since such quality improvement can be regarded as a specialized form of editing.


BibTeX

@article{zhao2025tinker,
  author    = {Zhao, Canyu and Li, Xiaoman and Feng, Tianjian and Zhao, Zhiyue and Chen, Hao and Shen, Chunhua},
  title     = {Tinker: Generalizable 3D Editing through Sparse Inputs without Per-Scene Finetuning},
  year      = {2025},
}