DreamDissector: Learning Disentangled Text-to-3D Generation from 2D Diffusion Priors

¹FNii-Shenzhen, ²SSE, CUHKSZ, ³Guangdong Provincial Key Laboratory of Future Networks of Intelligence, CUHKSZ

ECCV 2024

[Teaser figure]

DreamDissector generates multiple independent textured meshes with plausible interactions, enabling applications such as object-level text-guided texturing, convenient manual geometry editing through simple user operations, and text-guided controllable object replacement.

Abstract

Text-to-3D generation has recently seen significant progress. To enhance its practicality in real-world applications, it is crucial to generate multiple independent objects with interactions, similar to layer-compositing in 2D image editing. However, existing text-to-3D methods struggle with this task, as they are designed to generate either non-independent objects or independent objects lacking spatially plausible interactions. Addressing this, we propose DreamDissector, a text-to-3D method capable of generating multiple independent objects with interactions. DreamDissector accepts a multi-object text-to-3D NeRF as input and produces independent textured meshes. To achieve this, we introduce the Neural Category Field (NeCF) for disentangling the input NeRF. Additionally, we present the Category Score Distillation Sampling (CSDS), facilitated by a Deep Concept Mining (DCM) module, to tackle the concept gap issue in diffusion models. By leveraging NeCF and CSDS, we can effectively derive sub-NeRFs from the original scene. Further refinement enhances geometry and texture. Our experimental results validate the effectiveness of DreamDissector, providing users with novel means to control 3D synthesis at the object level and potentially opening avenues for various creative applications in the future.
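To make the disentanglement idea concrete, the following is a minimal PyTorch sketch, not the authors' released code, of one plausible way a Neural Category Field can split a frozen input NeRF's density among K category sub-fields using a small learned probability head; the class name, layer sizes, and toy usage are illustrative assumptions.

import torch
import torch.nn as nn

class NeuralCategoryField(nn.Module):
    # Illustrative NeCF: predicts a per-point categorical distribution and
    # uses it to split the frozen input NeRF's density into sub-densities.
    def __init__(self, num_categories: int, pos_dim: int = 3, hidden: int = 64):
        super().__init__()
        self.category_head = nn.Sequential(
            nn.Linear(pos_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_categories),
        )

    def forward(self, xyz: torch.Tensor, sigma: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) sample positions; sigma: (N, 1) densities from the
        # frozen input NeRF. Returns (N, K) per-category densities that sum
        # back to the original density, so each sub-NeRF can render on its own.
        probs = torch.softmax(self.category_head(xyz), dim=-1)
        return probs * sigma

# Toy usage with random stand-ins for sample positions and densities.
necf = NeuralCategoryField(num_categories=2)
per_category_sigma = necf(torch.rand(1024, 3), torch.rand(1024, 1))
print(per_category_sigma.shape)  # torch.Size([1024, 2])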

Methodology

[Method overview figure]

We generate multiple independent, interacting 3D objects in a coarse-to-fine manner. First, we render a view of the input text-to-3D NeRF for Deep Concept Mining (DCM), which yields the T2I diffusion model and the corresponding text embedding. We then use the mined embedding and the T2I diffusion model to train the Neural Category Field (NeCF) with Category Score Distillation Sampling (CSDS). After disentangling the input NeRF, we convert the resulting sub-NeRFs into DMTets and fine-tune them. Finally, we export independent surface meshes with improved geometry and texture.
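The training loop below is a runnable but heavily simplified sketch of the CSDS stage of this pipeline; the sub-NeRF renderer and the score-distillation term are stand-ins (the real ones require the frozen input NeRF and the DCM-tuned diffusion model), and all helper names are illustrative assumptions. Its only purpose is to show the control flow: one SDS-style loss per category, each conditioned on that category's mined embedding.

import torch
import torch.nn as nn

K = 2                                                        # number of object categories
category_head = nn.Linear(3, K)                              # stand-in for the NeCF head
mined_embeddings = [torch.randn(77, 768) for _ in range(K)]  # stand-in DCM embeddings

def render_sub_nerf(category: int) -> torch.Tensor:
    # Stand-in renderer: a real implementation would volume-render the frozen
    # input NeRF using only the density the NeCF assigns to `category`.
    xyz = torch.rand(4096, 3)
    weights = torch.softmax(category_head(xyz), dim=-1)[:, category]
    return weights.reshape(64, 64)                           # fake rendered "image"

def csds_term(image: torch.Tensor, embedding: torch.Tensor) -> torch.Tensor:
    # Stand-in for score distillation with the mined T2I model: a real version
    # would noise `image`, run the frozen UNet conditioned on `embedding`, and
    # use (predicted_noise - noise) as `grad`. Zeros keep the sketch runnable
    # without loading a diffusion model.
    grad = torch.zeros_like(image)
    target = (image - grad).detach()
    return 0.5 * ((image - target) ** 2).sum()

optimizer = torch.optim.Adam(category_head.parameters(), lr=1e-3)
for step in range(100):
    # CSDS: sum one SDS-style term per category, each driven by its own
    # mined text embedding.
    loss = sum(csds_term(render_sub_nerf(k), mined_embeddings[k]) for k in range(K))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()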


Video


BibTeX

@inproceedings{yan2024dreamdissector,
  author    = {Yan, Zizheng and Zhou, Jiapeng and Meng, Fanpeng and Wu, Yushuang and Qiu, Lingteng and Ye, Zisheng and Cui, Shuguang and Chen, Guanying and Han, Xiaoguang},
  title     = {DreamDissector: Learning Disentangled Text-to-3D Generation from 2D Diffusion Priors},
  booktitle = {ECCV},
  year      = {2024},
}