Title
DoubleField: Bridging the Neural Surface and Radiance Fields for High-fidelity Human Reconstruction and Rendering
Abstract
We introduce DoubleField, a novel framework that combines the merits of both the surface field and the radiance field for high-fidelity human reconstruction and rendering. Within DoubleField, the surface field and the radiance field are associated through a shared feature embedding and a surface-guided sampling strategy. Moreover, a view-to-view transformer is introduced to fuse multi-view features and learn view-dependent features directly from high-resolution inputs. With the modeling power of DoubleField and the view-to-view transformer, our method significantly improves the reconstruction quality of both geometry and appearance, while supporting direct inference, scene-specific high-resolution finetuning, and fast rendering. The efficacy of DoubleField is validated by quantitative evaluations on several datasets and by qualitative results in a real-world sparse multi-view system, demonstrating its superior capability for high-quality human model reconstruction and photo-realistic free-viewpoint human rendering. The data and source code will be made publicly available for research purposes.
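The following is a minimal sketch of the architecture outlined in the abstract: a shared feature embedding that feeds both a surface (occupancy) head and a view-dependent radiance head, together with a simple surface-guided sampling routine. All class names, layer sizes, and the jitter-based sampling heuristic are illustrative assumptions, not the authors' released implementation.

# Minimal, illustrative sketch of the DoubleField idea from the abstract:
# a shared embedding bridges a surface field and a radiance field, and
# radiance samples are drawn near a coarse surface estimate.
# Everything here (names, sizes, sampling heuristic) is an assumption.
import torch
import torch.nn as nn


class DoubleFieldSketch(nn.Module):
    def __init__(self, point_dim: int = 3, view_dim: int = 3, feat_dim: int = 128):
        super().__init__()
        # Shared feature embedding associating the two fields (assumed MLP).
        self.shared = nn.Sequential(
            nn.Linear(point_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
        )
        # Surface field head: occupancy in [0, 1].
        self.surface_head = nn.Sequential(nn.Linear(feat_dim, 1), nn.Sigmoid())
        # Radiance field head: view-dependent RGB plus density.
        self.radiance_head = nn.Sequential(
            nn.Linear(feat_dim + view_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, 4),  # (r, g, b, sigma)
        )

    def forward(self, points: torch.Tensor, view_dirs: torch.Tensor):
        feat = self.shared(points)                 # shared embedding
        occupancy = self.surface_head(feat)        # surface field output
        radiance = self.radiance_head(torch.cat([feat, view_dirs], dim=-1))
        return occupancy, radiance


def surface_guided_samples(surface_points: torch.Tensor, n_per_point: int = 8,
                           sigma: float = 0.01) -> torch.Tensor:
    """Jitter samples around coarse surface points; a stand-in for the
    paper's surface-guided sampling strategy."""
    noise = torch.randn(surface_points.shape[0], n_per_point, 3) * sigma
    return surface_points.unsqueeze(1) + noise     # (N, n_per_point, 3)


if __name__ == "__main__":
    model = DoubleFieldSketch()
    coarse_surface = torch.rand(16, 3)             # placeholder surface points
    samples = surface_guided_samples(coarse_surface).reshape(-1, 3)
    dirs = torch.randn_like(samples)
    dirs = dirs / dirs.norm(dim=-1, keepdim=True)  # unit view directions
    occ, rgb_sigma = model(samples, dirs)
    print(occ.shape, rgb_sigma.shape)              # (128, 1) and (128, 4)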
Year
2022
DOI
10.1109/CVPR52688.2022.01541
Venue
IEEE Conference on Computer Vision and Pattern Recognition
Keywords
3D from multi-view and sensors, 3D from single images, Image and video synthesis and generation
DocType
Conference
Volume
2022
Issue
1
Citations
0
PageRank
0.34
References
0
Authors
7
Name            Order   Citations   PageRank
Ruizhi Shao     1       0           0.34
Hongwen Zhang   2       0           1.01
He Zhang        3       0           0.34
Mingjia Chen    4       0           0.34
Yan-Pei Cao     5       0           0.34
Tao Yu          6       8           5.87
Yebin Liu       7       688         49.05