CT2Hair: High-Fidelity 3D Hair Modeling using Computed Tomography

Meta Reality Labs1, Zhejiang University2, Carnegie Mellon University3
ACM SIGGRAPH 2023 (ACM Transactions on Graphics)
Selected in the Technical Papers Video Trailer

Abstract

We introduce CT2Hair, a fully automatic framework for creating high-fidelity 3D hair models that are suitable for use in downstream graphics applications. Our approach uses real-world hair wigs as input and is able to reconstruct hair strands for a wide range of hair styles. Our method leverages computed tomography (CT) to create density volumes of the hair regions, allowing us to see through the hair, unlike image-based approaches, which are limited to reconstructing the visible surface. To address the noise and limited resolution of the input density volumes, we employ a coarse-to-fine approach. This process first recovers guide strands with estimated 3D orientation fields, and then populates dense strands through a novel neural interpolation of the guide strands. The generated strands are then refined to conform to the input density volumes. We demonstrate the robustness of our approach by presenting results on a wide variety of hair styles and conducting thorough evaluations on both real-world and synthetic datasets.

Video

This video shows all of the input wigs and volumes and our reconstructed results presented in our paper. We change the light direction to show the details of the reconstructed hair strands. We also apply a physically-based simulation to the reconstruction results to show that they can be used directly in downstream applications.

Pipeline

Our pipeline takes a coarse-to-fine approach and consists of two stages: guide-strand initialization and dense-strand optimization. The first stage starts with computing a 3D orientation field from the input 3D density volume. We then generate guide strands using the calculated orientations. In the second stage, we interpolate the estimated guide strands so that they are distributed uniformly on the scalp. Next, we optimize the interpolated hair strands using the source density volume as the target. The optimized hair strands form the final 3D hair model, which is ready for downstream applications.
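The first stage (orientation estimation and strand growing) can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a structure-tensor-based orientation estimator (the eigenvector of least intensity variation follows the fiber direction) and a simple Euler-step strand tracer; the function names `orientation_field` and `trace_strand` are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def orientation_field(density, sigma=1.5):
    """Estimate a per-voxel 3D fiber direction from a density volume.

    Uses the structure tensor: the eigenvector with the smallest
    eigenvalue is the direction of least intensity variation, i.e.
    the direction along the hair fiber. (Illustrative, not the
    paper's estimator.)
    """
    gx, gy, gz = np.gradient(gaussian_filter(density, sigma))
    g = np.stack([gx, gy, gz], axis=-1)                 # (X, Y, Z, 3)
    T = np.einsum('...i,...j->...ij', g, g)             # outer product g g^T
    for i in range(3):                                  # smooth each tensor entry
        for j in range(3):
            T[..., i, j] = gaussian_filter(T[..., i, j], sigma)
    w, v = np.linalg.eigh(T)                            # eigenvalues ascending
    return v[..., 0]                                    # unit direction per voxel

def trace_strand(orient, seed, n_steps=50, step=0.5):
    """Grow a polyline from `seed` by stepping along the orientation field."""
    p = np.asarray(seed, dtype=float)
    pts = [p.copy()]
    d_prev = None
    shape = np.array(orient.shape[:3])
    for _ in range(n_steps):
        idx = tuple(np.clip(p.round().astype(int), 0, shape - 1))
        d = orient[idx]
        if d_prev is not None and np.dot(d, d_prev) < 0:
            d = -d                                      # resolve sign ambiguity
        p = p + step * d
        if np.any(p < 0) or np.any(p >= shape):
            break                                       # left the volume
        pts.append(p.copy())
        d_prev = d
    return np.array(pts)
```

On a toy volume whose density varies only in x and y, the estimated fiber direction is the z-axis, and tracing from a seed produces a straight strand along z; the real pipeline would seed strands on the scalp and then refine them against the density volume.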

Results Gallery

BibTeX

@article{shen2023CT2Hair,
    title={CT2Hair: High-Fidelity 3D Hair Modeling using Computed Tomography},
    author={Shen, Yuefan and Saito, Shunsuke and Wang, Ziyan and Maury, Olivier and Wu, Chenglei and Hodgins, Jessica and Zheng, Youyi and Nam, Giljoo},
    journal={ACM Transactions on Graphics},
    volume={42},
    number={4},
    pages={1--13},
    year={2023},
    publisher={ACM New York, NY, USA}
}