DGH: Dynamic Gaussian Hair

NeurIPS 2025

Junying Wang1,2, Yuanlu Xu2, Edith Tretschk2, Ziyan Wang2, Anastasia Ianina2, Aljaz Bozic2,
Ulrich Neumann1, Tony Tung2

1University of Southern California, 2Meta Reality Labs Research

Teaser image placeholder

Dynamic Gaussian Hair (DGH) is a framework that learns dynamic deformation and photorealistic novel-view synthesis of arbitrary hairstyles driven by head motion, while respecting upper-body collisions. At runtime, given a hairstyle and head motion (a), DGH infers initial hair deformations (b), refines the deformations with dynamics (c), and generates 3D Gaussian splats for photorealistic novel-view synthesis (d).

Abstract

Creating photorealistic dynamic hair remains a major challenge in digital human modeling due to complex motion, occlusion, and light scattering. Existing methods often resort to static capture or physics-based models that do not scale: they require manual parameter tuning to handle the diversity of hairstyles and motions, and heavy computation to obtain high-quality appearance. In this paper, we present Dynamic Gaussian Hair (DGH), a novel framework that efficiently learns hair dynamics and appearance. We propose (1) a coarse-to-fine model that learns temporally coherent hair motion dynamics across diverse hairstyles, and (2) a strand-guided optimization module that learns a dynamic 3D Gaussian representation of hair appearance with support for differentiable rendering, enabling gradient-based learning of view-consistent appearance under motion. Unlike prior simulation-based pipelines, our approach is fully data-driven, scales with training data, and generalizes across hairstyles and head motion sequences. Additionally, DGH can be seamlessly integrated into a 3D Gaussian avatar framework, enabling realistic, animatable hair for high-fidelity avatars. DGH achieves promising geometry and appearance results, offering a scalable, data-driven alternative to physics-based simulation and rendering.

Our Goal

Given a canonical groom and head pose, our model generates hair deformation, adds hair dynamics, and enables novel-view synthesis of dynamic hair with 3D Gaussian splatting.

Overview

Framework overview of the Dynamic Gaussian Hair (DGH) model

Framework Overview. DGH learns hair deformation dynamics and photorealistic appearance in two stages.

Stage I: Coarse-to-Fine Dynamic Hair Modeling. The input hair model and the upper body are transformed into a canonical hair volume V_hair^rigid and a pose volume V_pose, respectively. A coarse-to-fine strategy then deforms the hair model. At the coarse stage, points p_i are sampled from the rigidly transformed hair, and interpolated features from E_pose and E_hair, the head pose H, and the positional encoding E(p) are concatenated and fed into an MLP M to predict displacements Δp, producing deformed hair points P_hair. The fine stage refines the hair deformation with dynamics by estimating a flow F_flow^t through cross-attention between volumetric features from the previous frames, V_hair^(t-2) and V_hair^(t-1), ensuring smooth temporal transitions.

Stage II: Appearance Optimization. We train an MLP D to predict the color c', scale s', and opacity α' of 3D Gaussian splats from features of the deformed hair. Differentiable rasterization leverages this appearance model to synthesize high-quality renderings that adapt to hair movement and occlusion dynamics.
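A minimal PyTorch sketch of the coarse deformation step in Stage I follows. The feature dimensions, the grid_sample-based trilinear interpolation, and the MLP layer sizes are illustrative assumptions, not the paper's exact architecture; only the overall flow (sample points, gather volume features, concatenate with head pose and positional encoding, predict Δp) follows the description above.

import torch
import torch.nn as nn
import torch.nn.functional as F

def positional_encoding(p, num_freqs=6):
    """Sinusoidal encoding E(p): (N, 3) -> (N, 6 * num_freqs)."""
    freqs = (2.0 ** torch.arange(num_freqs, device=p.device)) * torch.pi
    angles = p.unsqueeze(-1) * freqs                         # (N, 3, num_freqs)
    return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(1)

def sample_volume(volume, p):
    """Trilinearly interpolate per-point features from a feature volume.
    volume: (1, C, D, H, W); p: (N, 3) in [-1, 1] grid coordinates."""
    grid = p.view(1, -1, 1, 1, 3)                            # (1, N, 1, 1, 3)
    feats = F.grid_sample(volume, grid, align_corners=True)  # (1, C, N, 1, 1)
    return feats.view(volume.shape[1], -1).t()               # (N, C)

class CoarseDeformationMLP(nn.Module):
    """MLP M: per-point features -> displacement Δp (assumed sizes)."""
    def __init__(self, c_hair=32, c_pose=32, c_head=6, c_pe=36, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(c_hair + c_pose + c_head + c_pe, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, p, v_hair, v_pose, head_pose):
        f_hair = sample_volume(v_hair, p)             # features from E_hair
        f_pose = sample_volume(v_pose, p)             # features from E_pose
        h = head_pose.expand(p.shape[0], -1)          # head pose H, (1, c_head)
        x = torch.cat([f_hair, f_pose, h, positional_encoding(p)], dim=-1)
        delta_p = self.mlp(x)                         # displacements Δp
        return p + delta_p                            # deformed points P_hair

The fine stage would follow the same pattern but replace the per-point MLP with cross-attention between the volumetric features of frames t-2 and t-1 to predict the flow F_flow^t; we omit it here for brevity.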
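Below is a similarly hedged sketch of the Stage II appearance head: an MLP D mapping per-point features of the deformed hair to 3D Gaussian color c', scale s', and opacity α'. Layer sizes and activations are assumptions; the rasterizer is left abstract, as any differentiable 3D Gaussian splatting backend could fill that role.

import torch
import torch.nn as nn

class GaussianAppearanceMLP(nn.Module):
    """MLP D: deformed-hair features -> 3D Gaussian appearance parameters."""
    def __init__(self, c_feat=64, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(c_feat, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.color_head = nn.Linear(hidden, 3)     # color c'
        self.scale_head = nn.Linear(hidden, 3)     # scale s'
        self.opacity_head = nn.Linear(hidden, 1)   # opacity α'

    def forward(self, feats):
        h = self.trunk(feats)
        color = torch.sigmoid(self.color_head(h))      # RGB in [0, 1]
        scale = torch.exp(self.scale_head(h))          # positive per-axis scales
        opacity = torch.sigmoid(self.opacity_head(h))  # opacity in [0, 1]
        return color, scale, opacity

In the full pipeline, these parameters, together with Gaussian means placed on the deformed hair points, would be fed to a differentiable rasterizer and supervised with image-space losses, so that the learned appearance adapts to hair movement and occlusion.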

Video Demo

Video demo of Dynamic Gaussian Hair (DGH).

BibTeX

@inproceedings{wang2025dgh,
  title     = {DGH: Dynamic Gaussian Hair},
  author    = {Wang, Junying and Xu, Yuanlu and Tretschk, Edith and Wang, Ziyan
               and Ianina, Anastasia and Bozic, Aljaz and Neumann, Ulrich and Tung, Tony},
  booktitle = {The Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS)},
  year      = {2025}
}