🌟 STAR: Skeleton-aware Text-based 4D Avatar Generation with In-Network Motion Retargeting

arXiv Preprint

Zenghao Chai
National University of Singapore
Chen Tang
Tsinghua University
Yongkang Wong
National University of Singapore
Mohan Kankanhalli
National University of Singapore



Abstract

The creation of 4D avatars (i.e., animated 3D avatars) from text descriptions typically uses text-to-image (T2I) diffusion models to synthesize 3D avatars in the canonical space and subsequently applies animation with target motions. However, such an optimization-by-animation paradigm has several drawbacks. (1) For pose-agnostic optimization, the canonical-pose renderings used by naive Score Distillation Sampling (SDS) exhibit a domain gap and cannot preserve view consistency using only T2I priors, and (2) for post hoc animation, simply applying the source motions to target 3D avatars yields translation artifacts and misalignment. To address these issues, we propose Skeleton-aware Text-based 4D Avatar generation with in-network motion Retargeting (STAR). STAR considers the geometry and skeleton differences between the template mesh and the target avatar, and corrects the mismatched source motion by resorting to pretrained motion retargeting techniques. With the informatively retargeted and occlusion-aware skeleton, we embrace skeleton-conditioned T2I and text-to-video (T2V) priors, and propose a hybrid SDS module to coherently provide multi-view and frame-consistent supervision signals. Hence, STAR can progressively optimize the geometry, texture, and motion in an end-to-end manner. Quantitative and qualitative experiments demonstrate that STAR synthesizes high-quality 4D avatars with vivid animations that align well with the text description. Additional ablation studies show the contribution of each component in STAR.
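For context, SDS (introduced in DreamFusion) optimizes the avatar parameters $\theta$ by pushing renderings $x = g(\theta)$ toward the diffusion prior. A standard sketch of the gradient, together with a hybrid combination of the skeleton-conditioned T2I and T2V terms described above (the weighting $\lambda$ is an illustrative assumption, not a value from the paper):

```latex
\nabla_\theta \mathcal{L}_{\mathrm{SDS}}
  = \mathbb{E}_{t,\epsilon}\!\left[ w(t)\,
    \big(\hat{\epsilon}_\phi(x_t;\, y, t) - \epsilon\big)\,
    \frac{\partial x}{\partial \theta} \right],
\qquad
\mathcal{L}_{\mathrm{hybrid}}
  = \mathcal{L}_{\mathrm{SDS}}^{\mathrm{T2I}}
  + \lambda\, \mathcal{L}_{\mathrm{SDS}}^{\mathrm{T2V}}
```

Here $\hat{\epsilon}_\phi$ is the pretrained denoiser conditioned on the text prompt $y$ (and, in STAR, on the retargeted skeleton), $\epsilon$ is the injected noise, and $w(t)$ is a timestep-dependent weight.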

Overview of the proposed STAR. Left. Given a text description, we initialize the human motion with a pretrained text-to-motion model. Note that the typical optimization-by-animation paradigm easily yields deteriorated body structures and animation artifacts for 4D avatar generation. Right. We eliminate the potential pose distribution bias in the SDS-based optimization by integrating the retargeted motion for animation. With the personalized and occlusion-aware skeleton, we leverage hybrid T2I and T2V diffusion models to provide 3D-consistent priors that progressively optimize the geometry, texture, and motion, producing the 4D avatar in an end-to-end manner.

Examples of 4D Avatars

STAR generates high-fidelity 4D avatars from only text descriptions. Here are some examples of the rendered videos from our generated avatars:
Rick Sanchez in Rick and Morty, he/she is dancing happily with arms intersected.
Slim Moana with long curve hair in movie Moana, he/she is raising a picture from the ground placing it on a wall and adjusting the fit.
Shrek wearing cotton jersey fabric clothes, he/she is performing boxing jab cross medium and kicking quickly.
Ironman in Marvel, he/she is dancing capoeira idle.
Short cute young child Miguel Rivera in movie Coco, he/she is tiptoeing and hiding his hands so no one hears him.
Harry Potter, he/she is spinning quickly and taking off running.

Export Your Assets

Our generated 3D/4D avatars are compatible with existing graphics engines; have fun loading these 4D animation assets for display.
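The page does not pin down an export format, but engine-friendly 4D assets are commonly shipped as glTF, which is plain JSON at the top level. A minimal sketch, assuming a glTF 2.0 export: the helper below lists the mesh and animation names in an asset (the `sample` asset and its names are hypothetical, for illustration only).

```python
import json

def summarize_gltf(gltf_text: str) -> dict:
    """List mesh and animation names found in a glTF 2.0 (JSON) asset."""
    gltf = json.loads(gltf_text)
    return {
        "meshes": [m.get("name", "<unnamed>") for m in gltf.get("meshes", [])],
        "animations": [a.get("name", "<unnamed>") for a in gltf.get("animations", [])],
    }

# Hypothetical minimal asset showing the structure; a real export would
# also carry geometry buffers, skinning weights, and keyframe data.
sample = json.dumps({
    "asset": {"version": "2.0"},
    "meshes": [{"name": "avatar_body"}],
    "animations": [{"name": "dance_clip"}],
})

print(summarize_gltf(sample))
```

A check like this is handy before importing into an engine, to confirm the animation clips survived the export.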


Citation

@misc{chai2024star,
  author={Chai, Zenghao and Tang, Chen and Wong, Yongkang and Kankanhalli, Mohan},
  title={STAR: Skeleton-aware Text-based 4D Avatar Generation with In-Network Motion Retargeting},
  eprint={2406.04629},
  archivePrefix={arXiv},
  year={2024},
}