DynVideo-E: Harnessing Dynamic NeRF for Large-Scale Motion- and View-Change Human-Centric Video Editing

¹Show Lab, ²National University of Singapore, ³ARC Lab, Tencent PCG

Method

(1) Our video-NeRF model represents the input video as a 3D foreground canonical human space coupled with a deformation field, together with a 3D static background space. (2) Orange flowchart: given the reference subject image, we edit the animatable canonical human space under multi-view multi-pose configurations by leveraging reconstruction losses, 2D personalized diffusion priors, 3D diffusion priors, and local-parts super-resolution. (3) Green flowchart: a style transfer loss in feature space transfers the reference style to our 3D background model. (4) The edited video is then rendered from the edited video-NeRF model via volume rendering under the source video camera poses.
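To make step (2) concrete, below is a minimal PyTorch-style sketch of one multi-view multi-pose SDS update on the canonical human space. The names render_canonical_human, diffusion_2d, diffusion_3d, sample_camera, and sample_pose are hypothetical stand-ins, and the noise schedule and loss weighting are illustrative; the actual DynVideo-E training loop differs in details not shown here.

import torch

def sds_surrogate_loss(eps_pred, eps, x, weight=1.0):
    # Score Distillation Sampling: inject weight * (eps_pred - eps) as the
    # gradient w.r.t. the rendering x through a surrogate loss.
    grad = weight * (eps_pred - eps)
    return (grad.detach() * x).sum()

def edit_step(optimizer, render_canonical_human, diffusion_2d, diffusion_3d,
              sample_camera, sample_pose):
    # Sample a random camera and body pose so that edits cover the whole
    # animatable canonical human space, not only the observed frames.
    camera, pose = sample_camera(), sample_pose()
    x = render_canonical_human(camera, pose)    # differentiable volume render

    # Noise the rendering at a random timestep and query both priors
    # (2D personalized diffusion prior and 3D diffusion prior).
    t = torch.randint(20, 980, (1,))
    eps = torch.randn_like(x)
    noisy = diffusion_2d.add_noise(x, eps, t)
    loss = sds_surrogate_loss(diffusion_2d.predict_noise(noisy, t), eps, x) \
         + sds_surrogate_loss(diffusion_3d.predict_noise(noisy, t, camera), eps, x)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()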

Abstract

Despite remarkable research advances in diffusion-based video editing, existing methods remain limited to short videos due to the tension between frame-wise editing and long-range temporal consistency. Recent approaches attempt to tackle this challenge by introducing video-2D representations that reduce video editing to image editing. However, they struggle with videos containing large-scale motion and view changes, especially human-centric videos.

This motivates us to introduce dynamic Neural Radiance Fields (NeRF) as the human-centric video representation, reducing the video editing problem to a 3D space editing task. Editing can then be performed in the 3D spaces and propagated to the entire video via the deformation field. To enable finer and more directly controllable editing, we propose an image-based 3D space editing pipeline with a set of effective designs: multi-view multi-pose Score Distillation Sampling (SDS) from both 2D personalized diffusion priors and 3D diffusion priors, reconstruction losses on the reference image, text-guided local-parts super-resolution, and style transfer for the 3D background space.
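As one concrete illustration of the background-editing component, the following is a minimal sketch of a feature-space style transfer loss, assuming a Gram-matrix loss on frozen VGG-16 features; the chosen layers and the exact style loss used in the paper may differ.

import torch
import torchvision

# Frozen VGG-16 feature extractor; the style layers below are illustrative.
_vgg = torchvision.models.vgg16(weights="IMAGENET1K_V1").features.eval()
for p in _vgg.parameters():
    p.requires_grad_(False)
_STYLE_LAYERS = {3, 8, 15, 22}          # relu1_2, relu2_2, relu3_3, relu4_3
_MEAN = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
_STD = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

def _gram(feat):
    # Channel-wise Gram matrix, normalized by feature size.
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_loss(rendered_bg, reference_style):
    # Match Gram statistics of the rendered background to the reference style
    # image. Both inputs: (B, 3, H, W) tensors in [0, 1].
    x = (rendered_bg - _MEAN) / _STD
    y = (reference_style - _MEAN) / _STD
    loss = 0.0
    for i, layer in enumerate(_vgg):
        x, y = layer(x), layer(y)
        if i in _STYLE_LAYERS:
            loss = loss + torch.nn.functional.mse_loss(_gram(x), _gram(y))
    return loss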

Extensive experiments demonstrate that our method, dubbed DynVideo-E, significantly outperforms state-of-the-art (SOTA) approaches on two challenging datasets by a large margin of 50% ∼ 95% in terms of human preference. Our code and data will be released to the community.

Video

BibTeX

@article{liu2023dynvideoe,
  title     = {DynVideo-E: Harnessing Dynamic NeRF for Large-Scale Motion- and View-Change Human-Centric Video Editing},
  author    = {Liu, Jia-Wei and Cao, Yan-Pei and Wu, Jay Zhangjie and Mao, Weijia and Gu, Yuchao and Zhao, Rui and Keppo, Jussi and Shan, Ying and Shou, Mike Zheng},  
  journal   = {arXiv preprint arXiv:2310.10624},
  year      = {2023},
}