Fast View Synthesis of Casual Videos with Soup-of-Planes

1University of Maryland, College Park    2Adobe Research    3Adobe

ECCV 2024

Our novel view synthesis of casual videos



Our novel view synthesis on Sora videos

We demonstrate the possibility of novel view synthesis in the video generation domain.


Abstract

Novel view synthesis from an in-the-wild video is difficult due to challenges like scene dynamics and lack of parallax. While existing methods have shown promising results with implicit neural radiance fields, they are slow to train and render. This paper revisits explicit video representations to synthesize high-quality novel views from a monocular video efficiently. We treat static and dynamic video content separately. Specifically, we build a global static scene model using an extended plane-based scene representation to synthesize temporally coherent novel video. Our plane-based scene representation is augmented with spherical harmonics and displacement maps to capture view-dependent effects and model non-planar complex surface geometry. We opt to represent the dynamic content as per-frame point clouds for efficiency. While such representations are inconsistency-prone, minor temporal inconsistencies are perceptually masked due to motion. We develop a method to quickly estimate such a hybrid video representation and render novel views in real time. Our experiments show that our method can render high-quality novel views from an in-the-wild video with comparable quality to state-of-the-art methods while being 100x faster in training and enabling real-time rendering.

Our method only takes 15 minutes to optimize a representation from an in-the-wild video and can render novel views at 27 FPS.

On the NVIDIA Dataset, our method achieves a rendering quality comparable to state-of-the-art NeRF-based methods but is much faster to train and render.

* The bubble size in the figure indicates the training time (GPU-hours). For all methods, the reported training time excludes preprocessing.

Method Overview



We first preprocess an input monocular video to obtain the video depth and pose as well as the dynamic masks. The input video is then decomposed into static and dynamic content. We initialize a soup of oriented planes by fitting them to the static scene. These planes are augmented to capture view-dependent effects and complex non-planar surface geometry. To generate the target static view, these planes are back-warped to the target view and composited efficiently from far to near. We estimate per-frame point clouds for the dynamic content together with dynamic masks. For temporal consistency, we use optical flow to blend the dynamic content from neighboring frames. The blended dynamic content is then warped to the target view. Finally, the target novel view is rendered by blending the static and dynamic novel content.
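As a rough illustration of the static rendering step, the minimal sketch below warps each plane's RGBA texture to the target view with its plane-induced homography and alpha-composites the results from far to near. It is a simplified sketch rather than the full pipeline: the Plane container, the single-reference-camera texture parameterization, and the ordering of planes by their offset are assumptions made for illustration.

import numpy as np
import cv2

class Plane:
    def __init__(self, texture_rgba, normal, distance):
        self.texture = texture_rgba                     # (H, W, 4) float32 RGBA texture in a reference view
        self.normal = normal / np.linalg.norm(normal)   # unit normal in reference-camera coordinates
        self.distance = distance                        # plane equation: normal . X = distance

def plane_homography(K_ref, K_tgt, R, t, normal, distance):
    # Homography mapping reference-view pixels to target-view pixels for one plane,
    # where [R | t] maps reference-camera coordinates to target-camera coordinates.
    return K_tgt @ (R + np.outer(t, normal) / distance) @ np.linalg.inv(K_ref)

def render_static_view(planes, K_ref, K_tgt, R, t, out_hw):
    # Warp every plane into the target view and alpha-composite from far to near ("over" operator).
    h, w = out_hw
    canvas = np.zeros((h, w, 3), np.float32)
    for plane in sorted(planes, key=lambda p: p.distance, reverse=True):  # far-to-near (simplified ordering)
        H = plane_homography(K_ref, K_tgt, R, t, plane.normal, plane.distance)
        warped = cv2.warpPerspective(plane.texture, H, (w, h))
        rgb, alpha = warped[..., :3], warped[..., 3:4]
        canvas = rgb * alpha + canvas * (1.0 - alpha)
    return canvas

Because each plane is rendered with a single homography warp and the compositing needs no per-ray sampling, the static pass remains fast enough for real-time rendering.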


View-dependent plane texture

(a) Since a flat plane cannot sufficiently represent a non-flat surface, we augment it with both view-dependent appearance and displacement, each encoded by spherical harmonic coefficients. (b) Given a view direction, we first obtain the view-specific color and displacement, then shift (backwarp) the texture into the final view-specific texture, which is warped to the target view by a planar homography. (c) Thus, planes with both view-dependent appearance and displacement can better represent complex non-planar surfaces.
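To make step (b) concrete, here is a minimal sketch that evaluates degree-2 real spherical harmonics for a given view direction to obtain the view-specific color and displacement, and then backwarps the texture with that displacement. The coefficient layout (9 SH terms per texel) and the 2-channel (u, v) displacement parameterization are assumptions for illustration, not the exact formulation in the paper.

import numpy as np
import cv2

def sh_basis_deg2(d):
    # Real spherical harmonic basis (9 terms, up to degree 2) for a unit direction d.
    x, y, z = d
    return np.array([
        0.282095,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ], dtype=np.float32)

def view_specific_texture(color_sh, disp_sh, view_dir):
    # color_sh: (H, W, 3, 9) SH coefficients for RGB; disp_sh: (H, W, 2, 9) for (u, v) offsets.
    basis = sh_basis_deg2(view_dir / np.linalg.norm(view_dir))
    color = np.einsum('hwcb,b->hwc', color_sh, basis)   # view-dependent RGB
    disp = np.einsum('hwcb,b->hwc', disp_sh, basis)     # view-dependent texel displacement
    h, w = color.shape[:2]
    u, v = np.meshgrid(np.arange(w, dtype=np.float32), np.arange(h, dtype=np.float32))
    # Backwarp: sample the view-dependent color at the displaced texel coordinates.
    map_x = (u + disp[..., 0]).astype(np.float32)
    map_y = (v + disp[..., 1]).astype(np.float32)
    return cv2.remap(color.astype(np.float32), map_x, map_y, interpolation=cv2.INTER_LINEAR)

The resulting view-specific texture is then warped to the target view with the planar homography as described above.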

Our method renders high-quality novel views comparable to state-of-the-art methods
but is >100x faster to train and render.

Comparison with the state-of-the-art

DynIBaR

Ours

*For a fair comparison, we use our video depth and pose preprocessing for both methods and use the rendering path provided by DynIBaR's official code.

Comparison with 4D-GS

4D-GS [Wu et al. 2023]

Ours

*We use our video depth and pose preprocessing for the initial 3D point cloud of 4D-GS. We train 4D-GS with their official code and default configuration.

Comparison with depth warping

Depth warping

Ours

*We use a different video depth and pose preprocessing pipeline here to demonstrate our robustness to different 3D preprocessing methods. Our method can mitigate imperfect depth estimation through scene optimization.

Ablation study

Without view-dependent displacement

With view-dependent displacement

The result without view-dependent displacement (left) may contain floaters or blurriness.

Failure cases

Input frame

Novel view synthesis

Areas unseen in the input frame may be revealed in novel views because of our per-frame dynamic scene representation.

Real-time interactive demos

We demonstrate a real-time application in which the viewpoint can be moved interactively with the mouse. The real-time demos run in an IPython notebook on an A100 GPU and are screen-recorded.
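Below is a minimal notebook sketch of such an interactive viewer. It uses ipywidgets sliders in place of raw mouse events, and render_novel_view is a hypothetical placeholder for the actual real-time renderer.

import numpy as np
import ipywidgets as widgets
import matplotlib.pyplot as plt

def render_novel_view(yaw, pitch):
    # Hypothetical placeholder: call the real-time renderer with a camera rotated
    # by (yaw, pitch) relative to the reference view and return an (H, W, 3) image.
    return np.zeros((360, 640, 3), np.float32)

def show(yaw=0.0, pitch=0.0):
    plt.figure(figsize=(6.0, 3.5))
    plt.imshow(render_novel_view(yaw, pitch))
    plt.axis('off')
    plt.show()

# Sliders stand in for mouse input; each change triggers a re-render.
widgets.interact(show, yaw=(-0.3, 0.3, 0.01), pitch=(-0.3, 0.3, 0.01))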

Stereo video application

Given an input monocular video, we can produce a stereo video through novel view synthesis from virtual left and right camera views.
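As a minimal sketch of this setup, the snippet below offsets a virtual camera by half a baseline along its own x-axis and renders a left and a right view; render_view is a hypothetical placeholder for the renderer, and the baseline value is illustrative.

import numpy as np

def stereo_pair(render_view, K, R, t, baseline=0.06):
    # (R, t) is the world-to-camera pose of the original frame; for this convention,
    # shifting the camera center by -/+ (baseline / 2) along its own x-axis
    # corresponds to adding +/- (baseline / 2) * e_x to the translation t.
    offset = np.array([baseline / 2.0, 0.0, 0.0])
    left = render_view(K, R, t + offset)    # camera shifted to the left
    right = render_view(K, R, t - offset)   # camera shifted to the right
    return np.concatenate([left, right], axis=1)  # side-by-side stereo frame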

Acknowledgements

We thank Zhengqi Li and Yu-Lun Liu for sharing their experimental results of DynIBaR and RoDynRF on the NVIDIA dataset, and Jae Shin Yoon for sharing the time performance details of [Yoon et al. CVPR 2020]. We also extend our gratitude to Tim Brooks for granting us permission to use Sora videos from the official website and from the authors' Twitter posts [1,2,3,4,5].

BibTeX

@article{lee2023casual-fvs,
  author    = {Lee, Yao-Chih and Zhang, Zhoutong and Blackburn-Matzen, Kevin and Niklaus, Simon and Zhang, Jianming and Huang, Jia-Bin and Liu, Feng},
  title     = {Fast View Synthesis of Casual Videos with Soup-of-Planes},
  journal   = {arXiv preprint arXiv:2312.02135},
  year      = {2023},
}