ABSTRACT
We present a variety of new compositing techniques that use multiplane images (MPIs) [Zhou et al. 2018] derived from footage shot with an inexpensive and portable light field video camera array. The effects include camera stabilization, foreground object removal, synthetic depth of field, and deep compositing. Traditional compositing is built on layering RGBA images to visually integrate elements into the same scene, and it often requires manual 2D and/or 3D artist intervention to achieve realism in the presence of volumetric effects such as smoke or splashing water. We leverage the recently introduced DeepView solver [Flynn et al. 2019] and a light field camera array to generate MPIs, stored in the DeepEXR format, for compositing with realistic spatial integration and a simple workflow that offers new creative capabilities. We demonstrate this technique by combining, with minimal artist intervention, footage that would otherwise be very challenging and time-intensive to composite using traditional techniques.
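As a minimal sketch of the layering the abstract describes, the following composites an MPI's RGBA planes back to front with the standard "over" operator (NumPy, premultiplied alpha). The function name and array layout are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def composite_mpi(planes):
    """Composite a stack of RGBA planes back to front with "over".

    planes: array of shape (D, H, W, 4), ordered farthest to nearest,
    with premultiplied alpha in [0, 1]. Returns an (H, W, 4) image.
    """
    out = np.zeros(planes.shape[1:], dtype=np.float64)
    for plane in planes:  # farthest plane first
        alpha = plane[..., 3:4]
        # Premultiplied "over": new layer plus attenuated accumulation.
        out = plane + (1.0 - alpha) * out
    return out
```

For example, a fully opaque red far plane under a half-transparent green near plane blends to equal parts red and green; novel-view rendering from an MPI applies the same operator after warping each plane by its depth-dependent homography.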
REFERENCES
- John Flynn, Michael Broxton, Paul Debevec, Matthew DuVall, Graham Fyffe, Ryan Overbeck, Noah Snavely, and Richard Tucker. 2019. DeepView: View synthesis with learned gradient descent. In CVPR.
- Florian Kainz. 2013. Interpreting OpenEXR Deep Pixels. Retrieved from https://lists.nongnu.org/archive/html/openexr-devel/2013-09/pdtPyDOYBTkje.pdf
- Tom Lokovic and Eric Veach. 2000. Deep shadow maps. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH 2000).
- Tinghui Zhou, Richard Tucker, John Flynn, Graham Fyffe, and Noah Snavely. 2018. Stereo magnification: Learning view synthesis using multiplane images. In SIGGRAPH.