DOI: 10.1145/3306214.3338614

poster

Compositing light field video using multiplane images

Published: 28 July 2019

ABSTRACT

We present a variety of new compositing techniques using Multi-plane Images (MPIs) [Zhou et al. 2018] derived from footage shot with an inexpensive and portable light field video camera array. The effects include camera stabilization, foreground object removal, synthetic depth of field, and deep compositing. Traditional compositing is based on layering RGBA images to visually integrate elements into the same scene, and it often requires manual 2D and/or 3D artist intervention to achieve realism in the presence of volumetric effects such as smoke or splashing water. We leverage the newly introduced DeepView solver [Flynn et al. 2019] and a light field camera array to generate MPIs, stored in the DeepEXR format, for compositing with realistic spatial integration and a simple workflow that offers new creative capabilities. We demonstrate this technique by combining footage, with minimal artist intervention, to produce results that would otherwise be very challenging and time-intensive to achieve using traditional methods.
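The layered compositing the abstract refers to can be illustrated with the standard premultiplied-alpha "over" operator applied back to front across a stack of RGBA planes. The sketch below is illustrative only and is not the authors' implementation; the function names (`over`, `composite_planes`) and the far-to-near plane ordering are assumptions for this example.

```python
import numpy as np

def over(front, back):
    # Premultiplied-alpha "over" operator: the nearer layer occludes
    # the farther one in proportion to its alpha.
    # front, back: float arrays of shape (H, W, 4), channels RGBA.
    return front + back * (1.0 - front[..., 3:4])

def composite_planes(planes):
    # Flatten a stack of RGBA planes, ordered far to near,
    # into a single RGBA image by repeated "over" compositing.
    out = np.zeros_like(planes[0])
    for plane in planes:  # far to near
        out = over(plane, out)
    return out

# Example: an opaque red far plane under a half-transparent green near plane.
far = np.array([[[1.0, 0.0, 0.0, 1.0]]])   # premultiplied RGBA
near = np.array([[[0.0, 0.5, 0.0, 0.5]]])  # premultiplied RGBA
result = composite_planes([far, near])      # → [[[0.5, 0.5, 0.0, 1.0]]]
```

An MPI extends this idea by placing each RGBA plane at a fixed depth in a camera frustum, so the same back-to-front reduction produces a rendered view; deep-EXR-style compositing stores the per-depth samples rather than the flattened result.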


Supplemental Material

a67-duvall.mp4 (mp4, 35.9 MB)

References

  1. John Flynn, Michael Broxton, Paul Debevec, Matthew DuVall, Graham Fyffe, Ryan Overbeck, Noah Snavely, and Richard Tucker. 2019. DeepView: View synthesis with learned gradient descent. In CVPR 2019.
  2. Florian Kainz. 2013. Interpreting OpenEXR Deep Pixels. Retrieved from https://lists.nongnu.org/archive/html/openexr-devel/2013-09/pdtPyDOYBTkje.pdf.
  3. Tom Lokovic and Eric Veach. 2000. Deep shadow maps. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques.
  4. Tinghui Zhou, Richard Tucker, John Flynn, Graham Fyffe, and Noah Snavely. 2018. Stereo magnification: Learning view synthesis using multiplane images. In SIGGRAPH 2018.

Published in

SIGGRAPH '19: ACM SIGGRAPH 2019 Posters
July 2019, 148 pages
ISBN: 9781450363143
DOI: 10.1145/3306214

Copyright © 2019 Owner/Author

      Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery, New York, NY, United States

