DOI: 10.1145/3306307.3328173

DeepLight: learning illumination for unconstrained mobile mixed reality

Published: 28 July 2019

ABSTRACT

We present a learning-based method to infer plausible high dynamic range (HDR), omnidirectional illumination given an unconstrained, low dynamic range (LDR) image from a mobile phone camera with a limited field of view (FOV). For training data, we collect videos of various reflective spheres placed within the camera's FOV, leaving most of the background unoccluded, leveraging the fact that materials with diverse reflectance functions reveal different lighting cues in a single exposure. We train a deep neural network to regress from the LDR background image to HDR lighting by matching the LDR ground-truth sphere images to those rendered with the predicted illumination using image-based relighting, which is differentiable. Our inference runs at interactive frame rates on a mobile device, enabling realistic rendering of virtual objects into real scenes for mobile mixed reality. Training on auto-exposed and white-balanced videos, we improve the realism of rendered objects compared to state-of-the-art methods for both indoor and outdoor scenes.
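To make the differentiable image-based relighting step concrete, the sketch below renders a sphere as a weighted sum of per-direction reflectance basis images under the predicted lighting and compares it to the LDR ground-truth crop. The tensor shapes, the L1 photometric loss, and the clipping to the LDR range are illustrative assumptions for this sketch, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def relight_sphere(basis_images: torch.Tensor, hdr_lighting: torch.Tensor) -> torch.Tensor:
    """Image-based relighting: render a sphere under the predicted lighting.

    basis_images: (L, H, W, 3) reflectance basis, one sphere image per lighting direction.
    hdr_lighting: (L, 3)       predicted HDR RGB intensity for each direction.

    The render is a weighted sum of basis images, so it is linear in (and
    differentiable with respect to) the predicted lighting.
    """
    return (basis_images * hdr_lighting[:, None, None, :]).sum(dim=0)  # (H, W, 3)

def relighting_loss(hdr_lighting: torch.Tensor,
                    basis_images: torch.Tensor,
                    ldr_sphere_gt: torch.Tensor) -> torch.Tensor:
    """Photometric loss between the re-rendered sphere and the LDR ground-truth crop.

    The render is clipped to [0, 1] before comparison so that bright predicted
    lights are not penalized where the LDR ground truth saturates.
    """
    rendered = relight_sphere(basis_images, hdr_lighting)
    return F.l1_loss(torch.clamp(rendered, 0.0, 1.0), ldr_sphere_gt)
```

In training, hdr_lighting would be the network's prediction from the LDR background image; because the relighting render is linear in the lighting, the loss gradient flows back through it into the network.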


Supplemental Material

a46-legendre.mp4 (mp4, 181 MB)


      • Published in

        SIGGRAPH '19: ACM SIGGRAPH 2019 Talks
        July 2019
        143 pages
ISBN: 9781450363174
DOI: 10.1145/3306307

        Copyright © 2019 Owner/Author

        Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        • Published: 28 July 2019


        Qualifiers

        • invited-talk

        Acceptance Rates

Overall Acceptance Rate: 1,822 of 8,601 submissions, 21%

