DeepLight: Learning Illumination for Unconstrained Mobile Mixed Reality

ABSTRACT
We present a learning-based method to infer plausible high dynamic range (HDR), omnidirectional illumination given an unconstrained, low dynamic range (LDR) image from a mobile phone camera with a limited field of view (FOV). For training data, we collect videos of various reflective spheres placed within the camera's FOV, leaving most of the background unoccluded, leveraging the fact that materials with diverse reflectance functions reveal different lighting cues in a single exposure. We train a deep neural network to regress from the LDR background image to HDR lighting by matching the LDR ground truth sphere images to those rendered with the predicted illumination using image-based relighting, which is differentiable. Our inference runs at interactive frame rates on a mobile device, enabling realistic rendering of virtual objects into real scenes for mobile mixed reality. Training on auto-exposed and white-balanced videos, we improve the realism of rendered objects compared to state-of-the-art methods for both indoor and outdoor scenes.
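The differentiable image-based relighting supervision described above can be sketched in a few lines of NumPy: a sphere rendered under an environment map is a linear combination of per-light-direction basis images, so the rendering, and hence the training loss, is differentiable in the predicted lighting. The function names and the clipped L1 loss below are illustrative assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def render_sphere_ibl(basis_images, hdr_env):
    """Image-based relighting: render a sphere under `hdr_env` as a
    linear combination of per-light-direction basis images.

    basis_images: (num_lights, H, W, 3) reflectance basis of the sphere
    hdr_env:      (num_lights, 3) RGB intensity per lighting direction
    returns:      (H, W, 3) rendered sphere image
    """
    # Sum over the lighting-direction axis, per color channel.
    return np.einsum('lhwc,lc->hwc', basis_images, hdr_env)

def relighting_loss(pred_env, basis_images, ldr_sphere_gt):
    """L1 loss between an LDR ground-truth sphere crop and the sphere
    rendered under the predicted HDR illumination. Clipping to [0, 1]
    mimics the camera's limited dynamic range (an illustrative choice)."""
    rendered = np.clip(render_sphere_ibl(basis_images, pred_env), 0.0, 1.0)
    return np.abs(rendered - ldr_sphere_gt).mean()
```

Because the rendering is linear in the lighting coefficients, gradients of the loss with respect to the predicted environment map are straightforward to compute in any autodiff framework, which is what allows the network to be supervised with only LDR sphere images.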