Research Article | Open Access

Fast gaze-contingent optimal decompositions for multifocal displays

Published: 20 November 2017

Abstract

As head-mounted displays (HMDs) commonly present a single, fixed-focus display plane, a conflict can be created between the vergence and accommodation responses of the viewer. Multifocal HMDs have long been investigated as a potential solution in which multiple image planes span the viewer's accommodation range. Such displays require a scene decomposition algorithm to distribute the depiction of objects across image planes, and previous work has shown that simple decompositions can be achieved in real time. However, recent optimal decompositions further improve image quality, particularly with complex content. Such decompositions are more computationally involved and likely require better alignment of the image planes with the viewer's eyes, which are potential barriers to practical applications.
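For intuition, the simple real-time decompositions mentioned above commonly split each pixel's intensity between the two image planes bracketing its depth, with weights linear in diopters ("linear blending"). Below is a minimal NumPy sketch of that idea; the function name, signature, and plane parameterization are illustrative assumptions, not taken from this paper.

```python
import numpy as np

def linear_blend_decomposition(image, depth_diopters, plane_diopters):
    """Split each pixel's intensity between the two image planes bracketing
    its depth, with weights linear in diopters (a simple, non-optimal rule).

    image          : (H, W) luminance image
    depth_diopters : (H, W) per-pixel scene depth in diopters
    plane_diopters : (P,) plane focal distances in diopters, sorted ascending
    Returns a (P, H, W) stack whose per-pixel sum reproduces `image`.
    """
    plane_diopters = np.asarray(plane_diopters, dtype=float)
    num_planes = len(plane_diopters)
    planes = np.zeros((num_planes,) + image.shape)

    # Index of the lower (in diopters) of the two bracketing planes.
    idx = np.clip(np.searchsorted(plane_diopters, depth_diopters) - 1,
                  0, num_planes - 2)
    d0, d1 = plane_diopters[idx], plane_diopters[idx + 1]

    # Blend weight toward the upper plane; clipping collapses depths outside
    # the covered range onto the nearest plane.
    w = np.clip((depth_diopters - d0) / (d1 - d0), 0.0, 1.0)

    rows, cols = np.indices(image.shape)
    planes[idx, rows, cols] += (1.0 - w) * image
    planes[idx + 1, rows, cols] += w * image
    return planes
```

Per-pixel rules of this kind run trivially in real time, which is why variants of them have been the default in earlier multifocal prototypes; the optimal decompositions discussed next trade that simplicity for higher image quality.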

Our goal is to enable interactive optimal decomposition algorithms capable of driving a vergence- and accommodation-tracked multifocal testbed. Ultimately, such a testbed is necessary to establish the requirements for the practical use of multifocal displays, in terms of computational demand and hardware accuracy. To this end, we present an efficient algorithm for optimal decompositions, incorporating insights from vision science. Our method is amenable to GPU implementations and achieves a three-orders-of-magnitude speedup over previous work. We further show that eye tracking can be used for adequate plane alignment with efficient image-based deformations, adjusting for both eye rotation and head movement relative to the display. We also build the first binocular multifocal testbed with integrated eye tracking and accommodation measurement, paving the way to establish practical eye tracking and rendering requirements for this promising class of display. Finally, we report preliminary results from a pilot user study utilizing our testbed, investigating the accommodation response of users to dynamic stimuli presented under optimal decomposition.
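For readers unfamiliar with the optimization being accelerated: optimal decompositions are typically posed as a least-squares problem, choosing the plane images whose defocus-blurred sum best matches the target retinal image over a range of candidate accommodation distances. The NumPy sketch below illustrates that formulation with a naive projected-gradient solver; the Gaussian blur model, step sizes, and all names are illustrative assumptions, and this toy code is far slower than the GPU method described in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def optimal_decomposition(target, plane_diopters, eye_diopters,
                          pupil_scale=4.0, iters=30, step=0.5):
    """Fit plane images so that, for each candidate accommodation distance,
    the defocus-blurred sum of the planes approximates the target retinal
    image (a projected-gradient sketch of the least-squares formulation).

    target         : (A, H, W) target retinal images, one per accommodation state
    plane_diopters : (P,) plane focal distances in diopters
    eye_diopters   : (A,) candidate accommodation distances in diopters
    """
    plane_diopters = np.asarray(plane_diopters, dtype=float)
    eye_diopters = np.asarray(eye_diopters, dtype=float)
    P, A = len(plane_diopters), len(eye_diopters)

    # Blur width grows with dioptric defocus; the small floor avoids a
    # degenerate zero-width kernel when the eye focuses exactly on a plane.
    sigma = np.abs(plane_diopters[:, None] - eye_diopters[None, :]) * pupil_scale + 1e-3

    planes = np.full((P,) + target.shape[1:], target.mean() / P)

    def retinal_image(x, a):
        # Simulated retinal image for accommodation state a: blurred plane sum.
        return sum(gaussian_filter(x[p], sigma[p, a]) for p in range(P))

    for _ in range(iters):
        grad = np.zeros_like(planes)
        for a in range(A):
            residual = retinal_image(planes, a) - target[a]
            for p in range(P):
                # Gaussian blur is symmetric, so the adjoint is the same blur.
                grad[p] += gaussian_filter(residual, sigma[p, a])
        # Conservative step scaling, then project onto displayable values.
        planes = np.clip(planes - step * grad / (A * P), 0.0, 1.0)
    return planes
```

Even this toy solver makes the cost structure clear: every iteration blurs every plane for every accommodation state, which is the kind of per-frame cost the paper's GPU-friendly algorithm reduces by roughly three orders of magnitude.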


Supplemental Material

a237-mercier.mp4 (MP4, 341.7 MB)



Published in

ACM Transactions on Graphics, Volume 36, Issue 6 (December 2017), 973 pages
ISSN: 0730-0301
EISSN: 1557-7368
DOI: 10.1145/3130800

Copyright © 2017 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States


