Preference and artifact analysis for video transitions of places

Published: 19 August 2013

Abstract

Emerging interfaces for video collections of places attempt to link similar content with seamless transitions. However, the automatic computer vision techniques that enable these transitions have many failure cases which lead to artifacts in the final rendered transition. Under these conditions, which transitions are preferred by participants and which artifacts are most objectionable? We perform an experiment in which participants compare seven transition types, from movie cuts and dissolves to image-based warps and virtual camera transitions, across five scenes in a city. We condition this experiment on slight and considerable view change cases, and analyze the feedback from participants to find their preferences for transition types and artifacts. We discover that transition preference varies with view change, that automatically rendered transitions are significantly preferred even with some artifacts, and that dissolve transitions are comparable to less sophisticated rendered transitions. This leads to insights into which visual features are important to maintain in a rendered transition, and to an artifact ordering within our transitions.
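
The abstract does not spell out the statistical analysis, but preference orderings of this kind are commonly recovered from paired-comparison judgments with standard psychometric scaling methods. The sketch below is a minimal, illustrative Thurstone Case V example in Python, not the authors' procedure; the transition labels and win counts are hypothetical.

# Illustrative only: Thurstone Case V scaling of hypothetical paired-comparison
# data between transition types. This is not the paper's analysis pipeline.
import numpy as np
from scipy.stats import norm

transitions = ["cut", "dissolve", "warp", "virtual camera"]  # hypothetical labels

# wins[i, j]: number of trials in which transition i was preferred over j (invented counts).
wins = np.array([
    [ 0, 12,  8,  5],
    [18,  0, 14, 10],
    [22, 16,  0, 13],
    [25, 20, 17,  0],
], dtype=float)

totals = wins + wins.T                                   # comparisons per pair
p = np.divide(wins, totals,                              # preference proportions
              out=np.full_like(wins, 0.5), where=totals > 0)
p = np.clip(p, 0.01, 0.99)                               # keep the z-transform finite

z = norm.ppf(p)                                          # Case V: unit-variance normal model
scale = z.mean(axis=1)                                   # row means give interval scale values

for name, value in sorted(zip(transitions, scale), key=lambda t: -t[1]):
    print(f"{name:>15}: {value:+.2f}")

Running the sketch prints the four hypothetical transitions ordered from most to least preferred on an interval scale; the same machinery extends to seven transition types and to separate slight versus considerable view change conditions.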

            • Published in

              ACM Transactions on Applied Perception, Volume 10, Issue 3 (Special Issue SAP 2013)
              August 2013, 83 pages
              ISSN: 1544-3558
              EISSN: 1544-3965
              DOI: 10.1145/2506206

            Copyright © 2013 ACM

            Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

            Publisher

            Association for Computing Machinery

            New York, NY, United States

            Publication History

            • Published: 19 August 2013
            • Accepted: 1 July 2013
            • Received: 1 June 2013

            Qualifiers

            • research-article
            • Research
            • Refereed
