Research Article

tempoGAN: a temporally coherent, volumetric GAN for super-resolution fluid flow

Published: 30 July 2018

Abstract

We propose a temporally coherent generative model addressing the super-resolution problem for fluid flows. Our work represents a first approach to synthesize four-dimensional physics fields with neural networks. Based on a conditional generative adversarial network that is designed for the inference of three-dimensional volumetric data, our model generates consistent and detailed results by using a novel temporal discriminator, in addition to the commonly used spatial one. Our experiments show that the generator is able to infer more realistic high-resolution details by using additional physical quantities, such as low-resolution velocities or vorticities. Besides improvements in the training process and in the generated outputs, these inputs offer means for artistic control as well. We additionally employ a physics-aware data augmentation step, which is crucial to avoid overfitting and to reduce memory requirements. In this way, our network learns to generate advected quantities with highly detailed, realistic, and temporally coherent features. Our method works instantaneously, using only a single time-step of low-resolution fluid data. We demonstrate the abilities of our method using a variety of complex inputs and applications in two and three dimensions.
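To make the loss structure described above concrete, the following is a minimal sketch of a generator objective that combines a spatial and a temporal discriminator. This is not the authors' released implementation: it assumes PyTorch, placeholder networks G, D_s, and D_t, a hypothetical differentiable warp(frame, direction) helper that advects the neighboring frames to the current time step, and an illustrative L1 data term and weight.

    import torch
    import torch.nn.functional as F

    def tempogan_generator_loss(G, D_s, D_t, x_seq, y_ref, warp, l1_weight=5.0):
        # x_seq: three consecutive low-resolution inputs (density plus velocity
        #        channels), each of shape (batch, channels, depth, height, width)
        # y_ref: high-resolution ground truth for the middle time step
        # warp:  differentiable advection aligning frames t-1 and t+1 to time t
        g = [G(x) for x in x_seq]  # inferred high-resolution frames

        # Spatial discriminator judges a single frame, conditioned on its
        # low-resolution input (non-saturating GAN generator loss).
        adv_s = -torch.log(torch.sigmoid(D_s(x_seq[1], g[1])) + 1e-8).mean()

        # Temporal discriminator judges three advected, consecutive frames
        # stacked along the channel axis, penalizing temporal flickering.
        stacked = torch.cat([warp(g[0], +1), g[1], warp(g[2], -1)], dim=1)
        adv_t = -torch.log(torch.sigmoid(D_t(stacked)) + 1e-8).mean()

        # Plain L1 term anchors the output to the reference data.
        data_term = F.l1_loss(g[1], y_ref)

        return adv_s + adv_t + l1_weight * data_term

The essential design choice is that D_t judges several aligned frames at once, so detail that flickers over time is penalized even when each individual frame on its own would satisfy the spatial discriminator D_s.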


Supplemental Material

095-218.mp4 (mp4, 205.2 MB)
a95-xie.mp4 (mp4, 261 MB)




Published in

ACM Transactions on Graphics, Volume 37, Issue 4 (August 2018), 1670 pages
ISSN: 0730-0301
EISSN: 1557-7368
DOI: 10.1145/3197517

Copyright © 2018 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States
