Progress Variable Variance and Filtered Rate Modelling Using Convolutional Neural Networks and Flamelet Methods

Abstract

A purely data-driven modelling approach using deep convolutional neural networks is discussed in the context of Large Eddy Simulation (LES) of turbulent premixed flames. The assessment of the method is conducted a priori using direct numerical simulation data. The network has been trained to perform deconvolution on the filtered density and the filtered density-progress variable product, and by doing so to obtain estimates of the unfiltered progress variable field. A filtered function of the progress variable can then be approximated on the LES mesh using the deconvoluted field. This new strategy for tackling turbulent combustion modelling is demonstrated with success for two fundamental ingredients of premixed turbulent combustion modelling: the sub-grid scale progress variable variance and, using flamelet methods, the filtered reaction rate.
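As a concrete illustration of the quantities involved, the sketch below builds the two network inputs named above (the filtered density and the filtered density-progress variable product) from DNS-like fields, assuming a Gaussian LES filter. The fields, the filter width and all variable names are illustrative stand-ins, not the paper's actual data or code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
rho = 1.0 + 0.2 * rng.random((64, 64, 64))   # stand-in for the DNS density field
c = rng.random((64, 64, 64))                 # stand-in for the DNS progress variable

sigma = 2.0  # assumed Gaussian filter width, in grid points

rho_bar = gaussian_filter(rho, sigma)        # filtered density (network input 1)
rho_c_bar = gaussian_filter(rho * c, sigma)  # filtered rho*c   (network input 2)
c_tilde = rho_c_bar / rho_bar                # Favre-filtered progress variable

# The CNN is trained to map (rho_bar, rho_c_bar) to the unfiltered c; with its
# estimate c_star in hand, any filtered function of c can be approximated by
# filtering f(c_star). Here c_star is a placeholder for the network output.
c_star = c
c2_bar = gaussian_filter(c_star**2, sigma)
# e.g. a sub-grid variance estimate: filtered c^2 minus the filtered c, squared
var_sgs = c2_bar - gaussian_filter(c_star, sigma)**2
```

The same re-filtering step applies to any flamelet-tabulated function of the progress variable, which is how the filtered reaction rate is approached in the paper.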


References

  1. Batchelor, G.K.: The Theory of Homogeneous Turbulence, 2nd edn. Cambridge University Press, Cambridge (1971)

  2. Pope, S.B.: Turbulent Flows. Cambridge University Press, Cambridge (2000)

  3. Smagorinsky, J.: General circulation experiments with the primitive equations. Mon. Weather Rev. 91, 99–164 (1963)

  4. Gicquel, L.Y.M., Staffelbach, G., Poinsot, T.: Large eddy simulations of gaseous flames in gas turbine combustion chambers. Prog. Energy Combust. Sci. 38, 782–817 (2012)

  5. Pitsch, H.: Large eddy simulation of turbulent combustion. Annu. Rev. Fluid Mech. 38, 453–482 (2006)

  6. Sagaut, P.: Large Eddy Simulation for Incompressible Flows: An Introduction. Springer, Berlin (2001)

  7. Leonard, A.: Energy cascade in large eddy simulation of turbulent fluid flows. Adv. Geophys. 18A, 237–248 (1974)

  8. Clark, R.A.: Evaluation of subgrid-scale models using an accurately simulated turbulent flow. J. Fluid Mech. 91, 1–16 (1979)

  9. Geurts, B.J.: Inverse modelling for large-eddy simulation. Phys. Fluids 9, 3585–3587 (1997)

  10. Domaradzki, J.A., Saiki, E.M.: A subgrid-scale model based on the estimation of unresolved scales of turbulence. Phys. Fluids 9, 2148–2164 (1997)

  11. Stolz, S., Adams, N.: An approximate deconvolution procedure for large-eddy simulation. Phys. Fluids 11, 1699–1701 (1999)

  12. Stolz, S., Adams, N.: An approximate deconvolution model for large-eddy simulation with application to incompressible wall-bounded flows. Phys. Fluids 13, 997–1015 (2001)

  13. Bose, S., Moin, P.: A dynamic slip boundary condition for wall-modeled large-eddy simulation. Phys. Fluids 26, 1–18 (2014)

  14. Locci, C., Vervisch, L.: Eulerian scalar projection in Lagrangian point source context: an approximate inverse filtering approach. Flow Turbul. Combust. 97, 363–368 (2016)

  15. Mathew, J.: Large eddy simulation of a premixed flame with approximate deconvolution modelling. Proc. Combust. Inst. 29, 1995–2000 (2002)

  16. Domingo, P., Vervisch, L.: Large eddy simulation of premixed turbulent combustion using approximate deconvolution and explicit flame filtering. Proc. Combust. Inst. 35, 1349–1357 (2015)

  17. Domingo, P., Vervisch, L.: DNS and approximate deconvolution as a tool to analyse one-dimensional filtered flame sub-grid scale modelling. Combust. Flame 177, 109–122 (2017)

  18. Mehl, C., Idier, J., Fiorina, B.: Evaluation of deconvolution modelling applied to numerical combustion. Combust. Theory Model. 22, 38–70 (2018)

  19. Wang, Q., Ihme, M.: Regularized deconvolution method for turbulent combustion modelling. Combust. Flame 176, 125–142 (2017)

  20. Nikolaou, Z.M., Vervisch, L., Cant, R.S.: Scalar flux modelling in turbulent flames using iterative deconvolution. Phys. Rev. Fluids 3, 043201 (2018)

  21. Nikolaou, Z.M., Vervisch, L.: A priori assessment of an iterative deconvolution method for LES sub-grid scale variance modelling. Flow Turbul. Combust. 101, 33–53 (2018)

  22. Nikolaou, Z.M., Vervisch, L.: Assessment of deconvolution-based flamelet methods for progress variable rate modelling. Aeron. Aerosp. Open Access J. 2, 274–281 (2018)

  23. Khan, J., Wei, J.S., Ringnér, M., Saal, L.H., Ladanyi, M., Westermann, F., Berthold, F., Schwab, M., Antonescu, C.R., Peterson, C., Meltzer, P.S.: Classification and diagnostic prediction of cancers using gene expression profiling and artificial neural networks. Nat. Med. 7, 673–679 (2001)

  24. Mikolov, T., Deoras, A., Povey, D., Burget, L., Cernocky, J.: Strategies for training large scale neural network language models. In: Proc. IEEE Workshop on Automatic Speech Recognition and Understanding, pp. 196–201 (2011)

  25. Sutskever, I., Vinyals, O., Le, Q.V.: Sequence to sequence learning with neural networks. Adv. Neural Inf. Process. Syst. 27, 3104–3112 (2014)

  26. Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A.A., Veness, J., Bellemare, M.G., Graves, A., Riedmiller, M., Fidjeland, A.K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., Hassabis, D.: Human-level control through deep reinforcement learning. Nature 518, 529–533 (2015)

  27. Silver, D., et al.: Mastering the game of Go with deep neural networks and tree search. Nature 529, 484–489 (2016)

  28. Milano, M., Koumoutsakos, P.: Neural network modelling for near wall turbulent flow. J. Comput. Phys. 182, 1–26 (2002)

  29. Ling, J., Templeton, J.: Evaluation of machine learning algorithms for prediction of regions of high Reynolds averaged Navier-Stokes uncertainty. Phys. Fluids 27, 085103 (2015)

  30. Ling, J., Kurzawski, A., Templeton, J.: Reynolds averaged turbulence modelling using deep neural networks with embedded invariance. J. Fluid Mech. 807, 155–166 (2016)

  31. Wang, J., Wu, J., Xiao, H.: Physics-informed machine learning approach for reconstructing Reynolds stress modelling discrepancies based on DNS data. Phys. Rev. Fluids 2, 034603 (2017)

  32. Maulik, R., San, O.: A neural network approach for the blind deconvolution of turbulent flows. J. Fluid Mech. 831, 151–181 (2017)

  33. Wang, Z., Luo, K., Li, D., Tan, J., Fan, J.: Investigation of data-driven closure for subgrid-scale stress in large-eddy simulation. Phys. Fluids 30, 1–12 (2018)

  34. Schmidhuber, J.: Deep learning in neural networks: an overview. Neural Netw. 61, 85–117 (2015)

  35. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521, 436–444 (2015)

  36. Krizhevsky, A., Sutskever, I., Hinton, G.: ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 25, 1090–1098 (2012)

  37. Lapeyre, C.J., Misdariis, A., Cazard, N., Veynante, D., Poinsot, T.: Training convolutional neural networks to estimate turbulent sub-grid scale reaction rates. arXiv:1810.03691 [physics.flu-dyn] (2018)

  38. Cant, R.S.: SENGA2 User Guide, CUED/A-THERMO/TR67 (2012)

  39. Nikolaou, Z.M., Swaminathan, N.: A 5-step reduced mechanism for combustion of CO/H2/H2O/CH4/CO2 mixtures with low hydrogen/methane and high H2O content. Combust. Flame 160, 56–75 (2013)

  40. Nikolaou, Z.M., Swaminathan, N.: Evaluation of a reduced mechanism for turbulent premixed combustion. Combust. Flame 161, 3085–3099 (2014)

  41. Nikolaou, Z.M., Swaminathan, N.: Direct numerical simulation of complex fuel combustion with detailed chemistry: physical insight and mean reaction rate modelling. Combust. Sci. Technol. 187, 1759–1789 (2015)

  42. Abadi, M., et al.: TensorFlow: large-scale machine learning on heterogeneous systems. https://www.tensorflow.org/ (2015)

  43. Peters, N.: Laminar flamelet concepts in turbulent combustion. Symp. (Int.) Combust. 21, 1231–1250 (1986)

  44. Cook, A.W., Riley, J.J.: A sub-grid model for equilibrium chemistry in turbulent flows. Phys. Fluids 6, 2868–2870 (1994)

  45. Cook, A.W.: Determination of the constant coefficient in scale similarity models of turbulence. Phys. Fluids 9, 1485–1487 (1997)

  46. Pierce, C.D., Moin, P.: A dynamic model for subgrid-scale variance and dissipation rate of a conserved scalar. Phys. Fluids 10, 3041–3044 (1998)

  47. Girimaji, S., Zhou, Y.: Analysis and modelling of subgrid scalar mixing using numerical data. Phys. Fluids 8, 1224–1236 (1996)

  48. Veynante, D., Knikker, R.: Comparison between LES results and experimental data in reacting flows. J. Turbul. 7, N35 (2006)

  49. Balarac, G., Pitsch, H., Raman, V.: Development of a dynamic model for the subfilter scalar variance using the concept of optimal estimators. Phys. Fluids 20, 035114 (2008)

  50. Kaul, C.M., Raman, V., Balarac, G., Pitsch, H.: Numerical errors in the computation of subfilter scalar variance in large eddy simulations. Phys. Fluids 21, 055102 (2009)

  51. Kaul, C.M., Raman, V.: A posteriori analysis of numerical errors in subfilter scalar variance modelling for large eddy simulations. Phys. Fluids 23, 035102 (2011)

  52. Pera, C., Reveillon, J., Vervisch, L., Domingo, P.: Modelling subgrid scale mixture fraction variance in LES of evaporating spray. Combust. Flame 146, 635–648 (2006)

  53. Domingo, P., Vervisch, L., Veynante, D.: Large-eddy simulation of a lifted methane jet flame in a vitiated co-flow. Combust. Flame 152, 415–432 (2008)

  54. Moureau, V., Domingo, P., Vervisch, L.: From large-eddy simulation to direct numerical simulation of a lean premixed swirl flame: filtered laminar flame-PDF modelling. Combust. Flame 158, 1340–1357 (2011)

  55. Nambully, S., Domingo, P., Moureau, V., Vervisch, L.: A filtered-laminar-flame PDF sub-grid scale closure for LES of premixed turbulent flames. Part I: Formalism and application to a bluff-body burner with differential diffusion. Combust. Flame 161, 1756–1774 (2014)

  56. Kanov, K., Burns, R., Lalescu, C., Eyink, G.: The Johns Hopkins turbulence databases: an open simulation laboratory for turbulence research. Comput. Sci. Eng. 17, 10–17 (2015)

  57. LeCun, Y., Boser, B., Denker, J.S., Henderson, D., Howard, R.E., Hubbard, W., Jackel, L.D.: Handwritten digit recognition with a back-propagation network. Adv. Neural Inf. Process. Syst. 2, 396–404 (1990)

Author information

Corresponding author

Correspondence to Z. M. Nikolaou.

Ethics declarations

Conflict of interest

The authors declare they have no conflict of interest.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix: Network structure

A CNN usually consists of convolutional and sub-sampling layers accompanied by fully connected layers. Each convolutional layer can have K filters (kernels). An essential aspect of the CNN is the size of these filters, which determines the locally connected structure they can identify; each filter is convolved with the input to create K feature maps. Each of the K feature maps can then be sub-sampled using min or max pooling over a defined region, typically between 2-5 points. A further important ingredient is the addition of a bias parameter and the application of a linear or non-linear activation function to each feature map; the bias and the activation function can be applied either before or after the sub-sampling of the feature maps. The mean squared error between the predicted and target variables was used as the error measure, and the training was conducted in the standard way using the back-propagation algorithm [57].
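These building blocks map directly onto standard deep-learning primitives. The following minimal sketch, written against TensorFlow [42], chains a convolution (with bias), batch normalisation, a non-linear activation and max pooling, trained with a mean-squared-error loss via back-propagation; the layer sizes here are illustrative assumptions, not the network of this paper.

```python
import tensorflow as tf

# A generic convolution block: K feature maps with bias, followed by
# normalisation, a non-linear activation and max-pooling sub-sampling.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(16, 16, 16, 1)),          # illustrative 3-D input
    tf.keras.layers.Conv3D(8, kernel_size=3),       # K = 8 kernels, bias included
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.LeakyReLU(alpha=0.3),           # non-linear activation
    tf.keras.layers.MaxPooling3D(pool_size=2),      # sub-sample over 2^3 regions
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="linear"),  # regression output
])

# Mean squared error between predicted and target values, minimised by
# back-propagation through a gradient-based optimiser.
model.compile(optimizer="adam", loss="mse")
```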

Figure 8 shows the structure of the network used for the deconvolution. The network consists of a series of convolution, normalisation and activation layers, in that order. The input layer consists of the set of 11³ points holding the filtered values in the halo cube around a given point on the LES mesh. The output of the first convolution layer is an 8³ set of features for each of the 256 kernels used. The outputs of a convolution layer may differ in magnitude by large amounts, so they are batch-normalised before the activation function is applied: the data are normalised by subtracting the batch mean and dividing by the standard deviation of the set. Following this, a leaky Rectified Linear Unit (ReLU) activation function [35] is applied to the extracted features (thresholding); in our case we have used f(x) = x for x > 0 and f(x) = 0.3x otherwise. A big advantage of this type of activation function is that it is computationally cheaper to evaluate than, for example, a sigmoid function, which involves an exponential term; this speeds up the training process and also helps convergence [36]. In the second convolution layer, the output of the ReLU is convolved with 128 kernels, resulting in a 5³ set of features for each of the 128 kernels. The process is repeated with further convolution, normalisation and ReLU layers. During the training phase, the weights of all kernels in each convolutional layer are adjusted so as to minimise the mean-squared error between the deconvoluted and actual fields. In the end, a total of 32 features are extracted, which are connected to a single node with a linear activation function, producing a single output, namely the deconvoluted field. The total size of the training data for each case depends on the size of the LES mesh: for an LES mesh with \(N_x, N_y, N_z\) points in space and \(N_t\) datasets in time, the total size of the training data is \(N_x \cdot N_y \cdot N_z \cdot N_t \cdot N_h^3\), where \(N_h\) is the size of the halo cube around each point on the LES mesh. Depending on the size of the DNS database, the training data size can therefore become significant.
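One possible realisation of the architecture just described is sketched below, again in TensorFlow [42]. The 4³ kernels in the first two layers are an assumption chosen to reproduce the quoted 11³ → 8³ → 5³ feature sizes, as is the final 32-kernel layer that collapses the 5³ features to the 32 values feeding the linear output node; the paper's exact kernel sizes and layer count may differ.

```python
import tensorflow as tf

def conv_block(filters, kernel_size):
    # Convolution, batch normalisation, then leaky ReLU, in the order
    # described in the text.
    return [
        tf.keras.layers.Conv3D(filters, kernel_size),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.LeakyReLU(alpha=0.3),  # f(x) = x if x > 0, else 0.3x
    ]

model = tf.keras.Sequential(
    [tf.keras.Input(shape=(11, 11, 11, 1))]   # 11^3 halo cube, one filtered field
    + conv_block(256, 4)                       # 11^3 -> 8^3, 256 feature maps
    + conv_block(128, 4)                       # 8^3  -> 5^3, 128 feature maps
    + conv_block(32, 5)                        # 5^3  -> 1^3, 32 features (assumed)
    + [tf.keras.layers.Flatten(),
       tf.keras.layers.Dense(1, activation="linear")]  # deconvoluted value
)
model.compile(optimizer="adam", loss="mse")

# Each training sample is one N_h^3 halo cube, so an LES mesh with
# N_x * N_y * N_z points and N_t snapshots yields N_x * N_y * N_z * N_t
# samples of size N_h^3, consistent with the count given above.
```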

Fig. 8: The structure of the convolutional network

Cite this article

Nikolaou, Z.M., Chrysostomou, C., Vervisch, L. et al. Progress Variable Variance and Filtered Rate Modelling Using Convolutional Neural Networks and Flamelet Methods. Flow Turbulence Combust 103, 485–501 (2019). https://doi.org/10.1007/s10494-019-00028-w
