1 Introduction

Hyperbolic conservation laws arise in many fields, such as shallow water flow, gas dynamics, and weather prediction. Exact solutions of such equations are available only in a few special cases, so numerical methods must be used. Nonlinear hyperbolic conservation laws develop discontinuities in their solutions even if the initial conditions are smooth. Therefore, when the solutions contain discontinuities as well as rich smooth structures, numerical schemes should be high order accurate while avoiding spurious oscillations. It is well known that central methods are more efficient than upwind methods in smooth regions of the solution, since they are less dissipative. Near discontinuities, however, upwind schemes are preferable to central schemes to avoid numerical instability. Thus a successful method should be high order accurate in smooth regions and capture discontinuities in the correct position without spurious oscillations.

Over the past six decades or more, many high order, high resolution numerical methods have been developed, such as Total Variation Diminishing (TVD) schemes [6], the random choice method (RCM) [2, 4, 16], weighted essentially non-oscillatory (WENO) schemes [1, 9, 24, 25], ADER methods [11,12,13,14,15, 17, 18, 23], and so on. These methods produce satisfactory numerical results, and many studies have sought to improve their computational robustness, accuracy, and efficiency and to reduce dissipation and dispersion errors. TVD schemes, introduced by Harten [6], are at most first-order accurate near discontinuities and smooth extrema, and thus they are not suitable for many application areas, especially acoustics, compressible turbulence, and problems involving long-time wave propagation. In such applications, the extrema are clipped as time evolves and the numerical dissipation may become dominant. To remedy this flaw, uniformly very high order methods are required, for example the WENO schemes and ADER methods. WENO schemes are essentially non-oscillatory. On the other side, RCM is a shock-capturing method that is neither a finite element nor a finite difference method. Its main advantage is that it captures discontinuities (shocks and contact surfaces) with infinite resolution (zero width). The first step in the RCM is to approximate the initial data of the original problem at a given time level tn by a piece-wise constant function. The original problem is thus reduced to a set of local Riemann problems (RPs), and the exact solutions of these local RPs are pieced together to form the solution at the next time level tn+1. The RCM computes the updated solution at tn+1 from these local solutions by using a sequence of random numbers.
RCM has the following flaws. First, although RCM captures the discontinuities with zero width, this property comes at a cost: the RPs must be solved exactly, yet the exact solution of an RP is usually complex and unavailable for many hyperbolic problems of practical interest. Second, the accuracy in smooth regions of the flow, such as rarefactions, is poor. ADER methods are explicit, one-step schemes with a very high order of accuracy in time and space. They depend on the solution of the generalized RP (GRP). In ADER schemes the initial data are approximated by piece-wise polynomial functions of arbitrary order, unlike the piece-wise constant data in RCM. The ADER methods [12,13,14,15] were developed as an extension of the Godunov upwind methods [3]. By using a Taylor expansion of the state (or flux) functions, the GRP is reduced to a set of m conventional RPs: the first RP is nonlinear, and the remaining ones are linear RPs for the k-th order spatial derivatives of the initial conditions, with k = 0,1,…,m, where m is the (arbitrary) order of accuracy of the scheme. To compute the upwind flux in ADER methods, the RPs are solved exactly; again, these exact solutions are mostly unavailable and complex. In [23], an improved version of the ADER methods (central ADER) was introduced, in which the RPs are solved numerically and central fluxes are used instead of upwind fluxes. The improved central ADER schemes are more accurate, faster, simple to implement, free of RP solvers, and need less computer memory. To remedy the drawbacks of the above schemes while keeping their advantages, we propose in this paper an improved version of the RCM that merges the central ADER technique with RCM. The resulting scheme is called the Central RCM (CRCM). The improvements are fourfold. Firstly, we reconstruct the initial data by piece-wise polynomial functions of arbitrary order; here we use the WENO reconstruction.
Secondly, we use a Taylor expansion of the state functions, as in ADER methods, so that the original problem is reduced to a set of m conventional RPs. Thirdly, we solve these RPs numerically by using central finite difference schemes; here we use two different schemes, a fully discrete and a semi-discrete one. Fourthly, we use random sampling to update the solution, as in the original RCM. Moreover, we use both the staggered and non-staggered RCM. To enhance the accuracy of the new methods, we use a third-order TVD flux [22] instead of the first-order Lax–Friedrichs (LxF) flux. Compared with the original RCM and the central ADER, the new methods combine the advantages of RCM, ADER, and central finite difference methods: they are more accurate, very simple to implement, need less computer memory, and are free of RP solvers. Moreover, the new methods capture the discontinuities with infinite resolution (zero width) and improve the accuracy in the smooth parts, owing to the use of central finite difference schemes for both the function and its derivatives. Finally, the new methods have a faster computing time than the central ADER method, due to fewer flux evaluations in CRCM.

The paper is organized as follows: In Sect. 2 we review the original RCM. In Sect. 3 we briefly review the ADER methods. The central RCM on both staggered and non-staggered grids is described in Sect. 4. In Sect. 5 the accuracy of the new schemes is discussed. In Sect. 6 the new methods are tested on numerical examples for one- and two-dimensional equations. Finally, in Sect. 7 we summarize the main findings of this work and give an outlook on future work.

2 Random choice method

In this section, we review the RCM on both staggered and non-staggered grids [16, 19].

2.1 RCM on non-staggered grids

Firstly, we consider the one-dimensional system of conservation laws, namely

$$ {\text{u}}_{{\text{t}}} + [{\text{f}}({\mathrm{u}})]_{{\text{x}}} = 0 $$
(2.1)

with initial and boundary conditions, where u(x,t) is the vector of conservative variables and f(u) is the flux vector. Throughout this article, we consider regular grids. Let \(\Delta {\text{x}}\;{\mathrm{and}}\;\Delta {\text{t}}\) be small spatial and time scales, \({\text{x}}_{{\text{j}}} = {\text{j}}\;\Delta {\text{x}},\,{\mathrm{x}}_{{{\text{j}} \pm \tfrac{1}{2}}} = {\text{x}}_{{\text{j}}} \pm \frac{1}{2}\Delta {\mathrm{x}},\quad {\text{t}}^{{\mathrm{n}}} = {\text{n}}\Delta {\mathrm{t}},\quad {\text{u}}_{{\text{j}}}^{{\mathrm{n}}} = {\text{u}}({\mathrm{x}}_{{\text{j}}} ,{\mathrm{t}}^{{\mathrm{n}}} )\) and the cell \( {\text{I}}_{{\text{j}}} = \left[ {{\text{x}}_{{{\text{j}} - \tfrac{1}{2}}} ,{\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} } \right].\) Suppose that the solution \({\text{u}}_{{\text{j}}}^{{\mathrm{n}}}\) at the time \({\text{t}} = {\text{t}}^{{\mathrm{n}}}\) is given. To compute the solution at the next time step \({\text{t}}^{{{\text{n}} + 1}}\), the RCM assumes that the data at \({\text{t}}^{{\mathrm{n}}}\) for (2.1) can be approximated by a piecewise constant function over the cell \({\text{I}}_{{\text{j}}}\) as

$$ {\text{u}}({\mathrm{x}},{\text{t}}) = {\text{u}}_{{\text{j}}}^{{\mathrm{n}}} ,\quad \quad {\text{x}}_{{{\text{j}} - \tfrac{1}{2}}} \le {\text{x}} \le {\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} $$
(2.2)

Therefore the problem (2.1) becomes a sequence of local Riemann problems {RP(j,j + 1)}, that is, the initial value problem for (2.1) subject to the initial condition

$$ {\text{u}}({\mathrm{x}},{\text{t}}^{{\mathrm{n}}} ) = \left\{ {\begin{array}{*{20}l} {{\text{u}}_{{\text{j}}}^{{\mathrm{n}}} ,} \hfill & {{\text{x}} < {\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} } \hfill \\ {{\text{u}}_{{{\text{j}} + 1}}^{{\mathrm{n}}} ,} \hfill & {{\text{x}} \ge {\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} } \hfill \\ \end{array} } \right. $$
(2.3)

The exact solutions to these Riemann problems are pieced together to give the solution at the next time level \({\text{t}}^{{{\text{n}} + 1}}\). Each local problem has an exact solution, as shown in Fig. 1 (for the Euler equations) [19].

Fig. 1

Solution of local Riemann problems RP (j − 1, j) and RP(j, j + 1)

The RCM updates the solution \({\text{u}}_{{\text{j}}}^{{\mathrm{n}}}\) in \({\text{I}}_{\mathrm{j}}\) at \({\text{t}}^{{{\text{n}}}}\) to \({\text{u}}_{{\text{j}}}^{{{\text{n}} + 1}}\) at \({\text{t}}^{{{\text{n}} + 1}}\), for sufficiently small \(\Delta t\), in two steps as follows:

Step 1 Solve both RP (j − 1, j) and RP (j, j + 1) exactly, to obtain the solutions at \({\text{t}}^{{{\text{n}} + 1}}\).

Step 2 Sample these solutions at \({\text{t}}^{{{\text{n}} + 1}}\) in the cell \({\text{I}}_{{\text{j}}}\) at a random position \({\text{Q}}_{{\text{j}}} = ({\text{x}}_{{{\text{j}} - \tfrac{1}{2}}} +\uptheta _{{}}^{{\mathrm{n}}} \Delta {\text{x}},\,{\text{t}}^{{{\text{n}} + 1}} )\), i.e., take \({\text{u}}({\mathrm{Q}}_{{\text{j}}} )\). This sampling depends on a quasi-random number \(\uptheta_{{}}^{n} \in [0,1]\), see [16]. The updated solution \({\text{u}}_{{\text{j}}}^{{{\text{n}} + 1}}\) can be written in the form

$$ {\text{u}}_{{\text{j}}}^{{{\text{n}} + 1}} = \left\{ {\begin{array}{*{20}l} {{\text{u}}({\mathrm{x}}_{{{\text{j}} - \tfrac{1}{2}}} + {\uptheta }^{{\mathrm{n}}} \Delta {\text{x}})} \hfill & {0 \le {\uptheta }^{{\mathrm{n}}} \le \tfrac{1}{2}} \hfill \\ {{\text{u}}({\mathrm{x}}_{{{\text{j}} + \tfrac{1}{2}}} + ({\uptheta }^{{\mathrm{n}}} - 1)\Delta {\text{x}})} \hfill & {\tfrac{1}{2} \le {\uptheta }^{{\mathrm{n}}} \le 1} \hfill \\ \end{array} } \right. $$
(2.4)

This sampling is shown in Fig. 2, [19]. It is clear that for \(\uptheta^{n}\) = 0, \({\text{Q}}_{{\text{j}}}\) lies on the inter-cell boundary x = \({\text{x}}_{{{\text{j}} - \tfrac{1}{2}}}\) and therefore the solution \({\text{u}}_{{\text{j}}}^{{{\text{n}} + 1}}\) is the exact solution of RP (j − 1,j) at this position. For \(\uptheta^{n}\) = 1, \({\text{Q}}_{{\text{j}}}\) lies on the right boundary x = \({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}}\) and the solution \({\text{u}}_{{\text{j}}}^{{{\text{n}} + 1}}\) equals the exact solution of RP(j,j + 1). For \(\uptheta ^{{\mathrm{n}}}\) = 0.5, the solution \({\text{u}}_{{\text{j}}}^{{{\text{n}} + 1}} = {\text{u}}_{{\text{j}}}^{{\mathrm{n}}}\), i.e., the same old value. See [19] for more details.

Fig. 2

RCM sampling on non-staggered grids
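As an illustration, the sampling rule (2.4) can be sketched for the linear advection equation u_t + a u_x = 0 with a > 0, whose local RPs have the exact solution of a jump translating with speed a. The binary van der Corput sequence below is only a simple stand-in for the quasi-random numbers \(\uptheta^{n}\) of [16], and the function names are ours:

```python
import numpy as np

def van_der_corput(n, base=2):
    """Radical-inverse (van der Corput) number in [0,1]; an illustrative
    substitute for the quasi-random sequence theta^n used in RCM."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)
        q += r * bk
        bk /= base
    return q

def rcm_step_advection(u, a, dx, dt, theta):
    """One non-staggered RCM step (2.4) for u_t + a*u_x = 0, a > 0.
    Each local RP solution is the initial jump translated by a*dt."""
    unew = np.empty_like(u)
    for j in range(1, len(u) - 1):
        if theta <= 0.5:
            # sample RP(j-1, j) at x_{j-1/2} + theta*dx
            unew[j] = u[j - 1] if theta * dx < a * dt else u[j]
        else:
            # sample RP(j, j+1) at x_{j+1/2} + (theta-1)*dx
            unew[j] = u[j] if (theta - 1.0) * dx < a * dt else u[j + 1]
    unew[0], unew[-1] = u[0], u[-1]   # keep boundary cells fixed (sketch only)
    return unew
```

Running eight steps at CFL = a Δt/Δx = 0.25 moves a step-function jump by exactly two cells (the exact displacement), with zero smearing, which is the "infinite resolution" property quoted above.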

2.2 RCM on staggered grids

In this section, we describe the RCM on a staggered grid to solve the problem (2.1). It updates \({\text{u}}_{{\text{j}}}^{{{\text{n}}}}\) to \({\text{u}}_{{\text{j}}}^{{{\text{n}} + 1}}\) in three steps as follows [19]:

Step 1 Solve the Riemann problems RP (j − 1, j) and RP (j, j + 1) to find the solutions \(\overline{\text{u}}_{{{\mathrm{j}} \pm \tfrac{1}{2}}}^{{n + \tfrac{1}{2}}}\) at \({\text{t}}_{{}}^{{{\text{n}} + \tfrac{1}{2}}} .\)

Step 2 Sample these solutions at \({\text{t}}_{{}}^{{{\text{n}} + \tfrac{1}{2}}}\) as

$$ \begin{aligned} & {\text{u}}_{{{\text{j}} - \tfrac{1}{2}}}^{{{\text{n}} + \tfrac{1}{2}}} = {\overline{\text{u}}}_{{{\text{j}} - \tfrac{1}{2}}}^{{{\text{n}} + \tfrac{1}{2}}} ({\text{x}}_{{{\text{j}} - \tfrac{1}{2}}} + (\uptheta ^{{\mathrm{n}}} - \tfrac{1}{2})\Delta {\text{x}},\,{\text{t}}^{{{\text{n}} + \tfrac{1}{2}}} )\quad \quad \quad \quad 0 \le\uptheta ^{{\mathrm{n}}} \le 1 \\ & {\text{u}}_{{{\text{j}} + \tfrac{1}{2}}}^{{{\text{n}} + \tfrac{1}{2}}} = {\overline{\text{u}}}_{{{\text{j}} + \tfrac{1}{2}}}^{{{\text{n}} + \tfrac{1}{2}}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} + (\uptheta ^{{\mathrm{n}}} - \tfrac{1}{2})\Delta {\text{x}},\,{\text{t}}^{{{\text{n}} + \tfrac{1}{2}}} )\quad \quad \quad \quad 0 \le\uptheta ^{{\mathrm{n}}} \le 1 \\ \end{aligned} $$
(2.5)

Step 3 Solve the \({\text{RP}}({\mathrm{u}}_{{{\text{j}} - \tfrac{1}{2}}}^{{{\text{n}} + \tfrac{1}{2}}} ,{\text{u}}_{{{\text{j}} + \tfrac{1}{2}}}^{{{\text{n}} + \tfrac{1}{2}}} )\) exactly to obtain the solution \(\overline{\text{u}}_{\mathrm{j}}^{{\mathrm{n}} + 1} (x,t)\), and then sample it randomly to get \({\text{u}}_{{\text{j}}}^{{{\text{n}} + 1}}\) as

$$ {\text{u}}_{{\text{j}}}^{{{\text{n}} + 1}} = \overline{\text{u}}_{{\text{j}}}^{{{\text{n}} + 1}} ({\text{x}}_{{{\text{j}} - \tfrac{1}{2}}} +\uptheta ^{{{\text{n}} + 1}} \Delta {\text{x}},\,{\text{t}}^{{{\text{n}} + 1}} )\quad \quad \quad \quad 0 \le\uptheta ^{{{\text{n}} + 1}} \le 1 $$
(2.6)

Note: we choose the time step \(\Delta t\) according to the CFL condition

$$ CFL \le 0.25 $$
(2.7)

where \({\text{CFL}} = \mathop {\max }\limits_{{\text{j}}} \,\left( {{\text{S}}_{{\text{j}}}^{{\mathrm{n}}} \frac{{\Delta {\text{t}}}}{{\Delta {\text{x}}}}} \right)\), \({\text{S}}_{{\text{j}}}^{{\mathrm{n}}}\) is the maximum propagation speed in \({\text{I}}_{\mathrm{j}}\) at time level n.
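In practice, the time step is obtained by inverting this condition. A minimal sketch (the function name is ours):

```python
import numpy as np

def cfl_time_step(speeds, dx, cfl=0.25):
    """Largest dt satisfying (2.7): max_j |S_j| * dt / dx <= cfl,
    where speeds holds the local maximum propagation speeds S_j."""
    return cfl * dx / np.max(np.abs(speeds))
```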

3 ADER methods

Here we review the ADER methods for a system of hyperbolic conservation laws

$$ {\text{u}}_{{\text{t}}} + {\text{f}}({\mathrm{u}})_{{\text{x}}} = 0,\quad - \infty < {\text{x}} < \infty ,\quad {\text{t}} \ge 0,\quad {\text{u}}({\mathrm{x}},0) = {\text{u}}_{0} ({\text{x}}) $$
(3.1)

This equation may be written in the non-conservative form

$$ {\text{u}}_{{\text{t}}} + \lambda ({\text{u}})\;{\text{u}}_{{\text{x}}} = 0, $$
(3.2)

where \(\lambda (u) = \frac{df}{{du}}\) is the characteristic speed. We integrate (3.1) over the control volume in x-t space \([x_{{j - \tfrac{1}{2}}} ,x_{{j + \tfrac{1}{2}}} ] \times [t^{n} ,t^{n + 1} ]\) to get the fully discrete scheme

$$ {\text{u}}_{{\text{j}}}^{{{\text{n}} + 1}} = {\text{u}}_{{\text{j}}}^{{\mathrm{n}}} - \frac{{\Delta {\text{t}}}}{{\Delta {\text{x}}}}\left[ {{\text{F}}_{{{\text{j}} + \tfrac{1}{2}}} - {\text{F}}_{{{\text{j}} - \tfrac{1}{2}}} } \right] $$
(3.3)

Here \({\text{u}}_{{\text{j}}}^{{{\text{n}}}}\) is the space average of the solution in the cell \({\text{I}}_{{\text{j}}} = [{\text{x}}_{{{\text{j}} - \tfrac{1}{2}}} ,{\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ]\) at \({\text{t}}^{{{\text{n}}}}\)

$$ {\text{u}}_{{\text{j}}}^{{\mathrm{n}}} = \frac{1}{\Delta x}\int\limits_{{{\text{x}}_{{{\text{j}} - \tfrac{1}{2}}} }}^{{{\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} }} {{\text{u}}(x,t^{n} ){\text{dx}}} $$
(3.4a)

and the flux \(F_{{j + \tfrac{1}{2}}}\) is the time average of the flux at the cell interface \({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}}\):

$$ {\text{F}}_{{{\text{j}} + \tfrac{1}{2}}}^{{}} = \frac{1}{{\Delta {\text{t}}}}\int\limits_{{{\text{t}}^{{\mathrm{n}}} }}^{{{\text{t}}^{{{\text{n}} + 1}} }} {{\text{f}}({\mathrm{u}}({\mathrm{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,{\text{t}})){\text{dt}}} $$
(3.4b)

The ADER methods define the numerical fluxes \({\text{F}}_{{{\text{j}} + \tfrac{1}{2}}}\) in such a way that the scheme (3.3) computes the numerical solution of (3.1) to arbitrarily high order in both space and time. They rely on the exact solution of the GRP, whose initial condition consists of piece-wise polynomial functions of arbitrary order, unlike the piece-wise constant reconstruction in RCM. We give the details in the next subsections.

3.1 Original (upwind) ADER method

Let the solution \({\text{u}}_{{\text{j}}}^{{\mathrm{n}}}\) at time \({\text{t}}_{{}}^{{\mathrm{n}}}\) be known; the ADER methods [11,12,13] then update the solution to the next time \({\text{t}}_{{}}^{{{\text{n}} + 1}}\) (\({\text{u}}_{{\text{j}}}^{{{\text{n}} + 1}}\)) in the following steps:

Step 1: Reconstruction

The first step is to generate a piece-wise polynomial reconstruction from the cell averages i.e.,

$$ {\text{u}}({\mathrm{x}},{\text{t}}) = {\text{P}}_{{\text{j}}}^{{\mathrm{n}}} ({\text{x}}),\quad \quad {\text{x}}_{{{\text{j}} - \tfrac{1}{2}}} \le {\text{x}} \le {\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} $$
(3.5)

where the polynomial \({\text{P}}_{{\text{j}}}^{{\mathrm{n}}} ({\text{x}})\), defined on \({\text{I}}_{{\text{j}}}\), must be conservative, high order accurate, and non-oscillatory. Thus, we obtain at each cell interface \({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}}\) the following local GRP(j,j + 1):

$$ \left. {\begin{array}{*{20}l} {{\text{u}}_{{\text{t}}} + {\text{f}}({\mathrm{u}})_{{\text{x}}} \; = 0,} \hfill \\ {{\text{u}}({\mathrm{x}},0) = \left\{ {\begin{array}{*{20}c} {{\text{u}}_{{\text{L}}} ({\text{x}}) = {\text{P}}_{{\text{j}}}^{{\mathrm{n}}} ({\text{x}}),\quad \quad \quad {\text{x}} < {\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} } \\ {{\text{u}}_{{\text{R}}} ({\text{x}}) = {\text{P}}_{{{\text{j}} + 1}}^{{\mathrm{n}}} ({\text{x}}),\quad \quad \quad {\text{x}} > {\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} } \\ \end{array} } \right.} \hfill \\ \end{array} } \right\} $$
(3.6)

It is well known that WENO reconstruction produces accurate, non-oscillatory results, and therefore we use the WENO reconstruction in the design of our schemes.
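As an illustration of such a reconstruction, here is a minimal third-order WENO interface value in the style of Jiang and Shu, built from two second-order candidate stencils. The source does not specify which WENO variant is used, so this is only a sketch under that assumption:

```python
def weno3_interface(um1, u0, up1, eps=1e-6):
    """Third-order WENO value at x_{j+1/2} from the cell averages
    u_{j-1}, u_j, u_{j+1} (left-biased reconstruction)."""
    # candidate second-order reconstructions on the two substencils
    p0 = -0.5 * um1 + 1.5 * u0          # stencil {j-1, j}
    p1 = 0.5 * (u0 + up1)               # stencil {j, j+1}
    # smoothness indicators and nonlinear weights (linear weights 1/3, 2/3)
    b0 = (u0 - um1) ** 2
    b1 = (up1 - u0) ** 2
    a0 = (1.0 / 3.0) / (eps + b0) ** 2
    a1 = (2.0 / 3.0) / (eps + b1) ** 2
    w0, w1 = a0 / (a0 + a1), a1 / (a0 + a1)
    return w0 * p0 + w1 * p1
```

On smooth (linear) data the nonlinear weights revert to the linear weights and the full third-order stencil is recovered; next to a jump the weight of the discontinuous substencil collapses, which suppresses the oscillation.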

Step 2 State expansion

Firstly, we compute the approximate solution \({\text{u}}({\mathrm{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,\tau )\), where \(\tau = {\text{t}} - {\text{t}}^{{\mathrm{n}}}\) is the local time, assumed sufficiently small. Using a Taylor expansion of the interface state in time, we get:

$$ {\text{u}}_{{}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,\tau ) = {\text{u}}_{{}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,0 + ) + \sum\limits_{k = 1}^{m - 1} {\left\{ {\frac{{\partial^{{\text{k}}} }}{{\partial {\text{t}}^{{\text{k}}} }}\;{\text{u}}_{{}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,0 + )} \right\}\frac{{\tau^{{\text{k}}} }}{{{\text{k}}!}}} $$
(3.7)

where \(0+\) denotes the one-sided limit \({\text{t}} \to 0^{ + }\). Secondly, we apply the Cauchy–Kowalewski procedure [11] to replace all time derivatives in (3.7) by space derivatives. Equation (3.7) becomes

$$ {\text{u}}_{{}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,\tau ) = {\text{u}}_{{}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,0 + ) + \sum\limits_{{{\text{k}} = 1}}^{{{\text{m}} - 1}} {\left\{ {{\text{A}}^{(k)} ({\text{u}}_{{\text{x}}}^{(0)} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,0 + ), \ldots ,{\text{u}}_{{\text{x}}}^{{({\text{k}})}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,0 + )} \right\}\frac{{\tau^{{\text{k}}} }}{{{\text{k}}!}}} $$
(3.8)
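The Cauchy–Kowalewski substitution can be checked symbolically. For the scalar Burgers equation (flux f(u) = u²/2, so u_t = −u u_x), eliminating the time derivatives gives u_tt = 2 u u_x² + u² u_xx; a sympy sketch of this check:

```python
import sympy as sp

x, t = sp.symbols('x t')
u = sp.Function('u')(x, t)

# Burgers equation in non-conservative form: u_t = -u * u_x
ut = -u * sp.diff(u, x)

# Differentiate in time, then eliminate every time derivative
# (Cauchy-Kowalewski): u_xt -> d/dx(u_t), u_t -> -u*u_x.
utt = sp.diff(ut, t)
utt = utt.subs(sp.diff(u, x, t), sp.diff(ut, x))
utt = sp.expand(utt.subs(sp.diff(u, t), ut))

# Expected result expressed purely in space derivatives
expected = 2 * u * sp.diff(u, x) ** 2 + u ** 2 * sp.diff(u, x, 2)
```

The same mechanical substitution, applied recursively, produces the functions \({\text{A}}^{(k)}\) of (3.8) at any order.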

The first term \({\text{u}}_{{}} ({\text{x}}_{{{\mathrm{j}} + \tfrac{1}{2}}} ,0 + )\) is obtained by solving the conventional (piece-wise constant) RP:

$$ \left. {\begin{array}{*{20}l} {{\text{u}}_{{\text{t}}} + {\text{f}}({\mathrm{u}})_{{\text{x}}} \; = 0,} \hfill \\ {{\text{u}}({\mathrm{x}},0) = \left\{ {\begin{array}{*{20}c} {{\text{u}}_{{\text{L}}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ),\quad \quad \quad {\text{x}} < {\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} } \\ {{\text{u}}_{{\text{R}}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ),\quad \quad \quad {\text{x}} > {\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} } \\ \end{array} } \right.} \hfill \\ \end{array} } \right\} $$
(3.9)

This is done by using the upwind Godunov method. The remaining terms \({\text{u}}_{{\text{x}}}^{{({\text{k}})}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,0 + )\), (k = 1, 2, …, m − 1), are obtained by solving the k-th order derivative RPs, which are locally linearized around the leading term \({\text{u}}_{{\text{x}}}^{(0)} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,0 + )\) of (3.8):

$$ \left. {\begin{array}{*{20}l} {{\text{v}}_{{\text{t}}} + \lambda ({\text{u}}({\mathrm{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,0 + )){\text{v}}_{{\text{x}}} \; = 0,} \hfill \\ {{\text{v}}({\mathrm{x}},0) = \left\{ {\begin{array}{*{20}c} {{\text{u}}_{{\text{L}}}^{{({\text{k}})}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ),\quad \quad \quad {\text{x}} < {\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} } \\ {{\text{u}}_{{\text{R}}}^{(k)} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ),\quad \quad \quad {\text{x}} > {\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} } \\ \end{array} } \right.} \hfill \\ \end{array} } \right\} $$
(3.10)

where \({\text{v}} = {\text{u}}_{{\text{x}}}^{{({\text{k}})}}\). The numerical flux \({\text{F}}_{{{\text{j}} + \tfrac{1}{2}}}\) of Eq. (3.3) is evaluated by using the m-th order accurate Gaussian rule

$$ {\text{F}}_{{{\text{j}} + \tfrac{1}{2}}} = \sum\limits_{\upalpha = 0}^{N} {{\text{F}}({\mathrm{u}}({\mathrm{x}}_{{{\text{j}} + \tfrac{1}{2}}} } ,\upgamma_{\upalpha } \Delta {\text{t}})){\text{K}}_{\upalpha } $$
(3.11)

where \(\upgamma _{\upalpha } \;{\text{and}}\;{\text{K}}_{\upalpha }\) are the scaled nodes and weights of the rule and N is the number of nodes [11].
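The quadrature (3.11) can be sketched with standard Gauss–Legendre nodes mapped from [−1, 1] to [0, 1]. Here `interface_state` stands for the truncated expansion (3.8) evaluated at a local time, and the function names are ours:

```python
import numpy as np

def time_averaged_flux(f, interface_state, dt, n_nodes=3):
    """Approximate (1/dt) * integral of f(u(x_{j+1/2}, tau)) over [0, dt]
    with an n_nodes-point Gauss-Legendre rule, as in (3.11)."""
    xg, wg = np.polynomial.legendre.leggauss(n_nodes)
    gammas = 0.5 * (xg + 1.0)   # scaled nodes gamma_a in [0, 1]
    Ks = 0.5 * wg               # scaled weights K_a, summing to 1
    return sum(K * f(interface_state(g * dt)) for g, K in zip(gammas, Ks))
```

With f(u) = u and `interface_state(tau) = tau**2`, the routine returns the exact time average 1/3, since a 3-node rule integrates polynomials up to degree 5 exactly.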

3.2 Central ADER method

The high order accuracy of the upwind ADER schemes comes at a cost: the RPs must be solved exactly, and their exact solutions are mostly unavailable and complex. To overcome this problem, central ADER schemes were presented in [23]. In these methods, central fluxes are used instead of upwind fluxes as the building block, namely the central first-order monotone LxF flux and the third-order TVD flux [22]. We now give a brief review of the central ADER methods.

To evaluate the terms in (3.8), we proceed as follows:

For the first term \({\text{u}}_{{}} ({\text{x}}_{{{\mathrm{j}} + \tfrac{1}{2}}} ,0 + )\), we use the staggered form of the central LxF scheme to solve the RP (3.9):

$$ {\text{u}}_{{{\text{j}} + \tfrac{1}{2}}}^{0 + } = \frac{1}{2}[{\text{u}}_{{\text{L}}}^{{}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,0) + {\text{u}}_{{\text{R}}}^{{}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,0)] - \frac{{\upalpha_{0} \Delta {\text{t}}}}{{\Delta {\text{x}}}}[{\text{f}}({\mathrm{u}}_{{\text{R}}}^{{}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,0)) - {\text{f}}({\mathrm{u}}_{{\text{L}}}^{{}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,0))] $$
(3.12)

The terms \({\text{u}}_{{\text{x}}}^{{({\text{k}})}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,0 + )\), (k = 1, 2, …, m − 1), are obtained by solving the k-th order derivative RPs, which are locally linearized around the leading term \({\text{u}}_{{\text{x}}}^{(0)} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,0 + )\) of (3.8), and they are computed by the staggered LxF scheme:

$$ {\text{v}}_{{{\text{j}} + \tfrac{1}{2}}}^{0 + } = \frac{1}{2}[{\text{v}}_{{\text{L}}}^{{}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,0) + {\text{v}}_{{\text{R}}}^{{}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,0)] - \lambda \frac{{\upalpha_{{\text{k}}} \Delta {\text{t}}}}{{\Delta {\text{x}}}}[{\text{v}}_{{\text{R}}}^{{}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,0) - {\text{v}}_{{\text{L}}}^{{}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,0)] $$
(3.13)

here the time weights are \(\upalpha _{{\text{k}}} = \frac{1}{{({\text{k}} + 1)^{{\tfrac{1}{k}}} }},\quad {\text{k}} \ge 1,\) see [11, 23]. It is easy to see that the \(\upalpha_{k}\) lie in the interval [0.5, 1], and we take \(\upalpha_{0} = 0.36788\). To get the flux \({\text{F}}_{{{\text{j}} + \tfrac{1}{2}}}\) of (3.3), there are two options. The first option is the m-th order accurate Gaussian rule. The second option is the flux expansion, in which a Taylor expansion of the flux itself is used [23]. To improve the accuracy of the central ADER method, the third-order TVD flux [22] is used rather than the first-order flux as the building block. For details see [23].
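A note on the time weights: the quoted value α₀ = 0.36788 equals 1/e, which is the k → 0 limit of (k + 1)^(−1/k). A sketch of the weights and of the staggered LxF derivative update (3.13), with function names of our choosing:

```python
import math

def time_weight(k):
    """alpha_k = (k+1)^(-1/k) for k >= 1; alpha_0 = 1/e = 0.36788...,
    the k -> 0 limit of the same formula."""
    return math.exp(-1.0) if k == 0 else (k + 1) ** (-1.0 / k)

def lxf_derivative_update(vL, vR, lam, dx, dt, k):
    """Staggered LxF step (3.13) for the k-th derivative RP,
    locally linearized with characteristic speed lam."""
    return 0.5 * (vL + vR) - lam * time_weight(k) * dt / dx * (vR - vL)
```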

4 Central random choice methods

To remedy the drawbacks of the above schemes while maintaining their advantages, we merge the central ADER technique into the RCM to obtain an improved version of the RCM. We call these methods the Central RCM (CRCM). We consider the system of hyperbolic conservation laws

$$ {\text{u}}_{{\text{t}}} + {\text{f}}({\mathrm{u}})_{{\text{x}}} = 0,\quad - \infty < {\text{x}} < \infty ,\quad {\text{t}} \ge 0,\quad {\text{u}}({\mathrm{x}},0) = {\text{u}}_{0} ({\text{x}}) $$
(4.1)

This equation may be written in the non-conservative form

$$ {\text{u}}_{{\text{t}}} + \lambda ({\text{u}}){\text{u}}_{{\text{x}}} = 0, $$
(4.2)

where \(\lambda ({\text{u}}) = \frac{{{\text{df}}}}{{{\text{du}}}}\) is the characteristic speed. We integrate (4.1) over the control volume in x-t space \([{\text{x}}_{{{\text{j}} - \tfrac{1}{2}}} ,{\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ] \times [{\text{t}}^{{\mathrm{n}}} ,{\text{t}}^{{{\text{n}} + 1}} ]\) to get the fully discrete finite difference scheme

$$ {\text{u}}_{{\text{j}}}^{{{\text{n}} + 1}} = {\text{u}}_{{\text{j}}}^{{\mathrm{n}}} - \frac{{\Delta {\text{t}}}}{{\Delta {\text{x}}}}\;\,\left[ {{\text{F}}_{{{\text{j}} + \tfrac{1}{2}}} - {\text{F}}_{{{\text{j}} - \tfrac{1}{2}}} } \right] $$
(4.3)

\({\text{u}}_{{\text{j}}}^{{{\text{n}}}}\) is the space average of the solution in the cell \({\text{I}}_{{\text{j}}} = [{\text{x}}_{{{\text{j}} - \tfrac{1}{2}}} ,{\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ]\) at \({\text{t}}^{{{\text{n}}}}\) and the flux \(F_{{j + \tfrac{1}{2}}}\) is the time average of the flux at the cell interface \({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}}\):

$$ {\text{u}}_{{\text{j}}}^{{\mathrm{n}}} = \frac{1}{{\Delta {\text{x}}}}\int\limits_{{{\text{x}}_{{{\text{j}} - \tfrac{1}{2}}} }}^{{{\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} }} {{\text{u}}({\mathrm{x}},{\text{t}}^{{\mathrm{n}}} ){\text{dx}}} \quad {\text{F}}_{{{\text{j}} + \tfrac{1}{2}}}^{{}} = \frac{1}{\Delta t}\int\limits_{{{\text{t}}^{{\mathrm{n}}} }}^{{{\text{t}}^{{{\text{n}} + 1}} }} {{\text{f}}({\mathrm{u}}({\mathrm{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,{\text{t}})){\text{dt}}} $$
(4.4)

There are two types of finite difference schemes to compute the numerical solutions: one-step and two-step finite difference schemes.

4.1 One-step RCM on non-staggered grids

Assuming that the solution \({\text{u}}_{{\text{j}}}^{{{\text{n}}}}\) at time \({\text{t}}_{{}}^{\mathrm{n}}\) is known, our task is to obtain the solution \({\text{u}}_{{\text{j}}}^{{{\text{n}} + 1}}\) at the next time \({\text{t}}_{{}}^{{\mathrm{n}}+1}\). The implementation of this method is described in the following steps:

Step 1:

The first step is to generate a piece-wise polynomial reconstruction from the cell averages i.e.

$$ {\text{u}}({\mathrm{x}},{\text{t}}) = {\text{P}}_{{\text{j}}}^{{\mathrm{n}}} ({\text{x}}),\quad \quad {\text{x}}_{{{\text{j}} - \tfrac{1}{2}}} \le {\text{x}} \le {\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} $$
(4.5)

where the polynomial \({\text{P}}_{{\text{j}}}^{{\mathrm{n}}} ({\text{x}})\), defined on \({\text{I}}_{\mathrm{j}}\), must be conservative, high order accurate, and non-oscillatory. We used here the WENO reconstruction in the design of our schemes. Thus, we obtain at each cell interface \({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}}\) the following local RP(j,j + 1):

$$ \left. {\begin{array}{*{20}l} {{\text{u}}_{{\text{t}}} \; + {\text{f}}({\mathrm{u}})_{{\text{x}}} \; = 0,} \hfill \\ {{\text{u}}({\mathrm{x}},0) = \left\{ {\begin{array}{*{20}c} {{\text{u}}_{{\text{L}}} ({\text{x}}) = {\text{P}}_{{\text{j}}}^{{\mathrm{n}}} ({\text{x}}),\quad \quad \quad {\text{x}} < {\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} } \\ {{\text{u}}_{{\text{R}}} ({\text{x}}) = {\text{P}}_{{{\text{j}} + 1}}^{{\mathrm{n}}} ({\text{x}}),\quad \quad \quad {\text{x}} > {\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} } \\ \end{array} } \right.} \hfill \\ \end{array} } \right\} $$
(4.6)

Step 2

We solve the local RPs RP(j − 1,j) and RP(j,j + 1), numerically, by using the staggered LxF scheme of (4.1)

$$ {\text{u}}_{{{\text{j}} - \tfrac{1}{2}}}^{{{\text{n}} + 1}} = \frac{1}{2}[{\text{u}}_{{\text{L}}}^{{}} ({\text{x}}_{{{\text{j}} - \tfrac{1}{2}}} ,{\text{t}}^{{\mathrm{n}}} ) + {\text{u}}_{{\text{R}}}^{{}} ({\text{x}}_{{{\text{j}} - \tfrac{1}{2}}} ,{\text{t}}^{{\mathrm{n}}} )] - \frac{{\upalpha_{0} \Delta {\text{t}}}}{{\Delta {\text{x}}}}[{\text{f}}({\mathrm{u}}_{{\text{R}}}^{{}} ({\text{x}}_{{{\text{j}} - \tfrac{1}{2}}} ,{\text{t}}^{{\mathrm{n}}} )) - {\text{f}}({\mathrm{u}}_{{\text{L}}}^{{}} ({\text{x}}_{{{\text{j}} - \tfrac{1}{2}}} ,{\text{t}}^{{\mathrm{n}}} ))] $$
(4.7a)
$$ {\text{u}}_{{{\text{j}} + \tfrac{1}{2}}}^{{{\text{n}} + 1}} = \frac{1}{2}[{\text{u}}_{{\text{L}}}^{{}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,{\text{t}}^{{\mathrm{n}}} ) + {\text{u}}_{{\text{R}}}^{{}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,{\text{t}}^{{\mathrm{n}}} )] - \frac{{\upalpha_{0} \Delta {\text{t}}}}{{\Delta {\text{x}}}}[{\text{f}}({\mathrm{u}}_{{\text{R}}}^{{}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,{\text{t}}^{{\mathrm{n}}} )) - {\text{f}}({\mathrm{u}}_{{\text{L}}}^{{}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,{\text{t}}^{{\mathrm{n}}} ))] $$
(4.7b)

For the case of the piece-wise constant data \({\text{u}}_{{\text{L}}}^{{}} ({\text{x}}_{{{\text{j}} - \tfrac{1}{2}}} ) = {\text{u}}_{{{\text{j}} - 1}}^{{\mathrm{n}}} \quad {\text{and}}\quad {\text{u}}_{{\text{R}}}^{{}} ({\text{x}}_{{{\text{j}} - \tfrac{1}{2}}} ) = {\text{u}}_{{\text{j}}}^{{\mathrm{n}}}\), the scheme (4.7a) becomes a monotone first order LxF scheme in the staggered form [10]

$$ {\text{u}}_{{{\text{j}} - \tfrac{1}{2}}}^{{{\text{n}} + 1}} = \frac{1}{2}[{\text{u}}_{{{\text{j}} - 1}}^{{\mathrm{n}}} + {\text{u}}_{{\text{j}}}^{{\mathrm{n}}} ] - \frac{{\upalpha_{0} \Delta {\text{t}}}}{{\Delta {\text{x}}}}[{\text{f}}({\mathrm{u}}_{{\text{j}}}^{{\mathrm{n}}} ) - {\text{f}}({\mathrm{u}}_{{{\text{j}} - 1}}^{{\mathrm{n}}} )] $$
(4.8a)

Similarly, the scheme (4.7b) leads to

$$ {\text{u}}_{{{\text{j}} + \tfrac{1}{2}}}^{{{\text{n}} + 1}} = \frac{1}{2}[{\text{u}}_{{\text{j}}}^{{\mathrm{n}}} + {\text{u}}_{{{\text{j}} + 1}}^{{\mathrm{n}}} ] - \frac{{\upalpha_{0} \Delta {\text{t}}}}{{\Delta {\text{x}}}}[{\text{f}}({\mathrm{u}}_{{{\text{j}} + 1}}^{{\mathrm{n}}} ) - {\text{f}}({\mathrm{u}}_{{\text{j}}}^{{\mathrm{n}}} )] $$
(4.8b)
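The piece-wise constant updates (4.8a)-(4.8b) are one-liners; a sketch (the function name is ours):

```python
def lxf_staggered_step(u_left, u_right, f, dx, dt, alpha0=0.36788):
    """Staggered LxF update (4.8): new value at the interface between
    the cells holding u_left and u_right, with flux function f."""
    return 0.5 * (u_left + u_right) - alpha0 * dt / dx * (f(u_right) - f(u_left))
```

For constant data the flux difference vanishes and the scheme returns the constant unchanged, as a consistent scheme must.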

Then we use a Taylor expansion to compute the updated solution at a random position \({\text{Q}}_{{\text{j}}} = ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} +\uptheta _{{}}^{{\mathrm{n}}} \Delta {\text{x}},{\text{t}}^{{{\text{n}} + 1}} )\) in the x-t plane:

$$ {\text{u}}_{{}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} + \uptheta^{{\mathrm{n}}} \Delta {\text{x}},{\text{t}}^{{{\text{n}} + 1}} ) = {\text{u}}_{{}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,{\text{t}}^{{{\text{n}} + 1}} ) + \sum\limits_{{{\text{k}} = 1}}^{{{\text{m}} - 1}} {\left\{ {\frac{{\partial^{{\text{k}}} }}{{\partial {\text{x}}^{{\text{k}}} }}\;{\text{u}}_{{}}^{{}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,{\text{t}}^{{{\text{n}} + 1}} )} \right\}\frac{{(\uptheta^{{\mathrm{n}}} \Delta {\text{x}})^{{\text{k}}} }}{{{\text{k}}!}}} + {\text{O}}(\Delta {\text{x}}^{{\text{m}}} ) $$
(4.9)

The first term \({\text{u}}_{{}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,{\text{t}}^{{{\text{n}} + 1}} )\) is the solution of conventional (piece-wise constant) RP:

$$ \left. {\begin{array}{*{20}l} {{\text{u}}_{{\text{t}}} + {\text{f}}({\mathrm{u}})_{{\text{x}}} = 0,} \hfill \\ {{\text{u}}({\mathrm{x}},0) = \left\{ {\begin{array}{*{20}c} {{\text{u}}_{{\text{L}}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ),\quad \quad \quad {\text{x}} < {\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} } \\ {{\text{u}}_{{\text{R}}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ),\quad \quad \quad {\text{x}} > {\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} } \\ \end{array} } \right.} \hfill \\ \end{array} } \right\} $$
(4.10)

The first term should be calculated by a monotone scheme in order to avoid entropy violation. The LxF scheme is a classical central monotone scheme, and here we use the staggered LxF schemes (4.7)–(4.8). The terms \({\text{u}}_{{\text{x}}}^{{({\text{k}})}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,{\text{t}}^{{{\text{n}} + 1}} )\) (k = 1, 2, …, m − 1) are the solutions of the k-th order derivative RPs linearized locally around the first term \({\text{u}}_{{}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,{\text{t}}^{{{\text{n}} + 1}} ) \equiv {\text{u}}_{{{\text{j}} + \tfrac{1}{2}}}^{{{\text{n}} + 1}}\) of (4.9):

$$ \left. {\begin{array}{*{20}l} {{\text{v}}_{{\text{t}}} + \lambda ({\text{u}}({\mathrm{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,{\text{t}}_{{}}^{{{\text{n}} + 1}} )){\text{v}}_{{\text{x}}} \; = 0,} \hfill \\ {{\text{v}}({\mathrm{x}},0) = \left\{ {\begin{array}{*{20}c} {{\text{u}}_{{\text{L}}}^{{({\text{k}})}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ),\quad \quad \quad {\text{x}} < {\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} } \\ {{\text{u}}_{{\text{R}}}^{{({\text{k}})}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ),\quad \quad \quad {\text{x}} > {\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} } \\ \end{array} } \right.} \hfill \\ \end{array} } \right\} $$
(4.11)

Here \(v = u_{x}^{(k)}\) are evaluated by the staggered LxF scheme [10]:

$$ {\text{v}}_{{{\text{j}} + \tfrac{1}{2}}}^{{{\text{n}} + 1}} = \frac{1}{2}[{\text{v}}_{{\text{L}}}^{{\mathrm{n}}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ) + {\text{v}}_{{\text{R}}}^{{\mathrm{n}}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} )] - \lambda \frac{{\upalpha_{{\text{k}}} \Delta {\text{t}}}}{{\Delta {\text{x}}}}[{\text{v}}_{{\text{R}}}^{{\mathrm{n}}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ) - {\text{v}}_{{\text{L}}}^{{\mathrm{n}}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} )] $$
(4.12a)

Again, in the case of piece-wise constant data, the scheme (4.12a) becomes

$$ {\text{v}}_{{{\text{j}} + \tfrac{1}{2}}}^{{{\text{n}} + 1}} = \frac{1}{2}[{\text{v}}_{{\text{j}}}^{{\mathrm{n}}} + {\text{v}}_{{{\text{j}} + 1}}^{{\mathrm{n}}} ] - \lambda \frac{{\upalpha_{{\text{k}}} \Delta {\text{t}}}}{{\Delta {\text{x}}}}[{\text{v}}_{{{\text{j}} + 1}}^{{\mathrm{n}}} - {\text{v}}_{{\text{j}}}^{{\mathrm{n}}} ] $$
(4.12b)
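The update (4.12b) is a one-line stencil operation. The following is a minimal Python sketch (the array layout, the name `alpha_k`, and freezing \(\lambda\) as a scalar are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def staggered_lxf_derivative(v, lam, alpha_k, dt, dx):
    """One staggered LxF step of type (4.12b) for a derivative quantity v:
    v_{j+1/2}^{n+1} = (v_j^n + v_{j+1}^n)/2 - lam*alpha_k*(dt/dx)*(v_{j+1}^n - v_j^n).
    Input: cell values v_j^n; output: staggered values v_{j+1/2}^{n+1}."""
    c = lam * alpha_k * dt / dx
    return 0.5 * (v[:-1] + v[1:]) - c * (v[1:] - v[:-1])
```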

Once we obtain all the spatial derivatives, we calculate the Taylor expansion (4.9) and therefore compute the updated solution \({\text{u}}({\mathrm{x}}_{{{\text{j}} + \tfrac{1}{2}}} + \uptheta^{{\mathrm{n}}} \Delta {\text{x}},\,{\text{t}}^{{{\text{n}} + 1}} )\).

Now, we evaluate the updated solution \({\text{u}}_{{\text{j}}}^{{{\text{n}} + 1}}\) as follows:

$$ {\text{u}}_{{\text{j}}}^{{{\text{n}} + 1}} = \left\{ {\begin{array}{*{20}l} {{\text{u}}({\mathrm{x}}_{{{\text{j}} - \tfrac{1}{2}}} + \uptheta^{{\mathrm{n}}} \Delta {\text{x}},{\text{t}}^{{{\text{n}} + 1}} ),\quad \quad 0 \le \uptheta^{{\mathrm{n}}} \le \tfrac{1}{2}} \hfill \\ {{\text{u}}({\mathrm{x}}_{{{\text{j}} + \tfrac{1}{2}}} + (\uptheta^{{\mathrm{n}}} - 1)\Delta {\text{x}},{\text{t}}^{{{\text{n}} + 1}} ),\quad \quad \tfrac{1}{2} \le \uptheta^{{\mathrm{n}}} \le 1} \hfill \\ \end{array} } \right. $$
(4.13)
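A sketch of the sampling rule (4.13), with the two Taylor polynomials (4.9) passed in as callables evaluated at a signed offset in units of \(\Delta x\) (the function names are hypothetical):

```python
def sample_update(expand_left, expand_right, theta):
    """Sampling rule (4.13): for theta in [0, 1/2] evaluate the Taylor
    expansion about x_{j-1/2} at offset theta*dx; otherwise evaluate the
    expansion about x_{j+1/2} at offset (theta - 1)*dx."""
    if theta <= 0.5:
        return expand_left(theta)        # u(x_{j-1/2} + theta*dx, t^{n+1})
    return expand_right(theta - 1.0)     # u(x_{j+1/2} + (theta-1)*dx, t^{n+1})
```

With first-order (constant) expansions this reduces to the classical RCM sampling between the two adjacent Riemann solutions.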

4.2 Two-step RCM on staggered grids

In this section, we propose the two-step RCM on staggered grids. This method consists of the following steps:

Step 1

Firstly, we solve the local RPs: RP(j − 1, j) and RP(j, j + 1) by using the staggered LxF scheme of (4.1) at time \({\text{t}}^{{{\text{n}} + \tfrac{1}{2}}} = ({\text{n}} + \tfrac{1}{2})\Delta {\text{t}}\)

$$ {\text{u}}_{{{\text{j}} - \tfrac{1}{2}}}^{{{\text{n}} + \tfrac{1}{2}}} = \frac{1}{2}[{\text{u}}_{{\text{L}}}^{{}} ({\text{x}}_{{{\text{j}} - \tfrac{1}{2}}} ,{\text{t}}^{{\mathrm{n}}} ) + {\text{u}}_{{\text{R}}}^{{}} ({\text{x}}_{{{\text{j}} - \tfrac{1}{2}}} ,{\text{t}}^{{\mathrm{n}}} )] - \frac{{\upalpha_{0} \Delta {\text{t}}}}{{2\Delta {\text{x}}}}[{\text{f}}({\mathrm{u}}_{{\text{R}}}^{{}} ({\text{x}}_{{{\text{j}} - \tfrac{1}{2}}} ,{\text{t}}^{{\mathrm{n}}} )) - {\text{f}}({\mathrm{u}}_{{\text{L}}}^{{}} ({\text{x}}_{{{\text{j}} - \tfrac{1}{2}}} ,{\text{t}}^{{\mathrm{n}}} ))] $$
(4.14a)
$$ {\text{u}}_{{{\text{j}} + \tfrac{1}{2}}}^{{{\text{n}} + \tfrac{1}{2}}} = \frac{1}{2}[{\text{u}}_{{\text{L}}}^{{}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,{\text{t}}^{{\mathrm{n}}} ) + {\text{u}}_{{\text{R}}}^{{}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,{\text{t}}^{{\mathrm{n}}} )] - \frac{{\upalpha_{0} \Delta {\text{t}}}}{{2\Delta {\text{x}}}}[{\text{f}}({\mathrm{u}}_{{\text{R}}}^{{}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,{\text{t}}^{{\mathrm{n}}} )) - {\text{f}}({\mathrm{u}}_{{\text{L}}}^{{}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,{\text{t}}^{{\mathrm{n}}} ))] $$
(4.14b)

For the case of piece-wise constant data \({\text{u}}_{{\text{L}}}^{{}} ({\text{x}}_{{{\text{j}} - \tfrac{1}{2}}} ) = {\text{u}}_{{{\text{j}} - 1}}^{{\mathrm{n}}} \quad {\text{and}}\quad {\text{u}}_{{\text{R}}}^{{}} ({\text{x}}_{{{\text{j}} - \tfrac{1}{2}}} ) = {\text{u}}_{{\text{j}}}^{{\mathrm{n}}}\), these schemes reduce to the monotone first-order LxF scheme in staggered form

$$ \overline{\text{u}}_{{{\text{j}} - \tfrac{1}{2}}}^{{{\text{n}} + \tfrac{1}{2}}} = \frac{1}{2}[{\text{u}}_{{{\text{j}} - 1}}^{{\mathrm{n}}} + {\text{u}}_{{\text{j}}}^{{\mathrm{n}}} ] - \frac{{\upalpha_{0} \Delta {\text{t}}}}{{2\Delta {\text{x}}}}[{\text{f}}({\mathrm{u}}_{{\text{j}}}^{{\mathrm{n}}} ) - {\text{f}}({\mathrm{u}}_{{{\text{j}} - 1}}^{{\mathrm{n}}} )] $$
(4.15a)
$$ \overline{\text{u}}_{{{\text{j}} + \tfrac{1}{2}}}^{{{\text{n}} + \tfrac{1}{2}}} = \frac{1}{2}[{\text{u}}_{{\text{j}}}^{{\mathrm{n}}} + {\text{u}}_{{{\text{j}} + 1}}^{{\mathrm{n}}} ] - \frac{{\upalpha_{0} \Delta {\text{t}}}}{{2\Delta {\text{x}}}}[{\text{f}}({\mathrm{u}}_{{{\text{j}} + 1}}^{{\mathrm{n}}} ) - {\text{f}}({\mathrm{u}}_{{\text{j}}}^{{\mathrm{n}}} )] $$
(4.15b)
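The half-step predictor (4.15) maps cell values at \(t^n\) to interface values at \(t^{n+\frac{1}{2}}\); a minimal sketch (assuming a user-supplied flux function `f` and a scalar `alpha0`, both illustrative names):

```python
import numpy as np

def staggered_lxf_half_step(u, f, alpha0, dt, dx):
    """Staggered LxF half step (4.15): from cell values u_j^n compute
    interface values u_{j+1/2}^{n+1/2} for all interior interfaces."""
    return 0.5 * (u[:-1] + u[1:]) - alpha0 * dt / (2.0 * dx) * (f(u[1:]) - f(u[:-1]))
```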

Step 2

The updated solution is determined from the numerical solutions of the local RPs RP(j − 1, j) and RP(j, j + 1), evaluated at a random position \({\text{Q}}_{{\text{j}}} = ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} + \uptheta^{{\mathrm{n}}} \Delta {\text{x}},{\text{t}}^{{{\text{n}} + \tfrac{1}{2}}} )\) and computed by the Taylor expansion

$$ \overline{\text{u}}_{{}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} + \uptheta^{{\mathrm{n}}} \Delta {\text{x}},{\text{t}}^{{{\text{n}} + \tfrac{1}{2}}} ) = \overline{\text{u}}_{{}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,{\text{t}}^{{{\text{n}} + \tfrac{1}{2}}} ) + \sum\limits_{{{\text{k}} = 1}}^{{{\text{m}} - 1}} {\left\{ {\frac{{\partial^{{\text{k}}} }}{{\partial x^{{\text{k}}} }}\;\overline{\text{u}}_{{}}^{{}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,{\text{t}}^{{{\text{n}} + \tfrac{1}{2}}} )} \right\}\frac{{(\uptheta^{{\mathrm{n}}} \Delta {\text{x}})^{{\text{k}}} }}{{{\text{k}}!}}} + {\text{o}}(\Delta {\text{x}})^{{\text{m}}} $$
(4.16)

The first term is the solution of the conventional RP (4.10), computed by the scheme (4.15).

The terms \(\overline{\text{u}}_{{\text{x}}}^{{({\text{k}})}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,{\text{t}}^{{{\text{n}} + \tfrac{1}{2}}} )\) (k = 1,2,…,m-1) are the solutions of the k-th order derivative RPs linearized locally around the first term \({\text{u}}_{{}} ({\text{x}}_{{{\mathrm{j}} + \tfrac{1}{2}}} ,{\text{t}}^{{{\mathrm{n}} + \tfrac{1}{2}}} )\) of (4.16):

$$ \left. {\begin{array}{*{20}l} {{\text{v}}_{{\text{t}}} + \lambda (\overline{\text{u}}({\mathrm{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,{\text{t}}_{{}}^{{{\text{n}} + \tfrac{1}{2}}} )){\text{v}}_{{\text{x}}} \; = 0,} \hfill \\ {{\text{v}}({\mathrm{x}},0) = \left\{ {\begin{array}{*{20}c} {{\text{u}}_{{\text{L}}}^{{({\text{k}})}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ),\quad \quad \quad {\text{x}} < {\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} } \\ {{\text{u}}_{{\text{R}}}^{{({\text{k}})}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ),\quad \quad \quad {\text{x}} > {\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} } \\ \end{array} } \right.} \hfill \\ \end{array} } \right\} $$
(4.17)

Here \({\text{v}} = \overline{\text{u}}_{{\text{x}}}^{{({\text{k}})}}\) are computed by the staggered LxF scheme:

$$ \overline{v}_{{{\text{j}} + \tfrac{1}{2}}}^{{{\text{n}} + \tfrac{1}{2}}} = \frac{1}{2}[{\text{v}}_{{\text{L}}}^{{\mathrm{n}}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ) + {\text{v}}_{{\text{R}}}^{{\mathrm{n}}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} )] - \lambda \frac{{\upalpha_{{\text{k}}} \Delta {\text{t}}}}{{2\Delta {\text{x}}}}[{\text{v}}_{{\text{R}}}^{{\mathrm{n}}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ) - {\text{v}}_{{\text{L}}}^{{\mathrm{n}}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} )] $$
(4.18a)

In the case of piece-wise constant data, the scheme (4.18a) becomes

$$ \overline{v}_{{{\text{j}} + \tfrac{1}{2}}}^{{{\text{n}} + \tfrac{1}{2}}} = \frac{1}{2}[{\text{v}}_{{\text{j}}}^{{\mathrm{n}}} + {\text{v}}_{{{\text{j}} + 1}}^{{\mathrm{n}}} ] - \lambda \frac{{\upalpha_{{\text{k}}} \Delta {\text{t}}}}{{2\Delta {\text{x}}}}[{\text{v}}_{{{\text{j}} + 1}}^{{\mathrm{n}}} - {\text{v}}_{{\text{j}}}^{{\mathrm{n}}} ] $$
(4.18b)

Having computed all the derivatives in (4.16), we compute the updated solutions \({\text{u}}_{{{\text{j}} - \tfrac{1}{2}}}^{{{\text{n}} + \tfrac{1}{2}}} \;{\text{and}}\,\;{\text{u}}_{{{\text{j}} + \tfrac{1}{2}}}^{{{\text{n}} + \tfrac{1}{2}}}\) as follows:

$$ {\text{u}}_{{{\text{j}} - \tfrac{1}{2}}}^{{{\text{n}} + \tfrac{1}{2}}} = \overline{\text{u}}({\mathrm{x}}_{{{\text{j}} - \tfrac{1}{2}}} + (\uptheta^{{\mathrm{n}}} - \frac{1}{2})\Delta {\text{x}},{\text{t}}_{{}}^{{{\text{n}} + \tfrac{1}{2}}} )\quad {\text{and}}\quad {\text{u}}_{{{\text{j}} + \tfrac{1}{2}}}^{{{\text{n}} + \tfrac{1}{2}}} = \overline{\text{u}}({\mathrm{x}}_{{j + \tfrac{1}{2}}} + (\uptheta^{{\mathrm{n}}} - \frac{1}{2})\Delta {\text{x}},{\text{t}}_{{}}^{{n + \tfrac{1}{2}}} ) $$
(4.19)

Step 3

Solve the \({\text{RP}}({\mathrm{j}} - \tfrac{1}{2},{\text{j}} + \tfrac{1}{2})\) with initial data (4.19) to find \({\text{u}}_{{\text{j}}}^{{{\text{n}} + 1}}\), and then compute the Taylor expansion

$$ {\text{u}}_{{}} ({\text{x}}_{{\text{j}}} + \uptheta^{{{\text{n}} + 1}} \Delta {\text{x}},{\text{t}}^{{{\text{n}} + 1}} ) = \overline{\text{u}}_{{}} ({\text{x}}_{{\text{j}}} ,{\text{t}}^{{{\text{n}} + 1}} ) + \sum\limits_{{{\text{k}} = 1}}^{{{\text{m}} - 1}} {\left\{ {\frac{{\partial^{{\text{k}}} }}{{\partial x^{{\text{k}}} }}\;\overline{\text{u}}_{{}}^{{({\text{k}})}} ({\text{x}}_{{\text{j}}} ,{\text{t}}^{{{\text{n}} + 1}} )} \right\}\frac{{(\uptheta^{{{\text{n}} + 1}} \Delta {\text{x}})^{{\text{k}}} }}{{{\text{k}}!}}} + {\text{o}}(\Delta {\text{x}})^{{\text{m}}} $$
(4.20)

where the first term is the solution of the conventional \({\text{RP}}({\mathrm{j}} - \tfrac{1}{2},{\text{j}} + \tfrac{1}{2})\), computed by the LxF scheme

$$ \overline{\text{u}}_{{\text{j}}}^{{{\text{n}} + 1}} = \frac{1}{2}[{\text{u}}_{{{\text{j}} - \tfrac{1}{2}}}^{{{\text{n}} + \tfrac{1}{2}}} + {\text{u}}_{{{\text{j}} + \tfrac{1}{2}}}^{{{\text{n}} + \tfrac{1}{2}}} ] - \frac{{\upalpha_{0} \Delta {\text{t}}}}{{2\Delta {\text{x}}}}[{\text{f}}({\mathrm{u}}_{{{\text{j}} + \tfrac{1}{2}}}^{{{\text{n}} + \tfrac{1}{2}}} ) - {\text{f}}({\mathrm{u}}_{{{\text{j}} - \tfrac{1}{2}}}^{{{\text{n}} + \tfrac{1}{2}}} )] $$
(4.21)

The terms \(\overline{\text{u}}_{{\text{x}}}^{{({\text{k}})}} ({\text{x}}_{{\text{j}}} ,{\text{t}}^{{{\text{n}} + 1}} )\) (k = 1, 2, …, m − 1) are the solutions of the k-th order derivative RPs linearized locally around the first term of (4.20):

$$ \left. {\begin{array}{*{20}l} {{\text{v}}_{{\text{t}}} + \lambda (\overline{\text{u}}({\mathrm{x}}_{{\text{j}}} ,{\text{t}}_{{}}^{{{\text{n}} + 1}} )){\text{v}}_{{\text{x}}} \; = 0,} \hfill \\ {{\text{v}}({\mathrm{x}},0) = \left\{ {\begin{array}{*{20}c} {{\text{u}}_{{\text{L}}}^{{({\text{k}})}} ({\text{x}}_{{\text{j}}} ),\quad \quad \quad {\text{x}} < {\text{x}}_{{\text{j}}} } \\ {{\text{u}}_{{\text{R}}}^{{({\text{k}})}} ({\text{x}}_{{\text{j}}} ),\quad \quad \quad {\text{x}} > {\text{x}}_{{\text{j}}} } \\ \end{array} } \right.} \hfill \\ \end{array} } \right\} $$
(4.22)

Here \({\text{v}} = {\text{u}}_{{\text{x}}}^{{({\text{k}})}}\) are evaluated by the LxF scheme:

$$ \overline{v}_{{\text{j}}}^{{{\text{n}} + 1}} = \frac{1}{2}[{\text{v}}_{{\text{L}}}^{{{\text{n}} + \tfrac{1}{2}}} ({\text{x}}_{{\text{j}}} ) + {\text{v}}_{{\text{R}}}^{{{\text{n}} + \tfrac{1}{2}}} ({\text{x}}_{{\text{j}}} )] - \lambda \frac{{\upalpha_{{\text{k}}} \Delta {\text{t}}}}{{2\Delta {\text{x}}}}[{\text{v}}_{{\text{R}}}^{{{\text{n}} + \tfrac{1}{2}}} ({\text{x}}_{{\text{j}}} ) - {\text{v}}_{{\text{L}}}^{{{\text{n}} + \tfrac{1}{2}}} ({\text{x}}_{{\text{j}}} )] $$
(4.23a)

In the case of piece-wise constant, this scheme becomes

$$ \overline{v}_{{\text{j}}}^{{{\text{n}} + 1}} = \frac{1}{2}[{\text{v}}_{{\text{j}}}^{{{\text{n}} + \tfrac{1}{2}}} + {\text{v}}_{{{\text{j}} + 1}}^{{{\text{n}} + \tfrac{1}{2}}} ] - \lambda \frac{{\upalpha_{{\text{k}}} \Delta {\text{t}}}}{{2\Delta {\text{x}}}}[{\text{v}}_{{{\text{j}} + 1}}^{{{\text{n}} + \tfrac{1}{2}}} - {\text{v}}_{{\text{j}}}^{{{\text{n}} + \tfrac{1}{2}}} ] $$
(4.23b)

After computing all the derivatives in (4.20), we sample the updated solution \({\text{u}}_{{\text{j}}}^{{{\text{n}} + 1}}\) as follows:

$$ {\text{u}}_{{\text{j}}}^{{{\text{n}} + 1}} = \overline{\text{u}}({\mathrm{x}}_{{\text{j}}} + (\uptheta^{{{\text{n}} + 1}} - \frac{1}{2})\Delta {\text{x}},{\text{t}}^{{{\text{n}} + 1}} )\quad 0 \le \uptheta^{{{\text{n}} + 1}} \le 1 $$
(4.24)

4.3 Central RCM with third-order flux

The main goal of this paper is to enhance the accuracy of the CRCM. For this purpose, we use the central third-order TVD flux [22] instead of the first-order flux as the building block of the presented methods. The new method is denoted CRCM-r-TVD. It consists of the following steps:

Step 1

Firstly, we solve the \({\text{RP}}({\mathrm{j}} - \tfrac{1}{2},{\text{j}} + \tfrac{1}{2})\) to obtain the solution \(\overline{\text{u}}_{{\text{j}}}^{{{\text{n}} + 1}}\) as

$$ \overline{\text{u}}_{{\text{j}}}^{{{\text{n}} + 1}} = \frac{1}{2}[{\text{u}}_{{{\text{j}} - \tfrac{1}{2}}}^{{\mathrm{n}}} + {\text{u}}_{{{\text{j}} + \tfrac{1}{2}}}^{{\mathrm{n}}} ] - \frac{{\upalpha_{0} \Delta {\text{t}}}}{{\Delta {\text{x}}}}[{\text{F}}_{{{\text{j}} + \tfrac{1}{2}}}^{{\mathrm{n}}} - {\text{F}}_{{{\text{j}} - \tfrac{1}{2}}}^{{\mathrm{n}}} ] $$
(4.25)

Here we use the third-order TVD flux presented in [22]. For the linear equation, it reads

$$\begin{aligned}& {\text{F}}_{{{\text{j}} + {\raise0.5ex\hbox{$\scriptstyle 1$} \kern-0.1em/\kern-0.15em \lower0.25ex\hbox{$\scriptstyle 2$}}}} = \frac{1}{2}\left( {\lambda {\text{u}}_{{\text{j}}} + \lambda {\text{u}}_{{{\text{j}} + 1}} } \right) - \frac{1}{2}\left| \lambda \right|\Delta_{{{\text{j}} + {\raise0.5ex\hbox{$\scriptstyle 1$} \kern-0.1em/\kern-0.15em \lower0.25ex\hbox{$\scriptstyle 2$}}}} {\text{u}}\\ & + \left| \lambda \right|\left\{ {A_{0} \Delta_{{{\text{j}} + {\raise0.5ex\hbox{$\scriptstyle 1$} \kern-0.1em/\kern-0.15em \lower0.25ex\hbox{$\scriptstyle 2$}}}} {\text{u}} + {\text{A}}_{1} \Delta_{{{\text{j}} + {\text{L}} + {\raise0.5ex\hbox{$\scriptstyle 1$} \kern-0.1em/\kern-0.15em \lower0.25ex\hbox{$\scriptstyle 2$}}}} {\text{u}}} \right\}\varphi_{{\text{j}}} + \left| \lambda \right|{\text{A}}_{2} \Delta_{{{\text{j}} + {\text{M}} + {\raise0.5ex\hbox{$\scriptstyle 1$} \kern-0.1em/\kern-0.15em \lower0.25ex\hbox{$\scriptstyle 2$}}}} {\text{u}}\varphi_{{{\text{j}} + {\text{M}}}}\end{aligned} $$
(4.26)

where L = −1,M = 1 for c > 0 and L = 1, M = −1 for c < 0.

Here \({\text{c}} = \lambda \frac{{\Delta {\text{t}}}}{{\Delta {\text{x}}}}\) is the Courant number and \(\Delta_{{{\text{j}} + {\raise0.5ex\hbox{$\scriptstyle 1$} \kern-0.1em/\kern-0.15em \lower0.25ex\hbox{$\scriptstyle 2$}}}} {\text{u}} = {\text{u}}_{{{\text{j}} + 1}} - {\text{u}}_{{\text{j}}}\),

$$ A_{0} = \frac{1}{2} - \frac{\left| c \right|}{4},A_{1} = - \frac{\left| c \right|}{8} - \frac{{c^{2} }}{8},A_{2} = - \frac{\left| c \right|}{8} + \frac{{c^{2} }}{8} $$
(4.27)

Here \(\upphi_{{\text{j}}}\) and \(\upphi_{{{\text{j}} + {\text{M}}}}\) are flux limiter functions; see [22].
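For the linear case, the flux (4.26) with the coefficients (4.27) can be written out directly. The sketch below fixes \(\lambda > 0\) (so L = −1, M = 1) and takes the limiter values as arguments; \(\upphi = 1\) recovers the unlimited third-order flux (an illustrative sketch, not the authors' code):

```python
def tvd3_flux_linear(u_jm1, u_j, u_jp1, u_jp2, lam, dt, dx, phi_j=1.0, phi_jM=1.0):
    """Third-order TVD flux (4.26)-(4.27) at interface j+1/2 for u_t + lam*u_x = 0,
    written out for lam > 0 (so L = -1, M = 1). phi_j and phi_jM are the
    flux-limiter values; phi = 1 gives the unlimited third-order flux."""
    c = lam * dt / dx                      # Courant number
    A0 = 0.5 - abs(c) / 4.0
    A1 = -abs(c) / 8.0 - c * c / 8.0
    A2 = -abs(c) / 8.0 + c * c / 8.0
    d_jp = u_jp1 - u_j                     # Delta_{j+1/2} u
    d_jm = u_j - u_jm1                     # Delta_{j+L+1/2} u with L = -1
    d_jpp = u_jp2 - u_jp1                  # Delta_{j+M+1/2} u with M = 1
    return (0.5 * (lam * u_j + lam * u_jp1) - 0.5 * abs(lam) * d_jp
            + abs(lam) * (A0 * d_jp + A1 * d_jm) * phi_j
            + abs(lam) * A2 * d_jpp * phi_jM)
```

For the data \(u_j = x_j\) with \(\lambda = 1\), this reproduces the exact time-averaged upwind flux, which is a quick sanity check on the coefficients (4.27).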

For nonlinear scalar problems \(\lambda = \lambda ({\text{u}})\), we define the wave speed

$$ \lambda_{{{\text{j}} + \tfrac{1}{2}}} = \left\{ {\begin{array}{*{20}l} {\frac{{\Delta_{{{\text{j}} + \tfrac{1}{2}}} {\text{f}}}}{{\Delta_{{{\text{j}} + \tfrac{1}{2}}} {\text{u}}}}} \hfill & {\Delta_{{{\text{j}} + \tfrac{1}{2}}} {\text{u}} \ne 0} \hfill \\ {\left. {\frac{{\partial {\text{f}}}}{{\partial {\text{u}}}}} \right|_{{{\text{u}}_{{\text{j}}} }} } \hfill & {\Delta_{{{\text{j}} + \tfrac{1}{2}}} {\text{u}} = 0} \hfill \\ \end{array} } \right. $$
(4.28)

For nonlinear problems, the numerical flux (4.26) takes the form

$$\begin{aligned}& {\text{F}}_{{{\text{j}} + {\raise0.5ex\hbox{$\scriptstyle 1$} \kern-0.1em/\kern-0.15em \lower0.25ex\hbox{$\scriptstyle 2$}}}} = \frac{1}{2}\left( {{\text{f}}_{{\text{j}}} + {\text{f}}_{{{\text{j}} + 1}} } \right) - \frac{1}{2}\left| {\lambda_{{{\text{j}} + \tfrac{1}{2}}} } \right|\Delta_{{{\text{j}} + {\raise0.5ex\hbox{$\scriptstyle 1$} \kern-0.1em/\kern-0.15em \lower0.25ex\hbox{$\scriptstyle 2$}}}} {\text{u}}\\ & + \left| {\lambda_{{{\text{j}} + \tfrac{1}{2}}} } \right|\,\left\{ {{\text{A}}_{0} \Delta_{{{\text{j}} + {\raise0.5ex\hbox{$\scriptstyle 1$} \kern-0.1em/\kern-0.15em \lower0.25ex\hbox{$\scriptstyle 2$}}}} {\text{u}} + {\text{A}}_{1} \Delta_{{{\text{j}} + {\text{L}} + {\raise0.5ex\hbox{$\scriptstyle 1$} \kern-0.1em/\kern-0.15em \lower0.25ex\hbox{$\scriptstyle 2$}}}} {\text{u}}} \right\}\upphi_{{\text{j}}} + \left| {\lambda_{{{\text{j}} + \tfrac{1}{2}}} } \right|{\text{A}}_{2} \Delta_{{{\text{j}} + {\text{M}} + {\raise0.5ex\hbox{$\scriptstyle 1$} \kern-0.1em/\kern-0.15em \lower0.25ex\hbox{$\scriptstyle 2$}}}} {\text{u}}\upphi_{{{\text{j}} + {\text{M}}}}\end{aligned} $$
(4.29)

The coefficients \({\text{A}}_{{\text{j}}}\) are the same as in (4.27), with c replaced by \({\text{c}}_{{{\text{j}} + \tfrac{1}{2}}} = \frac{{\Delta {\text{t}}}}{{\Delta {\text{x}}}}\lambda_{{{\text{j}} + \tfrac{1}{2}}}\).
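The local wave speed (4.28) is a standard divided-difference (Roe-type) estimate with a derivative fallback; a minimal sketch (the argument names are illustrative):

```python
def wave_speed(u_j, u_jp1, f, dfdu):
    """Local wave speed (4.28): the divided difference of the flux across the
    jump when it is nonzero, otherwise the pointwise derivative f'(u_j)."""
    du = u_jp1 - u_j
    if du != 0.0:
        return (f(u_jp1) - f(u_j)) / du
    return dfdu(u_j)
```

For Burgers' flux \(f(u) = u^2/2\), this gives the arithmetic mean \((u_j + u_{j+1})/2\) across a jump.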

Then we use Taylor expansion

$$ {\text{u}}_{{}} ({\text{x}}_{{\text{j}}} + \uptheta^{{{\text{n}} + 1}} \Delta {\text{x}},{\text{t}}^{{{\text{n}} + 1}} ) = \overline{\text{u}}_{{}} ({\text{x}}_{{\text{j}}} ,{\text{t}}^{{{\text{n}} + 1}} ) + \sum\limits_{{{\text{k}} = 1}}^{{{\text{m}} - 1}} {\left\{ {\frac{{\partial^{{\text{k}}} }}{{\partial x^{{\text{k}}} }}\overline{\text{u}}_{{}}^{{({\text{k}})}} ({\text{x}}_{{\text{j}}} ,{\text{t}}^{{{\text{n}} + 1}} )} \right\}\frac{{(\uptheta^{{{\text{n}} + 1}} \Delta {\text{x}})^{{\text{k}}} }}{{{\text{k}}!}}} + {\text{o}}(\Delta {\text{x}})^{{\text{m}}} $$
(4.30)

where \(\overline{\text{u}}_{{\text{j}}}^{{{\text{n}} + 1}}\) is given by (4.25). The terms \(\overline{\text{u}}_{{\text{x}}}^{{({\text{k}})}} ({\text{x}}_{{\text{j}}} ,{\text{t}}^{{{\text{n}} + 1}} )\) (k = 1, 2, …, m − 1) are the solutions of the k-th order derivative RPs linearized locally around the first term of (4.30):

$$ \left. {\begin{array}{*{20}l} {{\text{v}}_{{\text{t}}} + \lambda (\overline{\text{u}}({\mathrm{x}}_{{\text{j}}} ,{\text{t}}_{{}}^{{{\text{n}} + 1}} )){\text{v}}_{{\text{x}}} = 0,} \hfill \\ {{\text{v}}({\mathrm{x}},0) = \left\{ {\begin{array}{*{20}c} {{\text{u}}_{{\text{L}}}^{{({\text{k}})}} ({\text{x}}_{{\text{j}}} ),\quad \quad \quad {\text{x}} < {\text{x}}_{{\text{j}}} } \\ {{\text{u}}_{{\text{R}}}^{{({\text{k}})}} ({\text{x}}_{{\text{j}}} ),\quad \quad \quad {\text{x}} > {\text{x}}_{{\text{j}}} } \\ \end{array} } \right.} \hfill \\ \end{array} } \right\} $$
(4.31)

\(v = u_{x}^{(k)}\) are evaluated by the finite difference scheme:

$$ {\text{v}}_{{\text{j}}}^{{{\text{n}} + 1}} = \frac{1}{2}[{\text{v}}_{{{\text{j}} - \tfrac{1}{2}}}^{{\mathrm{n}}} + {\text{v}}_{{{\text{j}} + \tfrac{1}{2}}}^{{\mathrm{n}}} ] - \frac{{\upalpha_{{\text{k}}} \Delta {\text{t}}}}{{\Delta {\text{x}}}}[{\text{F}}_{{{\text{j}} + \tfrac{1}{2}}}^{{\mathrm{n}}} - {\text{F}}_{{{\text{j}} - \tfrac{1}{2}}}^{{\mathrm{n}}} ] $$
(4.32)

with the third-order TVD flux, as in (4.25).

After computing all the derivatives in (4.30), we sample the updated solution \({\text{u}}_{{\text{j}}}^{{{\text{n}} + 1}}\) as follows:

$$ {\text{u}}_{{\text{j}}}^{{{\text{n}} + 1}} = \overline{\text{u}}({\mathrm{x}}_{{\text{j}}} + (\uptheta^{{{\text{n}} + 1}} - \frac{1}{2})\Delta {\text{x}},{\text{t}}^{{{\text{n}} + 1}} )\quad 0 \le \uptheta^{{{\text{n}} + 1}} \le 1 $$

4.4 Semi-discrete finite volume scheme

There is another approach for constructing numerical methods: the semi-discrete approach. Here we discretize in space while leaving the problem continuous in time, which produces a system of Ordinary Differential Equations (ODEs) in time. This approach gives more flexibility and is well suited for constructing very high-order schemes.

By integrating (4.1) with respect to x and keeping the time variable continuous, we obtain the semi-discrete finite volume scheme

$$ \frac{{\text{d}}}{{{\text{dt}}}}{\text{u}}_{{\text{j}}} ({\text{t}}) = - \frac{1}{{\Delta {\text{x}}}}\left\{ {{\text{F}}_{{{\text{j}} + \tfrac{1}{2}}} - {\text{F}}_{{{\text{j}} - \tfrac{1}{2}}} } \right\} $$
(4.33)

where \({\text{u}}_{\mathrm{j}} ({\text{t}})\) and \({\text{F}}_{{{\mathrm{j}} + \tfrac{1}{2}}}\) are

$$ {\text{u}}_{{\text{j}}} ({\text{t}}) = \frac{1}{{\Delta {\text{x}}}}\int\limits_{{{\text{x}}_{{{\text{j}} - \tfrac{1}{2}}} }}^{{{\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} }} {{\text{u}}({\mathrm{x}},{\text{t}})\,{\text{dx}}} ,\quad {\text{F}}_{{{\text{j}} + \tfrac{1}{2}}} = {\text{F}}({\mathrm{u}}_{{{\text{j}} + \tfrac{1}{2}}} ({\text{t}})) $$
(4.34)

Equation (4.33) is a system of time-dependent ODEs that should be solved by a stable ODE solver that preserves the spatial accuracy of the scheme. In this study, we use the linear strong-stability-preserving Runge–Kutta algorithm \(\ell SSPRK\)(s, s − 1), i.e., the s-stage, (s − 1)-th order method presented by Gottlieb [5].

For our central RCM, we use the semi-discrete scheme (4.33) instead of fully discrete schemes (4.3).
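The method-of-lines structure of (4.33) can be sketched as follows, using a simple LxF-type numerical flux and, for brevity, a two-stage SSP Runge–Kutta step in place of the \(\ell SSPRK\)(s, s − 1) scheme of [5] (all names and the periodic boundary treatment are illustrative assumptions):

```python
import numpy as np

def semidiscrete_rhs(u, f, alpha, dx):
    """Right-hand side of the semi-discrete scheme (4.33) with a simple
    LxF-type numerical flux and periodic boundaries; alpha is the
    dissipation coefficient (e.g. max |f'(u)|)."""
    up = np.roll(u, -1)                                  # u_{j+1}
    F = 0.5 * (f(u) + f(up)) - 0.5 * alpha * (up - u)    # F_{j+1/2}
    return -(F - np.roll(F, 1)) / dx                     # -(F_{j+1/2} - F_{j-1/2})/dx

def ssp_rk2_step(u, rhs, dt):
    """One step of the two-stage SSP Runge-Kutta method (stand-in here for
    the lSSPRK(s, s-1) scheme of [5])."""
    u1 = u + dt * rhs(u)
    return 0.5 * (u + u1 + dt * rhs(u1))
```

By construction, the flux differences telescope over a periodic mesh, so the scheme is conservative: the cell-average sum is preserved to rounding error.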

5 Accuracy

In this section, we discuss the accuracy of the CRCM based on a semi-discrete scheme on non-staggered grids, under the assumption of a smooth solution.

Firstly, we reconstruct, at \({\text{t}}^{{\mathrm{n}}}\), the cell averages \({\text{u}}_{{\text{j}}}^{{\mathrm{n}}}\) by a piece-wise polynomial \(P_{j} (x)\). To suppress spurious oscillations near discontinuities, we use the WENO polynomial reconstruction. For r stencils, we construct the polynomial \({\text{P}}_{{\text{j}}} ({\text{x}})\) of degree (r − 1), and therefore the spatial accuracy is (2r − 1)-th order, i.e.,

$$ {\text{P}}_{{\text{j}}} ({\text{x}}) = {\text{u}}({\mathrm{x}},{\text{t}}^{{\mathrm{n}}} ) + {\text{o}}(\Delta {\text{x}})^{{2{\text{r}} - 1}} $$
(5.1)

and the spatial accuracy of the k-th order derivative of u is (r − k)-th order

$$ \partial_{{\text{x}}}^{{({\text{k}})}} {\text{P}}_{{\text{j}}} ({\text{x}}) = {\text{u}}_{{\text{x}}}^{{({\text{k}})}} ({\text{x}},{\text{t}}^{{\mathrm{n}}} ) + {\text{o}}(\Delta {\text{x}})^{{{\text{r}} - {\text{k}}}} $$
(5.2)

Also, the function \({\text{g}}_{{\text{j}}} ({\text{x}}) = {\text{f}}({\mathrm{P}}_{{\text{j}}} ({\text{x}}))\) has the order of accuracy [11, 23]

$$ {\text{g}}_{{\text{j}}} ({\text{x}}) = {\text{f}}({\mathrm{x}},{\text{t}}^{{\mathrm{n}}} ) + {\text{o}}(\Delta {\text{x}})^{{2{\text{r}} - 1}} $$
(5.3a)
$$ \partial_{{\text{x}}}^{{({\text{k}})}} {\text{g}}_{{\text{j}}} ({\text{x}}) = {\text{f}}_{{\text{x}}}^{{({\text{k}})}} ({\text{x}},{\text{t}}^{{\mathrm{n}}} ) + {\text{o}}(\Delta {\text{x}})^{{{\text{r}} - {\text{k}}}} $$
(5.3b)

The numerical solution defined in (4.13) with (4.9) can be written as

$$\begin{aligned}& {\text{u}}_{{\text{j}}}^{{{\text{n}} + 1}} = {\text{u}}_{{}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} + \uptheta^{{\mathrm{n}}} \Delta {\text{x}},{\text{t}}^{{{\text{n}} + 1}} ) = {\text{u}}_{{}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,{\text{t}}^{{{\text{n}} + 1}} )\\ &\quad + \sum\limits_{{{\text{k}} = 1}}^{{{\text{m}} - 1}} {\left\{ {\frac{{\partial^{{\text{k}}} }}{{\partial x^{{\text{k}}} }}\;{\text{u}}_{{}}^{{}} ({\text{x}}_{{{\text{j}} + \tfrac{1}{2}}} ,{\text{t}}^{{{\text{n}} + 1}} )} \right\}\frac{{(\uptheta^{{\mathrm{n}}} \Delta {\text{x}})^{{\text{k}}} }}{{{\text{k}}!}}} + {\text{o}}(\Delta {\text{x}})^{{\text{m}}}\end{aligned} $$
(5.4)

where the first term is the solution of (4.10) using the semi-discrete scheme (4.33) with the truncation error

$$ \begin{aligned} {\text{T}}_{0} ({\text{x}}_{{\text{j}}} ,{\text{t}}^{{\mathrm{n}}} ) & = \frac{{{\text{du}}_{{\text{j}}} }}{{{\text{dt}}}} + \frac{1}{{\Delta {\text{x}}}}\left\{ {{\text{F}}_{{{\text{j}} + \tfrac{1}{2}}} - {\text{F}}_{{{\text{j}} - \tfrac{1}{2}}} } \right\} = [{\text{u}}_{{\text{t}}} + {\text{o}}(\Delta {\text{t}})^{{\text{s}}} ] + [{\text{f}}_{{\text{x}}} + {\text{o}}(\Delta {\text{x}})^{{2{\text{r}} - 1}} ] \\ & = {\text{o}}(\Delta {\text{t}})^{{\text{s}}} + {\text{o}}(\Delta {\text{x}})^{{2{\text{r}} - 1}} \\ \end{aligned} $$
(5.5)

since \({\text{u}}_{{\text{t}}} + {\text{f}}_{{\text{x}}} = 0\). Here s is the order of the time discretization method.

The remaining terms of (5.4) are the solutions of (4.11) with (4.33) with the error

$$ {\text{T}}_{{\text{k}}} ({\text{x}}_{{\text{j}}} ,{\text{t}}^{{\mathrm{n}}} ) = {\text{o}}(\Delta {\text{t}})^{{\text{s}}} + {\text{o}}(\Delta {\text{x}})^{{{\text{r}} - {\text{k}}}} $$
(5.6)

Therefore, from (5.4)–(5.6), we get the solution \({\text{u}}_{{\text{j}}}^{{{\text{n}} + 1}}\) with the total truncation error

$$ \begin{aligned} {\text{T}}_{{}} ({\text{x}}_{{\mathrm{j}}} ,{\text{t}}^{{{\mathrm{n}} + 1}} ) & = [{\text{o}}(\Delta {\mathrm{t}})^{{\mathrm{s}}} + {\text{o}}(\Delta {\mathrm{x}})^{{2{\text{r}} - 1}} ] + \sum\limits_{{{\mathrm{k}} = 1}}^{{{\mathrm{m}} - 1}} {[{\text{o}}(\Delta {\mathrm{t}})^{{\mathrm{s}}} + {\text{o}}(\Delta {\mathrm{x}})^{{{\mathrm{r}} - {\mathrm{k}}}} } ]\,\frac{{(\uptheta \Delta {\text{x}})^{{\mathrm{k}}} }}{{{\mathrm{k}}!}} + {\text{o}}(\Delta {\mathrm{x}})^{{\text{m}}} \\ & = {\text{o}}(\Delta {\text{t}})^{{\mathrm{s}}} + {\text{o}}(\Delta {\mathrm{x}})^{{2{\mathrm{r}} - 1}} + {\text{o}}(\Delta {\mathrm{t}})^{{\text{s}}} \sum\limits_{{{\mathrm{k}} = 1}}^{{{\text{m}} - 1}} {{\mathrm{o}}(\Delta {\text{x}})^{{\mathrm{k}}} } \frac{{\uptheta^{{\text{k}}} }}{{{\mathrm{k}}!}} + {\text{o}}(\Delta {\mathrm{x}})^{{\mathrm{r}}} .\sum\limits_{{{\text{k}} = 1}}^{{{\mathrm{m}} - 1}} {\frac{{\uptheta^{{\text{k}}} }}{{{\mathrm{k}}!}} + } {\text{o}}(\Delta {\mathrm{x}})^{{\mathrm{m}}} \\ \end{aligned} $$
(5.7)

If the ratio \(\frac{{\Delta {\mathrm{t}}}}{{\Delta {\mathrm{x}}}}\) is constant, we get

$$ {\text{T}}_{{}} ({\text{x}}_{{\text{j}}} ,{\text{t}}^{{{\text{n}} + 1}} ) = {\text{o}}(\Delta {\text{t}})^{{\text{s}}} + {\text{o}}(\Delta {\text{x}})^{{2{\text{r}} - 1}} + {\text{o}}(\Delta {\text{x}})^{{\text{s}}} \sum\limits_{{{\text{k}} = 1}}^{{{\text{m}} - 1}} {{\text{o}}(\Delta {\text{x}})^{{\text{k}}} } \frac{{\uptheta^{{\text{k}}} }}{{{\text{k}}!}} + {\text{o}}(\Delta {\text{x}})^{{\text{r}}} \sum\limits_{{{\text{k}} = 1}}^{{{\text{m}} - 1}} {\frac{{\uptheta^{{\text{k}}} }}{{{\text{k}}!}} + } {\text{o}}(\Delta {\text{x}})^{{\text{m}}} $$

Since \(\uptheta \le 1\), the factor \(\frac{{\uptheta^{{\text{k}}} }}{{{\text{k}}!}}\) is bounded, and therefore

$$ \left| {{\text{T}}_{{}} ({\text{x}}_{{\text{j}}} ,{\text{t}}^{{{\text{n}} + 1}} )} \right| \le {\text{o}}(\Delta {\text{t}})^{{\text{s}}} + {\text{o}}(\Delta {\text{x}})^{{2{\text{r}} - 1}} + {\text{o}}(\Delta {\text{x}})^{{\text{s}}} \sum\limits_{{{\text{k}} = 1}}^{{{\text{m}} - 1}} {{\text{A}}_{1} {\text{o}}(\Delta {\text{x}})^{{\text{k}}} } + {\text{A}}_{2} {\text{o}}(\Delta {\text{x}})^{{\text{r}}} + {\text{o}}(\Delta {\text{x}})^{{\text{m}}} $$

where A1 and A2 are constants.

Therefore

$$ \left| {{\text{T}}({\mathrm{x}}_{{\text{j}}} ,{\text{t}}^{{\mathrm{n}}} )} \right| \le {\text{C}}_{1} (\Delta {\text{x}})^{{\text{d}}} + {\text{C}}_{2} (\Delta {\text{t}})^{{\text{s}}} $$
(5.8)

where C1 and C2 are constants and d = min{s + 1,r,m}. Using the WENO reconstruction of order r, the time discretization of order s and Taylor expansion (4.9) of order m, we get the scheme of order s in time and d in space.

For the central RCM on staggered grids and the RCM with TVD fluxes, we assess the accuracy empirically.

6 Numerical experiments

In this section, we study numerically the efficiency and convergence properties of the schemes presented here. To show their efficiency, we compare their numerical results with those of other state-of-the-art high-order shock-capturing schemes, e.g., the RCM [16], the central ADER schemes [22], and others. For all schemes we use CFL = 0.45. For all examples, we use a uniform mesh with N cells. In all graphs, the exact solution is shown by a solid line while the numerical solution is shown by symbols.

Here, we compare the following schemes:

  1. RCM is the original RCM [16].

  2. NONS-CRCM-r is the non-staggered central RCM of r-th order.

  3. STAG-CRCM-r is the staggered central RCM of r-th order.

  4. ADER-r is the r-th order central ADER scheme [23] with LxF flux.

  5. ADER-r-TVD is the r-th order central ADER scheme [23] with the third-order TVD flux [22].

  6. CRCM-r-TVD is the CRCM of order r with the third-order TVD flux [22].

We use the \(\ell SSPRK\)(s, s − 1), s = 6 method for time integration for the linear problems, while for the nonlinear problems we replace the linear SSPRK scheme with the SSPRK(5,4) method [5].

6.1 Example 1 (Convergence tests)

Firstly, we test the accuracy of the new schemes on the linear scalar problem

$$ {\text{u}}_{{\text{t}}} + {\text{u}}_{{\text{x}}} = 0,\quad {\text{x}} \in [ - 1,1] $$
(6.1)

with periodic initial conditions

$$ {\text{u}}({\mathrm{x}},0) = \sin (\uppi {\text{x}} - \frac{{\sin \uppi {\text{x}}}}{\uppi }) $$
(6.2)

defined on [− 1, 1]. Table 1 presents the convergence rates and errors in the \(L^{1}\) norm at time t = 20. We notice from the table that the ADER5, NONS-CRCM-5, STAG-CRCM-5, ADER-5-TVD, and CRCM-5-TVD methods reach the designed order of accuracy. We also observe that the central RCM schemes presented here are more accurate in both error size and order of accuracy, while ADER-5-TVD and CRCM-5-TVD achieve an even higher observed order (approximately 6.5). Moreover, CRCM-5-TVD is comparable with ADER-5-TVD. From Table 1 we have the following notes:

Table 1 Convergence study for example 1

1- The CPU time used by the NONS-CRCM-5 scheme is about 66% of that used by the ADER5 scheme on a given mesh, while the CPU time used by the STAG-CRCM-5 scheme is about 75% of that used by the ADER5 scheme. This is because CRCM requires fewer flux evaluations than the ADER methods.

2- As expected, ADER-5-TVD and CRCM-5-TVD are the most accurate in both error size and order of accuracy. This is due to utilizing a high-order flux (third-order TVD) instead of a first-order flux. However, the CPU time used by CRCM-5-TVD is about 35% of that used by ADER-5-TVD on given meshes. Moreover, the CPU time used by CRCM-5-TVD is about 52% of that used by NONS-CRCM-5 and about 57% of that used by STAG-CRCM-5. This is because the updated solution of the NONS-CRCM-5 and STAG-CRCM-5 methods is computed using the information of the two adjacent RPs, while CRCM-5-TVD uses only one RP. Figure 3 compares the efficiency in terms of \(L^{1}\) errors versus CPU time. The scheme that needs the least CPU time to achieve the same error is the most efficient; on this measure, CRCM-5-TVD is the most efficient. It is also instructive to see how the numerical solution depends on the number of grid points N. For this purpose, the errors of the numerical solutions in the \(L^{1}\) norm are shown in Fig. 4 as a function of N. It is clear that the NONS-CRCM-5, STAG-CRCM-5, and ADER-5 schemes achieve the fifth-order accuracy even on coarse grids, while the ADER-5-TVD and CRCM-5-TVD schemes achieve approximately seventh order. For reference, we plot lines with slopes \(N^{ - 5}\) and \(N^{ - 7}\), which are consistent with the formal order of accuracy of the schemes.
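The observed orders quoted above can be recovered from the \(L^{1}\) errors on two successively refined grids; a small helper for this standard computation (illustrative, not tied to the tables in this paper):

```python
import math

def observed_order(err_coarse, err_fine, refinement=2.0):
    """Empirical convergence rate from errors on two grids whose mesh widths
    differ by `refinement`: p = log(e_coarse/e_fine) / log(refinement)."""
    return math.log(err_coarse / err_fine) / math.log(refinement)
```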

Fig. 3
figure 3

CPU time and \({\text{L}}^{1} \,{\text{error}}\) curve for Example 1

Fig. 4
figure 4

Rate of convergence in terms of \({\text{L}}^{1} \,{\text{error}}\) for example 1

6.2 Example 2 (example with discontinuities)

We consider the linear Eq. (6.1) with the following initial data

$$ {\text{u}}({\mathrm{x}},0) = \left\{ {\begin{array}{*{20}l} {\frac{1}{6}[{\text{G}}({\mathrm{x}},{\text{z}} - \delta ) + {\text{G}}({\mathrm{x}},{\text{z}} + \delta ) + 4{\text{G}}({\mathrm{x}},{\text{z}})],} \hfill & { - 0.8 \le {\text{x}} \le - 0.6} \hfill \\ {1,} \hfill & { - 0.4 \le {\text{x}} \le - 0.2} \hfill \\ {1 - \left| {10({\text{x}} - 0.1)} \right|} \hfill & {0 \le {\text{x}} \le 0.2} \hfill \\ {\frac{1}{6}[{\text{F}}({\mathrm{x}},{\text{a}} - \delta ) + {\text{F}}({\mathrm{x}},{\text{a}} + \delta ) + 4{\text{F}}({\mathrm{x}},{\text{a}})],} \hfill & {0.4 \le {\text{x}} \le 0.6} \hfill \\ {0,} \hfill & {{\text{otherwise}}} \hfill \\ \end{array} } \right. $$
(6.3)

Here \({\text{G}}({\mathrm{x}},{\text{z}}) = \exp ( - \upbeta ({\text{x}} - {\text{z}})^{2} )\) and \({\text{F}}({\mathrm{x}},{\text{a}}) = \{ \max (1 - \upalpha^{2} ({\text{x}} - {\text{a}})^{2} ,0)\}^{{{\raise0.5ex\hbox{$\scriptstyle 1$} \kern-0.1em/\kern-0.15em \lower0.25ex\hbox{$\scriptstyle 2$}}}}\), with periodic boundary conditions on [−1, 1]. The constants are \({\text{a}} = 0.5,\;{\text{z}} = - 0.7,\;\delta = 0.005,\;\upalpha = 10,\;{\text{and}}\;\upbeta = (\log 2)/36\delta^{2}\). The initial condition consists of several shapes that are difficult for numerical schemes to resolve correctly: some are not smooth, and the others are smooth but very sharp. We compute the solution to this problem at time t = 20 with 200 cells. Figures 5, 6, 7 and 8 show the results of the RCM, NONS-CRCM-5, STAG-CRCM-5 and CRCM-5-TVD schemes, respectively. Comparing these results with the ADER-5 and ADER-5-TVD schemes (Figures 3–4 in [23]), we observe that the RCM produces sharp discontinuities but poor accuracy in the smooth regions, while the numerical solution obtained by ADER-5 is quite satisfactory. The NONS-CRCM-5 and STAG-CRCM-5 schemes give better results, and the numerical solutions obtained by the CRCM-5-TVD and ADER-5-TVD schemes are almost indistinguishable from the exact solution. Comparing the results here with the sixth-order WENO schemes presented in [7, 8, 20] (figure 5 in [8] with N = 400, figure 4 in [20] with N = 400, and figure 6 in [7] with N = 200), we observe that our schemes are comparable to them and are much better at capturing the shocks and approximating the top of the semi-ellipse. We also notice that CRCM-5-TVD is superior. Table 2 presents the errors and CPU times. The CPU time used by the NONS-CRCM-5 scheme is about 68% of that used by the ADER-5 scheme on a given mesh, while the CPU time used by the STAG-CRCM-5 scheme is about 75% of that used by the ADER-5 scheme.
However, the CPU time used by CRCM-5-TVD is about 39% of that used by ADER-5-TVD on a given mesh. Moreover, the CPU time used by CRCM-5-TVD is about 75% of that used by NONS-CRCM-5 and about 69% of that used by STAG-CRCM-5. Figure 9 compares the efficiency of the schemes by plotting the \(L^{1}\) errors versus CPU time; the CRCM-5-TVD scheme is the most efficient. Figure 10 shows the \(L^{1}\) norm of the solution error as a function of N. All schemes achieve approximately first-order accuracy, since the convergence rate is reduced to roughly \(N^{ - 1}\), as is to be expected when discontinuities are involved.
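For reproducibility, the composite initial profile (6.3) can be sketched directly from the formulas above. This is a minimal NumPy sketch; the function and variable names are ours:

```python
import numpy as np

# Constants of (6.3)
a, z, delta, alpha = 0.5, -0.7, 0.005, 10.0
beta = np.log(2.0) / (36.0 * delta**2)

def G(x, z):
    return np.exp(-beta * (x - z) ** 2)

def F(x, a):
    return np.sqrt(np.maximum(1.0 - alpha**2 * (x - a) ** 2, 0.0))

def u0(x):
    """Piecewise initial condition (6.3) on [-1, 1]."""
    u = np.zeros_like(x)
    m = (-0.8 <= x) & (x <= -0.6)   # combination of Gaussians
    u[m] = (G(x[m], z - delta) + G(x[m], z + delta) + 4 * G(x[m], z)) / 6
    m = (-0.4 <= x) & (x <= -0.2)   # square wave
    u[m] = 1.0
    m = (0.0 <= x) & (x <= 0.2)     # sharp triangle
    u[m] = 1.0 - np.abs(10.0 * (x[m] - 0.1))
    m = (0.4 <= x) & (x <= 0.6)     # combination of semi-ellipses
    u[m] = (F(x[m], a - delta) + F(x[m], a + delta) + 4 * F(x[m], a)) / 6
    return u

x = np.linspace(-1.0, 1.0, 201)
u = u0(x)
```

Evaluating on 200 cells as in the text reproduces the square wave, triangle, Gaussian, and semi-ellipse features that the schemes must resolve.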

Fig. 5
figure 5

Solution of Example 2 using RCM scheme at t = 20

Fig. 6
figure 6

Solution of Example 2 using NONS-CRCM-5 scheme at t = 20

Fig. 7
figure 7

Solution of Example 2 using Stag-CRCM-5 scheme at t = 20

Fig. 8
figure 8

Solution of Example 2 using CRCM-5-TVD scheme at t = 20

Table 2 Convergence study for Example 2
Fig. 9
figure 9

CPU time and \({\text{L}}^{1} \,{\text{error}}\) curve for Example 2

Fig. 10
figure 10

Rate of convergence in terms of \({\text{L}}^{1} \,{\text{error}}\) for Example 2

6.3 Example 3 (Burgers’ equation)

It is well known that Burgers' equation supports simple nonlinear (acoustic-type) waves and therefore admits shocks. It has the form

$$ {\text{u}}_{{\text{t}}} + \left( {\frac{{{\text{u}}^{2} }}{2}} \right)_{{\text{x}}} = 0, $$
(6.4)

with initial condition

$$ {\text{u}}({\mathrm{x}},0) = \left\{ {\begin{array}{*{20}l} { - 1} \hfill & {\left| {\text{x}} \right| \ge 0.5} \hfill \\ 2 \hfill & {\left| {\text{x}} \right| < 0.5} \hfill \\ \end{array} } \right. $$
(6.5)

The breakdown of the initial discontinuity gives a shock wave with speed 0.5 and a centered expansion fan containing a sonic point. At t = 2/3 the head of the rarefaction hits the shock, and thereafter the two waves interact. Figures 11, 12, 13 and 14 show the results at t = 0.4 (before the collision of the head of the rarefaction with the shock) and t = 1 (after the collision), with 80 grid points, obtained by the RCM, NONS-CRCM-5, STAG-CRCM-5, and CRCM-5-TVD schemes, respectively. We notice from the figures that the RCM produces sharp discontinuities but poor accuracy in the smooth regions. The numerical solution obtained by ADER-5 is quite satisfactory. The NONS-CRCM-5 and STAG-CRCM-5 schemes give better results, and the numerical solutions obtained by the CRCM-5-TVD and ADER-5-TVD schemes are almost indistinguishable from the exact solution.
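The quoted shock speed and collision time follow directly from the Rankine–Hugoniot condition for the flux \(f(u)=u^2/2\). A short sketch (function names are ours):

```python
def burgers_flux(u):
    return 0.5 * u * u

def shock_speed(ul, ur):
    """Rankine-Hugoniot speed s = (f(uL) - f(uR)) / (uL - uR) = (uL + uR)/2."""
    return (burgers_flux(ul) - burgers_flux(ur)) / (ul - ur)

# Right discontinuity of (6.5): uL = 2, uR = -1  ->  s = 0.5
s = shock_speed(2.0, -1.0)

# The fastest characteristic of the fan leaves x = -0.5 with speed 2 and
# catches the shock x = 0.5 + s*t when -0.5 + 2t = 0.5 + s*t, i.e. t = 2/3.
t_meet = 1.0 / (2.0 - s)
```

This confirms the shock speed 0.5 and the interaction time t = 2/3 stated in the text.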

Fig. 11
figure 11

Solution of Example 3 using RCM scheme

Fig. 12
figure 12

Solution of Example 3 using NONS-CRCM-5 scheme

Fig. 13
figure 13

Solution of Example 3 using STAG-CRCM-5 scheme

Fig. 14
figure 14

Solution of Example 3 using CRCM-5-TVD scheme

6.4 Example 4 (Buckley-Leverett Equation)

To show the performance of our schemes, we solve the Buckley–Leverett problem with a non-convex flux

$$ u_{t} \; + \left( {\frac{{4u^{2} }}{{4u^{2} + (1 - u^{{}} )^{2} }}} \right)_{x} = 0,\quad $$
(6.6)

with initial condition

$$ {\text{u}}({\mathrm{x}},0) = \left\{ {\begin{array}{*{20}l} 1 &\quad { - 0.5 \le {\text{x}} \le 0} \\ 0 &\quad {{\text{elsewhere}}} \\ \end{array} } \right. $$
(6.7)

The exact solution to this problem is a mixture of a shock, a rarefaction, and a contact discontinuity. It is observed that some high-order schemes fail to converge to the correct entropy solution for this problem. Figures 15, 16, 17 and 18 show the numerical results at t = 0.4 obtained by the RCM, NONS-CRCM-5, STAG-CRCM-5 and CRCM-5-TVD schemes, respectively, with 80 cells. We notice that the NONS-CRCM-5 and STAG-CRCM-5 schemes capture the correct entropy solution well, with good resolution of all the major features of the solution, while the RCM scheme resolves the smooth regions of the solution with poor accuracy. Comparing these figures with the solutions obtained by ADER-5 and ADER-5-TVD (Figures 13–14 in [23]), we observe that our schemes are more accurate.
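The difficulty of this test comes from the S-shaped flux in (6.6), whose characteristic speed \(f'(u)\) is non-monotone on [0, 1]. A quick numerical check of this non-convexity (a sketch; the names are ours):

```python
import numpy as np

def bl_flux(u):
    """Non-convex Buckley-Leverett flux f(u) = 4u^2 / (4u^2 + (1-u)^2)."""
    return 4.0 * u**2 / (4.0 * u**2 + (1.0 - u) ** 2)

# Sample f'(u) numerically on [0, 1]: it rises and then falls, so f'' changes
# sign and the Riemann solution mixes shocks with rarefactions.
u = np.linspace(0.0, 1.0, 101)
dfdu = np.gradient(bl_flux(u), u)
i_max = int(np.argmax(dfdu))   # interior maximum -> non-convex flux
```

Because the maximum of \(f'\) sits strictly inside (0, 1), entropy-violating schemes can lock onto the wrong wave structure, which is exactly what this example probes.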

Fig. 15
figure 15

Solution of Example 4 using RCM

Fig. 16
figure 16

Solution of Example 4 using NONS-CRCM-5 scheme

Fig. 17
figure 17

Solution of Example 4 using STAG-CRCM-5 scheme

Fig. 18
figure 18

Solution of Example 4 using CRCM-5-TVD scheme

6.5 Systems of conservation laws

Here we extend our schemes to solve the system of Euler equations of gas dynamics as hyperbolic systems of conservation laws of the form

$$ {\text{U}}_{{\text{t}}} + {\text{F}}({\mathrm{U}})_{{\text{x}}} = 0, $$
(6.8)

where \({\text{U}} = (\uprho ,\uprho {\text{u}},{\text{E}})^{{\text{T}}}\) and \({\text{F}}({\mathrm{U}}) = (\uprho {\text{u}},\uprho {\text{u}}^{2} + {\text{P}},{\text{u}}({\mathrm{E}} + {\text{P}}))^{{\text{T}}}\). Here \(\uprho\), u, P, and E denote the density, velocity, pressure, and total energy \({\text{E}} = \frac{1}{2}\uprho {\text{u}}^{2} + \frac{{\text{P}}}{(\upgamma - 1)}\), respectively. The parameter \(\upgamma\) denotes the ratio of specific heats, taken as 1.4 here.
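Any implementation of these schemes needs the conversion between primitive variables and the conserved vector U, and the evaluation of F(U). A minimal sketch of these two ingredients (function names are ours):

```python
GAMMA = 1.4  # ratio of specific heats

def primitive_to_conserved(rho, u, p):
    """U = (rho, rho*u, E) with E = rho*u^2/2 + p/(gamma - 1)."""
    E = 0.5 * rho * u * u + p / (GAMMA - 1.0)
    return rho, rho * u, E

def euler_flux(rho, m, E):
    """F(U) = (rho*u, rho*u^2 + p, u*(E + p)), recovered from conserved vars."""
    u = m / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u * u)
    return m, m * u + p, u * (E + p)
```

For the quiescent state (rho, u, p) = (1, 0, 1), the conserved energy is E = 2.5 and the flux reduces to (0, p, 0), a useful sanity check.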

6.5.1 Example 5. Lax’s Problem

This problem is the Eq. (6.8) with the initial condition consisting of two states, left (L) and right (R)

$$ (\uprho_{{\text{L}}} ,{\text{u}}_{{\text{L}}} ,{\text{E}}_{{\text{L}}} ) = (0.445,\,0.698,\,8.928)\quad {\text{and}}\quad (\uprho_{{\text{R}}} ,{\text{u}}_{{\text{R}}} ,{\text{E}}_{{\text{R}}} ) = (0.5,\,0.0,\,0.571) $$
(6.9)

separated by a discontinuity at x = 0.5. The computational domain is taken as [0, 1]. Figures 19, 20, 21 and 22 show the results obtained by the RCM, NONS-CRCM-5, STAG-CRCM-5 and CRCM-5-TVD schemes, respectively, at t = 0.16 with 100 cells. Comparing these results with Figures 16–17 in [22] and figure 9 in [8], we note that our schemes are still better than the other schemes.

Fig. 19
figure 19

Solution of Example 5 using RCM

Fig. 20
figure 20

Solution of Example 5 using the NONS-CRCM-5 scheme

Fig. 21
figure 21

Solution of Example 5 using STAG-CRCM-5 scheme

Fig. 22
figure 22

Solution of Example 5 using CRCM-5-TVD scheme

6.5.2 Example 6. Blast wave problem

In order to examine the robustness of the numerical schemes presented here, we solve the blast wave problem introduced by Woodward and Colella [21], a severe test that is very difficult to solve numerically. Its solution contains the collision of strong shock waves, the interaction of shock waves with rarefactions, and the propagation of strong shock waves into low-pressure regions, and it is therefore a good test of the schemes. The initial condition consists of three states [21]

$$ {\text{U}}({\mathrm{x}},0) = \left\{ {\begin{array}{*{20}l} {(\uprho _{{\text{L}}} ,{\text{u}}_{{\text{L}}} ,{\text{P}}_{{\text{L}}} ) = (1,0,1000),} \hfill & {{\text{x}} < 0.1} \hfill \\ {(\uprho _{{\text{M}}} ,{\text{u}}_{{\text{M}}} ,{\text{P}}_{{\text{M}}} ) = (1,0,0.01),} \hfill & {0.1 < {\text{x}} < 0.9} \hfill \\ {(\uprho _{{\text{R}}} ,{\text{u}}_{{\text{R}}} ,{\text{P}}_{{\text{R}}} ) = (1,0,100),} \hfill & {{\text{x}} > 0.9} \hfill \\ \end{array} } \right. $$
(6.10)

with \(\upgamma = 1.4\). The boundary conditions are reflective at both ends. We present the numerical results for the density and velocity of this complex problem in Fig. 23, obtained by the CRCM-5-TVD scheme with 200 cells at times t = 0.028 and t = 0.038. The reference solution is computed by the fifth-order WENO scheme [1] with 4000 cells. It is clear that our scheme resolves the complex double-blast problem sharply; in particular, the density peaks have almost the correct values. Comparing the figures here with the results in [7, 8, 20], we conclude that our schemes are more accurate and more efficient.

Fig. 23
figure 23

Solution of Example 6 using CRCM-5-TVD scheme at t = 0.028 (top) and t = 0.038 (bottom)

Note: The results obtained by the NONS-CRCM-5 and STAG-CRCM-5 schemes (not presented here) are similar to Fig. 23.

6.5.3 Example 7. Shock/turbulence interaction problem

In order to show the advantages of the schemes presented here, we solve an example whose solution contains both rich smooth structure and a shock wave: the problem of a shock interacting with entropy waves.

We take Eqs. (6.8) with a moving Mach 3 shock interacting with sine waves in the density; i.e., the initial condition is

$$ \begin{aligned} & (\uprho _{{\text{L}}} ,{\text{u}}_{{\text{L}}} ,{\text{P}}_{{\text{L}}} ) = (3.857143,\,2.629369,\,10.3333)\quad {\text{for}}\quad {\text{x}} \le 0.1 \\ & (\uprho _{{\text{R}}} ,{\text{u}}_{{\text{R}}} ,{\text{P}}_{{\text{R}}} ) = (1 + 0.2\sin 50{\text{x}},\,0,\,1)\quad {\text{for}}\quad {\text{x}} > 0.1 \\ \end{aligned}$$
(6.11)

The solution contains physical oscillations that should be resolved by the numerical method. We compute the solution at t = 0.18 with 200 grid points. Figures 24, 25 and 26 show the density computed by the NONS-CRCM-5, STAG-CRCM-5 and CRCM-5-TVD schemes against the reference solution, which is computed by the fifth-order WENO scheme [1] with 4000 grid points. Comparing the results with the ADER-5 and ADER-5-TVD schemes (figures 19–20 in [23]) and the results in [7, 8, 20], we observe that the schemes presented here produce the most accurate solutions and are more efficient.
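The two-state initial condition (6.11) can be set up in a few lines. A sketch in conserved variables (the function name is ours):

```python
import numpy as np

GAMMA = 1.4

def shock_entropy_ic(x):
    """Initial condition (6.11): a Mach 3 shock at x = 0.1 running into a
    sinusoidally perturbed density field; returns U = (rho, rho*u, E)."""
    rho = np.where(x <= 0.1, 3.857143, 1.0 + 0.2 * np.sin(50.0 * x))
    u = np.where(x <= 0.1, 2.629369, 0.0)
    p = np.where(x <= 0.1, 10.3333, 1.0)
    E = 0.5 * rho * u**2 + p / (GAMMA - 1.0)
    return np.stack([rho, rho * u, E])

U0 = shock_entropy_ic(np.linspace(0.0, 1.0, 200))
```

The left state is the exact post-shock state of a Mach 3 shock, so the shock propagates cleanly into the oscillatory density field.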

Fig. 24
figure 24

Solution of Example 7 using the NONS-CRCM-5 scheme

Fig. 25
figure 25

Solution of Example 7 using STAG-CRCM-5 scheme

Fig. 26
figure 26

Solution of Example 7 using CRCM-5-TVD scheme

6.6 Extension to two-dimensional problems

The schemes presented here can be applied to two-dimensional problems by using the space operator splitting technique. For example, we consider the two-dimensional Euler equations

$$ {\text{U}}_{{\text{t}}} + [{\text{F}}({\mathrm{U}})]_{{\text{x}}} + [{\text{G}}({\mathrm{U}})]_{{\text{y}}} = 0 $$
(6.12)

where \({\text{U}} = (\uprho ,\uprho {\text{u}},\uprho {\text{v}},{\text{E}})^{{\text{T}}}\), \({\text{F}}({\mathrm{U}}) = (\uprho {\text{u}},{\text{P}} + \uprho {\text{u}}^{2} ,\uprho {\text{uv}},{\text{u}}({\mathrm{P}} + {\text{E}}))^{T}\), \({\text{G}}({\mathrm{U}}) = (\uprho {\text{v}},\uprho {\text{uv}},{\text{P}} + \uprho {\text{v}}^{2} ,{\text{v}}({\mathrm{P}} + {\text{E}}))^{{\text{T}}}\), \({\text{P}} = (\upgamma - 1)\left[ {{\text{E}} - 0.5\uprho ({\text{u}}^{2} + {\text{v}}^{2} )} \right]\), \({\text{S}} = \frac{{\text{P}}}{{\uprho^{\upgamma } }}\), and \(\upgamma = 1.4\). Here \(\uprho ,{\text{P}},{\text{S}},{\text{E}},{\text{u}},{\text{v}}\) are the density, pressure, entropy, total energy, x-component of velocity, and y-component of velocity, respectively. There are several forms of space splitting. We take here the simplest one, in which the two-dimensional problem (6.12) is replaced by the sequence of two one-dimensional problems [16, 19]

$$ {\text{U}}_{{\text{t}}} + [{\text{F}}({\mathrm{U}})]_{{\text{x}}} = 0 $$
(6.13a)
$$ {\text{U}}_{{\text{t}}} + [{\text{G}}({\mathrm{U}})]_{{\text{y}}} = 0 $$
(6.13b)

Given the data \({\text{U}}^{{\mathrm{n}}}\) at time level n for problem (6.12), the solution \({\text{U}}^{{{\text{n}} + 1}}\) at time level n + 1 is obtained in the following two steps:

(a) solve Eq. (6.13a) with data \({\text{U}}^{{\mathrm{n}}}\) to obtain an intermediate solution \(\overline{\text{U}}^{{{\text{n}} + 1}}\) (x-sweep);

(b) solve Eq. (6.13b) with data \(\overline{\text{U}}^{{{\text{n}} + 1}}\) to obtain the complete solution \({\text{U}}^{{{\text{n} } + 1}}\) (y-sweep).

For three-dimensional problems, we add an extra z-sweep.
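The two sweeps above can be sketched as a single splitting step. The sweep operators here are hypothetical stand-ins (any 1-D scheme such as the ones presented earlier could be applied row-by-row and column-by-column); simple periodic shifts serve as a sanity check:

```python
import numpy as np

def split_step(U, dt, x_sweep, y_sweep):
    """Simplest dimensional splitting for (6.12): solve U_t + F(U)_x = 0,
    then U_t + G(U)_y = 0, each over a full time step dt."""
    U_bar = x_sweep(U, dt)    # (a) x-sweep: intermediate solution
    return y_sweep(U_bar, dt)  # (b) y-sweep: complete solution

# Sanity check with periodic one-cell shifts standing in for the 1-D solvers:
shift_x = lambda U, dt: np.roll(U, 1, axis=0)
shift_y = lambda U, dt: np.roll(U, 1, axis=1)
U0 = np.zeros((4, 4))
U0[0, 0] = 1.0
U1 = split_step(U0, 0.1, shift_x, shift_y)
# The unit spike moves one cell in x and one cell in y, and mass is conserved.
```

For linear advection the two sweeps commute, so this first-order splitting is exact; for nonlinear systems a splitting error of order dt is incurred per step.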

6.6.1 Example 8. Double Mach reflection problem

This problem consists of the two-dimensional Eqs. (6.12) on the domain \([0,4] \times [0,1]\). The reflecting wall starts at \({\text{x}} = \frac{1}{6}\) at the bottom of the domain. A right-moving Mach 10 shock is initially positioned at \(({\text{x}},{\text{y}}) = (\frac{1}{6},0)\) and makes a \(60^{ \circ }\) angle with the x-axis. For the bottom boundary, the exact post-shock condition is imposed from \({\text{x}} = 0\) to \({\text{x}} = \frac{1}{6}\), and a reflective boundary condition is used for the rest of the x-axis. At the top boundary of the computational domain, the data are set to describe the exact motion of the Mach 10 shock [21]. We take the output time as t = 0.2. Figure 27 shows the density obtained by the NONS-CRCM-5, STAG-CRCM-5 and CRCM-5-TVD schemes on \(960 \times 240\) cells. Comparing these results with those in the existing literature, e.g., [7, 21], we notice that our schemes produce the flow pattern generally accepted at present as correct. All discontinuities are correctly positioned and well resolved. In Table 3, we show the CPU times for the schemes presented here; the CRCM-5-TVD is the most efficient.

Fig. 27
figure 27

Double Mach reflection problem (Example 8). 30 density contours are equally spaced from 1.5 to 22.7 on a system size \(960 \times 240\) for Stag-CRCM-5 (top), NONS-CRCM-5 (middle) and CRCM-5-TVD (bottom)

Table 3 CPU time (in seconds) for the double Mach reflection problem (Example 8)

6.6.2 Example 9. Two-dimensional Vortex evolution problem

Here we examine the accuracy of our methods in two dimensions. We solve Eq. (6.12) with initial conditions corresponding to a smooth vortex moving at \(45^{\circ}\) to the Cartesian mesh lines in the domain \([ - 5,\,5] \times [ - 5,\,5]\), with periodic boundary conditions in both directions. The temperature and entropy are given by \({\text{T}} = \frac{{\text{P}}}{\uprho }\) and \({\text{S}} = \frac{{\text{P}}}{{\uprho^{\upgamma } }}\). The vortex is defined as the following isentropic perturbation to the uniform flow with unit values of the primitive variables \((\uprho ,{\text{P}},{\text{u}},{\text{v}}) = (1,\,1,\,1,\,1)\):

$$ {\text{u}} = 1 - \frac{\upepsilon }{2\pi }\,{\text{e}}^{{\tfrac{1}{2}(1 - {\text{r}}^{2} )}} {\text{y}},\quad {\text{v}} = 1 + \frac{\upepsilon }{2\pi }\,{\text{e}}^{{\tfrac{1}{2}(1 - {\text{r}}^{2} )}} {\text{x}}, $$

and the temperature

$$ {\text{T}} = 1 - \frac{{(\upgamma - 1)\,\upepsilon^{2} }}{{8\upgamma \pi^{2} }}\,{\text{e}}^{{(1 - {\text{r}}^{2} )}} ,\quad \frac{{\text{P}}}{{\uprho^{\upgamma } }} = 1 $$

where \({\text{r}}^{2} = {\text{x}}^{2} + {\text{y}}^{2}\) and the vortex strength is \(\upepsilon = 5\). The numerical solution is computed at time t = 10, when the vortex returns to its initial position. Although the flow is smooth over the whole domain, there are critical points at the center of the vortex and on the symmetry planes. Table 4 shows the errors and convergence rates for the NONS-CRCM-5, STAG-CRCM-5, ADER-5, and CRCM-5-TVD schemes. We observe that the NONS-CRCM-5 and STAG-CRCM-5 schemes achieve the theoretical accuracy, while the ADER-5 scheme achieves approximately a fourth-order convergence rate. We note again that CRCM-5-TVD is superior. Figure 28 compares the efficiency of the schemes by plotting the \(L^{1}\) errors versus CPU time; the scheme that achieves a given error with the least CPU time is the most efficient, and by this measure CRCM-5-TVD is the most efficient.
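Since \({\text{P}}/\uprho^{\upgamma} = 1\), the density and pressure follow from the temperature as \(\uprho = {\text{T}}^{1/(\upgamma - 1)}\) and \({\text{P}} = \uprho {\text{T}}\). A sketch of the vortex initial condition built from these relations (the function name is ours):

```python
import numpy as np

GAMMA, EPS = 1.4, 5.0  # ratio of specific heats and vortex strength

def vortex_primitives(x, y):
    """Isentropic vortex perturbation of the uniform state (rho,P,u,v)=(1,1,1,1)."""
    r2 = x**2 + y**2
    u = 1.0 - EPS / (2.0 * np.pi) * np.exp(0.5 * (1.0 - r2)) * y
    v = 1.0 + EPS / (2.0 * np.pi) * np.exp(0.5 * (1.0 - r2)) * x
    T = 1.0 - (GAMMA - 1.0) * EPS**2 / (8.0 * GAMMA * np.pi**2) * np.exp(1.0 - r2)
    rho = T ** (1.0 / (GAMMA - 1.0))  # from P/rho^gamma = 1 and T = P/rho
    p = rho * T
    return rho, u, v, p
```

By construction the state is exactly isentropic everywhere, and far from the vortex core the perturbation decays exponentially to the uniform flow.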

Table 4 Convergence study for Example 9
Fig. 28
figure 28

CPU time and \({\text{L}}^{1} \,{\text{error}}\) curve for Example 9

6.6.3 Example 10 Forward-facing step problem [21]

The problem is as follows: the wind tunnel is one length unit wide and three length units long. The step is 0.2 length units high and is located 0.6 length units from the left-hand end of the tunnel. The problem is initialized with a right-going Mach 3 flow. Along the walls of the tunnel, reflective boundary conditions are applied; at the entrance and the exit, inflow and outflow boundary conditions are applied, respectively. The corner of the step is a singular point, and we treat it in the same way as in [21], based on the assumption of a nearly steady flow near the corner. We compute the solution at t = 4. Figure 29 shows the numerical results on a mesh of \(480 \times 160\) uniform cells obtained by the NONS-CRCM-5, STAG-CRCM-5 and CRCM-5-TVD schemes, with 30 equally spaced density contours from 0.32 to 6.15. We can see that all the schemes capture the vortex sheet roll-up with better resolution in comparison to the other schemes.

Fig. 29
figure 29

Forward step problem (Example 10). 30 density contours are equally spaced from 0.32 to 6.15 on a system size \(480 \times 160\) for Stag-CRCM-5 (top), NONS-CRCM-5 (middle), and CRCM-5-TVD (bottom)

7 Conclusions

In this paper, we presented an improved version of the RCM that merges the central ADER technique with the RCM for solving hyperbolic conservation laws. The resulting scheme is called the central RCM (CRCM). We use WENO reconstruction of the initial data instead of the piecewise constant reconstruction of the original RCM. We also use high-order central finite difference schemes to solve the RPs numerically, and then use random sampling to update the solution, as in the original RCM. Both staggered and non-staggered versions are considered. To enhance the accuracy of the new methods, we use a third-order TVD flux [22] instead of a first-order flux. The resulting methods combine the advantages of the original RCM, ADER, and central finite difference methods: they capture discontinuities with infinite resolution, are more accurate and faster, are simple to implement and Riemann-solver free, use less computer memory, and improve the accuracy in smooth regions. An extension of the schemes to general systems of nonlinear hyperbolic conservation laws in one and two dimensions is presented. Many numerical examples in one and two dimensions confirm that the new schemes are superior to other high-order schemes. In the future, we plan to apply the present schemes to balance laws, multidimensional computations, and unstructured meshes.